We want to print how much total data we are attempting to write if a
failure occurs
Bug: 336461151
Test: th
Change-Id: I269b7e280df34994c80fa8ef7d39163d053fa9ea
Remove hard-coded global variables referencing the COW version in
libsnapshot. This value should stem from the build system, or be set
individually in test cases.
Bug: 307452468
Test: th
Change-Id: I3d536246008acca92cd93e77886e5f7d17a131e0
If the COW device is allocated only from /data, then
the COW device name will end with -cow-img. Hence, check
that path as well.
Bug: 335552315
Test: snapshotctl apply-update
Change-Id: Id3c5cf8afd77994da117de41bb98a226b350f8e4
Signed-off-by: Akilesh Kailash <akailash@google.com>
Zstd compression levels go all the way down to -7. Zstd level -3 gives
around the same compression ratio as lz4 level 3. Further testing is
needed to compare performance.
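For reference, a minimal sketch (a hypothetical helper, not the OTA
tooling itself) of how a negative level such as -3 is passed to the
plain zstd C API:

  #include <zstd.h>
  #include <cstdio>
  #include <vector>

  // Hypothetical helper: compress a buffer at a fast (negative) zstd level.
  std::vector<char> CompressFast(const std::vector<char>& in, int level /* e.g. -3 */) {
      std::vector<char> out(ZSTD_compressBound(in.size()));
      size_t n = ZSTD_compress(out.data(), out.size(), in.data(), in.size(), level);
      if (ZSTD_isError(n)) {
          fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(n));
          return {};
      }
      out.resize(n);
      return out;
  }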
Test: ota_from_target_files, update_device.py
Change-Id: Ic082b31aa8af938f80be628c73667e02353835f0
If partitions are mounted off the daemon, there is no need to kill it
when the tests are run for legacy VAB snapshots.
This also removes vabc_legacy_test as it is no longer required.
Bug: 331053511
Test: vab_legacy_test, vts_libsnapshot_test on Pixel - No flake observed
with 10 iterations
Change-Id: Ie8b29fef77948d23d920c19d816376290cf2fed9
Signed-off-by: Akilesh Kailash <akailash@google.com>
If on Android 12, continue to use snapshots
Bug: 304829384
Test: OTA on Pixel
Change-Id: I94890c308e7f297b695ddbd71659ebb1cf4d278c
Signed-off-by: Akilesh Kailash <akailash@google.com>
ro.virtual_ab.userspace.snapshots.enabled is a vendor property which isn't present in Android S. Hence, during OTA install with S vendor, userspace_snapshots is disabled.
However, both update_engine and snapuserd are already on the system partition during install. Hence, forcefully enable userspace_snapshots if this is a path of legacy dm-snapshots with Vendor on Android S.
Bug: 331156940
Test: OTA tests on treehugger, pixel OTA
Change-Id: I3d1c03493d83e670e37df088d4b676c4aa1dc720
Signed-off-by: Akilesh Kailash <akailash@google.com>
Change the meaning of batch_size_. Previously, a batch size of 200 meant
200 compressed data ops. With variable block size, each compressed data
op can cover up to 256k of uncompressed data -> the batch size meaning
should be changed to 200 blocks of block size.
With that said, the default batch size can be increased to 4MB to
better accommodate variable block size.
The way we calculate the number of blocks to compress at once also
needs to change. Since there's no way of determining the compressed
data size ahead of time, allow the cache to grow past batch_size_ and
then flush it as needed.
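A minimal sketch of the batching idea, with hypothetical names
(BatchingWriterSketch is illustrative, not the real CowWriter):
batch_size_ counts pending blocks rather than compressed ops, and the
cache is flushed once that budget is reached.

  #include <cstddef>
  #include <cstdint>
  #include <utility>
  #include <vector>

  class BatchingWriterSketch {
    public:
      bool AddBlock(std::vector<uint8_t> block) {
          cache_.push_back(std::move(block));
          // One compressed op may later cover several of these blocks, so we
          // only count raw blocks here and flush once the budget is reached.
          if (cache_.size() >= batch_size_) {
              return Flush();
          }
          return true;
      }

      bool Flush() {
          // Compress and write everything currently cached, then reset.
          // (The real writer would group blocks by compression factor here.)
          cache_.clear();
          return true;
      }

    private:
      size_t batch_size_ = 1024;                 // e.g. 4MB / 4KiB blocks
      std::vector<std::vector<uint8_t>> cache_;  // uncompressed block-sized chunks
  };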
Bug: 322279333
Test: Ota on pixel and measuring system image cow
Change-Id: Ie8e08d109dc5c3b4f5f36a740bbbcd37362a7ab3
With the modified cache system we might be processing big replace ops
at once. Our old tests didn't catch the case where a big replace op was
chunked into separate batch writes.
Test: cow_api_test
Change-Id: I3bc80a2f9ed3e4c73dd9f74d9affaba79d49e4d2
Currently, if the iov we are trying to write has more than 1024
entries, the write will fail with the error "INVALID ARGUMENT". This is
because the pwritev() system call takes a maximum of IOV_MAX entries
(which is device dependent).
With our increased cache size of 1MB or more (or if the user configures
a large batch size), our write could exceed IOV_MAX entries and fail
with an unhelpful error. We should chunk these writes to ensure they
succeed.
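A minimal sketch of the chunking, assuming a hypothetical
ChunkedPwritev() helper (the real writer also tracks how much total
data it attempted, for the error message):

  #include <limits.h>
  #include <sys/uio.h>
  #include <algorithm>
  #include <vector>

  // Never pass more than IOV_MAX entries to a single pwritev() call.
  bool ChunkedPwritev(int fd, const std::vector<struct iovec>& iov, off_t offset) {
      size_t idx = 0;
      while (idx < iov.size()) {
          int count = static_cast<int>(std::min<size_t>(IOV_MAX, iov.size() - idx));
          ssize_t written = pwritev(fd, &iov[idx], count, offset);
          if (written < 0) {
              return false;  // caller logs errno and the total size attempted
          }
          offset += written;
          // Simplification: assume each chunk was written fully. Production
          // code must also handle short writes within an iovec entry.
          idx += static_cast<size_t>(count);
      }
      return true;
  }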
Bug: 322279333
Test: cow_api_test + manual testing with large iov write sizes
Change-Id: Ia1fb53cbfc743cfcdfc7256ff9df075ad0c2dd38
Cow Writer used to fail with ENOSPC during initialization.
Try temporaryFile creation with path set to current working directory.
Bug: 328879200
Test: th, snapuserd_test
Change-Id: I7b3833967952c74142f1d5a35cb8d94dd6d894fc
Signed-off-by: Akilesh Kailash <akailash@google.com>
Clean up all the legacy dm-snapshot based approach.
This will continue to maintain backward compatibility with vendor
partition on Android S.
Bug: 304829384
Test: OTA on Pixel
th - OTA from Android S to TOT with vendor on S
Change-Id: Id9263be881dd1a06923cd0ace007f1c027e6969d
Signed-off-by: Akilesh Kailash <akailash@google.com>
During OTA install, when vendor partition is on Android S, add a new flag in SnapshotUpdateStatus file to indicate that we need to support dm-snapshot based snapshot process. This will be used only during post OTA reboot.
The primary change here is the OTA install path. Earlier, the dm-snapshot based approach was used; with this change, since both "snapuserd" and "update-engine" reside on the system partition, OTA installation will use the userspace snapshot approach.
To maintain backward compatibility, the new flag "legacy_snapuserd" is used only after OTA reboot. This flag will make sure that update-engine will take the dm-snapshot based approach post reboot and for the entire duration of snapshot-merge.
Additionally, during first-stage init if the vendor is on Android S, then "snapuserd" binary will continue to work based off dm-snapshot as none of this change will impact the mount process.
Bug: 304829384
Test: OTA on Pixel
th - OTA from Android S to TOT with vendor on S
Change-Id: Idd9a60b09417cee141b2810e2d4b35e91c845a5c
Signed-off-by: Akilesh Kailash <akailash@google.com>
We have updated the logic for v3 COW: non-data ops should be allowed to
be cached at 16x the number of data ops. Changing the reserve size to
match this.
Test: th
Change-Id: I825ffef4e1a2ce4eb5c105d266bf95cb3d776ed9
This adds an option "ima" in dmctl.
$ dmctl ima product-verity
Targets in the device-mapper table for product-verity:
0-7463768: verity, target_name=verity,target_version=1.9.0,hash_failed=V,verity_version=1,data_device_name=254:4,hash_device_name=254:4,verity_algorithm=sha256,root_digest=d7af9fcb04d184219ba5477b97bb2bbc89fd23a46e03d1dea31d674cc4934769,salt=19d4f2345adfc8b7cc22a3c2f21dd413e5020fc7920a08a33f46f3c61492dfcc,ignore_zero_blocks=y,check_at_most_once=n,verity_mode=restart_on_corruption;
Change-Id: I057970b6c786b3f9a394b4919f5f5115b27cbc08
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Remove temporary estimation solution (using 4k to overestimate the cow).
With the updated estimation logic, we should be able to accurately
estimate the cow size with variable block sized compression
Bug: 322279333
Test: th
Change-Id: I199970048605a8d21d3791614ad88ca61662e1a3
Since http://r.android.com/2994274, snapshotctl can be run by init.
Therefore, we need it to log to logd for better debuggability.
Bug: 311377497
Test: adb shell setprop sys.snapshotctl.map requested
Test: adb shell setprop sys.snapshotctl.unmap requested
Change-Id: I287ecf77d45fb9e6c44bea36e14d2624029afea5
This helps reviewers understand which VTS requirement these tests are
covering.
Bug: 302208814
Test: th
Change-Id: I52712d9888b73fc7a9f8eeeb035a3303618efd69
This reverts commit 1af7260931.
Fix: Allow BootControlClient.h to be used in -user builds
Bug: 319309466
Test: Build on -user branch
Change-Id: I93e95e35b29a98816b2f33fe9fa6859655934cd5
$snapshotctl apply-update <directory containing snapshot patches>
This command is equivalent to applying incremental-ota but
bypasses all of the update-engine subsystem. Snapshot-patches
which are created on the host will be used directly and
will be written to the COW block devices.
No change to any of the libsnapshot or the I/O path logic.
Once the snapshot patches are applied, device is ready
to reboot as if an OTA update is applied.
Once the device reboots, snapshot merge will be initiated
as usual.
This will help test the changes to libsnapshot + init + snapuserd
extremely quickly.
Incremental flashing becomes quite simple in the CI workflow.
Here are numbers tested on live builds where the actual builds/testing
is done on CI.
Patch-Create+Apply = Create the snapshot patches between two
builds and apply them to the device
Branch(main) Patch-Creation+Apply Merge Snapshot-size
=================================================================
Build-1 -> Build-2 14 seconds 40 seconds 160MB
Build-2 -> Build-3 21 seconds 26 seconds 331MB
Build-3 -> Build-4 30 seconds 45 seconds 375MB
Build-X -> Build-X 3 seconds 4 seconds 8MB
Bug: 319309466
Test: On Pixel 6, incremental builds
Change-Id: I271b2cb5df4abde91571ec70ce06f926a1d01694
Signed-off-by: Akilesh Kailash <akailash@google.com>
Now that V3 is enabled, relax the header version check.
For V3, header op_count_max contains the information of the device size.
Bug: 299011882
Test: snapshotctl map-snapshots on Pixel with V3 format
Change-Id: Ia1cb20b24857136a742e20408ee95e56e98b256a
Signed-off-by: Akilesh Kailash <akailash@google.com>
Alternate dispatching blocks between threads, rather than splitting the
data beforehand and then sending it to the threads, to ensure that
single-threaded and multi-threaded runs chunk data at the same locations.
Without this change, the resulting op count + data section of the COW
will differ between --enable-threading and --disable-threading at
runtime, which is a result we don't want.
Test: th
Change-Id: I3ed8add0552745a281fce2aa7f1d1d32eb547e63
Log the compression algorithm and compression factor used during OTA for easier debugging
Test: th
Change-Id: Ic50989d7e233983d6299163fc647eb739a0b7cb2
Since variable block size compression compresses multiple blocks per op
and there is no longer a 1:1 mapping between ops and blocks, we need to
update this check in EmitBlocks to use the actual number of compressed
blocks written.
Since the single-threaded, multi-threaded, and no-compression paths
invoke different code, ensure that the blocks written are still
equivalent to blocks.size(). Adding two test cases to cover these
situations.
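A small sketch of the adjusted check, with hypothetical names
(WrittenOp and num_compressed_blocks are illustrative, not the actual
fields):

  #include <android-base/logging.h>
  #include <cstddef>
  #include <vector>

  // Hypothetical record: how many blocks a single compressed op covers.
  struct WrittenOp {
      size_t num_compressed_blocks;
  };

  // Verify that the ops written cover exactly the input blocks, regardless of
  // whether the single-threaded, multi-threaded, or no-compression path ran.
  void CheckBlocksCovered(const std::vector<WrittenOp>& written_ops,
                          size_t num_input_blocks) {
      size_t total_blocks_written = 0;
      for (const auto& op : written_ops) {
          total_blocks_written += op.num_compressed_blocks;
      }
      CHECK_EQ(total_blocks_written, num_input_blocks);
  }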
Test: th
Change-Id: If81eccf74333292a114268862dde0fe49681ef35
1: Move to v3 COW writer
2: Enable variable block size. Default compression set to lz4
with compression factor 64KiB
3: Prepare merge sequence so that device can initiate the merge
4: Verify the merge order
Bug: 319309466
Test: On Pixel 6
This was tested on live builds where the actual builds/testing
is done on CI.
Patch-Create+Apply = Create the snapshot patches between two
builds and apply them to the device
Branch(main) Patch-Creation+Apply Snapshot-size
=============================================================
Build-1 -> Build-2 14 seconds 160MB
Build-2 -> Build-3 21 seconds 331MB
Build-3 -> Build-4 30 seconds 375MB
Build X -> Build X 3 seconds 8MB
Change-Id: I96437032de029d89de62ba11fe37d9287b0a4071
Signed-off-by: Akilesh Kailash <akailash@google.com>
In newer versions of libc++, std::char_traits<T> is no longer defined
for non-character types, and as a result, std::basic_string<T> and
std::basic_string_view<T> are also no longer defined for non-character
types. See
https://discourse.llvm.org/t/deprecating-std-string-t-for-non-character-t/66779.
Replace them with std::vector<T> and std::span<const T>.
Bug: 175635923
Test: m MODULES-IN-system-core-fs_mgr
Test: /data/nativetest64/cow_api_test/cow_api_test
Change-Id: Ife2e87833ced43ff24e5765998cb6993e4f9b4c0
The flow of I/O path is as follows:
1: When there is a I/O request for a given sector, we first
check the in-memory COW operation mapping for that sector.
2: If the mapping of sector to COW operation is found, then the
existing I/O path will work seamlessly. Even if the COW operation
encodes multiple blocks, we will discard the remaining data.
3: If the mapping of sector to COW operation is not found:
a: Find the previous COW operation as the vector has sorted sectors.
b: If the previous COW operation is a REPLACE op:
i: Check if the current sector is encoded in the previous COW
operation's compressed block.
ii: If the sector falls within the range of compressed blocks,
retrieve the block offset.
iii: De-compress the COW operation based on the compression
factor.
iv: memcpy the data based on the block offset.
v: cache the COW operation pointer as subsequent I/O requests
are sequential and can just be a memcpy at the correct offset.
c: If the previous COW operation is not a REPLACE op, or if the
requested sector does not fall within the compression factor
of the previous COW operation, then fall back and read the data
from the base device (a sketch of this lookup follows below).
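A minimal sketch of that lookup, with hypothetical types (SectorOp and
FindCoveringOp are illustrative only; the real snapuserd structures
differ):

  #include <algorithm>
  #include <cstdint>
  #include <iterator>
  #include <vector>

  // Flattened view of the sector -> COW op mapping, sorted by sector.
  struct SectorOp {
      uint64_t sector;      // starting sector covered by this op
      uint64_t num_blocks;  // blocks encoded: compression_factor / block_size
      bool is_replace;
  };

  // Returns the preceding REPLACE op if it covers `sector`, else nullptr
  // (in which case the read falls back to the base device).
  const SectorOp* FindCoveringOp(const std::vector<SectorOp>& ops,
                                 uint64_t sector, uint64_t sectors_per_block) {
      auto it = std::upper_bound(ops.begin(), ops.end(), sector,
                                 [](uint64_t s, const SectorOp& op) { return s < op.sector; });
      if (it == ops.begin()) return nullptr;
      const SectorOp& prev = *std::prev(it);
      uint64_t span = prev.num_blocks * sectors_per_block;
      if (prev.is_replace && sector >= prev.sector && sector < prev.sector + span) {
          return &prev;  // decompress prev, then memcpy at (sector - prev.sector)
      }
      return nullptr;
  }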
Snapshot-merge:
During merge of REPLACE ops, read the entire op in one shot, de-compress
multiple blocks and write all the blocks in one shot.
Performance:
go/variable-block-vabc-perf covers detailed performance runs
on Pixel 6 for full and incremental OTA.
Bug: 319309466
Test: snapuserd_test covers all the I/O path with various block sizes.
About 252 cases with all combinations and tunables.
[==========] 252 tests from 4 test suites ran. (702565 ms total)
[ PASSED ] 252 tests.
On Pixel 6:
=======================================
COW Writer V3:
for i in full, incremental OTA
for j in 4k, 16k, 32k, 64k, 128k, 256k
for k in lz4, zstd, gz
install OTA, reboot, verify merge
=======================================
COW Writer V2:
for i in full, incremental OTA
for j in 4k
for k in lz4, zstd, gz
install OTA, reboot, verify merge
=====================================
Change-Id: I4c3b5c3efa0d09677568b4396cc53db0e74e7c99
Signed-off-by: Akilesh Kailash <akailash@google.com>
This patch supports compression for bigger block sizes.
3 bits [57-59] in the COW operation "source_info_" field are used to store
the compression factor. Supported compression factors are powers of 2,
viz: 4k, 8k, 16k, 32k, 64k, 128k, 256k.
Only REPLACE operations have the bigger block size support for now.
This can be extended to other operations later.
The write path in EmitBlocks() has the core logic wherein consecutive
sequences of REPLACE ops are compressed based on the compression factor
setting. Thus, for a 64k compression factor, there will be just one
COW operation which encodes all 16 operations, and the entire 64k
block is compressed in one shot.
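An illustrative sketch of the bit layout; the positions [57-59] come
from this patch, but the exact encoding of those 3 bits is an
assumption here (log2 of the factor relative to 4KiB):

  #include <cstdint>

  constexpr uint64_t kFactorShift = 57;
  constexpr uint64_t kFactorMask = 0x7;  // 3 bits at [57-59]

  // Assumed encoding: 0 -> 4k, 1 -> 8k, ... 6 -> 256k.
  inline uint64_t SetCompressionFactor(uint64_t source_info, uint64_t factor_log2) {
      source_info &= ~(kFactorMask << kFactorShift);  // clear bits 57-59
      return source_info | ((factor_log2 & kFactorMask) << kFactorShift);
  }

  inline uint64_t GetCompressionFactorBytes(uint64_t source_info) {
      uint64_t factor_log2 = (source_info >> kFactorShift) & kFactorMask;
      return 4096ull << factor_log2;  // 4k * 2^n
  }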
NOTE: There is no read I/O path support in this patch. Subsequent patch
will have the read support.
Performance data (with read I/O path support in a subsequent patch):
go/variable-block-vabc-perf covers detailed performance runs
on Pixel 6 for full and incremental OTA.
TL;DR:
Performance of a full OTA (All numbers are compared against 4k block
size)
=======================================
Snapshot-size:
~10-11% decrease in snapshot-size (disk-space) for zstd with 256k block
size.
~8% decrease in snapshot-size (disk-space) for lz4
Install time:
~13% decrease in OTA install time for zstd with 256k block size.
Snapshot-merge:
~50% decrease in snapshot-merge time with 256k block size for zstd
Post OTA boot-time:
~10.5% decrease in boot time for 64k block size for zstd
In-memory footprint for COW operations:
~80% decrease in memory footprint for 256k block size. (58MB -> 9.2MB)
============================================
For more improvements, further tuning of zstd/lz4 is required,
primarily the compression levels, the zstd compression window, and the
performance of gz at different compression levels.
Bug: 319309466
Test: cow_api test covering all the supported block sizes for v3 writer.
On Pixel 6:
=======================================
COW Writer V3:
for OTA in full, incremental OTA
for block_size in 4k, 16k, 32k, 64k, 128k, 256k
for compression_algo in lz4, zstd, gz, none
install OTA, reboot, verify merge
=======================================
COW Writer V2:
for OTA in full, incremental OTA
for block_size in 4k
for compression_algo in lz4, zstd, gz, none
install OTA, reboot, verify merge
=====================================
Change-Id: I96201f1609582aa9d44d8085852e284b0c4a426d
Signed-off-by: Akilesh Kailash <akailash@google.com>
Intermediate CL needed before variable block size can land. Since v3 is
enabled on Cuttlefish, the base build needs to write the
compression_factor in order for the reader to parse it properly.
Otherwise we'll fail the OTA test.
Test: th
Change-Id: Ia353aae8e668858851073f09308909ae70d7854e
In the case that op_count_max is read in as zero, we should use the
upper bound of max blocks as the estimation. One case in which this can
happen is when a v2 cow estimator is used; we should still be able to
run an OTA if we upper-bound our ops buffer size estimation.
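A minimal sketch of the fallback, with hypothetical names:

  #include <cstdint>

  // If the header's op_count_max was written as 0 (e.g. by a v2 estimator),
  // size the ops buffer by one op per block in the partition instead of
  // rejecting the COW.
  uint64_t EffectiveOpCountMax(uint64_t op_count_max, uint64_t partition_size,
                               uint64_t block_size) {
      if (op_count_max == 0) {
          return partition_size / block_size;  // upper bound
      }
      return op_count_max;
  }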
Test: th
Change-Id: I97ca66368d6631bf43c8911ed66f99c9e8096e2d
Parse manifest compression_factor and set CowOptions appropriately. This
allows v3 writer to use compression factor in OTA. Updating some
comments about supported compression algorithms
Test: th
Change-Id: I88f254087e536d9e5925064f85317f0acce280ee
With variable block size compression being added, the number of ops
written can no longer be calculated as easily, since one op can now
cover the data that multiple ops covered previously. We can get rid of
this check for XOR and Raw blocks, as WriteOperation() already checks
whether we are exceeding the op_count_max limit.
We still need to keep this check for EmitZeroBlocks and EmitCopyBlocks,
since the number of operations is determined ahead of time in those
function calls. Without this check in place, the ops would be added to
the cached ops and return true even when they cannot be written.
With this change, v3 COW OTA now works on Cuttlefish with support for
variable block size compression.
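A small sketch of the guard that stays in EmitZeroBlocks/EmitCopyBlocks
(hypothetical signature):

  #include <cstdint>

  // These paths know the op count up front, so reject requests that would
  // exceed op_count_max before anything is added to the cached ops.
  bool CanEmitOps(uint64_t current_op_count, uint64_t pending_ops,
                  uint64_t op_count_max) {
      return current_op_count + pending_ops <= op_count_max;
  }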
Test: th
Change-Id: Ia55f152f5deb67a9022d0feff112345e72741dd3
Changes to the structure of the v3 header + operation needed for
variable block size. Separating this CL from the variable block size
one so we can get v3 enabled on Cuttlefish.
The op count type changes are so that the op count matches the type of
max_blocks. max_blocks is used when the op buffer size is not set -> we
default to an upper bound of one operation per block in the partition.
Test: th
Bug: 307452468
Change-Id: I1a2581763a4fd6be5d5795f7e4781023e9984256
Assign the CPUSET_SP_BACKGROUND taskprofile to snapshot merge threads.
This will ensure that the threads will not run on big cores.
Additionally, reduce the flushing of data to 1MB after merging REPLACE ops.
No major regression observed in snapshot merge time.
On Pixel 6, for an incremental OTA of 500M, snapshot merge time
increased from 72 seconds to 76 seconds after this patch.
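A minimal sketch, assuming the libprocessgroup SetTaskProfiles() API,
of moving a merge thread into the background cpuset:

  #include <unistd.h>
  #include <string>
  #include <vector>
  #include <android-base/logging.h>
  #include <processgroup/processgroup.h>

  // Keep the calling merge thread off the big cores; failure is non-fatal,
  // the merge simply keeps its previous cpuset.
  void MoveMergeThreadToBackground() {
      if (!SetTaskProfiles(gettid(), {"CPUSET_SP_BACKGROUND"})) {
          LOG(WARNING) << "Failed to apply CPUSET_SP_BACKGROUND";
      }
  }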
Bug: 311233916
Test: Full and incremental OTA on Pixel 6 - Verify merge threads not on big cores
Change-Id: I455afdac0b77227869d846d0c4472ea9eb34c41c
Signed-off-by: Akilesh Kailash <akailash@google.com>
Performance of COW v3 is now on par with v2 in both multi-threaded and
single-threaded configurations. Note, the v2 COW writer can cache up to
1024 blocks in memory if multi-threaded compression is enabled (even
though the batch size is configured as 200). For a fair comparison,
benchmarks were run with a batch size of 256. For batch sizes of 256 or
greater, v2 and v3 have similar multi-threaded performance.
Test: th
Bug: 313962438
Change-Id: I377c8291689a7a038bb00b09d7371a155e6972e9
Adding a check here to ensure that next_data_pos_ hasn't been modified
since initialization. After sizing the sequence buffer, this value
should be the initialized value + the size of the sequence buffer.
Test: cow_api_test
Change-Id: I9c79041b72544500989860a13ca6c25830d28750