The current check is somewhat device-specific and causes false positives.
Bug: 320829712
Test: Trigger long key press and check boot reason
Change-Id: I5b2b8307f16eecd05513bfa51494da174fc839bd
The fastboot command currently uses USBDEVFS_BULK to transfer data
(including image data) to the target. On the kernel side it looks
like this:
1. Allocate a contiguous memory region and copy the user data into
the memory region (which may involve accessing storage).
2. Instruct the driver to start a DMA operation.
3. Wait for the DMA to finish.
This is suboptimal because it misses out on a pipelining
opportunity. We could be doing 3 for the current operation in parallel
with 1 for the next operation, so that the next DMA is ready to go
as soon as the current DMA finishes.
The kernel supports asynchronous operations on usbdevfs file
descriptors (USBDEVFS_SUBMITURB and USBDEVFS_REAPURB), so we can
implement this like so:
1. Submit URB 0
2. Submit URB 1
3. Wait for URB 0
4. Submit URB 2
5. Wait for URB 1
and so on.
That is what this CL implements. On my machine it increases transfer
speed from 125 MB/s to 160 MB/s using a USB 3.0 connection to the
target (Pixel 8).
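A minimal sketch of that submit/reap pipeline, driving the usbdevfs ioctls directly. Chunk sizing, buffer staging and error handling are simplified, and names such as PipelinedBulkWrite and kQueueDepth are illustrative, not the actual fastboot code:

    // Sketch only: pipelined bulk-out transfers via usbdevfs URBs. Assumes
    // `fd` is an open usbdevfs file descriptor and `ep_out` is the bulk-out
    // endpoint address. Real code must also respect per-URB size limits.
    #include <linux/usbdevice_fs.h>
    #include <sys/ioctl.h>
    #include <algorithm>
    #include <cstring>

    bool PipelinedBulkWrite(int fd, unsigned char ep_out,
                            const char* data, size_t len, size_t chunk) {
        constexpr size_t kQueueDepth = 2;  // URBs kept in flight at once
        usbdevfs_urb urbs[kQueueDepth] = {};
        size_t submitted = 0, reaped = 0, offset = 0;

        while (offset < len || reaped < submitted) {
            // Keep the queue full: submit the next chunk while the previously
            // submitted transfer is still in progress.
            while (submitted - reaped < kQueueDepth && offset < len) {
                usbdevfs_urb* urb = &urbs[submitted % kQueueDepth];
                std::memset(urb, 0, sizeof(*urb));
                urb->type = USBDEVFS_URB_TYPE_BULK;
                urb->endpoint = ep_out;
                urb->buffer = const_cast<char*>(data + offset);
                urb->buffer_length = static_cast<int>(std::min(chunk, len - offset));
                if (ioctl(fd, USBDEVFS_SUBMITURB, urb) != 0) return false;
                offset += urb->buffer_length;
                ++submitted;
            }
            // Reap the oldest URB; bulk URBs on a single endpoint complete in
            // submission order, so its slot can be reused for the next chunk.
            usbdevfs_urb* done = nullptr;
            if (ioctl(fd, USBDEVFS_REAPURB, &done) != 0) return false;
            if (done->status != 0) return false;
            ++reaped;
        }
        return true;
    }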
Bug: 324107907
Change-Id: I20db7ea14af85db48f6494091c8279ef7a21033d
This is needed to upgrade the android_logger crate from 0.12.0
to 0.13.3.
with_max_level provides the same functionality as with_min_level.
The renaming is admittedly confusing, but the new name is accurate
and it makes sense that they deprecated and then removed the
previously poorly named with_min_level.
See crate documentation [1] and code [2].
[1]: https://docs.rs/android_logger/0.12.0/android_logger/struct.Config.html#method.with_min_level
[2]: https://docs.rs/android_logger/0.12.0/src/android_logger/lib.rs.html#227
Bug: 322718401
Test: build and run CF with the change.
Test: m aosp_cf_x86_64_phone
Change-Id: Ib4fbd486267d30e74e886139846950b066848d43
Add a new AID for Virtual Machines so we can grant
capabilities such as CAP_SYS_NICE.
Bug: 322197421
Test: Builds and boots; verified capabilities
Change-Id: Ie893ba8ed6956a554bccfbd00e4e6fe9212ea77d
Signed-off-by: David Dai <davidai@google.com>
CL aosp/2929791 removed I/O priority support to prepare for a clean
revert of the CL that migrates the blkio controller from the v1 to the
v2 cgroup hierarchy. Since there was no other reason to revert the I/O
priority CL, restore I/O priority support.
Bug: 186902601
Change-Id: I1a4053140ab55973878bfeacfb546da3c601a895
Signed-off-by: Bart Van Assche <bvanassche@google.com>
The flow of the I/O path is as follows (a sketch follows the list):
1: When there is an I/O request for a given sector, we first
check the in-memory COW operation mapping for that sector.
2: If the mapping of sector to COW operation is found, then the
existing I/O path will work seamlessly. Even if the COW operation
encodes multiple blocks, we will discard the remaining data.
3: If the mapping of sector to COW operation is not found:
a: Find the previous COW operation, as the vector is sorted by sector.
b: If the previous COW operation is a REPLACE op:
i: Check if the current sector is encoded in the previous COW
operation's compressed block.
ii: If the sector falls within the range of compressed blocks,
retrieve the block offset.
iii: De-compress the COW operation based on the compression
factor.
iv: memcpy the data based on the block offset.
v: cache the COW operation pointer as subsequent I/O requests
are sequential and can just be a memcpy at the correct offset.
c: If the previous COW operation is not a REPLACE op or if the
requested sector does not fall within the compression factor
of the previous COW operation, then fall back and read the data
from the base device.
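A rough sketch of the lookup described above; the types and helpers (CowOp, FindMapping, FindPreviousOp, DecompressOp, etc.) are illustrative stand-ins rather than the actual snapuserd symbols:

    // Illustrative only: fallback read path when a sector has no direct COW
    // mapping. Helper functions are assumed/hypothetical and left undefined.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct CowOp {
        uint64_t new_sector;  // first sector covered by this op
        bool is_replace;      // REPLACE op?
        uint32_t num_blocks;  // compression factor expressed in 4k blocks
    };

    constexpr size_t kBlockSize = 4096;
    constexpr size_t kSectorSize = 512;
    constexpr uint64_t kSectorsPerBlock = kBlockSize / kSectorSize;

    const CowOp* FindMapping(uint64_t sector);             // steps 1-2
    const CowOp* FindPreviousOp(uint64_t sector);          // sorted-vector lookup
    std::vector<uint8_t> DecompressOp(const CowOp* op);    // all blocks of the op
    bool ReadFromCowOp(const CowOp* op, uint8_t* out);
    bool ReadFromBaseDevice(uint64_t sector, uint8_t* out);

    const CowOp* cached_op = nullptr;  // reused by sequential reads (step 3.b.v)

    bool ReadBlock(uint64_t sector, uint8_t* out) {
        if (const CowOp* op = FindMapping(sector)) {        // steps 1-2
            return ReadFromCowOp(op, out);
        }
        const CowOp* prev = FindPreviousOp(sector);         // step 3.a
        if (prev && prev->is_replace) {                     // step 3.b
            uint64_t start = prev->new_sector;
            uint64_t span = prev->num_blocks * kSectorsPerBlock;
            if (sector >= start && sector < start + span) {
                std::vector<uint8_t> data = DecompressOp(prev);      // 3.b.iii
                size_t offset = (sector - start) * kSectorSize;      // 3.b.ii
                std::memcpy(out, data.data() + offset, kBlockSize);  // 3.b.iv
                cached_op = prev;                                    // 3.b.v
                return true;
            }
        }
        return ReadFromBaseDevice(sector, out);             // step 3.c
    }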
Snapshot-merge:
During merge of REPLACE ops, read the entire op in one shot, de-compress
multiple blocks and write all the blocks in one shot.
Performance:
go/variable-block-vabc-perf covers detailed performance runs
on Pixel 6 for full and incremental OTA.
Bug: 319309466
Test: snapuserd_test covers all the I/O paths with various block sizes.
About 252 cases with all combinations and tunables.
[==========] 252 tests from 4 test suites ran. (702565 ms total)
[ PASSED ] 252 tests.
On Pixel 6:
=======================================
COW Writer V3:
for i in full, incremental OTA
for j in 4k, 16k, 32k, 64k, 128k, 256k
for k in lz4, zstd, gz
install OTA, reboot, verify merge
=======================================
COW Writer V2:
for i in full, incremental OTA
for j in 4k
for k in lz4, zstd, gz
install OTA, reboot, verify merge
=====================================
Change-Id: I4c3b5c3efa0d09677568b4396cc53db0e74e7c99
Signed-off-by: Akilesh Kailash <akailash@google.com>
This patch supports compression for bigger block sizes.
3 bits [57-59] in the COW Operation "source_info_" field are used to store
the compression factor. Supported compression factors are powers of 2,
viz: 4k, 8k, 16k, 32k, 64k, 128k, 256k.
Only REPLACE operations support the bigger block sizes for now.
This can be extended to other operations later.
The write path in EmitBlocks() has the core logic, wherein a consecutive
sequence of REPLACE ops is compressed based on the compression factor
settings. Thus, for a 64k compression factor, there will be just one
COW operation which encodes all 16 operations, and the entire 64k
block is compressed in one shot.
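A sketch of how the 3-bit field could be packed and unpacked. The exact encoding (log2 of the factor relative to 4k) is an assumption here, not taken from the actual CowOperation definition:

    // Illustrative only: pack/unpack a compression factor into bits [57-59]
    // of a 64-bit source_info_ style field. The real v3 encoding may differ.
    #include <cstdint>
    #include <cassert>

    constexpr uint64_t kCompressionFactorShift = 57;
    constexpr uint64_t kCompressionFactorMask = 0x7ULL << kCompressionFactorShift;

    // Store log2(factor / 4k), so 4k..256k maps to 0..6 and fits in 3 bits.
    uint64_t SetCompressionFactor(uint64_t source_info, uint64_t factor_bytes) {
        uint64_t log2_blocks = 0;
        for (uint64_t size = 4096; size < factor_bytes; size <<= 1) ++log2_blocks;
        assert(log2_blocks <= 6);
        source_info &= ~kCompressionFactorMask;
        return source_info | (log2_blocks << kCompressionFactorShift);
    }

    uint64_t GetCompressionFactor(uint64_t source_info) {
        uint64_t log2_blocks =
            (source_info & kCompressionFactorMask) >> kCompressionFactorShift;
        return 4096ULL << log2_blocks;  // back to bytes: 4k, 8k, ..., 256k
    }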
NOTE: There is no read I/O path support in this patch. A subsequent patch
will add the read support.
Performance data (with read I/O path support in subsequent patch):
go/variable-block-vabc-perf covers detailed performance runs
on Pixel 6 for full and incremental OTA.
TL;DR:
Performance of a full OTA (All numbers are compared against 4k block
size)
=======================================
Snapshot-size:
~10-11% decrease in snapshot-size (disk-space) for zstd with 256k block
size.
~8% decrease in snapshot-size (disk-space) for lz4
Install time:
~13% decrease in OTA install time for zstd with 256k block size.
Snapshot-merge:
~50% decrease in snapshot-merge time with 256k block size for zstd
Post OTA boot-time:
~10.5% decrease in boot time for 64k block size for zstd
In-memory footprint for COW operations:
~80% decrease in memory footprint for 256k block size. (58MB -> 9.2MB)
============================================
For more improvements, further tuning of zstd/lz4 is required,
primarily the compression levels, the zstd compression window, and the
performance of gz at various compression levels.
Bug: 319309466
Test: cow_api test covering all the supported block sizes for v3 writer.
On Pixel 6:
=======================================
COW Writer V3:
for OTA in full, incremental OTA
for block_size in 4k, 16k, 32k, 64k, 128k, 256k
for compression_algo in lz4, zstd, gz, none
install OTA, reboot, verify merge
=======================================
COW Writer V2:
for OTA in full, incremental OTA
for block_size in 4k
for compression_algo in lz4, zstd, gz, none
install OTA, reboot, verify merge
=====================================
Change-Id: I96201f1609582aa9d44d8085852e284b0c4a426d
Signed-off-by: Akilesh Kailash <akailash@google.com>
Intermediate CL needed before variable block size can land. Since v3 is
enabled on cuttlefish, the base build needs to write the
compression_factor in order for the reader to parse it properly.
Otherwise we'll fail the OTA test.
Test: th
Change-Id: Ia353aae8e668858851073f09308909ae70d7854e
In the case that op_count_max is read in as zero, we should use the
upper bound of max blocks as the estimation. One case in which this
can happen is if a v2 COW estimator is used; we should still be able
to run an OTA if we upper-bound our ops buffer size estimation.
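A minimal sketch of that fallback, with illustrative names (the real estimation code differs):

    // Illustrative only: if the estimator left op_count_max at zero (e.g. a
    // v2 COW estimator), fall back to the worst case of one op per block.
    #include <cstdint>

    uint64_t EffectiveOpCountMax(uint64_t op_count_max, uint64_t partition_size,
                                 uint64_t block_size) {
        uint64_t max_blocks = partition_size / block_size;
        return op_count_max != 0 ? op_count_max : max_blocks;
    }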
Test: th
Change-Id: I97ca66368d6631bf43c8911ed66f99c9e8096e2d
Parse the manifest compression_factor and set CowOptions appropriately. This
allows the v3 writer to use the compression factor in OTA. Also update some
comments about supported compression algorithms.
Test: th
Change-Id: I88f254087e536d9e5925064f85317f0acce280ee
Make the makefile safer by requiring a specific value for the
environment variable that turns on Secretkeeper
Bug: 306364873
Test: TreeHugger
Change-Id: Ic5bb5e7411a19941f58ec8c973104c1e53f3834f
The test is not eligible for CTS. Reasons:
1. The init behavior does not directly affect app compat. Apps interact
with init only through the property service, and that part is already
covered by the Bionic tests.
2. This test doesn't run against the init binary installed on the
device. libinit, where most of the init functionality is
implemented, is statically linked into this test binary. In other
words, this test is closer to a unit test for init.
3. This test is not compatible with Trunk Stable, where the test and the
DUT are built from different branches. The test depends on several
(private) libraries like libbase and libutils. Since the interfaces of
these libraries may have changed in the main branch, the test binary
built from the old test-dev branch may break.
This change does not remove the test. The test will still run as a unit
test during pre/post submit.
I didn't drop the `Cts` prefix from the name, because that requires
broader changes.
Bug: 320800872
Test: N/A
Change-Id: I1402c08b79b57ad6daa7948fe37f14fbbe36f1d6
With variable block size compression being added, the number of ops
written can no longer be calculated as easily, since one op can now cover
data that previously required multiple ops. We can get rid of this check
for XOR and Raw blocks, as WriteOperation() already checks whether we are
exceeding the op_count_max limit.
We still need to keep this check for EmitZeroBlocks and EmitCopyBlocks,
since the number of operations is determined ahead of time in those
function calls. Without this check in place, the ops would be added to the
cached ops and the call would return true even when the ops cannot be
written.
With this change, the v3 COW OTA now works on cuttlefish with support for
variable block size compression.
Test: th
Change-Id: Ia55f152f5deb67a9022d0feff112345e72741dd3
Changes to the structure of the v3 header + operation needed for variable
block size. Separating this CL from the variable block size one so we can
get v3 enabled on cuttlefish.
The op count type changes are so that the op count matches the type of
max_blocks. max_blocks is used when the op buffer size is not set; we
default to an upper bound of one operation per block in the partition.
Test: th
Bug: 307452468
Change-Id: I1a2581763a4fd6be5d5795f7e4781023e9984256