This revert was created by Android Culprit Assistant. The culprit was identified in the following culprit search session (http://go/aca-get/91da3c52-9b76-498b-bdbd-a9de7d7ff53b).
Change-Id: I996c595bee9acc15aedaf0a912f67fa027f33cd0
This revert was created by Android Culprit Assistant. The culprit was identified in the following culprit search session (http://go/aca-get/91da3c52-9b76-498b-bdbd-a9de7d7ff53b).
Change-Id: I459265b9c9117d6006c1223947a202505d24c08f
Replace the assert with a check and log message. Also log more about
the request if DMA heap allocation fails.
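A minimal sketch of the pattern, assuming a hypothetical AllocateDmaBuf
helper and libbase logging (the real apploader request fields are not
shown):

    #include <android-base/logging.h>
    #include <cstddef>

    // Hypothetical helper: check and log instead of asserting, and
    // include the request details in the error message.
    bool AllocateDmaBuf(size_t size, int* out_fd) {
        int fd = -1;  // stand-in for the actual DMA heap allocation call
        if (fd < 0) {
            // An assert would abort with no context; a check plus a log
            // message records what was being requested.
            LOG(ERROR) << "DMA heap allocation failed for request of size " << size;
            return false;
        }
        *out_fd = fd;
        return true;
    }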
Bug: 315283243
Test: boot to home
Test: touch x && trusty_apploader x
Change-Id: Ic075809fd2a6b09d9c4e8dff986709c4deae8fb7
By using cgroup.kill we don't need to read cgroup.procs at all for
SIGKILLs, which is more efficient and should help reduce CPU contention
and cgroup lock contention. Fall back to cgroup.procs if we encounter an
error trying to use cgroup.kill, but if cgroup.kill fails it's likely
that cgroup.procs will too.
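A rough sketch of this flow, assuming a cgroup v2 hierarchy;
KillProcessGroupViaProcs stands in for the existing cgroup.procs-based
loop and is only declared here:

    #include <fcntl.h>
    #include <unistd.h>
    #include <string>

    bool KillProcessGroupViaProcs(const std::string& cgroup_path);  // legacy path, not shown

    // Writing "1" to cgroup.kill asks the kernel to SIGKILL every process
    // in the cgroup, so userspace never has to read cgroup.procs.
    static bool KillViaCgroupKill(const std::string& cgroup_path) {
        int fd = open((cgroup_path + "/cgroup.kill").c_str(), O_WRONLY | O_CLOEXEC);
        if (fd < 0) return false;  // e.g. kernel predates cgroup.kill (5.14)
        bool ok = write(fd, "1", 1) == 1;
        close(fd);
        return ok;
    }

    bool KillProcessGroup(const std::string& cgroup_path) {
        if (KillViaCgroupKill(cgroup_path)) return true;
        // Fall back to the old read-and-signal loop, though if cgroup.kill
        // failed this is likely to fail as well.
        return KillProcessGroupViaProcs(cgroup_path);
    }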
Bug: 239829790
Change-Id: I44706faccfb7c4611b512a3642b913f06d30c1dc
In killProcessGroup we currently read cgroup.procs to find processes to
kill, send them kill signals until cgroup.procs is empty, then remove
the cgroup directory. The cgroup cannot be removed until all processes
are dead; otherwise we'll get an EBUSY error from the kernel.
There is a race in the kernel where cgroup.procs can read as empty
even though the cgroup is still pinned by exiting processes and
therefore can't be removed yet. [1]
Let's use the populated field of cgroup.events instead of an empty
cgroup.procs file to determine when the cgroup is removable. In
addition to functioning as we expect, this is more efficient because
we can poll on cgroup.events instead of retrying kills and rereading
cgroup.procs every 5ms, which should help reduce CPU contention and
cgroup lock contention.
It's still possible that it takes longer for a cgroup to become
unpopulated than our timeout allows, in which case we will fail to
remove the cgroup and leak kernel memory. But this change should help
reduce the probability of that happening.
[1] https://lore.kernel.org/all/CABdmKX3SOXpcK85a7cx3iXrwUj=i1yXqEz9i9zNkx8mB=ZXQ8A@mail.gmail.com/
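A simplified sketch of the polling loop (cgroup v2 raises POLLPRI on
cgroup.events when the populated state changes); the real timeout
bookkeeping is omitted and the timeout here applies per wakeup:

    #include <fcntl.h>
    #include <poll.h>
    #include <unistd.h>
    #include <fstream>
    #include <string>

    // Returns true while "populated 1" is reported in cgroup.events.
    static bool CgroupPopulated(const std::string& events_path) {
        std::ifstream f(events_path);
        std::string key;
        int value;
        while (f >> key >> value) {
            if (key == "populated") return value == 1;
        }
        return true;  // be conservative if the file is unreadable
    }

    bool WaitForEmptyCgroup(const std::string& cgroup_path, int timeout_ms) {
        const std::string events_path = cgroup_path + "/cgroup.events";
        int fd = open(events_path.c_str(), O_RDONLY | O_CLOEXEC);
        if (fd < 0) return false;
        while (CgroupPopulated(events_path)) {
            struct pollfd pfd = {};
            pfd.fd = fd;
            pfd.events = POLLPRI;
            if (poll(&pfd, 1, timeout_ms) <= 0) {  // timed out or failed
                close(fd);
                return false;
            }
        }
        close(fd);
        return rmdir(cgroup_path.c_str()) == 0;  // safe now: cgroup is unpopulated
    }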
Bug: 301871933
Change-Id: If7dcfb331f47e06994c9ac85ed08bbcce18cdad7
The current COW size estimate is computed by taking the offset of the
next data write. However, during COW size estimation we repeatedly
increment op_count_max. Since the data section is placed right after
the op section, incrementing op_count_max has the effect of moving the
beginning of the data section. next_data_pos_ is never updated in this
process, causing the final estimate to be off.
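A hypothetical skeleton of the fix, using the member names from the
text above; the header and op sizes are placeholders, not the real COW
format:

    #include <cstdint>

    class CowSizeEstimator {
      public:
        // The data section begins right after the op section, so growing
        // op_count_max moves the start of the data section; next_data_pos_
        // must be rebased by the same amount or the final estimate drifts.
        void SetOpCountMax(uint64_t new_max) {
            next_data_pos_ += DataStartOffset(new_max) - DataStartOffset(op_count_max_);
            op_count_max_ = new_max;
        }

      private:
        static uint64_t DataStartOffset(uint64_t op_count_max) {
            return kHeaderSize + op_count_max * kOpSize;
        }
        static constexpr uint64_t kHeaderSize = 4096;  // placeholder
        static constexpr uint64_t kOpSize = 16;        // placeholder
        uint64_t op_count_max_ = 0;
        uint64_t next_data_pos_ = kHeaderSize;  // data starts right after ops
    };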
Test: th
Bug: 313962438
Change-Id: I250dff54c470c9c20d6db33d91bac898358dee31
Since the current implementation just does a bitwise OR with the
storage unit, multiple calls to set_source would result in overlapping
bits. Fix this by first clearing the existing storage.
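A hedged sketch with a made-up field layout, clearing the field's bits
in the packed storage word before OR-ing in the new value:

    #include <cstdint>

    constexpr int kSourceShift = 0;                     // made-up layout
    constexpr uint64_t kSourceMask = (1ull << 48) - 1;  // example 48-bit field

    void set_source(uint64_t* storage, uint64_t source) {
        *storage &= ~(kSourceMask << kSourceShift);          // clear stale bits first
        *storage |= (source & kSourceMask) << kSourceShift;  // then OR in the new value
    }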
Test: th
Bug: 313962438
Change-Id: Iecfe8dd244c0f65ecd3cacb0404fdc39ef836d97
This is done so that we can depend on it elsewhere without needing all the unrelated methods.
Needed for ag/24553347
Bug: 296207744
Test: refactoring build
Change-Id: I7c6733208f3ae63ba9559753a24cffcb8e1b9d1e
This is a no-op but will be used in upcoming scudo changes that allow
changing the depot size at process startup time, and as such we will no
longer be able to call __scudo_get_stack_depot_size in debuggerd.
We already did the equivalent change for the ring buffer size in
https://r.android.com/q/topic:%22scudo_ring_buffer_size%22
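The general pattern, sketched with an entirely hypothetical header
layout (not scudo's real depot format): record the size inside the
region itself at startup so a reader like debuggerd can recover it from
the target's memory instead of calling a function that assumes a fixed
size:

    #include <cstdint>
    #include <cstring>

    // Hypothetical header at the start of the depot region; only meant to
    // illustrate reading a per-process size from the region itself.
    struct DepotHeader {
        uint64_t depot_size;
    };

    // In debuggerd the bytes would come from the crashed process's memory;
    // a plain memcpy keeps this sketch self-contained.
    uint64_t ReadDepotSize(const void* depot_region) {
        DepotHeader header;
        std::memcpy(&header, depot_region, sizeof(header));
        return header.depot_size;
    }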
Bug: 309446692
Change-Id: I761a7602c54a1f8f2d0575c5e011820d8dbaab63