When arc4random can get entropy (which is true for basically everyone
but init on kernels that don't support getrandom), use it instead of
AT_RANDOM.
Bug: http://b/29622562
Change-Id: I6932803af2c477e65562ff531bd959f199fad1df
This change implements the following property:
Any 2**N-aligned memory region of size 2**N contains no more than one DSO.
The value N can be configured, with 16 or 18 looking like a good choice.
Additionally, DSOs are loaded at a random page-aligned address inside these large
regions.
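A minimal sketch of that placement policy (illustrative only, not the linker's
actual code; the function and variable names are assumptions):

  #include <stdint.h>
  #include <stdlib.h>

  // Reserve a 2**N-aligned region of 2**N bytes, then put the DSO at a random
  // page-aligned offset that still leaves room for all of its mappings.
  static uintptr_t pick_load_address(uintptr_t region_start, size_t region_size,
                                     size_t dso_size, size_t page_size) {
    size_t slack_pages = (region_size - dso_size) / page_size;  // assumes dso_size <= region_size
    size_t offset = (slack_pages == 0)
                        ? 0
                        : arc4random_uniform(slack_pages + 1) * page_size;
    return region_start + offset;  // still page-aligned, still inside the region
  }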
This change has a dual purpose:
1. Larger values of N allow a lot more compact CFI shadow implementation.
See change I14dfea630de468eb5620e7f55f92b1397ba06217.
For example, CFI shadow for the system_server process has the following size (RSS, KB):
152 for N = 12, 32 for N = 16, 16 for N = 18.
2. Extra randomization is good for security.
This change does not result in extra RAM usage, because everything is still page-aligned.
It does result in a bit more VM fragmentation because of the gaps between shared libraries.
As it turns out, this fragmentation is barely noticeable because the kernel creates new mappings
at the highest possible address, and we do enough small mappings to almost completely fill the
gaps (e.g. in the Zygote the gaps are filled with .ttf file mappings and thread stacks).
I've measured VM fragmentation as the sum of all VM gaps (unmapped regions) that are larger
than 1MB according to /proc/$PID/maps. On aosp_angler-userdebug, the numbers are (in GB):
              | N = 12 | N = 18
system_server |  521.9 |  521.1
zygote64      |  522.1 |  521.3
zygote32      |   2.55 |   2.55
mediaserver   |   4.00 |   4.00
Change-Id: Ia6df840dd409c82837efd1f263be420d9723c84a
Before, dynamic executables would initialize the global stack protector
twice: once for the linker, and once for the executable. This worked
because both initializations produced the same result, since both used
getauxval(AT_RANDOM); that won't be the case once arc4random gets
used for it.
Bug: http://b/29622562
Change-Id: I7718b1ba8ee8fac7127ab2360cb1088e510fef5c
Test: ran the stack protector tests on angler (32/64bit, static/dynamic)
And clang won't let you have a function declaration where some arguments
have nullability specifiers and others don't.
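An illustrative pair of declarations (hypothetical prototypes, not real bionic
functions), compiled with clang:

  // With clang's nullability checking (-Wnullability-completeness), annotating
  // one pointer parameter but not the rest triggers "pointer is missing a
  // nullability type specifier" on the unannotated ones.
  void copy_name(char* _Nonnull dst, const char* src);               // complains about 'src'
  void copy_name_ok(char* _Nonnull dst, const char* _Nullable src);  // fine: all pointers annotated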
Change-Id: I450b0221a3f7f068d5fe971dfbc0ba91d25710e8
Previously, arc4random would register a fork-detecting pthread_atfork
handler to not have to call getpid() after a fork. pthread_atfork uses
pthread_mutex_lock, which requires the current thread to be initialized,
preventing the use of arc4random for initializing the global stack guard,
which needs to happen before the main thread has been initialized.
Extract the arc4random fork-detection flag and use the existing
arc4random fork handler to set it.
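A hedged sketch of the flag-based scheme (names are illustrative; where the
child-side hook gets registered is exactly what this change adjusts):

  #include <stdbool.h>

  static volatile bool _rs_forked;

  // Runs in the child after fork(), however that hook ends up being wired in.
  void _rs_note_fork_in_child(void) {
    _rs_forked = true;
  }

  // Called by arc4random before producing output: a set flag means "we forked,
  // throw away the key stream and reseed", with no getpid() call needed.
  static void _rs_forkdetect(void) {
    if (_rs_forked) {
      _rs_forked = false;
      /* wipe and reseed the generator state here */
    }
  }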
Bug: http://b/29622562
Change-Id: I98c9329fa0e489c3f78cad52747eaaf2f5226b80
This function only exists for backwards compatibility, so leave it as it was.
Bug: http://b/26944282
Change-Id: I31973d1402660933103ee2d815649ab9569e4dfc
This patch uses __kernel_vsyscall instead of "int 0x80"
as the syscall entry point. AT_SYSINFO points to
an adapter that masks the arch-specific differences and gives a
performance boost on the i386 architecture.
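A hedged, i386-only sketch of calling through that adapter rather than issuing
the trap directly (not the code from this patch):

  #include <elf.h>          // AT_SYSINFO
  #include <sys/auxv.h>     // getauxval
  #include <sys/syscall.h>  // SYS_getpid

  // getauxval(AT_SYSINFO) returns the __kernel_vsyscall entry provided by the
  // kernel's vDSO; calling through it lets the kernel pick sysenter/syscall
  // instead of the slower "int $0x80".
  static long vsyscall_getpid(void) {
    unsigned long entry = getauxval(AT_SYSINFO);
    long result;
    __asm__ volatile("call *%1"
                     : "=a"(result)
                     : "r"(entry), "0"(SYS_getpid)
                     : "ecx", "edx", "memory", "cc");
    return result;
  }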
Change-Id: Ib340c604d02c6c25714a95793737e3cfdc3fc5d7
Signed-off-by: Mingwei Shi <mingwei.shi@intel.com>
Our FORTIFY _chk functions' implementations were very repetitive and verbose
but not very helpful. We'd also screwed up and put the SSIZE_MAX checks where
they would never fire unless you actually had a buffer as large as half your
address space, which probably doesn't happen very often.
Factor out the duplication and take the opportunity to actually show details
like how big the overrun buffer was, or by how much it was overrun.
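A hedged sketch of the shape of such a shared helper (the names and message
format are illustrative, not necessarily what bionic prints):

  #include <stdio.h>
  #include <stdlib.h>

  static void fortify_fatal_sketch(const char* fn, const char* action,
                                   size_t claimed, size_t actual) {
    fprintf(stderr, "FORTIFY: %s: prevented %zu-byte %s of %zu-byte buffer\n",
            fn, claimed, action, actual);
    abort();
  }

  // One helper replaces the per-function boilerplate: every _chk entry point
  // just passes its own name, the access size, and the real buffer size.
  static void check_buffer_access_sketch(const char* fn, const char* action,
                                         size_t claimed, size_t actual) {
    if (claimed > actual) {
      fortify_fatal_sketch(fn, action, claimed, actual);
    }
  }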
Also remove the obsolete FORTIFY event logging.
Also remove the unused __libc_fatal_no_abort.
This change doesn't improve the diagnostics from the optimized assembler
implementations.
Change-Id: I176a90701395404d50975b547a00bd2c654e1252
Add backtrace_string to convert a malloc_debug backtrace to a string.
Also move the backtrace functions to libc_malloc_debug_backtrace so that
libmemunreachable can reuse them.
Change-Id: I5ad67001c0b4d184903c762863a8588181d4873b
The major components of the rewrite:
- Completely remove the qemu shared library code. Nobody was using it
and it appears to have broken at some point.
- Adds the ability to enable/disable different options independently.
- Adds a new option that can enable the backtrace on alloc/free when
a process gets a specific signal.
- Adds a new way to enable malloc debug. If a special property is
set, and the process has an environment variable set, then malloc
debug will be enabled. This allows a process derived from app_process
to be started with malloc debug enabled via an environment variable.
- get_malloc_leak_info() used to return one element for each pointer that
had the exact same backtrace. The new version returns information for
every one of the pointers with the same backtrace. It turns out ddms already
automatically coalesces these, so the old method simply hid the fact
that there were multiple pointers with the same backtrace.
- Moved all of the malloc debug specific code into the library.
Nothing related to the malloc debug data structures remains in libc.
- Removed the calls to the debug malloc cleanup routine. Instead, I
added an atexit call with the debug malloc cleanup routine. This gets
around most problems related to the timing of doing the cleanup.
The new properties and environment variables:
libc.debug.malloc.options
Set by option name (such as "backtrace"). Setting this to a bad value
will cause a usage statement to be printed to the log.
libc.debug.malloc.program
Same as before. If this is set, then only the program named will
be launched with malloc debug enabled. This is not an exact match:
if the property value appears anywhere in the program name, malloc debug
is enabled.
libc.debug.malloc.env_enabled
If set, then malloc debug is only enabled if the running process has the
environment variable LIBC_DEBUG_MALLOC_ENABLE set.
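For example (a hypothetical invocation, not taken from this change): setting
libc.debug.malloc.options to "backtrace" and libc.debug.malloc.program to
"mediaserver" would collect allocation backtraces only in processes whose name
contains "mediaserver".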
Bug: 19145921
Change-Id: I7b0e58cc85cc6d4118173fe1f8627a391b64c0d7
Exactly which functions get a stack protector is up to the compiler, so
let's separate out the code that sets up the environment that stack protection
requires, and explicitly build it with -fno-stack-protector.
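A hedged sketch of why (the function name is illustrative):

  #include <stdint.h>
  #include <string.h>

  // The global that -fstack-protector code checks on function exit.
  extern uintptr_t __stack_chk_guard;

  // This must run before any protected function does, so the file holding it
  // is built with -fno-stack-protector; otherwise the compiler could protect
  // this very function and read the guard before it has been set.
  void init_global_stack_chk_guard(const void* entropy) {
    memcpy(&__stack_chk_guard, entropy, sizeof(__stack_chk_guard));
  }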
Bug: http://b/26276517
Change-Id: I8719e23ead1f1e81715c32c1335da868f68369b5
Correct the comment, and remove the unused functionality. getauxval(3) now
sets errno to let you know it failed to find anything, but since none of
this function's callers care anyway, it seems safer to leave errno untouched
until we actually have a demonstrated need for it.
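For callers that do care, the glibc-compatible pattern looks roughly like this:

  #include <errno.h>
  #include <stdbool.h>
  #include <sys/auxv.h>

  // A zero return is ambiguous on its own; errno == ENOENT distinguishes
  // "entry not present" from "entry present with value zero".
  static bool have_auxval(unsigned long type, unsigned long* value) {
    errno = 0;
    *value = getauxval(type);
    return !(*value == 0 && errno == ENOENT);
  }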
Bug: https://code.google.com/p/android/issues/detail?id=198111
Change-Id: I232a42dc5a02c8faab94c7d69bef610408276c23
The BIONIC_ROUND_UP_POWER_OF_2 macro did not have parentheses around
the whole expression. This led to the wrong value being computed when it was
used as part of a larger expression such as this:
value = BIONIC_ROUND_UP_POWER_OF_2(value) - 1;
This only happens on 64-bit ABIs.
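An illustrative reduction of the bug (not the actual macro):

  // Without outer parentheses, the trailing "- 1" binds into the last operand
  // of the conditional expression instead of applying to the whole result.
  #define BAD(v)  (v) > 8 ? 16 : 8     /* missing outer parentheses */
  #define GOOD(v) ((v) > 8 ? 16 : 8)

  int a = BAD(10) - 1;   // parses as (10 > 8) ? 16 : (8 - 1)  ->  16
  int b = GOOD(10) - 1;  // ((10 > 8) ? 16 : 8) - 1            ->  15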
Change-Id: I6f8afbdaf16fe64a88fa0246d074b3534c9159c1
It actually means "crash immediately". Well, it's an error. And callers are
much more likely to realize their mistake if we crash immediately rather
than return EINVAL. Historically, glibc has crashed and bionic -- before
the recent changes -- returned EINVAL, so this is a behavior change.
Change-Id: I0c2373a6703b20b8a97aacc1e66368a5885e8c51
In order to run tsan unit tests, we need to support pthread spin APIs.
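A minimal usage sketch of the API being added:

  #include <pthread.h>

  static long counter;
  static pthread_spinlock_t lock;  // initialized once with pthread_spin_init()

  static void bump(void) {
    pthread_spin_lock(&lock);
    ++counter;                     // keep the critical section short; waiters spin
    pthread_spin_unlock(&lock);
  }

  // Setup / teardown:
  //   pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
  //   pthread_spin_destroy(&lock);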
Bug: 18623621
Bug: 25392375
Change-Id: Icbb4a74e72e467824b3715982a01600031868e29
It removes the call to pthread_mutex_lock() at the beginning of a new
thread, which helps to support thread sanitizer.
Change-Id: Ia3601c476de7976a9177b792bd74bb200cee0e13
For the __release and __release_rt functions, the previous macros
would add a DWARF CFI entry for the function with no values. This happened
to work with libunwind since it always tries the ARM unwind information first.
This change removes those entries by creating a no-DWARF version of the
assembler macro.
Change-Id: Ib93e42fff5a79b8d770eab0071fdee7d2afa988d
Reading /proc/stat to count online CPUs is not correct for all Android
kernels. Change to reading /sys/devices/system/cpu/online instead.
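A hedged sketch of parsing that file (not bionic's implementation); it holds
ranges like "0-3" or "0,2-5":

  #include <stdio.h>

  static int count_online_cpus(void) {
    FILE* fp = fopen("/sys/devices/system/cpu/online", "re");
    if (fp == NULL) return -1;
    int count = 0;
    int lo, hi;
    char sep = '\n';
    while (fscanf(fp, "%d%c", &lo, &sep) >= 1) {
      if (sep == '-' && fscanf(fp, "%d%c", &hi, &sep) >= 1) {
        count += hi - lo + 1;   // a range such as "2-5" covers 4 cpus
      } else {
        count += 1;             // a lone cpu number
      }
      sep = '\n';
    }
    fclose(fp);
    return count;
  }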
Bug: 24376925
Change-Id: I3785a6c7aa15a467022a9a261b457194d688fb38
I'm removing the TODO on the assumption that being compatible with glibc
is more useful than BSD. The new internal "bionic_page.h" header factors
out some duplication between libc and the linker.
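A hedged sketch of the kind of page arithmetic such a header would centralize
(macro names are assumptions):

  #include <limits.h>
  #ifndef PAGE_SIZE
  #define PAGE_SIZE 4096  /* assumption for the sketch; the real header derives this properly */
  #endif

  #define PAGE_START(x)  ((x) & ~(PAGE_SIZE - 1))
  #define PAGE_OFFSET(x) ((x) & (PAGE_SIZE - 1))
  #define PAGE_END(x)    PAGE_START((x) + (PAGE_SIZE - 1))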
Bug: http://b/22735893
Change-Id: I4aec4dcba5886fb6f6b9290a8f85660643261321
According to the comments in Posix_close(), TEMP_FAILURE_RETRY() should
not be used with close():
462bdac45c%5E%21/#F12
Kill ScopedFd by simplifying the single caller.
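The reasoning, sketched (the helper name is illustrative):

  #include <unistd.h>

  // On Linux the descriptor is released even when close() fails with EINTR,
  // so TEMP_FAILURE_RETRY(close(fd)) can retry on an fd number that another
  // thread has already been handed. Call close() exactly once.
  static void close_once(int fd) {
    close(fd);  // ignore EINTR; the fd is gone either way
  }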
Change-Id: I248c40b8c2fc95f1938a6edfc245c81847fc44af
Signed-off-by: Spencer Low <CompareAndSwap@gmail.com>
The previous implementation of rwlock contains four atomic variables, which
are hard to maintain and change. So I make the following changes in this CL:
1. Add pending flags in rwlock.state, so we don't need to synchronize
between different atomic variables. Using compare_and_swap operations
on rwlock.state is enough for all state changes (see the sketch after
this list).
2. Add pending_lock to protect the readers/writers waiting and wake-up
operations. As waiting/wakeup is not performance critical, using a
lock is easier to maintain.
3. Add writer preference option.
4. Add unit tests for rwlock.
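A hedged sketch of the resulting shape (field names and layout are
illustrative, not bionic's actual encoding):

  #include <pthread.h>
  #include <stdatomic.h>

  typedef struct {
    // Owner/reader counts plus the new pending flags live in one word, so a
    // single compare_exchange covers every state transition.
    atomic_int state;
    // Slow path only: waiter bookkeeping and wake-ups, where a plain lock is
    // fine because waiting is not performance critical.
    pthread_mutex_t pending_lock;
    unsigned pending_reader_count;
    unsigned pending_writer_count;
    int writer_preferred;  // the new writer-preference option
  } rwlock_sketch_t;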
Bug: 19109156
Change-Id: Idcaa58d695ea401d64445610b465ac5cff23ec7c