* changes:
Implement dynamic TLS accesses and allocation
Implement TLS_DTPMOD and TLS_DTPREL relocations
Ignore DT_TLSDESC_GOT / DT_TLSDESC_PLT
Disable the dlfcn.dlopen_library_with_ELF_TLS test
Add BionicAllocator::memalign
Move the linker allocator into libc
Replace some of linker_allocator's header includes
Add | to make bionic's trace end print match other trace end prints.
Test: took systrace with bionic tag enabled
Change-Id: Ieabb139dd224aa8045be914f21c0432d42a93755
This is used to remove the additional indirection once heap profiling has
finished, preventing any performance impact on processes that are not
currently being profiled.
Test: m
Test: flash sailfish
Test: try tearing down & re-enabling hooks
Bug: 120186127
Change-Id: Idc5988111a47870d2c093fd6a017b47e65f5616b
Initialize a thread's DTV to an empty zeroed DTV. Allocate the DTV and
any ELF module's TLS segment on-demand in __tls_get_addr. Use a generation
counter, incremented in the linker, to signal when threads should
update/reallocate their DTV objects.
A generation count of 0 always indicates the constant zero DTV.
Once a DTV is allocated, it isn't freed until the thread exits, because
a signal handler could interrupt the fast path of __tls_get_addr between
accessing the DTV slot and reading a field of the DTV. Bionic keeps a
linked list of DTV objects so it can free them at thread-exit.
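A minimal sketch of the scheme described above, with made-up names
(TlsDtvSketch, get_thread_dtv, update_dtv_and_allocate, g_tls_generation)
standing in for bionic's real interfaces:

    #include <atomic>
    #include <cstddef>

    struct TlsDtvSketch {
      size_t generation;    // generation this DTV was last updated against
      void* modules[64];    // one slot per TLS module (fixed size for the sketch)
    };

    struct TlsIndexSketch { size_t module_id; size_t offset; };

    std::atomic<size_t> g_tls_generation{1};  // 0 is reserved for the zero DTV

    TlsDtvSketch* get_thread_dtv();                      // reads the thread's DTV slot
    void* update_dtv_and_allocate(TlsIndexSketch* ti);   // slow path (see the locking sketch below)

    void* tls_get_addr_sketch(TlsIndexSketch* ti) {
      TlsDtvSketch* dtv = get_thread_dtv();
      if (dtv->generation != g_tls_generation.load(std::memory_order_acquire) ||
          dtv->modules[ti->module_id] == nullptr) {
        // Slow path: reallocate/update the DTV and allocate the module's
        // TLS segment on demand, then redo the lookup.
        return update_dtv_and_allocate(ti);
      }
      // Fast path: the module's TLS block already exists for this thread.
      return static_cast<char*>(dtv->modules[ti->module_id]) + ti->offset;
    }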
Dynamic TLS memory is allocated using a BionicAllocator instance in
libc_shared_globals. For async-signal safety, access to the
linker/libc-shared state is protected by first blocking signals, then by
acquiring the reader-writer lock, TlsModules::rwlock. A write lock is
needed to allocate or free memory.
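The slow path's locking discipline might look roughly like the following
sketch, written with plain POSIX primitives; bionic's real code uses its
ScopedSignalBlocker and ScopedWriteLock helpers around TlsModules::rwlock:

    #include <pthread.h>
    #include <signal.h>

    pthread_rwlock_t g_tls_modules_rwlock = PTHREAD_RWLOCK_INITIALIZER;

    void with_tls_modules_write_locked(void (*fn)(void*), void* arg) {
      sigset_t all, old;
      sigfillset(&all);
      pthread_sigmask(SIG_BLOCK, &all, &old);       // block signals before locking
      pthread_rwlock_wrlock(&g_tls_modules_rwlock);
      fn(arg);                                      // allocate/free DTVs or TLS segments
      pthread_rwlock_unlock(&g_tls_modules_rwlock);
      pthread_sigmask(SIG_SETMASK, &old, nullptr);  // restore the original mask
    }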
In pthread_exit, unconditionally block signals before freeing dynamic
TLS memory or freeing the shadow call stack.
ndk_cruft.cpp: Avoid including pthread_internal.h inside an extern "C".
(The header now includes a C++ template that doesn't compile inside
extern "C".)
Bug: http://b/78026329
Bug: http://b/123094171
Test: bionic unit tests
Change-Id: I3c9b12921c9e68b33dcc1d1dd276bff364eff5d7
Bionic needs this functionality to allocate a TLS segment with greater
than 16-byte alignment. For simplicity, this allocator only supports up
to one page of alignment.
The memory layout changes slightly when allocating an object of exactly
PAGE_SIZE alignment. Instead of allocating the page_info header at the
start of the page containing the pointer, it is allocated at the start
of the preceding page.
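A rough sketch of the header placement rule described above; the names and the
4096-byte page size are illustrative, not the allocator's actual code:

    #include <cstddef>
    #include <cstdint>

    constexpr uintptr_t kPageSizeSketch = 4096;

    // Returns the address where the allocation's page_info header would live
    // for a pointer returned with the given alignment.
    uintptr_t page_info_addr_sketch(uintptr_t ptr, size_t alignment) {
      uintptr_t page_start = ptr & ~(kPageSizeSketch - 1);
      if (alignment == kPageSizeSketch) {
        // ptr is page-aligned, so the header can't share its page: it is
        // placed at the start of the preceding page instead.
        return page_start - kPageSizeSketch;
      }
      // Common case: the header sits at the start of the page containing ptr.
      return page_start;
    }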
Bug: http://b/78026329
Test: linker-unit-tests{32,64}
Change-Id: I1c8d1cd7ca72d113bced5ee15ba8d831426b0081
Rename LinkerMemoryAllocator -> BionicAllocator
Rename LinkerSmallObjectAllocator -> BionicSmallObjectAllocator
libc and the linker need to share an instance of the allocator for
allocating and freeing dynamic ELF TLS memory (DTVs and segments). The
linker also continues to use this allocator.
Bug: http://b/78026329
Test: /data/nativetest/bionic-unit-tests-static
Test: /data/nativetest64/bionic-unit-tests-static
Test: /data/nativetest/linker-unit-tests/linker-unit-tests32
Test: /data/nativetest64/linker-unit-tests/linker-unit-tests64
Change-Id: I2da037006ddf8041a75f3eba2071a8fcdcc223ce
If a signal handler blocks all of the app's signals, we should
probably respect that and not silently unblock bionic's reserved
signals for it. Otherwise, user code can deadlock, run out of stack,
etc., through no fault of its own, if one of the reserved signals
arrives while it has pivoted onto its signal stack.
Bug: http://b/122939726
Test: treehugger
Change-Id: I6425a3e7413edc16157b35dffe632e1ab1d76618
Addressing Elliott's remaining comments on the android_mallopt change.
Intending to let this get merged in normally (should be clean).
Test: blueline-userdebug still builds.
Change-Id: I4f00191091b8af367f84d087432a5af5f83036ee
On user builds, heapprofd should only be allowed to profile apps that
are either debuggable, or profileable (according to the manifest). This
change exposes extra zygote-specific knowledge to bionic, and makes the
dedicated signal handler check for the special case of being in a zygote child.
With this & the corresponding framework change, we should now be
handling the 4 combinations of:
{java, native} x {profile_at_runtime, profile_at_startup}.
See internal go/heapprofd-java-trigger for further context.
Test: on-device unit tests (shared & static) on blueline-userdebug.
Test: flashed blueline-userdebug, confirmed that java profiling activates from startup and at runtime.
Bug: 120409382
Change-Id: Ic251afeca4324dc650ac1d4f46976b526eae692a
(cherry picked from commit 998792e2b6)
Merged-In: Ic251afeca4324dc650ac1d4f46976b526eae692a
This new option causes an abort after malloc debug detects an error.
This allows vendors to get process coredumps to analyze memory for
corruption.
Bug: 123009873
Test: New test cases added for unit tests and config tests.
Change-Id: I6b480af7f747d6a82f61e8bf3df204a5f7ba017f
Given that its friends setgid/setresgid already are, I don't see why
setregid(32) should be allowed.
Test: (Fixed up) CtsSeccompHostTestCases passes
Change-Id: I31bb429da26baa18ec63b6bfc62628a937fdab0c
Add a new function that installs a seccomp filter that checks that
all setresuid/setresgid syscalls fall within the passed-in
uid/gid range. It allows all other syscalls through. Therefore,
this filter is meant to be used in addition to one of the
regular whitelist syscall filters. (If multiple seccomp filters
are installed in a process, all filters are run, and the most
restrictive result is used.)
Since the regular app and app_zygote seccomp filters block all
other calls to change uid/gid (setuid, setgid, setgroups,
setreuid, setregid, setfsuid), combining these filters prevents
the process from using any other uid/gid than the one passed as
arguments to the new function.
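A conceptual sketch of such a range filter, written with libseccomp for
brevity rather than the hand-written BPF that bionic installs; the function
name and the choice of SCMP_ACT_KILL for out-of-range arguments are
assumptions:

    #include <cstddef>
    #include <seccomp.h>

    // Allow everything by default, but kill the process if any argument of
    // setresuid/setresgid falls outside [id_min, id_max]. Because the most
    // restrictive result across stacked filters wins, this composes with the
    // regular whitelist filter. (A real filter would also need to cover the
    // 32-bit setresuid32/setresgid32 variants where they exist.)
    int install_id_range_filter_sketch(unsigned int id_min, unsigned int id_max) {
      scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
      if (ctx == nullptr) return -1;
      int rc = 0;
      const int syscalls[] = { SCMP_SYS(setresuid), SCMP_SYS(setresgid) };
      for (size_t i = 0; i < 2 && rc == 0; ++i) {
        for (unsigned int arg = 0; arg < 3 && rc == 0; ++arg) {
          rc = seccomp_rule_add(ctx, SCMP_ACT_KILL, syscalls[i], 1,
                                SCMP_CMP(arg, SCMP_CMP_LT, id_min));
          if (rc == 0) {
            rc = seccomp_rule_add(ctx, SCMP_ACT_KILL, syscalls[i], 1,
                                  SCMP_CMP(arg, SCMP_CMP_GT, id_max));
          }
        }
      }
      if (rc == 0) rc = seccomp_load(ctx);
      seccomp_release(ctx);
      return rc;
    }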
Bug: 111434506
Test: atest CtsSeccompHostTestCases
Change-Id: If330efdafbedd8e7d38ca81896a4dbb0bc49f431
The APP_ZYGOTE seccomp policy is identical to the APP seccomp policy,
with the exception of allowing setresgid(32), which the app zygote
needs to be able to do (within a certain range).
Bug: 111434506
Test: manual
Change-Id: I34864837c981d201225e3e2e5501c0415a9a7dc8
Bionic maps typical C functions like setresuid() to a syscall,
depending on the architecture used. This tool generates a .h
file that maps all bionic functions in SYSCALLS.txt to the
syscall number used on a particular architecture. It can then
be used to generate correct seccomp policy at runtime.
Example output in func_to_syscall_nrs.h:
Bug: 111434506
Test: manually inspect func_to_syscall_nrs.h
Change-Id: I8bc5c1cb17a2e7b5c534b2e0496411f2d419ad86
This commit extracts `libc_headers` for `libasync_safe` and
`libpropertyinfoparser` (in the `system/core` repository).
Before this change, `libasync_safe` expected `libc` to be automatically
added to `system_shared_libs` of the libasync_safe vendor variant even if
`libc_defaults` explicitly declined any `system_shared_libs`.
This commit defines `libc_headers` for `libasync_safe` and
`libpropertyinfoparser` so that they can find the headers from libc
without causing circular dependencies.
Bug: 123006819
Test: make checkbuild
Change-Id: I2435ab61d36ff79ca2b4ef70bd898b795159c725
* changes:
Handle R_GENERIC_TLS_TPREL relocations
Avoid a dlopen abort on an invalid TLS alignment
Initialize static TLS memory using module list
Record TLS modules and layout static TLS memory
StaticTlsLayout: add exe/tcb and solib layout
This relocation is used for static TLS's initial-exec (IE) accesses.
A TLS symbol's value is its offset from the start of the ELF module's
TLS segment. It doesn't make sense to add the load_bias to this value,
so skip the call to soinfo::resolve_symbol_address.
Allow TLS relocations to refer to an unresolved weak symbol. In that case,
sym will be non-zero, but lsi will be nullptr. The dynamic linker resolves
the TPREL relocation to 0, making &missing_weak_symbol equal the thread
pointer.
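Putting those rules together, the resolved TPREL value might be computed as in
this sketch, where soinfo_static_tls_offset() is a hypothetical helper and the
real linker's arithmetic differs in its details:

    #include <cstddef>
    #include <link.h>   // ElfW()

    struct soinfo;                                      // opaque for this sketch
    size_t soinfo_static_tls_offset(const soinfo* si);  // hypothetical helper

    // load_bias is deliberately not added: a TLS symbol's st_value is an offset
    // into its module's TLS segment, not a virtual address.
    ElfW(Addr) resolve_tprel_sketch(const soinfo* lsi, ElfW(Addr) sym_value,
                                    ElfW(Addr) addend) {
      if (lsi == nullptr) {
        // Unresolved weak TLS symbol: resolve to 0, so taking its address
        // yields the thread pointer itself.
        return 0;
      }
      return soinfo_static_tls_offset(lsi) + sym_value + addend;
    }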
Recognize Gold-style relocations to STB_LOCAL TLS symbols/sections and
issue an error.
Remove the "case R_AARCH64_TLS_TPREL64", because the R_GENERIC_TLS_TPREL
case handles it.
Remove the no-op R_AARCH64_TLSDESC handler. It's better to issue an error.
dlopen_library_with_ELF_TLS now fails with a consistent error about an
unimplemented dynamic TLS relocation.
Bug: http://b/78026329
Test: bionic unit tests (elftls tests are added in a later CL)
Change-Id: Ia08e1b5c8098117e12143d3b4ebb4dfaa5ca46ec
If the alignment of a TLS segment in a shared object is invalid, return
an error through dlerror() rather than aborting the process.
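The check itself is small; a sketch of the condition implied above
(illustrative, not the linker's actual code):

    #include <cstddef>

    // A TLS segment's alignment must be a nonzero power of two. On failure the
    // caller formats a dlerror() message and fails the dlopen instead of aborting.
    inline bool tls_alignment_is_valid(size_t alignment) {
      return alignment != 0 && (alignment & (alignment - 1)) == 0;
    }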
Bug: http://b/78026329
Test: bionic unit tests
Change-Id: I60e589ddd8ca897f485d55af089f08bd3ff5b1fa
This implementation simply iterates over each static TLS module and
copies its initialization image into a new thread's static TLS block.
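In outline, with made-up container and field names rather than the real
TlsModules structures:

    #include <cstring>
    #include <vector>

    struct StaticTlsModuleSketch {
      size_t static_offset;    // module's offset within the static TLS block
      const void* init_image;  // .tdata initialization image (may be null)
      size_t init_size;        // size of the initialization image
    };

    // Copy each module's initialization image into a new thread's static TLS
    // block; the rest of each segment (.tbss) stays zero because the block is
    // allocated zero-initialized.
    void init_static_tls_sketch(void* static_tls_block,
                                const std::vector<StaticTlsModuleSketch>& modules) {
      for (const auto& mod : modules) {
        if (mod.init_image != nullptr) {
          memcpy(static_cast<char*>(static_tls_block) + mod.static_offset,
                 mod.init_image, mod.init_size);
        }
      }
    }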
Bug: http://b/78026329
Test: bionic unit tests
Change-Id: Ib7edb665271a07010bc68e306feb5df422f2f9e6
Replace reserve_tcb with reserve_exe_segment_and_tcb, which lays out both
the TCB and the executable's TLS segment, accounting for the difference in
layout between variant 1 and variant 2 targets.
The function isn't actually called with a non-null TlsSegment* yet.
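A heavily simplified, self-contained sketch of the layout decision; real
static-TLS layout also has to honor the exact offsets the static linker
assumed, which this ignores, and all names here are illustrative:

    #include <cstddef>

    struct TlsSegmentSketch { size_t size; size_t alignment; };

    struct StaticTlsLayoutSketch {
      size_t offset_ = 0;

      // Round the running offset up to `align` and reserve `size` bytes.
      size_t reserve(size_t size, size_t align) {
        offset_ = (offset_ + align - 1) & ~(align - 1);
        size_t result = offset_;
        offset_ += size;
        return result;
      }

      // Returns the offset of the TCB within the static TLS block.
      size_t reserve_exe_segment_and_tcb(const TlsSegmentSketch* exe, size_t tcb_size) {
    #if defined(__arm__) || defined(__aarch64__)
        // Variant 1: the thread pointer points at the TCB, and the executable's
        // TLS segment follows it at a statically-known offset.
        size_t tcb = reserve(tcb_size, alignof(void*));
        if (exe != nullptr) reserve(exe->size, exe->alignment);
        return tcb;
    #else
        // Variant 2: the executable's TLS segment sits immediately below the
        // thread pointer, and the TCB is placed at the thread pointer.
        if (exe != nullptr) reserve(exe->size, exe->alignment);
        return reserve(tcb_size, alignof(void*));
    #endif
      }
    };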
Bug: http://b/78026329
Test: bionic unit tests
Change-Id: Ibd6238577423a7d0451f36da7e64912046959796
* changes:
Add a __bionic_get_tls_segment function
Factor out ScopedRWLock into its own header
Build the linker with -D_USING_LIBCXX
Provide a stub aeabi.read_tp on other archs
Remove TLS_SLOT_TSAN(8)
This reverts commit 220f51e566.
The internal modules that were using extra symbols are all fixed.
Bug: 120266448
Test: m ndk_translation_all in cf_x86_phone
Change-Id: I561b16de1c320d2624e7cf8e6211e0c70edc823d
The function searches for a TLS segment in an ElfXX_Phdr table.
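A sketch of the search; the real function presumably returns the segment's
address/size/alignment rather than the raw header, but the scan is the
essence:

    #include <cstddef>
    #include <link.h>   // ElfW(), PT_TLS

    // Scan a program header table for the (at most one) PT_TLS entry; returns
    // nullptr if the module has no TLS segment.
    const ElfW(Phdr)* find_tls_phdr_sketch(const ElfW(Phdr)* phdr_table,
                                           size_t phdr_count) {
      for (size_t i = 0; i < phdr_count; ++i) {
        if (phdr_table[i].p_type == PT_TLS) {
          return &phdr_table[i];
        }
      }
      return nullptr;
    }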
Bug: http://b/78026329
Test: bionic unit tests
Change-Id: I221b13420d1a2da33fc2174b7dd256589f6ecfdb
As of the switch to clang-r346389c, it has been replaced with
TLS_SLOT_SANITIZER(6). lld reserves only 8 words past the TP on arm/arm64, so
Bionic can't use any slot beyond slot 7.
The DTV and bionic_tls slots on x86 haven't been part of a release yet,
and they should be strictly internal to Bionic anyway, so shift them down.
Bug: http://b/78026329
Test: bionic unit tests
Change-Id: Ifb3b2d8d85efe1417ee9a10b657b665ec6f2fd3d
This commit adds `__attribute__((unused))` to
`__BIONIC_ERROR_FUNCTION_VISIBILITY`, so that `open()`, `openat()`,
`snprintf()`, and `sprintf()` don't raise `-Werror,-Wunused-function`
when `_FORTIFY_SOURCE` is enabled.
These errors were hidden because the include directories were passed
with `-isystem` (instead of `-I`), and clang did not report
`-Wunused-function` warnings for headers included via `-isystem`.
Bug: 119086738
Test: make checkbuild
Change-Id: I0de71efdbacd90c5c6a419fc0368c92e8efdfd63
This includes one manual change:
In the file bionic/libc/kernel/uapi/linux/in.h, the macro IN_BADCLASS
was not defined correctly. Change the macro from:
#define IN_BADCLASS(a) ((((long int) (a)) == 0xffffffff)
to:
#define IN_BADCLASS(a) (((long int) (a)) == (long int)0xffffffff)
This change is being pushed to the upstream kernels.
Test: Builds and boots.
Change-Id: Ia304773a9dc6789b34d9769d73742384d6afb571
Merged-In: Ia304773a9dc6789b34d9769d73742384d6afb571
(cherry picked from commit 967fb01cce)
Sorting symbols by size groups small symbols together, so we usually
have fewer dirty pages at runtime. On cuttlefish, this results
in 20KB fewer dirty pages just after libc is loaded.
Bug: 112073665
Test: Build libc and check symbol ordering.
Test: Compare runtime private dirty memory usage on cuttlefish.
Change-Id: Ic8fa996f81adb5a8cbc4b97817d2b94ef0697a2a
For ELF TLS "local-exec" accesses, the static linker assumes that an
executable's TLS segment is located at a statically-known offset from the
thread pointer (i.e. "variant 1" for ARM and "variant 2" for x86).
Because these layouts are incompatible, Bionic generally needs to allocate
its TLS slots differently between different architectures.
To allow per-architecture TLS slots:
- Replace the TLS_SLOT_xxx enumerators with macros. New ARM slots are
generally negative, while new x86 slots are generally positive.
- Define a bionic_tcb struct (sketched below) that provides two things:
   - a void* raw_slots_storage[BIONIC_TLS_SLOTS] field
   - an inline accessor function: void*& tls_slot(size_t tpindex);
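A sketch of that struct's shape; the slot-range constants and the
negative-index handling are illustrative simplifications of what the private
header actually defines:

    #include <cstddef>

    constexpr int MIN_TLS_SLOT_SKETCH = -2;   // arm/arm64 slots can be negative
    constexpr int MAX_TLS_SLOT_SKETCH = 7;
    constexpr size_t BIONIC_TLS_SLOTS_SKETCH =
        MAX_TLS_SLOT_SKETCH - MIN_TLS_SLOT_SKETCH + 1;

    struct bionic_tcb_sketch {
      void* raw_slots_storage[BIONIC_TLS_SLOTS_SKETCH];

      // Index by a TP-relative slot number, which may be negative on arm/arm64.
      void*& tls_slot(int tpindex) {
        return raw_slots_storage[tpindex - MIN_TLS_SLOT_SKETCH];
      }
    };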
For ELF TLS, it's necessary to allocate a temporary TCB (i.e. TLS slots),
because the runtime linker doesn't know how large the static TLS area is
until after it has loaded all of the initial solibs.
To accommodate Golang, it's necessary to allocate the pthread keys at a
fixed, small, positive offset from the thread pointer.
This CL moves the pthread keys into bionic_tls, then allocates a single
mapping per thread that looks like so:
- stack guard
- stack [omitted for main thread and with pthread_attr_setstack]
- static TLS:
   - bionic_tcb [exec TLS will either precede or succeed the TCB]
   - bionic_tls [prefixed by the pthread keys]
   - [solib TLS segments will be placed here]
- guard page
As before, if the new mapping includes a stack, the pthread_internal_t
is allocated on it.
At startup, Bionic allocates a temporary bionic_tcb object on the stack,
then allocates a temporary bionic_tls object using mmap. This mmap is
delayed because the linker can't currently call async_safe_fatal() before
relocating itself.
Later, Bionic allocates a stack-less thread mapping for the main thread,
and copies slots from the temporary TCB to the new TCB.
(See *::copy_from_bootstrap methods.)
Bug: http://b/78026329
Test: bionic unit tests
Test: verify that a Golang app still works
Test: verify that a Golang app crashes if bionic_{tls,tcb} are swapped
Merged-In: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
Change-Id: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
(cherry picked from commit 1e660b70da)
This change makes it easier to move the location of the pthread keys
(e.g. into the bionic_tls struct).
Bug: http://b/78026329
Test: bionic unit tests
Test: disassembly of libc.so doesn't change
Merged-In: Ib75d9dab8726de96856af91ec3daa2c5cdbc2178
Change-Id: Ib75d9dab8726de96856af91ec3daa2c5cdbc2178
(cherry picked from commit ecad24fad9)