Commit graph

4 commits

Ryan Prichard
a63c047d90 Implement arm64 TLSDESC
Each TLSDESC relocation relocates a 2-word descriptor in the GOT that
contains:
 - the address of a TLS resolver function
 - an argument to pass (indirectly) to the resolver function

(Specifically, the address of the 2-word descriptor is passed to the
resolver.)
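
A minimal C++ sketch of that layout, using hypothetical names (the real
definitions live in bionic's private headers and may differ):

    #include <stddef.h>

    struct TlsDescriptor;
    // Resolvers return the symbol's offset from the thread pointer.
    typedef size_t (*TlsDescResolverFunc)(TlsDescriptor*);

    struct TlsDescriptor {
      TlsDescResolverFunc func;  // word 1: resolver function address
      size_t arg;                // word 2: resolver argument; note that the
                                 // descriptor's own address is what actually
                                 // gets passed to the resolver
    };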

The loader resolves R_GENERIC_TLSDESC relocations using one of three
resolver functions that it defines:
 - tlsdesc_resolver_static
 - tlsdesc_resolver_dynamic
 - tlsdesc_resolver_unresolved_weak

The resolver functions are written in assembly because they have a
restrictive calling convention. They're only allowed to modify x0 and
(apparently) the condition codes.

For a relocation to memory in static TLS (i.e. the executable or a solib
loaded initially), the loader uses a simple resolver function,
tlsdesc_resolver_static, which returns the static offset that the loader
stored in the descriptor.
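
What the assembly does, rendered as C++ for illustration (building on the
hypothetical TlsDescriptor sketch above):

    // The loader precomputes the symbol's offset from the thread pointer
    // and stores it in desc->arg; the static resolver just returns it.
    size_t tlsdesc_resolver_static(TlsDescriptor* desc) {
      return desc->arg;
    }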

For relocations to dynamic TLS memory (i.e. memory in a dlopen'ed solib),
the loader uses tlsdesc_resolver_dynamic, which allocates TLS memory on
demand: it inlines the fast path of __tls_get_addr, then falls back to
__tls_get_addr when it needs to allocate memory (see the sketch after this
list). The loader handles these dynamic TLS relocations in two passes:
 - In the first pass, it allocates a table of TlsDynamicResolverArg
   objects, one per dynamic TLSDESC relocation.
 - In the second pass, once the table is finalized, it writes the
   addresses of the TlsDynamicResolverArg objects into the TLSDESC
   relocations.
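
A hedged C++ sketch of the dynamic resolver and its argument table entry;
all field and helper names here are illustrative, not taken from the
source:

    struct TlsIndex {
      size_t module_id;  // which solib's TLS segment
      size_t offset;     // offset of the symbol within that segment
    };

    struct TlsDynamicResolverArg {
      size_t generation;  // TLS layout generation this entry was written for
      TlsIndex index;     // argument ultimately passed to __tls_get_addr
    };

    extern "C" void* __tls_get_addr(TlsIndex* ti);  // may allocate memory
    void* __get_tp();  // thread pointer accessor (assumed helper)
    // Hypothetical inlined fast path: succeeds if this thread already has
    // a TLS block for the module and its DTV is current.
    bool try_tls_fast_path(TlsDynamicResolverArg* arg, size_t* tp_offset);

    size_t tlsdesc_resolver_dynamic(TlsDescriptor* desc) {
      auto* arg = reinterpret_cast<TlsDynamicResolverArg*>(desc->arg);
      size_t tp_offset;
      if (try_tls_fast_path(arg, &tp_offset)) return tp_offset;
      // Slow path: __tls_get_addr allocates the block on first access.
      void* addr = __tls_get_addr(&arg->index);
      return reinterpret_cast<size_t>(addr) -
             reinterpret_cast<size_t>(__get_tp());
    }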

tlsdesc_resolver_unresolved_weak returns a negated thread pointer so that
taking the address of an unresolved weak TLS symbol produces NULL.
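
The trick: compiler-emitted code adds the resolver's return value to the
thread pointer, so returning the addend minus the thread pointer makes the
sum come out to the addend, which is 0 for a plain unresolved weak
reference. As a sketch, under the same assumptions as above:

    size_t tlsdesc_resolver_unresolved_weak(TlsDescriptor* desc) {
      // The caller computes TP + return value; (arg - TP) cancels the TP,
      // leaving just the addend stored in desc->arg (0 => NULL).
      return desc->arg - reinterpret_cast<size_t>(__get_tp());
    }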

The loader handles R_GENERIC_TLSDESC in a target-independent way, but
only for arm64, because Bionic has only implemented the resolver functions
for arm64.
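
A sketch of how that target-independent path might choose among the three
resolvers (the function shape and parameters are illustrative only):

    void write_tlsdesc(TlsDescriptor* desc, bool unresolved_weak,
                       bool in_static_tls, size_t value) {
      if (unresolved_weak) {
        desc->func = tlsdesc_resolver_unresolved_weak;
        desc->arg = value;  // relocation addend
      } else if (in_static_tls) {
        desc->func = tlsdesc_resolver_static;
        desc->arg = value;  // precomputed offset from the thread pointer
      } else {
        desc->func = tlsdesc_resolver_dynamic;
        // desc->arg is filled with a TlsDynamicResolverArg's address in
        // the second pass described earlier.
      }
    }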

Bug: http://b/78026329
Test: bionic unit tests
Test: check that backtrace works inside a resolver function and inside
  __tls_get_addr called from a resolver
  (gdbclient.py, b __tls_get_addr, bt)
Change-Id: I752e59ff986292449892c449dad2546e6f0ff7b6
2019-01-28 21:22:45 -08:00
Ryan Prichard
bf427f4225 Fix soinfo_tls::module dangling reference
The field pointed at an element of a std::vector, but the address of a
vector element is invalidated when the vector is resized.
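
The failure mode in miniature (types simplified to stand-ins):

    #include <vector>

    struct TlsModule { int data; };
    struct soinfo_tls { TlsModule* module; };

    void demo() {
      std::vector<TlsModule> modules(1);
      soinfo_tls tls{&modules[0]};             // address of a vector element
      modules.resize(modules.capacity() + 1);  // forces reallocation
      // tls.module now dangles: it points into the vector's old buffer.
    }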

This bug was caught by the new elftls.shared_ie and
elftls_dl.dlopen_shared_var_ie tests.

Bug: http://b/78026329
Test: bionic unit tests
Change-Id: I7232f6d703a9e339fe8966a95b7a68bae2c9c420
2019-01-17 17:13:53 -08:00
Ryan Prichard
e5e69e0912 Record TLS modules and layout static TLS memory
Bug: http://b/78026329
Test: bionic unit tests
Change-Id: Ibf1bf5ec864c7830e4cd1cb882842b644e6182ae
2019-01-16 16:52:47 -08:00
Ryan Prichard
45d1349c63 Reorganize static TLS memory for ELF TLS
For ELF TLS "local-exec" accesses, the static linker assumes that an
executable's TLS segment is located at a statically-known offset from the
thread pointer (i.e. "variant 1" for ARM and "variant 2" for x86).
Because these layouts are incompatible, Bionic generally needs to allocate
its TLS slots differently across architectures.
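
For instance, a local-exec access is just the thread pointer plus a
link-time constant, and the variant determines what that constant means
(conceptual sketch only):

    __thread int counter;  // TLS variable defined in the executable

    int* counter_addr() {
      // The static linker resolves &counter to TP + constant:
      //   variant 1 (ARM): the segment starts just above the TCB, so the
      //     constant is a small positive offset past the thread pointer
      //   variant 2 (x86): the segment ends at the thread pointer, so the
      //     constant is negative
      return &counter;
    }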

To allow per-architecture TLS slots (see the sketch after this list):
 - Replace the TLS_SLOT_xxx enumerators with macros. New ARM slots are
   generally negative, while new x86 slots are generally positive.
 - Define a bionic_tcb struct that provides two things:
    - a void* raw_slots_storage[BIONIC_TLS_SLOTS] field
    - an inline accessor function: void*& tls_slot(size_t tpindex);
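
A sketch of the resulting shape (slot counts and values here are
illustrative; the real header defines one macro per slot):

    #if defined(__arm__) || defined(__aarch64__)
    #define MIN_TLS_SLOT (-2)    // illustrative: new ARM slots go negative
    #else
    #define MIN_TLS_SLOT 0       // illustrative: x86 slots stay non-negative
    #endif
    #define BIONIC_TLS_SLOTS 10  // illustrative slot count

    struct bionic_tcb {
      void* raw_slots_storage[BIONIC_TLS_SLOTS];
      // Translate a slot index relative to the thread pointer into an
      // index into raw_slots_storage. Negative indices passed as size_t
      // wrap, and subtracting MIN_TLS_SLOT wraps them back into range.
      void*& tls_slot(size_t tpindex) {
        return raw_slots_storage[tpindex - MIN_TLS_SLOT];
      }
    };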

For ELF TLS, it's necessary to allocate a temporary TCB (i.e. TLS slots),
because the runtime linker doesn't know how large the static TLS area is
until after it has loaded all of the initial solibs.

To accommodate Golang, it's necessary to allocate the pthread keys at a
fixed, small, positive offset from the thread pointer.

This CL moves the pthread keys into bionic_tls, then allocates a single
mapping per thread that looks like so (see the sketch after this list):
 - stack guard
 - stack [omitted for main thread and with pthread_attr_setstack]
 - static TLS:
    - bionic_tcb [exec TLS will either precede or follow the TCB]
    - bionic_tls [prefixed by the pthread keys]
    - [solib TLS segments will be placed here]
 - guard page
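
A hedged sketch of creating that mapping with one mmap call (the helper
name and the assumption that all sizes are page-aligned are mine):

    #include <stddef.h>
    #include <sys/mman.h>

    // Returns the base of the whole mapping, or nullptr on failure.
    void* allocate_thread_mapping(size_t guard_size, size_t stack_size,
                                  size_t static_tls_size) {
      size_t total = guard_size + stack_size + static_tls_size + guard_size;
      // Reserve everything inaccessible, then open up the middle, leaving
      // the stack guard below and the guard page above as PROT_NONE.
      void* space = mmap(nullptr, total, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (space == MAP_FAILED) return nullptr;
      mprotect(static_cast<char*>(space) + guard_size,
               total - 2 * guard_size, PROT_READ | PROT_WRITE);
      return space;
    }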

As before, if the new mapping includes a stack, the pthread_internal_t
is allocated on it.

At startup, Bionic allocates a temporary bionic_tcb object on the stack,
then allocates a temporary bionic_tls object using mmap. This mmap is
delayed because the linker can't currently call async_safe_fatal() before
relocating itself.

Later, Bionic allocates a stack-less thread mapping for the main thread,
and copies slots from the temporary TCB to the new TCB.
(See *::copy_from_bootstrap methods.)
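
A sketch of the slot handoff, reusing the earlier hypothetical bionic_tcb
(the real copy_from_bootstrap methods may do more than this):

    #include <string.h>

    void copy_tcb_from_bootstrap(bionic_tcb* final_tcb,
                                 const bionic_tcb* boot_tcb) {
      // Preserve slot values (e.g. the stack guard) that were set while
      // the temporary TCB was in use.
      memcpy(final_tcb->raw_slots_storage, boot_tcb->raw_slots_storage,
             sizeof(final_tcb->raw_slots_storage));
    }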

Bug: http://b/78026329
Test: bionic unit tests
Test: verify that a Golang app still works
Test: verify that a Golang app crashes if bionic_{tls,tcb} are swapped
Merged-In: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
Change-Id: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
(cherry picked from commit 1e660b70da)
2019-01-11 15:34:22 -08:00