Reorganize static TLS memory for ELF TLS
For ELF TLS "local-exec" accesses, the static linker assumes that an
executable's TLS segment is located at a statically-known offset from the
thread pointer (i.e. "variant 1" for ARM and "variant 2" for x86).
Because these layouts are incompatible, Bionic generally needs to allocate
its TLS slots differently between different architectures.
To allow per-architecture TLS slots:
- Replace the TLS_SLOT_xxx enumerators with macros. New ARM slots are
generally negative, while new x86 slots are generally positive.
- Define a bionic_tcb struct that provides two things:
- a void* raw_slots_storage[BIONIC_TLS_SLOTS] field
- an inline accessor function: void*& tls_slot(size_t tpindex);
For ELF TLS, it's necessary to allocate a temporary TCB (i.e. TLS slots),
because the runtime linker doesn't know how large the static TLS area is
until after it has loaded all of the initial solibs.
To accommodate Golang, it's necessary to allocate the pthread keys at a
fixed, small, positive offset from the thread pointer.
This CL moves the pthread keys into bionic_tls, then allocates a single
mapping per thread that looks like so:
- stack guard
- stack [omitted for main thread and with pthread_attr_setstack]
- static TLS:
- bionic_tcb [exec TLS will either precede or succeed the TCB]
- bionic_tls [prefixed by the pthread keys]
- [solib TLS segments will be placed here]
- guard page
As before, if the new mapping includes a stack, the pthread_internal_t
is allocated on it.
At startup, Bionic allocates a temporary bionic_tcb object on the stack,
then allocates a temporary bionic_tls object using mmap. This mmap is
delayed because the linker can't currently call async_safe_fatal() before
relocating itself.
Later, Bionic allocates a stack-less thread mapping for the main thread,
and copies slots from the temporary TCB to the new TCB.
(See *::copy_from_bootstrap methods.)
Bug: http://b/78026329
Test: bionic unit tests
Test: verify that a Golang app still works
Test: verify that a Golang app crashes if bionic_{tls,tcb} are swapped
Merged-In: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
Change-Id: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
(cherry picked from commit 1e660b70da625fcbf1e43dfae09b7b4817fa1660)
/*
 * Copyright (C) 2019 The Android Open Source Project
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *  * Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  * Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
 * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
 * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#pragma once

#include <link.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <sys/cdefs.h>

__LIBC_HIDDEN__ extern _Atomic(size_t) __libc_tls_generation_copy;

struct TlsSegment {
  size_t size = 0;
  size_t alignment = 1;
  const void* init_ptr = "";  // Field is non-null even when init_size is 0.
  size_t init_size = 0;
};

__LIBC_HIDDEN__ bool __bionic_get_tls_segment(const ElfW(Phdr)* phdr_table, size_t phdr_count,
                                              ElfW(Addr) load_bias, TlsSegment* out);
__LIBC_HIDDEN__ bool __bionic_check_tls_alignment(size_t* alignment);
struct StaticTlsLayout {
  constexpr StaticTlsLayout() {}

 private:
  size_t offset_ = 0;
  size_t alignment_ = 1;
  bool overflowed_ = false;

  // Offsets to various Bionic TLS structs from the beginning of static TLS.
  size_t offset_bionic_tcb_ = SIZE_MAX;
  size_t offset_bionic_tls_ = SIZE_MAX;

 public:
  size_t offset_bionic_tcb() const { return offset_bionic_tcb_; }
  size_t offset_bionic_tls() const { return offset_bionic_tls_; }
  size_t offset_thread_pointer() const;

  size_t size() const { return offset_; }
  size_t alignment() const { return alignment_; }
  bool overflowed() const { return overflowed_; }

  size_t reserve_exe_segment_and_tcb(const TlsSegment* exe_segment, const char* progname);
  void reserve_bionic_tls();
  size_t reserve_solib_segment(const TlsSegment& segment) {
    return reserve(segment.size, segment.alignment);
  }
  void finish_layout();

 private:
  size_t reserve(size_t size, size_t alignment);

  template <typename T> size_t reserve_type() {
    return reserve(sizeof(T), alignof(T));
  }

  size_t round_up_with_overflow_check(size_t value, size_t alignment);
};

static constexpr size_t kTlsGenerationNone = 0;
static constexpr size_t kTlsGenerationFirst = 1;

// The first ELF TLS module has ID 1. Zero is reserved for the first word of
// the DTV, a generation count. Unresolved weak symbols also use module ID 0.
static constexpr size_t kTlsUninitializedModuleId = 0;

static inline size_t __tls_module_id_to_idx(size_t id) { return id - 1; }
static inline size_t __tls_module_idx_to_id(size_t idx) { return idx + 1; }

// A descriptor for a single ELF TLS module.
struct TlsModule {
  TlsSegment segment;

  // Offset into the static TLS block or SIZE_MAX for a dynamic module.
  size_t static_offset = SIZE_MAX;

  // The generation in which this module was loaded. Dynamic TLS lookups use
  // this field to detect when a module has been unloaded.
  size_t first_generation = kTlsGenerationNone;

  // Used by the dynamic linker to track the associated soinfo* object.
  void* soinfo_ptr = nullptr;
};

// Signature of the callbacks that will be called after DTLS creation and
// before DTLS destruction.
typedef void (*dtls_listener_t)(void* dynamic_tls_begin, void* dynamic_tls_end);

// Signature of the thread-exit callbacks.
typedef void (*thread_exit_cb_t)(void);

struct CallbackHolder {
  thread_exit_cb_t cb;
  CallbackHolder* prev;
};

// Table of the ELF TLS modules. Either the dynamic linker or the static
// initialization code prepares this table, and it's then used during thread
// creation and for dynamic TLS lookups.
struct TlsModules {
  constexpr TlsModules() {}

  // A pointer to the TLS generation counter in libc.so. The counter is
  // incremented each time an solib is loaded or unloaded.
  _Atomic(size_t) generation = kTlsGenerationFirst;
  _Atomic(size_t) *generation_libc_so = nullptr;

  // Access to the TlsModule[] table requires taking this lock.
  pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

  // Pointer to a block of TlsModule objects. The first module has ID 1 and
  // is stored at index 0 in this table.
  size_t module_count = 0;
  size_t static_module_count = 0;
  TlsModule* module_table = nullptr;

  // Callback to be invoked after a dynamic TLS allocation.
  dtls_listener_t on_creation_cb = nullptr;

  // Callback to be invoked before a dynamic TLS deallocation.
  dtls_listener_t on_destruction_cb = nullptr;

  // The first thread-exit callback; inlined to avoid allocation.
  thread_exit_cb_t first_thread_exit_callback = nullptr;

  // The additional callbacks, if any.
  CallbackHolder* thread_exit_callback_tail_node = nullptr;
};
void __init_static_tls(void* static_tls);

// Dynamic Thread Vector. Each thread has a different DTV. For each module
// (executable or solib), the DTV has a pointer to that module's TLS memory. The
// DTV is initially empty and is allocated on-demand. It grows as more modules
// are dlopen'ed. See https://www.akkadia.org/drepper/tls.pdf.
//
// The layout of the DTV is specified in various documents, but it is not part
// of Bionic's public ABI. A compiler can't generate code to access it directly,
// because it can't access libc's global generation counter.
struct TlsDtv {
  // Number of elements in this object's modules field.
  size_t count;

  // A pointer to an older TlsDtv object that should be freed when the thread
  // exits. The objects aren't immediately freed because a DTV could be
  // reallocated by a signal handler that interrupted __tls_get_addr's fast
  // path.
  TlsDtv* next;

  // The DTV slot points at this field, which allows omitting an add instruction
  // on the fast path for a TLS lookup. The arm64 tlsdesc_resolver.S depends on
  // the layout of fields past this point.
  size_t generation;
  void* modules[];
};

struct TlsIndex {
  size_t module_id;
  size_t offset;
};

#if defined(__i386__)
#define TLS_GET_ADDR_CCONV __attribute__((regparm(1)))
#define TLS_GET_ADDR ___tls_get_addr
#else
#define TLS_GET_ADDR_CCONV
#define TLS_GET_ADDR __tls_get_addr
#endif

extern "C" void* TLS_GET_ADDR(const TlsIndex* ti) TLS_GET_ADDR_CCONV;

struct bionic_tcb;
void __free_dynamic_tls(bionic_tcb* tcb);
void __notify_thread_exit_callbacks();
#if defined(__riscv)
// TLS_DTV_OFFSET is a constant used in relocation fields, defined in the RISC-V ELF
// Specification[1]. The front of the TCB contains a pointer to the DTV, and each
// pointer in the DTV points to 0x800 past the start of a TLS block to make full use
// of the range of load/store instructions; refer to [2].
//
// [1]: RISC-V ELF Specification.
// https://github.com/riscv-non-isa/riscv-elf-psabi-doc/blob/master/riscv-elf.adoc#constants
// [2]: Documentation of TLS data structures.
// https://github.com/riscv-non-isa/riscv-elf-psabi-doc/issues/53
#define TLS_DTV_OFFSET 0x800
#else
#define TLS_DTV_OFFSET 0
#endif