/*
 * Copyright (C) 2008 The Android Open Source Project
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *  * Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  * Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
 * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
 * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <android/api-level.h>
#include <elf.h>
#include <errno.h>
#include <malloc.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/auxv.h>
#include <sys/mman.h>

#include "async_safe/log.h"
#include "heap_tagging.h"
#include "libc_init_common.h"
#include "platform/bionic/macros.h"
#include "platform/bionic/mte.h"
#include "platform/bionic/page.h"
#include "platform/bionic/reserved_signals.h"
#include "private/KernelArgumentBlock.h"
#include "private/bionic_asm.h"
#include "private/bionic_asm_note.h"
#include "private/bionic_call_ifunc_resolver.h"
#include "private/bionic_elf_tls.h"
#include "private/bionic_globals.h"
#include "private/bionic_tls.h"
#include "pthread_internal.h"
#include "sys/system_properties.h"
#include "sysprop_helpers.h"

#if __has_feature(hwaddress_sanitizer)
#include <sanitizer/hwasan_interface.h>
#endif

// Leave the variable uninitialized for the sake of the dynamic loader, which
// links in this file. The loader will initialize this variable before
// relocating itself.
#if defined(__i386__)
__LIBC_HIDDEN__ void* __libc_sysinfo;
#endif

extern "C" int __cxa_atexit(void (*)(void *), void *, void *);
extern "C" const char* __gnu_basename(const char* path);

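// Invoke every function pointer collected in an ELF array section such as
// .preinit_array or .init_array, passing it (argc, argv, envp). The array is
// laid out with a -1 sentinel in the first slot and a null terminator at the
// end, which is why the loop below pre-increments before calling.
//
// For example (illustrative, not libc code), a function in the executable
// declared as
//
//   __attribute__((constructor)) static void my_init(void) { /* ... */ }
//
// ends up being called through one of these .init_array entries.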
static void call_array(init_func_t** list, int argc, char* argv[], char* envp[]) {
  // First element is -1, list is null-terminated
  while (*++list) {
    (*list)(argc, argv, envp);
  }
}

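// Static executables have to run their own ifunc resolvers, since there is no
// dynamic linker to do it for them. Each R_*_IRELATIVE relocation below names
// a GOT/PLT slot; the resolver's address is the slot's initial value (REL) or
// the relocation's addend (RELA), and the resolver's return value is written
// back into the slot. An ifunc in user code looks roughly like this
// (illustrative; have_neon, my_op, and the impl_* functions are made-up
// names):
//
//   static int impl_generic(int x) { return x; }
//   static int impl_neon(int x) { return x + 1; }
//   // The resolver picks an implementation once, at startup.
//   static void* resolve_my_op(void) {
//     return have_neon() ? (void*)impl_neon : (void*)impl_generic;
//   }
//   int my_op(int) __attribute__((ifunc("resolve_my_op")));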
#if defined(__arm__) || defined(__i386__) // Legacy architectures used REL...
extern __LIBC_HIDDEN__ __attribute__((weak)) ElfW(Rel) __rel_iplt_start[], __rel_iplt_end[];

static void call_ifunc_resolvers() {
  if (__rel_iplt_start == nullptr || __rel_iplt_end == nullptr) {
    // These symbols were not emitted by gold. Gold has code to do so, but for
    // whatever reason it is not being run. In these cases ifuncs cannot be
    // resolved, so we do not support using ifuncs in static executables linked
    // with gold.
    //
    // Since they are weak, they will be non-null when linked with bfd/lld and
    // null when linked with gold.
    return;
  }

  for (ElfW(Rel)* r = __rel_iplt_start; r != __rel_iplt_end; ++r) {
    ElfW(Addr)* offset = reinterpret_cast<ElfW(Addr)*>(r->r_offset);
    ElfW(Addr) resolver = *offset;
    *offset = __bionic_call_ifunc_resolver(resolver);
  }
}
#else // ...but modern architectures use RELA instead.
extern __LIBC_HIDDEN__ __attribute__((weak)) ElfW(Rela) __rela_iplt_start[], __rela_iplt_end[];

static void call_ifunc_resolvers() {
  if (__rela_iplt_start == nullptr || __rela_iplt_end == nullptr) {
    // These symbols were not emitted by gold. Gold has code to do so, but for
    // whatever reason it is not being run. In these cases ifuncs cannot be
    // resolved, so we do not support using ifuncs in static executables linked
    // with gold.
    //
    // Since they are weak, they will be non-null when linked with bfd/lld and
    // null when linked with gold.
    return;
  }

  for (ElfW(Rela)* r = __rela_iplt_start; r != __rela_iplt_end; ++r) {
    ElfW(Addr)* offset = reinterpret_cast<ElfW(Addr)*>(r->r_offset);
    ElfW(Addr) resolver = r->r_addend;
    *offset = __bionic_call_ifunc_resolver(resolver);
  }
}
#endif

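// PT_GNU_RELRO covers data that only needs to be writable while relocations
// are being applied (typically the GOT and other .data.rel.ro contents). Now
// that call_ifunc_resolvers() has patched those words, the pages can be
// re-protected read-only.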
static void apply_gnu_relro() {
  ElfW(Phdr)* phdr_start = reinterpret_cast<ElfW(Phdr)*>(getauxval(AT_PHDR));
  unsigned long int phdr_ct = getauxval(AT_PHNUM);

  for (ElfW(Phdr)* phdr = phdr_start; phdr < (phdr_start + phdr_ct); phdr++) {
    if (phdr->p_type != PT_GNU_RELRO) {
      continue;
    }

    ElfW(Addr) seg_page_start = PAGE_START(phdr->p_vaddr);
    ElfW(Addr) seg_page_end = PAGE_END(phdr->p_vaddr + phdr->p_memsz);

    // Check return value here? What do we do if we fail?
    mprotect(reinterpret_cast<void*>(seg_page_start), seg_page_end - seg_page_start, PROT_READ);
  }
}

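// Reserve the main thread's static TLS. The per-thread mapping that Bionic
// ultimately builds is laid out roughly like this (summarizing the
// "Reorganize static TLS memory for ELF TLS" change):
//
//   - stack guard
//   - stack [omitted for the main thread and with pthread_attr_setstack]
//   - static TLS:
//     - bionic_tcb [the executable's TLS segment precedes or follows the TCB]
//     - bionic_tls [prefixed by the pthread keys]
//     - [solib TLS segments are placed here]
//   - guard page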
static void layout_static_tls(KernelArgumentBlock& args) {
  StaticTlsLayout& layout = __libc_shared_globals()->static_tls_layout;
  layout.reserve_bionic_tls();

  const char* progname = args.argv[0];
  ElfW(Phdr)* phdr_start = reinterpret_cast<ElfW(Phdr)*>(getauxval(AT_PHDR));
  size_t phdr_ct = getauxval(AT_PHNUM);

  static TlsModule mod;
  TlsModules& modules = __libc_shared_globals()->tls_modules;
  if (__bionic_get_tls_segment(phdr_start, phdr_ct, 0, &mod.segment)) {
    if (!__bionic_check_tls_alignment(&mod.segment.alignment)) {
      async_safe_fatal("error: TLS segment alignment in \"%s\" is not a power of 2: %zu\n",
                       progname, mod.segment.alignment);
    }
    mod.static_offset = layout.reserve_exe_segment_and_tcb(&mod.segment, progname);
    mod.first_generation = kTlsGenerationFirst;

    modules.module_count = 1;
    modules.static_module_count = 1;
    modules.module_table = &mod;
  } else {
    layout.reserve_exe_segment_and_tcb(nullptr, progname);
  }
  // Enable the fast path in __tls_get_addr.
  __libc_tls_generation_copy = modules.generation;

  layout.finish_layout();
}

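// Whether MTE should be enabled for this executable is communicated via an
// ELF note in a PT_NOTE segment. Each note record is an ElfW(Nhdr) followed
// by a name and a descriptor, each padded to 4 bytes; the note of interest
// here has the name "Android" (n_namesz == 8, including the trailing NUL),
// type NT_ANDROID_TYPE_MEMTAG, and a 32-bit word of NT_MEMTAG_* flags as its
// descriptor. Roughly:
//
//   [ n_namesz | n_descsz | n_type ][ "Android\0" + pad ][ flags word ... ]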
#ifdef __aarch64__
static bool __read_memtag_note(const ElfW(Nhdr)* note, const char* name, const char* desc,
                               unsigned* result) {
  if (note->n_type != NT_ANDROID_TYPE_MEMTAG) {
    return false;
  }
  if (note->n_namesz != 8 || strncmp(name, "Android", 8) != 0) {
    return false;
  }
  // Previously (in Android 12), if the note was != 4 bytes, we check-failed
  // here. Let's be more permissive to allow future expansion.
  if (note->n_descsz < 4) {
    async_safe_fatal("unrecognized android.memtag note: n_descsz = %d, expected >= 4",
                     note->n_descsz);
  }
  *result = *reinterpret_cast<const ElfW(Word)*>(desc);
  return true;
}

static unsigned __get_memtag_note(const ElfW(Phdr)* phdr_start, size_t phdr_ct,
                                  const ElfW(Addr) load_bias) {
  for (size_t i = 0; i < phdr_ct; ++i) {
    const ElfW(Phdr)* phdr = &phdr_start[i];
    if (phdr->p_type != PT_NOTE) {
      continue;
    }
    ElfW(Addr) p = load_bias + phdr->p_vaddr;
    ElfW(Addr) note_end = load_bias + phdr->p_vaddr + phdr->p_memsz;
    while (p + sizeof(ElfW(Nhdr)) <= note_end) {
      const ElfW(Nhdr)* note = reinterpret_cast<const ElfW(Nhdr)*>(p);
      p += sizeof(ElfW(Nhdr));
      const char* name = reinterpret_cast<const char*>(p);
      p += align_up(note->n_namesz, 4);
      const char* desc = reinterpret_cast<const char*>(p);
      p += align_up(note->n_descsz, 4);
      if (p > note_end) {
        break;
      }
      unsigned ret;
      if (__read_memtag_note(note, name, desc, &ret)) {
        return ret;
      }
    }
  }
  return 0;
}

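// The ELF note can be overridden from outside the process: the MEMTAG_OPTIONS
// environment variable and the system properties named below accept "sync",
// "async", or "off". Illustrative examples (the binary and app names are
// made up):
//
//   MEMTAG_OPTIONS=sync ./my_test
//   adb shell setprop arm64.memtag.process.my_app async
//   adb shell setprop persist.arm64.memtag.default off
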
// Returns true if there's an environment setting (either sysprop or env var)
// that should overwrite the ELF note, and places the equivalent heap tagging
// level into *level.
static bool get_environment_memtag_setting(HeapTaggingLevel* level) {
  static const char kMemtagPrognameSyspropPrefix[] = "arm64.memtag.process.";
  static const char kMemtagGlobalSysprop[] = "persist.arm64.memtag.default";
  static const char kMemtagOverrideSyspropPrefix[] =
      "persist.device_config.memory_safety_native.mode_override.process.";

  const char* progname = __libc_shared_globals()->init_progname;
  if (progname == nullptr) return false;

  const char* basename = __gnu_basename(progname);

  char options_str[PROP_VALUE_MAX];
  char sysprop_name[512];
  async_safe_format_buffer(sysprop_name, sizeof(sysprop_name), "%s%s", kMemtagPrognameSyspropPrefix,
                           basename);
  char remote_sysprop_name[512];
  async_safe_format_buffer(remote_sysprop_name, sizeof(remote_sysprop_name), "%s%s",
                           kMemtagOverrideSyspropPrefix, basename);
  const char* sys_prop_names[] = {sysprop_name, remote_sysprop_name, kMemtagGlobalSysprop};

  if (!get_config_from_env_or_sysprops("MEMTAG_OPTIONS", sys_prop_names, arraysize(sys_prop_names),
                                       options_str, sizeof(options_str))) {
    return false;
  }

  if (strcmp("sync", options_str) == 0) {
    *level = M_HEAP_TAGGING_LEVEL_SYNC;
  } else if (strcmp("async", options_str) == 0) {
    *level = M_HEAP_TAGGING_LEVEL_ASYNC;
  } else if (strcmp("off", options_str) == 0) {
    *level = M_HEAP_TAGGING_LEVEL_TBI;
  } else {
    async_safe_format_log(
        ANDROID_LOG_ERROR, "libc",
        "unrecognized memtag level: \"%s\" (options are \"sync\", \"async\", or \"off\").",
        options_str);
    return false;
  }

  return true;
}

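// The note itself is emitted at build time rather than here; in the Android
// build it is typically requested through Soong's "memtag_heap" sanitize
// property (with its "diag" variant selecting sync rather than async). That
// mechanism lives in the build system, not in this file.
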
// Returns the initial heap tagging level. Note: This function will never return
// M_HEAP_TAGGING_LEVEL_NONE; if MTE isn't enabled for this process, we enable
// M_HEAP_TAGGING_LEVEL_TBI instead.
static HeapTaggingLevel __get_heap_tagging_level(const void* phdr_start, size_t phdr_ct,
                                                 uintptr_t load_bias, bool* stack) {
  unsigned note_val =
      __get_memtag_note(reinterpret_cast<const ElfW(Phdr)*>(phdr_start), phdr_ct, load_bias);
  *stack = note_val & NT_MEMTAG_STACK;

  HeapTaggingLevel level;
  if (get_environment_memtag_setting(&level)) return level;

  // Note, previously (in Android 12), any value outside of bits [0..3] resulted
  // in a check-fail. In order to be permissive of further extensions, we
  // relaxed this restriction.
  if (!(note_val & (NT_MEMTAG_HEAP | NT_MEMTAG_STACK))) return M_HEAP_TAGGING_LEVEL_TBI;

  unsigned mode = note_val & NT_MEMTAG_LEVEL_MASK;
  switch (mode) {
    case NT_MEMTAG_LEVEL_NONE:
      // Note, previously (in Android 12), NT_MEMTAG_LEVEL_NONE was
      // NT_MEMTAG_LEVEL_DEFAULT, which implied SYNC mode. This was never used
      // by anyone, but we note it (heh) here for posterity, in case the zero
      // level becomes meaningful, and binaries with this note can be executed
      // on Android 12 devices.
      return M_HEAP_TAGGING_LEVEL_TBI;
    case NT_MEMTAG_LEVEL_ASYNC:
      return M_HEAP_TAGGING_LEVEL_ASYNC;
    case NT_MEMTAG_LEVEL_SYNC:
    default:
      // We allow future extensions to specify mode 3 (currently unused), with
      // the idea that it might be used for ASYMM mode or something else. On
      // this version of Android, it falls back to SYNC mode.
      return M_HEAP_TAGGING_LEVEL_SYNC;
  }
}

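// A process that requested ASYNC mode can additionally be started with
// BIONIC_MEMTAG_UPGRADE_SECS=<n> in the environment: it then begins in SYNC
// mode, and the timer recorded in heap_tagging_upgrade_timer_sec (consumed
// later, around __libc_init_mte_late()) is expected to drop it back to ASYNC
// after <n> seconds. Illustrative invocation, with a made-up binary name:
//
//   BIONIC_MEMTAG_UPGRADE_SECS=60 ./my_mte_test
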
// Figure out the desired memory tagging mode (sync/async, heap/globals/stack) for this executable.
// This function is called from the linker before the main executable is relocated.
__attribute__((no_sanitize("hwaddress", "memtag"))) void __libc_init_mte(const void* phdr_start,
                                                                          size_t phdr_ct,
                                                                          uintptr_t load_bias,
                                                                          void* stack_top) {
  bool memtag_stack;
  HeapTaggingLevel level = __get_heap_tagging_level(phdr_start, phdr_ct, load_bias, &memtag_stack);
  char* env = getenv("BIONIC_MEMTAG_UPGRADE_SECS");
  static const char kAppProcessName[] = "app_process64";
  const char* progname = __libc_shared_globals()->init_progname;
  progname = progname ? __gnu_basename(progname) : nullptr;
  if (progname && strncmp(progname, kAppProcessName, sizeof(kAppProcessName)) == 0) {
    // Disable the timed upgrade for the zygote: the threads it spawns would
    // violate the requirement that the upgrading process be single-threaded.
    env = nullptr;
  }
  int64_t timed_upgrade = 0;
  if (env) {
    char* endptr;
    timed_upgrade = strtoll(env, &endptr, 10);
    if (*endptr != '\0' || timed_upgrade < 0) {
      async_safe_format_log(ANDROID_LOG_ERROR, "libc",
                            "Invalid value for BIONIC_MEMTAG_UPGRADE_SECS: %s", env);
      timed_upgrade = 0;
    }
    // Make sure that this does not get passed to potential processes inheriting
    // this environment.
    unsetenv("BIONIC_MEMTAG_UPGRADE_SECS");
  }
  if (timed_upgrade) {
    if (level == M_HEAP_TAGGING_LEVEL_ASYNC) {
      async_safe_format_log(ANDROID_LOG_INFO, "libc",
                            "Attempting timed MTE upgrade from async to sync.");
      __libc_shared_globals()->heap_tagging_upgrade_timer_sec = timed_upgrade;
      level = M_HEAP_TAGGING_LEVEL_SYNC;
    } else if (level != M_HEAP_TAGGING_LEVEL_SYNC) {
      async_safe_format_log(
          ANDROID_LOG_ERROR, "libc",
          "Requested timed MTE upgrade from invalid %s to sync. Ignoring.",
          DescribeTaggingLevel(level));
    }
  }
  if (level == M_HEAP_TAGGING_LEVEL_SYNC || level == M_HEAP_TAGGING_LEVEL_ASYNC) {
    unsigned long prctl_arg = PR_TAGGED_ADDR_ENABLE | PR_MTE_TAG_SET_NONZERO;
    prctl_arg |= (level == M_HEAP_TAGGING_LEVEL_SYNC) ? PR_MTE_TCF_SYNC : PR_MTE_TCF_ASYNC;

    // When entering ASYNC mode, specify that we want to allow upgrading to SYNC by OR'ing in the
    // SYNC flag. But if the kernel doesn't support specifying multiple TCF modes, fall back to
    // specifying a single mode.
    if (prctl(PR_SET_TAGGED_ADDR_CTRL, prctl_arg | PR_MTE_TCF_SYNC, 0, 0, 0) == 0 ||
        prctl(PR_SET_TAGGED_ADDR_CTRL, prctl_arg, 0, 0, 0) == 0) {
      __libc_shared_globals()->initial_heap_tagging_level = level;
      __libc_shared_globals()->initial_memtag_stack = memtag_stack;

      if (memtag_stack) {
        void* page_start =
            reinterpret_cast<void*>(PAGE_START(reinterpret_cast<uintptr_t>(stack_top)));
        if (mprotect(page_start, PAGE_SIZE, PROT_READ | PROT_WRITE | PROT_MTE | PROT_GROWSDOWN)) {
          async_safe_fatal("error: failed to set PROT_MTE on main thread stack: %s\n",
                           strerror(errno));
        }
      }

      return;
    }
  }

  // MTE was either not enabled, or wasn't supported on this device. Try and use
  // TBI.
  if (prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0) == 0) {
    __libc_shared_globals()->initial_heap_tagging_level = M_HEAP_TAGGING_LEVEL_TBI;
  }
  // We did not enable MTE, so we do not need to arm the upgrade timer.
  __libc_shared_globals()->heap_tagging_upgrade_timer_sec = 0;
}
#else // __aarch64__
void __libc_init_mte(const void*, size_t, uintptr_t, void*) {}
#endif // __aarch64__

void __libc_init_profiling_handlers() {
  // The dynamic variant of this function is more interesting, but this
  // at least ensures that static binaries aren't killed by the kernel's
  // default disposition for these two real-time signals that would have
  // handlers installed if this was a dynamic binary.
  signal(BIONIC_SIGNAL_PROFILER, SIG_IGN);
  signal(BIONIC_SIGNAL_ART_PROFILER, SIG_IGN);
}

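// The body of libc startup for static executables: set up the main thread's
// TCB/TLS, libc globals, AT_SECURE handling, the static TLS layout, MTE/TBI,
// the allocator, and signal dispositions; run ifunc resolvers and apply
// RELRO; run the executable's .preinit_array/.init_array constructors;
// register its .fini_array with atexit; then call main() (the "slingshot")
// and pass its return value to exit().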
__attribute__((no_sanitize("memtag"))) __noreturn static void __real_libc_init(
    void* raw_args, void (*onexit)(void) __unused, int (*slingshot)(int, char**, char**),
    structors_array_t const* const structors, bionic_tcb* temp_tcb) {
  BIONIC_STOP_UNWIND;

  // Initialize TLS early so system calls and errno work.
  KernelArgumentBlock args(raw_args);
  __libc_init_main_thread_early(args, temp_tcb);
  __libc_init_main_thread_late();
  __libc_init_globals();
  __libc_shared_globals()->init_progname = args.argv[0];
  __libc_init_AT_SECURE(args.envp);
  layout_static_tls(args);
  __libc_init_main_thread_final();
  __libc_init_common();
  __libc_init_mte(reinterpret_cast<ElfW(Phdr)*>(getauxval(AT_PHDR)), getauxval(AT_PHNUM),
                  /*load_bias = */ 0, /*stack_top = */ raw_args);
  __libc_init_scudo();
  __libc_init_profiling_handlers();
  __libc_init_fork_handler();

  call_ifunc_resolvers();
  apply_gnu_relro();

  // Several Linux ABIs don't pass the onexit pointer, and the ones that
  // do never use it. Therefore, we ignore it.

  call_array(structors->preinit_array, args.argc, args.argv, args.envp);
  call_array(structors->init_array, args.argc, args.argv, args.envp);

  // The executable may have its own destructors listed in its .fini_array
  // so we need to ensure that these are called when the program exits
  // normally.
  if (structors->fini_array != nullptr) {
    __cxa_atexit(__libc_fini, structors->fini_array, nullptr);
  }

  __libc_init_mte_late();

  exit(slingshot(args.argc, args.argv, args.envp));
}

extern "C" void __hwasan_init_static();

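// crtbegin.c gathers the executable's array-section bounds into a
// structors_array_t and transfers control here from _start; the call looks
// roughly like this (a paraphrase, not the literal crtbegin source):
//
//   structors_array_t array = { /* preinit_array, init_array, fini_array */ };
//   __libc_init(raw_args, NULL, &main, &array);
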
// This __libc_init() is only used for static executables, and is called from crtbegin.c.
//
// The 'structors' parameter contains pointers to various initializer
// arrays that must be run before the program's 'main' routine is launched.
__attribute__((no_sanitize("hwaddress", "memtag"))) __noreturn void __libc_init(
    void* raw_args, void (*onexit)(void) __unused, int (*slingshot)(int, char**, char**),
    structors_array_t const* const structors) {
  bionic_tcb temp_tcb = {};
#if __has_feature(hwaddress_sanitizer)
  // Install main thread TLS early. It will be initialized later in __libc_init_main_thread. For now
  // all we need is access to TLS_SLOT_SANITIZER.
  __set_tls(&temp_tcb.tls_slot(0));
  // Initialize HWASan enough to run instrumented code. This sets up TLS_SLOT_SANITIZER, among other
  // things.
  __hwasan_init_static();
  // We are ready to run HWASan-instrumented code, proceed with libc initialization...
#endif
  __real_libc_init(raw_args, onexit, slingshot, structors, &temp_tcb);
}

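// For dynamic executables the target SDK version is tracked by the dynamic
// linker; a static executable has no linker, so libc keeps its own copy here,
// initialized to the API level this libc was built against (__ANDROID_API__).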
static int g_target_sdk_version{__ANDROID_API__};

extern "C" int android_get_application_target_sdk_version() {
  return g_target_sdk_version;
}

extern "C" void android_set_application_target_sdk_version(int target) {
  g_target_sdk_version = target;
  __libc_set_target_sdk_version(target);
}

// This function is called in the dynamic linker before ifunc resolvers have run, so this file is
// compiled with -ffreestanding to avoid implicit string.h function calls. (It shouldn't strictly
// be necessary, though.)
__LIBC_HIDDEN__ libc_shared_globals* __libc_shared_globals() {
  static libc_shared_globals globals;
  return &globals;
}