I only came to improve the signature mismatch error, but I was then annoyed by the copy & paste of the other checks.
get_chunk_size() seems to be deliberately avoiding any checks, though I think that might be a bug, and there should be a get_chunk_size() that _does_ check for most callers, and a get_chunk_size_unchecked() for the <sys/thread_properties.h> stuff that seems to want to only be "best effort" (but does still have _some_ possibility of aborting, in addition to the possibility of segfaulting).
Also a bit of "include what you use" after cider complained about all the unused includes in bionic_allocator.h.
Bug: https://issuetracker.google.com/341850283
Change-Id: I278b495601353733af516a2d60ed10feb9cef36b
Change the comment to explain _why_ we're resolving the path, get
rid of unnecessarily explicit strlen() calls, and make it clearer
that result.path is unconditionally initialized; it's just the
specific content that varies.
Change-Id: Iffbd5efc2eafd56e3efa3c0aaf7c191e6bb66a04
Fix an issue where executing a symlink that points to a relative path
causes the linker to fail to find the right absolute path.
For example:
lrwxr-xr-x 1 root shell 13 2009-01-01 08:00 /system/bin/app_process -> app_process64
Here '/system/bin/app_process' is a symlink to 'app_process64', and linking fails.
This path is taken when 'exe_to_load' is null and stat()ing '/proc/self/exe'
also fails.
Without Patch:
[ Linking executable "app_process64" ]
linker: CANNOT LINK EXECUTABLE "/system/bin/app_process": library "libnativeloader.so" not found: needed by main executable
With Patch:
[ Linking executable "/system/bin/app_process64" ]
[ Using config section "system" ]
[ Jumping to _start (0x75593c3000)... ]
Test: Manual - Run app_process (symlinked to app_process64)
Change-Id: Iacd0a810a679e8d55d68d7e4c84f0e5e4f276b14
Signed-off-by: chenxinyuanchen <chenxinyuanchen@xiaomi.com>
Signed-off-by: chenxinyuanchen <chenxinyuanchen@xiaomi.corp-partner.google.com>
Previously, on RISC-V, the static hwcap variable in
__bionic_call_ifunc_resolver resulted in a call to __cxa_guard_acquire,
which used a GOT access to __stack_chk_guard, but the GOT hadn't yet
been initialized. Fix this problem by applying RELR relocations
earlier.
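As a hedged illustration of the failure mode (not the actual loader code):
a function-local static with a runtime initializer makes the compiler emit a
__cxa_guard_acquire call, and with stack protection enabled that call reads
__stack_chk_guard through the GOT, which is only valid once the RELR
relocations have been applied.

#include <stdint.h>
#include <sys/auxv.h>

// Sketch only: the local static below has a runtime initializer, so the
// compiler guards its initialization with __cxa_guard_acquire, which in
// turn touches __stack_chk_guard via the GOT.
typedef uintptr_t (*ifunc_resolver_t)(uint64_t);

uintptr_t call_ifunc_resolver_sketch(uintptr_t resolver_addr) {
  static uint64_t hwcap = getauxval(AT_HWCAP);  // triggers the guard
  return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(hwcap);
}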
Bug: http://b/330725041
Test: lunch aosp_cf_riscv64_phone-trunk_staging-eng; boot device
Change-Id: Ib10fdcc0d2c1b875eba6bc5e0115a6768d6f25ee
Newer versions of lld write ifunc relocations for PLT entries
to .rela.dyn instead of .rela.plt. This causes a problem because
.rela.dyn is subject to Android relocation packing and we don't
support relocation packing when calling the linker's own ifunc
resolvers. Resolve the problem by passing --pack-dyn-relocs=relr,
which disables Android relocation packing but keeps RELR packing
enabled, covering most of the relocations in the linker.
With the current toolchain there are only two entries in the linker's
.rela.dyn (and these entries look like a bug anyway) so there should
be no substantial change to binary size as a result of this change.
Relocation section '.rela.dyn' at offset 0x8e8 contains 2 entries:
Offset Info Type Symbol's Value Symbol's Name + Addend
00000000001691d8 0000000100000401 R_AARCH64_GLOB_DAT 0000000000000000 ZSTD_trace_decompress_begin + 0
00000000001691e0 0000000200000401 R_AARCH64_GLOB_DAT 0000000000000000 ZSTD_trace_decompress_end + 0
Bug: 331450960
Change-Id: Idf403e775d134cbe208d6b1635a84a2a3e70b74b
This avoids a diagnostic on arm32/arm64 when running ldd on a shared
library with a PT_TLS segment:
executable's TLS segment is underaligned: alignment is 8, needs to be at least 64 for ARM64 Bionic
Bug: http://b/328822319
Test: ldd /system/lib64/libc.so
Change-Id: Idb061b980333ba3b5b3f44b52becf041d76ea0b7
When the page_size < p_align of the ELF load segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
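A minimal sketch of the extension computation described above, with
illustrative names (this is not the actual loader code):

#include <stdint.h>

// How far a LOAD segment's mapping can be extended toward the next segment
// without the extended pages' file offsets running past the end of the file.
static inline uint64_t page_start(uint64_t off, uint64_t page_size) {
  return off & ~(page_size - 1);
}

uint64_t extension_bytes(uint64_t seg_file_end,       // where this segment's data ends
                         uint64_t next_seg_file_off,  // where the next segment begins
                         uint64_t file_size,          // size of the ELF file on disk
                         uint64_t page_size) {
  uint64_t desired_end = page_start(next_seg_file_off, page_size);
  if (desired_end > file_size) desired_end = page_start(file_size, page_size);
  return desired_end > seg_file_end ? desired_end - seg_file_end : 0;
}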
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests
Test: atest -c bionic-unit-tests
Change-Id: I7150ed22af0723cc0b2d326c046e4e4a8b56ad09
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Also enable stack MTE if main binary links in a library that needs it.
Otherwise the following is possible:
1. a binary doesn't require stack MTE, but links in libraries that use
stg on the stack
2. that binary later dlopens a library that requires stack MTE, and our
logic in dlopen remaps the stacks with MTE
3. the libraries from step 1 now have tagged pointers with missing tags
in memory, so things go wrong
This reverts commit f53e91cc81.
Reason for revert: Fixed problem detected in b/324568991
Test: atest memtag_stack_dlopen_test with MTE enabled
Test: check crash is gone on fullmte build
Change-Id: I4a93f6814a19683c3ea5fe1e6d455df5459d31e1
When the page_size < p_align of the ELF load segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: I3363172c02d5a4e2b2a39c44809e433a4716bc45
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
This patch adds the necessary bionic code for the linker to protect
global data using MTE.
The implementation is described in the MemtagABI addendum to the
AArch64 ELF ABI:
https://github.com/ARM-software/abi-aa/blob/main/memtagabielf64/memtagabielf64.rst
In summary, this patch includes:
1. When MTE globals is requested, the linker maps writable SHF_ALLOC
sections as anonymous pages with PROT_MTE (copying the file contents
into the anonymous mapping), rather than using a file-backed private
mapping. This is required as file-based mappings are not necessarily
backed by the kernel with tag-capable memory. For sections already
mapped by the kernel when the linker is invoked via PT_INTERP, we
unmap the contents, remap a PROT_MTE+anonymous mapping in its place,
and re-load the file contents from disk (see the sketch after this list).
2. When MTE globals is requested, the linker tags areas of global memory
(as defined in SHT_AARCH64_MEMTAG_GLOBALS_DYNAMIC) with random tags,
but ensuring that adjacent globals are never tagged using the same
memory tag (to provide deterministic overflow detection).
3. Changes to RELATIVE, ABS64, and GLOB_DAT relocations to load and
store tags in the right places. This ensures that the address tags are
materialized into the GOT entries as well. These changes are a
functional no-op to existing binaries and/or non-MTE capable hardware.
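A minimal sketch of the remapping in step 1, with illustrative names and no
error handling (assumes the caller has already read the section's contents
from disk):

#include <string.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20  // arm64 UAPI value; only meaningful on MTE hardware
#endif

// Replace a file-backed writable mapping with an anonymous PROT_MTE mapping
// at the same address and re-fill it with the section's original contents.
bool remap_section_with_mte(void* start, size_t size, const void* file_contents) {
  void* p = mmap(start, size, PROT_READ | PROT_WRITE | PROT_MTE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
  if (p == MAP_FAILED) return false;
  memcpy(p, file_contents, size);  // tags are applied separately (step 2)
  return true;
}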
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Change-Id: Id7b1a925339b14949d5a8f607dd86928624bda0e
Adds support for the dynamic entries to specify MTE enablement. This is
now the preferred way for dynamically linked executables to specify to
the loader what mode MTE should be in, and whether stack MTE should be
enabled. In future, this is also needed for MTE globals support.
Leave the existing ELF note parsing as a backup option because dynamic
entries are not supported for fully static executables, and there's
still a bunch of glue sitting around in the build system and tests that
explicitly include the note. When -fsanitize=memtag* is specified, lld
will create the note implicitly (along with the new dynamic entries),
but at some point once we've cleaned up all the old references to the
note, we can remove the notegen from lld.
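A hedged sketch of reading those dynamic entries; the tag names and values
follow the MemtagABI addendum, but treat them as illustrative if your headers
differ:

#include <elf.h>
#include <stdint.h>

#ifndef DT_AARCH64_MEMTAG_MODE
#define DT_AARCH64_MEMTAG_MODE  0x70000009
#endif
#ifndef DT_AARCH64_MEMTAG_STACK
#define DT_AARCH64_MEMTAG_STACK 0x7000000c
#endif

void parse_memtag_dynamic(const Elf64_Dyn* dyn, uint64_t* mode, bool* stack_mte) {
  for (; dyn->d_tag != DT_NULL; ++dyn) {
    switch (dyn->d_tag) {
      case DT_AARCH64_MEMTAG_MODE:  *mode = dyn->d_un.d_val; break;  // MTE mode
      case DT_AARCH64_MEMTAG_STACK: *stack_mte = dyn->d_un.d_val != 0; break;
    }
  }
}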
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Test: Build/boot the device under _fullmte.
Change-Id: I954b7e78afa5ff4274a3948b968cfad8eba94d88
__linker_init calls soinfo::~soinfo after __linker_init_post_relocation
has returned, and ~soinfo modifies global loader state
(g_soinfo_handles_map), so the loader lock must be held.
Use a variable with a destructor (~DlMutexUnlocker) to release the
loader lock after ~soinfo returns.
It's not clear to me when the mutex should be acquired.
pthread_mutex_lock in theory can use __bionic_tls (ScopedTrace), but
that should only happen if the mutex is contended, and it won't be.
The loader constructors shouldn't be spawning a thread, and the vdso
shouldn't really have a constructor. ifunc relocations presumably don't
spawn a thread either. It probably doesn't matter much as long as it's
held before calling constructors in the executable or shared objects.
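A hedged sketch of the destructor-based unlock mentioned above (the name is
taken from the commit; the details are illustrative):

#include <pthread.h>

// Local object whose destructor releases the loader lock; placing one in
// __linker_init means the lock is released only after ~soinfo has run.
class DlMutexUnlocker {
 public:
  explicit DlMutexUnlocker(pthread_mutex_t& m) : mutex_(m) {}
  ~DlMutexUnlocker() { pthread_mutex_unlock(&mutex_); }
 private:
  pthread_mutex_t& mutex_;
};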
Bug: http://b/290318196
Test: treehugger
Test: bionic-unit-tests
Change-Id: I544ad65db07e5fe83315ad4489d77536d463a0a4
A constructor could spawn a thread, which could call into the loader,
so the global loader mutex must be held.
Bug: http://b/290318196
Test: treehugger
Change-Id: I7a5249898a11fbc62d1ecdb85b24017a42a4b179
To enable experiments with non-4KiB page sizes, introduce
an inline page_size() function that will either return the runtime
page size (if PAGE_SIZE is not 4096) or a constant 4096 (elsewhere).
This should ensure that there are no changes to the generated code on
unaffected platforms.
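A hedged sketch of the helper described above (bionic's real definition may
differ in detail):

#include <stddef.h>
#include <sys/auxv.h>
#include <sys/user.h>  // bionic defines PAGE_SIZE here on 4KiB platforms

static inline size_t page_size() {
#if defined(PAGE_SIZE) && PAGE_SIZE == 4096
  return PAGE_SIZE;              // constant-folds away on unaffected platforms
#else
  return getauxval(AT_PAGESZ);   // runtime page size elsewhere
#endif
}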
Test: source build/envsetup.sh
lunch aosp_cf_arm64_16k_phone-userdebug
m -j32 installclean
m -j32
Test: launch_cvd \
-kernel_path /path/to/out/android14-5.15/dist/Image \
-initramfs_path /path/to/out/android14-5.15/dist/initramfs.img \
-userdata_format=ext4
Bug: 277272383
Bug: 230790254
Change-Id: Ic0ed98b67f7c6b845804b90a4e16649f2fc94028
Enable linking on a system without /proc mounted by falling back to
reading the executable path from argv[0] when /proc/self/exe can't be
found.
Bug: 254835242
Change-Id: I0735e873fa4e2f439688722c4a846fb70ff398a5
Map all stacks (primary, thread, and sigaltstack) as PROT_MTE when the
binary requests it through the ELF note.
For reference, the note is produced by the following toolchain changes:
https://reviews.llvm.org/D118948
https://reviews.llvm.org/D119384
https://reviews.llvm.org/D119381
Bug: b/174878242
Test: fvp_mini with ToT LLVM (more tests in a separate change)
Change-Id: I04a4e21c966e7309b47b1f549a2919958d93a872
A recent change to lld [1] made it so that the __rela?_iplt_*
symbols are no longer defined for PIEs and shared libraries. Since
the linker is a PIE, this prevents it from being able to look up
its own relocations via these symbols. We don't need these symbols
to find the relocations however, as their location is available via
the dynamic table. Therefore, start using the dynamic table to find
the relocations instead of using the symbols.
Previously landed in r.android.com/1801427 and reverted in
r.android.com/1804876 due to linux-bionic breakage. This time,
search .rela.dyn as well as .rela.plt, since the linker may put the
relocations in either location (see [2]).
[1] f8cb78e99a
[2] https://reviews.llvm.org/D65651
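A hedged sketch of walking relocations found through the dynamic table and
calling IRELATIVE resolvers (arm64 shown; real resolvers may also take hwcap
arguments):

#include <elf.h>
#include <stdint.h>

#ifndef R_AARCH64_IRELATIVE
#define R_AARCH64_IRELATIVE 1032
#endif

static void apply_irelative(const Elf64_Rela* rela, size_t rela_size, uintptr_t load_bias) {
  size_t count = rela_size / sizeof(Elf64_Rela);
  for (size_t i = 0; i < count; ++i) {
    if (ELF64_R_TYPE(rela[i].r_info) != R_AARCH64_IRELATIVE) continue;
    typedef uintptr_t (*resolver_t)();
    uintptr_t resolved = reinterpret_cast<resolver_t>(load_bias + rela[i].r_addend)();
    *reinterpret_cast<uintptr_t*>(load_bias + rela[i].r_offset) = resolved;
  }
}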
Bug: 197420743
Change-Id: I5bef157472e9893822e3ca507ef41a15beefc6f1
To take advantage of file-backed huge pages for the text segments of key
shared libraries (go/android-hugepages), the dynamic linker must load
candidate ELF files at an appropriately aligned address and mark
executable segments with MADV_HUGEPAGE.
This patch uses segments' p_align values to determine when a file is
PMD aligned (2MB alignment), and performs load operations accordingly.
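A hedged sketch of the madvise step described above (names illustrative):

#include <elf.h>
#include <stddef.h>
#include <sys/mman.h>

constexpr size_t kPmdSize = 2 * 1024 * 1024;  // PMD (huge page) size

// If the segment was built with >= 2MB alignment and is executable, ask the
// kernel to back it with huge pages; failure is harmless.
void maybe_advise_hugepages(void* seg_start, size_t seg_size, const Elf64_Phdr& phdr) {
  if (phdr.p_align >= kPmdSize && (phdr.p_flags & PF_X) != 0) {
    madvise(seg_start, seg_size, MADV_HUGEPAGE);
  }
}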
Bug: 158135888
Test: Verified PMD aligned libraries are backed with huge pages on
supporting kernel versions.
Change-Id: Ia2367fd5652f663d50103e18f7695c59dc31c7b9
Setting the linker's soname ("ld-android.so") can allocate heap memory
now that the name uses an std::string, and it's probably a good idea to
defer doing this until after the linker has relocated itself (and after
it has called C++ constructors for global variables.)
Bug: none
Test: bionic unit tests
Test: verify that dlopen("ld-android.so", RTLD_NOLOAD) works
Change-Id: I6b9bd7552c3ae9b77e3ee9e2a98b069b8eef25ca
Use a note in executables to specify
(none|sync|async) heap tagging level. To be extended with (heap x stack x
globals) in the future. A missing note disables all tagging.
Bug: b/135772972
Test: bionic-unit-tests (in a future change)
Change-Id: Iab145a922c7abe24cdce17323f9e0c1063cc1321
This patch adds support to load BTI-enabled objects.
According to the ABI, BTI is recorded in the .note.gnu.property section.
The new parser evaluates the property section, if it exists.
It searches for a .note section with NT_GNU_PROPERTY_TYPE_0.
Once found, it looks for GNU_PROPERTY_AARCH64_FEATURE_1_AND.
The results are cached.
The main change in the linker is where protection of loaded ranges gets
applied. When BTI is requested and the platform also supports it,
the prot flags have to be amended with PROT_BTI for executable ranges.
Failing to add the PROT_BTI flag would disable BTI protection.
Moreover, adding the new PROT flag for shared objects without BTI
compatibility would break applications.
The kernel does not add PROT_BTI to a loaded ELF that has an interpreter.
The linker handles this case too.
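A hedged sketch of the prot-flag amendment described above (names
illustrative; PROT_BTI value from the arm64 UAPI headers):

#include <sys/mman.h>

#ifndef PROT_BTI
#define PROT_BTI 0x10
#endif

int add_bti_if_supported(int prot, bool elf_has_bti, bool cpu_has_bti) {
  if ((prot & PROT_EXEC) && elf_has_bti && cpu_has_bti) {
    prot |= PROT_BTI;  // executable ranges get guarded branch targets
  }
  return prot;
}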
Test: 1. Flame boots
2. Tested on FVP with BTI enabled
Change-Id: Iafdf223b74c6e75d9f17ca90500e6fe42c4c1218
The search_linked_namespaces parameter to find_library_internal is
always true.
Bug: none
Test: bionic tests
Change-Id: I4b6f48afefca4f52b34ca2c9e0f4335fa895ff34
Symbol lookup is O(L) where L is the number of libraries to search (e.g.
in the global and local lookup groups). Factor out the per-DSO work into
soinfo_do_lookup_impl, and optimize for the situation where all the DSOs
are using DT_GNU_HASH (rather than SysV hashes).
To load a set of libraries, the loader first constructs an auxiliary list
of libraries (SymbolLookupList, containing SymbolLookupLib objects). The
SymbolLookupList is reused for each DSO in a load group. (-Bsymbolic is
accommodated by modifying the SymbolLookupLib at the front of the list.)
To search for a symbol, soinfo_do_lookup_impl has a small loop that first
scans a vector of GNU bloom filters looking for a possible match.
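A hedged sketch of that bloom-filter fast path (illustrative, not the exact
loader code):

#include <stddef.h>
#include <stdint.h>

// GNU hash bloom filter check: if either bit is clear, the symbol is
// definitely absent from this DSO and its hash buckets are never touched.
bool bloom_may_contain(const uint64_t* bloom, size_t bloom_word_count,
                       uint32_t bloom_shift, uint32_t gnu_hash) {
  const uint32_t kBits = 64;  // bloom word size for ELFCLASS64
  uint64_t word = bloom[(gnu_hash / kBits) % bloom_word_count];
  uint64_t mask = (1ull << (gnu_hash % kBits)) |
                  (1ull << ((gnu_hash >> bloom_shift) % kBits));
  return (word & mask) == mask;
}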
There was a slight improvement from templatizing soinfo_do_lookup_impl
and skipping the does-this-DSO-lack-GNU-hash check.
Rewrite the relocation processing loop to be faster. There are specialized
functions that handle the expected relocation types in normal relocation
sections and in PLT relocation sections.
This CL can reduce the initial link time of large programs by around
40-50% (e.g. audioserver, cameraserver, etc). On the linker relocation
benchmark (64-bit walleye), it reduces the time from 131.6ms to 71.9ms.
Bug: http://b/143577578 (incidentally fixed by this CL)
Test: bionic-unit-tests
Change-Id: If40a42fb6ff566570f7280b71d58f7fa290b9343
With -O0, on arm64, Clang uses memset for this declaration:
bionic_tcb temp_tcb = {};
arm64 doesn't currently have an ifunc for memset, but if it did, then this
line would crash when the linker is compiled with -O0. It looks like other
architectures would only use a memset call if the bionic_tcb struct were
larger.
Avoid memset by using a custom memclr function that the compiler optimizes
into something efficient.
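A hedged sketch of a memclr-style helper (Clang's no_builtin attribute keeps
the loop from being pattern-matched back into a memset libcall):

#include <stddef.h>

__attribute__((no_builtin("memset")))
void linker_memclr(void* dst, size_t cnt) {
  char* out = static_cast<char*>(dst);
  for (size_t i = 0; i != cnt; ++i) out[i] = 0;  // compiler may still vectorize this
}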
Also add __attribute__((uninitialized)) to ensure that
-ftrivial-auto-var-init does not generate a call to memset. See this
change[1] in build/soong.
[1] If085ec53c619e2cebc86ca23f7039298160d99ae
Test: build linker with -O0, linker[64] --help works
Bug: none
Change-Id: I0df8065a362646de4fa021cae63a7d68ca3966b6
Using ifuncs allows the linker to select faster versions of libc functions
like strcmp, making linking faster.
The linker continues to first initialize TLS, then call the ifunc
resolvers. There are small amounts of code in Bionic that need to avoid
calling functions selected using ifuncs (generally string.h APIs). I've
tried to compile those pieces with -ffreestanding. Maybe it's unnecessary,
but maybe it could help avoid compiler-inserted memset calls, and maybe
it will be useful later on.
The ifuncs are called in a special early pass using special
__rel[a]_iplt_start / __rel[a]_iplt_end symbols. The linker will encounter
the ifuncs again as R_*_IRELATIVE dynamic relocations, so they're skipped
on the second pass.
Break linker_main.cpp into its own liblinker_main library so it can be
compiled with -ffreestanding.
On walleye, this change fixes a recent 2.3% linker64 start-up time
regression (156.6ms -> 160.2ms), but it also helps the 32-bit time by
about 1.9% on the same benchmark. I'm measuring the run-time using a
synthetic benchmark based on loading libandroid_servers.so.
Test: bionic unit tests, manual benchmarking
Bug: none
Merged-In: Ieb9446c2df13a66fc0d377596756becad0af6995
Change-Id: Ieb9446c2df13a66fc0d377596756becad0af6995
(cherry picked from commit 772bcbb0c2)
Define a "linker_bin_template" cc_defaults module that a native bridge
implementation can inherit to define a guest linker.
Break the debuggerd_init call off into separate
linker_debuggerd_{android,stub}.cpp files to allow opting in/out of the
debuggerd integration without needing to change how linker_main.cpp is
compiled. (This is necessary for a later commit that moves
linker_main.cpp into a new static library.)
Test: bionic unit tests
Bug: none
Merged-In: I7c5d79281bce1e69817b266dd91d43ea40f78522
Change-Id: I7c5d79281bce1e69817b266dd91d43ea40f78522
(cherry picked from commit 5adf402ee9)
COUNT_PAGES tries to count the pages dirtied by relocations, but this
implementation is broken because it's merging rel->r_offset values from
multiple DSOs. The functionality is hard to use, because it requires
rebuilding the linker, and it's not obvious to me that it should belong
in the linker. If we do want it, we should make it work without rebuilding
the linker.
Similar information can currently be collected by parsing the result of
`readelf -r` on a binary (or a set of binaries).
Bug: none
Test: m linker libc com.android.runtime ; adb sync ; run something
Change-Id: I760fb6ea4ea3d1927eb5145cdf4ca133851d69b4
The linker currently sets VMA name ".bss" for bss sections in DSOs
loaded by the linker. With this change, the linker now also sets VMA
name for bss sections in the linker itself and the main executable, so
that they don't get left out in various accounting.
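A hedged sketch of how an anonymous mapping gets such a name (requires kernel
support for anonymous VMA naming; constants from the Android/Linux UAPI):

#include <stddef.h>
#include <sys/prctl.h>

#ifndef PR_SET_VMA
#define PR_SET_VMA 0x53564d41
#define PR_SET_VMA_ANON_NAME 0
#endif

void name_bss_mapping(void* start, size_t size) {
  prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, start, size, ".bss");
}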
Test: Run 'dd' and check its /proc/<pid>/maps.
Change-Id: I62d9996ab256f46e2d82cac581c17fa94836a228
Ordinary executables have a PT_INTERP path of /system/bin/linker[64], but:
- executables using bootstrap Bionic use /system/bin/bootstrap/linker[64]
- ASAN executables use /system/bin/linker_asan[64]
gdb appears to use the PT_INTERP path for debugging the dynamic linker
before the linker has initialized the r_debug module list. If the linker's
l_name differs from PT_INTERP, then gdb assumes that the linker has been
unloaded and searches for a new solib using the linker's l_name path.
gdb may print a warning like:
warning: Temporarily disabling breakpoints for unloaded shared library "$OUT/symbols/system/bin/linker64"
If I'm currently debugging the linker when this happens, gdb apparently
doesn't load debug symbols for the linker. This can be worked around with
gdb's "sharedlibrary" command, but it's better to avoid it.
Previously, when PT_INTERP was the bootstrap linker, but l_name was
"/system/bin/linker[64]", gdb would find the default non-bootstrap linker
binary and (presumably) get confused about symbol addresses.
(Also, remove the "static std::string exe_path" variable because the
soinfo::realpath_ field is a std::string that already lasts until exit. We
already use it for link_map_head.l_name in notify_gdb_of_load.)
Bug: http://b/134183407
Test: manual
Change-Id: I9a95425a3a5e9fd01e9dd272273c6ed3667dbb9a
Given that we have both linker and linker64, I didn't really want to have
to have ldd and ldd64, so this change just adds the --list option to the
linkers and a shell script wrapper "ldd" that calls the appropriate
linker behind the scenes.
Test: adb shell linker --list `which app_process32`
Test: adb shell linker64 --list `which date`
Test: adb shell ldd `which app_process32`
Test: adb shell ldd `which date`
Change-Id: I33494bda1cc3cafee54e091f97c0f2ae52d1f74b
- Show which executable is being linked, which linker config file is
being read, and which section in it is being used, enabled when
$LD_DEBUG>=1.
- Show more info to follow the dlopen() process, enabled with "dlopen"
in the debug.ld.xxx property.
Test: Flash, boot, and look at logcat after "adb shell setprop debug.ld.all dlopen"
Bug: 120430775
Change-Id: I5441c8ced26ec0e2f04620c3d2a1ae860b792154
Introduce a new flag ANDROID_DLEXT_RESERVED_ADDRESS_RECURSIVE which
instructs the linker to use the reserved address space to load all of
the newly-loaded libraries required by a dlopen() call instead of only
the main library. They will be loaded consecutively into that region if
they fit. The RELRO sections of all the loaded libraries will also be
considered for reading/writing shared RELRO data.
This will allow the WebView implementation to potentially consist of
more than one .so file while still benefiting from the RELRO sharing
optimisation, which would otherwise only apply to the "root" .so file.
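A hedged usage sketch of the new flag (paths and sizes illustrative):

#include <android/dlext.h>
#include <dlfcn.h>
#include <sys/mman.h>

void* load_into_reservation(const char* path, size_t reservation_size) {
  void* addr = mmap(nullptr, reservation_size, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (addr == MAP_FAILED) return nullptr;
  android_dlextinfo info = {};
  info.flags = ANDROID_DLEXT_RESERVED_ADDRESS | ANDROID_DLEXT_RESERVED_ADDRESS_RECURSIVE;
  info.reserved_addr = addr;
  info.reserved_size = reservation_size;
  // Newly-loaded dependencies also land in the reservation.
  return android_dlopen_ext(path, RTLD_NOW, &info);
}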
Test: bionic-unit-tests (existing and newly added)
Bug: 110790153
Change-Id: I61da775c29fd5017d9a1e2b6b3757c3d20a355b3
When the linker is invoked on itself, (`linker64 /system/bin/linker64`),
the linker prints an error, because self-invocation isn't allowed. The
current method for detecting self-invocation fails because the second
linker instance can crash in a constructor function before reaching
__linker_init.
Fix the problem by moving the error check into a constructor function,
which finishes initializing libc sufficiently to call async_safe_fatal.
The only important thing missing is __libc_sysinfo on 32-bit x86. The aux
vector isn't readily accessible, so use the fallback int 0x80.
Bug: http://b/123637025
Test: bionic unit tests (32-bit x86)
Change-Id: I8be6369e8be3938906628ae1f82be13e6c510119
This is the second attempt to purge linker block allocators. Unlike the
previously reverted change which purge allocators whenever all objects
are freed, we only purge right before control leaves the linker. This
limits the performance impact to one munmap() call per dlopen(), in
most cases.
Bug: 112073665
Test: Boot and check memory usage with 'showmap'.
Test: Run camera cold start performance test.
Change-Id: I02c7c44935f768e065fbe7ff0389a84bd44713f0
For ELF TLS "local-exec" accesses, the static linker assumes that an
executable's TLS segment is located at a statically-known offset from the
thread pointer (i.e. "variant 1" for ARM and "variant 2" for x86).
Because these layouts are incompatible, Bionic generally needs to allocate
its TLS slots differently between different architectures.
To allow per-architecture TLS slots:
- Replace the TLS_SLOT_xxx enumerators with macros. New ARM slots are
generally negative, while new x86 slots are generally positive.
- Define a bionic_tcb struct that provides two things:
- a void* raw_slots_storage[BIONIC_TLS_SLOTS] field
- an inline accessor function: void*& tls_slot(size_t tpindex);
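A hedged sketch of the struct shape from the list above (slot count and index
handling simplified; the real accessor also maps per-architecture, possibly
negative, slot numbers onto the storage array):

#include <stddef.h>

#define BIONIC_TLS_SLOTS 10  // illustrative count only

struct bionic_tcb {
  void* raw_slots_storage[BIONIC_TLS_SLOTS];
  void*& tls_slot(size_t tpindex) { return raw_slots_storage[tpindex]; }
};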
For ELF TLS, it's necessary to allocate a temporary TCB (i.e. TLS slots),
because the runtime linker doesn't know how large the static TLS area is
until after it has loaded all of the initial solibs.
To accommodate Golang, it's necessary to allocate the pthread keys at a
fixed, small, positive offset from the thread pointer.
This CL moves the pthread keys into bionic_tls, then allocates a single
mapping per thread that looks like so:
- stack guard
- stack [omitted for main thread and with pthread_attr_setstack]
- static TLS:
- bionic_tcb [exec TLS will either precede or succeed the TCB]
- bionic_tls [prefixed by the pthread keys]
- [solib TLS segments will be placed here]
- guard page
As before, if the new mapping includes a stack, the pthread_internal_t
is allocated on it.
At startup, Bionic allocates a temporary bionic_tcb object on the stack,
then allocates a temporary bionic_tls object using mmap. This mmap is
delayed because the linker can't currently call async_safe_fatal() before
relocating itself.
Later, Bionic allocates a stack-less thread mapping for the main thread,
and copies slots from the temporary TCB to the new TCB.
(See *::copy_from_bootstrap methods.)
Bug: http://b/78026329
Test: bionic unit tests
Test: verify that a Golang app still works
Test: verify that a Golang app crashes if bionic_{tls,tcb} are swapped
Merged-In: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
Change-Id: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
(cherry picked from commit 1e660b70da)
Instead of passing the address of a KernelArgumentBlock to libc.so for
initialization, use __loader_shared_globals() to initialize globals.
Most of the work happened in the previous CLs. This CL switches a few
KernelArgumentBlock::getauxval calls to [__bionic_]getauxval and stops
routing the KernelArgumentBlock address through the libc init functions.
Bug: none
Test: bionic unit tests
Change-Id: I96c7b02c21d55c454558b7a5a9243c682782f2dd
Merged-In: I96c7b02c21d55c454558b7a5a9243c682782f2dd
(cherry picked from commit 746ad15912)