The current external/libcxx uses the pthreads API to access EH globals,
which is unlikely to work in the loader. The new libc++ prebuilt uses
ELF TLS, which also doesn't work in the loader and crashes the arm64
loader on startup.
Bug: http://b/332594828
Test: treehugger
Change-Id: Icf40085d78cbaba751a97a1d457cb6cc7af7e065
Newer versions of lld write ifunc relocations for PLT entries
to .rela.dyn instead of .rela.plt. This causes a problem because
.rela.dyn is subject to Android relocation packing, and we don't
support relocation packing when calling the linker's own ifunc
resolvers. Resolve the problem by passing --pack-dyn-relocs=relr,
which disables Android relocation packing but keeps RELR packing
enabled; RELR covers most of the relocations in the linker.
With the current toolchain there are only two entries in the linker's
.rela.dyn (and these entries look like a bug anyway) so there should
be no substantial change to binary size as a result of this change.
Relocation section '.rela.dyn' at offset 0x8e8 contains 2 entries:
Offset Info Type Symbol's Value Symbol's Name + Addend
00000000001691d8 0000000100000401 R_AARCH64_GLOB_DAT 0000000000000000 ZSTD_trace_decompress_begin + 0
00000000001691e0 0000000200000401 R_AARCH64_GLOB_DAT 0000000000000000 ZSTD_trace_decompress_end + 0
Bug: 331450960
Change-Id: Idf403e775d134cbe208d6b1635a84a2a3e70b74b
ANDROID_REL[A] needs to be processed after RELR in case it contains
an IRELATIVE relocation with a resolver that accesses data relocated
by RELR.
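A minimal sketch of the ordering constraint, with illustrative
function names (not bionic's actual relocation loop):

    // Illustrative only: the two passes below stand in for the
    // linker's real relocation loops. RELR must run first so that an
    // IRELATIVE resolver invoked from the ANDROID_REL[A] pass sees
    // data that RELR has already relocated.
    void apply_relr_relocations();          // plain relative fixups
    void apply_android_rela_relocations();  // may run ifunc resolvers

    static void relocate_linker_image() {
      apply_relr_relocations();
      apply_android_rela_relocations();
    }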
Bug: 331466607
Change-Id: I50865e67fc1492d75324ef3cb9defef3f8b88421
arm32/arm64: Previously, the loader miscalculated a negative value for
offset_bionic_tcb_ when the executable's alignment was greater than
(8 * sizeof(void*)). The process then tended to crash.
riscv: Previously, the loader didn't propagate the p_align field of the
PT_TLS segment into StaticTlsLayout::alignment_, so high alignment
values were ignored.
__bionic_check_tls_alignment: Stop capping alignment at page_size().
There is no need to cap it, and the uncapped value is necessary for
correctly positioning the TLS segment relative to the thread pointer
(TP) for ARM and x86. The uncapped value is now used for computing
static TLS layout, but only a page of alignment is actually provided:
* static TLS: __allocate_thread_mapping uses mmap, which provides only
a page's worth of alignment
* dynamic TLS: BionicAllocator::memalign caps align to page_size()
* There were no callers to StaticTlsLayout::alignment(), so remove it.
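As a rough sketch of the resulting split, with assumed helper names
(not bionic's actual signatures):

    #include <algorithm>
    #include <cstddef>

    // Sketch (names assumed): the uncapped alignment feeds the layout
    // math, while the backing memory (mmap for static TLS,
    // BionicAllocator::memalign for dynamic TLS) still provides at
    // most a page of alignment.
    size_t layout_alignment(size_t segment_align) {
      return segment_align;  // uncapped: used for TP-relative placement
    }
    size_t provided_alignment(size_t segment_align, size_t page_size) {
      return std::min(segment_align, page_size);  // what allocators give
    }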
Allow PT_TLS.p_align to be 0: quietly convert it to 1.
For static TLS, ensure that the address of a TLS block is congruent to
p_vaddr, modulo p_align. That is, ensure this formula holds:
(&tls_block % p_align) == (p_vaddr % p_align)
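A minimal sketch of one way to satisfy this congruence, assuming
p_align is a power of two (function name hypothetical):

    #include <cstdint>

    // The lowest address >= base satisfying
    // (addr % p_align) == (p_vaddr % p_align), assuming p_align is a
    // power of two and base >= (p_vaddr % p_align). With the usual
    // mod == 0 this reduces to an ordinary align-up.
    uintptr_t place_tls_block(uintptr_t base, uintptr_t p_align,
                              uintptr_t p_vaddr) {
      uintptr_t mod = p_vaddr % p_align;  // usually 0
      return ((base - mod + p_align - 1) & ~(p_align - 1)) + mod;
    }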
For dynamic TLS, a TLS block is still allocated congruent to 0 modulo
p_align. Fixing dynamic TLS congruence is mostly a separate problem
from fixing static TLS congruence, and requires changing the dynamic
TLS allocator and/or DTV structure, so it should be fixed in a
later follow-up commit.
Typically (p_vaddr % p_align) is zero, but it's currently possible to
get a non-zero value with LLD: when .tbss has greater than page
alignment, but .tdata does not, LLD can produce a TLS segment where
(p_vaddr % p_align) is non-zero. LLD calculates TP offsets assuming
the loader will align the segment using (p_vaddr % p_align).
Previously, Bionic and LLD disagreed on the offsets from the TP to
the executable's TLS variables.
Add unit tests for StaticTlsLayout in bionic-unit-tests-static.
See also:
* https://github.com/llvm/llvm-project/issues/40872
* https://sourceware.org/bugzilla/show_bug.cgi?id=24606
* https://reviews.llvm.org/D61824
* https://reviews.freebsd.org/D31538
Bug: http://b/133354825
Bug: http://b/328844725
Bug: http://b/328844839
Test: bionic-unit-tests bionic-unit-tests-static
Change-Id: I8850c32ff742a45d3450d8fc39075c10a1e11000
This avoids a diagnostic on arm32/arm64 when running ldd on a shared
library with a PT_TLS segment:
executable's TLS segment is underaligned: alignment is 8, needs to be at least 64 for ARM64 Bionic
Bug: http://b/328822319
Test: ldd /system/lib64/libc.so
Change-Id: Idb061b980333ba3b5b3f44b52becf041d76ea0b7
The loader doesn't currently support using TLS within itself.
Previously, if a TLS segment was accidentally linked into the loader,
then the loader's `soinfo_tls* tls_` field would be initialized with a
valid TlsSegment, but the loader soinfo wouldn't be registered with
linker_tls.cpp, so the module ID would be 0. (The first valid module ID
is 1.)
The result was architecture-dependent. On x86, everything worked until
the first TLS access, which segfaulted. On arm64, relocating TLSDESC
hit a CHECK() failure on the invalid module ID.
Make the loader more robust:
* Abort in the loader if it detects that it has a TLS segment.
* For R_GENERIC_TLS_DTPMOD, verify that a module ID is valid before
writing it.
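A minimal sketch of the second check, with assumed names (the real
check lives in the linker's generic relocation path):

    #include <cstdint>
    #include <cstdlib>

    // Sketch (names assumed): the first valid TLS module ID is 1; an
    // ID of 0 means the module was never registered with
    // linker_tls.cpp. Fail loudly instead of writing a bogus value.
    static void write_tls_dtpmod(uintptr_t* reloc_target, size_t module_id) {
      if (module_id < 1) abort();  // invalid: unregistered module
      *reloc_target = module_id;
    }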
Bug: none
Test: manually add a thread_local variable to the loader
Test: bionic-unit-tests
Change-Id: I93c17ca65df4af2d46288957a0e483b0e2b13862
Only zero the partial page at the end of the segment. There may be
entire pages beyond the first page boundary after the segment, due to
segment padding (VMA extension), but these are not expected to be
touched (faulted).
Do not attempt to zero the extended region past the first partial page,
since doing so may:
1) Result in a SIGBUS, as the region is not backed by the underlying
file.
2) Break the COW backing, faulting in new anon pages for a region
that will not be used.
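A minimal sketch of the intended zeroing range (names assumed, not
the actual linker helpers):

    #include <cstdint>
    #include <cstring>

    // Sketch: zero only [seg_file_end, next page boundary). Pages past
    // that boundary belong to the padding extension and are
    // deliberately left untouched.
    static void zero_trailing_partial_page(uintptr_t seg_file_end,
                                           uintptr_t page_size) {
      uintptr_t page_end = (seg_file_end + page_size - 1) & ~(page_size - 1);
      if (page_end > seg_file_end) {
        memset(reinterpret_cast<void*>(seg_file_end), 0,
               page_end - seg_file_end);
      }
    }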
Bug: 327600007
Bug: 328797737
Test: Dexcom G7 app
Test: atest -c linker-unit-tests
Test: bootup/idle/system-processes-memory-direct
Change-Id: Ib76b022f94bfc4a4b7eaca2c155af81478741b91
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fix-up for ELFs not loaded
by us (e.g. ELFs loaded by the kernel), since these don't have the
extended VMA fix to begin with.
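A sketch of the idea, with hypothetical field and helper names (not
the actual linker_phdr signatures):

    #include <stdint.h>
    #include <sys/mman.h>

    // Sketch (names assumed): if this ELF was loaded by the bionic
    // loader with padded segments, extend the end of the RELRO
    // mprotect() range to the start of the next LOAD segment so the
    // protection change does not split the extended RW mapping.
    static int protect_relro(uintptr_t relro_page_start,
                             uintptr_t relro_page_end,
                             uintptr_t next_load_page_start,
                             bool padded_by_us) {
      uintptr_t end = padded_by_us ? next_load_page_start : relro_page_end;
      return mprotect(reinterpret_cast<void*>(relro_page_start),
                      end - relro_page_start, PROT_READ);
    }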
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests
Test: atest -c bionic-unit-tests
Change-Id: I9cd04574190ef4c727308363a8cb1120c36e53e0
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When page_size < p_align for an ELF LOAD segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to a ~30MB increase
in unreclaimable vm_area_struct slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
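A minimal sketch of the mapping-length computation, with assumed
names (the real code must also keep reads within the file's bounds,
per the caveat above):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/types.h>

    // Sketch (names assumed): map this segment with a length extended
    // to the page where the next segment starts, so the kernel creates
    // no gap VMA between them.
    static void* map_extended_segment(uintptr_t seg_page_start,
                                      uintptr_t next_seg_page_start,
                                      int prot, int fd,
                                      off_t file_page_offset) {
      size_t map_len = next_seg_page_start - seg_page_start;
      return mmap(reinterpret_cast<void*>(seg_page_start), map_len, prot,
                  MAP_FIXED | MAP_PRIVATE, fd, file_page_offset);
    }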
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests
Test: atest -c bionic-unit-tests
Change-Id: I7150ed22af0723cc0b2d326c046e4e4a8b56ad09
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Also enable stack MTE if the main binary links in a library that needs it.
Otherwise the following is possible:
1. a binary doesn't require stack MTE, but links in libraries that use
stg on the stack
2. that binary later dlopens a library that requires stack MTE, and our
logic in dlopen remaps the stacks with MTE
3. the libraries from step 1 now have tagged pointers with missing tags
in memory, so things go wrong
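A rough sketch of the intended up-front decision (type and field
names assumed, not bionic's actual soinfo API):

    #include <vector>

    // Sketch (names assumed): stack MTE must be decided at startup --
    // if the main executable or anything it links against needs it --
    // so a later dlopen() never remaps stacks that untagged code is
    // already using.
    struct soinfo_view { bool memtag_stack; };

    static bool needs_stack_mte(bool exe_needs_it,
                                const std::vector<soinfo_view>& needed) {
      bool result = exe_needs_it;
      for (const soinfo_view& si : needed) result |= si.memtag_stack;
      return result;
    }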
This reverts commit f53e91cc81.
Reason for revert: Fixed problem detected in b/324568991
Test: atest memtag_stack_dlopen_test with MTE enabled
Test: check crash is gone on fullmte build
Change-Id: I4a93f6814a19683c3ea5fe1e6d455df5459d31e1
Some obfuscated ELFs may contain "empty" PT_NOTEs (p_memsz == 0).
Attempting to mmap these will cause an EINVAL failure, since the
requested mapping size is zero.
Skip these program headers when parsing notes.
Also improve the failure log with the arguments to the mmap syscall.
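A minimal sketch of the skip, assuming the usual phdr-table walk
(helper name illustrative):

    #include <link.h>  // ElfW, PT_NOTE

    // Sketch: skip zero-sized PT_NOTE program headers before mapping
    // them; mmap() with a length of zero fails with EINVAL.
    static bool is_mappable_note(const ElfW(Phdr)* phdr) {
      return phdr->p_type == PT_NOTE && phdr->p_memsz != 0;
    }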
Test: Platinum Tests
Bug: 324468126
Change-Id: I7de4e55c6d221d555faabfcc33bb6997921dd022
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fix-up for ELFs not loaded
by us (e.g. ELFs loaded by the kernel), since these don't have the
extended VMA fix to begin with.
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [ Later patch ]
Test: atest -c bionic-unit-tests
Change-Id: If1d99e8b872fcf7f6e0feb02ff33503029b63be3
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When page_size < p_align for an ELF LOAD segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to a ~30MB increase
in unreclaimable vm_area_struct slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: I3363172c02d5a4e2b2a39c44809e433a4716bc45
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
The PAD_SEGMENT note is used to optimize memory usage of the loader.
If the note parsing fails, skip the optimization and continue
loading the ELF normally.
Bug: 324309329
Bug: 316403210
Change-Id: I2aabc9f399816c53eb33ff303208a16022571edf
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
This reverts commit 79c9694c91.
Reason for revert: DroidMonitor: Potential culprit for Bug b/324348078 - verifying through ABTD before revert submission. This is part of the standard investigation process, and does not mean your CL will be reverted.
Change-Id: I32f7bc824900e18a7d53b025ffe3aaef0ee71802
kPageSize is needed to determine whether the loader needs to
extend VMAs to avoid gaps in the memory map when loading the ELF.
While at it, use kPageSize to generically deduce the PMD size and
replace the hardcoded 2MB PMD size.
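One way to deduce it (a sketch; assumes 8-byte page-table entries,
as on LP64):

    #include <stddef.h>
    #include <stdint.h>

    // Sketch: a fully-populated page of 8-byte page-table entries maps
    // (kPageSize / sizeof(uint64_t)) pages, so the PMD size follows
    // from the page size instead of being hardcoded.
    static constexpr size_t kPageSize = 4096;  // illustrative value
    static constexpr size_t kPmdSize =
        (kPageSize / sizeof(uint64_t)) * kPageSize;
    static_assert(kPmdSize == 2 * 1024 * 1024,
                  "4KB pages imply a 2MB PMD");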
Bug: 316403210
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Change-Id: Ie770320e629c38149dc75dae1deb1e429dd1acf2
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
ReadPadSegmentNote() finds the ELF note of type
NT_ANDROID_TYPE_PAD_SEGMENT and checks that the desc value
is 1, to decide whether the LOAD segment mappings should
be extended (padded) to avoid gaps.
Cache the result of this operation in ElfReader and soinfo
for use in the subsequent patch which handles the extension
of the segment mappings.
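A rough sketch of the caching, with assumed member names (the real
ElfReader does the note lookup itself):

    #include <stdint.h>

    // Sketch (names assumed): cache the note's verdict so the mapping
    // code and soinfo can consult it later without re-parsing phdrs.
    class ElfReaderSketch {
     public:
      void ReadPadSegmentNote(uint32_t note_desc) {
        // desc == 1 requests extension (padding) of the LOAD mappings.
        should_pad_segments_ = (note_desc == 1);
      }
      bool should_pad_segments() const { return should_pad_segments_; }
     private:
      bool should_pad_segments_ = false;
    };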
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Bug: 316403210
Change-Id: I32c05cce741d221c3f92835ea09d932c40bdf8b1
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
BYPASS_INCLUSIVE_LANGUAGE_REASON="man" refers to the manual, not a person.
Bug: 318749472
Test: atest pthread on MTE enabled device
Test: atest memtag_stack_dlopen_test on MTE enabled device
Test: manual with NDK r26b built app with fsanitize=memtag-stack
Change-Id: Iac191c31b87ccbdc6a52c63ddd22e7b440354202
This CL is created as a best effort to migrate test targets to the new Android ownership model.
It is based on historical data from the repository history and insights from git blame.
Given the nature of this effort, there may be instances of incorrect attribution. If you find
incorrect or unnecessary attribution in this CL, please create a new CL to fix it.
For detailed guidelines and further information on the migration, please refer to the link below:
go/new-android-ownership-model
Bug: 304529413
Test: N/A
Change-Id: Ie36b2a3245d9901323affcc5e51dafbb87af9248