When the page_size < p_align of the ELF load segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to a ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
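Roughly, the idea looks like the sketch below. This is illustrative, not
bionic's verbatim code: the helper name, kPageSize handling, and segment
bookkeeping variables are stand-ins.
```
#include <sys/mman.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>

void* map_extended_segment(int fd, size_t file_size, size_t file_offset,
                           uintptr_t seg_page_start, uintptr_t seg_page_end,
                           uintptr_t next_seg_page_start, int prot) {
  size_t len = seg_page_end - seg_page_start;
  if (next_seg_page_start > seg_page_end) {
    // The gap the kernel would otherwise fill with a PROT_NONE VMA.
    size_t gap = next_seg_page_start - seg_page_end;
    // Never extend past the end of the file: touching such pages would
    // raise SIGBUS, so clamp the extension to the bytes the file has left.
    size_t file_left =
        file_size > file_offset + len ? file_size - (file_offset + len) : 0;
    len += std::min(gap, file_left);
  }
  return mmap(reinterpret_cast<void*>(seg_page_start), len, prot,
              MAP_PRIVATE | MAP_FIXED, fd, static_cast<off_t>(file_offset));
}
```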
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests
Test: atest -c bionic-unit-tests
Change-Id: I7150ed22af0723cc0b2d326c046e4e4a8b56ad09
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Some obfuscated ELFs may contain "empty" PT_NOTEs (p_memsz == 0).
Attempting to mmap these will cause an EINVAL failure since the requested
mapping size is zero.
Skip these program headers when parsing notes.
Also improve the failure log with arguments to the mmap syscall.
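A hedged sketch of the skip (structured after bionic's ElfReader, but the
names and layout here are illustrative):
```
#include <elf.h>
#include <link.h>
#include <cstddef>

void read_notes(const ElfW(Phdr)* phdr_table, size_t phdr_count) {
  for (size_t i = 0; i < phdr_count; ++i) {
    const ElfW(Phdr)* phdr = &phdr_table[i];
    if (phdr->p_type != PT_NOTE) continue;
    // Obfuscated ELFs may carry "empty" notes; mmap(..., 0, ...) would
    // fail with EINVAL, so skip them instead of aborting the load.
    if (phdr->p_memsz == 0) continue;
    // ... mmap [p_offset, p_offset + p_memsz) and parse the note ...
  }
}
```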
Test: Platinum Tests
Bug: 324468126
Change-Id: I7de4e55c6d221d555faabfcc33bb6997921dd022
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fix up for ELFs not loaded
by us (e.g. ELF loaded by the kernel) since these don't have the
extended VMA fix to begin with.
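An illustrative sketch of the fix-up only (the next-segment lookup and the
page helpers are hypothetical, and kPageSize is fixed at 4KB here just for
the example): widen the PROT_READ range applied for PT_GNU_RELRO to the
same boundary the padded LOAD segment was extended to, so mprotect() no
longer splits the VMA.
```
#include <sys/mman.h>
#include <link.h>

constexpr ElfW(Addr) kPageSize = 4096;  // illustrative
constexpr ElfW(Addr) page_start(ElfW(Addr) x) { return x & ~(kPageSize - 1); }
constexpr ElfW(Addr) page_end(ElfW(Addr) x) { return page_start(x + kPageSize - 1); }

int protect_relro(const ElfW(Phdr)* relro, ElfW(Addr) load_bias,
                  ElfW(Addr) next_load_page_start, bool should_pad_segments) {
  ElfW(Addr) start = page_start(relro->p_vaddr) + load_bias;
  ElfW(Addr) end = page_end(relro->p_vaddr + relro->p_memsz) + load_bias;
  if (should_pad_segments) {
    // Match the extended LOAD mapping. Only done for ELFs loaded by the
    // bionic loader; kernel-loaded ELFs have no extended VMAs to match.
    end = next_load_page_start + load_bias;
  }
  return mprotect(reinterpret_cast<void*>(start), end - start, PROT_READ);
}
```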
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: If1d99e8b872fcf7f6e0feb02ff33503029b63be3
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When the page_size < p_align of the ELF load segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to a ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: I3363172c02d5a4e2b2a39c44809e433a4716bc45
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
The PAD_SEGMENT note is used to optimize memory usage of the loader.
If the note parsing fails, skip the optimization and continue
loading the ELF normally.
Bug: 324309329
Bug: 316403210
Change-Id: I2aabc9f399816c53eb33ff303208a16022571edf
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
This reverts commit 79c9694c91.
Reason for revert: DroidMonitor: Potential culprit for Bug b/324348078 - verifying through ABTD before revert submission. This is part of the standard investigation process, and does not mean your CL will be reverted.
Change-Id: I32f7bc824900e18a7d53b025ffe3aaef0ee71802
kPageSize is needed to determine whether the loader needs to
extend VMAs to avoid gaps in the memory map when loading the ELF.
While at it, use kPageSize to generically deduce the PMD size and
replace the hardcoded 2MB PMD size.
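The deduction follows the usual 64-bit page-table arithmetic (one page full
of 8-byte PTEs, each mapping one page); treat the snippet below as a sketch
rather than bionic's exact code:
```
#include <cstddef>
#include <cstdint>

constexpr size_t kPageSize = 4096;  // 4096 here; 16384 on 16KB-page devices
// 4KB pages:  (4096 / 8)  * 4096  = 2MB   (the previously hardcoded value)
// 16KB pages: (16384 / 8) * 16384 = 32MB
constexpr size_t kPmdSize = (kPageSize / sizeof(uint64_t)) * kPageSize;
```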
Bug: 316403210
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Change-Id: Ie770320e629c38149dc75dae1deb1e429dd1acf2
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
ReadPadSegmentNote() finds the ELF note of type
NT_ANDROID_TYPE_PAD_SEGMENT and checks that the desc value
is 1, to decide whether the LOAD segment mappings should
be extended (padded) to avoid gaps.
Cache the result of this operation in ElfReader and soinfo
for use in the subsequent patch which handles the extension
of the segment mappings.
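The approximate shape of the check (note layout per the ELF spec; the
constant's value and the argument plumbing here are assumptions for
illustration, not the exact bionic code):
```
#include <link.h>
#include <string.h>

constexpr ElfW(Word) kNtAndroidTypePadSegment = 6;  // assumed value

bool should_pad_segments(const ElfW(Nhdr)* note, const char* name,
                         const ElfW(Word)* desc) {
  return note->n_type == kNtAndroidTypePadSegment &&
         strcmp(name, "Android") == 0 &&
         note->n_descsz == sizeof(ElfW(Word)) &&
         *desc == 1;  // cached in ElfReader/soinfo for the mapping step
}
```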
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Bug: 316403210
Change-Id: I32c05cce741d221c3f92835ea09d932c40bdf8b1
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
BYPASS_INCLUSIVE_LANGUAGE_REASON="man" refers to manual not person
Bug: 318749472
Test: atest pthread on MTE enabled device
Test: atest memtag_stack_dlopen_test on MTE enabled device
Test: manual with NDK r26b built app with fsanitize=memtag-stack
Change-Id: Iac191c31b87ccbdc6a52c63ddd22e7b440354202
This CL is created as a best effort to migrate test targets to the new Android ownership model.
It is based on historical data from the repository and insights from git blame.
Given the nature of this effort, there may be instances of incorrect attribution. If you find incorrect or unnecessary
attribution in this CL, please create a new CL to fix that.
For detailed guidelines and further information on the migration please refer to the link below,
go/new-android-ownership-model
Bug: 304529413
Test: N/A
Change-Id: Ie36b2a3245d9901323affcc5e51dafbb87af9248
The libraries are:
- libdl_static
- liblinker_main
- liblinker_malloc
- libsystemproperties
Availability to the runtime APEX was previously granted implicitly via a
baseline map in build/soong/apex/apex.go. Make it explicit in Android.bp.
Bug: 281077552
Test: m nothing
Change-Id: I029ae204f6cfef8c301a20b7c4294636b60b38be
Add support for 16KB page sizes in the test cases: page_start
and page_offset.
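For reference, what the two tested helpers compute, parameterized over the
page size (illustrative stand-ins for the macros under test):
```
#include <cstddef>
#include <cstdint>

constexpr uintptr_t page_start(uintptr_t addr, size_t page_size) {
  return addr & ~(page_size - 1);  // round down to the page boundary
}
constexpr uintptr_t page_offset(uintptr_t addr, size_t page_size) {
  return addr & (page_size - 1);   // byte offset within the page
}
// With 16KB pages: 0x14321 lies in the page starting at 0x14000.
static_assert(page_start(0x14321, 16384) == 0x14000, "");
static_assert(page_offset(0x14321, 16384) == 0x321, "");
```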
Bug: 315174209
Test: atest -c linker-unit-tests
Change-Id: Ibaae493a0930f3f2df390a6af6c8a988a682fe52
This is a no-op but will be used in upcoming scudo changes that allow
changing the depot size at process startup time, and as such we will no
longer be able to call __scudo_get_stack_depot_size in debuggerd.
We already did the equivalent change for the ring buffer size in
https://r.android.com/q/topic:%22scudo_ring_buffer_size%22
Bug: 309446692
Change-Id: Icdcf4cd0a3a486d1ea07a8c616cae776730e1047
The build breakage is now fixed by the current stable Clang, so the
workaround is no longer needed.
Test: presubmit
Change-Id: Ice2863844ff886f503d50fa7a006cde78d16492d
This patch adds the necessary bionic code for the linker to protect
global data using MTE.
The implementation is described in the MemtagABI addendum to the
AArch64 ELF ABI:
https://github.com/ARM-software/abi-aa/blob/main/memtagabielf64/memtagabielf64.rst
In summary, this patch includes:
1. When MTE globals is requested, the linker maps writable SHF_ALLOC
sections as anonymous pages with PROT_MTE (copying the file contents
into the anonymous mapping), rather than using a file-backed private
mapping. This is required as file-based mappings are not necessarily
backed by the kernel with tag-capable memory. For sections already
mapped by the kernel when the linker is invoked via PT_INTERP, we
unmap the contents, remap a PROT_MTE+anonymous mapping in its place,
and re-load the file contents from disk (see the sketch after this list).
2. When MTE globals is requested, the linker tags areas of global memory
(as defined in SHT_AARCH64_MEMTAG_GLOBALS_DYNAMIC) with random tags,
but ensuring that adjacent globals are never tagged using the same
memory tag (to provide deterministic overflow detection).
3. Changes to RELATIVE, ABS64, and GLOB_DAT relocations to load and
store tags in the right places. This ensures that the address tags are
materialized into the GOT entries as well. These changes are a
functional no-op to existing binaries and/or non-MTE capable hardware.
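A rough sketch of point 1 under stated assumptions: arm64 Linux exposing
PROT_MTE, with the helper name and the copy-via-heap approach purely
illustrative (the actual PT_INTERP path re-reads the contents from disk):
```
#include <sys/mman.h>
#include <stdlib.h>
#include <string.h>
#ifndef PROT_MTE
#define PROT_MTE 0x20  // arm64 UAPI value; assumed for illustration
#endif

bool remap_writable_section_with_mte(void* addr, size_t size) {
  void* stash = malloc(size);
  if (stash == nullptr) return false;
  memcpy(stash, addr, size);  // save the file-backed contents
  // MAP_FIXED atomically replaces the file-backed mapping with an
  // anonymous one the kernel can back with tag-capable memory.
  void* p = mmap(addr, size, PROT_READ | PROT_WRITE | PROT_MTE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
  if (p == MAP_FAILED) { free(stash); return false; }
  memcpy(p, stash, size);  // restore contents into the PROT_MTE mapping
  free(stash);
  return true;
}
```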
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Change-Id: Id7b1a925339b14949d5a8f607dd86928624bda0e
I'm not likely to see these on mobile hardware, but I do see them
on qemu, and this is annoying:
```
AT_??? (46) 0
AT_??? (47) 0
```
Test: treehugger
Change-Id: I2db3e8adaecf55bce7b5046e17ec1ef7b2e3b8ea
Looks like we all copy-pasted the same comment, and the original comment was wrong. These functions all return 0 on success and -1 on error.
Change-Id: I11e635e0895fe1fa941d69b721b8ad9ff5eb7f15
Test: N/A
Bug: N/A
Adds support for the dynamic entries to specify MTE enablement. This is
now the preferred way for dynamically linked executables to specify to
the loader what mode MTE should be in, and whether stack MTE should be
enabled. In the future, this is also needed for MTE globals support.
Leave the existing ELF note parsing as a backup option because dynamic
entries are not supported for fully static executables, and there's
still a bunch of glue sitting around in the build system and tests that
explicitly include the note. When -fsanitize=memtag* is specified, lld
will create the note implicitly (along with the new dynamic entries),
but at some point once we've cleaned up all the old references to the
note, we can remove the notegen from lld.
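A hedged sketch of consuming the new dynamic entries. The tag values are
taken from the AArch64 MemtagABI addendum; the handling itself is
illustrative, not the loader's verbatim logic:
```
#include <link.h>
#ifndef DT_AARCH64_MEMTAG_MODE
#define DT_AARCH64_MEMTAG_MODE 0x70000009
#endif
#ifndef DT_AARCH64_MEMTAG_STACK
#define DT_AARCH64_MEMTAG_STACK 0x7000000c
#endif

void parse_memtag_dynamic_entries(const ElfW(Dyn)* dyn,
                                  int* memtag_mode, bool* memtag_stack) {
  for (; dyn->d_tag != DT_NULL; ++dyn) {
    switch (dyn->d_tag) {
      case DT_AARCH64_MEMTAG_MODE:   // 0 = sync, 1 = async
        *memtag_mode = static_cast<int>(dyn->d_un.d_val);
        break;
      case DT_AARCH64_MEMTAG_STACK:  // nonzero => enable stack MTE
        *memtag_stack = dyn->d_un.d_val != 0;
        break;
    }
  }
}
```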
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Test: Build/boot the device under _fullmte.
Change-Id: I954b7e78afa5ff4274a3948b968cfad8eba94d88
LinkerBlockAllocator::alloc() always does a memset() anyway, and soinfo
contains non-POD types, so this seems both unnecessary and questionable.
Bug: http://b/298741930
Test: treehugger
Change-Id: I2b5a3858c8506c6086dfd59cbe729afb5b9ffa1c
__linker_init calls soinfo::~soinfo after __linker_init_post_relocation
has returned, and ~soinfo modifies global loader state
(g_soinfo_handles_map), so the loader lock must be held.
Use a variable with a destructor (~DlMutexUnlocker) to release the
loader lock after ~soinfo returns.
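A minimal sketch of the pattern (names are illustrative; bionic's real
class is DlMutexUnlocker): a scoped object declared before the soinfo is
destroyed after it, so the lock is still held while ~soinfo runs and is
released immediately afterwards.
```
#include <pthread.h>

class ScopedUnlocker {
 public:
  explicit ScopedUnlocker(pthread_mutex_t& m) : m_(m) {}
  ~ScopedUnlocker() { pthread_mutex_unlock(&m_); }
 private:
  pthread_mutex_t& m_;
};

pthread_mutex_t g_dl_mutex = PTHREAD_MUTEX_INITIALIZER;

void example() {
  pthread_mutex_lock(&g_dl_mutex);
  ScopedUnlocker unlocker(g_dl_mutex);  // declared first => destroyed last
  // soinfo temp(...);  // ~soinfo runs here, under the loader lock
}                       // then ~ScopedUnlocker releases g_dl_mutex
```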
It's not clear to me when the mutex should be acquired.
pthread_mutex_lock in theory can use __bionic_tls (ScopedTrace), but
that should only happen if the mutex is contended, and it won't be.
The loader constructors shouldn't be spawning a thread, and the vdso
shouldn't really have a constructor. ifunc relocations presumably don't
spawn a thread either. It probably doesn't matter much as long as it's
held before calling constructors in the executable or shared objects.
Bug: http://b/290318196
Test: treehugger
Test: bionic-unit-tests
Change-Id: I544ad65db07e5fe83315ad4489d77536d463a0a4