If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fixup for ELFs not loaded
by us (e.g. an ELF loaded by the kernel), since these don't have the
extended VMA fix to begin with.
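A minimal sketch of the idea, assuming hypothetical page_start()/page_end()
helpers and a next_load_start computed from the program headers; bionic's
actual implementation differs:
```
#include <link.h>
#include <sys/mman.h>
#include <unistd.h>

// Hypothetical helpers; bionic's page.h equivalents may differ.
static inline ElfW(Addr) page_start(ElfW(Addr) x) {
  const ElfW(Addr) ps = getpagesize();
  return x & ~(ps - 1);
}
static inline ElfW(Addr) page_end(ElfW(Addr) x) {
  return page_start(x + getpagesize() - 1);
}

// Extend the RELRO mprotect() range up to the start of the next LOAD segment,
// but only for ELFs mapped by the bionic loader; a kernel-loaded ELF has no
// extended VMAs to match.
static int protect_relro(ElfW(Addr) relro_start, ElfW(Addr) relro_end,
                         ElfW(Addr) next_load_start, bool loaded_by_bionic) {
  ElfW(Addr) start = page_start(relro_start);
  ElfW(Addr) end = loaded_by_bionic ? page_start(next_load_start)
                                    : page_end(relro_end);
  return mprotect(reinterpret_cast<void*>(start), end - start, PROT_READ);
}
```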
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: If1d99e8b872fcf7f6e0feb02ff33503029b63be3
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When the page_size < p_align of the ELF load segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to a ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
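A rough sketch of the extension logic, with illustrative names rather than
bionic's actual helpers; the key point is that the segment end only grows
toward the next segment's page-aligned start:
```
#include <link.h>
#include <unistd.h>

// Illustrative helper: page-align down using the runtime page size.
static inline ElfW(Addr) page_start(ElfW(Addr) x) {
  const ElfW(Addr) ps = getpagesize();
  return x & ~(ps - 1);
}

// Grow this LOAD segment's mapped end up to the start of the next LOAD
// segment so the loader never needs a separate gap VMA. Callers must not
// read or zero bytes of the extension whose file offset would lie past the
// end of the file.
static void extend_load_segment(ElfW(Addr)* seg_end, const ElfW(Phdr)* next_load,
                                ElfW(Addr) load_bias) {
  if (next_load == nullptr) return;  // last LOAD segment: nothing to pad up to
  ElfW(Addr) next_start = page_start(next_load->p_vaddr) + load_bias;
  if (next_start > *seg_end) {
    *seg_end = next_start;
  }
}
```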
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: I3363172c02d5a4e2b2a39c44809e433a4716bc45
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
kPageSize is needed to determine whether the loader needs to
extend VMAs to avoid gaps in the memory map when loading the ELF.
While at it, use kPageSize to generically deduce the PMD size and
replace the hardcoded 2MB PMD size.
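For illustration, a PMD covers one full page of 8-byte page-table entries, so
its size can be deduced from kPageSize instead of hard-coding 2MB. A sketch,
assuming 4KiB pages for the static_assert:
```
#include <cstddef>
#include <cstdint>

static constexpr size_t kPageSize = 4096;  // illustrative value
// One PMD maps a page full of 8-byte PTEs, each covering one page.
static constexpr size_t kPmdSize = (kPageSize / sizeof(uint64_t)) * kPageSize;
static_assert(kPmdSize == 2 * 1024 * 1024, "4KiB pages imply a 2MiB PMD");
```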
Bug: 316403210
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Change-Id: Ie770320e629c38149dc75dae1deb1e429dd1acf2
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
ReadPadSegmentNote() finds the elf note of type
NT_ANDROID_TYPE_PAD_SEGMENT and checks that the desc value
is 1, to decide whether the LOAD segment mappings should
be extended (padded) to avoid gaps.
Cache the result of this operation in ElfReader and soinfo
for use in the subsequent patch which handles the extension
of the segment mappings.
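A simplified sketch of the note check; the note type value and the helper
name are assumptions for illustration, not bionic's exact definitions:
```
#include <cstring>
#include <link.h>

// Assumed value of NT_ANDROID_TYPE_PAD_SEGMENT; check bionic's elf note
// definitions for the real one.
constexpr unsigned kNtAndroidTypePadSegment = 6;

// Returns true if this note asks the loader to pad (extend) LOAD segments.
static bool note_requests_pad_segments(const ElfW(Nhdr)* nhdr, const char* name,
                                       const void* desc) {
  return nhdr->n_type == kNtAndroidTypePadSegment &&
         nhdr->n_namesz == sizeof("Android") &&
         strcmp(name, "Android") == 0 &&
         nhdr->n_descsz == sizeof(ElfW(Word)) &&
         *static_cast<const ElfW(Word)*>(desc) == 1;
}
```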
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Bug: 316403210
Change-Id: I32c05cce741d221c3f92835ea09d932c40bdf8b1
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
BYPASS_INCLUSIVE_LANGUAGE_REASON="man" refers to the manual, not a person.
Bug: 318749472
Test: atest pthread on MTE enabled device
Test: atest memtag_stack_dlopen_test on MTE enabled device
Test: manual with NDK r26b built app with fsanitize=memtag-stack
Change-Id: Iac191c31b87ccbdc6a52c63ddd22e7b440354202
This CL was created as a best effort to migrate test targets to the new
Android ownership model. It is based on historical data from the repository
history and insights from git blame.
Given the nature of this effort, there may be instances of incorrect
attribution. If you find incorrect or unnecessary attribution in this CL,
please create a new CL to fix it.
For detailed guidelines and further information on the migration, please
refer to the link below:
go/new-android-ownership-model
Bug: 304529413
Test: N/A
Change-Id: Ie36b2a3245d9901323affcc5e51dafbb87af9248
The libraries are:
- libdl_static
- liblinker_main
- liblinker_malloc
- libsystemproperties
Their availability to the runtime APEX was previously granted implicitly via
a baseline map in build/soong/apex/apex.go. Make this explicit in Android.bp.
Bug: 281077552
Test: m nothing
Change-Id: I029ae204f6cfef8c301a20b7c4294636b60b38be
Add support for 16KB page sizes in the page_start and page_offset
test cases.
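For context, the expressions under test look roughly like this when written
against the runtime page size (illustrative helpers, not the exact test code):
```
#include <unistd.h>
#include <cstdint>

// Round an address down to the start of its page.
static inline uintptr_t page_start(uintptr_t addr) {
  const uintptr_t ps = sysconf(_SC_PAGESIZE);
  return addr & ~(ps - 1);
}

// Offset of an address within its page.
static inline uintptr_t page_offset(uintptr_t addr) {
  const uintptr_t ps = sysconf(_SC_PAGESIZE);
  return addr & (ps - 1);
}
```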
Bug: 315174209
Test: atest -c linker-unit-tests
Change-Id: Ibaae493a0930f3f2df390a6af6c8a988a682fe52
This is a no-op but will be used in upcoming scudo changes that allow
changing the depot size at process startup time, after which we will no
longer be able to call __scudo_get_stack_depot_size in debuggerd.
We already did the equivalent change for the ring buffer size in
https://r.android.com/q/topic:%22scudo_ring_buffer_size%22
Bug: 309446692
Change-Id: Icdcf4cd0a3a486d1ea07a8c616cae776730e1047
The build breakage is now fixed by the current stable Clang; the workaround
is no longer needed.
Test: presubmit
Change-Id: Ice2863844ff886f503d50fa7a006cde78d16492d
This patch adds the necessary bionic code for the linker to protect
global data using MTE.
The implementation is described in the MemtagABI addendum to the
AArch64 ELF ABI:
https://github.com/ARM-software/abi-aa/blob/main/memtagabielf64/memtagabielf64.rst
In summary, this patch includes:
1. When MTE globals are requested, the linker maps writable SHF_ALLOC
sections as anonymous pages with PROT_MTE (copying the file contents
into the anonymous mapping), rather than using a file-backed private
mapping. This is required as file-based mappings are not necessarily
backed by the kernel with tag-capable memory. For sections already
mapped by the kernel when the linker is invoked via PT_INTERP, we
unmap the contents, remap a PROT_MTE+anonymous mapping in its place,
and re-load the file contents from disk (see the sketch after this list).
2. When MTE globals are requested, the linker tags areas of global memory
(as defined in SHT_AARCH64_MEMTAG_GLOBALS_DYNAMIC) with random tags,
ensuring that adjacent globals are never tagged using the same
memory tag (to provide deterministic overflow detection).
3. Changes to RELATIVE, ABS64, and GLOB_DAT relocations to load and
store tags in the right places. This ensures that the address tags are
materialized into the GOT entries as well. These changes are a
functional no-op to existing binaries and/or non-MTE capable hardware.
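A hedged sketch of step 1; names and error handling are illustrative,
PROT_MTE is the AArch64-specific protection flag, and file_contents must come
from a copy that is not the mapping being replaced:
```
#include <cstring>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20  // AArch64-only; illustrative fallback definition
#endif

// Replace a writable, file-backed mapping with an anonymous PROT_MTE mapping
// and re-populate it with the section's contents. Tagging the individual
// globals (step 2) happens afterwards.
static void* remap_section_with_mte(void* addr, size_t size,
                                    const void* file_contents) {
  // MAP_FIXED atomically replaces the existing file-backed pages.
  void* p = mmap(addr, size, PROT_READ | PROT_WRITE | PROT_MTE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
  if (p == MAP_FAILED) return nullptr;
  memcpy(p, file_contents, size);  // reload the section contents
  return p;
}
```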
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Change-Id: Id7b1a925339b14949d5a8f607dd86928624bda0e
I'm unlikely to see these on mobile hardware, but I do see them
on qemu, and this is annoying:
```
AT_??? (46) 0
AT_??? (47) 0
```
Test: treehugger
Change-Id: I2db3e8adaecf55bce7b5046e17ec1ef7b2e3b8ea
Looks like we all copy-pasted the same comment, and the original comment was wrong: these functions all return 0 on success and -1 on error.
Change-Id: I11e635e0895fe1fa941d69b721b8ad9ff5eb7f15
Test: N/A
Bug: N/A
Adds support for the dynamic entries to specify MTE enablement. This is
now the preferred way for dynamically linked executables to specify to
the loader what mode MTE should be in, and whether stack MTE should be
enabled. In the future, this is also needed for MTE globals support.
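A sketch of reading the new entries from the dynamic section; the d_tag
values are taken from the MemtagABI addendum and should be verified against
the real headers:
```
#include <link.h>

// MemtagABI dynamic tags (values per the addendum; verify against <elf.h>).
constexpr ElfW(Sxword) kDtAarch64MemtagMode  = 0x70000009;
constexpr ElfW(Sxword) kDtAarch64MemtagStack = 0x7000000c;

struct MemtagConfig {
  bool has_mode = false;
  unsigned long mode = 0;
  bool stack = false;
};

static MemtagConfig read_memtag_dynamic_entries(const ElfW(Dyn)* dyn) {
  MemtagConfig cfg;
  for (const ElfW(Dyn)* d = dyn; d->d_tag != DT_NULL; ++d) {
    if (d->d_tag == kDtAarch64MemtagMode) {
      cfg.has_mode = true;
      cfg.mode = d->d_un.d_val;
    } else if (d->d_tag == kDtAarch64MemtagStack) {
      cfg.stack = d->d_un.d_val != 0;
    }
  }
  return cfg;
}
```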
Leave the existing ELF note parsing as a backup option because dynamic
entries are not supported for fully static executables, and there's
still a bunch of glue sitting around in the build system and tests that
explicitly include the note. When -fsanitize=memtag* is specified, lld
will create the note implicitly (along with the new dynamic entries),
but at some point once we've cleaned up all the old references to the
note, we can remove the notegen from lld.
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Test: Build/boot the device under _fullmte.
Change-Id: I954b7e78afa5ff4274a3948b968cfad8eba94d88
LinkerBlockAllocator::alloc() always does a memset() anyway, and soinfo
contains non-POD types, so this seems both unnecessary and questionable.
Bug: http://b/298741930
Test: treehugger
Change-Id: I2b5a3858c8506c6086dfd59cbe729afb5b9ffa1c
__linker_init calls soinfo::~soinfo after __linker_init_post_relocation
has returned, and ~soinfo modifies global loader state
(g_soinfo_handles_map), so the loader lock must be held.
Use a variable with a destructor (~DlMutexUnlocker) to release the
loader lock after ~soinfo returns.
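A minimal sketch of the RAII idea (the real DlMutexUnlocker may differ):
```
#include <pthread.h>

// Releases the loader lock when it goes out of scope, i.e. after ~soinfo has
// finished touching global loader state such as g_soinfo_handles_map.
class DlMutexUnlocker {
 public:
  explicit DlMutexUnlocker(pthread_mutex_t& m) : mutex_(m) {}
  ~DlMutexUnlocker() { pthread_mutex_unlock(&mutex_); }

 private:
  pthread_mutex_t& mutex_;
};
```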
It's not clear to me when the mutex should be acquired.
pthread_mutex_lock in theory can use __bionic_tls (ScopedTrace), but
that should only happen if the mutex is contended, and it won't be.
The loader constructors shouldn't be spawning a thread, and the vdso
shouldn't really have a constructor. ifunc relocations presumably don't
spawn a thread either. It probably doesn't matter much as long as it's
held before calling constructors in the executable or shared objects.
Bug: http://b/290318196
Test: treehugger
Test: bionic-unit-tests
Change-Id: I544ad65db07e5fe83315ad4489d77536d463a0a4
The block allocator mmaps an arbitrarily sized area ("LinkerBlockAllocatorPage")
of 4kB * 100 in order to reduce mmap syscalls and kernel VMA
memory usage.
This works fine for a 16kB page size, since 100 4kB pages are equal to 25
16kB pages.
But for a 64kB page size, the area will include a partial page (a 16kB-sized
region at the end).
Change the size to 96 4kB pages (6 64kB pages); this will work for all
page sizes supported on aarch64.
Also remove the use of PAGE_SIZE.
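The arithmetic, as a sketch (the constant name is illustrative):
```
#include <cstddef>

// 96 4KiB pages = 24 16KiB pages = 6 64KiB pages, so the allocation is a
// whole number of pages for every page size supported on aarch64.
constexpr size_t kAllocateSize = 4096 * 96;
static_assert(kAllocateSize % (4 * 1024) == 0, "must be a multiple of 4KiB");
static_assert(kAllocateSize % (16 * 1024) == 0, "must be a multiple of 16KiB");
static_assert(kAllocateSize % (64 * 1024) == 0, "must be a multiple of 64KiB");
```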
Bug: 294438799
Test: atest -c linker-unit-tests
Change-Id: I7782406d1470183097ce9391c9b70b177e1750e6
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
The new libc++_static.a platform prebuilt prefers to use this C11
function rather than the non-standard memalign.
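For reference, the C11 replacement for memalign is aligned_alloc; unlike
memalign, the requested size must be a multiple of the alignment, and the
pointer is released with free():
```
#include <cstdlib>

int main() {
  void* p = aligned_alloc(64, 256);  // 256 is a multiple of the 64-byte alignment
  free(p);
}
```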
Bug: http://b/175635923
Test: treehugger
Change-Id: I9c181568ba69c0508e400baac2cbb42b169c9768
A constructor could spawn a thread, which could call into the loader,
so the global loader mutex must be held.
Bug: http://b/290318196
Test: treehugger
Change-Id: I7a5249898a11fbc62d1ecdb85b24017a42a4b179
To enable experiments with non-4KiB page sizes, introduce
an inline page_size() function that will either return the runtime
page size (if PAGE_SIZE is not 4096) or a constant 4096 (elsewhere).
This should ensure that there are no changes to the generated code on
unaffected platforms.
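A sketch of the shape of such a helper, assuming AT_PAGESZ as the runtime
source; bionic's actual page_size() may differ:
```
#include <stddef.h>
#include <sys/auxv.h>

inline size_t page_size() {
#if defined(PAGE_SIZE)
  // Platforms with a fixed page size fold this to a constant, so the
  // generated code is unchanged there.
  return PAGE_SIZE;
#else
  static const size_t page_size = getauxval(AT_PAGESZ);
  return page_size;
#endif
}
```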
Test: source build/envsetup.sh
lunch aosp_cf_arm64_16k_phone-userdebug
m -j32 installclean
m -j32
Test: launch_cvd \
-kernel_path /path/to/out/android14-5.15/dist/Image \
-initramfs_path /path/to/out/android14-5.15/dist/initramfs.img \
-userdata_format=ext4
Bug: 277272383
Bug: 230790254
Change-Id: Ic0ed98b67f7c6b845804b90a4e16649f2fc94028
We don't actually care about the length of this jump, and lld will relax
it to a jal when possible anyway. Better to have people copy & paste
call and tail than jal and j.
Test: treehugger
Change-Id: I889044b95fbb5567189a0d6ef31f81df0e0383cd
This mode instructs the linker to search for libraries in hwasan
subdirectories of all library search paths. These subdirectories are set up
to contain a hwasan-enabled copy of libc, which is needed for HWASan
programs to operate. There are two ways this mode can be enabled:
* for native binaries: by using the linker_hwasan64 symlink as their
interpreter
* for apps: by setting the LD_HWASAN environment variable in wrap.sh
Bug: 276930343
Change-Id: I0f4117a50091616f26947fbe37a28ee573b97ad0
As of https://reviews.llvm.org/D143769, binaries (with -fsanitize=memtag-*)
have DT_AARCH64_MEMTAG_* dynamic entries, as per the AArch64 MemtagABI.
Android uses an OS-specific ELF note for MTE config, but we should
migrate to the new thing (while preserving backwards compatibility).
Without actually doing the migration right now, just handle these new
entries. Otherwise, you get a whole bunch of logspam about the
unrecognised dynamic entries.
Bug: 274032544
Test: Build android, don't get logspam.
Change-Id: I5c8b59f77a0058e5b93335e269d558a5014f2260
GWP-ASan's recoverable mode was landed upstream in
https://reviews.llvm.org/D140173.
This mode allows a use-after-free or buffer-overflow bug to be detected by
GWP-ASan and a crash report to be dumped, but then GWP-ASan (through the
preCrashReport() and postCrashReportRecoverableOnly() hooks) patches up the
memory so that the process can continue in spite of the memory safety bug.
This is desirable, as it allows us to consider migrating non-system apps
from opt-in GWP-ASan to opt-out GWP-ASan. The major concern was "if we
make it opt-out, then bad apps will start crashing". If we don't crash,
problem solved :). Obviously, we'll need to do this with an amount of
process sampling to mitigate against the 70KiB memory overhead.
The biggest problem is that the debuggerd signal handler isn't the first
signal handler for apps, it's the sigchain handler inside of libart.
Clearly, the sigchain handler needs to ask us whether the crash is
GWP-ASan's fault, and if so, please patch up the allocator. Because of
linker namespace restrictions, libart can't directly ask the linker
(which is where debuggerd lies), so we provide a proxy function in libc.
Test: Build the platform, run sanitizer-status and various test apps
with recoverable gwp-asan. Assert that it doesn't crash, and we get a
debuggerd report.
Bug: 247012630
Change-Id: I86d5e27a9ca5531c8942e62647fd377c3cd36dfd
Enable linking on a system without /proc mounted by falling back to
reading the executable path from argv[0] when /proc/self/exe can't be
found.
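Roughly the fallback being described (illustrative helper, not the loader's
exact code):
```
#include <unistd.h>

// Prefer /proc/self/exe; fall back to argv[0] when /proc isn't mounted.
static const char* get_executable_path(const char* argv0, char* buf,
                                       size_t buf_size) {
  ssize_t len = readlink("/proc/self/exe", buf, buf_size - 1);
  if (len > 0) {
    buf[len] = '\0';
    return buf;
  }
  return argv0;
}
```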
Bug: 254835242
Change-Id: I0735e873fa4e2f439688722c4a846fb70ff398a5
This is a no-op but will be used in upcoming scudo changes that allow
changing the buffer size at process startup time, after which we will no
longer be able to call __scudo_get_ring_buffer_size in debuggerd.
Bug: 263287052
Change-Id: I18f166fc136ac8314d748eb80a806defcc25c9fd
When loading a dynamic library, reserving the address range can succeed
while a later step, such as loading the segments, fails. In that case the
reserved memory is not released in time, which leaks the mapping.
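One way to express the fix, sketched as an RAII guard (names are
illustrative; the actual change may simply munmap in the error paths):
```
#include <cstddef>
#include <sys/mman.h>

// Owns the reserved address range until the library finishes loading; if a
// later step (e.g. loading segments) fails, the destructor unmaps it.
struct AddressSpaceReservation {
  void* start = nullptr;
  size_t size = 0;
  ~AddressSpaceReservation() {
    if (start != nullptr) munmap(start, size);
  }
  void release() { start = nullptr; }  // call once the load succeeds
};
```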
Bug: https://issuetracker.google.com/issues/263713888
Change-Id: I556ee02e37db5259df0b6c7178cd9a076dab9725
Signed-off-by: huangchaochao <huangchaochao@bytedance.com>
android_namespace_link_t::shared_lib_sonames_ is an unordered_set<string>.
When initializing, it's copied a few times unnecessarily:
- when add_linked_namespace is called
- when android_namespace_link_t() is called
- when push_back is called.
Now, it's moved around after the initial creation.
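A sketch of the move-based flow (simplified types, illustrative signatures):
```
#include <string>
#include <unordered_set>
#include <utility>
#include <vector>

struct android_namespace_link_t {
  android_namespace_link_t(std::string name, std::unordered_set<std::string> sonames)
      : name_(std::move(name)), shared_lib_sonames_(std::move(sonames)) {}
  std::string name_;
  std::unordered_set<std::string> shared_lib_sonames_;
};

struct android_namespace_t {
  // Take the soname set by value and move it all the way into the link, so
  // the unordered_set is constructed once rather than copied at each hop.
  void add_linked_namespace(std::string name, std::unordered_set<std::string> sonames) {
    linked_namespaces_.emplace_back(std::move(name), std::move(sonames));
  }
  std::vector<android_namespace_link_t> linked_namespaces_;
};
```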
Bug: n/a
Test: atest --test-mapping .
Change-Id: I283954bb0c0bbf94ebd74407137f492e08fd41bd