Previously, on RISC-V, the static hwcap variable in
__bionic_call_ifunc_resolver resulted in a call to __cxa_guard_acquire,
which used a GOT access to __stack_chk_guard, but the GOT hadn't yet
been initialized. Fix this problem by applying RELR relocations
earlier.
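For context, a minimal C++ sketch of the pattern that triggers the bug
(illustrative only, not the actual resolver code): a function-local
static with a dynamic initializer compiles into a
__cxa_guard_acquire()/__cxa_guard_release() pair, and with stack
protectors enabled that code reaches __stack_chk_guard through the GOT.

  #include <sys/auxv.h>

  // Illustrative: the one-time initialization below is wrapped in
  // __cxa_guard_acquire()/__cxa_guard_release() by the compiler. If it
  // runs before RELR relocations are applied, the GOT-based access to
  // __stack_chk_guard reads an uninitialized slot.
  static unsigned long hwcap_once() {
    static unsigned long hwcap = getauxval(AT_HWCAP);  // guarded init
    return hwcap;
  }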
Bug: http://b/330725041
Test: lunch aosp_cf_riscv64_phone-trunk_staging-eng; boot device
Change-Id: Ib10fdcc0d2c1b875eba6bc5e0115a6768d6f25ee
arm32/arm64: Previously, the loader miscalculated a negative value for
offset_bionic_tcb_ when the executable's alignment was greater than
(8 * sizeof(void*)). The process then tended to crash.
riscv: Previously, the loader didn't propagate the p_align field of the
PT_TLS segment into StaticTlsLayout::alignment_, so high alignment
values were ignored.
__bionic_check_tls_alignment: Stop capping alignment at page_size().
There is no need to cap it, and the uncapped value is necessary for
correctly positioning the TLS segment relative to the thread pointer
(TP) for ARM and x86. The uncapped value is now used for computing
static TLS layout, but only a page of alignment is actually provided:
* static TLS: __allocate_thread_mapping uses mmap, which provides only
a page's worth of alignment
* dynamic TLS: BionicAllocator::memalign caps align to page_size()
* There were no callers to StaticTlsLayout::alignment(), so remove it.
Allow PT_TLS.p_align to be 0: quietly convert it to 1.
For static TLS, ensure that the address of a TLS block is congruent to
p_vaddr, modulo p_align. That is, ensure this formula holds:
(&tls_block % p_align) == (p_vaddr % p_align)
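A minimal sketch of a placement computation that satisfies this formula
(align_up and the parameter names are illustrative, not the actual
bionic helpers):

  #include <stdint.h>

  // align must be a power of two.
  static uintptr_t align_up(uintptr_t v, uintptr_t align) {
    return (v + align - 1) & ~(align - 1);
  }

  // Returns the lowest address >= base that is congruent to
  // (p_vaddr % p_align) modulo p_align. Assumes base >= skew.
  static uintptr_t place_tls_block(uintptr_t base, uintptr_t p_vaddr,
                                   uintptr_t p_align) {
    if (p_align == 0) p_align = 1;  // PT_TLS.p_align of 0 means 1
    uintptr_t skew = p_vaddr % p_align;
    return align_up(base - skew, p_align) + skew;
  }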
For dynamic TLS, a TLS block is still allocated congruent to 0 modulo
p_align. Fixing dynamic TLS congruence is mostly a separate problem
from fixing static TLS congruence, and requires changing the dynamic
TLS allocator and/or DTV structure, so it should be fixed in a
later follow-up commit.
Typically (p_vaddr % p_align) is zero, but it's currently possible to
get a non-zero value with LLD: when .tbss has greater than page
alignment, but .tdata does not, LLD can produce a TLS segment where
(p_vaddr % p_align) is non-zero. LLD calculates TP offsets assuming
the loader will align the segment using (p_vaddr % p_align).
Previously, Bionic and LLD disagreed on the offsets from the TP to
the executable's TLS variables.
Add unit tests for StaticTlsLayout in bionic-unit-tests-static.
See also:
* https://github.com/llvm/llvm-project/issues/40872
* https://sourceware.org/bugzilla/show_bug.cgi?id=24606
* https://reviews.llvm.org/D61824
* https://reviews.freebsd.org/D31538
Bug: http://b/133354825
Bug: http://b/328844725
Bug: http://b/328844839
Test: bionic-unit-tests bionic-unit-tests-static
Change-Id: I8850c32ff742a45d3450d8fc39075c10a1e11000
The loader doesn't currently support using TLS within itself.
Previously, if a TLS segment was accidentally linked into the loader,
then the loader's `soinfo_tls* tls_` field would be initialized with a
valid TlsSegment, but the loader soinfo wouldn't be registered with
linker_tls.cpp, so the module ID would be 0. (The first valid module ID
is 1.)
The result was architecture-dependent. On x86, everything worked until
the first TLS access, which segfaulted. On arm64, relocating TLSDESC
hit a CHECK() failure on the invalid module ID.
Make the loader more robust:
* Abort in the loader if it detects that it has a TLS segment.
* For R_GENERIC_TLS_DTPMOD, verify that a module ID is valid before
writing it.
Bug: none
Test: manually add a thread_local variable to the loader
Test: bionic-unit-tests
Change-Id: I93c17ca65df4af2d46288957a0e483b0e2b13862
If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fix up for ELFs not loaded
by us (e.g. ELF loaded by the kernel) since these don't have the
extended VMA fix to begin with.
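A rough sketch of the fix (names are illustrative, not the actual
linker code):

  #include <stdint.h>
  #include <sys/mman.h>

  // If the loader extended the RELRO segment's VMA up to the start of
  // the next segment, mprotect that whole extended range read-only so
  // the protection boundary matches the extended mapping and no RW VMA
  // is split off. Kernel-mapped ELFs were not extended, so their range
  // is left as-is.
  int protect_relro(uintptr_t relro_start, uintptr_t relro_end,
                    uintptr_t next_seg_start, bool mapped_by_loader) {
    uintptr_t end = mapped_by_loader ? next_seg_start : relro_end;
    return mprotect(reinterpret_cast<void*>(relro_start),
                    end - relro_start, PROT_READ);
  }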
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests
Test: atest -c bionic-unit-tests
Change-Id: I9cd04574190ef4c727308363a8cb1120c36e53e0
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When page_size < p_align for an ELF LOAD segment, the loader
ends up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to a ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
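A hedged sketch of the length computation (parameter names are
illustrative, not the actual ElfReader members):

  #include <stddef.h>
  #include <stdint.h>

  // The extended mapping covers [seg_page_start, next_seg_page_start),
  // unless that would map file contents past the end of the file, in
  // which case the extension is clamped to what the file can back.
  size_t extended_map_length(uintptr_t seg_page_start,
                             uintptr_t next_seg_page_start,
                             uint64_t seg_file_offset, uint64_t file_size) {
    size_t len = next_seg_page_start - seg_page_start;
    uint64_t max_file_backed = file_size - seg_file_offset;
    return len <= max_file_backed ? len
                                  : static_cast<size_t>(max_file_backed);
  }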
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests
Test: atest -c bionic-unit-tests
Change-Id: I7150ed22af0723cc0b2d326c046e4e4a8b56ad09
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Also enable stack MTE if the main binary links in a library that needs it.
Otherwise the following is possible:
1. a binary doesn't require stack MTE, but links in libraries that use
stg on the stack
2. that binary later dlopens a library that requires stack MTE, and our
logic in dlopen remaps the stacks with MTE
3. the libraries from step 1 now have tagged pointers with missing tags
in memory, so things go wrong
This reverts commit f53e91cc81.
Reason for revert: Fixed problem detected in b/324568991
Test: atest memtag_stack_dlopen_test with MTE enabled
Test: check crash is gone on fullmte build
Change-Id: I4a93f6814a19683c3ea5fe1e6d455df5459d31e1
If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fix up for ELFs not loaded
by us (e.g. ELF loaded by the kernel) since these don't have the
extended VMA fix to begin with.
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: If1d99e8b872fcf7f6e0feb02ff33503029b63be3
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When page_size < p_align for an ELF LOAD segment, the loader
ends up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to a ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: I3363172c02d5a4e2b2a39c44809e433a4716bc45
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
ReadPadSegmentNote() finds the ELF note of type
NT_ANDROID_TYPE_PAD_SEGMENT and checks that the desc value
is 1, to decide whether the LOAD segment mappings should
be extended (padded) to avoid gaps.
Cache the result of this operation in ElfReader and soinfo
for use in the subsequent patch which handles the extension
of the segment mappings.
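A hedged sketch of the per-note check (the numeric value of
NT_ANDROID_TYPE_PAD_SEGMENT comes from bionic's headers and is passed
in here rather than reproduced):

  #include <elf.h>
  #include <stdint.h>
  #include <string.h>

  // Returns true if this PT_NOTE entry is an "Android" note of the
  // pad-segment type with desc == 1, i.e. the LOAD mappings should be
  // extended (padded) to avoid gaps.
  bool note_requests_pad_segments(const Elf64_Nhdr* nhdr, const char* name,
                                  const uint32_t* desc,
                                  uint32_t pad_segment_note_type) {
    return nhdr->n_type == pad_segment_note_type &&
           nhdr->n_namesz == sizeof("Android") &&
           memcmp(name, "Android", sizeof("Android")) == 0 &&
           nhdr->n_descsz == sizeof(uint32_t) &&
           *desc == 1;
  }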
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Bug: 316403210
Change-Id: I32c05cce741d221c3f92835ea09d932c40bdf8b1
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
BYPASS_INCLUSIVE_LANGUAGE_REASON="man" refers to the manual, not a person
Bug: 318749472
Test: atest pthread on MTE enabled device
Test: atest memtag_stack_dlopen_test on MTE enabled device
Test: manual with NDK r26b built app with fsanitize=memtag-stack
Change-Id: Iac191c31b87ccbdc6a52c63ddd22e7b440354202
This patch adds the necessary bionic code for the linker to protect
global data using MTE.
The implementation is described in the MemtagABI addendum to the
AArch64 ELF ABI:
https://github.com/ARM-software/abi-aa/blob/main/memtagabielf64/memtagabielf64.rst
In summary, this patch includes:
1. When MTE globals are requested, the linker maps writable SHF_ALLOC
sections as anonymous pages with PROT_MTE (copying the file contents
into the anonymous mapping), rather than using a file-backed private
mapping. This is required because file-based mappings are not
necessarily backed by the kernel with tag-capable memory. For sections
already mapped by the kernel when the linker is invoked via PT_INTERP,
we unmap the contents, remap a PROT_MTE anonymous mapping in its place,
and reload the file contents from disk (see the sketch after this list).
2. When MTE globals are requested, the linker tags areas of global
memory (as defined in SHT_AARCH64_MEMTAG_GLOBALS_DYNAMIC) with random
tags, while ensuring that adjacent globals are never tagged using the
same memory tag (to provide deterministic overflow detection).
3. Changes to the RELATIVE, ABS64, and GLOB_DAT relocations to load and
store tags in the right places. This ensures that the address tags are
materialized into the GOT entries as well. These changes are a
functional no-op for existing binaries and non-MTE-capable hardware.
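A hedged sketch of the remap in step 1 (PROT_MTE is the arm64 Linux
flag; the fallback define below is an assumption for older headers):

  #include <string.h>
  #include <sys/mman.h>

  #ifndef PROT_MTE
  #define PROT_MTE 0x20  // arm64 Linux value; assumption if undefined
  #endif

  // Replace a file-backed writable mapping with an anonymous PROT_MTE
  // mapping holding the same contents, since file-backed pages are not
  // necessarily tag-capable.
  void* remap_with_mte(void* addr, size_t size) {
    void* scratch = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (scratch == MAP_FAILED) return MAP_FAILED;
    memcpy(scratch, addr, size);  // save the section contents
    void* tagged = mmap(addr, size, PROT_READ | PROT_WRITE | PROT_MTE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (tagged != MAP_FAILED) memcpy(tagged, scratch, size);  // restore
    munmap(scratch, size);
    return tagged;
  }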
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Change-Id: Id7b1a925339b14949d5a8f607dd86928624bda0e
Adds support for dynamic entries that specify MTE enablement. This is
now the preferred way for dynamically linked executables to specify to
the loader what mode MTE should be in, and whether stack MTE should be
enabled. In future, this is also needed for MTE globals support.
Leave the existing ELF note parsing as a backup option because dynamic
entries are not supported for fully static executables, and there's
still a bunch of glue sitting around in the build system and tests that
explicitly include the note. When -fsanitize=memtag* is specified, lld
will create the note implicitly (along with the new dynamic entries),
but at some point once we've cleaned up all the old references to the
note, we can remove the notegen from lld.
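A hedged sketch of reading the new entries (the d_tag values follow the
AArch64 MemtagABI; treat them as assumptions if your elf.h lacks the
defines):

  #include <elf.h>

  #ifndef DT_AARCH64_MEMTAG_MODE
  #define DT_AARCH64_MEMTAG_MODE  0x70000009
  #define DT_AARCH64_MEMTAG_STACK 0x7000000c
  #endif

  struct MemtagConfig {
    bool has_mode = false;
    long mode = 0;       // per the ABI: 0 = synchronous, 1 = asynchronous
    bool stack = false;  // whether stacks should be remapped with PROT_MTE
  };

  MemtagConfig read_memtag_entries(const Elf64_Dyn* dyn) {
    MemtagConfig cfg;
    for (; dyn->d_tag != DT_NULL; ++dyn) {
      if (dyn->d_tag == DT_AARCH64_MEMTAG_MODE) {
        cfg.has_mode = true;
        cfg.mode = dyn->d_un.d_val;
      } else if (dyn->d_tag == DT_AARCH64_MEMTAG_STACK) {
        cfg.stack = dyn->d_un.d_val != 0;
      }
    }
    return cfg;
  }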
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Test: Build/boot the device under _fullmte.
Change-Id: I954b7e78afa5ff4274a3948b968cfad8eba94d88
To enable experiments with non-4KiB page sizes, introduce
an inline page_size() function that returns either the runtime
page size (if PAGE_SIZE is not 4096) or a constant 4096 (elsewhere).
This should ensure that there are no changes to the generated code on
unaffected platforms.
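A minimal sketch of the described helper (assuming the intent above;
the real bionic definition may differ in detail):

  #include <unistd.h>

  inline size_t page_size() {
  #if defined(PAGE_SIZE) && PAGE_SIZE == 4096
    return 4096;           // a constant: generated code is unchanged
  #else
    return getpagesize();  // the runtime page size
  #endif
  }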
Test: source build/envsetup.sh
lunch aosp_cf_arm64_16k_phone-userdebug
m -j32 installclean
m -j32
Test: launch_cvd \
-kernel_path /path/to/out/android14-5.15/dist/Image \
-initramfs_path /path/to/out/android14-5.15/dist/initramfs.img \
-userdata_format=ext4
Bug: 277272383
Bug: 230790254
Change-Id: Ic0ed98b67f7c6b845804b90a4e16649f2fc94028
This mode instructs the linker to search for libraries in hwasan
subdirectories of all library search paths. These subdirectories are
set up to contain a hwasan-enabled copy of libc, which is needed for
HWASan programs to operate. There are two ways this mode can be enabled:
* for native binaries, by using the linker_hwasan64 symlink as their
interpreter
* for apps, by setting the LD_HWASAN environment variable in wrap.sh
Bug: 276930343
Change-Id: I0f4117a50091616f26947fbe37a28ee573b97ad0
As of https://reviews.llvm.org/D143769, binaries (with -fsanitize=memtag-*)
have DT_AARCH64_MEMTAG_* dynamic entries, as per the AArch64 MemtagABI.
Android uses an OS-specific ELF note for MTE config, but we should
migrate to the new entries (while preserving backwards compatibility).
Without actually doing the migration right now, just handle these new
entries. Otherwise, you get a whole bunch of logspam about the
unrecognised dynamic entries.
Bug: 274032544
Test: Build android, don't get logspam.
Change-Id: I5c8b59f77a0058e5b93335e269d558a5014f2260
android_namespace_link_t::shared_lib_sonames_ is an unordered_set<string>.
When it is initialized, it's copied a few times unnecessarily:
- when add_linked_namespace is called
- when android_namespace_link_t() is called
- when push_back is called
Now, it's moved around after the initial creation, as sketched below.
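A hedged sketch of the change (types simplified; the real classes have
more members):

  #include <string>
  #include <unordered_set>
  #include <utility>
  #include <vector>

  struct android_namespace_link_t {
    explicit android_namespace_link_t(std::unordered_set<std::string> sonames)
        : shared_lib_sonames_(std::move(sonames)) {}  // move, not copy
    std::unordered_set<std::string> shared_lib_sonames_;
  };

  struct android_namespace_t {
    void add_linked_namespace(std::unordered_set<std::string> sonames) {
      // Take by value and keep moving: the set is copied at most once,
      // at the initial call site.
      linked_namespaces_.emplace_back(std::move(sonames));
    }
    std::vector<android_namespace_link_t> linked_namespaces_;
  };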
Bug: n/a
Test: atest --test-mapping .
Change-Id: I283954bb0c0bbf94ebd74407137f492e08fd41bd
As of https://r.android.com/2304013 classloader namespaces are no
longer called "classloader-namespace". However, this whole TODO is
stale - it was supposed to be addressed in O and it only applies to
compat code for SDK < 24, so there is no use fixing it now.
Test: N/A - comment change only
Bug: 258340826
Change-Id: Id09e262191cea236224196a4a4268331d5cf84c6
The two namespaces are often the same, but if they aren't, the old
message could be confusing and not very helpful.
#codehealth
Test: Build and boot with `LinkerLogger::flags_ = kLogDlopen` and check
logcat
Change-Id: I61a78d40f1eb5c074772e3c113a1055d3e915cb1
The file is a manually created linker config file for the binaries in
the APEX. This is discouraged since such a manually created linker
config is error-prone and hard to maintain. Since the per-APEX
linker config file is automatically created by the linkerconfig tool as
/linkerconfig/<name>/ld.config.txt, we can safely deprecate the
fallback path.
There are currently two APEXes using these hand-crafted configs. They
can (and should) keep the configs for backwards compatibility, in case
they run on older devices where the auto-generated configs are not
available. But for newer platforms, the files are simply ignored, and
no new APEX should be using them.
Bug: 218933083
Test: m
Change-Id: I84bd8850b626a8506d53af7ebb86b158f6e6414a
During "step 1" of find_libraries, the linker finds the transitive
closure of dependencies, in BFS order. As it finds each library, it
adds the library to its primary namespace (so that, if some other
library also depends on it, find_loaded_library_by_soname can find the
library in the process of being loaded).
LD_PRELOAD libraries are automatically marked DF_1_GLOBAL, and any
DF_1_GLOBAL library is added to every linker namespace. Previously,
this secondary namespace registration happened after step 1. The result
is that across different namespaces, the order of libraries could vary.
In general, a namespace's primary members will all appear before
secondary members. This is undesirable for libsigchain.so, which we
want to have appear before any other non-preloaded library.
Instead, when an soinfo is added to its primary namespace, immediately
add it to all the other namespaces, too. This ensures that the order of
soinfo objects is the same across namespaces.
Expand the dl.exec_with_ld_config_file_with_ld_preload and
dl.exec_with_ld_config_file tests to cover the new behavior. Mark
lib1.so DF_1_GLOBAL and use a "foo" symbol to mimic the behavior of a
signal API interposed by (e.g.) libsigchain.so and an ASAN preload.
Test: bionic unit tests
Bug: http://b/143219447
Change-Id: I9fd90f6f0d14caf1aca6d414b3e9aab77deca3ff
Setting the linker's soname ("ld-android.so") can allocate heap memory
now that the name uses a std::string, so it's probably a good idea to
defer doing this until after the linker has relocated itself (and after
it has called C++ constructors for global variables).
Bug: none
Test: bionic unit tests
Test: verify that dlopen("ld-android.so", RTLD_NOLOAD) works
Change-Id: I6b9bd7552c3ae9b77e3ee9e2a98b069b8eef25ca
Once upon a time (and, indeed, to this very day if you're on LP32) the
soinfo struct used a fixed-length buffer for the soname. This caused
some issues, mainly with app developers who accidentally included a full
Windows "C:\My Computer\...\libfoo.so" style path. To avoid all this we
switched to just pointing into the ELF file itself, where the DT_SONAME
is already stored as a NUL-terminated string. And all was well for many
years.
Now though, we've seen a bunch of slow startup traces from dogfood where
`dlopen("libnativebridge.so")` in a cold start takes 125-200ms on a recent
device, despite no IO contention, even though libnativebridge.so is only
20KiB.
Measurement showed that every library whose soname we check required
pulling in a whole page just for the (usually) very short string. Worse,
there's readahead. In one trace we saw 18 pages of libhwui.so pulled
in just for `"libhwui.so\0"`. In fact, there were 3306 pages (~13MiB)
added to the page cache during `dlopen("libnativebridge.so")`. 13MiB for
a 20KiB shared library!
The obvious fix is to copy the sonames into a std::string instead.
This will dirty slightly more memory, but massively improve locality.
Testing with the same pathological setup took `dlopen("libnativebridge.so")`
down from 192ms to 819us.
Bug: http://b/177102905
Test: tested with a pathologically modified kernel
Change-Id: I33837f4706adc25f93c6fa6013e8ba970911dfb9
This patch adds support for loading BTI-enabled objects.
According to the ABI, BTI is recorded in the .note.gnu.property section.
The new parser evaluates the property section, if it exists.
It searches for a .note section with NT_GNU_PROPERTY_TYPE_0.
Once found, it looks for GNU_PROPERTY_AARCH64_FEATURE_1_AND.
The results are cached.
The main change in the linker is where protection of loaded ranges gets
applied. When BTI is requested and the platform also supports it,
the prot flags have to be amended with PROT_BTI for executable ranges.
Failing to add the PROT_BTI flag would disable BTI protection.
Moreover, adding the new PROT flag for shared objects without BTI
compatibility would break applications.
The kernel does not add PROT_BTI to a loaded ELF that has an interpreter.
The linker handles this case too.
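A hedged sketch of the protection step (names are illustrative; PROT_BTI
is the arm64 Linux flag, with a fallback define as an assumption):

  #include <sys/mman.h>

  #ifndef PROT_BTI
  #define PROT_BTI 0x10  // arm64 Linux value; assumption if undefined
  #endif

  // Amend the prot flags with PROT_BTI only for executable ranges, and
  // only when both the ELF records BTI support and the platform has it.
  int protect_segment(void* start, size_t len, int prot,
                      bool elf_has_bti, bool platform_has_bti) {
    if ((prot & PROT_EXEC) && elf_has_bti && platform_has_bti) {
      prot |= PROT_BTI;
    }
    return mprotect(start, len, prot);
  }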
Test: 1. Flame boots
2. Tested on FVP with BTI enabled
Change-Id: Iafdf223b74c6e75d9f17ca90500e6fe42c4c1218
Update a comment in android-changes-for-ndk-developers.md about the
removed debug.ld.greylist_disabled system property.
Update language to comply with Android's inclusive language guidance
#inclusivefixit
See https://source.android.com/setup/contribute/respectful-code for reference
Bug: http://b/162536543
Test: bionic-unit-tests
Change-Id: I760ee14bce14d9d799926c43d2c14fd8ffbc6968
1. Cleanup for #inclusivefixit. (whitelisted -> allowed_libs)
2. Support the old term for backwards compatibility. (Also update test.)
3. Fix the formatting errors found by clang-format.
See https://source.android.com/setup/contribute/respectful-code
for reference.
Bug: 161896447
Test: atest linker-unit-tests linker-benchmarks
Change-Id: I19dbed27a6d874ac0049cb7b67d2cb0f75369c1b
This property provided a way to disable the greylist, for testing
whether an app targeting < 24 still works. Instead of turning off the
greylist, though, an app developer should simply target a newer API.
(If app developers really need this property for testing, they can
still use it on versions of Android between N and R, inclusive.)
Update language to comply with Android's inclusive language guidance
See https://source.android.com/setup/contribute/respectful-code for reference
#inclusivefixit
Bug: http://b/162536543
Test: bionic-unit-tests
Change-Id: Id1eb2807fbb7436dc9ed7fe47e15b7d165a26789
Add inaccessible gaps between shared libraries to make it harder for
attackers to defeat ASLR by random probing.
To avoid excessive page table bloat, only do this when a library is
about to cross a huge page boundary, effectively allowing several
smaller libraries to be lumped together.
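A hedged sketch of that placement policy (the 2MiB huge-page size and
all names are assumptions for illustration):

  #include <stddef.h>
  #include <stdint.h>

  constexpr uintptr_t kHugePageSize = 2 * 1024 * 1024;  // assumed 2MiB

  // Insert a randomized PROT_NONE gap only when the next library would
  // otherwise straddle a huge-page boundary; smaller libraries between
  // boundaries get lumped together without gaps.
  bool should_insert_gap(uintptr_t next_lib_start, size_t lib_size) {
    return (next_lib_start / kHugePageSize) !=
           ((next_lib_start + lib_size) / kHugePageSize);
  }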
Bug: 158113540
Test: look at /proc/$$/maps
Change-Id: I39c0100b81f72447e8b3c6faafa561111492bf8c
This reverts commit a8cf3fef2a.
Reason for revert: memory regression due to the fragmentation of the page tables
Bug: 159810641
Bug: 158113540
Change-Id: I6212c623ff440c7f6889f0a1e82cf7a96200a411
There are some special cases - such as the init process - when the
linker configuration is not expected to exist. This change disables the
warning message about a missing generated linker configuration in those
cases.
Bug: 158800902
Test: Tested on cuttlefish that the warning message is not generated
from init
Change-Id: Ie2fbb5210175cf1e6f2b7e638f57c3b74d395368
Improve ASLR by increasing the randomly sized gaps between shared
library mappings, and keep them mapped PROT_NONE.
Bug: 158113540
Test: look at /proc/$$/maps
Change-Id: Ie72c84047fb624fe2ac8b7744b2a2d0d255ea974
Change the location set in the linker
Bug: 130219528
Bug: 138994281
Test: atest CtsBionicTestCases
Test: atest CtsJniTestCases
Change-Id: I215a8e023ccc4d5ffdd7df884c809f8d12050c8f
For the bootstrap linker, insert /system/${LIB}/bootstrap in front of
/system/${LIB} in any namespace search path.
Bug: http://b/152572170
Test: bionic unit tests
Change-Id: Ia359d9f2063f4b6fff3f79b51b500ba968a18247