Some obfuscated ELFs may contain "empty" PT_NOTEs (p_memsz == 0).
Attempting to mmap these will cause an EINVAL failure since the
requested mapping size is zero.
Skip these program headers when parsing notes.
Also include the arguments to the mmap syscall in the failure log.
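A minimal sketch of the skip, assuming the usual program-header loop in
the note parser (variable names are illustrative):
```
for (size_t i = 0; i < phdr_count; ++i) {
  const ElfW(Phdr)* phdr = &phdr_table[i];
  if (phdr->p_type != PT_NOTE) {
    continue;
  }
  // mmap() of a zero-sized range fails with EINVAL, so an "empty"
  // note (p_memsz == 0) must be skipped rather than mapped.
  if (phdr->p_memsz == 0) {
    continue;
  }
  // ... map and parse the note contents ...
}
```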
Test: Platinum Tests
Bug: 324468126
Change-Id: I7de4e55c6d221d555faabfcc33bb6997921dd022
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fix up for ELFs not loaded
by us (e.g. ELF loaded by the kernel) since these don't have the
extended VMA fix to begin with.
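A hedged sketch of the fix-up (helper and field names are assumptions,
not the actual bionic API):
```
// Only extend the RELRO protection range for ELFs the bionic loader
// mapped itself; a kernel-loaded ELF has no extended VMAs to match.
if (should_pad_segments && next_load != nullptr) {
  // Round the RELRO end up to the start of the next LOAD segment so
  // the mprotect() covers the whole extended VMA instead of
  // splitting it.
  seg_page_end = page_start(next_load->p_vaddr + load_bias);
}
```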
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [ Later patch ]
Test: atest -c bionic-unit-tests
Change-Id: If1d99e8b872fcf7f6e0feb02ff33503029b63be3
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When the page_size < p_align of the ELF load segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to a ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
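A rough sketch of the extension, under assumed names (next_phdr and the
page_start()/page_end() helpers are illustrative):
```
// Map up to the page containing the start of the next LOAD segment
// instead of stopping at this segment's own page-aligned end.
ElfW(Addr) seg_file_end = seg_start + phdr->p_filesz;
ElfW(Addr) map_end = should_pad_segments
                         ? page_start(next_phdr->p_vaddr + load_bias)
                         : page_end(seg_file_end);
// Bytes past page_end(seg_file_end) must never be touched: their
// file offsets would overrun the size of the file.
size_t map_size = map_end - page_start(seg_start);
```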
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: I3363172c02d5a4e2b2a39c44809e433a4716bc45
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
The PAD_SEGMENT note is used to optimize memory usage of the loader.
If the note parsing fails, skip the optimization and continue
loading the ELF normally.
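A sketch of the intended fallback, assuming ReadPadSegmentNote()
reports failure through its return value:
```
// A failure to parse the PAD_SEGMENT note only disables the VMA
// padding optimization; the ELF still loads normally.
if (!ReadPadSegmentNote()) {
  should_pad_segments_ = false;
}
```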
Bug: 324309329
Bug: 316403210
Change-Id: I2aabc9f399816c53eb33ff303208a16022571edf
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
This reverts commit 79c9694c91.
Reason for revert: DroidMonitor: Potential culprit for Bug b/324348078 - verifying through ABTD before revert submission. This is part of the standard investigation process, and does not mean your CL will be reverted.
Change-Id: I32f7bc824900e18a7d53b025ffe3aaef0ee71802
kPageSize is needed to determine whether the loader needs to
extend VMAs to avoid gaps in the memory map when loading the ELF.
While at it, use kPageSize to generically deduce the PMD size and
replace the hardcoded 2MB PMD size.
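The deduction, sketched (this yields the familiar 2MB PMD on 4KB-page
systems, assuming 8-byte page-table entries):
```
// A PMD maps (kPageSize / sizeof(uint64_t)) page-table entries, each
// covering kPageSize bytes: with 4KB pages, (4096 / 8) * 4096 = 2MB.
static constexpr size_t kPmdSize = (kPageSize / sizeof(uint64_t)) * kPageSize;
```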
Bug: 316403210
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Change-Id: Ie770320e629c38149dc75dae1deb1e429dd1acf2
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
ReadPadSegmentNote() finds the ELF note of type
NT_ANDROID_TYPE_PAD_SEGMENT and checks that the desc value
is 1, to decide whether the LOAD segment mappings should
be extended (padded) to avoid gaps.
Cache the result of this operation in ElfReader and soinfo
for use in the subsequent patch which handles the extension
of the segment mappings.
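A hedged sketch of the check (note layout and field names assumed):
```
// Accept the padding optimization only when the note's desc word is 1.
if (note->n_type == NT_ANDROID_TYPE_PAD_SEGMENT &&
    note->n_descsz == sizeof(ElfW(Word))) {
  ElfW(Word) desc = *reinterpret_cast<const ElfW(Word)*>(desc_addr);
  should_pad_segments_ = (desc == 1);
}
```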
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Bug: 316403210
Change-Id: I32c05cce741d221c3f92835ea09d932c40bdf8b1
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
BYPASS_INCLUSIVE_LANGUAGE_REASON="man" refers to manual not person
Bug: 318749472
Test: atest pthread on MTE enabled device
Test: atest memtag_stack_dlopen_test on MTE enabled device
Test: manual with NDK r26b built app with fsanitize=memtag-stack
Change-Id: Iac191c31b87ccbdc6a52c63ddd22e7b440354202
This CL is created as a best effort to migrate test targets to the new Android ownership model.
It is based on historical data from repository history and insights from git blame.
Given the nature of this effort, there may be instances of incorrect attribution. If you find incorrect or unnecessary
attribution in this CL, please create a new CL to fix that.
For detailed guidelines and further information on the migration please refer to the link below,
go/new-android-ownership-model
Bug: 304529413
Test: N/A
Change-Id: Ie36b2a3245d9901323affcc5e51dafbb87af9248
The libraries are
- libdl_static
- liblinker_main
- liblinker_malloc
- libsystemproperties
Availability to the runtime APEX was done implicitly using a baseline
map in build/soong/apex/apex.go. Make this explicit in Android.bp.
Bug: 281077552
Test: m nothing
Change-Id: I029ae204f6cfef8c301a20b7c4294636b60b38be
Add support for 16kB page sizes in the test cases: page_start
and page_offset.
Bug: 315174209
Test: atest -c linker-unit-tests
Change-Id: Ibaae493a0930f3f2df390a6af6c8a988a682fe52
This is a no-op but will be used in upcoming scudo changes that allow
changing the depot size at process startup time, and as such we will no
longer be able to call __scudo_get_stack_depot_size in debuggerd.
We already did the equivalent change for the ring buffer size in
https://r.android.com/q/topic:%22scudo_ring_buffer_size%22
Bug: 309446692
Change-Id: Icdcf4cd0a3a486d1ea07a8c616cae776730e1047
The build breakage is now fixed by the current stable Clang, so the
workaround is no longer needed.
Test: presubmit
Change-Id: Ice2863844ff886f503d50fa7a006cde78d16492d
This patch adds the necessary bionic code for the linker to protect
global data using MTE.
The implementation is described in the MemtagABI addendum to the
AArch64 ELF ABI:
https://github.com/ARM-software/abi-aa/blob/main/memtagabielf64/memtagabielf64.rst
In summary, this patch includes:
1. When MTE globals are requested, the linker maps writable SHF_ALLOC
sections as anonymous pages with PROT_MTE (copying the file contents
into the anonymous mapping), rather than using a file-backed private
mapping. This is required as file-based mappings are not necessarily
backed by the kernel with tag-capable memory. For sections already
mapped by the kernel when the linker is invoked via PT_INTERP, we
unmap the contents, remap a PROT_MTE+anonymous mapping in its place,
and re-load the file contents from disk.
2. When MTE globals are requested, the linker tags areas of global memory
(as defined in SHT_AARCH64_MEMTAG_GLOBALS_DYNAMIC) with random tags,
but ensuring that adjacent globals are never tagged using the same
memory tag (to provide deterministic overflow detection).
3. Changes to RELATIVE, ABS64, and GLOB_DAT relocations to load and
store tags in the right places. This ensures that the address tags are
materialized into the GOT entries as well. These changes are a
functional no-op to existing binaries and/or non-MTE capable hardware.
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Change-Id: Id7b1a925339b14949d5a8f607dd86928624bda0e
I'm not likely to see these on mobile hardware, but I do see them
on qemu, and this is annoying:
```
AT_??? (46) 0
AT_??? (47) 0
```
Test: treehugger
Change-Id: I2db3e8adaecf55bce7b5046e17ec1ef7b2e3b8ea
Looks like we all copy-pasted the same comment, and the original comment was wrong. These functions all return 0 on success, and -1 on error.
Change-Id: I11e635e0895fe1fa941d69b721b8ad9ff5eb7f15
Test: N/A
Bug: N/A
Adds support for dynamic entries that specify MTE enablement. This is
now the preferred way for dynamically linked executables to tell the
loader what mode MTE should be in, and whether stack MTE should be
enabled. In the future, this is also needed for MTE globals support.
Leave the existing ELF note parsing as a backup option because dynamic
entries are not supported for fully static executables, and there's
still a bunch of glue sitting around in the build system and tests that
explicitly include the note. When -fsanitize=memtag* is specified, lld
will create the note implicitly (along with the new dynamic entries),
but at some point once we've cleaned up all the old references to the
note, we can remove the notegen from lld.
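A hedged sketch of reading the new entries while walking the dynamic
table (value semantics per the AArch64 MemtagABI; the surrounding
switch over d_tag is assumed):
```
case DT_AARCH64_MEMTAG_MODE:
  // 0 = synchronous, 1 = asynchronous tag checking.
  memtag_mode = d->d_un.d_val;
  break;
case DT_AARCH64_MEMTAG_STACK:
  memtag_stack = d->d_un.d_val != 0;
  break;
```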
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Test: Build/boot the device under _fullmte.
Change-Id: I954b7e78afa5ff4274a3948b968cfad8eba94d88
LinkerBlockAllocator::alloc() always does a memset() anyway, and soinfo
contains non-POD types, so this seems both unnecessary and questionable.
Bug: http://b/298741930
Test: treehugger
Change-Id: I2b5a3858c8506c6086dfd59cbe729afb5b9ffa1c
__linker_init calls soinfo::~soinfo after __linker_init_post_relocation
has returned, and ~soinfo modifies global loader state
(g_soinfo_handles_map), so the loader lock must be held.
Use a variable with a destructor (~DlMutexUnlocker) to release the
loader lock after ~soinfo returns.
It's not clear to me when the mutex should be acquired.
pthread_mutex_lock in theory can use __bionic_tls (ScopedTrace), but
that should only happen if the mutex is contended, and it won't be.
The loader constructors shouldn't be spawning a thread, and the vdso
shouldn't really have a constructor. ifunc relocations presumably don't
spawn a thread either. It probably doesn't matter much as long as it's
held before calling constructors in the executable or shared objects.
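A sketch of the RAII helper named above (exact shape assumed):
```
// Releases the loader lock when it goes out of scope, i.e. after
// soinfo::~soinfo has run at the end of __linker_init.
class DlMutexUnlocker {
 public:
  explicit DlMutexUnlocker(pthread_mutex_t& m) : mutex_(m) {}
  ~DlMutexUnlocker() { pthread_mutex_unlock(&mutex_); }

 private:
  pthread_mutex_t& mutex_;
};
```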
Bug: http://b/290318196
Test: treehugger
Test: bionic-unit-tests
Change-Id: I544ad65db07e5fe83315ad4489d77536d463a0a4
The block allocator mmaps an arbitrarily large area
("LinkerBlockAllocatorPage") of size 4kB * 100 in order to reduce
mmap syscalls and kernel VMA memory usage.
This works fine for a 16kB page size, since 100 4kB pages equal 25
16kB pages.
But for a 64kB page size the area would include a partial page (a 16kB
region at the end).
Change the size to 96 4kB pages (6 64kB pages), which works for all
aarch64-supported page sizes.
Also remove the use of PAGE_SIZE.
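The arithmetic, as a compile-time check (the constant name is an
assumption):
```
// 96 * 4kB = 384kB: a whole number of pages for every supported size.
static constexpr size_t kAllocateSize = 4096 * 96;
static_assert(kAllocateSize % (4 * 1024) == 0);   // 96 pages
static_assert(kAllocateSize % (16 * 1024) == 0);  // 24 pages
static_assert(kAllocateSize % (64 * 1024) == 0);  // 6 pages
```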
Bug: 294438799
Test: atest -c linker-unit-tests
Change-Id: I7782406d1470183097ce9391c9b70b177e1750e6
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
The new libc++_static.a platform prebuilt prefers to use this C11
function rather than the non-standard memalign.
Bug: http://b/175635923
Test: treehugger
Change-Id: I9c181568ba69c0508e400baac2cbb42b169c9768
A constructor could spawn a thread, which could call into the loader,
so the global loader mutex must be held.
Bug: http://b/290318196
Test: treehugger
Change-Id: I7a5249898a11fbc62d1ecdb85b24017a42a4b179
To enable experiments with non-4KiB page sizes, introduce
an inline page_size() function that will either return the runtime
page size (if PAGE_SIZE is not 4096) or a constant 4096 (elsewhere).
This should ensure that there are no changes to the generated code on
unaffected platforms.
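A sketch matching that description (the real helper may obtain the
runtime page size differently, e.g. from the aux vector):
```
#include <unistd.h>

inline size_t page_size() {
#if defined(PAGE_SIZE) && PAGE_SIZE == 4096
  // Constant-folds on platforms with a fixed 4096-byte page size,
  // leaving the generated code there unchanged.
  return PAGE_SIZE;
#else
  return static_cast<size_t>(getpagesize());
#endif
}
```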
Test: source build/envsetup.sh
lunch aosp_cf_arm64_16k_phone-userdebug
m -j32 installclean
m -j32
Test: launch_cvd \
-kernel_path /path/to/out/android14-5.15/dist/Image \
-initramfs_path /path/to/out/android14-5.15/dist/initramfs.img \
-userdata_format=ext4
Bug: 277272383
Bug: 230790254
Change-Id: Ic0ed98b67f7c6b845804b90a4e16649f2fc94028
We don't actually care about the length of this jump, and lld will relax
it to a jal when possible anyway. Better to have people copy & paste
call and tail than jal and j.
Test: treehugger
Change-Id: I889044b95fbb5567189a0d6ef31f81df0e0383cd
This mode instructs the linker to search for libraries in hwasan
subdirectories of all library search paths. This is set up to contain a
hwasan-enabled copy of libc, which is needed for HWASan programs to
operate. There are two ways this mode can be enabled:
* for native binaries, by using the linker_hwasan64 symlink as the
binary's interpreter
* for apps, by setting the LD_HWASAN environment variable in wrap.sh
Bug: 276930343
Change-Id: I0f4117a50091616f26947fbe37a28ee573b97ad0
As of https://reviews.llvm.org/D143769, binaries (with -fsanitize=memtag-*)
have DT_AARCH64_MEMTAG_* dynamic entries, as per the AArch64 MemtagABI.
Android uses an OS-specific ELF note for MTE config, but we should
migrate to the new thing (while preserving backwards compatibility).
Without actually doing the migration right now, just handle these new
entries. Otherwise, you get a whole bunch of logspam about the
unrecognised dynamic entries.
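A sketch of the minimal handling (tag names from the MemtagABI; acting
on the values is left for the migration):
```
// Recognize the new tags so they no longer hit the unknown-dynamic-
// entry warning path.
case DT_AARCH64_MEMTAG_MODE:
case DT_AARCH64_MEMTAG_STACK:
case DT_AARCH64_MEMTAG_HEAP:
case DT_AARCH64_MEMTAG_GLOBALS:
  break;
```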
Bug: 274032544
Test: Build android, don't get logspam.
Change-Id: I5c8b59f77a0058e5b93335e269d558a5014f2260
GWP-ASan's recoverable mode was landed upstream in
https://reviews.llvm.org/D140173.
This mode allows a use-after-free or a buffer-overflow bug to be
detected by GWP-ASan and a crash report dumped, after which GWP-ASan
(through the preCrashReport() and postCrashReportRecoverableOnly()
hooks) patches up the memory so that the process can continue, in spite
of the memory safety bug.
This is desirable, as it allows us to consider migrating non-system apps
from opt-in GWP-ASan to opt-out GWP-ASan. The major concern was "if we
make it opt-out, then bad apps will start crashing". If we don't crash,
problem solved :). Obviously, we'll need to do this with an amount of
process sampling to mitigate against the 70KiB memory overhead.
The biggest problem is that the debuggerd signal handler isn't the first
signal handler for apps, it's the sigchain handler inside of libart.
Clearly, the sigchain handler needs to ask us whether the crash is
GWP-ASan's fault, and if so, please patch up the allocator. Because of
linker namespace restrictions, libart can't directly ask the linker
(which is where debuggerd lies), so we provide a proxy function in libc.
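A hedged sketch of the recovery flow using the hooks named above (the
ownership-check and reporting helpers are hypothetical; only the two
hook names come from GWP-ASan):
```
// In the crash handler: if GWP-ASan owns the faulting address, dump a
// report, then patch the allocator so the process can keep running.
if (gwp_asan_owns_pointer(fault_addr)) {  // hypothetical helper
  preCrashReport(fault_addr);
  dump_crash_report();                    // hypothetical reporting step
  postCrashReportRecoverableOnly(fault_addr);
  return;                                 // resume instead of crashing
}
```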
Test: Build the platform, run sanitizer-status and various test apps
with recoverable gwp-asan. Assert that it doesn't crash, and we get a
debuggerd report.
Bug: 247012630
Change-Id: I86d5e27a9ca5531c8942e62647fd377c3cd36dfd
Enable linking on a system without /proc mounted by falling back to
reading the executable path from argv[0] when /proc/self/exe can't be
found.
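A sketch of the fallback:
```
#include <errno.h>
#include <limits.h>
#include <string.h>
#include <unistd.h>

char exe_path[PATH_MAX];
ssize_t len = readlink("/proc/self/exe", exe_path, sizeof(exe_path) - 1);
if (len != -1) {
  exe_path[len] = '\0';
} else if (errno == ENOENT) {
  // /proc isn't mounted; trust the path the caller handed us instead.
  strlcpy(exe_path, argv[0], sizeof(exe_path));
}
```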
Bug: 254835242
Change-Id: I0735e873fa4e2f439688722c4a846fb70ff398a5
This is a no-op but will be used in upcoming scudo changes that allow
changing the buffer size at process startup time, and as such we will
no longer be able to call __scudo_get_ring_buffer_size in debuggerd.
Bug: 263287052
Change-Id: I18f166fc136ac8314d748eb80a806defcc25c9fd
When loading a dynamic library, reserving memory can succeed while a
later step, such as loading segments, fails. Because the reserved
memory is not released on that path, this causes a memory leak.
Bug: https://issuetracker.google.com/issues/263713888
Change-Id: I556ee02e37db5259df0b6c7178cd9a076dab9725
Signed-off-by: huangchaochao <huangchaochao@bytedance.com>
android_namespace_link_t::shared_lib_sonames_ is an
unordered_set<string>. When initializing, it's copied a few times
unnecessarily:
- when add_linked_namespace is called
- when android_namespace_link_t() is called
- when push_back is called
Now, it's moved around after the initial creation.
Bug: n/a
Test: atest --test-mapping .
Change-Id: I283954bb0c0bbf94ebd74407137f492e08fd41bd
As of https://r.android.com/2304013 classloader namespaces are no
longer called "classloader-namespace". However, this whole TODO is
stale - it was supposed to be addressed in O and it only applies to
compat code for SDK < 24, so there is no use fixing it now.
Test: N/A - comment change only
Bug: 258340826
Change-Id: Id09e262191cea236224196a4a4268331d5cf84c6
The only meaningful change from alibaba's version is jr instead of jalr.
Also fix the comment that's in all the begin.S files while we're here.
Signed-off-by: Mao Han <han_mao@linux.alibaba.com>
Signed-off-by: Xia Lifang <lifang_xia@linux.alibaba.com>
Signed-off-by: Chen Guoyin <chenguoyin.cgy@linux.alibaba.com>
Signed-off-by: Wang Chen <wangchen20@iscas.ac.cn>
Signed-off-by: Lu Xufan <luxufan@iscas.ac.cn>
Test: ran mksh
Change-Id: I2645c78bd700b8a55bde363600d7f8b87de641a1
The two namespaces are often the same, but if they aren't the old
message could be confusing and not very helpful.
#codehealth
Test: Build and boot with `LinkerLogger::flags_ = kLogDlopen` and check
logcat
Change-Id: I61a78d40f1eb5c074772e3c113a1055d3e915cb1
liblog_for_runtime_apex is a static variant of liblog which is
explicitly marked as available to the runtime APEX. Any static
dependency to liblog from inside the runtime APEX is changed from liblog
to liblog_for_runtime_apex.
Previously, to support the need for using liblog inside the runtime
APEX, the entire (i.e. both static and shared variants) liblog module
was marked as available to the runtime APEX, although in reality only
the static variant of the library was needed there. This not only
looked dirty, but also caused problems like b/241259844.
To fix this, liblog is separated into two parts: (1) liblog and (2)
liblog_for_runtime_apex. (1) is no longer available to the runtime APEX
and is intended to be depended on in most cases: either from the
non-updatable platform, or from other APEXes. (2) is a static library
which is explicitly marked as available to the runtime APEX and also
visible to certain modules that are included in the runtime APEX.
Bug: 241259844
Test: m and check that liblog depends on stub library of libc
Change-Id: Ib21f6e64da0c7592341b97b95ca8485d7c29ac4d
As with libc.so, add the debug frame into the linker for arm32
to make any crashes unwindable.
Bug: 242162222
Test: Forced a crash on wembley where an unwind failed before and
Test: verified it unwinds properly with the debug frame.
Change-Id: I2b904af63f670b038d169f5a7d907637b352ab4e
Map all stacks (primary, thread, and sigaltstack) as PROT_MTE when the
binary requests it through the ELF note.
For reference, the note is produced by the following toolchain changes:
https://reviews.llvm.org/D118948
https://reviews.llvm.org/D119384
https://reviews.llvm.org/D119381
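A hedged sketch of the mapping (PROT_MTE is the arm64 protection flag;
the exact call sites in bionic differ):
```
// New thread stacks and sigaltstacks: map with PROT_MTE so stack
// slots can carry allocation tags.
void* stack = mmap(nullptr, stack_size,
                   PROT_READ | PROT_WRITE | PROT_MTE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
// The already-mapped primary stack can be upgraded in place:
mprotect(stack_bottom, stack_size,
         PROT_READ | PROT_WRITE | PROT_MTE | PROT_GROWSDOWN);
```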
Bug: b/174878242
Test: fvp_mini with ToT LLVM (more tests in a separate change)
Change-Id: I04a4e21c966e7309b47b1f549a2919958d93a872
The lambda function is converted to bool instead of being called, so
get_transparent_hugepages_supported() always returns true.
Test: check whether /sys/kernel/mm/transparent_hugepage/enabled is
accessed via strace.
Bug: http://b/233137490
Signed-off-by: Suchang Woo <suchang.woo@samsung.com>
Change-Id: I88b0d18d8ceb2300482043391eed4ae7041866ca
The file is a manually created linker config file for the binaries in
the APEX. This is discouraged since such a manually created linker
config is error-prone and hard to maintain. Since the per-APEX
linker config file is automatically created by the linkerconfig tool as
/linkerconfig/<name>/ld.config.txt, we can safely deprecate the
fallback path.
There currently are two APEXes using these hand-crafted configs. They
can (and should) keep the configs for backwards compatibility, in case
they run on older devices where the auto-generated configs are not
available. But for newer platforms, the files are simply ignored and no
new APEX should be using that.
Bug: 218933083
Test: m
Change-Id: I84bd8850b626a8506d53af7ebb86b158f6e6414a
This is important for enabling the error about unsupported TLS
relocations to local symbols. The fast path tends to skip this error,
because it fails during lookup_symbol(). Add a test for this error.
I didn't see a performance regression in the linker_relocation
benchmark.
Bug: http://b/226978634
Test: m bionic-unit-tests
Change-Id: Ibef9bde2973cf8c2d420ecc9e8fe2c69a5097ce2
This flag means "$ORIGIN processing required", and since we always
do that, we can claim support for it.
Change-Id: If60ef331963f6bc1e1818d7fa2ee57c1aa8fa343
For convenience, builds against musl libc currently use the
linux_glibc properties because they are almost always linux-specific
and not glibc-specific. In preparation for removing this hack,
tweak the linux_glibc properties by either moving them to host_linux,
which will apply to linux_glibc, linux_musl and linux_bionic, or
by setting appropriate musl or linux_musl properties. Properties
that must not be repeated while musl uses linux_musl and also still
uses the linux_glibc properties are moved to glibc properties, which
don't apply to musl. Whether these stay as glibc properties or get
moved back to linux_glibc later once the musl hack is removed is TBD.
Bug: 223257095
Test: m checkbuild
Test: m USE_HOST_MUSL=true host-native
Change-Id: I809bf1ba783dff02f6491d87fbdc9fa7fc0975b0
For a 32-bit userspace, `struct LinkedListEntry` takes 8 bytes to
store the two pointers, so the default block allocator size alignment
of 16 bytes would waste 50% of the memory. By changing the alignment to
the size of a pointer, we save >1MB of memory post-boot on a wembley
device.
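The change, sketched (the constant name is an assumption):
```
// Align block sizes to the pointer size rather than a fixed 16 bytes:
// an 8-byte LinkedListEntry then consumes one 8-byte block, not 16.
static constexpr size_t kBlockSizeAlign = sizeof(void*);
size_t block_size =
    (request_size + kBlockSizeAlign - 1) & ~(kBlockSizeAlign - 1);
```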
Bug: http://b/206889551
Test: bionic-unit-tests
Change-Id: Ie92399c9bb3971f631396ee09bbbfd7eb17dc1a7
This change is to allocate `head_` and `tail_` outside of LinkedList
and only keep a readonly pointer there. By doing this, all updates
of the list touches memory other than the LinkedList itself, thus
preventing copy-on-write pages being allocated in child processes
when the list changes.
The other approach is to make the LinkedList a singly-linked list,
however, that approach would cause a full list traversal to add
one item to the list. And preliminary numbers show there are ~60K
calls to `soinfo::add_secondary_namespace` during Android bootup
on a wembley device, where a singly-linked approach could be
hurting performance.
NOTE: the header is allocated and initialized upon first use instead
of being allocated in the constructor; the latter crashes.
This is likely caused by static initialization order in the linker,
e.g. g_soinfo_list_allocator is a static object, and if this linked
list is embedded into some other static objects, there's no guarantee
the allocator will be available.
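A sketch of the resulting layout (names assumed):
```
// The list object itself is read-only after creation: every mutation
// goes through the separately allocated header, so changing the list
// never dirties (and never CoW-copies) the page holding the list.
template <typename T, typename Allocator>
class LinkedList {
  struct Header {
    LinkedListEntry<T>* head;
    LinkedListEntry<T>* tail;
  };
  Header* header_ = nullptr;  // allocated lazily on first use (see NOTE)
};
```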
Bug: http://b/206889551
Test: bionic-unit-tests
Change-Id: Ic6f053881f85f9dc5d249bb7d7443d7a9a7f214f
Clang cannot build ifuncs with LTO. This is a known issue:
https://bugs.llvm.org/show_bug.cgi?id=46488
Move the LTO: never setting down to libc itself, so that we can have
LTO for the rest of the linker.
Test: m GLOBAL_THINLTO=true linker
Change-Id: I483fc3944e340638a664fb390279e211c2ae224b
This also effectively re-enables linker_wrapper, which may have been
independently fixed some time ago.
Test: mixed_droid.sh
Change-Id: I9bc7e099fe3c5da1c4da12c79128baf6f807354a
A recent change to lld [1] made it so that the __rela?_iplt_*
symbols are no longer defined for PIEs and shared libraries. Since
the linker is a PIE, this prevents it from being able to look up
its own relocations via these symbols. We don't need these symbols
to find the relocations however, as their location is available via
the dynamic table. Therefore, start using the dynamic table to find
the relocations instead of using the symbols.
Previously landed in r.android.com/1801427 and reverted in
r.android.com/1804876 due to linux-bionic breakage. This time,
search .rela.dyn as well as .rela.plt, since the linker may put the
relocations in either location (see [2]).
[1] f8cb78e99a
[2] https://reviews.llvm.org/D65651
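A hedged sketch of the lookup via the dynamic table (written for a
64-bit RELA target; rela_offset/rela_size come from DT_RELA/DT_RELASZ,
and .rela.plt is found the same way through DT_JMPREL/DT_PLTRELSZ):
```
ElfW(Rela)* rela = reinterpret_cast<ElfW(Rela)*>(load_bias + rela_offset);
size_t count = rela_size / sizeof(ElfW(Rela));
for (size_t i = 0; i < count; ++i) {
  if (ELF64_R_TYPE(rela[i].r_info) == R_AARCH64_IRELATIVE) {
    // Resolve the ifunc and patch the word at load_bias + r_offset,
    // covering what __rela_iplt_start/__rela_iplt_end used to locate.
  }
}
```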
Bug: 197420743
Change-Id: I5bef157472e9893822e3ca507ef41a15beefc6f1
This reverts commit 65bdf655c4.
Reason for revert: investigating the failure of avd/avd_boot_test
Bug: 197781964
Change-Id: I70eb03b45cdfbd87ef6edb03b74ad6d1970dc08c
A recent change to lld [1] made it so that the __rela?_iplt_*
symbols are no longer defined for PIEs and shared libraries. Since
the linker is a PIE, this prevents it from being able to look up
its own relocations via these symbols. We don't need these symbols
to find the relocations however, as their location is available via
the dynamic table. Therefore, start using the dynamic table to find
the relocations instead of using the symbols.
[1] f8cb78e99a
Change-Id: I4a12ae9f5ffd06d0399e05ec3ecc4211c7be2880
In order to support demangling of Rust symbols by the linker, we are
adding a small Rust component. Rust expects `memalign` to be present in
hosted environments, and it doesn't appear costly to enable it.
Bug: 178565008
Test: m, killall -11 keystore2 produced mangled names in tombstone
Change-Id: I8fc749000fa02a3b760c8cc55be3348b9964d931
Now that linker_wrapper.o does not use objcopy --prefix-symbols=__dlwrap_
it can reference the _start symbol of the original binary without
colliding with its own __dlwrap__start symbol, which means
host_bionic_inject is no longer necessary.
Test: build and run host bionic binary
Change-Id: I1752efa39fa73a092fab039771bf59c99b7b5974
The only symbol that actually needs a prefix to avoid a collision is
_start, and that can be handled with a copy of begin.S that uses a
"#define" to rename _start to __dlwrap__start. Removing the prefixed
symbols will also allow simplifying the host bionic build process by
letting it directly reference the real _start.
Test: build and run host bionic binary
Change-Id: I50be786c16fe04b7f05c14ebfb74f710c7446ed9
To take advantage of file-backed huge pages for the text segments of key
shared libraries (go/android-hugepages), the dynamic linker must load
candidate ELF files at an appropriately aligned address and mark
executable segments with MADV_HUGEPAGE.
This patch uses segments' p_align values to determine when a file is
PMD-aligned (2MB alignment), and performs load operations accordingly.
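A sketch of the advisory step (standard madvise(2); the PMD-size
constant is spelled out for 4KB pages):
```
#include <sys/mman.h>

constexpr size_t kPmdSize = 2 * 1024 * 1024;

// For a PMD-aligned executable segment, hint the kernel to back the
// mapping with huge pages.
if (phdr->p_align == kPmdSize && (phdr->p_flags & PF_X) != 0) {
  madvise(seg_addr, seg_len, MADV_HUGEPAGE);
}
```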
Bug: 158135888
Test: Verified PMD aligned libraries are backed with huge pages on
supporting kernel versions.
Change-Id: Ia2367fd5652f663d50103e18f7695c59dc31c7b9
Introduces a cc_defaults category hugepage_aligned that passes the
requisite linker flags to produce shared object files with 2MB-aligned
sections. This enables supporting platforms to back the text segments of
these libraries with hugepages.
Bug: 158135888
Test: Built and confirmed ELF layout
Change-Id: I5c8ce35d8f8bf6647ec19d58398740bd494cc89c
This works around buggy applications that read a few bytes past the
end of their allocation, which would otherwise cause a segfault with
the concurrent Scudo change that aligns large allocations to the right.
Because the implementation of
android_set_application_target_sdk_version() lives in the linker,
we need to introduce a hook so that libc is notified when the target
SDK version changes.
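A hedged sketch of the hook wiring (the hook pointer and its
registration are hypothetical; only the public function name is real):
```
// In the linker: record the version, then notify libc through a
// registered hook so malloc can adjust its alignment behavior.
void android_set_application_target_sdk_version(int target) {
  g_target_sdk_version = target;
  if (g_libc_sdk_version_hook != nullptr) {  // hypothetical hook
    g_libc_sdk_version_hook(target);
  }
}
```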
Bug: 181344545
Change-Id: Id4be6645b94fad3f64ae48afd16c0154f1de448f
Otherwise, if a 32-bit copy of a library used by Toybox
exists on LD_LIBRARY_PATH, then file(1) will fail.
Bug: 181666541
Test: Manually copied to device and verified correct behaviour
Change-Id: I7d729927b1b433ec953c266920489613fc096e03