To enable experiments with non-4KiB page sizes, introduce
an inline page_size() function that returns either the runtime
page size (if PAGE_SIZE is not 4096) or the constant 4096 (otherwise).
This should ensure that there are no changes to the generated code on
unaffected platforms.
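A rough sketch of such a helper (the guard condition and helper body are
assumptions about how the build distinguishes 4KiB targets, not the literal
bionic code):

    #include <stddef.h>
    #include <unistd.h>

    static inline size_t page_size() {
    #if defined(PAGE_SIZE) && PAGE_SIZE == 4096
      // Folds to a compile-time constant, so codegen on 4KiB platforms is unchanged.
      return PAGE_SIZE;
    #else
      // Ask the kernel for the runtime page size on non-4KiB configurations.
      return static_cast<size_t>(getpagesize());
    #endif
    }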
Test: source build/envsetup.sh
lunch aosp_cf_arm64_16k_phone-userdebug
m -j32 installclean
m -j32
Test: launch_cvd \
-kernel_path /path/to/out/android14-5.15/dist/Image \
-initramfs_path /path/to/out/android14-5.15/dist/initramfs.img \
-userdata_format=ext4
Bug: 277272383
Bug: 230790254
Change-Id: Ic0ed98b67f7c6b845804b90a4e16649f2fc94028
Enable linking on a system without /proc mounted by falling back to
reading the executable path from argv[0] when /proc/self/exe can't be
found.
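A hedged sketch of the fallback (the helper name is illustrative, not the
actual linker code):

    #include <limits.h>
    #include <unistd.h>
    #include <string>

    // Prefer /proc/self/exe; if /proc isn't mounted, trust argv[0].
    static std::string get_executable_path(const char* argv0) {
      char buf[PATH_MAX];
      ssize_t len = readlink("/proc/self/exe", buf, sizeof(buf) - 1);
      if (len > 0) {
        buf[len] = '\0';
        return buf;
      }
      return argv0;  // fallback when /proc/self/exe can't be read
    }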
Bug: 254835242
Change-Id: I0735e873fa4e2f439688722c4a846fb70ff398a5
Map all stacks (primary, thread, and sigaltstack) as PROT_MTE when the
binary requests it through the ELF note.
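Roughly what the stack-protection change amounts to (a sketch only; the real
code also covers thread stacks and sigaltstacks, and PROT_MTE availability on
the target kernel/hardware is an assumption):

    #include <stddef.h>
    #include <sys/mman.h>

    // PROT_MTE is the arm64 memory-tagging protection bit.
    static bool remap_stack_with_mte(void* stack_base, size_t stack_size) {
      int prot = PROT_READ | PROT_WRITE | PROT_MTE;
      return mprotect(stack_base, stack_size, prot) == 0;
    }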
For reference, the note is produced by the following toolchain changes:
https://reviews.llvm.org/D118948
https://reviews.llvm.org/D119384
https://reviews.llvm.org/D119381
Bug: b/174878242
Test: fvp_mini with ToT LLVM (more tests in a separate change)
Change-Id: I04a4e21c966e7309b47b1f549a2919958d93a872
A recent change to lld [1] made it so that the __rela?_iplt_*
symbols are no longer defined for PIEs and shared libraries. Since
the linker is a PIE, this prevents it from being able to look up
its own relocations via these symbols. We don't need these symbols
to find the relocations however, as their location is available via
the dynamic table. Therefore, start using the dynamic table to find
the relocations instead of using the symbols.
Previously landed in r.android.com/1801427 and reverted in
r.android.com/1804876 due to linux-bionic breakage. This time,
search .rela.dyn as well as .rela.plt, since the linker may put the
relocations in either location (see [2]).
[1] f8cb78e99a
[2] https://reviews.llvm.org/D65651
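A sketch of finding the relocation ranges via the dynamic table rather than the
__rela?_iplt_* symbols (RELA form only; the helper name is illustrative):

    #include <link.h>
    #include <stddef.h>

    // Locate .rela.dyn and .rela.plt via DT_RELA/DT_RELASZ and DT_JMPREL/DT_PLTRELSZ,
    // then both ranges can be scanned for R_*_IRELATIVE entries.
    static void find_rela_ranges(const ElfW(Dyn)* dynamic, ElfW(Addr) load_bias,
                                 ElfW(Rela)** rela, size_t* rela_count,
                                 ElfW(Rela)** plt_rela, size_t* plt_rela_count) {
      for (const ElfW(Dyn)* d = dynamic; d->d_tag != DT_NULL; ++d) {
        switch (d->d_tag) {
          case DT_RELA:
            *rela = reinterpret_cast<ElfW(Rela)*>(load_bias + d->d_un.d_ptr);
            break;
          case DT_RELASZ:
            *rela_count = d->d_un.d_val / sizeof(ElfW(Rela));
            break;
          case DT_JMPREL:
            *plt_rela = reinterpret_cast<ElfW(Rela)*>(load_bias + d->d_un.d_ptr);
            break;
          case DT_PLTRELSZ:
            *plt_rela_count = d->d_un.d_val / sizeof(ElfW(Rela));
            break;
        }
      }
    }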
Bug: 197420743
Change-Id: I5bef157472e9893822e3ca507ef41a15beefc6f1
To take advantage of file-backed huge pages for the text segments of key
shared libraries (go/android-hugepages), the dynamic linker must load
candidate ELF files at an appropriately aligned address and mark
executable segments with MADV_HUGEPAGE.
This patch uses segments' p_align values to determine when a file is
PMD aligned (2MB alignment), and performs load operations accordingly.
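A sketch of the alignment check and the advice call (constant and helper names
are illustrative; 2MB PMD size assumes arm64/x86-64):

    #include <link.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static constexpr size_t kPmdSize = 2 * 1024 * 1024;

    // A segment is a huge-page candidate if the static linker aligned it to 2MB.
    static bool segment_is_pmd_aligned(const ElfW(Phdr)* phdr) {
      return phdr->p_align >= kPmdSize;
    }

    // After mapping an executable segment at a 2MB-aligned address, ask the
    // kernel to back it with huge pages where possible.
    static void advise_huge_pages(void* seg_start, size_t seg_size) {
      madvise(seg_start, seg_size, MADV_HUGEPAGE);
    }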
Bug: 158135888
Test: Verified PMD aligned libraries are backed with huge pages on
supporting kernel versions.
Change-Id: Ia2367fd5652f663d50103e18f7695c59dc31c7b9
Setting the linker's soname ("ld-android.so") can allocate heap memory
now that the name uses an std::string, and it's probably a good idea to
defer doing this until after the linker has relocated itself (and after
it has called C++ constructors for global variables.)
Bug: none
Test: bionic unit tests
Test: verify that dlopen("ld-android.so", RTLD_NOLOAD) works
Change-Id: I6b9bd7552c3ae9b77e3ee9e2a98b069b8eef25ca
Use a note in executables to specify
(none|sync|async) heap tagging level. To be extended with (heap x stack x
globals) in the future. A missing note disables all tagging.
Bug: b/135772972
Test: bionic-unit-tests (in a future change)
Change-Id: Iab145a922c7abe24cdce17323f9e0c1063cc1321
This patch adds support to load BTI-enabled objects.
According to the ABI, BTI is recorded in the .note.gnu.property section.
The new parser evaluates the property section, if it exists.
It searches for a .note section with NT_GNU_PROPERTY_TYPE_0.
Once found, it tries to find GNU_PROPERTY_AARCH64_FEATURE_1_AND.
The results are cached.
The main change in linker is when protection of loaded ranges gets
applied. When BTI is requested and the platform also supports it
the prot flags have to be amended with PROT_BTI for executable ranges.
Failing to add the PROT_BTI flag would disable BTI protection.
Moreover, adding the new PROT flag for shared objects without BTI
compatibility would break applications.
The kernel does not add PROT_BTI to a loaded ELF that has an interpreter;
the linker handles this case too.
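A sketch of how the protection flags get amended (the note parsing and
capability check are only stubbed out as booleans here, and PROT_BTI
availability on the target is an assumption):

    #include <link.h>
    #include <sys/mman.h>

    // Convert p_flags to PROT_* bits, then add PROT_BTI only for executable
    // ranges of objects that carry the BTI property, on BTI-capable hardware.
    static int segment_prot(const ElfW(Phdr)* phdr, bool has_bti_note, bool cpu_has_bti) {
      int prot = 0;
      if (phdr->p_flags & PF_R) prot |= PROT_READ;
      if (phdr->p_flags & PF_W) prot |= PROT_WRITE;
      if (phdr->p_flags & PF_X) prot |= PROT_EXEC;
      if ((prot & PROT_EXEC) && has_bti_note && cpu_has_bti) {
        prot |= PROT_BTI;  // arm64-only protection bit
      }
      return prot;
    }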
Test: 1. Flame boots
2. Tested on FVP with BTI enabled
Change-Id: Iafdf223b74c6e75d9f17ca90500e6fe42c4c1218
The search_linked_namespaces parameter to find_library_internal is
always true.
Bug: none
Test: bionic tests
Change-Id: I4b6f48afefca4f52b34ca2c9e0f4335fa895ff34
Symbol lookup is O(L) where L is the number of libraries to search (e.g.
in the global and local lookup groups). Factor out the per-DSO work into
soinfo_do_lookup_impl, and optimize for the situation where all the DSOs
are using DT_GNU_HASH (rather than SysV hashes).
To load a set of libraries, the loader first constructs an auxiliary list
of libraries (SymbolLookupList, containing SymbolLookupLib objects). The
SymbolLookupList is reused for each DSO in a load group. (-Bsymbolic is
accommodated by modifying the SymbolLookupLib at the front of the list.)
To search for a symbol, soinfo_do_lookup_impl has a small loop that first
scans a vector of GNU bloom filters looking for a possible match.
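The per-DSO filter check is roughly the standard GNU-hash bloom test (a sketch
with an illustrative helper name, not the exact SymbolLookupLib code):

    #include <link.h>
    #include <stddef.h>
    #include <stdint.h>

    // Returns false only if the symbol is definitely absent from this DSO.
    static bool gnu_bloom_may_contain(const ElfW(Addr)* bloom, size_t bloom_size_words,
                                      uint32_t shift2, uint32_t gnu_hash) {
      constexpr uint32_t kBits = sizeof(ElfW(Addr)) * 8;
      ElfW(Addr) word = bloom[(gnu_hash / kBits) % bloom_size_words];
      ElfW(Addr) mask = (static_cast<ElfW(Addr)>(1) << (gnu_hash % kBits)) |
                        (static_cast<ElfW(Addr)>(1) << ((gnu_hash >> shift2) % kBits));
      return (word & mask) == mask;
    }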
There was a slight improvement from templatizing soinfo_do_lookup_impl
and skipping the does-this-DSO-lack-GNU-hash check.
Rewrite the relocation processing loop to be faster. There are specialized
functions that handle the expected relocation types in normal relocation
sections and in PLT relocation sections.
This CL can reduce the initial link time of large programs by around
40-50% (e.g. audioserver, cameraserver, etc). On the linker relocation
benchmark (64-bit walleye), it reduces the time from 131.6ms to 71.9ms.
Bug: http://b/143577578 (incidentally fixed by this CL)
Test: bionic-unit-tests
Change-Id: If40a42fb6ff566570f7280b71d58f7fa290b9343
With -O0, on arm64, Clang uses memset for this declaration:
bionic_tcb temp_tcb = {};
arm64 doesn't currently have an ifunc for memset, but if it did, then this
line would crash when the linker is compiled with -O0. It looks like other
architectures would only use a memset call if the bionic_tcb struct were
larger.
Avoid memset by using a custom memclr function that the compiler optimizes
into something efficient.
Also add __attribute__((uninitialized)) to ensure that
-ftrivial-auto-var-init does not generate a call to memset. See this
change[1] in build/soong.
[1] If085ec53c619e2cebc86ca23f7039298160d99ae
Test: build linker with -O0, linker[64] --help works
Bug: none
Change-Id: I0df8065a362646de4fa021cae63a7d68ca3966b6
Using ifuncs allows the linker to select faster versions of libc functions
like strcmp, making linking faster.
The linker continues to first initialize TLS, then call the ifunc
resolvers. There are small amounts of code in Bionic that need to avoid
calling functions selected using ifuncs (generally string.h APIs). I've
tried to compile those pieces with -ffreestanding. Maybe it's unnecessary,
but maybe it could help avoid compiler-inserted memset calls, and maybe
it will be useful later on.
The ifuncs are called in a special early pass using special
__rel[a]_iplt_start / __rel[a]_iplt_end symbols. The linker will encounter
the ifuncs again as R_*_IRELATIVE dynamic relocations, so they're skipped
on the second pass.
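The early pass is essentially the following (a sketch in the RELA form used on
64-bit targets; 32-bit REL targets differ, and the exact bionic code may too):

    #include <link.h>

    // Provided by the static linker to delimit the IRELATIVE entries.
    extern ElfW(Rela) __rela_iplt_start[], __rela_iplt_end[];

    // Call every resolver and store the returned function address.
    // load_bias is zero for a non-PIE static executable.
    static void call_ifunc_resolvers(ElfW(Addr) load_bias) {
      for (ElfW(Rela)* r = __rela_iplt_start; r != __rela_iplt_end; ++r) {
        ElfW(Addr)* offset = reinterpret_cast<ElfW(Addr)*>(r->r_offset + load_bias);
        ElfW(Addr) resolver = r->r_addend + load_bias;
        *offset = reinterpret_cast<ElfW(Addr) (*)()>(resolver)();
      }
    }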
Break linker_main.cpp into its own liblinker_main library so it can be
compiled with -ffreestanding.
On walleye, this change fixes a recent 2.3% linker64 start-up time
regression (156.6ms -> 160.2ms), but it also helps the 32-bit time by
about 1.9% on the same benchmark. I'm measuring the run-time using a
synthetic benchmark based on loading libandroid_servers.so.
Test: bionic unit tests, manual benchmarking
Bug: none
Merged-In: Ieb9446c2df13a66fc0d377596756becad0af6995
Change-Id: Ieb9446c2df13a66fc0d377596756becad0af6995
(cherry picked from commit 772bcbb0c2)
Define a "linker_bin_template" cc_defaults module that a native bridge
implementation can inherit to define a guest linker.
Break the debuggerd_init call off into separate
linker_debuggerd_{android,stub}.cpp files to allow opting in/out of the
debuggerd integration without needing to change how linker_main.cpp is
compiled. (This is necessary for a later commit that moves
linker_main.cpp into a new static library.)
Test: bionic unit tests
Bug: none
Merged-In: I7c5d79281bce1e69817b266dd91d43ea40f78522
Change-Id: I7c5d79281bce1e69817b266dd91d43ea40f78522
(cherry picked from commit 5adf402ee9)
COUNT_PAGES tries to count the pages dirtied by relocations, but this
implementation is broken because it's merging rel->r_offset values from
multiple DSOs. The functionality is hard to use, because it requires
rebuilding the linker, and it's not obvious to me that it should belong
in the linker. If we do want it, we should make it work without rebuilding
the linker.
Similar information can currently be collected by parsing the result of
`readelf -r` on a binary (or a set of binaries).
Bug: none
Test: m linker libc com.android.runtime ; adb sync ; run something
Change-Id: I760fb6ea4ea3d1927eb5145cdf4ca133851d69b4
The linker currently sets VMA name ".bss" for bss sections in DSOs
loaded by the linker. With this change, the linker now also sets VMA
name for bss sections in the linker itself and the main executable, so
that they don't get left out in various accounting.
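Naming an anonymous mapping uses the PR_SET_VMA prctl; a sketch of the kind of
call involved (the fallback #define values are an assumption about the kernel
UAPI, and the helper name is illustrative):

    #include <stddef.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_VMA
    #define PR_SET_VMA 0x53564d41        // assumed Android/Linux 5.17+ UAPI value
    #define PR_SET_VMA_ANON_NAME 0
    #endif

    // Label an anonymous mapping so it shows up as "[anon:.bss]" in /proc/<pid>/maps.
    // Older kernels may lack the feature, so failure is non-fatal.
    static void name_bss_vma(void* start, size_t size) {
      prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, start, size, ".bss");
    }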
Test: Run 'dd' and check its /proc/<pid>/maps.
Change-Id: I62d9996ab256f46e2d82cac581c17fa94836a228
Ordinary executables have a PT_INTERP path of /system/bin/linker[64], but:
- executables using bootstrap Bionic use /system/bin/bootstrap/linker[64]
- ASAN executables use /system/bin/linker_asan[64]
gdb appears to use the PT_INTERP path for debugging the dynamic linker
before the linker has initialized the r_debug module list. If the linker's
l_name differs from PT_INTERP, then gdb assumes that the linker has been
unloaded and searches for a new solib using the linker's l_name path.
gdb may print a warning like:
warning: Temporarily disabling breakpoints for unloaded shared library "$OUT/symbols/system/bin/linker64"
If I'm currently debugging the linker when this happens, gdb apparently
doesn't load debug symbols for the linker. This can be worked around with
gdb's "sharedlibrary" command, but it's better to avoid it.
Previously, when PT_INTERP was the bootstrap linker, but l_name was
"/system/bin/linker[64]", gdb would find the default non-bootstrap linker
binary and (presumably) get confused about symbol addresses.
(Also, remove the "static std::string exe_path" variable because the
soinfo::realpath_ field is a std::string that already lasts until exit. We
already use it for link_map_head.l_name in notify_gdb_of_load.)
Bug: http://b/134183407
Test: manual
Change-Id: I9a95425a3a5e9fd01e9dd272273c6ed3667dbb9a
Given that we have both linker and linker64, I didn't really want to have
to have ldd and ldd64, so this change just adds the --list option to the
linkers and a shell script wrapper "ldd" that calls the appropriate
linker behind the scenes.
Test: adb shell linker --list `which app_process32`
Test: adb shell linker64 --list `which date`
Test: adb shell ldd `which app_process32`
Test: adb shell ldd `which date`
Change-Id: I33494bda1cc3cafee54e091f97c0f2ae52d1f74b
- Show which executable is being linked, which linker config file is
being read, and which section in it is being used; enabled when
$LD_DEBUG>=1.
- Show more info to follow the dlopen() process, enabled with "dlopen"
in the debug.ld.xxx property.
Test: Flash, boot, and look at logcat after "adb shell setprop debug.ld.all dlopen"
Bug: 120430775
Change-Id: I5441c8ced26ec0e2f04620c3d2a1ae860b792154
Introduce a new flag ANDROID_DLEXT_RESERVED_ADDRESS_RECURSIVE which
instructs the linker to use the reserved address space to load all of
the newly-loaded libraries required by a dlopen() call instead of only
the main library. They will be loaded consecutively into that region if
they fit. The RELRO sections of all the loaded libraries will also be
considered for reading/writing shared RELRO data.
This will allow the WebView implementation to potentially consist of
more than one .so file while still benefiting from the RELRO sharing
optimisation, which would otherwise only apply to the "root" .so file.
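A hedged usage sketch from the caller's side, assuming a pre-reserved region
and the existing android_dlopen_ext API (the library name and helper are
illustrative):

    #include <android/dlext.h>
    #include <dlfcn.h>
    #include <stddef.h>

    // Reserve one region up front, then let the linker place the root library
    // and all of its newly-loaded dependencies inside it.
    static void* load_webview_libs(void* reserved, size_t reserved_size) {
      android_dlextinfo info = {};
      info.flags = ANDROID_DLEXT_RESERVED_ADDRESS |
                   ANDROID_DLEXT_RESERVED_ADDRESS_RECURSIVE;
      info.reserved_addr = reserved;
      info.reserved_size = reserved_size;
      return android_dlopen_ext("libwebviewchromium.so", RTLD_NOW, &info);
    }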
Test: bionic-unit-tests (existing and newly added)
Bug: 110790153
Change-Id: I61da775c29fd5017d9a1e2b6b3757c3d20a355b3
When the linker is invoked on itself, (`linker64 /system/bin/linker64`),
the linker prints an error, because self-invocation isn't allowed. The
current method for detecting self-invocation fails because the second
linker instance can crash in a constructor function before reaching
__linker_init.
Fix the problem by moving the error check into a constructor function,
which finishes initializing libc sufficiently to call async_safe_fatal.
The only important thing missing is __libc_sysinfo on 32-bit x86. The aux
vector isn't readily accessible, so use the fallback int 0x80.
Bug: http://b/123637025
Test: bionic unit tests (32-bit x86)
Change-Id: I8be6369e8be3938906628ae1f82be13e6c510119
This is the second attempt to purge linker block allocators. Unlike the
previously reverted change, which purged allocators whenever all objects
were freed, we only purge right before control leaves the linker. This
limits the performance impact to one munmap() call per dlopen(), in
most cases.
Bug: 112073665
Test: Boot and check memory usage with 'showmap'.
Test: Run camera cold start performance test.
Change-Id: I02c7c44935f768e065fbe7ff0389a84bd44713f0
For ELF TLS "local-exec" accesses, the static linker assumes that an
executable's TLS segment is located at a statically-known offset from the
thread pointer (i.e. "variant 1" for ARM and "variant 2" for x86).
Because these layouts are incompatible, Bionic generally needs to allocate
its TLS slots differently between different architectures.
To allow per-architecture TLS slots:
- Replace the TLS_SLOT_xxx enumerators with macros. New ARM slots are
generally negative, while new x86 slots are generally positive.
- Define a bionic_tcb struct that provides two things:
- a void* raw_slots_storage[BIONIC_TLS_SLOTS] field
- an inline accessor function: void*& tls_slot(size_t tpindex);
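A sketch of that struct (the slot constants below are illustrative
placeholders, not the real per-architecture layout):

    #include <stddef.h>

    constexpr int MIN_TLS_SLOT = -1;  // most-negative slot index (placeholder)
    constexpr int MAX_TLS_SLOT = 8;   // most-positive slot index (placeholder)
    constexpr size_t BIONIC_TLS_SLOTS = MAX_TLS_SLOT - MIN_TLS_SLOT + 1;

    struct bionic_tcb {
      void* raw_slots_storage[BIONIC_TLS_SLOTS];

      // Return a reference to a slot given its thread-pointer-relative index.
      // Negative indices are rebased into the array; passing them through a
      // size_t works because the subtraction wraps modulo 2^N.
      void*& tls_slot(size_t tpindex) {
        return raw_slots_storage[tpindex - MIN_TLS_SLOT];
      }
    };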
For ELF TLS, it's necessary to allocate a temporary TCB (i.e. TLS slots),
because the runtime linker doesn't know how large the static TLS area is
until after it has loaded all of the initial solibs.
To accommodate Golang, it's necessary to allocate the pthread keys at a
fixed, small, positive offset from the thread pointer.
This CL moves the pthread keys into bionic_tls, then allocates a single
mapping per thread that looks like so:
- stack guard
- stack [omitted for main thread and with pthread_attr_setstack]
- static TLS:
- bionic_tcb [exec TLS will either precede or succeed the TCB]
- bionic_tls [prefixed by the pthread keys]
- [solib TLS segments will be placed here]
- guard page
As before, if the new mapping includes a stack, the pthread_internal_t
is allocated on it.
At startup, Bionic allocates a temporary bionic_tcb object on the stack,
then allocates a temporary bionic_tls object using mmap. This mmap is
delayed because the linker can't currently call async_safe_fatal() before
relocating itself.
Later, Bionic allocates a stack-less thread mapping for the main thread,
and copies slots from the temporary TCB to the new TCB.
(See *::copy_from_bootstrap methods.)
Bug: http://b/78026329
Test: bionic unit tests
Test: verify that a Golang app still works
Test: verify that a Golang app crashes if bionic_{tls,tcb} are swapped
Merged-In: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
Change-Id: I6543063752f4ec8ef6dc9c7f2a06ce2a18fc5af3
(cherry picked from commit 1e660b70da)
Instead of passing the address of a KernelArgumentBlock to libc.so for
initialization, use __loader_shared_globals() to initialize globals.
Most of the work happened in the previous CLs. This CL switches a few
KernelArgumentBlock::getauxval calls to [__bionic_]getauxval and stops
routing the KernelArgumentBlock address through the libc init functions.
Bug: none
Test: bionic unit tests
Change-Id: I96c7b02c21d55c454558b7a5a9243c682782f2dd
Merged-In: I96c7b02c21d55c454558b7a5a9243c682782f2dd
(cherry picked from commit 746ad15912)
Split __libc_init_main_thread into __libc_init_main_thread_early and
__libc_init_main_thread_late. The early function is called very early in
the startup of the dynamic linker and static executables. It initializes
the global auxv pointer and enough TLS memory to do system calls, access
errno, and run -fstack-protector code (but with a zero cookie because the
code for generating a cookie is complex).
After the linker is relocated, __libc_init_main_thread_late finishes
thread initialization.
Bug: none
Test: bionic unit tests
Change-Id: I6fcd8d7587a380f8bd649c817b40a3a6cc1d2ee0
Merged-In: I6fcd8d7587a380f8bd649c817b40a3a6cc1d2ee0
(cherry picked from commit 39bc44bb0e)
* changes:
Use shared globals to init __progname + environ
Move the abort message to libc_shared_globals
Expose libc_shared_globals to libc.so with symbol
Initialize the __progname and environ global variables using
libc_shared_globals rather than KernelArgumentBlock.
Also: suppose the linker is invoked on an executable:
linker prog [args...]
The first argument passed to main() and constructor functions is "prog"
rather than "linker". For consistency, this CL changes the BSD
__progname global from "linker" to "prog".
Bug: none
Test: bionic unit tests
Change-Id: I376d76953c9436706dbc53911ef6585c1acc1c31
__libc_shared_globals() is available in dynamic modules as soon as
relocation has finished (i.e. after ifuncs run). Before ifuncs have run,
the android_set_abort_message() function already doesn't work because it
calls public APIs via the PLT. (If this matters, we can use a static
bool variable to enable android_set_abort_message after libc
initialization).
__libc_shared_globals() is hidden, so it's available in the linker
immediately (i.e. before relocation). TLS memory (e.g. errno) currently
isn't accessible until after relocation, but a later patch fixes that.
Bug: none
Test: bionic unit tests
Change-Id: Ied4433758ed2da9ee404c6158e319cf502d05a53
Previously, the address of the global variable was communicated from the
dynamic linker to libc.so using a field of KernelArgumentBlock, which is
communicated using the TLS_SLOT_BIONIC_PREINIT slot.
As long as this function isn't called during relocations (i.e. while
executing an ifunc), it always returns a non-NULL value. If it's called
before its PLT entry is relocated, I expect a crash.
I removed the __libc_init_shared_globals function. It's currently empty,
and I don't think there's a single point in libc's initialization where
the shared globals should be initialized.
Bug: http://b/25751302
Test: bionic unit tests
Change-Id: I614d25e7ef5e0d2ccc40d5c821dee10f1ec61c2e
__sanitize_environment_variables is only called when getauxval(AT_SECURE)
is true.
Instead of scanning __libc_auxv, reuse getauxval. If the entry is missing,
getauxval will set errno to ENOENT.
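For example (relying on the errno behavior described above; the helper name
and the "treat missing as not secure" policy are illustrative):

    #include <errno.h>
    #include <sys/auxv.h>

    // Distinguish "value is 0" from "entry not present in the aux vector".
    static bool is_secure() {
      errno = 0;
      unsigned long secure = getauxval(AT_SECURE);
      if (secure == 0 && errno == ENOENT) {
        return false;  // AT_SECURE missing entirely
      }
      return secure != 0;
    }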
Reduce the number of times that __libc_sysinfo and __libc_auxv are
initialized. (Previously, __libc_sysinfo was initialized 3 times for the
linker's copy). The two variables are initialized in these places:
- __libc_init_main_thread for libc.a (including the linker copy)
- __libc_preinit_impl for libc.so
- __linker_init: the linker's copy of __libc_sysinfo is still initialized
twice, because __libc_init_main_thread runs after relocation. A later
CL consolidates the linker's two initializations.
Bug: none
Test: bionic unit tests
Change-Id: I196f4c9011b0d803ee85c07afb415fcb146f4d65
Change three things regarding the workaround for the fact that init is
special:
1) Only first-stage init is special, so change the check to include
accessing /proc/self/exe, which, if available, means that we're
not first-stage init and do not need any workarounds.
2) Fix the fact that /init may be a symlink and may need readlink().
3) Suppress errors from realpath_fd(), since these are expected to fail
due to /proc not being mounted.
Bug: 80395578
Test: sailfish boots without the audit generated from calling stat()
on /init and without the errors from realpath_fd()
Change-Id: I266f1486b142cb9a41ec791eba74122bdf38cf12
The executable can be inside a zip file using the same syntax used for
shared objects: path.zip!/libentry.so.
The linker currently requires an absolute path. This restriction could be
loosened, but it didn't seem important? If it allowed non-absolute paths,
we'd need to decide how to handle:
- foo/bar (relative to CWD?)
- foo (search PATH / LD_LIBRARY_PATH, or also relative to CWD?)
- foo.zip!/bar (normalize_path() requires an absolute path)
The linker adjusts the argc/argv passed to main() and to constructor
functions to hide the initial linker argument, but doesn't adjust the auxv
vector or files like /proc/self/{exe,cmdline,auxv,stat}. Those files will
report that the kernel loaded the linker as an executable.
I think the linker_logger.cpp change guarding against (g_argv == NULL)
isn't actually necessary, but it seemed like a good idea given that I'm
delaying initialization of g_argv until after C++ constructors have run.
Bug: http://b/112050209
Test: bionic unit tests
Change-Id: I846faf98b16fd34218946f6167e8b451897debe5
* Initialize the exe's l_ld correctly, and initialize its l_addr field
earlier.
* Copy the phdr/phnum fields from the linker's temporary soinfo to its
final soinfo. This change ensures that dl_iterate_phdr shows the phdr
table for the linker.
* Change init_linker_info_for_gdb a little: use an soinfo's fields to
init the soinfo::link_map_head field, then reuse the new
init_link_map_head function to handle the linker and the executable.
Test: manual
Test: bionic-unit-tests
Bug: https://issuetracker.google.com/112627083
Bug: http://b/110967431
Change-Id: I40fad2c4d48f409347aaa1ccb98d96db89da1dfe
gdbserver assumes that the first entry is the exe, so it must come
first.
Fixes debugging of executables with gdb.
Bug: https://issuetracker.google.com/112627083
Bug: http://b/110967431
Test: gdbclient.py -r toybox
Change-Id: I7b30398d679c3f8b92d8d02572f9073ae0fce798
When the linker is invoked directly, rather than as an interpreter for a
real program, the AT_BASE value is 0. To find the linker's base address,
the linker currently relies on the static linker populating the target of
a RELA relocation with an offset rather than leaving it zero. (With lld,
it will require a special flag, --apply-dynamic-relocs.)
Instead, do something more straightforward: the linker already finds the
executable's base address using its PHDR table, so do the same thing when
the linker is run by itself.
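The calculation is roughly the standard phdr-based one (a sketch; the helper
name is illustrative and the real linker code may differ in details):

    #include <link.h>
    #include <stddef.h>

    // Compute the load bias of the ELF whose header is mapped at `ehdr`:
    // the difference between where a PT_LOAD segment was actually mapped and
    // the virtual address it was linked at.
    static ElfW(Addr) get_load_bias(const ElfW(Ehdr)* ehdr) {
      ElfW(Addr) base = reinterpret_cast<ElfW(Addr)>(ehdr);
      const ElfW(Phdr)* phdr =
          reinterpret_cast<const ElfW(Phdr)*>(base + ehdr->e_phoff);
      for (size_t i = 0; i < ehdr->e_phnum; ++i) {
        if (phdr[i].p_type == PT_LOAD) {
          return base + phdr[i].p_offset - phdr[i].p_vaddr;
        }
      }
      return 0;
    }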
Bug: http://b/72789859
Test: boots, run linker/linker64 by itself
Change-Id: I4da5c346ca164ea6f4fbc011f8c3db4e6a829456
Add two functions to allow objects that own a file descriptor to
enforce that only they can close their file descriptor.
Use them in FILE* and DIR*.
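A hedged sketch of how an fd-owning object might use the pair of functions
exposed in <android/fdsan.h> (the wrapper type and its simplified tagging
scheme are illustrative):

    #include <android/fdsan.h>
    #include <stdint.h>
    #include <unistd.h>

    // The owner tags the fd on acquisition and must present the same tag to
    // close it; a stray close() from unrelated code is then flagged.
    struct OwnedFd {
      explicit OwnedFd(int fd) : fd_(fd) {
        android_fdsan_exchange_owner_tag(fd_, /*expected_tag=*/0, tag());
      }
      ~OwnedFd() { android_fdsan_close_with_tag(fd_, tag()); }
      uint64_t tag() const {
        return static_cast<uint64_t>(reinterpret_cast<uintptr_t>(this));
      }
      int fd_;
    };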
Bug: http://b/110100358
Test: bionic_unit_tests
Test: aosp/master boots without errors
Test: treehugger
Change-Id: Iecd6e8b26c62217271e0822dc3d2d7888b091a45
init is now built as a dynamic executable, so the dynamic linker has to
be able to run in the init process. However, since init is launched so
early, even the /dev/* and /proc/* file systems are not mounted, and thus
some APIs that rely on those paths do not work. The dynamic linker now
takes an alternative path when it is running in the init process.
For example, /proc/self/exe is not read for init since we always know
its path (/init). Also, the arc4random* APIs are not used since they
rely on /dev/urandom; as a result, the linker does not randomize library
loading order and addresses when running in the init process.
Bug: 80454183
Test: `adb reboot recovery; adb devices` shows the device ID
Change-Id: I29b6d70e4df5f7f690876126d5fe81258c1d3115
A GOT lookup happening prior to soinfo::link_image causes a segfault. With
-O0, the compiler moves GOT lookups from after __linker_init's link_image
call to the start of __linker_init.
Rename the existing __linker_init_post_relocation to linker_main, then
extract the existing post-link_image code to a new
__linker_init_post_relocation function.
Bug: http://b/80503879
Test: /data/nativetest64/bionic-unit-tests/bionic-unit-tests
Test: manual
Change-Id: If8a470f8360acbe35e2a308b0fbff570de6131cf
__libc_sysinfo is hidden, so accessing it doesn't require a relocated GOT.
It is important not to have a relocatable initializer on __libc_sysinfo,
because if it did have one, and if we initialized it before relocating the
linker, then on 32-bit x86 (which uses REL rather than RELA), the
relocation step would calculate the wrong addend and overwrite
__libc_sysinfo with garbage.
Asides:
* It'd be simpler to keep the __libc_sysinfo initializer for static
executables, but the loader pulls in libc_init_static (even though it
uses almost none of the code in that file, like __libc_init).
* The loader has called __libc_init_sysinfo three times by the time it
has relocated itself. A static executable calls it twice, while libc.so
calls it only once.
Bug: none
Test: lunch aosp_x86-userdebug ; emulator
Test: adb shell /data/nativetest/bionic-unit-tests/bionic-unit-tests
Test: adb shell /data/nativetest/bionic-unit-tests-static/bionic-unit-tests-static
Change-Id: I5944f57847db7191608f4f83dde22b49e279e6cb
- It is only needed for dynamic executables, so move the initialization
out of __libc_init_main_thread to just before the solib constructor
calls. For static executables, the slot was initialized, then never
used or cleared. Instead, leave it clear.
- For static executables, __libc_init_main_thread already initialized the
stack guard, so remove the redundant __init_thread_stack_guard call.
- Simplify the slot access/clearing a bit in __libc_preinit.
- Remove the "__libc_init_common() will change the TLS area so the old one
won't be accessible anyway." comment. AFAICT, it's incorrect -- the
main thread's TLS area in a dynamic executable is initialized to a
static pthread_internal_t object in the linker, then reused by libc.so.
Test: adb shell /data/nativetest/bionic-unit-tests/bionic-unit-tests
Test: adb shell /data/nativetest/bionic-unit-tests-static/bionic-unit-tests-static
Change-Id: Ie2da6f5be3ad563fa65b38eaadf8ba6ecc6a64b6
The vdso should be available in all namespaces when present. This
bug went undetected because of the way libc currently uses the vdso (it
does all the lookups itself). This change makes it available to
programs that want to take advantage of it by dlopen()ing it.
Bug: http://b/73105445
Bug: http://b/79561555
Test: adb shell /data/nativetest/arm/bionic-unit-tests/bionic-unit-tests --gtest_filter=dl.exec_with_ld_config_file
Test: adb shell /data/nativetest/bionic-unit-tests/bionic-unit-tests --gtest_filter=dl*
Change-Id: I8eae0c9848f256190d1c9ec85d10dc6ce383a8bc
(cherry picked from commit 69c68c46ac)
This change addresses multiple problems introduced by
02586a2a34
1. In the case of an unsuccessful dlopen, the failure guard is triggered
for two namespaces, which leads to a double unload.
2. In the case where load_tasks includes libraries from 3 or more
namespaces, it results in incorrect linking of libraries shared between
the second and third/fourth (and so on) namespaces.
The root cause of these problems was the recursive call to find_libraries.
It does not do what it is expected to do: it does not form a new load_tasks
list and immediately jumps to linking the local_group. Not only does this skip
reference counting, it also includes an unlinked but accessible library
from the third (and fourth and fifth) namespaces in an invalid local group. The
best-case scenario here is that for 3 or more namespaces this will
fail to link. The worst-case scenario is that it will link the library
incorrectly, which will lead to very hard to catch bugs.
This change removes the recursive call and replaces it with an explicit list of
local_groups which should be linked. It also revisits the way we do
reference counting - with this change the reference counts are updated
after libraries are successfully loaded.
Also update soinfo_free to abort when the linker tries to free the same
soinfo for the second time - this makes linker behavior less undefined.
Test: bionic-unit-tests
Bug: http://b/69787209
Change-Id: Iea25ced181a98c6503cce6e2b832c91d697342d5
vector::erase(iterator) erases the element that that iterator points
to, vector::erase(iterator a, iterator b) erases the range [a, b), with
a == b being a no-op.
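For example:

    #include <vector>

    void erase_examples() {
      std::vector<int> v = {1, 2, 3};
      v.erase(v.begin());             // single-iterator form: removes *begin() -> {2, 3}
      v.erase(v.begin(), v.begin());  // range form with a == b: removes nothing
    }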
Test: LD_PRELOAD=libc.so sh
Change-Id: I6a85c1cfaa8eb67756cb75d421f332d5c9a43a33
std::remove_if moves removed elements to the end, without actually
resizing the collection. To do so, you have to call erase on its
returned iterator.
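The fixed pattern is the usual erase-remove idiom, e.g.:

    #include <algorithm>
    #include <vector>

    void remove_zeros(std::vector<int>& v) {
      // std::remove_if only shifts the kept elements to the front and returns
      // the new logical end; erase() actually shrinks the vector.
      v.erase(std::remove_if(v.begin(), v.end(), [](int x) { return x == 0; }),
              v.end());
    }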
Test: mma
Change-Id: Iae7f2f194166408f2b101d0c1cfc95202d8bbe63