The only symbol that actually needs a prefix to avoid a collision is
_start, and that can be handled with a copy of begin.S that uses a
"#define" to rename _start to __dlwrap__start. Removing the prefixed
symbols will also allow simplifying the host bionic build process by
letting it directly reference the real _start.
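A minimal sketch of the #define approach (the wrapper file name and the
include path are illustrative, not the actual build rules):

  /* dlwrap_begin.S (hypothetical): .S files go through the C
   * preprocessor, so renaming the entry point is one line before
   * reusing the unmodified begin.S. */
  #define _start __dlwrap__start
  #include "arch-arm64/bionic/begin.S"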
Test: build and run host bionic binary
Change-Id: I50be786c16fe04b7f05c14ebfb74f710c7446ed9
A static analyzer is complaining that num_valid_bits could be 64, and if
it were 64, then two later accesses would be out of bounds. That would
require is_nul_u64 to be zero, though, and it can't be: we only exit the
loop when part of is_nul is non-zero.
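As a minimal illustration of the invariant (the names mirror the message,
not the actual string routine), a bit scan over a non-zero 64-bit word can
never yield 64:

  #include <stdint.h>

  // Precondition from the surrounding loop: is_nul_u64 != 0, so the
  // result is in [0, 63] and the later accesses stay in bounds.
  static inline unsigned num_valid_bits_of(uint64_t is_nul_u64) {
    return (unsigned)__builtin_ctzll(is_nul_u64);  // never 64 when non-zero
  }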
Bug: none
Test: manual
Change-Id: I75c3f70b600aa5478cb32fdf4ca0ae1173b69524
There are places in frameworks and art code that directly included
private bionic header files. Move those headers to the new platform
include files.
This change also moves the __get_tls.h header file to tls.h and has it
include the TLS defines header, so that there is a single header platform
code can use to get both __get_tls and the defines.
Also, simplify the visibility rules for platform includes.
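Hypothetical usage after the move (the include path and the slot macro
are assumptions based on the description above, not verified paths):

  #include <platform/bionic/tls.h>  // replaces the old private __get_tls.h

  // Platform code now gets __get_tls() and the TLS_SLOT_* defines from
  // this single header.
  static void* stack_guard_slot(void) {
    return __get_tls()[TLS_SLOT_STACK_GUARD];
  }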
Bug: 141560639
Test: Builds and bionic unit tests pass.
Change-Id: I9e5e9c33fe8a85260f69823468bc9d340ab7a1f9
Merged-In: I9e5e9c33fe8a85260f69823468bc9d340ab7a1f9
(cherry picked from commit 44631c919a)
Each TLSDESC relocation relocates a 2-word descriptor in the GOT that
contains:
- the address of a TLS resolver function
- an argument to pass (indirectly) to the resolver function
(Specifically, the address of the 2-word descriptor is passed to the
resolver.)
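An illustrative C view of the descriptor (the struct and field names are
invented for this sketch; only the two-word shape comes from the
relocation format):

  #include <stddef.h>

  struct TlsDescriptorSketch {
    size_t (*resolver)(struct TlsDescriptorSketch* desc);  // set by the loader
    size_t arg;  // meaning depends on which resolver was installed
  };

  // Compiler-generated code conceptually does:
  //   offset  = desc->resolver(desc);    // &desc passed in x0, result in x0
  //   address = thread_pointer + offset;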
The loader resolves R_GENERIC_TLSDESC relocations using one of three
resolver functions that it defines:
- tlsdesc_resolver_static
- tlsdesc_resolver_dynamic
- tlsdesc_resolver_unresolved_weak
The resolver functions are written in assembly because they have a
restrictive calling convention. They're only allowed to modify x0 and
(apparently) the condition codes.
For a relocation to memory in static TLS (i.e. the executable or a solib
loaded initially), the loader uses a simple resolver function,
tlsdesc_resolver_static, that returns the static offset it receives from
the loader.
For relocations to dynamic TLS memory (i.e. memory in a dlopen'ed solib),
the loader uses tlsdesc_resolver_dynamic, which allocates TLS memory on
demand. It inlines the fast path of __tls_get_addr, then falls back to
__tls_get_addr when it needs to allocate memory. The loader handles these
dynamic TLS relocations in two passes:
- In the first pass, it allocates a table of TlsDynamicResolverArg
objects, one per dynamic TLSDESC relocation.
- In the second pass, once the table is finalized, it writes the
addresses of the TlsDynamicResolverArg objects into the TLSDESC
relocations.
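A rough sketch of the per-relocation argument in that table
(TlsDynamicResolverArg is the real name; the fields here are assumptions
about what a generation-checked module/offset pair looks like):

  #include <stddef.h>

  struct TlsIndexSketch {
    size_t module_id;  // which dlopen'ed solib owns the variable
    size_t offset;     // offset within that module's TLS block
  };

  struct TlsDynamicResolverArgSketch {
    size_t generation;            // lets the fast path detect a stale DTV
    struct TlsIndexSketch index;  // handed to __tls_get_addr on the slow path
  };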
tlsdesc_resolver_unresolved_weak returns a negated thread pointer so that
taking the address of an unresolved weak TLS symbol produces NULL.
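The arithmetic is easiest to see in C (this isn't the real assembly
resolver, just the value it computes):

  #include <stdint.h>

  // Returning the negated thread pointer makes the generic
  // "tp + resolver(desc)" computation come out to 0, i.e. NULL.
  static uintptr_t unresolved_weak_result(uintptr_t tp) {
    return 0 - tp;  // tp + (0 - tp) == 0
  }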
The loader handles R_GENERIC_TLSDESC in a target-independent way, but
only for arm64, because Bionic has only implemented the resolver functions
for arm64.
Bug: http://b/78026329
Test: bionic unit tests
Test: check that backtrace works inside a resolver function and inside
__tls_get_addr called from a resolver
(gdbclient.py, b __tls_get_addr, bt)
Merged-In: I752e59ff986292449892c449dad2546e6f0ff7b6
Change-Id: I752e59ff986292449892c449dad2546e6f0ff7b6
Every other architecture already uses an assembly file here.
The previous code aligned ESP incorrectly, but it doesn't really matter
because everything is built with Clang's -mstackrealign, which realigns
ESP in every function prologue.
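For reference, the alignment -mstackrealign re-establishes is the usual
16-byte one (a sketch of the rule only; begin.S itself is assembly):

  #include <stdint.h>

  // Clear the low four bits so the stack pointer is 16-byte aligned,
  // which is what every -mstackrealign prologue does anyway.
  static inline uintptr_t align_down_16(uintptr_t esp) {
    return esp & ~(uintptr_t)15;
  }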
Bug: http://b/73140672#comment4
Test: lunch aosp_x86-eng; m; emulator; device boots
Test: manual
Change-Id: I921fd7848cdc611b4f8f13d1176d1983ffea952d
I noticed that sometimes the old unwinder will add an extra PC 0 frame
after this change, but the new unwinder works in all cases. I'm not going
to fix the old unwinder since I plan to remove it very soon.
Bug: 67784501
Test: Forced a crash in the linker and verified that the unwind
Test: stops in __dl_start. Tested on arm/aarch64/x86/x86_64.
Change-Id: Id6585768023256be5c1d341df7b06b786a220b40
From Linux 3.17 6ebbf2ce437b33022d30badd49dc94d33ecfa498:
ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls. Recent CPUs perform better when the
"bx lr" instruction is used rather than the "mov pc, lr" instruction,
and this sequence is strongly recommended to be used by the ARM
architecture manual (section A.4.1.1).
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Test: No regressions detected
Test: Passes full CTS run
Change-Id: Ie268f9893e3df0f68fbfe82a13f3c7cc5c5909d8
Signed-off-by: Alex Naidis <alex.naidis@linux.com>
There's no need: __linker_init only takes one argument.
Also remove the arm __CTOR_LIST__; we use .init_array and .fini_array instead
of .ctors and .dtors anyway, and I don't think we've ever supported the latter.
Change-Id: Ifc91a5a90c6aa39d674bf0509a7af2e1ff0beddd
Our <machine/asm.h> files were modified from upstream, to the extent
that no architecture was actually using the upstream ENTRY or END macros,
assuming that architecture even had such a macro upstream. This patch moves
everyone to the same macros, with just a few tweaks remaining in the
<machine/asm.h> files, which no one should now use directly.
I've removed most of the unused cruft from the <machine/asm.h> files, though
there's still rather a lot in the mips/mips64 ones.
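For illustration, the shared macros look roughly like this (a simplified
sketch; the real header abstracts the per-architecture .type syntax,
alignment, and extra hooks):

  #define ENTRY(f)        \
    .text;                \
    .globl f;             \
    .balign 16;           \
    .type f, %function;   \
    f:                    \
    .cfi_startproc

  #define END(f)          \
    .cfi_endproc;         \
    .size f, . - f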
Bug: 12229603
Change-Id: I2fff287dc571ac1087abe9070362fb9420d85d6d
Addition of support for AArch64 in the linker64 target.
Change-Id: I8dfd9711278f6706063e91f626b6007ea7a3dd6e
Signed-off-by: Marcus Oakland <marcus.oakland@arm.com>
There's now only one place where we deal with the auxiliary vector: it only
needs to be parsed once, by the dynamic linker (rather than by each
recipient), and it's now easier for us to get hold of auxv data early on.
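A sketch of that single parsing pass (the function and variable names are
made up, not the linker's actual code):

  #include <elf.h>
  #include <link.h>

  // Walk the kernel-provided auxv array once and pull out one value;
  // the dynamic linker can do this early and pass the results along.
  static unsigned long find_auxv_entry(ElfW(auxv_t)* auxv, unsigned long type) {
    for (ElfW(auxv_t)* v = auxv; v->a_type != AT_NULL; ++v) {
      if (v->a_type == type) return v->a_un.a_val;
    }
    return 0;
  }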
Change-Id: I6314224257c736547aac2e2a650e66f2ea53bef5
This patch replaces the .S versions of the x86 crt files with .c versions,
which are much easier to support. Some of the files now match the .c
versions of the Arm crt files. The x86 files required some cleanup anyway,
and that cleanup actually led to them matching the Arm files.
I didn't change anything to share the same crt*.c files between x86 and
Arm. I'd prefer to keep them separate for a while in case any change is
required for one of the architectures, but sharing them is a good thing to
do in follow-up patches.
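To give the flavor of the change (an illustrative sketch, not one of the
actual crt*.c files: the structors handling is omitted and the
__libc_init declaration is assumed):

  extern int main(int argc, char** argv, char** envp);
  extern void __libc_init(void* raw_args, void (*onexit)(void),
                          int (*slingshot)(int, char**, char**),
                          void* structors);

  __attribute__((visibility("hidden"))) void _start(void) {
    // argc/argv/envp/auxv sit just above the entry frame on the initial stack.
    void* raw_args = (char*)__builtin_frame_address(0) + sizeof(void*);
    __libc_init(raw_args, 0, &main, 0);  // real files also pass init/fini arrays
  }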
Change-Id: Ibcf033f8d15aa5b10c05c879fd4b79a64dfc70f3
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
With -fstack-protector, x86 -m32 needs __stack_chk_fail_local
defined in crtbegin_*.o.
Include __stack_chk_fail_local.S in begin.S; otherwise the linker
(which is built without crt*) may not link.
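In C, __stack_chk_fail_local amounts to a hidden trampoline (the file
added here is assembly, but it does the equivalent of this):

  extern void __stack_chk_fail(void);

  // Hidden so -m32 PIC code can call it directly without PLT/%ebx setup;
  // it only forwards to the real handler.
  __attribute__((visibility("hidden"))) void __stack_chk_fail_local(void) {
    __stack_chk_fail();
  }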
Change-Id: Id242fcf3eff157264afe3b04f27288ab7991220a
We don't have a toolchain anymore, we don't have working original
kernel headers, and nobody is maintaining this, so there is really
no point in keeping it here. Details of the patch:
- removed code paths from Android.mk files related to the SuperH
architecture ("sh")
- removed libc/arch-sh, linker/arch-sh, libc/kernel/arch-sh
- simplified libc/SYSCALLS.TXT
- simplified the scripts in libc/tools/ and libc/kernel/tools
Change-Id: I26b0e1422bdc347489e4573e2fbec0e402f75560
Signed-off-by: David 'Digit' Turner <digit@android.com>