This patch uses __kernel_vsyscall instead of "int 0x80"
as the syscall entry point. AT_SYSINFO points to
an adapter that masks the arch-specific difference and gives a
performance boost on the i386 architecture.
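For illustration only (this is not code from the patch), here is a minimal sketch of what dispatching through AT_SYSINFO looks like from C on 32-bit x86, assuming a typical Linux toolchain and a -m32 build; a real libc keeps its stubs in assembler:

    #include <elf.h>
    #include <stdio.h>
    #include <sys/auxv.h>
    #include <sys/syscall.h>

    int main(void) {
      /* On 32-bit x86 the kernel passes the address of __kernel_vsyscall
       * in the aux vector; a stub can call it instead of using int $0x80. */
      unsigned long vsyscall = getauxval(AT_SYSINFO);
      if (vsyscall == 0) {
        printf("no AT_SYSINFO entry; a stub would fall back to int $0x80\n");
        return 0;
      }
    #if defined(__i386__)
      long result;
      /* getpid(2): syscall number goes in %eax, the result comes back in %eax. */
      __asm__ volatile("call *%1"
                       : "=a"(result)
                       : "r"(vsyscall), "a"(SYS_getpid)
                       : "memory", "cc");
      printf("getpid() via __kernel_vsyscall: %ld\n", result);
    #endif
      return 0;
    }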
Change-Id: Ib340c604d02c6c25714a95793737e3cfdc3fc5d7
Signed-off-by: Mingwei Shi <mingwei.shi@intel.com>
This change provides __restore/__restore_rt on x86 and __restore_rt on
x86_64 with unwinding information so that it is possible to unwind through
the signal frame via the libgcc-provided unwinding interface. See the inline
comments for more details.
Also remove the test that had a dependency on
__attribute__((cleanup(foo_cleanup))). It doesn't provide us with any
better test coverage than we have from the newer tests, and it doesn't
work well across a variety of architectures (presumably because no one uses
this attribute in the real world).
Tested this on host via bionic-unit-tests-run-on-host on both x86 and
x86-64.
Bug: 17436734
Change-Id: I2f06814e82c8faa732cb4f5648868dc0fd2e5fe4
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
For silvermont, the __popcountsi2 symbol does not get exported by libc, but
for atom it is. Since we already exported this symbol in previous releases,
it's better to just follow through and force the export, but only for 32-bit;
64-bit x86 will not export this symbol.
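This is not necessarily the literal change here, but one common way to force a libgcc builtin into a 32-bit library's exported symbols is to create a visible reference to it; the variable name below is made up:

    #include <stdio.h>

    #if defined(__i386__)
    /* libgcc's documented prototype for this builtin. */
    extern int __popcountsi2(unsigned int);

    /* Taking the address creates an undefined reference that the static linker
     * resolves from libgcc.a, so the definition lands in the library's dynamic
     * symbol table with default visibility. */
    int (*force_popcountsi2_export)(unsigned int) = &__popcountsi2;
    #endif

    int main(void) {
    #if defined(__i386__)
      printf("popcount(0xff) = %d\n", force_popcountsi2_export(0xffu));
    #else
      printf("64-bit build: the symbol is deliberately not exported\n");
    #endif
      return 0;
    }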
Bug: 17681440
(cherry picked from commit d11eac3455)
Change-Id: I93704c721d98d569922f606f214069bda24872ba
* LP32 should use sa_restorer too. gdb expects this, and future (>= 3.15) x86
kernels will apparently stop supporting the case where SA_RESTORER isn't
set.
* gdb and libunwind care about the exact instruction sequences, so we need to
modify the code slightly in a few cases to match what they're looking for.
* gdb also cares about the exact function names (for some architectures),
so we need to use __restore and __restore_rt rather than __sigreturn and
__rt_sigreturn.
* It's possible that we don't have a VDSO; dl_iterate_phdr shouldn't assume
that getauxval(AT_SYSINFO_EHDR) will return a non-null pointer.
This fixes unwinding through a signal handler in gdb for all architectures.
It doesn't fix libunwind for arm and arm64. I'll keep investigating that...
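As a small illustration of the AT_SYSINFO_EHDR point above, a defensive caller checks the aux vector entry before treating it as an ELF header (hypothetical standalone program, not the dl_iterate_phdr code itself):

    #include <elf.h>
    #include <link.h>
    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void) {
      /* The vdso may be absent, in which case AT_SYSINFO_EHDR is simply not
       * in the aux vector and getauxval returns 0. */
      ElfW(Ehdr)* vdso = (ElfW(Ehdr)*)getauxval(AT_SYSINFO_EHDR);
      if (vdso == NULL) {
        printf("no vdso: skip it rather than dereferencing a null pointer\n");
      } else {
        printf("vdso ELF header at %p (%d program headers)\n",
               (void*)vdso, vdso->e_phnum);
      }
      return 0;
    }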
Bug: 17436734
Change-Id: Ic1ea1184db6655c5d96180dc07bcc09628e647cb
The use of the .hidden directive to avoid going via the PLT for
__set_errno had the side-effect of actually making __set_errno
hidden (which is odd because assembler directives don't usually
affect symbols defined in a different file --- you can't even
create a weak reference to a symbol that's defined in a different
file).
This change switches the system call stubs over to a new always-hidden
__set_errno_internal and has a visible __set_errno on LP32 just for
binary compatibility with old NDK apps.
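A minimal C sketch of that split; the exact return type and declarations in the patch may differ:

    #include <errno.h>

    /* Always-hidden helper called from the syscall stubs; never exported. */
    __attribute__((visibility("hidden")))
    long __set_errno_internal(int n) {
      errno = n;
      return -1;
    }

    #if !defined(__LP64__)
    /* Visible LP32-only shim, kept solely so old NDK binaries that referenced
     * the previously exported symbol keep working. */
    long __set_errno(int n) {
      return __set_errno_internal(n);
    }
    #endif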
(cherry-pick of 7efad83d430f4d824f2aaa75edea5106f6ff8aae.)
Bug: 17423135
Change-Id: I6b6d7a05dda85f923d22e5ffd169a91e23499b7b
On most architectures the kernel subtracts a random offset from the stack
pointer in create_elf_tables by calling arch_align_stack before writing
the auxval table and so on. On all but x86 this doesn't cause a problem
because the random offset is less than a page, but on x86 it's up to two
pages. This means that our old technique of rounding the stack pointer
doesn't work. (Our old implementation of that technique was wrong too.)
It's also incorrect to assume that the main thread's stack base and size
are constant. It's likewise wrong to assume that the main thread has a guard page.
The main thread is not like other threads.
This patch switches to reading /proc/self/maps (and checking RLIMIT_STACK)
whenever we're asked.
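A rough sketch of the technique, assuming the usual /proc/self/maps format; the names and the minimal error handling are illustrative, not the patch's code:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/resource.h>

    /* Find the mapping containing a known stack address, then use
     * RLIMIT_STACK (not the current mapping) as the usable size, since the
     * main thread's stack can still grow. */
    static int find_main_stack(const void* stack_addr, uintptr_t* base, size_t* size) {
      FILE* fp = fopen("/proc/self/maps", "r");
      if (fp == NULL) return -1;
      char line[1024];
      int found = 0;
      while (fgets(line, sizeof(line), fp) != NULL) {
        uintptr_t lo, hi;
        if (sscanf(line, "%" SCNxPTR "-%" SCNxPTR, &lo, &hi) == 2 &&
            lo <= (uintptr_t)stack_addr && (uintptr_t)stack_addr < hi) {
          struct rlimit rl;
          getrlimit(RLIMIT_STACK, &rl);
          *base = hi;  /* stacks grow down: the high end is the base */
          *size = (rl.rlim_cur == RLIM_INFINITY) ? (size_t)(hi - lo)
                                                 : (size_t)rl.rlim_cur;
          found = 1;
          break;
        }
      }
      fclose(fp);
      return found ? 0 : -1;
    }

    int main(void) {
      int local;
      uintptr_t base;
      size_t size;
      if (find_main_stack(&local, &base, &size) == 0) {
        printf("main stack: base %p, size %zu\n", (void*)base, size);
      }
      return 0;
    }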
Bug: 17111575
Signed-off-by: Fengwei Yin <fengwei.yin@intel.com>
Change-Id: I1d4dbffe7bc7bda1d353c3a295dbf68d29f63158
Clean up the x86/x86_64 assembler. The motivator (other than reducing
confusion) was that asm.h incorrectly checked PIC rather than __PIC__.
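A quick way to see the difference (hypothetical test program): __PIC__ is predefined by the compiler for -fpic/-fPIC builds, while plain PIC only exists if a build system passes -DPIC by hand:

    #include <stdio.h>

    int main(void) {
    #if defined(__PIC__)
      puts("__PIC__: predefined by the compiler when building with -fpic/-fPIC");
    #else
      puts("__PIC__: not defined (not a PIC build)");
    #endif
    #if defined(PIC)
      puts("PIC: defined, but only because something passed -DPIC explicitly");
    #else
      puts("PIC: not defined, so an '#if defined(PIC)' guard silently goes dead");
    #endif
      return 0;
    }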
Bug: 16823325
Change-Id: Iaa9d45009e93a4b31b719021c93ac221e336479b
vfork() was removed from POSIX 2008, so this replaces its implementation
with a call to fork().
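In C terms the replacement is essentially this (a sketch of the described change, not the verbatim source):

    #include <unistd.h>

    /* The legacy entry point now just behaves like fork(): callers still see
     * 0 in the child and the child's pid in the parent, but the child no
     * longer shares the parent's address space or suspends the parent. */
    pid_t vfork(void) {
      return fork();
    }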
Bug: 13935372
Change-Id: I6d99ac9e52a2efc5ee9bda1cab908774b830cedc
__set_errno returns -1 exactly so that callers don't need to bother.
The other architectures were already taking advantage of this, but
no one had ever fixed x86 and x86_64.
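A C rendering of the convention (the actual stubs are assembler, and these names are illustrative):

    #include <errno.h>

    /* The helper itself returns -1, so a stub can tail-return its result
     * instead of materializing -1 separately. */
    static long set_errno_and_fail(int n) {
      errno = n;
      return -1;
    }

    /* Raw kernel results in [-4095, -1] encode errors (the usual Linux convention). */
    long wrap_syscall_result(long raw) {
      if ((unsigned long)raw >= (unsigned long)-4095L) {
        return set_errno_and_fail((int)-raw);  /* no separate "mov $-1" needed */
      }
      return raw;
    }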
Change-Id: Ie131494be664f6c4a1bbf8c61bbbed58eac56122
x86-64 needs these CFI directives to stop unwinding here.
I've also cleaned up the assembler a little, and made x86 and x86-64
a little more alike.
Bug: 15195760
Change-Id: I40f92c007843c29c933bb6876fe2b4611e1b946b
The problem with the original patch was that using syscall(3) means that
errno can be set, but pthread_create(3) was abusing the TLS errno slot as
a pthread_mutex_t for the thread startup handshake.
There was also a mistake in the check for syscall failures --- it should
have checked against -1 instead of 0 (not just because that's the usual
idiom, but also because futex(2) can legitimately return values > 0 here).
This patch stops abusing the TLS errno slot and adds a pthread_mutex_t to
pthread_internal_t instead. (Note that for LP64 sizeof(pthread_mutex_t) >
sizeof(uintptr_t), so we could potentially clobber other TLS slots too.)
I've also rewritten the LP32 compatibility stubs to directly reuse the
code from the .h file.
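A sketch of the corrected check, using syscall(3) directly; the wrapper name is illustrative:

    #include <linux/futex.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* syscall(3) reports failure as -1 with errno set, and FUTEX_WAKE
     * legitimately returns positive values (the number of waiters woken),
     * so "!= 0" is the wrong test. */
    static int futex_wake(int* addr, int count) {
      long result = syscall(SYS_futex, addr, FUTEX_WAKE, count, NULL, NULL, 0);
      if (result == -1) {  /* not "result != 0": a positive result is success */
        return -1;
      }
      return (int)result;
    }

    int main(void) {
      int word = 0;
      printf("woke %d waiters\n", futex_wake(&word, 1));  /* 0: nobody is waiting */
      return 0;
    }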
This reverts commit 75c55ff84e.
Bug: 15195455
Change-Id: I6ffb13e5cf6a35d8f59f692d94192aae9ab4593d
This reverts commit ced906c849.
Causes issues on art/dalvik due to a broken return-value check and other
undiagnosed problems.
Bug: 15195455
Change-Id: I5d6bbb389ecefb0e33a5237421a9d56d32a9317c
Also hide part of the system properties compatibility code, since
we needed to touch that to keep it building.
I'll remove __futex_syscall4 and futex in a later patch.
Bug: 11156955
Change-Id: Ibbf42414c5bb07fb9f1c4a169922844778e4eeae
Also let clone(2) set the TLS for x86.
Also ensure we initialize the TLS before we clone(2) for all architectures.
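For reference, a self-contained clone(2) example; it keeps a fork-like child because building a real TLS descriptor is arch-specific, and the comment marks where CLONE_SETTLS would come in:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    static int child_fn(void* arg) {
      (void)arg;
      return 0;
    }

    int main(void) {
      const size_t stack_size = 64 * 1024;
      char* stack = malloc(stack_size);
      if (stack == NULL) return 1;

      /* A pthread_create-style call would also pass CLONE_VM | CLONE_THREAD |
       * CLONE_SETTLS (among others) plus an arch-specific TLS descriptor, which
       * is the "let clone(2) set the TLS" part: the kernel installs the TLS
       * before the child runs a single instruction. */
      int pid = clone(child_fn, stack + stack_size, SIGCHLD, NULL);
      if (pid == -1) {
        perror("clone");
        return 1;
      }
      int status;
      waitpid(pid, &status, 0);
      printf("child %d exited with status %d\n", pid, WEXITSTATUS(status));
      free(stack);
      return 0;
    }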
Change-Id: Ie5fa4466e1c9ee116a281dfedef574c5ba60c0b5
Also ensure that arm64/x86-64/x86 assembler uses local labels.
(There are so many non-local labels in arm that fixing them
seems out of scope.)
Also synchronize the __bionic_clone.S comments.
Change-Id: I03b4f84780d996b54d6637a074638196bbb01cd4
clone(2) is the public symbol.
Also switch a test from __bionic_clone to clone; testing public API
means the test now works on glibc too.
Change-Id: If59def26a00c3afadb8a6cf9442094c35a59ffde
Our <machine/asm.h> files were modified from upstream, to the extent
that no architecture was actually using the upstream ENTRY or END macros,
assuming that architecture even had such a macro upstream. This patch moves
everyone to the same macros, with just a few tweaks remaining in the
<machine/asm.h> files, which no one should now use directly.
I've removed most of the unused cruft from the <machine/asm.h> files, though
there's still rather a lot in the mips/mips64 ones.
Bug: 12229603
Change-Id: I2fff287dc571ac1087abe9070362fb9420d85d6d
Also make the other architectures more similar to one another,
use NULL instead of 0 in calling code, and remove an unused #define.
Change-Id: I52b874afb6a351c802f201a0625e484df6d093bb
The kernel now maintains the pthread_internal_t::tid field for us,
and __clone was only used in one place, so let's inline it rather than
leave such a dangerous function lying around. Also rename
files to match their content and remove some useless #includes.
Change-Id: I24299fb4a940e394de75f864ee36fdabbd9438f9
If __get_tls has the right type, a lot of confusing casting can disappear.
It was probably a mistake that __get_tls was exposed as a function for mips
and x86 (but not arm), so let's ensure (a) that the __get_tls function
always matches the macro, (b) that we have the function for arm too, and
(c) that we don't have the function for any 64-bit architecture.
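A self-contained illustration of the typing point; the accessor name, the backing array, and the slot number are all made up:

    #include <stdint.h>
    #include <stdio.h>

    /* The real accessor reads a thread register (gs/fs/tpidr); a static array
     * stands in here so the example compiles and runs anywhere. */
    static void* fake_tls[64];

    static void** get_tls(void) {  /* returning void** is the "right type" */
      return fake_tls;
    }

    #define TLS_SLOT_EXAMPLE 5  /* illustrative slot number only */

    int main(void) {
      /* With void** there is no cast at the call site... */
      get_tls()[TLS_SLOT_EXAMPLE] = (void*)(uintptr_t)42;
      /* ...whereas an accessor with the wrong type would force
       * "((void**)__get_tls())[TLS_SLOT_EXAMPLE]" at every use. */
      printf("%lu\n", (unsigned long)(uintptr_t)get_tls()[TLS_SLOT_EXAMPLE]);
      return 0;
    }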
Change-Id: Ie9cb989b66e2006524ad7733eb6e1a65055463be
We shouldn't have been passing the bottom 32 bits of the address used
for pthread_join to the kernel.
Change-Id: I487e5002d60c27adba51173719213abbee0f183f