<pthread.h> was missing nonnull attributes, noreturn on pthread_exit,
and had incorrect cv qualifiers for several standard functions.
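As a rough sketch (not bionic's exact declarations), the annotations in
question look something like this in the header:

  #include <pthread.h>

  /* Sketch only: the style of annotation described above, not the exact
   * text of bionic's <pthread.h>. */
  void pthread_exit(void* return_value) __attribute__((__noreturn__));

  int pthread_mutex_lock(pthread_mutex_t* mutex) __attribute__((__nonnull__(1)));

  /* And a const qualifier on a parameter that is only read: */
  int pthread_attr_getdetachstate(const pthread_attr_t* attr, int* state)
      __attribute__((__nonnull__(1, 2)));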
I've also marked the non-standard stuff (where I count glibc rather
than POSIX as "standard") so we can revisit this cruft for LP64 and
try to ensure we're compatible with glibc.
I've also broken out the pthread_cond* functions into a new file.
I've made the remaining pthread files (plus ptrace) part of the bionic code
and fixed all the warnings.
I've added a few more smoke tests for chunks of untested pthread functionality.
We no longer need the libc_static_common_src_files hack for any of the
pthread implementation because we have long since stripped out the rest of
the armv5 support, and this hack was just to ensure that __get_tls in libc.a
went via the kernel if necessary.
This patch also finishes the job of breaking up the pthread.c monolith, and
adds a handful of new tests.
Change-Id: Idc0ae7f5d8aa65989598acd4c01a874fe21582c7
Experiment shows that the claim in the makefile was false: gdb works fine
setting breakpoints in these functions when compiled without special treatment.
Change-Id: Ibdf4dd5a14d171c954b8c2089daaf28e1c310be9
I really don't want to add yet another copy for aarch64.
Also sort arm, mips, and x86.
Also silence the "TARGET_ARCH_VARIANT" warning for non-ARM; Intel and MIPS
have both complained about it.
Change-Id: I32c592a90c0cf0cdae250d84035b3e4655543781
(aarch64 kernels only have the newer system calls.)
Also expose in our header files the new functionality that glibc already exposes.
Change-Id: I45d2d168a03f88723d1f7fbf634701006a4843c5
Modern architectures only get the *at(2) system calls. For example,
aarch64 doesn't have open(2), and expects userspace to use openat(2)
instead.
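A minimal sketch of the idea (hypothetical wrapper name; the real bionic
implementation calls its raw syscall stub and handles more details):

  #include <fcntl.h>
  #include <stdarg.h>

  // Sketch only: open() implemented on top of openat(2), which is the only
  // variant newer architectures like aarch64 provide.
  int my_open(const char* pathname, int flags, ...) {
    mode_t mode = 0;
    if ((flags & O_CREAT) != 0) {
      va_list args;
      va_start(args, flags);
      mode = (mode_t) va_arg(args, int);  // mode_t promotes to int in varargs
      va_end(args);
    }
    return openat(AT_FDCWD, pathname, flags, mode);
  }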
Change-Id: I87b4ed79790cb8a80844f5544ac1a13fda26c7b5
Also clean up <signal.h> and revert the hacks that were necessary
for 64-bit in linker/debugger.cpp until now.
Change-Id: I3b0554ca8a49ee1c97cda086ce2c1954ebc11892
Let's have both use rt_sigprocmask, like in glibc. The 64-bit ABIs
can share the same code as the 32-bit ABIs.
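Roughly (a sketch with a hypothetical wrapper name; the real code goes
through a generated assembler stub and a kernel-sized signal set type):

  #include <signal.h>
  #include <stddef.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  // Sketch only: sigprocmask() as a thin wrapper over rt_sigprocmask(2),
  // which always takes the size of the signal set as its fourth argument.
  // The same shape works for both the 32-bit and 64-bit ABIs.
  int my_sigprocmask(int how, const sigset_t* new_set, sigset_t* old_set) {
    // The kernel's signal set is 64 bits on the common architectures
    // (MIPS is wider).
    const size_t kernel_sigset_size = 8;
    return (int) syscall(SYS_rt_sigprocmask, how, new_set, old_set,
                         kernel_sigset_size);
  }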
Also, let's test the return side of these calls, not just the
setting.
Bug: 11069919
Change-Id: I11da99f85b5b481870943c520d05ec929b15eddb
The x86_64 build was failing because clone.S had a call to __thread_entry which
was being added to a different intermediate .a on the way to making libc.so,
and the linker couldn't guarantee statically that such a relocation would be
possible.
ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o): requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which may overflow at runtime; recompile with -fPIC
This patch addresses that by ensuring that the caller and callee end up in the
same intermediate .a. While I'm here, I've tried to clean up some of the mess
that led to this situation too. In particular, this removes libc/private/ from
the default include path (except for the DNS code), and splits out the DNS
code into its own library (since it's a weird special case of upstream NetBSD
code that's diverged so heavily it's unlikely ever to get back in sync).
There's more cleanup of the DNS situation possible, but this is definitely a
step in the right direction, and it's more than enough to get x86_64 building
cleanly.
Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4
If __get_tls has the right type, a lot of confusing casting can disappear.
It was probably a mistake that __get_tls was exposed as a function for mips
and x86 (but not arm), so let's (a) ensure that the __get_tls function
always matches the macro, (b) that we have the function for arm too, and
(c) that we don't have the function for any 64-bit architecture.
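An illustrative sketch of why the type matters (names made up, not bionic's
actual code):

  // With a TLS accessor typed as returning void**, callers can index the
  // TLS slots directly, with no casting at the call sites.
  static void* g_fake_tls[64];  // stand-in for the real thread pointer

  void** my_get_tls(void) {
    // The real implementation reads the thread pointer register
    // (e.g. TPIDRURO on ARM); this stand-in just returns a static array.
    return g_fake_tls;
  }

  #define TLS_SLOT_THREAD_ID 1  // hypothetical slot index

  void* current_thread(void) {
    return my_get_tls()[TLS_SLOT_THREAD_ID];  // no casts needed
  }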
Change-Id: Ie9cb989b66e2006524ad7733eb6e1a65055463be
Normally we don't have -Werror for upstream code, but for those warnings
that probably point to 32-bit assumptions about pointers, we want those
warnings to always be errors.
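For example (an illustrative snippet, not taken from the upstream code),
warnings like gcc's -Wpointer-to-int-cast are the sort that get promoted:

  #include <stdio.h>

  // Illustrative only: the kind of 32-bit assumption these warnings flag.
  // On LP64 this cast silently discards the upper 32 bits of the pointer.
  void print_handle(void* p) {
    int handle = (int) p;  // "cast from pointer to integer of different size"
    printf("%d\n", handle);
  }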
Change-Id: Ibece9caf09b2f7989ca600ef448d07868669a8fb
The NDK ABI requires that you support SSE2, and the build system won't let you
build with ARCH_X86_HAVE_SSE2 set to false. So let's stop pretending this
constant is actually a variable, and let's remove the corresponding dead code.
Also, the USE_SSE2 and USE_SSE3 macros are unused, so let's not bother
setting them.
Change-Id: I40b501d998530d22518ce1c4d14575513a8125bb
Clang and gcc default to different standards, so we should be explicit
about the versions we want to compile for.
Change-Id: I65495a2392dd29f36373b94c616c2506173e6033
Use basic .c versions of all functions for x86_64 until they are
manually optimized and .s versions are released.
Change-Id: I59bba08931e894822db485c8803c2665c226234a
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
Fortify calls to recv() and recvfrom().
We use __bos0 to match glibc's behavior, and because I haven't
tested using __bos.
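The pattern looks roughly like this (hypothetical names, not bionic's
actual header text):

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/socket.h>

  // __bos0 is shorthand for __builtin_object_size(buf, 0): the size of the
  // whole enclosing object, matching glibc's choice for recv()/recvfrom().
  static ssize_t recvfrom_chk_sketch(int fd, void* buf, size_t len,
                                     size_t buf_size, int flags,
                                     struct sockaddr* src_addr,
                                     socklen_t* addr_len) {
    if (len > buf_size) {
      fprintf(stderr, "recvfrom: prevented write past end of buffer\n");
      abort();  // bionic reports this through its fortify failure handler
    }
    return recvfrom(fd, buf, len, flags, src_addr, addr_len);
  }

  // The header-level wrapper passes the compiler's knowledge of the buffer
  // size; when the size is unknown, __builtin_object_size returns
  // (size_t) -1 and the check never fires.
  static inline ssize_t recvfrom_fortified(int fd, void* buf, size_t len,
                                           int flags,
                                           struct sockaddr* src_addr,
                                           socklen_t* addr_len) {
    return recvfrom_chk_sketch(fd, buf, len, __builtin_object_size(buf, 0),
                               flags, src_addr, addr_len);
  }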
Change-Id: Iad6ae96551a89af17a9c347b80cdefcf2020c505
Found by adapting the simple unit tests for libc logging to test
snprintf too. Fix taken from upstream OpenBSD without updating
the rest of stdio.
Change-Id: Ie339a8e9393a36080147aae4d6665118e5d93647
Required for x86 build with multilib compiler.
Change-Id: Iac71cdc3461df6fb48cb2a7b713324ca368e6704
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
I've mailed the tz list about this, and will switch to whatever upstream
fix comes along as soon as it's available.
Bug: 10310929
Change-Id: I36bf3fcf11f5ac9b88137597bac3487a7bb81b0f
This change creates assembler versions of __memcpy_chk/__memset_chk that
are implemented in the memcpy/memset assembler code. This avoids an extra
call to memcpy/memset, instead allowing a simple fall-through from the chk
code into the body of the real implementation.
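In C terms, the check being folded into the assembler is roughly this (a
sketch; the real routines are assembler and report the failure through the
fortify machinery rather than plain abort()):

  #include <stdlib.h>
  #include <string.h>

  // C sketch of what __memcpy_chk does; the change implements this check in
  // the memcpy assembler itself so the success path falls straight through
  // into the real copy instead of making an extra call.
  void* memcpy_chk_sketch(void* dst, const void* src, size_t count,
                          size_t dst_len) {
    if (count > dst_len) {
      abort();  // stands in for the fortify failure report
    }
    return memcpy(dst, src, count);
  }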
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. The first call's dest_len is length + 1, the
second's is length, and the third's is length - 1. Verified that the first
two calls pass and the third fails (see the sketch after this list).
Examined the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings is relatively small (about 1%).
For small copies, the savings is large on cortex-a15/krait devices
(between 5% to 30%).
For cortex-a9 and small copies, the speed up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings is also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
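A sketch of the shape of that test executable (sizes made up; the memset
calls follow the same pattern):

  // __builtin___memcpy_chk is the builtin that _FORTIFY_SOURCE lowers
  // memcpy calls to; calling it directly lets the test pass dest_len
  // explicitly.
  int main(void) {
    char src[32] = {0};
    char dst[32];

    __builtin___memcpy_chk(dst, src, 32, 33);  // dest_len = length + 1: passes
    __builtin___memcpy_chk(dst, src, 32, 32);  // dest_len = length:     passes
    __builtin___memcpy_chk(dst, src, 32, 31);  // dest_len = length - 1: logs a
                                               // fortify error and aborts
    return 0;
  }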
Bug: 9293744
Merge from internal master.
(cherry-picked from 7c860db074)
Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests, including new tests
that verify that the src for strcat/strcpy is not overread across page
boundaries.
NOTE: The handling of unaligned strcpy (same code in strcat) could probably
be optimized further such that the src is read 64 bits at a time instead of
the partial reads occurring now.
strlen improves slightly since it was recently optimized.
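The page-boundary tests work along these lines (a sketch, not the actual
libc_test code): put the source string right at the end of a readable page
with an inaccessible page after it, so any overread faults.

  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    char* map = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (map == MAP_FAILED) return 1;
    mprotect(map + page, page, PROT_NONE);  // guard page after the source

    char* src = map + page - 4;  // "abc" plus NUL ends exactly at the boundary
    memcpy(src, "abc", 4);

    char dst[16];
    strcpy(dst, src);  // must not read past the NUL into the guard page
    munmap(map, 2 * page);
    return 0;
  }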
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies vary from about 5% to 20% as the size gets above 10 bytes.
- Copies >= 1024, about a 60% improvement.
- Unaligned copies, about a 40% improvement.
cortex-a15
- Most small copies exhibit a 100% improvement; a few copies only
improve by 20%.
- Copies >= 1024, about 150% improvement.
- Unaligned copies, about 100% improvement.
krait
- Most small copies vary widely, but on average show a 20% improvement; the
performance then gets better, hitting about a 100% improvement when
copying 64 bytes of data.
- Copies >= 1024, about 100% improvement.
- When copying MBs of data, about 50% improvement.
- Unaligned copies, about 90% improvement.
As strcat destination strings get larger in size:
cortex-a9
- about 40% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
cortex-a15
- about 200% improvement for small dst strings (>=32).
- about 250% improvement for dst strings >= 1024.
krait
- about 25% improvement for small dst strings (>=32).
- about 100% improvement for dst strings >=1024.
Merge from internal master.
(cherry-picked from d119b7b6f4)
Change-Id: I296463b251ef9fab004ee4dded2793feca5b547a
Yet another archaic relic containing bugs that had been fixed years before the
Android project even started...
Bug: 9935113
Change-Id: I3c9d019a216efd609ee568cf8c70bc360f357403
This updates the MIPS arch to be much more in
sync with the commit Nick Kralevich made last
June; see 9d40326830.
Rewrite
crtbegin.S -> crtbegin.c
crtbegin_so.S -> crtbegin_so.c
__dso_handle.S -> __dso_handle.c
__dso_handle_so.S -> __dso_handle_so.c
atexit.S -> atexit.c
Previously, __do_global_dtors_aux was in the task's __FINI_ARRAY__, linked
in via crtbegin.S; it is now being removed, as there is no need to call a
destructor just before terminating a process.
Shared libraries, on the other hand, are linked with
crtbegin_so.c and have a hidden destructor declared
to allow the bionic linker to call __on_dlclose().
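Roughly, the hidden destructor in crtbegin_so.c looks like this (a sketch of
the pattern, not the exact file):

  /* The linker runs this when the library is unloaded via dlclose(); it is
   * hidden so it doesn't become part of the library's exported ABI. */
  extern void __cxa_finalize(void*);
  extern void* __dso_handle;

  __attribute__((visibility("hidden"), destructor))
  void __on_dlclose(void) {
    __cxa_finalize(&__dso_handle);
  }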
Change-Id: Ieb4da5199b54573de05743990e309db381a11cb8
Signed-off-by: Pete Delaney <piet.delaney@imgtec.com>
Signed-off-by: Chao-Ying Fu <chao-ying.fu@imgtec.com>
Signed-off-by: Chris Dearman <chris.dearman@imgtec.com>