This gives us:
* <dirent.h>
  struct dirent64
  readdir64, readdir64_r, alphasort64, scandir64
* <fcntl.h>
  creat64, openat64, open64
* <sys/stat.h>
  struct stat64
  fstat64, fstatat64, lstat64, stat64
* <sys/statvfs.h>
  struct statvfs64
  statvfs64, fstatvfs64
* <sys/vfs.h>
  struct statfs64
  statfs64, fstatfs64
This also removes some of the incorrect #define hacks we've had in the
past (for stat64, for example, which we promised to clean up way back
in bug 8472078).
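A hedged sketch of the kind of transitional-LFS code these additions let us
build unchanged (illustrative only; on glibc the _LARGEFILE64_SOURCE macro
may be needed to see these names):

  #define _LARGEFILE64_SOURCE 1
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Source written against the LFS *64 API now builds as-is:
   * open64/fstat64 behave like open/fstat with a 64-bit off_t. */
  int print_size(const char* path) {
    int fd = open64(path, O_RDONLY);
    if (fd == -1) return -1;
    struct stat64 sb;
    if (fstat64(fd, &sb) == -1) {
      close(fd);
      return -1;
    }
    printf("%lld\n", (long long) sb.st_size);
    close(fd);
    return 0;
  }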
Bug: 11865851
Bug: 8472078
Change-Id: Ia46443521918519f2dfa64d4621027dfd13ac566
We don't need quite so much duplication because we already have a way
to get the signal number from its name, and that already copes with the
fact that the mips/mips64 numbers are different from everyone else's.
Also remove sys_signame from LP64. glibc doesn't have this BSD-ism.
Change-Id: I6dc411a3d73589383c85d3b07d9d648311492a10
1. Moved arch-specific setup to its own files:
- <arch>/<arch>.mk, arch-specific configs. Variables in those configs
end with the arch name.
- removed the extra complexity introduced by the libc-add-cpu-variant-src
function, which doesn't seem very useful these days.
2. Separated out the crt object file generation rules and set up the
rules for both TARGET_ARCH and TARGET_2ND_ARCH.
3. Build all the libraries for both TARGET_ARCH and TARGET_2ND_ARCH,
with the arch-specific LOCAL_ variables.
Bug: 11654773
Change-Id: I9c2d85db0affa49199d182236d2210060a321421
Remove the linker's reliance on BSD cruft and use the glibc-style
ElfW macro. (Other code too, but the linker contains the majority
of the code that needs to work for Elf32 and Elf64.)
All platforms need dl_iterate_phdr_static, so it doesn't make sense
to have that part of the per-architecture configuration.
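For illustration, a minimal sketch of the glibc-style usage this enables
(dl_iterate_phdr is just an example caller here, not the linker's own code):

  #define _GNU_SOURCE
  #include <link.h>
  #include <stdio.h>

  // ElfW(Addr) expands to Elf32_Addr or Elf64_Addr depending on the target,
  // so the same source builds for both Elf32 and Elf64.
  static int dump_phdr(struct dl_phdr_info* info, size_t size, void* data) {
    ElfW(Addr) load_base = info->dlpi_addr;
    printf("%s loaded at %p (%d program headers)\n",
           info->dlpi_name, (void*) load_base, info->dlpi_phnum);
    return 0;
  }

  int main() {
    dl_iterate_phdr(dump_phdr, NULL);
    return 0;
  }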
Bug: 12476126
Change-Id: I1d7f918f1303a392794a6cd8b3512ff56bd6e487
libc/libm support for MIPS64 targets
Change-Id: I8271941d418612a286be55495f0e95822f90004f
Signed-off-by: Chris Dearman <chris.dearman@imgtec.com>
Signed-off-by: Raghu Gandham <raghu.gandham@imgtec.com>
This patch switches to using the uapi constants. It also adds the missing
setns system call, fixes sched_getcpu's error behavior, and fixes the
gensyscalls script now that ARM is uapi-only too.
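As a small illustration of the error behavior being fixed (a sketch of the
documented contract, not a test from this patch): sched_getcpu() should
return -1 with errno set when it fails.

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  int main() {
    int cpu = sched_getcpu();
    if (cpu == -1) {
      // e.g. ENOSYS on kernels without the underlying getcpu support.
      perror("sched_getcpu");
      return 1;
    }
    printf("running on cpu %d\n", cpu);
    return 0;
  }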
Change-Id: I8e16b1693d6d32cd9b8499e46b5d8b0a50bc4f1d
Because there was no default := for the aarch64 libc_crt_target_cflags,
the += was turning libc_crt_target_cflags into a recursively-defined
variable, which meant that when we were compiling crtbegin.c, LOCAL_PATH
would be bionic/tests/, we'd get -Ibionic/tests/include/, and we'd find
none of our include files.
Also fix linking of pthread_debug.cpp, at least in the disabled mode.
The enabled mode was already broken for all architectures, and continues
to be broken after this change. It's been broken for long enough that
we might want to just remove it...
(aarch64 is using the FSF linker where arm uses the gold linker.)
Change-Id: I7db2e386694f6933db043138e6e97e5ae54d4174
This is the first patch out of a series of patches that add support for
AArch64, the new 64-bit execution state of the ARMv8 Architecture. The
patches add support for the LP64 programming model.
The patch adds:
* "arch-aarch64" to the architecture directories.
* "arch-aarch64/include" - headers used by libc
* "arch-aarch64/bionic":
- crtbegin, crtend support;
- aarch64 specific syscall stubs;
- setjmp, clone, vfork assembly files.
Change-Id: If72b859f81928d03ad05d4ccfcb54c2f5dbf99a5
Signed-off-by: Serban Constantinescu <serban.constantinescu@arm.com>
Also fix the signature of usleep, and the definition of useconds_t, which
should be unsigned, as the 'u' in its name implies.
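For reference, a minimal sketch of the POSIX shape of the call (not bionic's
actual header text):

  #include <unistd.h>

  int main() {
    // int usleep(useconds_t usec); useconds_t is an unsigned integer type,
    // so a negative interval can't even be expressed.
    usleep(500000);  // sleep for roughly half a second
    return 0;
  }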
This patch also cleans up the existing FreeBSD hacks by moving the libm
stuff from <sys/cdefs.h> to a libm-private header, and adding comments
about the hacks we use to build FreeBSD source.
Change-Id: Ibe5067a380502df94a0a3a7901969b35411085b6
The kernel now maintains the pthread_internal_t::tid field for us,
and __clone was only used in one place so let's inline it so we don't
have to leave such a dangerous function lying around. Also rename
files to match their content and remove some useless #includes.
Change-Id: I24299fb4a940e394de75f864ee36fdabbd9438f9
As of 61e699a133, stdio cleanup
functions are no longer registered in atexit and must be called
manually via __cleanup.
The issue this fixes is that some static binaries linked against bionic
cannot output properly when piped or redirected, because the buffer
is not flushed before closing.
This is done by pulling in exit.c (and other dependencies) from
netbsd.
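A minimal reproduction of the symptom (a hypothetical example, not one of the
affected binaries):

  #include <stdio.h>

  int main() {
    // When stdout is a pipe or regular file it is fully buffered, so this
    // stays in the stdio buffer; without the __cleanup flush at exit, a
    // static binary prints nothing when piped or redirected.
    printf("hello from a static binary");
    return 0;
  }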
Change-Id: I193e54a6d08900f291550029fe75ce76394d9e22
In practice, because all the arguments are passed in registers, the stubs
don't actually change, but it's confusing to have an incorrect declaration.
I suspect that fcntl remains broken for aarch64; it happens to work for
x86_64 because the first vararg argument gets placed in the right register
anyway, but I have no reason to believe that's true for aarch64.
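A hedged sketch of the vararg handling fcntl needs (the __fcntl64 stub name
below is illustrative, not necessarily bionic's):

  #include <stdarg.h>

  extern int __fcntl64(int fd, int cmd, void* arg);  // hypothetical raw syscall stub

  // The optional third argument has to be pulled out with va_arg and passed
  // explicitly; a declaration without the "..." only works by accident when
  // the ABI happens to leave the value in the right register.
  int fcntl_sketch(int fd, int cmd, ...) {
    va_list ap;
    va_start(ap, cmd);
    void* arg = va_arg(ap, void*);
    va_end(ap);
    return __fcntl64(fd, cmd, arg);
  }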
This patch adds a unit test, though, so we'll be able to tell when we get
as far as running the unit tests.
Change-Id: I58dd0054fe99d7d51d04c22781d8965dff1afbf3
Unlike on 32-bit systems where off_t is 32-bit, we don't want to
throw away the top 32 bits of an LP64 system's 64-bit off_t.
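Concretely (a trivial illustrative check, not part of this change):

  #include <stdio.h>
  #include <sys/types.h>

  int main() {
    // On LP64, off_t is 64 bits wide; an offset past 4GiB must survive the
    // syscall wrappers untruncated rather than losing its top 32 bits.
    off_t offset = 5LL * 1024 * 1024 * 1024;
    printf("sizeof(off_t)=%zu offset=%lld\n", sizeof(off_t), (long long) offset);
    return 0;
  }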
Change-Id: Ib2e0daeb4fc0b8ab3d1b983d0b371d8f81033b50
<pthread.h> was missing nonnull attributes, noreturn on pthread_exit,
and had incorrect cv qualifiers for several standard functions.
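A small illustration of why the noreturn annotation matters (a hypothetical
example, not one of the new tests):

  #include <pthread.h>
  #include <stdio.h>

  // Because pthread_exit is declared noreturn, the compiler knows control
  // never falls off the end of this non-void function and doesn't warn.
  static void* worker(void* arg) {
    printf("worker: %s\n", (const char*) arg);
    pthread_exit(NULL);
  }

  int main() {
    pthread_t t;
    if (pthread_create(&t, NULL, worker, (void*) "hello") != 0) return 1;
    pthread_join(t, NULL);
    return 0;
  }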
I've also marked the non-standard stuff (where I count glibc rather
than POSIX as "standard") so we can revisit this cruft for LP64 and
try to ensure we're compatible with glibc.
I've also broken out the pthread_cond* functions into a new file.
I've made the remaining pthread files (plus ptrace) part of the bionic code
and fixed all the warnings.
I've added a few more smoke tests for chunks of untested pthread functionality.
We no longer need the libc_static_common_src_files hack for any of the
pthread implementation because we long since stripped out the rest of
the armv5 support, and this hack was just to ensure that __get_tls in libc.a
went via the kernel if necessary.
This patch also finishes the job of breaking up the pthread.c monolith, and
adds a handful of new tests.
Change-Id: Idc0ae7f5d8aa65989598acd4c01a874fe21582c7
Experiment shows that the claim in the makefile was false: gdb works fine
setting breakpoints in these functions when compiled without special treatment.
Change-Id: Ibdf4dd5a14d171c954b8c2089daaf28e1c310be9
I really don't want to add yet another copy for aarch64.
Also sort arm, mips, and x86.
Also silence the "TARGET_ARCH_VARIANT" warning for non-ARM; Intel and MIPS
have both complained about it.
Change-Id: I32c592a90c0cf0cdae250d84035b3e4655543781
(aarch64 kernels only have the newer system calls.)
Also expose in our header files the new functionality that glibc already exposes.
Change-Id: I45d2d168a03f88723d1f7fbf634701006a4843c5
Modern architectures only get the *at(2) system calls. For example,
aarch64 doesn't have open(2), and expects userspace to use openat(2)
instead.
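In other words, the non-at wrappers can be expressed in terms of the *at
calls; a minimal sketch (not bionic's actual source):

  #include <fcntl.h>
  #include <stdarg.h>

  // open() forwarding to openat(2) with AT_FDCWD, which is all an
  // architecture like aarch64 provides.
  int open_sketch(const char* path, int flags, ...) {
    mode_t mode = 0;
    if ((flags & O_CREAT) != 0) {
      va_list ap;
      va_start(ap, flags);
      mode = va_arg(ap, mode_t);  // mode_t is int-sized on these targets
      va_end(ap);
    }
    return openat(AT_FDCWD, path, flags, mode);
  }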
Change-Id: I87b4ed79790cb8a80844f5544ac1a13fda26c7b5
Also clean up <signal.h> and revert the hacks that were necessary
for 64-bit in linker/debugger.cpp until now.
Change-Id: I3b0554ca8a49ee1c97cda086ce2c1954ebc11892
Let's have both use rt_sigprocmask, like in glibc. The 64-bit ABIs
can share the same code as the 32-bit ABIs.
Also, let's test the return side of these calls, not just the
setting.
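The "return side" here means reading the old mask back, e.g. (a hypothetical
check in the spirit of the new tests):

  #include <signal.h>
  #include <stdio.h>

  int main() {
    sigset_t block, current;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);

    // Set a mask, then read the current mask back and check it -- testing
    // what the call returns, not just what it sets.
    if (sigprocmask(SIG_BLOCK, &block, NULL) != 0) return 1;
    if (sigprocmask(SIG_BLOCK, NULL, &current) != 0) return 1;
    printf("SIGUSR1 blocked: %d\n", sigismember(&current, SIGUSR1));
    return 0;
  }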
Bug: 11069919
Change-Id: I11da99f85b5b481870943c520d05ec929b15eddb
The x86_64 build was failing because clone.S had a call to __thread_entry which
was being added to a different intermediate .a on the way to making libc.so,
and the linker couldn't guarantee statically that such a relocation would be
possible.
ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o): requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which may overflow at runtime; recompile with -fPIC
This patch addresses that by ensuring that the caller and callee end up in the
same intermediate .a. While I'm here, I've tried to clean up some of the mess
that led to this situation too. In particular, this removes libc/private/ from
the default include path (except for the DNS code), and splits out the DNS
code into its own library (since it's a weird special case of upstream NetBSD
code that's diverged so heavily it's unlikely ever to get back in sync).
There's more cleanup of the DNS situation possible, but this is definitely a
step in the right direction, and it's more than enough to get x86_64 building
cleanly.
Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4
If __get_tls has the right type, a lot of confusing casting can disappear.
It was probably a mistake that __get_tls was exposed as a function for mips
and x86 (but not arm), so let's ensure that (a) the __get_tls function
always matches the macro, (b) we have the function for arm too, and
(c) we don't have the function for any 64-bit architecture.
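The point about types, in sketch form (the declaration below is illustrative;
bionic's real __get_tls is an arch-specific macro/inline):

  // If the accessor returns the TLS slot array directly, callers can index
  // it without the casts that an integer-typed accessor would force.
  extern void** __get_tls(void);

  void* read_tls_slot(int slot) {
    return __get_tls()[slot];
  }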
Change-Id: Ie9cb989b66e2006524ad7733eb6e1a65055463be
Normally we don't have -Werror for upstream code, but for warnings
that probably point to 32-bit assumptions about pointers, we want
them to always be errors.
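A typical example of the class of warning being promoted (hypothetical code,
not from the upstream sources):

  #include <stdio.h>

  int main() {
    void* p = &p;
    // The classic 32-bit assumption: an int can hold a pointer. On LP64 this
    // truncates, so warnings like -Wpointer-to-int-cast become errors here.
    int low_bits = (int) p;
    printf("%p -> 0x%x\n", p, low_bits);
    return 0;
  }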
Change-Id: Ibece9caf09b2f7989ca600ef448d07868669a8fb
The NDK ABI requires that you support SSE2, and the build system won't let you
build with ARCH_X86_HAVE_SSE2 set to false. So let's stop pretending this
constant is actually a variable, and let's remove the corresponding dead code.
Also, the USE_SSE2 and USE_SSE3 macros are unused, so let's not bother
setting them.
Change-Id: I40b501d998530d22518ce1c4d14575513a8125bb
Clang and gcc default to different standards, so we should be explicit
about the versions we want to compile for.
Change-Id: I65495a2392dd29f36373b94c616c2506173e6033
Use basic .c versions of all functions for x86_64 until they are
manually optimized and .s versions are released.
Change-Id: I59bba08931e894822db485c8803c2665c226234a
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
Fortify calls to recv() and recvfrom().
We use __bos0 to match glibc's behavior, and because I haven't
tested using __bos.
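A hedged sketch of the fortify pattern (not bionic's exact inline; __bos0
stands for __builtin_object_size(s, 0)):

  #include <sys/types.h>
  #include <sys/socket.h>

  // The checking entry point the fortified inline diverts to when the
  // buffer's size is known at compile time (declared here for illustration).
  extern ssize_t __recvfrom_chk(int fd, void* buf, size_t len, size_t buf_size,
                                int flags, struct sockaddr* src, socklen_t* src_len);

  static inline ssize_t recv_fortified(int fd, void* buf, size_t len, int flags) {
    size_t bos = __builtin_object_size(buf, 0);   // i.e. __bos0(buf)
    if (bos == (size_t) -1) {
      return recv(fd, buf, len, flags);           // size unknown: unchecked path
    }
    return __recvfrom_chk(fd, buf, len, bos, flags, NULL, NULL);
  }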
Change-Id: Iad6ae96551a89af17a9c347b80cdefcf2020c505
Found by adapting the simple unit tests for libc logging to test
snprintf too. Fix taken from upstream OpenBSD without updating
the rest of stdio.
Change-Id: Ie339a8e9393a36080147aae4d6665118e5d93647
Required for x86 build with multilib compiler.
Change-Id: Iac71cdc3461df6fb48cb2a7b713324ca368e6704
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
I've mailed the tz list about this, and will switch to whatever upstream
fix comes along as soon as it's available.
Bug: 10310929
Change-Id: I36bf3fcf11f5ac9b88137597bac3487a7bb81b0f
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple
fall-through from the chk code into the body of the real
implementation.
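For reference, the C-level semantics the assembler fall-through implements
(a sketch, not the actual fortify error path):

  #include <stdlib.h>
  #include <string.h>

  // __memcpy_chk(dst, src, count, dst_len) checks count against the known
  // destination size before behaving as memcpy; the assembler version does
  // this check and then falls straight through into memcpy's body.
  void* memcpy_chk_sketch(void* dst, const void* src, size_t count, size_t dst_len) {
    if (count > dst_len) {
      abort();   // the real code logs a fortify failure before aborting
    }
    return memcpy(dst, src, count);
  }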
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. The first call's dest_len is length + 1, the
second call's is length, and the third call's is length - 1.
Verified that the first two calls pass and the third fails. Examined
the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings are relatively small (about 1%).
For small copies, the savings are large on cortex-a15/krait devices
(between 5% and 30%).
For cortex-a9 and small copies, the speed-up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings are also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Merge from internal master.
(cherry-picked from 7c860db074)
Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests.
Including new tests that verify that the src for strcat/strcpy do not
overread across page boundaries.
NOTE: The handling of unaligned strcpy (same code in strcat) could probably
be optimized further such that the src is read 64 bits at a time instead of
the partial reads occurring now.
strlen improves slightly since it was recently optimized.
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies vary from about 5% to 20% as the size gets above 10 bytes.
- Copies >= 1024, about a 60% improvement.
- Unaligned copies, about a 40% improvement.
cortex-a15
- Most small copies exhibit a 100% improvement; a few copies only
improve by 20%.
- Copies >= 1024, about 150% improvement.
- Unaligned copies, about 100% improvement.
krait
- Most small copies vary widely, but on average show a 20% improvement; the
performance then gets better, hitting about a 100% improvement when
copying 64 bytes of data.
- Copies >= 1024, about 100% improvement.
- When copying MBs of data, about 50% improvement.
- Unaligned copies, about 90% improvement.
As strcat destination strings get larger in size:
cortex-a9
- about 40% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
cortex-a15
- about 200% improvement for small dst strings (>=32).
- about 250% improvement for dst strings >= 1024.
krait
- about 25% improvement for small dst strings (>=32).
- about 100% improvement for dst strings >=1024.
Merge from internal master.
(cherry-picked from d119b7b6f4)
Change-Id: I296463b251ef9fab004ee4dded2793feca5b547a
This only works on some kernels, and only on page-aligned regions of
anonymous memory. The name will show up in /proc/pid/maps as
[anon:<name>] and in /proc/pid/smaps as Name: <name>.
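Usage looks roughly like this (assuming the Android-specific PR_SET_VMA /
PR_SET_VMA_ANON_NAME prctl values; kernels without the patch return EINVAL):

  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/prctl.h>

  #ifndef PR_SET_VMA
  #define PR_SET_VMA 0x53564d41        // Android-specific prctl request
  #define PR_SET_VMA_ANON_NAME 0
  #endif

  int main() {
    void* p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;
    // The region should now appear as [anon:demo buffer] in /proc/self/maps.
    if (prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, p, 4096, "demo buffer") != 0) {
      perror("prctl");  // e.g. EINVAL on kernels without this feature
    }
    return 0;
  }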
Change-Id: If31667cf45ff41cc2a79a140ff68707526def80e
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple
fall-through from the chk code into the body of the real
implementation.
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. The first call's dest_len is length + 1, the
second call's is length, and the third call's is length - 1.
Verified that the first two calls pass and the third fails. Examined
the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings are relatively small (about 1%).
For small copies, the savings are large on cortex-a15/krait devices
(between 5% and 30%).
For cortex-a9 and small copies, the speed-up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings are also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests.
Including new tests that verify that the src for strcat/strcpy do not
overread across page boundaries.
NOTE: The handling of unaligned strcpy (same code in strcat) could probably
be optimized further such that the src is read 64 bits at a time instead of
the partial reads occurring now.
strlen improves slightly since it was recently optimized.
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies vary from about 5% to 20% as the size gets above 10 bytes.
- Copies >= 1024, about a 60% improvement.
- Unaligned copies, about a 40% improvement.
cortex-a15
- Most small copies exhibit a 100% improvement; a few copies only
improve by 20%.
- Copies >= 1024, about 150% improvement.
- Unaligned copies, about 100% improvement.
krait
- Most small copies vary widely, but on average show a 20% improvement; the
performance then gets better, hitting about a 100% improvement when
copying 64 bytes of data.
- Copies >= 1024, about 100% improvement.
- When copying MBs of data, about 50% improvement.
- Unaligned copies, about 90% improvement.
As strcat destination strings get larger in size:
cortex-a9
- about 40% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
cortex-a15
- about 200% improvement for small dst strings (>=32).
- about 250% improvement for dst strings >= 1024.
krait
- about 25% improvement for small dst strings (>=32).
- about 100% improvement for dst strings >=1024.
Change-Id: Ifd091ebdbce70fe35a7c5d8f71d5914255f3af35
Yet another archaic relic containing bugs that had been fixed years before the
Android project even started...
Bug: 9935113
Change-Id: I3c9d019a216efd609ee568cf8c70bc360f357403