The x86_64 build was failing because clone.S had a call to __thread_entry,
but __thread_entry was being added to a different intermediate .a on the way
to making libc.so, so the linker couldn't statically guarantee that such a
relocation would be possible.
ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o): requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which may overflow at runtime; recompile with -fPIC
This patch addresses that by ensuring that the caller and callee end up in the
same intermediate .a. While I'm here, I've tried to clean up some of the mess
that led to this situation too. In particular, this removes libc/private/ from
the default include path (except for the DNS code), and splits out the DNS
code into its own library (since it's a weird special case of upstream NetBSD
code that's diverged so heavily it's unlikely ever to get back in sync).
There's more cleanup of the DNS situation possible, but this is definitely a
step in the right direction, and it's more than enough to get x86_64 building
cleanly.
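For reference, the failure mode is easy to reproduce in miniature; a hedged
sketch with hypothetical files, nothing to do with the actual build:

    /* foo.c, built without -fPIC: the call site becomes a 32-bit
       PC-relative (R_X86_64_PC32) relocation. */
    extern void bar(void);
    void foo(void) { bar(); }

    # Folding the non-PIC object into a shared library then fails,
    # because at runtime bar may be resolved to an address more than
    # +/-2GiB away, which a 32-bit displacement can't reach:
    gcc -c foo.c
    gcc -shared -o libfoo.so foo.o

With caller and callee in the same intermediate archive, the linker can
resolve the call within the library.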
Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4
If __get_tls has the right type, a lot of confusing casting can disappear.
It was probably a mistake that __get_tls was exposed as a function for mips
and x86 (but not arm), so let's (a) ensure that the __get_tls function
always matches the macro, (b) provide the function for arm too, and
(c) remove the function from all 64-bit architectures.
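A hedged before/after sketch (not the actual header; the slot index is
illustrative):

    /* With the function typed to match the macro: */
    void** __get_tls(void);

    void* current_thread(void) {
      /* Before: return ((void**) __get_tls())[1]; */
      return __get_tls()[1];  /* after: no casts needed */
    }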
Change-Id: Ie9cb989b66e2006524ad7733eb6e1a65055463be
We were truncating the address used for pthread_join to its bottom 32 bits
before passing it to the kernel.
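A hypothetical demonstration of the bug class (not the real pthread_join
code): on LP64, squeezing a pointer through a 32-bit type silently drops
the high half of the address.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
      int tid = 0;
      void* addr = &tid;
      uint32_t low32 = (uint32_t) (uintptr_t) addr;  /* what got passed */
      printf("full: %p truncated: %#x\n", addr, low32);
      return 0;
    }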
Change-Id: I487e5002d60c27adba51173719213abbee0f183f
FORTIFY_SOURCE prevents buffer overflows from occurring.
However, the error messages often imply that we only
detect overflows, rather than prevent them.
Bring more clarity to the error messages by emphasizing
prevention over detection.
Change-Id: I5f3e1478673bdfc589e6cc4199fce8e52e197a24
Make sure the buffer we're dealing with has enough room.
Might as well check for memory issues while we're here,
even though I don't imagine they'll happen in practice.
Change-Id: I0ae1f0f06aca9ceb91e58c70183bb14e275b92b5
Clang (prior to 3.4) does not actually provide a declaration (or definition)
of _Unwind_GetIP() for ARM. We can work around this by writing our own
basic implementation using the available primitive operations.
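A minimal sketch of such a workaround, using the ARM EHABI virtual register
set accessor (the actual patch may differ in details):

    #include <stdint.h>
    #include <unwind.h>

    /* Read r15 (the pc) out of the unwind context and clear the
       Thumb bit. */
    static uintptr_t get_ip(struct _Unwind_Context* context) {
      uint32_t ip;
      _Unwind_VRS_Get(context, _UVRSC_CORE, 15, _UVRSR_CORE32, &ip);
      return ip & ~(uintptr_t) 1;
    }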
Change-Id: If6c66846952d8545849ad32d2b55daa4599cfe2c
This was causing conflicting declarations for the library definitions of
common functions like sprintf(), snprintf(), and strchr().
Change-Id: I5daaa8a58183aa0d4d0fae8a7cb799671810f576
Copyright headers shouldn't contain the filename (and especially
shouldn't contain a different file's filename).
Change-Id: I82690a3bf371265402bc16f5d2fbb9299c3a1926
Fortify calls to recv() and recvfrom().
We use __bos0 to match glibc's behavior, and because I haven't
tested using __bos.
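For reference, __bos0(s) is __builtin_object_size(s, 0), which checks
against the enclosing object's size, where __bos uses the stricter
subobject size. A simplified sketch of the pattern (not bionic's exact
header code):

    #include <stdlib.h>
    #include <sys/socket.h>

    ssize_t checked_recv(int fd, void* buf, size_t len, int flags) {
      size_t bos = __builtin_object_size(buf, 0);  /* (size_t) -1 if unknown */
      if (bos != (size_t) -1 && len > bos) {
        abort();  /* the real code reports a fortify failure first */
      }
      return recvfrom(fd, buf, len, flags, NULL, NULL);
    }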
Change-Id: Iad6ae96551a89af17a9c347b80cdefcf2020c505
This adds mmap64() to bionic so that a large offset can be
passed to the kernel. However, the syscall mechanism only
passes a 32-bit number to the kernel, so the largest offset
that can effectively be passed is about 43 bits: the offset
is signed, and the number passed to the kernel is a count of
pages (page size == 4K => 12 bits).
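A hedged sketch of the approach (__mmap2 is the internal syscall stub; the
real code may differ in details):

    #include <errno.h>
    #include <sys/mman.h>
    #include <sys/types.h>

    #define MMAP2_SHIFT 12  /* the kernel takes the offset in 4K pages */

    extern void* __mmap2(void*, size_t, int, int, int, size_t);

    void* mmap64_sketch(void* addr, size_t size, int prot, int flags,
                        int fd, off64_t offset) {
      if (offset < 0 || (offset & ((1UL << MMAP2_SHIFT) - 1)) != 0) {
        errno = EINVAL;
        return MAP_FAILED;
      }
      /* offset >> 12 must still fit in the 32-bit number passed to the
         kernel, hence the ~43-bit limit described above. */
      return __mmap2(addr, size, prot, flags, fd,
                     (size_t) (offset >> MMAP2_SHIFT));
    }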
Change-Id: Ib54f4e9b54acb6ef8b0324f3b89c9bc810b07281
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
__page_shift and __page_size were accidentally declared in unistd.h with
C linkage; their implementation needs to use the same linkage.
Going forward, though, let's stop the inlining madness and kill the
non-standard __getpageshift(). This patch takes getpagesize(3) out of
line and removes __getpageshift(), but fixes __page_shift and
__page_size for backward binary compatibility.
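The out-of-line replacement amounts to something like this (a sketch; the
fallback constants assume 4K pages):

    #ifndef PAGE_SHIFT
    #define PAGE_SHIFT 12
    #define PAGE_SIZE (1 << PAGE_SHIFT)
    #endif

    int getpagesize(void) {
      return PAGE_SIZE;
    }

    /* Kept, with corrected linkage, only for backward binary
       compatibility with existing callers: */
    int __page_size = PAGE_SIZE;
    int __page_shift = PAGE_SHIFT;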
Change-Id: I35ed66a08989ced1db422eb03e4d154a5d6b5bda
Signed-off-by: Bernhard Rosenkraenzer <Bernhard.Rosenkranzer@linaro.org>
KernelArgumentBlock is defined as a class in KernelArgumentBlock.h, but
forward declarations refer to it as a struct.
While this is essentially the same, the mismatch causes a compiler
warning in clang (and may cause warnings in future versions of gcc) in
code that is supposed to be compiled with -Werror.
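The mismatch in miniature (the member is illustrative):

    struct KernelArgumentBlock;  // forward declaration says 'struct'...

    class KernelArgumentBlock {  // ...but the definition says 'class',
     public:                     // so clang's -Wmismatched-tags fires,
      int argc;                  // which -Werror turns into an error.
    };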
Change-Id: I4ba49d364c44d0a42c276aff3a8098300dbdcdf0
Signed-off-by: Bernhard Rosenkraenzer <Bernhard.Rosenkranzer@linaro.org>
Null or constant dereferences occur if the properties are not initialized.
This shouldn't happen on Android devices, but it can happen when testing
bionic's libc.so on a Linux host.
Change-Id: I8f047cbe17d0e7bcde40ace000a8aa53789c16cb
Signed-off-by: Pavel Chupin <pavel.v.chupin@intel.com>
clock_gettime was returning EINVAL for the values
produced by pthread_getcpuclockid.
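The failing sequence, as a minimal sketch:

    #include <pthread.h>
    #include <time.h>

    int main(void) {
      clockid_t c;
      pthread_getcpuclockid(pthread_self(), &c);
      struct timespec ts;
      return clock_gettime(c, &ts);  /* was failing with EINVAL */
    }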
Bug: 10346183
(cherry picked from commit 9b06cc3c1b)
Change-Id: Ib81a7024c218a4502f256c3002b9030e2aaa278d
clock_gettime was returning EINVAL for the values
produced by pthread_getcpuclockid.
Bug: 10346183
Change-Id: Iabe643d7d46110bb311a0367aa0fc737f653208e
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple
fall-through from the chk code into the body of the real
implementation.
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk: the first call's dest_len is length + 1, the
second call's dest_len is length, and the third call's dest_len is
length - 1 (see the sketch after this list). Verified that the first two
calls pass and the third fails. Examined the logcat output on all nexus
devices to verify that the fortify error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings are relatively small (about 1%).
For small copies, the savings are large on cortex-a15/krait devices
(between 5% and 30%).
For cortex-a9 and small copies, the speed-up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings are also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
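The three-call pattern from the test above, sketched for __memcpy_chk
(buffer sizes illustrative; __memset_chk is analogous):

    #include <stddef.h>

    extern void* __memcpy_chk(void*, const void*, size_t, size_t);

    char dst[32];
    char src[16];

    void chk_test(void) {
      __memcpy_chk(dst, src, 16, 17);  /* dest_len = length + 1: passes */
      __memcpy_chk(dst, src, 16, 16);  /* dest_len = length:     passes */
      __memcpy_chk(dst, src, 16, 15);  /* dest_len = length - 1: fortify error */
    }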
Bug: 9293744
Merge from internal master.
(cherry picked from commit 7c860db074)
Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52
Use the new __bionic_name_mem function to name malloc'd memory as
"libc_malloc" on kernels that support it.
Change-Id: I7235eae6918fa107010039b9ab8b7cb362212272
This only works on some kernels, and only on page-aligned regions of
anonymous memory. The name will show up in /proc/pid/maps as
[anon:<name>] and in /proc/pid/smaps as Name: <name>.
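The underlying mechanism is the PR_SET_VMA prctl extension found in Android
kernels; a sketch, with the constants spelled out in case the headers lack
them:

    #include <stddef.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_VMA
    #define PR_SET_VMA           0x53564d41  /* 'SVMA' */
    #define PR_SET_VMA_ANON_NAME 0
    #endif

    /* Returns 0 on success; fails on kernels without the extension. */
    int name_anonymous_region(void* map, size_t size, const char* name) {
      return prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME,
                   (unsigned long) map, size, (unsigned long) name);
    }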
Change-Id: If31667cf45ff41cc2a79a140ff68707526def80e
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple
fall-through from the chk code into the body of the real
implementation.
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk: the first call's dest_len is length + 1, the
second call's dest_len is length, and the third call's dest_len is
length - 1. Verified that the first two calls pass and the third fails.
Examined the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings are relatively small (about 1%).
For small copies, the savings are large on cortex-a15/krait devices
(between 5% and 30%).
For cortex-a9 and small copies, the speed-up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings are also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98
Also make sysconf use PTHREAD_STACK_MIN rather than redefining its own,
different constant.
Bug: 9997352
Change-Id: I9a8e7d2b18e691439abfb45533e82c36eee9e81d
off_t is signed to support seeking backwards, but that's a liability
when using off_t to represent a subset of a file.
Change-Id: I2a3615166eb16212347eb47f1242e3bfb93c2022
Restoring DEFAULT_MMAP_THRESHOLD to 64k, the way it was before
999089181e.
This forces allocations in the 64k-256k range to be mmapped.
Change-Id: Iace55ed638edd272b3e94fa6cd2ddd349042be84
Signed-off-by: Rom Lemarchand <romlem@google.com>
This reverts commits eb1b07469f and
d14dc3b87f, and fixes the bug where
we were calling mmap (which might cause errno to be set) before
__set_tls (which is required to implement errno).
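The ordering constraint, sketched (the setup details are hypothetical; the
point is that errno expands to a TLS slot access, so nothing that can set
errno may run before __set_tls):

    #include <sys/mman.h>

    extern int __set_tls(void* tls);

    static void* tls_slots[64];

    void early_init(void) {
      /* Must come first: after this, errno has somewhere to live. */
      __set_tls(tls_slots);

      /* Only now are calls that may set errno, like mmap, safe. */
      void* p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      (void) p;
    }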
Bug: 8557703
Change-Id: I2c36d00240c56e156e1bb430d8c22a73a068b70c
We notify debuggerd of problems by installing signal handlers. That's
fine except when the signal is caused by us running off the end of
a thread's stack and into the guard page.
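A handler can't run on a stack that has already overflowed, so the usual
remedy is an alternate signal stack; a sketch of that idea (not necessarily
exactly what the fix does):

    #include <signal.h>
    #include <stdlib.h>

    void install_handler(void (*handler)(int, siginfo_t*, void*)) {
      /* Give the handler its own stack so it still runs after a
         stack-overflow SIGSEGV in the guard page. */
      stack_t ss;
      ss.ss_sp = malloc(SIGSTKSZ);
      ss.ss_size = SIGSTKSZ;
      ss.ss_flags = 0;
      sigaltstack(&ss, NULL);

      struct sigaction sa;
      sigemptyset(&sa.sa_mask);
      sa.sa_sigaction = handler;
      sa.sa_flags = SA_SIGINFO | SA_ONSTACK;
      sigaction(SIGSEGV, &sa, NULL);
    }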
Bug: 8557703
Change-Id: I1ef65b4bb3bbca7e9a9743056177094921e60ed3
pthread_getattr_np was reporting the values supplied to us, not the values we
actually used, which is kinda the whole point of pthread_getattr_np.
pthread_attr_setguardsize and pthread_attr_setstacksize were returning EINVAL
for any size that wasn't a multiple of the system page size. This is
unnecessary: we can just round up, as POSIX suggests and glibc already does.
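The rounding is a one-liner (sketch; page_size is a power of two):

    #include <stddef.h>

    static size_t round_up_to_page(size_t size, size_t page_size) {
      return (size + page_size - 1) & ~(page_size - 1);
    }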
Also improve the error reporting for pthread_create failures.
Change-Id: I7ebc518628a8a1161ec72e111def911d500bba71