Also clean up <signal.h> and revert the hacks in linker/debugger.cpp
that were, until now, necessary for 64-bit.
Change-Id: I3b0554ca8a49ee1c97cda086ce2c1954ebc11892
I originally modified the krait mainloop prefetch from cacheline * 8 to
cacheline * 2. This causes a perf degradation for copies too big to fit
in the cache. Fix this back to the original * 8. I tried other
multiples, but * 8 is the sweet spot on krait.
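For illustration only, here is the shape of that change in C terms; the
real code is the krait memcpy assembly, and the 64-byte line size and the
loop structure below are simplifying assumptions, not taken from it:

    #include <stddef.h>
    #include <string.h>

    #define CACHE_LINE_SIZE 64  /* assumption, not the real code's value */

    /* Main-loop prefetch distance: fetch source data 8 cache lines ahead
     * of the line currently being copied (the "* 8" this change restores). */
    static void copy_by_lines(char *dst, const char *src, size_t lines) {
        for (size_t i = 0; i < lines; ++i) {
            __builtin_prefetch(src + (i + 8) * CACHE_LINE_SIZE, /*rw=*/0);
            memcpy(dst + i * CACHE_LINE_SIZE, src + i * CACHE_LINE_SIZE,
                   CACHE_LINE_SIZE);
        }
    }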
Bug: 11221806
Change-Id: I1f75fad6440f7417e664795a6e7b5616f6a29c45
Let's have both use rt_sigprocmask, like in glibc. The 64-bit ABIs
can share the same code as the 32-bit ABIs.
Also, let's test the return side of these calls, not just the
setting side.
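A minimal sketch of the resulting shape, assuming a hypothetical stub
name __rt_sigprocmask (in bionic the real stub is auto-generated from
SYSCALLS.TXT):

    #include <signal.h>
    #include <stddef.h>

    /* Hypothetical stub for the rt_sigprocmask syscall; the last argument
     * tells the kernel how large a sigset to read/write. */
    extern int __rt_sigprocmask(int how, const sigset_t *new_set,
                                sigset_t *old_set, size_t sigset_size);

    int my_sigprocmask(int how, const sigset_t *new_set, sigset_t *old_set) {
        /* Because the sigset size is passed explicitly, the same C code
         * can serve the 32-bit and 64-bit ABIs (assuming sigset_t here
         * matches the size the kernel expects). */
        return __rt_sigprocmask(how, new_set, old_set, sizeof(sigset_t));
    }

A round-trip test then exercises the return side: block a signal, call
again with a NULL new set, and assert the reported old set contains it.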
Bug: 11069919
Change-Id: I11da99f85b5b481870943c520d05ec929b15eddb
The x86_64 build was failing because clone.S had a call to __thread_entry which
was being added to a different intermediate .a on the way to making libc.so,
and the linker couldn't guarantee statically that such a relocation would be
possible.
ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o): requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which may overflow at runtime; recompile with -fPIC
This patch addresses that by ensuring that the caller and callee end up in the
same intermediate .a. While I'm here, I've tried to clean up some of the mess
that led to this situation too. In particular, this removes libc/private/ from
the default include path (except for the DNS code), and splits out the DNS
code into its own library (since it's a weird special case of upstream NetBSD
code that's diverged so heavily it's unlikely ever to get back in sync).
There's more cleanup of the DNS situation possible, but this is definitely a
step in the right direction, and it's more than enough to get x86_64 building
cleanly.
Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4
We shouldn't have been passing only the bottom 32 bits of the address used
for pthread_join to the kernel.
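For illustration, the truncation pattern being fixed (hypothetical names;
the address in question is the one the kernel uses to signal thread exit):

    #include <stdint.h>

    /* Buggy: squeezing a pointer through a 32-bit integer silently drops
     * the top half of a 64-bit address. */
    uint32_t join_address_buggy(void *tid_address) {
        return (uint32_t)(uintptr_t)tid_address;  /* truncates on LP64 */
    }

    /* Fixed: keep the full pointer width all the way to the kernel. */
    uintptr_t join_address(void *tid_address) {
        return (uintptr_t)tid_address;
    }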
Change-Id: I487e5002d60c27adba51173719213abbee0f183f
This is basically the other half of I5de76f6c46ac87779f207d568a86bb453e2414de
from Pavel Chupin <pavel.v.chupin@intel.com>, but taking the exact upstream
_types.h instead of the modified version. (I was confused when I suggested
otherwise.)
I've also cleaned up the internal_types.h situation; we weren't gaining
anything from these empty files, and there is no upstream internal_types.h
for x86_64.
Change-Id: I802a9a6a8df1c979e820659212c75a47c2ef392e
The memcpy.a15.S/strcmp.a15.S files were submitted by ARM for use as the
basis for the memcpy/strcmp implementations in cortex-a15.
memset.S was moved into the generic directory.
NOTE: memcpy.a9.S was submitted by Linaro to be the basis for the memcpy
for cortex-a9/cortex-a15 but has not been incorporated yet.
Bug: 10971279
Merge from internal master.
(cherry-picked from 48fc3e8b9f)
Change-Id: I8f9297578990d517f004e4e8840e2b2cbd5a47d8
The check for __ARM_FEATURE_DSP being defined is pointless since it
is always defined.
Bug: 10971279
Merge from internal master.
(cherry-picked from d2642fa70c)
Change-Id: If23ab3271f4da0c38cd531ffdc9a7e5eed6ec5dc
Much of the per-architecture duplication can be removed, so let's do so
before we add the 64-bit architectures.
Change-Id: Ieb796503c8e5353ea38c3bab768bb9a690c9a767
I accidentally did a signed comparison of the size_t values passed in
for three of the _chk functions. Change them to unsigned compares.
Add three new tests to verify this failure is fixed.
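For illustration, the difference (simplified; the real checks live in the
_chk implementations):

    #include <stddef.h>
    #include <sys/types.h>

    /* Buggy: casting to a signed type makes a huge size_t look negative.
     * dest_len == (size_t)-1, for example, compares as -1 and makes the
     * check misfire. */
    int fits_signed(size_t n, size_t dest_len) {
        return (ssize_t)n <= (ssize_t)dest_len;  /* wrong */
    }

    /* Fixed: size_t values must be compared as the unsigned type they are. */
    int fits_unsigned(size_t n, size_t dest_len) {
        return n <= dest_len;  /* right */
    }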
Bug: 10691831
Merge from internal master.
(cherry-picked from 883ef2499c)
Change-Id: Id9a96b549435f5d9b61dc132cf1082e0e30889f5
The backtrace when a fortify check failed was not correct. This change
adds all of the necessary directives to get a correct backtrace.
Fix the strcmp directives and change all labels to local labels.
Testing:
- Verify that the runtime can decode the stack for __memcpy_chk, __memset_chk,
__strcpy_chk, __strcat_chk fortify failures.
- Verify that gdb can decode the stack properly when hitting a fortify check.
- Verify that the runtime can decode the stack for a seg fault for all of the
_chk functions and for memcpy/memset.
- Verify that gdb can decode the stack for a seg fault for all of the _chk
functions and for memcpy/memset.
- Verify that the runtime can decode the stack for a seg fault for strcmp.
- Verify that gdb can decode the stack for a seg fault in strcmp.
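For reference, a minimal way to trigger a fortify failure of this kind,
assuming a build with -O2 -D_FORTIFY_SOURCE=2:

    #include <string.h>

    int main(int argc, char **argv) {
        char buf[8];
        /* The compiler rewrites this into __strcpy_chk(buf, argv[0], 8);
         * run it with a long enough argv[0] and it aborts with a fortify
         * error whose backtrace should decode through the _chk code. */
        strcpy(buf, argv[0]);
        return buf[0];
    }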
Bug: 10342460
Bug: 10345269
Merge from internal master.
(cherry-picked from 05332f2ce7)
Change-Id: Ibc919b117cfe72b9ae97e35bd48185477177c5ca
The libcorkscrew stack unwinder does not understand cfi directives,
so add .save directives to let it function properly.
Also add the directives in to strcmp.S and fix a missing set of
directives in cortex-a9/memcpy_base.S.
Bug: 10345269
Merge from internal master.
(cherry-picked from 5f7ccea3ff)
Change-Id: If48a216203216a643807f5d61906015984987189
This change pulls the memcpy code out into a new file so that
__strcpy_chk and __strcat_chk can use it with an include.
The new versions of the two chk functions use assembly versions
of strlen and memcpy to implement this check. This allows near
parity with the assembly versions of strcpy/strcat. It also means that
as memcpy implementations get faster, so do the chk functions.
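In C terms, the new logic is roughly the following sketch (the
failure-path name is illustrative, and the real code is assembly):

    #include <string.h>

    extern void fortify_fail(const char *msg);  /* illustrative name */

    char *strcpy_chk_sketch(char *dst, const char *src, size_t dst_len) {
        size_t src_len = strlen(src);          /* the assembly strlen */
        if (src_len >= dst_len)                /* no room for the '\0' */
            fortify_fail("strcpy: prevented write past end of buffer");
        return memcpy(dst, src, src_len + 1);  /* the assembly memcpy */
    }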
Other included changes:
- Change all of the assembly labels to local labels. The other labels
confuse gdb and mess up backtracing.
- Add .cfi_startproc and .cfi_endproc directives so that gdb is not
confused when falling through from one function to another.
- Change all functions to use cfi directives since they are more powerful.
- Move the memcpy_chk fail code outside of the memcpy function definition
so that backtraces work properly.
- Preserve lr before the calls to __fortify_chk_fail so that the backtrace
actually works.
Testing:
- Ran the bionic unit tests. Verified all error messages in logs are set
correctly.
- Ran libc_test, replacing strcpy with __strcpy_chk and replacing
strcat with __strcat_chk.
- Ran the debugger on nexus10, nexus4, and old nexus7. Verified that the
backtrace is correct for all fortify check failures. Also verified
that the backtrace is still correct when falling through from
__memcpy_chk to memcpy, and verified the same for __memset_chk and bzero.
Verified the two different paths in the cortex-a9 memset routine that
save variables to the stack still show the backtrace properly.
Bug: 9293744
(cherry-picked from 2be91915dc)
Change-Id: Ia407b74d3287d0b6af0139a90b6eb3bfaebf2155
This change creates assembler versions of __memcpy_chk/__memset_chk
that are implemented in the memcpy/memset assembler code. This change
avoids an extra call to memcpy/memset, instead allowing a simple fall
through to occur from the chk code into the body of the real
implementation.
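Expressed as C, the check is just the following sketch; in the assembly,
the success path falls straight through into the real implementation
instead of making the memcpy call shown here (failure-path name
illustrative):

    #include <string.h>

    extern void fortify_fail(const char *msg);  /* illustrative name */

    void *memcpy_chk_sketch(void *dst, const void *src,
                            size_t n, size_t dst_len) {
        if (__builtin_expect(n > dst_len, 0))
            fortify_fail("memcpy: prevented write past end of buffer");
        return memcpy(dst, src, n);  /* fall-through point in the assembly */
    }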
Testing:
- Ran the libc_test on __memcpy_chk/__memset_chk on all nexus devices.
- Wrote a small test executable that has three calls to __memcpy_chk and
three calls to __memset_chk. In the first call dest_len is length + 1,
in the second it is length, and in the third it is length - 1.
Verified that the first two calls pass, and the third fails. Examined
the logcat output on all nexus devices to verify that the fortify
error message was sent properly.
- I benchmarked the new __memcpy_chk and __memset_chk on all systems. For
__memcpy_chk and large copies, the savings is relatively small (about 1%).
For small copies, the savings is large on cortex-a15/krait devices
(between 5% to 30%).
For cortex-a9 and small copies, the speed up is present, but relatively
small (about 3% to 5%).
For __memset_chk and large copies, the savings is also small (about 1%).
However, all processors show larger speed-ups on small copies (about 30% to
100%).
Bug: 9293744
Merge from internal master.
(cherry-picked from 7c860db074)
Change-Id: I916ad305e4001269460ca6ebd38aaa0be8ac7f52
Create one version of strcat/strcpy/strlen for cortex-a15/krait and another
version for cortex-a9.
Tested with the libc_test strcat/strcpy/strlen tests.
Also included are new tests that verify that the src for strcat/strcpy
does not overread across page boundaries.
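A sketch of how such a page-boundary test can be constructed (the layout
here is an assumption about the actual tests, but mmap/mprotect are the
standard tools):

    #include <assert.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        char *base = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(base != MAP_FAILED);
        mprotect(base + page, page, PROT_NONE);  /* guard page */

        /* "abc" ends exactly at the boundary: reading even one byte past
         * the '\0' faults, so an overreading strcpy crashes here. */
        char *src = base + page - 4;
        memcpy(src, "abc", 4);

        char dst[16];
        strcpy(dst, src);
        return dst[0] == 'a' ? 0 : 1;
    }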
NOTE: The handling of unaligned strcpy (same code in strcat) could probably
be optimized further such that the src is read 64 bits at a time instead of
the partial reads occurring now.
strlen improves only slightly, since it was already recently optimized.
Performance improvements for strcpy and strcat (using an empty dest string):
cortex-a9
- Small copies vary from about 5% to 20% as the size gets above 10 bytes.
- Copies >= 1024, about a 60% improvement.
- Unaligned copies, about 40% improvement.
cortex-a15
- Most small copies exhibit a 100% improvement, a few copies only
improve by 20%.
- Copies >= 1024, about 150% improvement.
- Unaligned copies, about 100% improvement.
krait
- Most small copies vary widely, but average a 20% improvement; the
performance then gets better, hitting about a 100% improvement when
copying 64 bytes of data.
- Copies >= 1024, about 100% improvement.
- When copying MBs of data, about 50% improvement.
- Unaligned copies, about 90% improvement.
As strcat destination strings get larger in size:
cortex-a9
- about 40% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
cortex-a15
- about 200% improvement for small dst strings (>= 32).
- about 250% improvement for dst strings >= 1024.
krait
- about 25% improvement for small dst strings (>= 32).
- about 100% improvement for dst strings >= 1024.
Merge from internal master.
(cherry-picked from d119b7b6f4)
Change-Id: I296463b251ef9fab004ee4dded2793feca5b547a
This is needed when passing -mcpu=cortex-a9 or higher on a modern
toolchain, for prebuilt library compatibility.
Change-Id: I73eb2393377914ae26216a8c2828ad973d1c1225
Tested using a static version of the strlen libc_test program
on a nexus7 that uses the generic code.
Merge from internal master.
(cherry-picked from d8d10a8994)
Change-Id: I88f7dc01dc5b5c3ac2d5580d92153bc1bc36c564
This optimized version is primarily targeted at cortex-a15.
Tested on all nexus devices using the system/extras/libc_test strlen test.
Tested alignments from 1 to 32 that are powers of 2.
Tested that strlen does not cross page boundaries at all alignments.
Speed improvements listed below:
cortex-a15
- Sizes >= 32 bytes, ~75% improvement.
- Sizes >= 1024 bytes, ~250% improvement.
cortex-a9
- Sizes >= 32 bytes, ~75% improvement.
- Sizes >= 1024 bytes, ~85% improvement.
krait
- Sizes >= 32 bytes, ~95% improvement.
- Sizes >= 1024 bytes, ~160% improvement.
Merge from internal master.
(cherry-picked from 2fc0717977)
Change-Id: I1ceceb4e745fd68e9d946f96d1d42e0cdaff6ccf
We cleaned up the auto-generated ones a while back to not touch
the stack unnecessarily if they have <= 4 arguments. This patch
cleans up some hand-crafted ones.
Also improve comments in clone.S.
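For illustration, the register discipline this relies on for 32-bit ARM,
sketched as GCC-style inline assembly (real bionic stubs are .S files):

    /* With <= 4 arguments, the EABI argument registers r0-r3 are already
     * exactly where the kernel wants syscall arguments, so a stub only
     * has to load the syscall number into r7 and trap: the stack is
     * untouched. */
    long syscall4_sketch(long nr, long a, long b, long c, long d) {
        register long r7 __asm__("r7") = nr;
        register long r0 __asm__("r0") = a;
        register long r1 __asm__("r1") = b;
        register long r2 __asm__("r2") = c;
        register long r3 __asm__("r3") = d;
        __asm__ volatile("svc #0"
                         : "+r"(r0)
                         : "r"(r7), "r"(r1), "r"(r2), "r"(r3)
                         : "memory");
        return r0;  /* kernel result (or -errno) comes back in r0 */
    }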
Change-Id: I8850bf98f2b26829385315304472a760e6880ed8
This memcpy code uses NEON/VFP to achieve very good performance
on ARMv7-A processors. It is specifically tuned for A15 but should
provide good performance on A9 also. It is equivalent to the code
in cortex-strings rev 116.
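The core idea, sketched with NEON intrinsics (the real code is
hand-scheduled assembly with prefetching and alignment handling that this
sketch omits):

    #include <arm_neon.h>
    #include <stddef.h>

    /* Move 16 bytes per iteration through a NEON quad register. */
    void *neon_copy_sketch(void *dst, const void *src, size_t n) {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n >= 16) {
            vst1q_u8(d, vld1q_u8(s));
            d += 16; s += 16; n -= 16;
        }
        while (n--) *d++ = *s++;  /* byte tail */
        return dst;
    }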
This patch is a follow-up to the existing gerrit change:
I7f6f77995f3ca903ad9c66d14261441667a2a935
This version includes a tweak for performance on misaligned
buffers and splits the header comment into license and
documentation sections.
Change-Id: Ibd2e23c8d8e01357ba0247be1d05192de3ceba69
Signed-off-by: Will Newton <will.newton@linaro.org>
This memcpy code uses NEON/VFP to achieve very good performance
on ARMv7-A processors. It is specifically tuned for A15 but should
provide good performance on A9 also. It is equivalent to the code
in cortex-strings rev 116.
This patch is a follow-up to the existing gerrit change:
I7f6f77995f3ca903ad9c66d14261441667a2a935
But this version includes a tweak for performance on misaligned
buffers.
Change-Id: I285abac0068f8ae29a1cbf7862ea8590aadaf0a7
Signed-off-by: Will Newton <will.newton@linaro.org>