The Android build system always links against libstdc++.so anyway. Having
operator new and operator delete in a separate library means we can't use
constructors and destructors on heap-allocated objects inside the C library,
which is quite an unfortunate limitation.
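As a toy illustration (hypothetical helper, not actual bionic code) of what this
buys us: with operator new and operator delete in libc.so itself, internal code
can manage heap objects with real constructors and destructors instead of
malloc() plus manual initialization.

  #include <stddef.h>

  // Hypothetical internal helper; previously, using new/delete here would
  // have dragged in libstdc++.so.
  class ScopedBuffer {
   public:
    explicit ScopedBuffer(size_t size) : data_(new char[size]), size_(size) {}
    ~ScopedBuffer() { delete[] data_; }
    char* data() { return data_; }
    size_t size() const { return size_; }
   private:
    char* data_;
    size_t size_;
  };

  // Constructors and destructors now run as expected for heap-allocated
  // objects inside the C library.
  static ScopedBuffer* make_scratch_buffer() { return new ScopedBuffer(4096); }
  static void free_scratch_buffer(ScopedBuffer* b) { delete b; }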
This will be cheaper too: on LP64 we can stop linking against the [now empty]
libstdc++.so, giving the dynamic linker one less library to worry about for
every process.
There's precedent too: we already have no libpthread or librt.
For now I'm leaving the include files where they are, and I'm generating a
dummy libstdc++.so and libstdc++.a. We can come back and clean that up later
if all goes well.
Bug: 13367666
Change-Id: I6f3e27ea7c30d03d6394965d0400c9dc87fa83db
The x86_64 build was failing because clone.S had a call to __thread_entry,
which ended up in a different intermediate .a on the way to making libc.so,
and the linker couldn't statically guarantee that such a relocation would be
possible.
ld: error: out/target/product/generic_x86_64/obj/STATIC_LIBRARIES/libc_common_intermediates/libc_common.a(clone.o): requires dynamic R_X86_64_PC32 reloc against '__thread_entry' which may overflow at runtime; recompile with -fPIC
This patch addresses that by ensuring that the caller and callee end up in the
same intermediate .a. While I'm here, I've tried to clean up some of the mess
that led to this situation too. In particular, this removes libc/private/ from
the default include path (except for the DNS code), and splits out the DNS
code into its own library (since it's a weird special case of upstream NetBSD
code that's diverged so heavily it's unlikely ever to get back in sync).
There's more cleanup of the DNS situation possible, but this is definitely a
step in the right direction, and it's more than enough to get x86_64 building
cleanly.
Change-Id: I00425a7245b7a2573df16cc38798187d0729e7c4
Remove the hand-collated copyright notices, and switch to a script that pulls
the copyright headers out of every file and collects the unique ones.
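For illustration only, here's a rough C++17 sketch of that idea (the real
script isn't part of bionic's C code, and all names here are made up): read the
leading comment block of every source file and keep the unique ones.

  #include <filesystem>
  #include <fstream>
  #include <iostream>
  #include <set>
  #include <sstream>
  #include <string>

  namespace fs = std::filesystem;

  // Treat everything up to the first non-comment, non-blank line as the
  // file's copyright header.
  static std::string leading_header(const fs::path& path) {
    std::ifstream in(path);
    std::ostringstream header;
    std::string line;
    while (std::getline(in, line)) {
      bool comment_ish = line.empty() || line.rfind("/*", 0) == 0 ||
                         line.rfind(" *", 0) == 0 || line.rfind("*/", 0) == 0 ||
                         line.rfind("//", 0) == 0;
      if (!comment_ish) break;
      header << line << '\n';
    }
    return header.str();
  }

  int main(int argc, char** argv) {
    std::set<std::string> unique_headers;
    for (const auto& entry :
         fs::recursive_directory_iterator(argc > 1 ? argv[1] : ".")) {
      if (!entry.is_regular_file()) continue;
      const auto ext = entry.path().extension();
      if (ext == ".c" || ext == ".h" || ext == ".cpp") {
        unique_headers.insert(leading_header(entry.path()));
      }
    }
    for (const auto& header : unique_headers) {
      std::cout << header << "\n----------------------------------------\n";
    }
  }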
Change-Id: Ied3b98b3f56241df97166c410ff81de4e0157c9d
In particular, this affects assert(3) and __cxa_pure_virtual, both of
which have managed to confuse people this week by apparently aborting
without reason (because stderr normally goes nowhere).
Bug: 6852995
Bug: 6840813
Change-Id: I7f5d17d5ddda439e217b7932096702dc013b9142
The root of the problem is that the existing implementation is based on the
ARM C++ ABI, which mandates a different guard variable layout than the
Itanium/x86 C++ one.
This patch modifies the implementation in a way that satisfies both ABIs (and
doesn't require changing the toolchains).
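To make the compatibility point concrete, here's a minimal sketch (not the
actual bionic code, and omitting the blocking needed when two threads race to
initialize) of a guard implementation that only touches the guard's low byte.
The ARM ABI's inlined check tests bit 0 of a 32-bit guard, while the Itanium
ABI's inlined check tests the first byte of a 64-bit guard, so on a
little-endian target a single byte store satisfies both.

  #include <stdint.h>

  // Big enough and aligned for the Itanium 64-bit guard; the ARM ABI only
  // ever looks at the first 32 bits, and both inlined checks only care
  // about the low byte.
  union guard_t {
    uint8_t bytes[8];
    uint64_t aligner;
  };

  extern "C" int __cxa_guard_acquire(guard_t* gv) {
    // The compiler's inlined fast path already saw "not constructed";
    // recheck with acquire semantics and tell the caller whether it should
    // run the constructor. (Real code must also make other threads wait
    // until __cxa_guard_release/_abort is called.)
    if (__atomic_load_n(&gv->bytes[0], __ATOMIC_ACQUIRE) != 0) return 0;
    return 1;
  }

  extern "C" void __cxa_guard_release(guard_t* gv) {
    // Setting the low byte marks the object constructed under both ABIs:
    // it's bit 0 of the ARM 32-bit guard and the first byte of the Itanium
    // 64-bit guard.
    __atomic_store_n(&gv->bytes[0], 1, __ATOMIC_RELEASE);
  }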
Change-Id: I885e9adc7f088b9c0a78355bd752f1e6aeec9f07
Signed-off-by: Fengwei Yin <fengwei.yin@intel.com>
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Bruce Beare <bruce.j.beare@intel.com>
We're going to modify the __atomic_xxx implementation to provide
full memory barriers, to avoid problems for NDK machine code that
links to these functions.
The first step is to remove their usage from our platform code.
We now use inlined versions of the same functions for a slight
performance boost.
+ remove obsolete atomics_x86.c (was never compiled)
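As a sketch of the pattern (the helper names are made up, and the GCC __sync
builtin stands in for whatever the platform headers actually use), platform
code now inlines a primitive with full-barrier semantics instead of calling
the out-of-line __atomic_xxx functions:

  // Illustrative inline replacement for a call like __atomic_inc(&counter).
  // __sync_fetch_and_add acts as a full memory barrier on both ARM and x86,
  // matching the stronger semantics the NDK-facing __atomic_xxx functions
  // are about to guarantee, without the cost of an out-of-line call.
  static inline int platform_atomic_inc(volatile int* addr) {
    return __sync_fetch_and_add(addr, 1);  // returns the previous value
  }

  static inline int platform_atomic_dec(volatile int* addr) {
    return __sync_fetch_and_sub(addr, 1);  // returns the previous value
  }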
NOTE: This improvement was benchmarked on various devices. Timing a
pthread mutex lock + atomic increment + unlock sequence (a sketch of
the measured loop follows the numbers below), we get:
- ARMv7 emulator, running on a 2.4 GHz Xeon:
    before: 396 ns, after: 288 ns
- x86 emulator in KVM mode on the same machine:
    before: 27 ns, after: 27 ns
- Google Nexus S, in ARMv7 mode (single-core):
    before: 82 ns, after: 76 ns
- Motorola Xoom, in ARMv7 mode (multi-core):
    before: 121 ns, after: 120 ns
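The measured loop looks roughly like this (a sketch; the iteration count and
output are illustrative, not the actual benchmark harness):

  #include <pthread.h>
  #include <stdio.h>
  #include <time.h>

  static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
  static volatile int g_counter = 0;

  // Time a lock + atomic increment + unlock sequence; the numbers above
  // are the per-iteration cost on each device.
  int main() {
    const int kIterations = 10 * 1000 * 1000;
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < kIterations; ++i) {
      pthread_mutex_lock(&g_lock);
      __sync_fetch_and_add(&g_counter, 1);
      pthread_mutex_unlock(&g_lock);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
                   (end.tv_nsec - start.tv_nsec);
    printf("%lld ns per iteration\n", ns / kIterations);
    return 0;
  }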
The code has also been rebuilt in ARMv5TE mode for correctness.
Change-Id: Ic1dc72b173d59b2e7af901dd70d6a72fb2f64b17