While studying the implementation of POSIX pthread_rwlock* functions, I noticed
that two functions were marked __always_inline twice. "They must really mean it
this time."
Also add the `inline` keyword back to one other use of __always_inline,
for consistency with the other uses of __always_inline throughout the codebase.
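For illustration, the redundancy and the fix look roughly like this
(hypothetical declarations, not the exact bionic code):

    /* Before: the attribute was applied twice; once is enough. */
    static __always_inline __always_inline int rwlock_op(void);

    /* After: one attribute, plus the `inline` keyword, matching the
     * prevailing `static inline __always_inline` style. */
    static inline __always_inline int rwlock_op(void);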
Change-Id: Ibf9eaed5fc9fd03afcdd969cff82dec71a8ce30f
Historical code still uses this, and people work around its absence
locally. All of iOS/macOS and musl/glibc have this.
Change-Id: I119834f535b346275be5fa1df3c323eee9e242cc
Platform and future NDK releases will have no PAGE_SIZE by default,
unless __BIONIC_DEPRECATED_PAGE_SIZE_MACRO is specified.
This ensures that when people use these headers with non-standard build
systems, they will still become aware of the changes.
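A minimal sketch of what affected code can do (the helper name is
illustrative): either define __BIONIC_DEPRECATED_PAGE_SIZE_MACRO before
including the headers to keep the old macro, or query the page size at
runtime:

    #include <unistd.h>

    static size_t io_buffer_size(void) {
    #if defined(PAGE_SIZE)
      /* Only visible if __BIONIC_DEPRECATED_PAGE_SIZE_MACRO is defined. */
      return PAGE_SIZE;
    #else
      /* Portable: ask at runtime, correct for any page size. */
      return (size_t)getpagesize();
    #endif
    }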
Bug: 312546062
Test: build/boot
Change-Id: I29f5de2cd5d59d3cefdd45a6da1ccdd7c12f1f19
Use recoverable mode for system processes and system apps as well.
Given that we're a sampled bug detector anyway, why not let these
processes continue? This might improve the user experience if something
crashes that would otherwise force a SysUI reboot (like system_server
going down). And, hey, starting up processes is expensive.
Bug: N/A
Test: atest CtsGwpAsanTestCases
Change-Id: Ia6be4fcf3b3ed55a3089587d060aba7ab318cf97
The address of contents is only guaranteed to be aligned to 4KB on
4KB page size systems, but the compiler was generating code that
assumed it to be aligned to 64KB, which broke on a 4KB page size
system. This probably ought to be fixed, either in the compiler so
it can't generate code assuming such large alignments (it's hard to
see what useful optimizations are possible by assuming such large
alignments anyway) or by making bionic respect the p_align field in
PT_LOAD, but for now let's hide the address behind an asm statement
that the compiler can't see through.
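The idiom in question looks roughly like this (a sketch, not the exact
code): an empty asm statement with a read-write register constraint
makes the pointer opaque to the optimizer, so it can no longer assume
any particular alignment for it:

    static void* hide_from_compiler(void* p) {
      __asm__("" : "+r"(p));  /* compiler must treat p as arbitrary */
      return p;
    }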
As a result of this change, the code generation for the function
__bionic_setjmp_cookie_get on x86 changed so that it clobbers ecx,
as allowed by the calling convention. However, the x86 assembly
implementation for setjmp was assuming that it wouldn't be
clobbered. Fix it.
Bug: 332534664
Change-Id: I07fa737d8cf892d27ce08c305dafb0a53fef36cb
It would be nicer to do this in the build system properly, and skip
linking scudo altogether when using HWASan, but this workaround is
almost as good, so we should submit this for now.
Test: CtsWrapHwasanTestCases
Change-Id: If38df37daadae93b8979279dce7f2c9cc5bc03f8
Submitted on behalf of a third-party: Linaro Limited
License rights, if any, to the submission are granted solely by the
copyright owner of such submission under its applicable intellectual
property.
Copyright (c) 2012, Linaro Limited
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the Linaro nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Origin Project URL: https://android.googlesource.com/platform/bionic/
Commit ID: 7e4fa56099
Third Party code includes additions/modifications from Qualcomm Innovation Center, Inc.
Test: All
Change-Id: I479a572a325e27262d27aa37c516618e4322e9bb
Submitted on behalf of a third-party: Arm Limited
License rights, if any, to the submission are granted solely by the
copyright owner of such submission under its applicable intellectual
property.
Copyright (c) 2012-2022, Arm Limited.
SPDX-License-Identifier: MIT OR Apache-2.0 WITH LLVM-exception
Origin Project URL: https://github.com/ARM-software/optimized-routines
Tag: v24.01
Third Party code includes additions/modifications from Qualcomm Innovation Center, Inc.
Test: All
Change-Id: I0c97398a435e3f8ddf8ad38bc6bd71cc0d78aea5
See code comments for details. I think everything we could reasonably
upstream from this file is now an upstream pull request. If they get in,
I'll try my luck with the arm32 TLS constant (which is a bit more
interesting because there's a probably obsolete conflict upstream, but
someone who knows about FreeBSD/arm32 would want to look at that).
Test: treehugger
Change-Id: I5bf197045940d25efb2a520716499d924c362b57
Not useful right now, but Qualcomm has an Oryon memset they'd like to
use, and there's no reason to treat memrchr as a weird special case.
Bug: https://issuetracker.google.com/330105715
Test: treehugger
Change-Id: Id879479bf4f45433debcb3fe08cfa96bb1eb3b93
RTLD_DEFAULT/RTLD_NEXT already linked to the functions, but the functions should link to the constants too.
Change-Id: I854b632092f077d71918e99b3caec874e1df1ef3
Looks like I'd been bad here, and added new stuff to this file rather
than <elf.h> directly. I've also done nothing to upstream any of this.
This patch at least addresses the former problem, moving our stuff out
into <elf.h>.
Rather than *delete* anything that conflicts with Linux in elf_common.h,
I've disabled it with // or #if, and marked those as Android changes to
make it less likely that the next update accidentally drops them (which
isn't super likely, since most of them should actually cause build
failures when they conflict with uapi).
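The pattern looks roughly like this (hypothetical names, not real
definitions from the file):

    #if 0 /* ANDROID: conflicts with the uapi <linux/elf.h> definition. */
    #define NT_EXAMPLE 1
    #endif
    // #define EM_EXAMPLE 2 /* ANDROID: conflicts with uapi. */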
Test: treehugger
Change-Id: Id0deccc7305c60b0f708b55e2eed0dedc0bca41d
arm32/arm64: Previously, the loader miscalculated a negative value for
offset_bionic_tcb_ when the executable's alignment was greater than
(8 * sizeof(void*)). The process then tended to crash.
riscv: Previously, the loader didn't propagate the p_align field of the
PT_TLS segment into StaticTlsLayout::alignment_, so high alignment
values were ignored.
__bionic_check_tls_alignment: Stop capping alignment at page_size().
There is no need to cap it, and the uncapped value is necessary for
correctly positioning the TLS segment relative to the thread pointer
(TP) for ARM and x86. The uncapped value is now used for computing
static TLS layout, but only a page of alignment is actually provided:
* static TLS: __allocate_thread_mapping uses mmap, which provides only
a page's worth of alignment
* dynamic TLS: BionicAllocator::memalign caps align to page_size()
* There were no callers to StaticTlsLayout::alignment(), so remove it.
Allow PT_TLS.p_align to be 0: quietly convert it to 1.
For static TLS, ensure that the address of a TLS block is congruent to
p_vaddr, modulo p_align. That is, ensure this formula holds:
(&tls_block % p_align) == (p_vaddr % p_align)
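A minimal sketch of satisfying that congruence (not the loader's actual
code; assumes p_align is a power of two, as ELF requires):

    #include <stdint.h>

    /* Returns the lowest address >= base that is congruent to p_vaddr
     * modulo p_align. */
    static uintptr_t place_tls_block(uintptr_t base, uintptr_t p_vaddr,
                                     uintptr_t p_align) {
      if (p_align == 0) p_align = 1;  /* quietly convert 0 to 1 */
      uintptr_t offset = p_vaddr & (p_align - 1);
      uintptr_t addr = ((base + p_align - 1) & ~(p_align - 1)) + offset;
      return (addr - base >= p_align) ? addr - p_align : addr;
    }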
For dynamic TLS, a TLS block is still allocated congruent to 0 modulo
p_align. Fixing dynamic TLS congruence is mostly a separate problem
from fixing static TLS congruence, and requires changing the dynamic
TLS allocator and/or DTV structure, so it should be fixed in a
later follow-up commit.
Typically (p_vaddr % p_align) is zero, but it's currently possible to
get a non-zero value with LLD: when .tbss has greater than page
alignment, but .tdata does not, LLD can produce a TLS segment where
(p_vaddr % p_align) is non-zero. LLD calculates TP offsets assuming
the loader will align the segment using (p_vaddr % p_align).
Previously, Bionic and LLD disagreed on the offsets from the TP to
the executable's TLS variables.
Add unit tests for StaticTlsLayout in bionic-unit-tests-static.
See also:
* https://github.com/llvm/llvm-project/issues/40872
* https://sourceware.org/bugzilla/show_bug.cgi?id=24606
* https://reviews.llvm.org/D61824
* https://reviews.freebsd.org/D31538
Bug: http://b/133354825
Bug: http://b/328844725
Bug: http://b/328844839
Test: bionic-unit-tests bionic-unit-tests-static
Change-Id: I8850c32ff742a45d3450d8fc39075c10a1e11000
We're starting to see projects _only_ use the SPDX identifiers (and
they're more readable "at a glance" anyway), so it's probably time to
include these...
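For example, a license header would carry a single line like this
(identifier shown is illustrative):

    /* SPDX-License-Identifier: BSD-2-Clause */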
Test: N/A
Change-Id: I5c76d77dcd392a8db1166108e410389d349a42c3
Vendor modules do not follow bionic versioning but define their own
versioning for LLNDK. Ignore the __INTRODUCED_IN annotation for
vendor modules.
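A simplified sketch of the idea (not the exact header text): under the
vendor API surface the annotation expands to nothing, so no availability
check is applied:

    #if defined(__ANDROID_VENDOR__)
    /* Vendor modules use their own LLNDK versioning; ignore this. */
    #define __INTRODUCED_IN(api_level)
    #else
    #define __INTRODUCED_IN(api_level) \
        __attribute__((__availability__(android, introduced = api_level)))
    #endif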
Bug: 302113279
Test: build trunk-staging and next configurations
Change-Id: I04646b524d17f7ae47f0f96cb98f221f3e821629
We cannot use a WriteProtected because we are accessing it in a
multithreaded context.
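Roughly the distinction, with illustrative names (not bionic's actual
globals): a WriteProtected lives on a read-only page that must be
briefly remapped writable to mutate, which is only safe while
single-threaded, whereas a plain atomic can be updated from any thread:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Illustrative replacement: safe to flip from any thread. */
    static _Atomic bool g_tagged_stack_enabled = false;

    void set_tagged_stack_enabled(bool value) {
      atomic_store_explicit(&g_tagged_stack_enabled, value,
                            memory_order_release);
    }

    bool tagged_stack_enabled(void) {
      return atomic_load_explicit(&g_tagged_stack_enabled,
                                  memory_order_acquire);
    }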
Test: atest memtag_stack_dlopen_test w/ MTE
Test: atest bionic-unit-tests w/ MTE
Test: atest bionic-unit-tests on _fullmte
Bug: 328256432
Change-Id: I39faa75f97fd5b3fb755a46e88346c17c0e9a8e2
We would get the SP inside of memtag_handle_longjmp, which could prevent
us from detecting the case where a longjmp is going into a function that
had already returned. This change makes the behaviour more predictable.
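A sketch of the change (names and signature are illustrative, not
bionic's actual API): capture the SP in the frame that is performing the
longjmp and pass it down, rather than deriving it inside the handler,
whose own frame sits below the one being checked:

    /* Illustrative prototype: the caller now supplies its own SP. */
    extern void handle_longjmp(void* target_sp, void* jumping_sp);

    void longjmp_checks(void* target_sp) {
      /* Taken here, at the call site, not inside the handler. */
      void* sp = __builtin_frame_address(0);
      handle_longjmp(target_sp, sp);
    }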
Change-Id: I75bf931c8f4129a2f38001156b7bbe0b54a726ee