Add basic assembly stubs for TLS Descriptor support in the dynamic
linker, and enable several code paths related to TLSDESC for RISC-V.
Note: This patch requires an updated toolchain that supports TLSDESC
for RISC-V, and the `-mtls-dialect=` compiler option specifically.
Test: adb shell /data/nativetest64/bionic-unit-tests/bionic-unit-tests --gtest_filter=*tls*
Bug: 322984914
Change-Id: I74bd0fa216b44b4ca2c5a5a6aec37b3fc47b00d9
I only came to improve the signature mismatch error, but I was then annoyed by the copy & paste of the other checks.
get_chunk_size() seems to be deliberately avoiding any checks, though I think that might be a bug: there should probably be a get_chunk_size() that _does_ check for most callers, and a get_chunk_size_unchecked() for the <sys/thread_properties.h> stuff that only wants to be "best effort" (though even that still has _some_ possibility of aborting, in addition to the possibility of segfaulting).
Also a bit of "include what you use" after cider complained about all the unused includes in bionic_allocator.h.
Bug: https://issuetracker.google.com/341850283
Change-Id: I278b495601353733af516a2d60ed10feb9cef36b
We only use this in one other place anyway.
Also be explicit about how `__tls_get_addr` and `___tls_get_addr` differ, since I missed that at first!
Change-Id: Ica214886c5346f118f063bca26e6dd8d74ee21f4
The maximum page size Android now supports
is 16384, and ART only supports 16KB,
so we can save a bit of space.
Bug: 332556665
Test: N/A
Change-Id: I23df607bcc5cf9e96d7b6a66169413cd1a883f7e
The address of contents is only guaranteed to be aligned to 4KB on
4KB page size systems, but the compiler was generating code that
assumed it to be aligned to 64KB, which broke on a 4KB page size
system. This probably ought to be fixed, either in the compiler so
it can't generate code assuming such large alignments (it's hard to
see what useful optimizations are possible by assuming such large
alignments anyway) or by making bionic respect the p_align field in
PT_LOAD, but for now let's hide the address behind an asm statement
that the compiler can't see through.
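For illustration only, a sketch of the idiom (not necessarily the exact change that landed): an empty asm statement with a "+r" constraint is enough to make the compiler forget what it knows about a pointer's alignment.
```
// The empty asm emits no code, but the "+r" constraint tells the compiler the
// value may have changed, so it can no longer assume any particular alignment.
static inline void* hide_alignment_from_compiler(void* p) {
  __asm__("" : "+r"(p));
  return p;
}
```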
As a result of this change, the code generation for the function
__bionic_setjmp_cookie_get on x86 changed so that it clobbers ecx,
as allowed by the calling convention. However, the x86 assembly
implementation for setjmp was assuming that it wouldn't be
clobbered. Fix it.
Bug: 332534664
Change-Id: I07fa737d8cf892d27ce08c305dafb0a53fef36cb
arm32/arm64: Previously, the loader miscalculated a negative value for
offset_bionic_tcb_ when the executable's alignment was greater than
(8 * sizeof(void*)). The process then tended to crash.
riscv: Previously, the loader didn't propagate the p_align field of the
PT_TLS segment into StaticTlsLayout::alignment_, so high alignment
values were ignored.
__bionic_check_tls_alignment: Stop capping alignment at page_size().
There is no need to cap it, and the uncapped value is necessary for
correctly positioning the TLS segment relative to the thread pointer
(TP) for ARM and x86. The uncapped value is now used for computing
static TLS layout, but only a page of alignment is actually provided:
* static TLS: __allocate_thread_mapping uses mmap, which provides only
a page's worth of alignment
* dynamic TLS: BionicAllocator::memalign caps align to page_size()
* There were no callers to StaticTlsLayout::alignment(), so remove it.
Allow PT_TLS.p_align to be 0: quietly convert it to 1.
For static TLS, ensure that the address of a TLS block is congruent to
p_vaddr, modulo p_align. That is, ensure this formula holds:
(&tls_block % p_align) == (p_vaddr % p_align)
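A hypothetical helper showing the arithmetic (assuming p_align is a power of two, after a value of 0 has been converted to 1):
```
#include <stdint.h>

// Return the smallest address at or above `base` that is congruent to
// p_vaddr modulo p_align, i.e. (result % p_align) == (p_vaddr % p_align).
static uintptr_t place_tls_block(uintptr_t base, uintptr_t p_vaddr, uintptr_t p_align) {
  uintptr_t skew = p_vaddr & (p_align - 1);       // p_vaddr % p_align
  return base + ((skew - base) & (p_align - 1));  // adds at most p_align - 1
}
```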
For dynamic TLS, a TLS block is still allocated congruent to 0 modulo
p_align. Fixing dynamic TLS congruence is mostly a separate problem
from fixing static TLS congruence, and requires changing the dynamic
TLS allocator and/or DTV structure, so it should be fixed in a
later follow-up commit.
Typically (p_vaddr % p_align) is zero, but it's currently possible to
get a non-zero value with LLD: when .tbss has greater than page
alignment, but .tdata does not, LLD can produce a TLS segment where
(p_vaddr % p_align) is non-zero. LLD calculates TP offsets assuming
the loader will align the segment using (p_vaddr % p_align).
Previously, Bionic and LLD disagreed on the offsets from the TP to
the executable's TLS variables.
Add unit tests for StaticTlsLayout in bionic-unit-tests-static.
See also:
* https://github.com/llvm/llvm-project/issues/40872
* https://sourceware.org/bugzilla/show_bug.cgi?id=24606
* https://reviews.llvm.org/D61824
* https://reviews.freebsd.org/D31538
Bug: http://b/133354825
Bug: http://b/328844725
Bug: http://b/328844839
Test: bionic-unit-tests bionic-unit-tests-static
Change-Id: I8850c32ff742a45d3450d8fc39075c10a1e11000
We cannot use a WriteProtected because we are accessing it in a
multithreaded context.
Test: atest memtag_stack_dlopen_test w/ MTE
Test: atest bionic-unit-tests w/ MTE
Test: atest bionic-unit-tests on _fullmte
Bug: 328256432
Change-Id: I39faa75f97fd5b3fb755a46e88346c17c0e9a8e2
Also enable stack MTE if the main binary links in a library that needs it.
Otherwise the following is possible:
1. a binary doesn't require stack MTE, but links in libraries that use
stg on the stack
2. that binary later dlopens a library that requires stack MTE, and our
logic in dlopen remaps the stacks with MTE
3. the libraries from step 1 now have tagged pointers with missing tags
in memory, so things go wrong
This reverts commit f53e91cc81.
Reason for revert: Fixed problem detected in b/324568991
Test: atest memtag_stack_dlopen_test with MTE enabled
Test: check crash is gone on fullmte build
Change-Id: I4a93f6814a19683c3ea5fe1e6d455df5459d31e1
These files were segregated because they were lacking a little cleanup.
Unfortunately that means this change has to do some of the cleanup, but
that's probably for the best.
Test: treehugger
Change-Id: I2dd33504787fc3313995de99e0745a0df22915b3
This reverts commit 79c9694c91.
Reason for revert: DroidMonitor: Potential culprit for Bug b/324348078 - verifying through ABTD before revert submission. This is part of the standard investigation process, and does not mean your CL will be reverted.
Change-Id: I32f7bc824900e18a7d53b025ffe3aaef0ee71802
Factor out generic __get_elf_note() logic and rename __get_elf_note() to
__find_elf_note(). Expose __get_elf_note() in libc/private/bionic_note.h
This will be used in the subsequent patch to test the presence of
NT_ANDROID_TYPE_PAD_SEGMENT note when loading segments.
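A minimal sketch of the generic note walk (the name and signature here are illustrative, not the actual bionic_note.h interface):
```
#include <link.h>
#include <string.h>

// Scan a PT_NOTE region for the first note matching the given type and owner
// name. Each note is an ElfW(Nhdr) followed by the name and the descriptor,
// both padded to 4-byte boundaries.
static const ElfW(Nhdr)* find_elf_note(const void* notes, size_t size,
                                       unsigned type, const char* owner) {
  const char* p = static_cast<const char*>(notes);
  const char* end = p + size;
  while (p + sizeof(ElfW(Nhdr)) <= end) {
    const ElfW(Nhdr)* note = reinterpret_cast<const ElfW(Nhdr)*>(p);
    const char* name = p + sizeof(ElfW(Nhdr));
    size_t advance = sizeof(ElfW(Nhdr)) + ((note->n_namesz + 3) & ~3u) +
                     ((note->n_descsz + 3) & ~3u);
    if (advance > static_cast<size_t>(end - p)) break;
    if (note->n_type == type && note->n_namesz == strlen(owner) + 1 &&
        memcmp(name, owner, note->n_namesz) == 0) {
      return note;
    }
    p += advance;
  }
  return nullptr;
}
```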
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Bug: 316403210
Change-Id: Ia7cb2f40b10cfaef402182a675087c8422b37e4d
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
BYPASS_INCLUSIVE_LANGUAGE_REASON="man" refers to manual not person
Bug: 318749472
Test: atest pthread on MTE enabled device
Test: atest memtag_stack_dlopen_test on MTE enabled device
Test: manual with NDK r26b built app with fsanitize=memtag-stack
Change-Id: Iac191c31b87ccbdc6a52c63ddd22e7b440354202
clang complains if you define a symbol and _then_ make it weak, rather
than the other way round:
/tmp/setjmp-c3c977.s:90:1: warning: sigsetjmp changed binding to STB_WEAK
.weak sigsetjmp;
^
Test: treehugger
Change-Id: Iee6b0ea456bb2e92aea810ce45f171caabaa89d2
crt_pad_segment provides a note of type NT_ANDROID_TYPE_PAD_SEGMENT.
When present, it tells the loader to pad segment gaps, to reduce kernel
slab memory usage (due to the additional vm_area_structs needed for the
gaps). Whether crt_pad_segment.o is linked in remains under the control
of the static linker, so that app developers can opt out (by building
with different flags) if there are undesirable side effects.
Section Headers:
  [Nr] Name                            Type     Address          Off    Size   ES Flg Lk Inf Al
  [ 0]                                 NULL     0000000000000000 000000 000000 00      0   0  0
  [ 1] .strtab                         STRTAB   0000000000000000 0000f8 000066 00      0   0  1
  [ 2] .text                           PROGBITS 0000000000000000 000040 000000 00  AX  0   0  4
  [ 3] .note.GNU-stack                 PROGBITS 0000000000000000 000040 000000 00      0   0  1
  [ 4] .note.android.pad_segment       NOTE     0000000000000000 000040 00001c 00   A  0   0  4
  [ 5] .rela.note.android.pad_segment  RELA     0000000000000000 0000e0 000018 18   I  7   4  8
  [ 6] .debug_line                     PROGBITS 0000000000000000 00005c 000052 00      0   0  1
  [ 7] .symtab                         SYMTAB   0000000000000000 0000b0 000030 18      1   1  8
Key to Flags:
W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
L (link order), O (extra OS processing required), G (group), T (TLS),
C (compressed), x (unknown), o (OS specific), E (exclude),
D (mbind), l (large), p (processor specific)
Bug: 316403210
Test: m -j50 ndk
Test: find out/soong/ndk -name 'crt_pad_segment.o'
Test: readelf -SW crt_pad_segment.o
Change-Id: I94af5e85fd602e7ba5fd56788ae39277368229d8
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
std::atomic<T>'s default constructor is no longer trivial, because it
now does value-initialization. As a result, the class is no longer
trivial, so libc_globals is no longer trivial, so it is no longer POD.
(FWIW, the "POD" notion has been deprecated in favor of "trivial" and
"standard layout" concepts: POD == trivial + stdlayout.)
See https://cplusplus.github.io/LWG/issue2334 and wg21.link/p0883r2.
Mark __libc_globals as constinit, because that seems closer to
something we actually care about, AFAICT.
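A minimal illustration of the distinction (not bionic's actual definitions), assuming C++20:
```
#include <atomic>

// std::atomic<int> has a constexpr default constructor but is no longer
// trivial, so a struct containing it is not POD. constinit still guarantees
// the global is initialized at compile time, which is the property we
// actually care about here.
struct example_globals {
  std::atomic<int> setjmp_cookie;
};

constinit example_globals g_example_globals = {};
```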
Bug: http://b/175635923
Test: m libc_malloc_debug
Change-Id: I338589bce03d06f20752bca342eeb86a42fc1ee7
This is a no-op but will be used in upcoming scudo changes that allow
the depot size to be changed at process startup time; as a result, we
will no longer be able to call __scudo_get_stack_depot_size in debuggerd.
We already did the equivalent change for the ring buffer size in
https://r.android.com/q/topic:%22scudo_ring_buffer_size%22
Bug: 309446692
Change-Id: Icdcf4cd0a3a486d1ea07a8c616cae776730e1047
The bionic benchmarks set the decay time in various ways, but
don't necessarily restore it properly. Add a new method for
getting the current decay time and then a way to restore it.
Right now the assumption is that the decay time defaults to zero, but
in the near future that assumption might be incorrect, so using this
method will future-proof the code.
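A hypothetical sketch of the save/restore pattern (M_DECAY_TIME is bionic's real mallopt() parameter; get_decay_time() stands in for the new accessor and is not a real API name):
```
#include <malloc.h>

static int get_decay_time() {
  return 0;  // placeholder for the new "get the current decay time" method
}

static void run_benchmark_with_decay(int decay_time) {
  int saved = get_decay_time();        // remember whatever is currently set
  mallopt(M_DECAY_TIME, decay_time);   // set what this benchmark needs
  // ... run the benchmark ...
  mallopt(M_DECAY_TIME, saved);        // restore the previous setting
}
```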
Bug: 302212507
Test: Unit tests pass for both static and dynamic executables.
Test: Ran bionic benchmarks that were modified.
Change-Id: Ia77ff9ffee3081c5c1c02cb4309880f33b284e82
Adds support for dynamic entries that specify MTE enablement. This is
now the preferred way for dynamically linked executables to tell the
loader what mode MTE should be in, and whether stack MTE should be
enabled. In the future, this is also needed for MTE globals support.
Leave the existing ELF note parsing as a backup option because dynamic
entries are not supported for fully static executables, and there's
still a bunch of glue sitting around in the build system and tests that
explicitly include the note. When -fsanitize=memtag* is specified, lld
will create the note implicitly (along with the new dynamic entries),
but at some point once we've cleaned up all the old references to the
note, we can remove the notegen from lld.
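A sketch of how the loader might read the new entries; the DT_AARCH64_MEMTAG_* values below follow the AArch64 MemtagABI ELF extension, but check them against your elf.h before relying on them.
```
#include <link.h>

#ifndef DT_AARCH64_MEMTAG_MODE
#define DT_AARCH64_MEMTAG_MODE 0x70000009
#endif
#ifndef DT_AARCH64_MEMTAG_STACK
#define DT_AARCH64_MEMTAG_STACK 0x7000000c
#endif

struct MemtagEntries {
  bool has_mode = false;
  ElfW(Xword) mode = 0;   // requested MTE mode
  bool stack = false;     // whether stack MTE is requested
};

// Walk the dynamic section, picking out the MTE-related entries.
static MemtagEntries parse_memtag_dynamic(const ElfW(Dyn)* dyn) {
  MemtagEntries out;
  for (; dyn->d_tag != DT_NULL; ++dyn) {
    if (dyn->d_tag == DT_AARCH64_MEMTAG_MODE) {
      out.has_mode = true;
      out.mode = dyn->d_un.d_val;
    } else if (dyn->d_tag == DT_AARCH64_MEMTAG_STACK) {
      out.stack = dyn->d_un.d_val != 0;
    }
  }
  return out;
}
```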
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Test: Build/boot the device under _fullmte.
Change-Id: I954b7e78afa5ff4274a3948b968cfad8eba94d88
strerrordesc_np() isn't very useful (being just another name for
strerror()), but strerrorname_np() lets you get "ENOSYS" for ENOSYS,
which will make some of our test assertion messages clearer when we
switch over from strerror().
This also adds `%#m` formatting to all the relevant functions.
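Usage sketch (the `%#m` support is what this change adds, so it only works where that's available):
```
#define _GNU_SOURCE 1
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main() {
  errno = ENOSYS;
  // strerrorname_np() gives the symbolic name; strerror() gives the prose.
  printf("%s vs %s\n", strerrorname_np(ENOSYS), strerror(ENOSYS));
  // %#m formats errno the same way ("ENOSYS"); plain %m keeps the old text.
  printf("%#m vs %m\n");
  return 0;
}
```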
Test: treehugger
Change-Id: Icfe07a39a307d591c3f4f2a09d008dc021643062
Aligning kShadowSize to a page-sized multiple is not needed,
since mmap() will return a mapping whose size is already a
multiple of the page size.
kCfiCheckAlign remains 4k as this is chosen by the clang
compiler. [1] [2]
[1] 3568976375/clang/lib/CodeGen/CGExpr.cpp (L3433)
[2] https://clang.llvm.org/docs/ControlFlowIntegrityDesign.html#cfi-shadow
Bug: 296275298
Test: Boot 16kb device, check no cfi failures.
Test: atest -c bionic-unit-tests
Change-Id: Iac0c129c413afe01389f529f5c64051c4ffff2df
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
On 4KB targets there is no functional difference, since
max_page_size() == page_size() == 4096.
On a 16KB device, max_page_size() == 65536 and page_size() == 16384.
However, aligning up does not incur any memory regression,
since the .bss section is still backed in page-sized chunks.
See: go/16k-page-aligned-variables
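Illustration only (see go/16k-page-aligned-variables), not one of the actual variables touched here:
```
// A .bss buffer aligned to the 65536-byte maximum page size, so it stays
// page-aligned whether the kernel uses 4KiB or 16KiB pages. The larger
// alignment costs nothing extra because the pages are only backed as they
// are touched.
alignas(65536) static char g_page_aligned_buffer[65536];
```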
Bug: 296275298
Test: mma
Change-Id: I41c3e410f3b84c24eeb969c9aeca4b33a8d6170a
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
This lets us remove the riscv64 "sys/asm.h" file. It turns out everyone
loves this macro --- tons of x86 and arm assembler is already using it!
I'll clean up some of the now-duplicate definitions separately, and I'll
move the assembler we wrote ourselves over to this macro (rather than
the current `.L_foo` style) too.
Test: built riscv64 _and_ arm/arm64 _and_ x86/x86-64
Change-Id: If3f93c9b71094a8bed1fd1bb81bb83ec60ce409e
We've had these backward all this time. The relevant quote is in a
code comment in the implementation, but the first call after
completely decoding a code point that requires a surrogate pair should
return the number of bytes decoded by the most recent call, and the
second call should return -3 (if only C had given these magic values
named constants, the mistake might have been more obvious).
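A worked example of the corrected behavior (assumes a UTF-8 locale; U+1F600 needs a surrogate pair in UTF-16):
```
#include <locale.h>
#include <stdio.h>
#include <string.h>
#include <uchar.h>

int main() {
  setlocale(LC_ALL, "C.UTF-8");
  const char* s = "\xf0\x9f\x98\x80";  // U+1F600, four bytes of UTF-8
  char16_t c16;
  mbstate_t ps;
  memset(&ps, 0, sizeof(ps));
  size_t r1 = mbrtoc16(&c16, s, 4, &ps);      // decodes the code point: returns 4
  size_t r2 = mbrtoc16(&c16, s + 4, 1, &ps);  // emits the low surrogate: returns (size_t)-3
  printf("%ld %ld\n", (long)r1, (long)r2);    // prints "4 -3"
  return 0;
}
```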
Bug: https://issuetracker.google.com/289419882
Test: Fixed the test, tests run against glibc and musl to confirm
Change-Id: Idabf01075b1cad35b604ede8d676d6f0b1dc91e6
The magic numbers that C defines are obnoxious. We had partial
definitions for these internally. Add the missing one and move them to
a public header for anyone else that may want to use them.
Bug: None
Test: None
Change-Id: Ia6b8cff4310bcccb23078c52216528db668ac966
Not necessary (as demonstrated by the lack of this for x86), but this
saves one stack frame in aborts, which gets you one more useful stack
frame in logs and clustering etc, which improves your chances of finding
your bug.
Test: crasher64 abort
Change-Id: Ieb214f3b46520161edc1e53c0d766353b777d8ba
Also de-pessimize time(), where the vdso entrypoint only exists on
x86/x86-64 anyway.
Bug: https://github.com/google/android-riscv64/issues/8
Test: strace
Change-Id: I14cb2a3130b6ff88d06d43ea13d3a825a26de290
While looking at the disassembly for the epoll stuff I noticed that this
expands to quite a lot of code that the compiler can't optimize out for
LP64 (because it doesn't know that the "copy the argument into a local
and then use the local" bit isn't important).
There are two obvious options here. Something like this:
```
int signalfd64(int fd, const sigset64_t* mask, int flags) {
  return __signalfd4(fd, mask, sizeof(*mask), flags);
}

int signalfd(int fd, const sigset_t* mask, int flags) {
#if defined(__LP64__)
  return signalfd64(fd, mask, flags);
#else
  SigSetConverter set = {.sigset = *mask};
  return signalfd64(fd, &set.sigset64, flags);
#endif
}
```
Or something like this:
```
int signalfd64(int fd, const sigset64_t* mask, int flags) {
  return __signalfd4(fd, mask, sizeof(*mask), flags);
}

#if defined(__LP64__)
__strong_alias(signalfd, signalfd64);
#else
int signalfd(int fd, const sigset_t* mask, int flags) {
  SigSetConverter set = {};
  set.sigset = *mask;
  return signalfd64(fd, &set.sigset64, flags);
}
#endif
```
The former is slightly more verbose, but seems a bit more obvious, so I
initially went with that. (The former is more verbose in the generated
code too, given that the latter expands to _no_ code, just another symbol
pointing to the same code address.)
Having done that, I realized that slight changes to the interface would
let clang optimize away most/all of the overhead for LP64 with the only
preprocessor hackery being in SigSetConverter itself.
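Something along these lines (a sketch of the idea, not necessarily the exact code that landed): if the conversion happens inside SigSetConverter's constructor, callers need no #if at all, and on LP64 clang can see the copy is redundant and elide it.
```
union SigSetConverter {
  explicit SigSetConverter(const sigset_t* s) {
    sigset64 = {};
    sigset = *s;
  }
  sigset_t sigset;
  sigset64_t sigset64;
};

int signalfd(int fd, const sigset_t* mask, int flags) {
  SigSetConverter set(mask);
  return signalfd64(fd, &set.sigset64, flags);
}
```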
I also pulled out the legacy BSD `int` conversions since they're only
used in two (secret!) functions, so it's clearer to just have a separate
union for them. While doing so, I suppressed those functions for
riscv64, since there's no reason to keep carrying that mistake forward.
posix_spawn() is another simple case that doesn't actually benefit from
SigSetConverter, so I've given that its own anonymous union too.
Test: treehugger
Change-Id: Iaf67486da40d40fc53ec69717c3492ab7ab81ad6
The shadow call stack code implicitly assumes that SCS_SIZE is a
multiple of the page size. Currently SCS_SIZE is set to 8K, so this
assumption is broken on 16K platforms.
Test: launch_cvd --use_16k
Bug: 253652966
Bug: 279808236
Change-Id: I1180cfba32c98d638e18615ccfdc369beb390ea7
GWP-ASan's recoverable mode was landed upstream in
https://reviews.llvm.org/D140173.
This mode allows for a use-after-free or a buffer-overflow bug to be
detected by GWP-ASan, a crash report dumped, but then GWP-ASan (through
the preCrashReport() and postCrashReportRecoverableOnly() hooks) will
patch up the memory so that the process can continue, in spite of the
memory safety bug.
This is desirable, as it allows us to consider migrating non-system apps
from opt-in GWP-ASan to opt-out GWP-ASan. The major concern was "if we
make it opt-out, then bad apps will start crashing". If we don't crash,
problem solved :). Obviously, we'll need to do this with some amount of
process sampling to mitigate the 70KiB memory overhead.
The biggest problem is that the debuggerd signal handler isn't the first
signal handler for apps, it's the sigchain handler inside of libart.
Clearly, the sigchain handler needs to ask us whether the crash is
GWP-ASan's fault, and if so, please patch up the allocator. Because of
linker namespace restrictions, libart can't directly ask the linker
(which is where debuggerd lies), so we provide a proxy function in libc.
Test: Build the platform, run sanitizer-status and various test apps
with recoverable gwp-asan. Assert that it doesn't crash, and we get a
debuggerd report.
Bug: 247012630
Change-Id: I86d5e27a9ca5531c8942e62647fd377c3cd36dfd
This is a no-op but will be used in upcoming scudo changes that allow
the ring buffer size to be changed at process startup time; as a result,
we will no longer be able to call __scudo_get_ring_buffer_size in
debuggerd.
Bug: 263287052
Change-Id: I18f166fc136ac8314d748eb80a806defcc25c9fd