These padding pages are only needed to lay out the ELF compatibly
with max-page-size. They are zero-filled (holes) and can be dropped
from the page cache.
The madvise() here is a special case that also serves to hint to the
kernel what part of the segment is padding.
For example, the kernel then shows these padding regions as PROT_NONE
VMAs (labeled [page size compat]) in /proc/*/maps.
Note: This doesn't use backing vm_area_structs, so doesn't consume
additional slab memory.
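In sketch form, the hint amounts to a single madvise() over the
padding range (pad_start/pad_end are illustrative names, computed by
the loader from the extended segment layout, not identifiers from this
change):

  // MADV_DONTNEED on this file-backed, zero-filled (hole) range drops
  // the pages from the page cache; a kernel carrying the mitigation
  // additionally records the range as segment padding.
  madvise(reinterpret_cast<void*>(pad_start), pad_end - pad_start,
          MADV_DONTNEED);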
Before:
❯ cf-adb shell cat /proc/1/maps | grep -A1 'libbase.so$'
7f8d13600000-7f8d13614000 r--p 00000000 fe:09 21909460 /system/lib64/libbase.so
7f8d13614000-7f8d13638000 r-xp 00014000 fe:09 21909460 /system/lib64/libbase.so
7f8d13638000-7f8d1363c000 r--p 00038000 fe:09 21909460 /system/lib64/libbase.so
7f8d1363c000-7f8d1363d000 rw-p 0003c000 fe:09 21909460 /system/lib64/libbase.so
Segments appear extended in /proc/<pid>/maps
After:
❯ cf-adb shell cat /proc/1/maps | grep -A1 'libbase.so$'
7f3650043000-7f3650054000 r--p 00000000 fe:09 21906900 /system/lib64/libbase.so
7f3650054000-7f3650057000 ---p 00000000 00:00 0 [page size compat]
7f3650057000-7f3650079000 r-xp 00014000 fe:09 21906900 /system/lib64/libbase.so
7f3650079000-7f365007b000 ---p 00000000 00:00 0 [page size compat]
7f365007b000-7f365007c000 r--p 00038000 fe:09 21906900 /system/lib64/libbase.so
7f365007c000-7f365007f000 ---p 00000000 00:00 0 [page size compat]
7f365007f000-7f3650080000 rw-p 0003c000 fe:09 21906900 /system/lib64/libbase.so
Segments maintain PROT_NONE gaps ("[page size compat]") for app
compatibility, but these are not backed by actual slab VMA memory.
Bug: 330117029
Bug: 327600007
Bug: 330767927
Bug: 328266487
Bug: 329803029
Test: Manual - Launch Free Fire Chaos app
Change-Id: Ic50540e247b4294eb08f8cf70e74bd2bf6606684
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
It has been found that some existing apps have implicit dependencies on
the address ranges in /proc/*/maps. Since segment extension changes the
range of the LOAD segment VMAs, some of these apps crash, either with
SIGBUS or in as-yet-unidentified ways.
Only perform the segment extension optimization if the kernel has the
necessary mitigations to ensure app compatibility.
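One plausible shape for that gate -- assuming the kernel advertises
the mitigations through a sysfs knob; the path and helper name here
are assumptions, not taken from this change:

  // Requires <android-base/file.h> and <string>.
  static bool page_size_migration_supported() {
    std::string enabled;
    if (!android::base::ReadFileToString(
            "/sys/kernel/mm/pgsize_migration/enabled", &enabled)) {
      return false;  // No knob: kernel lacks the compat mitigations.
    }
    return enabled.find("1") != std::string::npos;
  }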
Bug: 330117029
Bug: 327600007
Bug: 330767927
Bug: 328266487
Bug: 329803029
Test: Manual - Launch Free Fire Chaos app
Change-Id: I5b03b22c4a468f6646750a00942cc2d57f43d0de
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Loader segment extension was introduced to fix kernel slab memory
regressions in page-agnostic Android. This regression was due to the
extra VMAs required for the gaps that exist when the elf-max-page-size >
runtime-page-size.
This issue already existed for some libraries like libart.so and
libhwui.so, which use 2MiB alignment to make use of THPs (transparent
huge pages).
Later it was found that this optimization could break in-field apps due
to dependencies on the address ranges in /proc/*/[s]maps.
To avoid breaking the in-field apps, the kernel can work around the
compatibility issues if made aware of where the padding regions exist.
However, the kernel can only represent padding for p_align up to 64KiB.
This is because the kernel uses 4 available bits in the vm_area_struct
to represent the padding extent, and so cannot enable mitigations to
avoid breaking app compatibility for p_align values > 64KiB.
For ELFs with p_align > 64KiB, don't do segment extension, to avoid issues
with app compatibility -- these ELFs already exist with gap mappings
and are not part of the page size transition VMA slab regression.
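A sketch of the resulting guard (should_pad_segments_ stands in for
the loader's decision bit; the constant name is illustrative):

  // Padding extent only fits in 4 vm_area_struct bits, i.e. up to 64KiB.
  constexpr size_t kMaxPaddedPAlign = 64 * 1024;
  if (phdr->p_align > kMaxPaddedPAlign) {
    // e.g. 2MiB THP-aligned ELFs: keep the historical gap mappings.
    should_pad_segments_ = false;
  }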
Bug: 330117029
Bug: 327600007
Bug: 330767927
Bug: 328266487
Bug: 329803029
Test: Manual - Launch Free Fire Chaos app
Change-Id: Id4dcced4632dce67adab6348816f85847ce1de58
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Some obfuscated ELFs have PT_NOTE headers that are past the end of the
file. Skip parsing these for the crt_pad_segment note, as accesses
beyond the end of the file will cause a SIGBUS.
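A minimal sketch of the bounds check (file_size_ as the size of the
backing file; names illustrative):

  // Reject notes whose extent lies past EOF; reading them through the
  // file mapping would fault with SIGBUS. Overflow-safe form of
  // p_offset + p_filesz > file_size_.
  if (phdr->p_offset > file_size_ ||
      phdr->p_filesz > file_size_ - phdr->p_offset) {
    continue;
  }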
Bug: 331717625
Test: Manual - Launch Guns up app
Change-Id: I39365064e6c1538b0be1114479557d94a72ee369
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Only zero the partial page at the end of the segment. There may be
entire pages beyond the first page boundary after the segment -- due to
segment padding (VMA extension); but these are not expected to be
touched (faulted). A sketch of the trimmed zeroing follows the list
below.
Do not attempt to zero the extended region past the first partial page,
since doing so may:
1) Result in a SIGBUS, as the region is not backed by the underlying
file.
2) Break the COW backing, faulting in new anon pages for a region
that will not be used.
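The sketch, assuming a page_end() helper that rounds up to the next
page boundary (names illustrative, not this change's identifiers):

  // One past the segment's file-backed bytes.
  uintptr_t seg_file_end = load_bias + phdr->p_vaddr + phdr->p_filesz;
  if (seg_file_end != page_end(seg_file_end)) {
    // Zero only the tail of the last file-backed page. Pages beyond it
    // belong to the padding extension and must never be faulted.
    memset(reinterpret_cast<void*>(seg_file_end), 0,
           page_end(seg_file_end) - seg_file_end);
  }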
Bug: 327600007
Bug: 328797737
Test: Dexcom G7 app
Test: atest -c linker-unit-tests
Test: bootup/idle/system-processes-memory-direct
Change-Id: Ib76b022f94bfc4a4b7eaca2c155af81478741b91
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fix up for ELFs not loaded
by us (e.g. ELF loaded by the kernel) since these don't have the
extended VMA fix to begin with.
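In sketch form -- names assumed, not taken from this change -- the
fix-up grows the RELRO mprotect() range to the same boundary the
padded LOAD segment was extended to:

  ElfW(Addr) seg_page_start = page_start(relro->p_vaddr) + load_bias;
  ElfW(Addr) seg_page_end =
      page_end(relro->p_vaddr + relro->p_memsz) + load_bias;
  if (should_pad_segments) {
    // Match the extended LOAD segment so the RO protection doesn't
    // split the extended RW VMA back into two.
    seg_page_end = page_start(next_load->p_vaddr) + load_bias;
  }
  mprotect(reinterpret_cast<void*>(seg_page_start),
           seg_page_end - seg_page_start, PROT_READ);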
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests
Test: atest -c bionic-unit-tests
Change-Id: I9cd04574190ef4c727308363a8cb1120c36e53e0
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When the page_size < p_align of the ELF load segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
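In sketch form (phdr/next as consecutive LOAD headers; names are
illustrative, not the change's actual identifiers):

  ElfW(Addr) seg_page_start = page_start(phdr->p_vaddr) + load_bias;
  ElfW(Addr) seg_page_end =
      page_end(phdr->p_vaddr + phdr->p_memsz) + load_bias;
  if (should_pad_segments && next != nullptr) {
    // Absorb the would-be PROT_NONE gap into this file-backed VMA.
    // Bytes past the file's end exist in the mapping but are never
    // touched, so no file offset overruns the file size.
    seg_page_end = page_start(next->p_vaddr) + load_bias;
  }
  // mmap() [seg_page_start, seg_page_end) from the file as one mapping.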
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests
Test: atest -c bionic-unit-tests
Change-Id: I7150ed22af0723cc0b2d326c046e4e4a8b56ad09
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Some obfuscated ELFs may contain "empty" PT_NOTEs (p_memsz == 0).
Attempting to mmap these will cause an EINVAL failure since the
requested mapping size is zero.
Skip these program headers when parsing notes.
Also improve the failure log with arguments to the mmap syscall.
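The skip itself is a one-liner:

  // An empty note would mean mmap() with length 0, which fails with
  // EINVAL; there is nothing to parse anyway.
  if (phdr->p_memsz == 0) {
    continue;
  }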
Test: Platinum Tests
Bug: 324468126
Change-Id: I7de4e55c6d221d555faabfcc33bb6997921dd022
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
If the LOAD segment VMAs are extended to prevent creating additional
VMAs, the protection extent of the GNU_RELRO segment must also
be updated to match. Otherwise, the partial mprotect will reintroduce
an additional VMA due to the split protections.
Update the GNU_RELRO protection range when the ELF was loaded by the
bionic loader. Be careful not to attempt any fix up for ELFs not loaded
by us (e.g. ELF loaded by the kernel) since these don't have the
extended VMA fix to begin with.
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
1 extra RW VMA at offset 0x000da000 (3 RW mappings in total)
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f5a50225000-7f5a50279000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a50279000-7f5a5031a000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5031a000-7f5a5032e000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032e000-7f5a5032f000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f5a5032f000-7f5a50780000 rw-p 00000000 00:00 0 [anon:.bss]
Removed RW VMA at offset 0x000da000 (2 RW mappings in total)
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [ Later patch ]
Test: atest -c bionic-unit-tests
Change-Id: If1d99e8b872fcf7f6e0feb02ff33503029b63be3
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
When the page_size < p_align of the ELF load segment, the loader
will end up creating extra PROT_NONE gap VMA mappings between the
LOAD segments. This problem is exacerbated by Android's zygote
model, where the number of loaded .so's can lead to ~30MB increase
in vm_area_struct unreclaimable slab memory.
Extend the LOAD segment VMAs to cover the range between the
segment's end and the start of the next segment, being careful
to avoid touching regions of the extended mapping where the offset
would overrun the size of the file. This avoids the loader
creating an additional gap VMA for each LOAD segment.
Consider a system with a 4KB page size and ELF files with 64KB
alignment, e.g.:
$ readelf -Wl /system/lib64/bootstrap/libc.so | grep 'Type\|LOAD'
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
LOAD 0x000000 0x0000000000000000 0x0000000000000000 0x0441a8 0x0441a8 R 0x10000
LOAD 0x0441b0 0x00000000000541b0 0x00000000000541b0 0x091860 0x091860 R E 0x10000
LOAD 0x0d5a10 0x00000000000f5a10 0x00000000000f5a10 0x003d40 0x003d40 RW 0x10000
LOAD 0x0d9760 0x0000000000109760 0x0000000000109760 0x0005c0 0x459844 RW 0x10000
Before this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7fa1d4a90000-7fa1d4ad5000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4ad5000-7fa1d4ae4000 ---p 00000000 00:00 0
7fa1d4ae4000-7fa1d4b76000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b76000-7fa1d4b85000 ---p 00000000 00:00 0
7fa1d4b85000-7fa1d4b8a000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b8a000-7fa1d4b99000 ---p 00000000 00:00 0
7fa1d4b99000-7fa1d4b9a000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7fa1d4b9a000-7fa1d4feb000 rw-p 00000000 00:00 0 [anon:.bss]
3 additional PROT_NONE (---p) VMAs for gap mappings.
After this patch:
$ cat /proc/1/maps | grep -A1 libc.so
7f468f069000-7f468f0bd000 r--p 00000000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f0bd000-7f468f15e000 r-xp 00044000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f15e000-7f468f163000 r--p 000d5000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f163000-7f468f172000 rw-p 000da000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f172000-7f468f173000 rw-p 000d9000 fe:09 20635520 /system/lib64/bootstrap/libc.so
7f468f173000-7f468f5c4000 rw-p 00000000 00:00 0 [anon:.bss]
No additional gap VMAs. However, notice there is an extra RW VMA at
offset 0x000da000. This is caused by the RO protection of the
GNU_RELRO segment, which causes the extended RW VMA to split.
The GNU_RELRO protection extension is handled in the subsequent
patch in this series.
Bug: 316403210
Bug: 300367402
Bug: 307803052
Bug: 312550202
Test: atest -c linker-unit-tests [Later patch]
Test: atest -c bionic-unit-tests
Change-Id: I3363172c02d5a4e2b2a39c44809e433a4716bc45
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
The PAD_SEGMENT note is used to optimize memory usage of the loader.
If the note parsing fails, skip the optimization and continue
loading the ELF normally.
Bug: 324309329
Bug: 316403210
Change-Id: I2aabc9f399816c53eb33ff303208a16022571edf
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
kPageSize is needed to determine whether the loader needs to
extend VMAs to avoid gaps in the memory map when loading the ELF.
While at it, use kPageSize to generically deduce the PMD size and
replace the hardcoded 2MB PMD size.
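One way to deduce it generically -- a PMD maps one page full of
pointer-sized page-table entries (constant name illustrative):

  // 4KiB pages:  (4096 / 8)  * 4096  = 2MiB
  // 16KiB pages: (16384 / 8) * 16384 = 32MiB
  static constexpr size_t kPmdSize =
      (kPageSize / sizeof(uint64_t)) * kPageSize;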
Bug: 316403210
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Change-Id: Ie770320e629c38149dc75dae1deb1e429dd1acf2
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
ReadPadSegmentNote() finds the ELF note of type
NT_ANDROID_TYPE_PAD_SEGMENT and checks that the desc value
is 1, to decide whether the LOAD segment mappings should
be extended (padded) to avoid gaps.
Cache the result of this operation in ElfReader and soinfo
for use in the subsequent patch which handles the extension
of the segment mappings.
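In sketch form (descriptor layout assumed for illustration):

  // The note's desc is read as a single 32-bit word; 1 means
  // "pad (extend) my LOAD segments".
  uint32_t desc = *reinterpret_cast<const uint32_t*>(note_desc);
  // Cached in ElfReader, and copied into soinfo, so the mapping code
  // in the subsequent patch can consult it without re-parsing notes.
  should_pad_segments_ = (desc == 1);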
Test: atest -c linker-unit-tests [Later patch]
Test: m && launch_cvd
Bug: 316403210
Change-Id: I32c05cce741d221c3f92835ea09d932c40bdf8b1
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
This patch adds the necessary bionic code for the linker to protect
global data using MTE.
The implementation is described in the MemtagABI addendum to the
AArch64 ELF ABI:
https://github.com/ARM-software/abi-aa/blob/main/memtagabielf64/memtagabielf64.rst
In summary, this patch includes:
1. When MTE globals is requested, the linker maps writable SHF_ALLOC
sections as anonymous pages with PROT_MTE (copying the file contents
into the anonymous mapping), rather than using a file-backed private
mapping. This is required as file-based mappings are not necessarily
backed by the kernel with tag-capable memory. For sections already
mapped by the kernel when the linker is invoked via PT_INTERP, we
unmap the contents, remap a PROT_MTE+anonymous mapping in its place,
and re-load the file contents from disk (a sketch follows this list).
2. When MTE globals is requested, the linker tags areas of global memory
(as defined in SHT_AARCH64_MEMTAG_GLOBALS_DYNAMIC) with random tags,
while ensuring that adjacent globals are never tagged using the same
memory tag (to provide deterministic overflow detection).
3. Changes to RELATIVE, ABS64, and GLOB_DAT relocations to load and
store tags in the right places. This ensures that the address tags are
materialized into the GOT entries as well. These changes are a
functional no-op to existing binaries and/or non-MTE capable hardware.
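The remap in (1), in sketch form (error handling elided; addr/size
cover a writable SHF_ALLOC section already mapped from the file; this
is illustrative, not the change's actual code):

  std::vector<char> stash(size);
  memcpy(stash.data(), addr, size);  // save the file contents
  munmap(addr, size);                // drop the file-backed pages
  void* p = mmap(addr, size, PROT_READ | PROT_WRITE | PROT_MTE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
  memcpy(p, stash.data(), size);     // re-load the contents from the stash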
Bug: N/A
Test: atest bionic-unit-tests CtsBionicTestCases --test-filter=*Memtag*
Change-Id: Id7b1a925339b14949d5a8f607dd86928624bda0e
Looks like we all copy-pasted the same comment, and the original comment was wrong. These functions all return 0 on success, and -1 on error.
Change-Id: I11e635e0895fe1fa941d69b721b8ad9ff5eb7f15
Test: N/A
Bug: N/A
To enable experiments with non-4KiB page sizes, introduce
an inline page_size() function that will either return the runtime
page size (if PAGE_SIZE is not 4096) or a constant 4096 (elsewhere).
This should ensure that there are no changes to the generated code on
unaffected platforms.
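A sketch matching that description (the actual bionic definition may
differ in detail):

  #include <unistd.h>

  inline size_t page_size() {
  #if defined(PAGE_SIZE) && PAGE_SIZE == 4096
    // Unaffected platforms keep the compile-time constant, so the
    // generated code is unchanged there.
    return 4096;
  #else
    // Experimental non-4KiB configurations ask the kernel at runtime.
    return getpagesize();
  #endif
  }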
Test: source build/envsetup.sh
lunch aosp_cf_arm64_16k_phone-userdebug
m -j32 installclean
m -j32
Test: launch_cvd \
-kernel_path /path/to/out/android14-5.15/dist/Image \
-initramfs_path /path/to/out/android14-5.15/dist/initramfs.img \
-userdata_format=ext4
Bug: 277272383
Bug: 230790254
Change-Id: Ic0ed98b67f7c6b845804b90a4e16649f2fc94028
When loading a dynamic library, reserving memory may succeed while a
later step, such as loading segments, fails. Because the reserved
memory is not released in that case, this generates a memory leak.
Bug: https://issuetracker.google.com/issues/263713888
Change-Id: I556ee02e37db5259df0b6c7178cd9a076dab9725
Signed-off-by: huangchaochao <huangchaochao@bytedance.com>
To take advantage of file-backed huge pages for the text segments of key
shared libraries (go/android-hugepages), the dynamic linker must load
candidate ELF files at an appropriately aligned address and mark
executable segments with MADV_HUGEPAGE.
This patch uses segments' p_align values to determine when a file is
PMD aligned (2MB alignment), and performs load operations accordingly.
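In sketch form (names illustrative):

  // A 2MiB p_align marks the file as PMD aligned; hint THP for its
  // executable segments.
  if (phdr->p_align == 2 * 1024 * 1024 && (phdr->p_flags & PF_X) != 0) {
    madvise(reinterpret_cast<void*>(seg_page_start),
            seg_page_end - seg_page_start, MADV_HUGEPAGE);
  }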
Bug: 158135888
Test: Verified PMD aligned libraries are backed with huge pages on
supporting kernel versions.
Change-Id: Ia2367fd5652f663d50103e18f7695c59dc31c7b9
This patch adds support to load BTI-enabled objects.
According to the ABI, BTI is recorded in the .note.gnu.property section.
The new parser evaluates the property section, if it exists.
It searches for a .note section with NT_GNU_PROPERTY_TYPE_0.
Once found, it tries to find GNU_PROPERTY_AARCH64_FEATURE_1_AND.
The results are cached.
The main change in the linker is when protection of loaded ranges gets
applied. When BTI is requested and the platform also supports it,
the prot flags have to be amended with PROT_BTI for executable ranges.
Failing to add the PROT_BTI flag would disable BTI protection.
Moreover, adding the new PROT flag for shared objects without BTI
compatibility would break applications.
The kernel does not add PROT_BTI to a loaded ELF which has an
interpreter; the linker handles this case too.
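The protection fix-up, in sketch form (has_bti standing in for the
cached note result; helper names assumed):

  int prot = PROT_READ | PROT_EXEC;
  if ((prot & PROT_EXEC) != 0 && has_bti && platform_supports_bti()) {
    // Omitting PROT_BTI would silently disable BTI; adding it for
    // non-BTI objects or platforms would break them.
    prot |= PROT_BTI;
  }
  mprotect(seg_addr, seg_size, prot);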
Test: 1. Flame boots
2. Tested on FVP with BTI enabled
Change-Id: Iafdf223b74c6e75d9f17ca90500e6fe42c4c1218
Add inaccessible gaps between shared libraries to make it harder for
attackers to defeat ASLR by random probing.
To avoid excessive page table bloat, only do this when a library is
about to cross a huge page boundary, effectively allowing several
smaller libraries to be lumped together.
Bug: 158113540
Test: look at /proc/$$/maps
Change-Id: I39c0100b81f72447e8b3c6faafa561111492bf8c
This reverts commit a8cf3fef2a.
Reason for revert: memory regression due to the fragmentation of the page tables
Bug: 159810641
Bug: 158113540
Change-Id: I6212c623ff440c7f6889f0a1e82cf7a96200a411
Improve ASLR by increasing the randomly sized gaps between shared
library mappings, and keep them mapped PROT_NONE.
Bug: 158113540
Test: look at /proc/$$/maps
Change-Id: Ie72c84047fb624fe2ac8b7744b2a2d0d255ea974
Historically we've made a few mistakes where the names haven't matched
the right number. And most non-Googlers are much more familiar with the
numbers, so it seems to make sense to rely more on them. Especially in
header files, which we actually expect real people to have to read from
time to time.
Test: treehugger
Change-Id: I0d4a97454ee108de1d32f21df285315c5488d886
ANDROID_DLEXT_WRITE_RELRO was causing the GNU RELRO sections of
libraries to become corrupted if more than one library was being loaded
at once (i.e. if the root library has DT_NEEDED entries for libraries
that weren't already loaded). The file offset was not being correctly
propagated between calls, so after writing out the (correct) RELRO data
to the file, it was mapping the data at file offset 0 for all libraries,
which corrupted the data for all but one of the libraries.
Fix this by passing file_offset as a pointer the same way that
phdr_table_map_gnu_relro does.
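The shape of the fix, assuming the serializing counterpart is
phdr_table_serialize_gnu_relro (signature illustrative, not quoted
from the change):

  // Before: size_t file_offset  -- every call effectively restarted at 0.
  // After:  size_t* file_offset -- each call consumes and advances it,
  // so consecutive libraries append instead of overwriting offset 0.
  int phdr_table_serialize_gnu_relro(const ElfW(Phdr)* phdr_table,
                                     size_t phdr_count,
                                     ElfW(Addr) load_bias, int fd,
                                     size_t* file_offset);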
Bug: 128623590
Test: tbd
Change-Id: I196cd336bd5a67454e89fd85487356b1c7856871
Introduce a new flag ANDROID_DLEXT_RESERVED_ADDRESS_RECURSIVE which
instructs the linker to use the reserved address space to load all of
the newly-loaded libraries required by a dlopen() call instead of only
the main library. They will be loaded consecutively into that region if
they fit. The RELRO sections of all the loaded libraries will also be
considered for reading/writing shared RELRO data.
This will allow the WebView implementation to potentially consist of
more than one .so file while still benefiting from the RELRO sharing
optimisation, which would otherwise only apply to the "root" .so file.
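A usage sketch with the public android_dlopen_ext() API (the library
name and reservation variables are hypothetical):

  #include <android/dlext.h>
  #include <dlfcn.h>

  android_dlextinfo info = {};
  info.flags = ANDROID_DLEXT_RESERVED_ADDRESS |
               ANDROID_DLEXT_RESERVED_ADDRESS_RECURSIVE;
  info.reserved_addr = reserved_region;  // region mmap'd beforehand
  info.reserved_size = reserved_size;
  // The root library and any newly-loaded DT_NEEDED dependencies are
  // placed consecutively inside the reservation, and all their RELRO
  // sections are considered for sharing.
  void* handle = android_dlopen_ext("libroot.so", RTLD_NOW, &info);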
Test: bionic-unit-tests (existing and newly added)
Bug: 110790153
Change-Id: I61da775c29fd5017d9a1e2b6b3757c3d20a355b3
Change three things regarding the workaround for the fact that init is
special:
1) Only first stage init is special, so we change the check to include
accessing /proc/self/exe, which, if available, means that we're not
first stage init and do not need any workarounds.
2) Fix the fact that /init may be a symlink and may need readlink()
3) Suppress errors from realpath_fd() since these are expected to fail
due to /proc not being mounted.
Bug: 80395578
Test: sailfish boots without the audit generated from calling stat()
on /init and without the errors from realpath_fd()
Change-Id: I266f1486b142cb9a41ec791eba74122bdf38cf12
We've copied & pasted these to too many places. And if we're going to
have another go at upstreaming these, that's probably yet another reason
to have the *values* in just one place. (Even if upstream wants different
names, we'll likely keep the legacy names around for a while for source
compatibility.)
Bug: http://b/111903542
Test: ran tests
Change-Id: I8ccc557453d69530e5b74f865cbe0b458c84e3ba
init is now built as a dynamic executable, so the dynamic linker has to
be able to run in the init process. However, since init is launched so
early, even the /dev/* and /proc/* file systems are not mounted, and
thus some APIs that rely on those paths do not work. The dynamic linker
now takes an alternative path when it is running in the init process.
For example, /proc/self/exe is not read for init since we always know
the path of init (/init). Also, arc4random* APIs are not used since
they rely on /dev/urandom. The linker now does not randomize library
loading order and addresses when running in the init process.
Bug: 80454183
Test: `adb reboot recovery; adb devices` shows the device ID
Change-Id: I29b6d70e4df5f7f690876126d5fe81258c1d3115
Explicitly say "warning" for warnings, explicitly say what action
we're going to take (such as "(ignoring)"), always provide a link to
our documentation when there is one, explicitly say what API level the
behavior changes at, and explicitly say why we're allowing the misbehavior
for now.
Bug: http://b/71852862
Test: ran tests, looked at logcat
Change-Id: I1795a5af45deb904332b866d7d666690dae4340b