# Bionic Benchmarks

[TOC]

## libc benchmarks (bionic-benchmarks)

`bionic-benchmarks` is a command-line tool for measuring the runtimes of libc functions. It is built on top of Google Benchmark, with some additions to organize tests into suites.

### Device benchmarks

```
$ mmma bionic/benchmarks
$ adb root
$ adb sync data
$ adb shell /data/benchmarktest/bionic-benchmarks/bionic-benchmarks
$ adb shell /data/benchmarktest64/bionic-benchmarks/bionic-benchmarks
```

By default, bionic-benchmarks runs all of the benchmarks in alphabetical order. Pass `--benchmark_filter=getpid` to run just the benchmarks with "getpid" in their name.
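
For example, a minimal on-device invocation (a sketch reusing the 64-bit binary path from above; the filter value is just illustrative):

```
adb shell /data/benchmarktest64/bionic-benchmarks/bionic-benchmarks --benchmark_filter=getpid
```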

### Host benchmarks

See the benchmarks/run-on-host.sh script. The host benchmarks can be run with 32-bit or 64-bit Bionic, or the host glibc.

### XML suites

Suites are stored in the suites/ directory and can be chosen with the command line flag --bionic_xml.

To choose a specific XML file, use the `--bionic_xml=FILE.XML` option. By default, this option looks for the XML file in the `suites/` directory; if the file isn't found there, the path is treated as relative to the current directory. If the option specifies the full path to an XML file, such as `/data/nativetest/suites/example.xml`, it is used as-is.

If no XML file is specified on the command line, the default is `suites/full.xml`. However, for the host benchmarks (`bionic-benchmarks-glibc`), the default is `suites/host.xml`.
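
A suite can also be chosen explicitly; for instance (a sketch reusing the on-device path from above, with the `full.xml` suite named in this section):

```
adb shell /data/benchmarktest64/bionic-benchmarks/bionic-benchmarks --bionic_xml=full.xml
```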

#### XML suite format

The format for a benchmark is:

```
<fn>
    <name>BM_sample_benchmark</name>
    <cpu><optional_cpu_to_lock></cpu>
    <iterations><optional_iterations_to_run></iterations>
    <args><space separated list of function args|shorthand></args>
</fn>
```

XML-specified values for iterations and cpu take precedence over those specified on the command line (via `--bionic_iterations` and `--bionic_cpu`, respectively).

To make small changes in runs, you can also schedule benchmarks by passing in their name and a space-separated list of arguments via the `--bionic_extra` command-line flag, e.g. `--bionic_extra="BM_string_memcpy AT_COMMON_SIZES"` or `--bionic_extra="BM_string_memcmp 32 8 8"`.

Note that a benchmark will still run normally if too many arguments are passed, but it will fail with a segfault if too few are passed.
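
As a concrete sketch, combining the device path from above with one of the `--bionic_extra` values mentioned here:

```
adb shell /data/benchmarktest64/bionic-benchmarks/bionic-benchmarks --bionic_extra="BM_string_memcmp 32 8 8"
```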

#### Shorthand

For the sake of brevity, multiple runs can be scheduled in one XML element by putting one of the following in the `args` field:

* `NUM_PROPS`
* `MATH_COMMON`
* `AT_ALIGNED_<ONE|TWO>BUF`
* `AT_<any power of two between 2 and 16384>_ALIGNED_<ONE|TWO>BUF`
* `AT_COMMON_SIZES`

Definitions for these can be found in bionic_benchmarks.cpp, and example usages can be found in the suites directory.

### Unit Tests

bionic-benchmarks also has its own set of unit tests, which can be run from the binary in `/data/nativetest[64]/bionic-benchmarks-tests`.
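
A hedged example of running them on a device, assuming the usual layout in which that directory contains a binary of the same name:

```
adb shell /data/nativetest64/bionic-benchmarks-tests/bionic-benchmarks-tests
```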

## Process startup time (bionic-spawn-benchmarks)

The `spawn/` subdirectory has a few benchmarks that measure the time taken to start simple programs (e.g. Toybox's `true` and `sh -c true`). Run them on a device like so:

```
m bionic-spawn-benchmarks
adb root
adb sync data
adb shell /data/benchmarktest/bionic-spawn-benchmarks/bionic-spawn-benchmarks
adb shell /data/benchmarktest64/bionic-spawn-benchmarks/bionic-spawn-benchmarks
```

Google Benchmark reports both a real-time figure ("Time") and a CPU usage figure. For these benchmarks, the CPU measurement counts only time spent in the thread calling `posix_spawn`, not time spent in the spawned process. The real-time figure is probably more useful, and it is the figure used to determine the iteration count.

Locking the CPU frequency seems to improve the results of these benchmarks significantly, and it reduces variability.

## Google Benchmark notes

### Repetitions

Google Benchmark uses two settings to control how many times to run each benchmark, "iterations" and "repetitions". By default, the repetition count is one. Google Benchmark runs the benchmark a few times to determine a sufficiently-large iteration count.

Google Benchmark can optionally run a benchmark repeatedly and report statistics (median, mean, standard deviation) for the runs. To do so, pass the `--benchmark_repetitions` option, e.g.:

```
# ./bionic-benchmarks --benchmark_filter=BM_stdlib_strtoll --benchmark_repetitions=4
...
-------------------------------------------------------------------
Benchmark                         Time             CPU   Iterations
-------------------------------------------------------------------
BM_stdlib_strtoll              27.7 ns         27.7 ns     25290525
BM_stdlib_strtoll              27.7 ns         27.7 ns     25290525
BM_stdlib_strtoll              27.7 ns         27.7 ns     25290525
BM_stdlib_strtoll              27.8 ns         27.7 ns     25290525
BM_stdlib_strtoll_mean         27.7 ns         27.7 ns            4
BM_stdlib_strtoll_median       27.7 ns         27.7 ns            4
BM_stdlib_strtoll_stddev      0.023 ns        0.023 ns            4
```

There are 4 runs, each with 25290525 iterations. Measurements for the individual runs can be suppressed if they aren't needed:

```
# ./bionic-benchmarks --benchmark_filter=BM_stdlib_strtoll --benchmark_repetitions=4 --benchmark_report_aggregates_only
...
-------------------------------------------------------------------
Benchmark                         Time             CPU   Iterations
-------------------------------------------------------------------
BM_stdlib_strtoll_mean         27.8 ns         27.7 ns            4
BM_stdlib_strtoll_median       27.7 ns         27.7 ns            4
BM_stdlib_strtoll_stddev      0.043 ns        0.043 ns            4
```

### CPU frequencies

To get consistent results between runs, it can sometimes be helpful to restrict a benchmark to specific cores, or to lock cores at specific frequencies. Some phones have a big.LITTLE core setup, or at least allow some cores to run at higher frequencies than others.

A core can be selected for bionic-benchmarks using the `--bionic_cpu` option or the `taskset` utility. For example, a Pixel 3 has 4 Kryo 385 Silver cores followed by 4 Gold cores:

```
blueline:/ # /data/benchmarktest64/bionic-benchmarks/bionic-benchmarks --benchmark_filter=BM_stdlib_strtoll --bionic_cpu=0
...
------------------------------------------------------------
Benchmark                  Time             CPU   Iterations
------------------------------------------------------------
BM_stdlib_strtoll       64.2 ns         63.6 ns     11017493

blueline:/ # /data/benchmarktest64/bionic-benchmarks/bionic-benchmarks --benchmark_filter=BM_stdlib_strtoll --bionic_cpu=4
...
------------------------------------------------------------
Benchmark                  Time             CPU   Iterations
------------------------------------------------------------
BM_stdlib_strtoll       21.8 ns         21.7 ns     33167103
```

A similar result can be achieved using `taskset`. The first parameter is a hexadecimal bitmask of the cores the process is allowed to run on, which `taskset` passes to sched_setaffinity:

```
blueline:/ # taskset f /data/benchmarktest64/bionic-benchmarks/bionic-benchmarks --benchmark_filter=BM_stdlib_strtoll
...
------------------------------------------------------------
Benchmark                  Time             CPU   Iterations
------------------------------------------------------------
BM_stdlib_strtoll       64.3 ns         63.6 ns     10998697

blueline:/ # taskset f0 /data/benchmarktest64/bionic-benchmarks/bionic-benchmarks --benchmark_filter=BM_stdlib_strtoll
...
------------------------------------------------------------
Benchmark                  Time             CPU   Iterations
------------------------------------------------------------
BM_stdlib_strtoll       21.3 ns         21.2 ns     33094801
```

To lock the CPU frequency, use the sysfs interface at `/sys/devices/system/cpu/cpu*/cpufreq/`. Changing the scaling governor to `performance` suppresses the warning that Google Benchmark otherwise prints:

```
***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
```
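
A minimal sketch of locking a single core, assuming a rooted shell and that the standard cpufreq files (`scaling_governor`, `scaling_available_frequencies`, `scaling_min_freq`, `scaling_max_freq`) are exposed for that core (cpu4 here is just an example):

```
# Example only: use the performance governor on the core the benchmark will run on.
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor

# To pin an exact frequency instead, pick one of the supported values...
cat /sys/devices/system/cpu/cpu4/cpufreq/scaling_available_frequencies
# ...and write it (in kHz) to both limits; the current max is used here as a placeholder choice.
freq_khz=$(cat /sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq)
echo "$freq_khz" > /sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq
echo "$freq_khz" > /sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq
```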

Some devices have a `perf-setup.sh` script that locks CPU and GPU frequencies. Some TradeFed benchmarks appear to use this script. For more information: