Add a benchmark to exercise an optimization I'll send out next, plus a
test added after an even bigger refactoring (part of a more interesting
optimization) broke strtol() in a way the existing strtol() tests didn't
catch.
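For context, a minimal sketch of the kind of benchmark meant here, using
the google-benchmark API (the input string is illustrative, not
necessarily the one this change adds):

    #include <stdlib.h>
    #include <benchmark/benchmark.h>

    // Parse a fixed decimal string repeatedly; DoNotOptimize stops the
    // compiler from hoisting or eliding the call.
    static void BM_stdlib_strtol(benchmark::State& state) {
      for (auto _ : state) {
        benchmark::DoNotOptimize(strtol(" -123456789", nullptr, 10));
      }
    }
    BENCHMARK(BM_stdlib_strtol);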
Test: treehugger
Change-Id: Ic974900021107938dbbbe98648960adb102d9595
The bionic benchmarks set the decay time in various ways, but
don't necessarily restore it properly. Add a new method for
getting the current decay time and a corresponding way to restore it.
Right now the assumption is that the decay time defaults to zero,
but in the near future that assumption might no longer hold, so
using this method future-proofs the code.
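A sketch of the save/restore pattern being described; bionic only
exposes a setter via mallopt(M_DECAY_TIME, ...), so the getter below
just caches the last value set and is a hypothetical stand-in for
whatever this change actually adds:

    #include <malloc.h>
    #include <stdlib.h>
    #include <benchmark/benchmark.h>

    static int g_decay_time = 0;  // assumes the historical default of zero

    static int GetDecayTime() { return g_decay_time; }  // hypothetical helper
    static void SetDecayTime(int value) {
      mallopt(M_DECAY_TIME, value);
      g_decay_time = value;
    }

    static void BM_malloc_decay(benchmark::State& state) {
      int saved = GetDecayTime();  // save whatever the current setting is...
      SetDecayTime(1);
      for (auto _ : state) {
        void* p = malloc(128);
        benchmark::DoNotOptimize(p);
        free(p);
      }
      SetDecayTime(saved);  // ...and restore it rather than assuming zero
    }
    BENCHMARK(BM_malloc_decay);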
Bug: 302212507
Test: Unit tests pass for both static and dynamic executables.
Test: Ran bionic benchmarks that were modified.
Change-Id: Ia77ff9ffee3081c5c1c02cb4309880f33b284e82
I don't think the old benchmarks made any sense; going through all the
characters might have made sense as a unit test, but I'm not sure how
actionable they were for realistic cases. In particular, "all ASCII" is
a common special case that's worth measuring separately. I'm still not
hugely convinced, but at least separating the "ASCII" and "wide" paths
is an improvement. I can't think of a case where we did optimization
work on this kind of code without considering those two paths
separately.
I've added to the single-character benchmarks by splitting out the
separate cases instead: one benchmark each for one-byte through
four-byte characters.
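To make the split concrete, roughly what a pair of the per-length
benchmarks might look like (names and characters are illustrative, the
two- and three-byte cases are analogous, and bionic's UTF-8 locale is
"C.UTF-8"):

    #include <locale.h>
    #include <wchar.h>
    #include <benchmark/benchmark.h>

    // One-byte (ASCII) case: U+0061 'a'.
    static void BM_mbrtowc_1(benchmark::State& state) {
      setlocale(LC_CTYPE, "C.UTF-8");
      wchar_t wc;
      mbstate_t ps = {};
      for (auto _ : state) {
        benchmark::DoNotOptimize(mbrtowc(&wc, "a", 1, &ps));
      }
    }
    BENCHMARK(BM_mbrtowc_1);

    // Four-byte case: U+1F401 encodes as f0 9f 90 81.
    static void BM_mbrtowc_4(benchmark::State& state) {
      setlocale(LC_CTYPE, "C.UTF-8");
      wchar_t wc;
      mbstate_t ps = {};
      for (auto _ : state) {
        benchmark::DoNotOptimize(mbrtowc(&wc, "\xf0\x9f\x90\x81", 4, &ps));
      }
    }
    BENCHMARK(BM_mbrtowc_4);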
Bug: http://b/206523398
Test: treehugger
Change-Id: I58cbfedb4b497a55580857eff307a024938cf006
A lot of these benchmarks predate DoNotOptimize and rolled their own
hacks.
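The before/after shape, sketched with an illustrative function:

    #include <unistd.h>
    #include <benchmark/benchmark.h>

    // Before: a hand-rolled hack, writing through a volatile so the
    // compiler can't discard the result.
    volatile pid_t sink;
    static void BM_getpid_old(benchmark::State& state) {
      for (auto _ : state) {
        sink = getpid();
      }
    }

    // After: let the library do it.
    static void BM_getpid_new(benchmark::State& state) {
      for (auto _ : state) {
        benchmark::DoNotOptimize(getpid());
      }
    }
    BENCHMARK(BM_getpid_new);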
Bug: http://b/148307629
Test: ran benchmarks before & after and got similar results
Change-Id: If44699d261b687f6253af709edda58f4c90fb285
Add a calloc benchmark to make sure that a native allocator isn't
doing anything incorrectly when zeroing memory.
Also add a fork call benchmark to verify that the time to make a
fork call isn't increasing.
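Sketches of the two shapes (sizes and names are illustrative):

    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <benchmark/benchmark.h>

    // calloc: any allocator shortcut in the zeroing path shows up here.
    static void BM_stdlib_calloc(benchmark::State& state) {
      for (auto _ : state) {
        void* p = calloc(1, state.range(0));
        benchmark::DoNotOptimize(p);
        free(p);
      }
    }
    BENCHMARK(BM_stdlib_calloc)->Arg(4096);

    // fork: time the call itself; reaping the child is kept off the clock.
    static void BM_unistd_fork(benchmark::State& state) {
      for (auto _ : state) {
        pid_t pid = fork();
        if (pid == 0) _exit(0);  // child exits immediately
        state.PauseTiming();
        waitpid(pid, nullptr, 0);
        state.ResumeTiming();
      }
    }
    BENCHMARK(BM_unistd_fork);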
Test: Ran benchmarks on walleye and verified that the numbers are not
Test: too variable between runs.
Change-Id: I61d289d277f85ac432a315e539cf6391ea036866
Update the native allocator documentation to include running this
benchmark.
Move malloc_benchmark.cpp to malloc_sql_benchmark.cpp, and use
malloc_benchmark.cpp for benchmarking the functions from malloc.h.
Bug: 137795072
Test: Ran new benchmark.
Change-Id: I76856de833032da324ad0bc0b6bd85a4ea8c253d
Many of our benchmarks are basically just "call one function with a
fixed argument". We don't need to keep repeating all the boilerplate for
that.
This also ensures we don't forget benchmark::DoNotOptimize, which,
in addition to being a good idea in general, specifically solves the
problem with the gettid benchmark and gives a more accurate result by
removing the indirection through a function pointer.
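Roughly the shape of the helper, sketched from this description (the
real macro may differ in name or registration details):

    #include <unistd.h>
    #include <benchmark/benchmark.h>

    // One line per "call one function with a fixed argument" benchmark;
    // the expression is evaluated inline, so there's no function-pointer
    // indirection, and DoNotOptimize can't be forgotten.
    #define BIONIC_TRIVIAL_BENCHMARK(name, expression) \
      static void name(benchmark::State& state) {      \
        for (auto _ : state) {                         \
          benchmark::DoNotOptimize(expression);        \
        }                                              \
      }                                                \
      BENCHMARK(name)

    BIONIC_TRIVIAL_BENCHMARK(BM_unistd_gettid, gettid());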
Test: ran benchmarks
Change-Id: Id67535243678cd0d48f51cf25141e2040da9af03
Add some benchmarks that keep a certain number of allocations
around. These benchmarks should not be used as an absolute measure of
whether a native allocator is good or bad, but they are useful for
making sure the numbers don't change drastically between allocator
versions.
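A sketch of the shape (the sizes and counts are illustrative):

    #include <stdlib.h>
    #include <vector>
    #include <benchmark/benchmark.h>

    // Hold a number of allocations live while timing further malloc/free
    // traffic, so the allocator is measured against a non-empty heap.
    static void BM_malloc_with_live_allocations(benchmark::State& state) {
      std::vector<void*> live(state.range(0));
      for (auto& p : live) p = malloc(64);
      for (auto _ : state) {
        void* p = malloc(64);
        benchmark::DoNotOptimize(p);
        free(p);
      }
      for (void* p : live) free(p);
    }
    BENCHMARK(BM_malloc_with_live_allocations)->Arg(2048);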
Also update the malloc sql benchmark to match the same style as these
new benchmarks.
Bug: 129743239
Test: Ran these benchmarks.
Change-Id: I1995d98fd269b61d9c96efed6eff3ed278e24c97
Add a hand-rolled maps line parser as something to compare our realistic
sscanf benchmark against. Also add benchmarks for the ato*/strto* family.
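What the hand-rolled side looks like, roughly (abbreviated: a real
/proc/*/maps parser also handles permissions, offset, device, inode,
and path):

    #include <stdlib.h>
    #include <benchmark/benchmark.h>

    static void BM_parse_maps_by_hand(benchmark::State& state) {
      const char* line = "70000000-7001e000 r-xp 00000000 fd:00 1234 /lib.so";
      for (auto _ : state) {
        char* end;
        unsigned long start = strtoul(line, &end, 16);      // start address
        unsigned long finish = strtoul(end + 1, &end, 16);  // skip '-', end address
        benchmark::DoNotOptimize(start + finish);
      }
    }
    BENCHMARK(BM_parse_maps_by_hand);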
This patch doesn't fix the tests, which seem to have been broken by
the recent google-benchmark upgrade despite the benchmarks themselves
all working just fine. To me that's a final strike against these tests,
which are hard to maintain and not obviously useful, but we can worry
about what to do with them (just delete them, or try to turn them into
tests that actually have some value) in a separate CL.
Bug: N/A
Test: ran benchmarks
Change-Id: I6c9a77ece98d624baeb049b360876ef5c35ea7f2
Instead of requiring a manually maintained list of all the benchmarks,
add a programmatic way to generate all of them.
This generation runs the benchmarks in alphabetical order.
Add a new macro, BIONIC_BENCHMARK_WITH_ARG, that takes the default
argument to pass to the benchmark. Change the benchmarks that require
default arguments to use it.
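Illustrative usage of the two registration forms (the specific
benchmarks and the "AT_COMMON_SIZES" default are examples, not
necessarily what this change uses):

    // No default argument: registered and run programmatically by name.
    BIONIC_BENCHMARK(BM_unistd_getpid);

    // With a default argument, used when the caller doesn't supply one.
    BIONIC_BENCHMARK_WITH_ARG(BM_string_memcpy, "AT_COMMON_SIZES");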
Add a small example xml file, and remove the full.xml/host.xml files.
Update the README.
Test: Ran new unit tests, verified all tests are added.
Change-Id: I8036daeae7635393222a7a92d18f34119adba745