Make sure that all the variables are properly initialized.
Remove the code that uses sched_getaffinity to verify the core,
since that makes it impossible to change the cpu for different tests.
Change cpu_to_lock to an int; it never needed to be a long.
Add a few missing tests.
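
A minimal sketch of the kind of per-test cpu locking this enables; the
LockToCpu helper is hypothetical, not the actual benchmark code:

    #define _GNU_SOURCE 1  // for CPU_* macros and sched_setaffinity on glibc
    #include <sched.h>
    #include <stdio.h>

    // Hypothetical helper: pin the calling thread to a single cpu.
    // An int is plenty for a cpu number.
    static bool LockToCpu(int cpu_to_lock) {
      cpu_set_t cpuset;
      CPU_ZERO(&cpuset);
      CPU_SET(cpu_to_lock, &cpuset);
      // Without the old sched_getaffinity check, any cpu can be requested;
      // the kernel rejects cpus that do not exist or are offline.
      if (sched_setaffinity(0, sizeof(cpuset), &cpuset) == -1) {
        perror("sched_setaffinity");
        return false;
      }
      return true;
    }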
Test: Ran unit tests.
Test: Built the tests and ran on different cpus, verifying that the
Test: chosen cpu was correct.
Test: Created an xml file that had different cpus for different tests
Test: and verified that it locked to each cpu properly.
Change-Id: Ie7b4ad8f306f13d6e968d118e71bb5dc0221552a
We've been using #pragma once for new internal files, but let's be more bold.
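
For reference, the two guard styles in question (the header and macro
names below are illustrative):

    // Classic include guard: needs a macro name that is unique project-wide.
    #ifndef BIONIC_BENCHMARKS_UTIL_H
    #define BIONIC_BENCHMARKS_UTIL_H
    // ... declarations ...
    #endif  // BIONIC_BENCHMARKS_UTIL_H

    // #pragma once: one line, nothing to keep unique, and supported by
    // both clang and gcc.
    #pragma once
    // ... declarations ...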
Bug: N/A
Test: builds
Change-Id: I7e2ee2730043bd884f9571cdbd8b524043030c07
There were a bunch more that were unreasonable or incorrect, but these
seemed legitimate. Nothing very interesting, though.
Bug: N/A
Test: ran tests, benchmarks
Change-Id: If66971194d4a7b4bf6d0251bedb88e8cdc88a76f
Instead of requiring a hand-maintained list of all the benchmarks,
add a programmatic way to generate them.
This generation runs the benchmarks in alphabetical order.
Add a new macro BIONIC_BENCHMARK_WITH_ARG that supplies the default argument
to pass to the benchmark. Change the benchmarks that require default arguments
to use it.
Add a small example xml file, and remove the full.xml/host.xml files.
Update readme.
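
A hedged sketch of how such self-registering macros could work; the
map-based registry and helper names here are assumptions, not bionic's
actual implementation:

    #include <map>
    #include <string>

    struct BenchmarkEntry {
      void (*func)();
      std::string default_arg;  // used when no argument is given explicitly
    };

    // A std::map iterates its keys in sorted order, which is what yields
    // the alphabetical run order described above.
    static std::map<std::string, BenchmarkEntry>& Registry() {
      static std::map<std::string, BenchmarkEntry> registry;
      return registry;
    }

    static int RegisterBenchmark(const char* name, void (*func)(),
                                 const char* default_arg = "") {
      Registry()[name] = BenchmarkEntry{func, default_arg};
      return 0;
    }

    // Each benchmark self-registers during static initialization, so no
    // hand-maintained master list is needed.
    #define BIONIC_BENCHMARK(func) \
      static int _reg_##func = RegisterBenchmark(#func, func)
    #define BIONIC_BENCHMARK_WITH_ARG(func, arg) \
      static int _reg_##func = RegisterBenchmark(#func, func, arg)

    // Usage (illustrative): BIONIC_BENCHMARK_WITH_ARG(BM_string_memcpy, "8192");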
Test: Ran new unit tests, verified all tests are added.
Change-Id: I8036daeae7635393222a7a92d18f34119adba745