These are available to com.android.neuralnetworks via the baseline
apexAvailable map in build/soong/apex/apex.go. This CL makes this
explicit in Android.bp
Test: m nothing #passes
Bug: 281077552
Change-Id: I9f08db0dba6b155c6f25393a5d4baf6de27110da
This CL adds the following additional bounds checks:
* Checks the index against the std::vector's size before accessing
the element at that index
* Replaces the unchecked array index operator [] with the checked
std::vector::at method
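For illustration, the two patterns look roughly like the sketch below
(the container and helper names are made up, not the actual NN code):

  #include <cstddef>
  #include <vector>

  // Illustrative only: check the index against the vector's size before
  // touching the element, and prefer the bounds-checked accessor.
  int getOperand(const std::vector<int>& operands, size_t index) {
      if (index >= operands.size()) {
          return 0;  // error path in the real code
      }
      // std::vector::at throws std::out_of_range on a bad index instead
      // of reading out of bounds like operator[].
      return operands.at(index);
  }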
Bug: 256589724
Test: mma
Merged-In: I6bfb02a5cd76258284cc4d797a4508b21e672c4b
Change-Id: I6bfb02a5cd76258284cc4d797a4508b21e672c4b
(cherry picked from commit 9b17e6485b)
Merged-In: I6bfb02a5cd76258284cc4d797a4508b21e672c4b
This CL adds the following additional bounds checks:
* Checks the index against the std::vector's size before accessing
the element at that index
* Replaces the unchecked array index operator [] with the checked
std::vector::at method
Bug: 256589724
Test: mma
Merged-In: I6bfb02a5cd76258284cc4d797a4508b21e672c4b
Change-Id: I6bfb02a5cd76258284cc4d797a4508b21e672c4b
interfaces.
- Add new Android.bp in graphics folder and wrap composer and common
AIDL into separate cc_defaults.
- Remove composer3 dependency from the allocator's VTS .bp file.
Bug: 243429120
Test: builds
Change-Id: Ia91e4ab87b7ac86248094317185b317d5604e654
Certain mutation tests -- mutateOperandLifeTimeTest and
mutateOperandInputOutputTest -- can introduce potentially very large
CONSTANT_COPY operands, which can in turn create potentially very
large Models which must be passed across binder. To avoid overflowing
the binder buffer, we estimate the size of the mutated Model, and skip
the test if that size is too high. The old logic recognizes that our
tests only have a single active binder transaction at a time, and
assumes that there are no other clients using the same service at the
same time, and so we should have the binder buffer to ourselves; to be
conservative, we reject any Model whose estimated size exceeds half
the binder buffer size. Unfortunately, sometimes the binder buffer
still overflows, because it unexpectedly contains an allocation from
some other transaction: It appears that binder buffer memory
management is not serialized with respect to transactions from our
tests, and therefore depending on scheduler behavior, there may be a
sizeable allocation still in the buffer when we attempt to pass the
large Model. To fix this problem we become even more conservative,
and instead of limiting the Model to half the binder buffer size, we
limit it to half IBinder.MAX_IPC_SIZE (the recommended transaction
size limit). To confirm that this change does not exclude too many
tests, I checked how many times the size filter function
exceedsBinderSizeLimit is called, how many times it rejects a model
under the new logic (modelsExceedHalfMaxIPCSize), and how many times
it rejects a model under the old logic (modelsExceedHalfBufferSize).
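In reduced form, the tightened filter amounts to the comparison below
(a sketch: the 64 KiB constant standing in for IBinder.MAX_IPC_SIZE and
the size-estimate parameter are assumptions, not the harness code):

  #include <cstddef>

  // Assumed value standing in for IBinder::MAX_IPC_SIZE, the
  // recommended per-transaction size limit.
  constexpr size_t kMaxIPCSize = 64 * 1024;

  bool exceedsBinderSizeLimit(size_t estimatedModelSize) {
      // Old logic: reject above half of the whole binder buffer
      // (roughly 1 MiB by default). New logic: reject above half of
      // kMaxIPCSize, leaving headroom for allocations left behind by
      // unrelated transactions.
      return estimatedModelSize > kMaxIPCSize / 2;
  }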
Test: VtsHalNeuralnetworksV1_0TargetTest --gtest_filter=TestGenerated/ValidationTest.Test/*-*dsp*
Test: # models = 3592, modelsExceedHalfMaxIPCSize = 212, modelsExceedHalfBufferSize = 18
Test: VtsHalNeuralnetworksV1_1TargetTest --gtest_filter=TestGenerated/ValidationTest.Test/*-*dsp*
Test: # models = 7228, modelsExceedHalfMaxIPCSize = 330, modelsExceedHalfBufferSize = 28
Test: VtsHalNeuralnetworksV1_2TargetTest --gtest_filter=TestGenerated/ValidationTest.Test/*-*dsp*
Test: # models = 52072, modelsExceedHalfMaxIPCSize = 506, modelsExceedHalfBufferSize = 28
Test: VtsHalNeuralnetworksV1_3TargetTest --gtest_filter=TestGenerated/ValidationTest.Test/*-*dsp*
Test: # models = 73342, modelsExceedHalfMaxIPCSize = 568, modelsExceedHalfBufferSize = 28
Test: VtsHalNeuralnetworksTargetTest
Bug: 227719657
Bug: 227719752
Bug: 231928847
Bug: 238777741
Bug: 242271308
Change-Id: I3f81d71ca3c0ad4c639096b1dc034a8909bc8971
A sibling change removes the NN HIDL sample drivers from cuttlefish. In
response, this change removes the VtsHalNeuralnetworksV1_*TargetTest
tests from the TEST_MAPPING because they do not test anything without
the NN HIDL sample drivers present.
Note that the NN AIDL sample drivers and NN AIDL VTS test
(VtsHalNeuralnetworksTargetTest) are still present.
Bug: 233665601
Test: mma
Test: croot && cd hardware/interfaces/neuralnetworks && atest
Change-Id: I90bccd843ba9296c27d3010cec652be55a13a225
uniform_int_distribution<a> for types where sizeof(a) < 2 is not
valid according to the C++ library standard. Newer versions of LLVM
(particularly spurred on by ChromeOS toolchain changes)
require at least std::uniform_int_distribution<uint16_t>.
This is a required change for rolling LLVM to r458507.
This is necessary, but may not be sufficient to resolve
the issue.
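For example, code that previously instantiated the distribution
directly over a byte-sized type now has to draw from a 16-bit
distribution and narrow the result (a sketch, not the exact test code):

  #include <cstdint>
  #include <random>

  // std::uniform_int_distribution<uint8_t> is not a valid
  // instantiation, so draw from uint16_t and narrow.
  uint8_t randomUint8(std::mt19937& rng) {
      std::uniform_int_distribution<uint16_t> dist(0, 255);
      return static_cast<uint8_t>(dist(rng));
  }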
Bug: 231351802
Test: mma
Change-Id: I04c3cc91507f3467c432b9a25effdac3f5fb56f3
Also skip FL6 (AIDL_V2) tests for older AIDL drivers.
Cherrypicked from I689fef0945428f6548977628e3c43628dd1e5bf7
Bug: 206089870
Test: VtsHalNeuralnetworksTargetTest
Specifically, an old driver such as the AIDL_V1 sample driver can pass
the HIDL tests and skip the AIDL_V2 tests, while a new driver such as
the AIDL_V2 sample driver can pass all tests.
Change-Id: I689fef0945428f6548977628e3c43628dd1e5bf7
(cherry picked from commit 23d4e5e298)
- Add BATCH_MATMUL operation
- Support TENSOR_INT32 for RESHAPE operation.
Also update "current" version snapshot and use
android.hardware.neuralnetworks-V2-ndk since AIDL v1 has been frozen.
Cherrypicked from Iabe45c57e2306d61055f711eda03b80b9cbe906d
Bug: 206089870
Test: mm
Change-Id: Iabe45c57e2306d61055f711eda03b80b9cbe906d
Merged-In: Iabe45c57e2306d61055f711eda03b80b9cbe906d
(cherry picked from commit aaeda0e84f)
Prior to this change, if IDevice::prepareModel* was passed a null
callback, the code would still attempt to call "notify" on the callback
to return the error to the client. This CL ensures the "notify" method
will not be invoked if the callback is null.
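A minimal sketch of the guard (the types below are simplified
stand-ins for the real HAL interfaces, not the actual code):

  #include <memory>

  enum class ErrorStatus { NONE, INVALID_ARGUMENT };  // stand-in
  struct IPreparedModelCallback {                     // stand-in
      virtual ~IPreparedModelCallback() = default;
      virtual void notify(ErrorStatus status) = 0;
  };

  ErrorStatus prepareModelSketch(
          const std::shared_ptr<IPreparedModelCallback>& callback,
          bool modelIsValid) {
      // New behavior: with no callback, report the error through the
      // return value instead of dereferencing a null callback.
      if (callback == nullptr) {
          return ErrorStatus::INVALID_ARGUMENT;
      }
      if (!modelIsValid) {
          callback->notify(ErrorStatus::INVALID_ARGUMENT);
          return ErrorStatus::INVALID_ARGUMENT;
      }
      callback->notify(ErrorStatus::NONE);
      return ErrorStatus::NONE;
  }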
Bug: 230914930
Test: mma
Test: presubmit
Change-Id: I4a15d02c4879a0261ec26cc0e7a47d0a4da86b8b
Merged-In: I4a15d02c4879a0261ec26cc0e7a47d0a4da86b8b
(cherry picked from commit d6f6d01499)
NNAPI NN_TRY macros use Statement Expressions (a GNU extension) to
propagate errors. However, a "return" statement in a Statement
Expression can lead to memory leaks when the Statement Expression is
being used to initialize a member of a struct. Specifically, when one
member of a struct is already initialized, and a Statement Expression
used to initialize a subsequent member early-returns, the previously
initialized members will not have their destructors called.
This CL moves any NN_TRY macro out of struct initialization to avoid any
potential memory leaks.
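The shape of the problem and of the fix, in reduced form (NN_TRY and
the types here are simplified stand-ins for the real utility code):

  #include <optional>
  #include <utility>
  #include <vector>

  // Reduced stand-in for NN_TRY: a GNU statement expression that
  // early-returns from the enclosing function on failure.
  #define NN_TRY(expr)                          \
      ({                                        \
          auto nnTryResult = (expr);            \
          if (!nnTryResult.has_value()) {       \
              return {};                        \
          }                                     \
          std::move(nnTryResult).value();       \
      })

  struct Foo {
      std::vector<int> a;
      std::vector<int> b;
  };

  std::optional<std::vector<int>> makeA() { return std::vector<int>{1}; }
  std::optional<std::vector<int>> makeB() { return std::nullopt; }

  std::optional<Foo> makeFoo() {
      // Leaky pattern: if NN_TRY(makeB()) early-returns while the
      // braced initializer is being evaluated, the vector already
      // constructed for the first member never has its destructor run.
      //   return Foo{NN_TRY(makeA()), NN_TRY(makeB())};

      // Pattern after this CL: evaluate the fallible expressions first,
      // so an early return happens before any member of Foo exists.
      auto a = NN_TRY(makeA());
      auto b = NN_TRY(makeB());
      return Foo{std::move(a), std::move(b)};
  }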
Bug: 230500484
Test: mma
Test: presubmit
Change-Id: I3493fd4764f8eacc86750e6414e62bc891abaccd
Merged-In: I3493fd4764f8eacc86750e6414e62bc891abaccd
NNAPI NN_TRY macros use Statement Expressions (a GNU extension) to
propagate errors. However, a "return" statement in a Statement
Expression can lead to memory leaks when the Statement Expression is
being used to initialize a member of a struct. Specifically, when one
member of a struct is already initialized, and a Statement Expression
used to initialize a subsequent member early-returns, the previously
initialized members will not have their destructors called.
This CL moves any NN_TRY macro out of struct initialization to avoid any
potential memory leaks.
Bug: 230500484
Test: mma
Test: presubmit
Change-Id: I3493fd4764f8eacc86750e6414e62bc891abaccd
For IBurst, a slot value of -1 indicates the slot should be ignored.
However, GeneratedTestHarness still attempts to call
IBurst::releaseMemoryResource on ignored slots. Instead, we should skip
releasing any ignored slots.
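In reduced form, the cleanup loop now skips the sentinel value (the
burst type below is a stand-in for the AIDL IBurst proxy):

  #include <cstdint>
  #include <vector>

  constexpr int64_t kIgnoredSlot = -1;  // slot value meaning "ignore"

  template <typename Burst>
  void releaseSlots(Burst& burst, const std::vector<int64_t>& slots) {
      for (int64_t slot : slots) {
          // Skip ignored slots instead of asking the service to release
          // a resource it never tracked.
          if (slot == kIgnoredSlot) continue;
          burst.releaseMemoryResource(slot);
      }
  }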
Bug: 230103381
Test: mma
Test: VtsHalNeuralnetworksTargetTest
Test: presubmit
Change-Id: I82e538aa0fd9e8ecc077df1c1ceece46a6166e67
Merged-In: I82e538aa0fd9e8ecc077df1c1ceece46a6166e67
For IBurst, a slot value of -1 indicates the slot should be ignored.
However, GeneratedTestHarness still attempts to call
IBurst::releaseMemoryResource on ignored slots. Instead, we should skip
releasing any ignored slots.
Bug: 230103381
Test: mma
Test: VtsHalNeuralnetworksTargetTest
Test: presubmit
Change-Id: I82e538aa0fd9e8ecc077df1c1ceece46a6166e67
Prior to this change, if IDevice::prepareModel* was passed a null
callback, the code would still attempt to call "notify" on the callback
to return the error to the client. This CL ensures the "notify" method
will not be invoked if the callback is null.
Bug: N/A
Test: mma
Test: presubmit
Change-Id: I4a15d02c4879a0261ec26cc0e7a47d0a4da86b8b
This change alters the asynchronous execute* methods to be handled
synchronously (from the same thread) for three reasons:
1) To remove the need to use IPreparedModel::getUnderlyingResource
2) To simplify the code
3) To make the code more performant
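Conceptually the change is the difference sketched below (the types
are placeholders, not the actual adapter code):

  #include <functional>

  using ExecuteFn = std::function<int()>;
  using Callback = std::function<void(int)>;

  // Before: the work was pushed onto another thread, e.g.
  //   std::thread([execute, cb] { cb(execute()); }).detach();
  //
  // After: the "asynchronous" entry point runs the execution and
  // notifies the callback from the calling thread.
  void executeAndNotify(const ExecuteFn& execute, const Callback& cb) {
      cb(execute());
  }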
Bug: N/A
Test: mma
Test: presubmit
Change-Id: I2c37deb03d1b1c34b0173bd741e55fce4de757f7
Union tags are of enum type, so streaming them would make more sense
with a cast to the underlying type.
For now the cast is not required since tags are defined as `enum Tag`,
but we're going to change that to `enum class Tag`, which won't work
with operator<< without a cast.
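A minimal example of the pattern this prepares for (Tag here is a
placeholder enum, not the generated AIDL type):

  #include <ostream>
  #include <type_traits>

  enum class Tag { none, scalar, tensor };  // placeholder

  // With `enum class` there is no implicit conversion to an integer,
  // so operator<< must cast to the underlying type explicitly.
  std::ostream& operator<<(std::ostream& os, Tag tag) {
      return os << static_cast<std::underlying_type_t<Tag>>(tag);
  }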
Bug: 218912230
Test: m
Change-Id: Ia5e8a5c38fe23c72dffbdca320a32abdfa0eb38e
These defines are redundant because they are already defined in
neuralnetworks_utils_defaults.
Bug: N/A
Test: mma
Change-Id: I1c5c44e9e61da19bc10dd8ed2e38099f7c4baccd
* Timed-out runs do not show any warning messages.
* These test files cannot finish clang-tidy runs with
the following settings:
TIDY_TIMEOUT=90
WITH_TIDY=1
CLANG_ANALYZER_CHECKS=1
* When TIDY_TIMEOUT is set, in Android continuous builds,
tidy_timeout_srcs files will not be compiled by clang-tidy.
When developers build locally without TIDY_TIMEOUT,
tidy_timeout_srcs files will be compiled.
* Some of these test modules may be split into smaller ones, or have
some time-consuming checks disabled, so that clang-tidy can run on
them within the time limit.
Bug: 201099167
Test: make droid tidy-hardware-interfaces_subset
Change-Id: I1de28f1572fff368f67eab512fffec9f2e5c2a9b