Compared with v1.1, the converter for the 1.2 HIDL model additionally
supports extraParam, dynamic output shape, and zero-sized output.
Modify CompilationCachingTests to use the new test struct.
Bug: 123092187
Bug: 138718240
Test: All VTS
Change-Id: I54ac97f62898e47a338b51cc6d895a0309ab001f
Implement converter utilities constructing HIDL model and request from
TestModel.
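The utilities themselves are not shown here; as a rough, self-contained
illustration of the conversion idea (flattening constant operand data into one
blob and recording each operand's location), consider this sketch built on
hypothetical stand-in structs rather than the real HIDL and TestHarness types:

    // Hypothetical stand-in structs for illustration only; the real utilities
    // operate on test_helper::TestModel and the HIDL V1_x model/request types.
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct SimpleOperand {                   // stand-in for a test-harness operand
        std::vector<uint32_t> dimensions;
        std::vector<uint8_t> constantData;   // empty unless the operand is a constant
    };

    struct SimpleTestModel {                 // stand-in for TestModel
        std::vector<SimpleOperand> operands;
        std::vector<uint32_t> inputIndexes;
        std::vector<uint32_t> outputIndexes;
    };

    struct HalOperand {                      // stand-in for a HAL operand
        std::vector<uint32_t> dimensions;
        uint32_t valueOffset = 0;            // location of constant data in the blob
        uint32_t valueLength = 0;
    };

    struct HalModel {                        // stand-in for a HAL model
        std::vector<HalOperand> operands;
        std::vector<uint8_t> operandValues;  // all constant data, flattened
        std::vector<uint32_t> inputIndexes;
        std::vector<uint32_t> outputIndexes;
    };

    // Flatten each operand's constant data into a single operandValues blob and
    // record where each operand's data lives, mirroring the converter's job.
    HalModel createModel(const SimpleTestModel& testModel) {
        HalModel model;
        for (const SimpleOperand& op : testModel.operands) {
            HalOperand halOperand;
            halOperand.dimensions = op.dimensions;
            halOperand.valueOffset = static_cast<uint32_t>(model.operandValues.size());
            halOperand.valueLength = static_cast<uint32_t>(op.constantData.size());
            model.operandValues.insert(model.operandValues.end(),
                                       op.constantData.begin(), op.constantData.end());
            model.operands.push_back(std::move(halOperand));
        }
        model.inputIndexes = testModel.inputIndexes;
        model.outputIndexes = testModel.outputIndexes;
        return model;
    }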
Bug: 123092187
Bug: 138718240
Test: All VTS
Change-Id: I0b26b7f41d31d5e63ed083ab5f6f269a3620f034
See change I2c0366fb87c96851fa6e0f8fe9ceac012d8e3513
Bug: 136097638
Test: m VtsHalNeuralnetworksV1_0TargetTest
Test: m VtsHalNeuralnetworksV1_1TargetTest
Test: m VtsHalNeuralnetworksV1_2TargetTest
Test: m VtsHalNeuralnetworksV1_2CompatV1_0TargetTest
Change-Id: I6fdede028422145d313d46532b5d2154ef0d40bc
Merged-In: I6fdede028422145d313d46532b5d2154ef0d40bc
(cherry picked from commit 2bcfdc82a0)
The VTS Callback files are a subset of the Callback files in
frameworks/ml/nn/runtime/Callbacks.*. This CL syncs the implementations,
removing the functionality that is not needed in VTS.
Fixes: 132322149
Test: mma
Test: VtsHalNeuralnetworksV1_0TargetTest
Test: VtsHalNeuralnetworksV1_1TargetTest
Test: VtsHalNeuralnetworksV1_2TargetTest
Change-Id: I114ce7f3b6c3d58de0196e9508209614d0a73e11
Merged-In: I114ce7f3b6c3d58de0196e9508209614d0a73e11
(cherry picked from commit 23d0e562e0)
To make it easier to create the next version of NNAPI, this change
removes the following nonsensical dependencies:
- NNAPI 1.0 VTS depends on NNAPI 1.1 and 1.2
- NNAPI 1.1 VTS depends on NNAPI 1.2
In particular, I made the following changes:
- split GeneratedTestHarness.cpp into three separate implementations,
- created a restricted version of Callbacks.h for 1.0 and 1.1,
- removed the dependency on frameworks/ml/nn/HalInterfaces.h,
- refactored Android.bp files for more autonomy between 1.0, 1.1, and 1.2,
- consolidated some common code into Utils.h,
- created structure for sharing code between VTS versions (VtsHalNeuralNetworksV1_0_utils).
Bug: 74827824
Bug: 124462414
Test: VtsHalNeuralnetworksV1_0TargetTest
Test: VtsHalNeuralnetworksV1_1TargetTest
Test: VtsHalNeuralnetworksV1_1CompatV1_0TargetTest
Test: VtsHalNeuralnetworksV1_2TargetTest
Test: VtsHalNeuralnetworksV1_2CompatV1_0TargetTest
Test: VtsHalNeuralnetworksV1_2CompatV1_1TargetTest
Change-Id: I4243d0b5e574255cef1070850f4d0a284f65f54e
Merged-In: I4243d0b5e574255cef1070850f4d0a284f65f54e
(cherry picked from commit 1d6b465997)
Document that NNAPI Execution inputs/outputs and HAL Request inputs/outputs must not be modified.
Bug: 121347610
Test: cd hardware/interfaces/neuralnetworks/1.0/vts/functional ; mma
Test: cd hardware/interfaces/neuralnetworks/1.2/vts/functional ; mma
Change-Id: Iac71d6d5ad92a90afd1b6babb7cfa128d7484c64
The documentation said that cell-to-input weights are required to be
present when input-to-input weights, recurrent-to-input weights, and
input gate bias are present. This was incorrect, since these weights
can be omitted when peephole connections are not used, even if all the
other tensors are present.
Another bug that is fixed in this change is that for output #0 the docs
said "of shape [batch_size, num_units * 4] with CIFG, or [batch_size,
num_units * 3] without CIFG" when in fact it is the opposite, i.e. "of
shape [batch_size, num_units * 3] with CIFG, or [batch_size, num_units *
4] without CIFG."
Existing CTS/VTS tests expect the behaviour described in the fixed documentation.
The existing CPU implementation is also compliant with the fixed documentation.
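As a small illustration of the corrected rule for output #0 (not code from
this change), the shape could be computed as:

    // Hypothetical helper mirroring the fixed documentation: output #0 has
    // num_units * 3 columns with CIFG and num_units * 4 columns without it.
    #include <array>
    #include <cstdint>

    std::array<uint32_t, 2> lstmOutput0Shape(uint32_t batchSize, uint32_t numUnits,
                                             bool useCifg) {
        return {batchSize, numUnits * (useCifg ? 3u : 4u)};
    }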
Fix: 111842951
Test: mma
Change-Id: Id011783e33672ae65dc6fe3784cb26feb832acf9
Merged-In: Id011783e33672ae65dc6fe3784cb26feb832acf9
(cherry picked from commit e0537f09fb)
This CL adds the following two types of validation tests on the NNAPI
Burst serialized format:
(1) it directly modifies the serialized data (invalidating it) to ensure
    that vendor driver services properly validate the serialized
    request (a sketch of this approach follows below)
(2) it ensures that vendor driver services properly fail when the result
    channel is not large enough to return the data
This CL additionally includes miscellaneous cleanups:
(1) having a generic "validateEverything" function
(2) moving the "prepareModel" function that's common across
validateRequest and validateBurst to a common area
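The helpers below are hypothetical stand-ins, not the real VTS utilities; this
is only a self-contained sketch of the "invalidate one datum at a time" idea
behind validation test (1):

    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <vector>

    // 'execute' stands in for submitting a serialized burst request to the
    // driver and reporting whether it was accepted; the real tests go through
    // the FMQ request/result channels.
    void validateBurstSerialization(
            const std::vector<uint64_t>& goodRequest,
            const std::function<bool(const std::vector<uint64_t>&)>& execute) {
        for (std::size_t i = 0; i < goodRequest.size(); ++i) {
            std::vector<uint64_t> mutated = goodRequest;
            mutated[i] ^= 0x1;  // invalidate exactly one datum
            // A conformant driver must reject the malformed serialized request.
            assert(!execute(mutated));
        }
    }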
Fixes: 129779280
Bug: 129157135
Test: mma
Test: VtsHalNeuralnetworksV1_2TargetTest (with sample-all)
Change-Id: Ib90fe7f662824de17db5a254a8c501855e45f6bd
Merged-In: Ib90fe7f662824de17db5a254a8c501855e45f6bd
(cherry picked from commit 20f28a24e9)
RESIZE_NEAREST_NEIGHBOR:
- The CPU implementation always had the order of {width, height}.
- In P, the documentation was incorrectly changed to {height, width}.
Bug: 131623949
Bug: 130035110
Test: mm
Change-Id: I6c79459fa73347fb51fc34a76ad78d5ac207f210
Merged-In: I6c79459fa73347fb51fc34a76ad78d5ac207f210
(cherry picked from commit 286339b4c8)
Bug: http://b/74200014
This is no longer broken after switching to lld.
Test: mmma SANITIZE_TARGET=address
hardware/interfaces/neuralnetworks/1.0/vts/functional
Change-Id: I0920da7db83222d089c7a571e9478fb7ad9ad9d4
hidl-generated makefiles are now generated such that bpfmt(file) == file.
Bug: 67417008
Test: enable bpfmt hook
Change-Id: I1f69d292bc23a7cc293a66110cb02d597e1019ad
This CL adapts the VTS code to the corresponding changes made in the NN
utility library.
Bug: 128319484
Test: mma
Test: atest VtsHalNeuralnetworksV1_0Target
Test: atest VtsHalNeuralnetworksV1_1Target
Test: atest VtsHalNeuralnetworksV1_2Target
Change-Id: I470e8228cde2b75620ad851e4fe408f8e8329e7c
Merged-In: I470e8228cde2b75620ad851e4fe408f8e8329e7c
(cherry picked from commit 102e0442d8)
This CL adapts the VTS code to the corresponding changes made in the NN
utility library.
Bug: 119570067
Test: mma
Test: atest VtsHalNeuralnetworksV1_0Target
Test: atest VtsHalNeuralnetworksV1_1Target
Test: atest VtsHalNeuralnetworksV1_2Target
Change-Id: I7cbc1d7025c0352aa1ed29d71dc84c2fcfc20a4f
Merged-In: I7cbc1d7025c0352aa1ed29d71dc84c2fcfc20a4f
(cherry picked from commit e68668f65b)
- Instead of isCachingSupported returning a single boolean, switch to
  getNumberOfCacheFilesNeeded returning the number of cache files. This
  supports use cases where a driver needs more than one cache file of
  each type, or where a driver does not need a data cache.
- Instead of a separate saveToCache, pass the cache info along with
  prepareModel_1_2 to save into the cache as well as perform the
  compilation. This avoids a potential additional copy of cache files
  (a rough caller-side sketch follows below).
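A minimal sketch of the reworked flow from a caller's point of view, assuming
the two methods named above; the wrapper functions here are hypothetical
placeholders with stub bodies, not the actual HIDL calls:

    #include <cstdint>
    #include <vector>

    struct CacheCounts { uint32_t numModelCache; uint32_t numDataCache; };

    // Stub standing in for IDevice::getNumberOfCacheFilesNeeded.
    CacheCounts queryCacheFilesNeeded() { return {1, 1}; }

    // Stub standing in for opening one file descriptor per cache file.
    std::vector<int> openCacheFds(uint32_t count) { return std::vector<int>(count, -1); }

    // Stub standing in for prepareModel_1_2 taking the cache handles.
    bool prepareModelWithCache(const std::vector<int>& /*modelCache*/,
                               const std::vector<int>& /*dataCache*/) { return true; }

    bool compileAndCache() {
        // 1) Ask the driver how many cache files of each type it needs; a
        //    driver that needs no data cache can simply report zero.
        const CacheCounts counts = queryCacheFilesNeeded();
        // 2) Hand the cache file descriptors to the same call that compiles
        //    the model, so the driver can write the cache while compiling
        //    instead of copying it later in a separate saveToCache step.
        return prepareModelWithCache(openCacheFds(counts.numModelCache),
                                     openCacheFds(counts.numDataCache));
    }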
Bug: 123780248
Test: VtsHalNeuralnetworksV1_xTargetTest with 1.2 sample driver
Test: VtsHalNeuralnetworksV1_xTargetTest with a test driver that can
read and write cache entries
Change-Id: I921b7b8ccc3c66af19f6589f7213c6870d6f07bf
Merged-In: I921b7b8ccc3c66af19f6589f7213c6870d6f07bf
(cherry picked from commit b61ba1ed0b)
This is needed to be able to test DEQUANTIZE after adding
TENSOR_QUANT8_SYMM support.
Test: NeuralNetworksTest_static
Test: VtsHalNeuralnetworksV1_2TargetTest
Change-Id: Iba9b286df70919d7b67cd77c91e625a044bd686c
Merged-In: Iba9b286df70919d7b67cd77c91e625a044bd686c
(cherry picked from commit bf26a9e3d7)
This CL creates a new suite of tests to enable presubmit tests:
* PresubmitHalNeuralnetworksV1_0TargetTest
* PresubmitHalNeuralnetworksV1_1TargetTest
* PresubmitHalNeuralnetworksV1_2TargetTest
These tests are the same as the VTS tests, with the exception that they
will skip running all tests (but still pass) if the service cannot be
found and its name starts with "service-".
This change does not affect the existing NNAPI VTS tests.
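Roughly, the skip condition looks like the following (an illustrative gtest
snippet, not the exact harness code; the instance name and lookup result are
placeholders):

    #include <string>

    #include <gtest/gtest.h>

    TEST(PresubmitExample, SkipsWhenOptionalServiceMissing) {
        const std::string name = "service-sample";  // hypothetical instance name
        const bool found = false;                   // would come from getService(name)
        if (!found && name.rfind("service-", 0) == 0) {
            // Presubmit variant: report the suite as skipped (still passing)
            // rather than failing when the optional service is absent.
            GTEST_SKIP() << "Service '" << name << "' not found; skipping.";
        }
        // ...the normal VTS checks would run here when the service exists...
    }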
Test: mma
Test: atest
Bug: 124040554
Change-Id: I36a38b66b21fd51d0ca381bb4e05a39266dd353f
(cherry picked from commit ed68233697)
Argument-dependent lookup will only work for operator<< if the operator
is in one of the argument's namespaces. This caused the enumerations to
pretty-print for V1_0, but not for V1_1 or V1_2. This change ensures the
V1_0 namespace is used.
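A self-contained illustration of the lookup rule (the namespaces below are
stand-ins for the real V1_0/V1_2 ones):

    #include <iostream>

    namespace v1_0 {
    enum class DeviceStatus { AVAILABLE, OFFLINE };
    // Because DeviceStatus is declared in v1_0, ADL finds this operator
    // wherever the enum is streamed, even from code in other namespaces.
    std::ostream& operator<<(std::ostream& os, DeviceStatus status) {
        return os << (status == DeviceStatus::OFFLINE ? "OFFLINE" : "AVAILABLE");
    }
    }  // namespace v1_0

    namespace v1_2 {
    using DeviceStatus = ::v1_0::DeviceStatus;  // newer versions reuse the 1.0 enum
    }  // namespace v1_2

    int main() {
        // Pretty-prints "OFFLINE" in both cases; had operator<< been declared
        // in a namespace unrelated to the enum, ADL would not find it here.
        std::cout << v1_0::DeviceStatus::OFFLINE << "\n";
        std::cout << v1_2::DeviceStatus::OFFLINE << "\n";
    }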
Test: mma
Test: atest VtsHalNeuralnetworksV1_0TargetTest (verified the test output "OFFLINE" for DeviceStatus and "DEVICE_UNAVAILABLE" for ErrorStatus instead of raw byte value representation)
Test: atest VtsHalNeuralnetworksV1_1TargetTest (verified the test output "OFFLINE" for DeviceStatus and "DEVICE_UNAVAILABLE" for ErrorStatus instead of raw byte value representation)
Test: atest VtsHalNeuralnetworksV1_2TargetTest (verified ran and passed tests)
Fixes: 124316129
Change-Id: I764a46e2d87615b1f3da0ab0e6edb134bb533887
(cherry picked from commit 42a35bee10)
Add the following tests for compilation caching:
- validation tests
- Test isCachingSupported
- Test prepareModelFromCache with invalid numFd and invalid access mode
- Test saveToCache with invalid numFd, invalid access mode,
invalid file size, and invalid fd offset
- execution test
- Save a mobilenet model to cache and then retrieve and run accuracy
evaluation.
- The same test, but the file offsets for prepareModelFromCache are not at zero.
- security test
- CompilationCachingSecurityTest.CorruptedSecuritySensitiveCache
Randomly flip one bit of security-sensitive cache.
- CompilationCachingSecurityTest.WrongLengthSecuritySensitiveCache
Randomly append bytes to security-sensitive cache.
- CompilationCachingSecurityTest.WrongToken
Randomly flip one bit of cache token.
Bug: 119616526
Test: VtsHalNeuralnetworksV1_xTargetTest with 1.2 sample driver
Test: VtsHalNeuralnetworksV1_xTargetTest with a test driver that can
read and write cache entries
Change-Id: Iae9211cb28ce972b29572dfedd45d1ade4dfdaf5
Merged-In: Iae9211cb28ce972b29572dfedd45d1ade4dfdaf5
(cherry picked from commit 3405878e5e)
Test dynamic output shape with generated models (sketched after this list) when:
- Dimensions of output operands are fully specified
- Dimensions of output operands are unspecified with sufficient buffer
- Dimensions of output operands are unspecified with insufficient buffer
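A hedged sketch of the three cases and the outcome each expects; the enum
names below are illustrative only, though the insufficient-buffer case maps to
the 1.2 OUTPUT_INSUFFICIENT_SIZE error status:

    enum class OutputType { FULLY_SPECIFIED, UNSPECIFIED, INSUFFICIENT };

    enum class ExpectedResult { SUCCESS, OUTPUT_INSUFFICIENT_SIZE };

    ExpectedResult expectedExecutionResult(OutputType type) {
        switch (type) {
            case OutputType::FULLY_SPECIFIED:  // output dims given up front
            case OutputType::UNSPECIFIED:      // dims omitted, buffer big enough
                return ExpectedResult::SUCCESS;  // execution succeeds, shapes reported
            case OutputType::INSUFFICIENT:     // dims omitted, buffer too small
                return ExpectedResult::OUTPUT_INSUFFICIENT_SIZE;
        }
        return ExpectedResult::SUCCESS;
    }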
Test: VTS on 1.2 sample driver
Change-Id: I4d26395ce443687ccbd47445b36e3356d70035cc
Merged-In: I4d26395ce443687ccbd47445b36e3356d70035cc
(cherry picked from commit 929fd21e06)
- Instead of reporting PASS for unsupported tests, use GTEST_SKIP to
skip the tests at runtime.
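For illustration, the changed behaviour looks roughly like this (the
capability check is a hypothetical stand-in):

    #include <gtest/gtest.h>

    bool isOperationSupported() { return false; }  // placeholder result

    TEST(ExampleGeneratedTest, SkipInsteadOfPass) {
        if (!isOperationSupported()) {
            // Previously the test would silently return (counted as PASS);
            // GTEST_SKIP marks it as skipped in the test report instead.
            GTEST_SKIP() << "Operation not supported by this driver.";
        }
        // ...actual execution checks would follow...
    }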
Bug: 113356629
Test: mm
Test: VTS tests on HVX driver
Change-Id: I6a870b61809e58490e66dd4ea36ddeb64fc68a07
Merged-In: I6a870b61809e58490e66dd4ea36ddeb64fc68a07
(cherry picked from commit bb685a4a97)
FastMessageQueue is a Treble-compliant data structure that enables fast
communication between two processes. The FMQ object itself is an atomic
circular buffer that is optionally synchronized with a futex. However,
FMQ has no notion of ownership or lifetime across processes, so it must
be paired with higher-level constructs to manage the lifetime and
ownership.
The NNAPI is introducing the notion of an "Execution Burst" object (or
more simply a "Burst" object), which is similar to an
ANeuralNetworksExecution, but is intended to be reused across multiple
executions and has lower IPC overheads. It achieves this low IPC
overhead by replacing HIDL HwBinder calls with FMQ messages.
Specifically, it replaces IPreparedModel::executeSynchronously's call
from the client into the service with fmq_sync<FmqRequestDatum> (an FMQ
channel used to pass a serialized Request object) and it replaces
the return from the service into the client with
fmq_sync<FmqResultDatum> (an FMQ channel used to return serialized
result status and OutputShapes information).
Each channel is a unidirectional flow of information with exactly one
producer and exactly one consumer. The channels are created by the NN
runtime and passed to the service via
IPreparedModel::configureExecutionBurst.
This CL tests the Burst in both the execution path and validation path
in the Vendor Test Suite (VTS) in neuralnetworks/1.*/vts/functional/.
The VTS binary, VtsHalNeuralnetworksV1_2TargetTest, can be built and run
just as any previous version could.
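As a rough sketch of the channel pairing, using libfmq's MessageQueue in
synchronized mode, with uint64_t standing in for FmqRequestDatum and
FmqResultDatum to keep the example short; the configureExecutionBurst
plumbing is omitted:

    #include <cstdint>

    #include <fmq/MessageQueue.h>

    using ::android::hardware::kSynchronizedReadWrite;
    using ::android::hardware::MessageQueue;

    // One unidirectional queue per direction: requests flow client -> service,
    // results flow service -> client, each with exactly one producer and one
    // consumer.
    using RequestChannel = MessageQueue<uint64_t, kSynchronizedReadWrite>;
    using ResultChannel = MessageQueue<uint64_t, kSynchronizedReadWrite>;

    void burstSketch() {
        // The runtime creates both queues and passes their descriptors to the
        // service via IPreparedModel::configureExecutionBurst (not shown).
        RequestChannel requestChannel(/*numElementsInQueue=*/128);
        ResultChannel resultChannel(/*numElementsInQueue=*/128);

        // Client side: push one serialized request datum.
        const uint64_t requestDatum = 0;
        requestChannel.write(&requestDatum);

        // Service side: pop the datum, execute, and push the serialized result
        // status/OutputShapes back, avoiding a HwBinder call per execution.
        uint64_t received = 0;
        requestChannel.read(&received);
        const uint64_t resultDatum = 0;
        resultChannel.write(&resultDatum);
    }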
Bug: 119570067
Test: mma
Test: VtsHalNeuralnetworksV1_2TargetTest
Change-Id: I3a36484eff9565c2d028c07c099804a0289f294a
Merged-In: I3a36484eff9565c2d028c07c099804a0289f294a
(cherry picked from commit 814d8372f3)
Bug: 119274127
Test: all of the following, with the appropriate android.hardware.neuralnetworks@1.${X}::IDevice/sample-all
VtsHalNeuralnetworksV1_0TargetTest
VtsHalNeuralnetworksV1_1CompatV1_0TargetTest
VtsHalNeuralnetworksV1_1TargetTest
VtsHalNeuralnetworksV1_2CompatV1_0TargetTest
VtsHalNeuralnetworksV1_2CompatV1_1TargetTest
VtsHalNeuralnetworksV1_2TargetTest
Change-Id: Iedfa485b4008d9cec3b81ff4c0ce3ebc0b83c823
(cherry picked from commit 49e41678f5)
Create the 1.2 versions of IPreparedModel, IPreparedModelCallback, and
IExecutionCallback.
Currently the new interfaces are the same as the 1.0 versions, but more
methods will be introduced in later CLs.
Bug: 73506513
Test: VtsHalNeuralnetworksV1_xTargetTest with 1.2 sample driver
Change-Id: Icf4d04c22f88e825d87562f1489377fdf6bf585d
Merged-In: Icf4d04c22f88e825d87562f1489377fdf6bf585d
(cherry picked from commit b5cb8f7632)
This removes the use of a separately updated list of models
that has fallen out of sync.
Bug: 119293899
Test: VtsHalNeuralnetworksV1_2TargetTest --hal_service_instance=android.hardware.neuralnetworks@1.2::IDevice/sample-all
Test: VtsHalNeuralnetworksV1_2CompatV1_1TargetTest --hal_service_instance=android.hardware.neuralnetworks@1.2::IDevice/sample-all
Test: VtsHalNeuralnetworksV1_2CompatV1_0TargetTest --hal_service_instance=android.hardware.neuralnetworks@1.2::IDevice/sample-all
Test: VtsHalNeuralnetworksV1_1TargetTest --hal_service_instance=android.hardware.neuralnetworks@1.1::IDevice/sample-all
Test: VtsHalNeuralnetworksV1_1CompatV1_0TargetTest --hal_service_instance=android.hardware.neuralnetworks@1.1::IDevice/sample-all
Test: VtsHalNeuralnetworksV1_0TargetTest --hal_service_instance=android.hardware.neuralnetworks@1.0::IDevice/sample-all
Change-Id: I2d8804d78331b8fceab4c622c871802aa0f0a4b4
Merged-In: I2d8804d78331b8fceab4c622c871802aa0f0a4b4
(cherry picked from commit b5fe58b95a)
This makes it easier to find all the places that need to be changed
after adding a new type to MixedTyped.
Test: VtsHalNeuralnetworksV1_2TargetTest
Change-Id: I92867de6574ec6dc1a17e30d889c79501ea93063
Merged-In: I92867de6574ec6dc1a17e30d889c79501ea93063
(cherry picked from commit 9b490f4833)
This is a copy of the v1.1 tests, since we don't have any new ops
implemented in v1.2 yet.
Bug: 114365802
Test: mm
Test: NNAPI VTS
Change-Id: Ida7525fcd3ae0fd6f88ff9591e06aba922bdae64
Merged-In: Ida7525fcd3ae0fd6f88ff9591e06aba922bdae64
(cherry-picked from 871be94770)
Set the acceptable error range based on both absolute tolerance and
relative tolerance.
Currently, the absolute tolerance is set to 1e-5 for FP32 and 5 epsilon
(~5e-3) for FP16 relaxed computation. The relative tolerance is set to
5 ULP of the corresponding precision. Add a TODO for potentially
adjusting the error limit later based on testing.
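A small self-contained sketch of one plausible way to combine the two bounds
(for illustration, the relative term uses the FP32 machine epsilon as a
stand-in for a true ULP-based bound):

    #include <cmath>
    #include <limits>

    // Accept an actual value if it is within the absolute tolerance plus a
    // relative band proportional to the magnitude of the expected value.
    bool isWithinTolerance(float expected, float actual) {
        const float absoluteTolerance = 1e-5f;  // FP32 value from the text above
        const float relativeTolerance = 5.0f * std::numeric_limits<float>::epsilon();
        const float allowedError =
                absoluteTolerance + relativeTolerance * std::fabs(expected);
        return std::fabs(expected - actual) <= allowedError;
    }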
Bug: 111768023
Test: none
Change-Id: Idedcec3e09fd7de9696811b93c81d0f180e896ef