There are certain pieces of NeuralNetworks.h and of our various *.hal
files that ought to be kept in sync -- most notably the operand type
and operation type definitions and descriptions in our
NeuralNetworks.h and types.hal files. To avoid having to do this
manually, a tool can be employed.
The old solution to this problem is sync_enums_to_hal.py, which parses
the existing NeuralNetworks.h and types.hal files, and then rewrites
the types.hal files: It identifies the operand type and operation type
portions of the files, and in the types.hal files replaces them with
transformed versions of the portions from NeuralNetworks.h. This
approach is brittle -- it is very sensitive to the structure of the
files, as was recognized when this tool was created. Changes to those
files since the script was last updated essentially broke the script,
as noted in http://b/140132458.
The new solution employs a new tool generate_api.py to combine a
single "specification file" with one "template file" per API file
(NeuralNetworks.h or types.hal) to produce that API file. The script
generate_api.sh invokes generate_api.py once per API file, passing
appropriate arguments. See frameworks/ml/nn/tools/api/README.md for
more details.
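To illustrate the idea only (not the actual implementation -- the real
flags and marker syntax are documented in
frameworks/ml/nn/tools/api/README.md), a minimal Python sketch of the
spec-plus-template approach could look like the following, where the
"== Name" and "%insert Name" markers are purely hypothetical:

    import sys

    def generate(specification_path, template_path, output_path):
        # Collect named sections from the specification file. The
        # "== SectionName" header syntax is made up for this sketch.
        sections = {}
        current = None
        with open(specification_path) as spec:
            for line in spec:
                if line.startswith("== "):
                    current = line[3:].strip()
                    sections[current] = []
                elif current is not None:
                    sections[current].append(line)

        # Copy the template, expanding each hypothetical "%insert Name"
        # directive with the corresponding section from the specification.
        with open(template_path) as template, open(output_path, "w") as out:
            for line in template:
                if line.startswith("%insert "):
                    out.writelines(sections[line[len("%insert "):].strip()])
                else:
                    out.write(line)

    if __name__ == "__main__":
        generate(*sys.argv[1:4])

generate_api.sh would then conceptually run one such invocation per API
file, pairing the shared specification with that file's template.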
In the process of combining information from NeuralNetworks.h and
*/types.hal, some formatting, syntactic, and semantic changes have
been made to those files. For example:
- types.hal files no longer refer to API levels or (similarly)
to HAL versions other than earlier HAL versions
Bug: 130169064
Bug: 140132458
Test: cd neuralnetworks ; mma
Change-Id: Iceb38effc2c7e6c32344e49225b85256144c0945
Merged-In: Iceb38effc2c7e6c32344e49225b85256144c0945
(cherry picked from commit 2cae5c8b01)
libsnapshot needs to communicate to the bootloader that a merge is in
progress. This can be used to prevent factory data resets, prevent
flashing or wiping userdata/metadata, and warn when the active slot
changes.
Bug: 138861550
Test: builds
Change-Id: I577877696b5ec6920b9520d518374931ce9ddfaa
Merged-In: I577877696b5ec6920b9520d518374931ce9ddfaa
Update the documentation of NetworkScanRequest to clarify
that the interval between scans is from the completion of
one scan to the start of the next. This is the only definition
that cannot result in back-to-back scans that never complete.
In the initial design of this API, the stated use case was
for scans where "interval" >> "scan duration". For that
use case, this clarification doesn't make a meaningful
difference; however, for the use case of long-duration
scans, the distinction prevents the issue stated above.
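As an illustration of why the distinction matters (hypothetical
scheduling code, not the telephony framework's implementation):

    # Contrast the two possible meanings of the scan interval.
    def next_start_interval_from_start(prev_start, interval, scan_duration):
        # If the interval were measured start-to-start, a scan that runs
        # longer than the interval would make the next scan due immediately,
        # producing back-to-back scans.
        return prev_start + interval

    def next_start_interval_from_completion(prev_start, interval, scan_duration):
        # With the documented completion-to-start definition there is always
        # an idle gap of `interval` between scans.
        return prev_start + scan_duration + interval

    # Example: a 70 s scan with a 60 s interval.
    print(next_start_interval_from_start(0, 60, 70))       # 60  -- before the scan even ends
    print(next_start_interval_from_completion(0, 60, 70))  # 130 -- 60 s after completion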
Bug: 139935383
Test: compilation (docstring-only change)
Change-Id: Ib8393110bfd3ea883045648ee7dac9c6e6a32d44
This CL additionally adds missing documentation for timing
information when returning results on the resultChannel.
Bug: 133773876
Test: mma
Change-Id: I1eb1affbb4a912d5fdeab012e2be7e7005deb04d
Merged-In: I1eb1affbb4a912d5fdeab012e2be7e7005deb04d
(cherry picked from commit f99bffd08a)
Under certain circumstances, we guarantee that a prepared model can be
executed successfully. In describing those circumstances, we
neglected to specify that operation input operands must have legal
values for the guarantee to hold. For example, the guarantee doesn't
hold if an ADD operation has an activation input that is not one of
the defined values; or if a RESHAPE operation has a shape input in
which two or more components are -1.
This change modifies the guarantee to apply only when operation input
operands have legal values. It also documents this guarantee for
burst execution.
Note that if an operation has an input operand that can be proven to
have an illegal value at preparation time (e.g., a constant value that
is illegal), model preparation might (but is not required to) fail for
that reason.
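For illustration only (these checks are hypothetical Python, not driver
code; the activation codes assume NNAPI's FuseCode values NONE=0,
RELU=1, RELU1=2, RELU6=3):

    VALID_ACTIVATIONS = {0, 1, 2, 3}

    def add_activation_is_legal(activation_code):
        # The guarantee only applies when ADD's activation input is one
        # of the defined values.
        return activation_code in VALID_ACTIVATIONS

    def reshape_shape_is_legal(shape):
        # At most one component of RESHAPE's shape input may be -1; two
        # or more -1 components make the input illegal.
        return sum(1 for d in shape if d == -1) <= 1

    print(add_activation_is_legal(7))            # False -- undefined activation
    print(reshape_shape_is_legal([-1, 2, -1]))   # False -- two -1 components
    print(reshape_shape_is_legal([-1, 2, 3]))    # True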
Bug: 135933040
Test: $ cd neuralnetworks ; mma
Change-Id: I8b421550dd89e4bbbdae899e7cb5e9e88a46d2fb
(cherry picked from commit 48544cc38a)
Document that NNAPI Execution inputs/outputs and HAL Request
inputs/outputs must not be modified.
Bug: 121347610
Test: cd hardware/interfaces/neuralnetworks/1.0/vts/functional ; mma
Test: cd hardware/interfaces/neuralnetworks/1.2/vts/functional ; mma
Change-Id: Iac71d6d5ad92a90afd1b6babb7cfa128d7484c64
Starting from CameraDeviceSession 3.5, for the IMPLEMENTATION_DEFINED
pixel format, the configureStreams call uses the original format and
dataspace instead of the overridden values.
This makes sure the HAL interface behavior is consistent between first
and subsequent processCaptureRequest() calls.
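A rough sketch of the documented rule (hypothetical helper, not camera
framework or HAL code; 0x22 is HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED
and 0x23 is just an example overridden value):

    IMPLEMENTATION_DEFINED = 0x22

    def format_for_configure_streams(session_version, original_format,
                                     overridden_format):
        # From CameraDeviceSession 3.5 on, IMPLEMENTATION_DEFINED streams
        # keep their original format (and, analogously, dataspace); older
        # sessions keep receiving the overridden value.
        if session_version >= (3, 5) and original_format == IMPLEMENTATION_DEFINED:
            return original_format
        return overridden_format

    print(format_for_configure_streams((3, 5), IMPLEMENTATION_DEFINED, 0x23))  # 0x22
    print(format_for_configure_streams((3, 4), IMPLEMENTATION_DEFINED, 0x23))  # 0x23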
Test: Camera CTS and partner testing
Bug: 131864007
Change-Id: Id701141d2c11089ef063fd3f32444212855f84ab
Specify the units of the location vector and define the direction of
the rotation matrix transformation as being from the Android device
frame to the local sensor frame.
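As a small worked example of the documented convention (illustrative
Python only; the matrix values are made up), the rotation matrix R maps
a vector expressed in the Android device frame into the local sensor
frame, i.e. v_sensor = R * v_device:

    # Example rotation: 90 degrees about the Z axis.
    R = [
        [0.0, -1.0, 0.0],
        [1.0,  0.0, 0.0],
        [0.0,  0.0, 1.0],
    ]

    def to_sensor_frame(rotation, v_device):
        # Plain 3x3 matrix-vector product: v_sensor = R * v_device.
        return [sum(rotation[i][j] * v_device[j] for j in range(3))
                for i in range(3)]

    # The device X axis expressed in the sensor frame.
    print(to_sensor_frame(R, [1.0, 0.0, 0.0]))  # [0.0, 1.0, 0.0]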
Fixes: 133264933
Test: n/a, comment update only
Change-Id: I76dae7a6fae3c8f44a4dcd1fcc6b790abff86420
This patch restores the documentation on the channel order convention
that was present in Audio HAL 2.0 but was removed by mistake in 4.0.
This change addresses vendor feedback.
Test: mm
Bug: 133453897
Change-Id: I8eabd8883612d39ced21481fc44661b0808754bb
Signed-off-by: Kevin Rocard <krocard@google.com>
Update the API documentation to allow implementations to skip range
checking of the incremental scan interval when incremental results
are disabled for network scans.
Bug: 112486807
Test: compilation - docstring-only change
Merged-In: I901335550b4b8c2cf75f91b39fd031f03ffae982
Change-Id: I901335550b4b8c2cf75f91b39fd031f03ffae982
(cherry picked from commit 944efca78d)
Update the API documentation to allow implementations to skip range
checking of the incremental scan interval when incremental results
are disabled for network scans.
Bug: 112486807
Test: compilation - docstring-only change
Change-Id: I901335550b4b8c2cf75f91b39fd031f03ffae982
The documentation said that cell-to-input weights are required to be
present when input-to-input weights, recurrent-to-input weights, and
input gate bias are present. This was incorrect: these weights can be
omitted when peephole connections are not used, even if all the other
tensors are present.
Another bug that is fixed in this change is that for output #0 the docs
said "of shape [batch_size, num_units * 4] with CIFG, or [batch_size,
num_units * 3] without CIFG" when in fact it is the opposite, i.e. "of
shape [batch_size, num_units * 3] with CIFG, or [batch_size, num_units *
4] without CIFG."
Existing CTS/VTS tests expect the behaviour described in the fixed
documentation, and the existing CPU implementation is also compliant with it.
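A minimal sketch of the corrected shape rule (assuming output #0 here is
the LSTM scratch buffer; illustrative Python only):

    def output0_shape(batch_size, num_units, use_cifg):
        # [batch_size, num_units * 3] with CIFG, or
        # [batch_size, num_units * 4] without CIFG -- the opposite of
        # what the old documentation said.
        return [batch_size, num_units * (3 if use_cifg else 4)]

    print(output0_shape(2, 16, True))   # [2, 48] with CIFG
    print(output0_shape(2, 16, False))  # [2, 64] without CIFG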
Fix: 111842951
Test: mma
Change-Id: Id011783e33672ae65dc6fe3784cb26feb832acf9
Merged-In: Id011783e33672ae65dc6fe3784cb26feb832acf9
(cherry picked from commit e0537f09fb)
Update IGnss.hal cleanup() and setCallback() method documentation to
clarify when the framework calls these methods so that the GNSS engine
knows when to shut down for power savings.
This CL updates @1.1::IGnss.hal and @1.0::IGnss.hal interfaces only.
@2.0::IGnss.hal is updated in another CL.
Bug: 124104175
Test: Existing tests pass
Change-Id: I8181251677dab78ce0619fa1e2a667b36e115f25
The IGnss.hal setCallback() and cleanup() methods need to be updated
to clarify when the framework calls them so that the GNSS engine
knows when to shut down for power savings.
This CL updates the @2.0::IGnss.hal setCallback() method documentation
only. The @1.1::IGnss.hal and @1.0::IGnss.hal setCallback() and cleanup()
method documentation is updated in another CL.
Bug: 124104175
Test: Existing tests pass
Change-Id: I6a2dd6f82becc0adef8b4b56fe83e7c004aefd7a
The documentation said that cell-to-input weights are required to be
present when input-to-input weights, recurrent-to-input weights, and
input gate bias are present. This was incorrect: these weights can be
omitted when peephole connections are not used, even if all the other
tensors are present.
Another bug that is fixed in this change is that for output #0 the docs
said "of shape [batch_size, num_units * 4] with CIFG, or [batch_size,
num_units * 3] without CIFG" when in fact it is the opposite, i.e. "of
shape [batch_size, num_units * 3] with CIFG, or [batch_size, num_units *
4] without CIFG."
Existing CTS/VTS tests expect the behaviour described in the fixed
documentation, and the existing CPU implementation is also compliant with it.
Fix: 111842951
Test: mma
Change-Id: Id011783e33672ae65dc6fe3784cb26feb832acf9
We had no tests for quantized PAD in NNAPI 1.1, and we think that vendors
might have implemented different behaviors.
Bug: 122243484
Test: N/A
Change-Id: Ibfc0801ab746fc271dc5f8efc764b818c6d49df4
Merged-In: Ibfc0801ab746fc271dc5f8efc764b818c6d49df4
(cherry picked from commit b01ce9644e)
RESIZE_NEAREST_NEIGHBOR
- The CPU implementation has always used the order {width, height}
  (sketched below).
- In P, the documentation was incorrectly changed to {height, width}.
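A minimal sketch of the corrected order (hypothetical helper, not the
CPU implementation; NHWC layout assumed):

    def output_shape_nhwc(batch, size_operands, channels):
        # The two size scalars are given as {width, height}, not
        # {height, width}.
        out_width, out_height = size_operands
        return [batch, out_height, out_width, channels]

    print(output_shape_nhwc(1, (640, 480), 3))  # [1, 480, 640, 3]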
Bug: 131623949
Bug: 130035110
Test: mm
Change-Id: I6c79459fa73347fb51fc34a76ad78d5ac207f210
Merged-In: I6c79459fa73347fb51fc34a76ad78d5ac207f210
(cherry picked from commit 286339b4c8)
We had no tests for quantized PAD in NNAPI 1.1, and we think that vendors
might have implemented different behaviors.
Bug: 122243484
Test: N/A
Change-Id: Ibfc0801ab746fc271dc5f8efc764b818c6d49df4