platform_hardware_interfaces/neuralnetworks/1.2/IDevice.hal
David Gross 632b4bd9b0 Add @V1_2::Capabilities to support all non-extension operand types.
Performance information in Capabilities is used by the runtime when
it selects the appropriate processor to distribute work to.  Prior to
this CL, Capabilities could only distinguish between float and non-float
data types -- so, for example, float16 and float32 performance is
considered to be the same, and performance for all non-float data types is
considered to be the same.

Bug: 124041010

Test: NeuralNetworksTest_static
Test: VtsHalNeuralnetworksV1_2TargetTest --hal_service_instance=android.hardware.neuralnetworks@1.2::IDevice/sample-all

Change-Id: I83fb5920c1c75afbd7750d793a0b8f3e72a0552c
2019-03-21 19:37:36 -07:00

/*
* Copyright (C) 2018 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package android.hardware.neuralnetworks@1.2;
import @1.0::ErrorStatus;
import @1.1::ExecutionPreference;
import @1.1::IDevice;
import IPreparedModelCallback;
/**
* This interface represents a device driver.
*/
interface IDevice extends @1.1::IDevice {
/**
* Get the version string of the driver implementation.
*
* The version string must be a unique token among the set of version strings of
* drivers of a specific device. The token identifies the device driver's
* implementation. The token must not be confused with the feature level which is solely
* defined by the interface version. This API is opaque to the Android framework, but the
* Android framework may use the information for debugging or to pass on to NNAPI applications.
*
* Application developers sometimes have specific requirements to ensure good user experiences,
* and they need more information to make intelligent decisions when the Android framework cannot.
* For example, combined with the device name and other information, the token can help
* NNAPI applications filter devices based on their needs:
* - An application demands a certain level of performance, but a specific version of
* the driver cannot meet that requirement because of a performance regression.
* The application can blacklist the driver based on the version provided.
* - An application has a minimum precision requirement, but certain versions of
* the driver cannot meet that requirement because of bugs or certain optimizations.
* The application can filter out versions of these drivers.
*
* @return status Error status returned from querying the version string. Must be:
* - NONE if the query was successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if the query resulted in an
* unspecified error
* @return version The version string of the device implementation.
* Must have nonzero length
*/
getVersionString() generates (ErrorStatus status, string version);
/**
* Get the type of a given device.
*
* The device type can be used to help application developers to distribute
* Machine Learning workloads and other workloads such as graphical rendering.
* E.g., for an app which renders AR scenes based on real time object detection
* results, the developer could choose an ACCELERATOR type device for ML
* workloads, and reserve GPU for graphical rendering.
*
* @return status Error status returned from querying the device type. Must be:
* - NONE if the query was successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if the query resulted in an
* unspecified error
* @return type The DeviceType of the device. Please note, this is not a
* bitfield of DeviceTypes. Each device must only be of a
* single DeviceType.
*/
getType() generates (ErrorStatus status, DeviceType type);
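/*
* Illustrative, non-normative sketch (not part of the HAL): one way a client
* could use getType() through the HIDL-generated C++ bindings to pick an
* accelerator for ML work. The binding header, callback signature, and
* DeviceType::ACCELERATOR value come from the generated V1_2 C++ API rather
* than from this file, so treat them as assumptions.
*
*   #include <android/hardware/neuralnetworks/1.2/IDevice.h>
*
*   using ::android::sp;
*   using ::android::hardware::neuralnetworks::V1_0::ErrorStatus;
*   using ::android::hardware::neuralnetworks::V1_2::DeviceType;
*   using ::android::hardware::neuralnetworks::V1_2::IDevice;
*
*   // Returns true if the given service instance reports itself as an
*   // ACCELERATOR-type device; false on any error.
*   bool isAccelerator(const sp<IDevice>& device) {
*       bool result = false;
*       device->getType([&](ErrorStatus status, DeviceType type) {
*           result = (status == ErrorStatus::NONE &&
*                     type == DeviceType::ACCELERATOR);
*       });
*       return result;
*   }
*/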
/**
* Gets the capabilities of a driver.
*
* @return status Error status of the call, must be:
* - NONE if successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* @return capabilities Capabilities of the driver.
*/
getCapabilities_1_2() generates (ErrorStatus status, Capabilities capabilities);
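/*
* Illustrative, non-normative sketch (not part of the HAL): reading the
* per-operand-type performance that @1.2::Capabilities adds, through the
* HIDL-generated C++ bindings. The operandPerformance/type/info field names
* and the PerformanceInfo::execTime member are not defined in this file;
* they follow the corresponding 1.2 and 1.0 types.hal and should be treated
* as assumptions.
*
*   #include <android/hardware/neuralnetworks/1.2/IDevice.h>
*
*   #include <limits>
*
*   using ::android::sp;
*   using ::android::hardware::neuralnetworks::V1_0::ErrorStatus;
*   using ::android::hardware::neuralnetworks::V1_2::Capabilities;
*   using ::android::hardware::neuralnetworks::V1_2::IDevice;
*   using ::android::hardware::neuralnetworks::V1_2::OperandType;
*
*   // Returns the driver-reported execTime ratio for one operand type, or a
*   // pessimistic default if the driver does not list that type.
*   float execTimeFor(const sp<IDevice>& device, OperandType wanted) {
*       float execTime = std::numeric_limits<float>::max();
*       device->getCapabilities_1_2(
*               [&](ErrorStatus status, const Capabilities& caps) {
*           if (status != ErrorStatus::NONE) return;
*           for (const auto& perf : caps.operandPerformance) {
*               if (perf.type == wanted) {
*                   execTime = perf.info.execTime;
*                   break;
*               }
*           }
*       });
*       return execTime;
*   }
*/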
/**
* Gets information about extensions supported by the driver implementation.
*
* All extension operations and operands must be fully supported for the
* extension to appear in the list of supported extensions.
*
* @return status Error status of the call, must be:
* - NONE if successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* @return extensions A list of supported extensions.
*/
getSupportedExtensions()
generates (ErrorStatus status, vec<Extension> extensions);
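/*
* Illustrative, non-normative sketch (not part of the HAL): collecting the
* names of the vendor extensions a driver advertises. The Extension::name
* field is not defined in this file; it follows the corresponding 1.2
* types.hal and should be treated as an assumption.
*
*   #include <android/hardware/neuralnetworks/1.2/IDevice.h>
*
*   #include <string>
*   #include <vector>
*
*   using ::android::sp;
*   using ::android::hardware::hidl_vec;
*   using ::android::hardware::neuralnetworks::V1_0::ErrorStatus;
*   using ::android::hardware::neuralnetworks::V1_2::Extension;
*   using ::android::hardware::neuralnetworks::V1_2::IDevice;
*
*   // Returns the list of supported extension names, empty on any error.
*   std::vector<std::string> supportedExtensionNames(const sp<IDevice>& device) {
*       std::vector<std::string> names;
*       device->getSupportedExtensions(
*               [&](ErrorStatus status, const hidl_vec<Extension>& extensions) {
*           if (status != ErrorStatus::NONE) return;
*           for (const Extension& ext : extensions) {
*               names.emplace_back(ext.name.c_str());
*           }
*       });
*       return names;
*   }
*/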
/**
* Gets the supported operations in a model.
*
* getSupportedOperations indicates which operations of a model are fully
* supported by the vendor driver. If an operation may not be supported for
* any reason, getSupportedOperations must return false for that operation.
*
* @param model A model whose operations--and their corresponding operands--
* are to be verified by the driver.
* @return status Error status of the call, must be:
* - NONE if successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* - INVALID_ARGUMENT if provided model is invalid
* @return supportedOperations A list of supported operations, where true
* indicates the operation is supported and false indicates the
* operation is not supported. The index of "supported" corresponds with
* the index of the operation it is describing.
*/
getSupportedOperations_1_2(Model model)
generates (ErrorStatus status, vec<bool> supportedOperations);
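/*
* Illustrative, non-normative sketch (not part of the HAL): asking the driver
* which operations of a model it supports, e.g. to decide whether the whole
* model can be handed to this device or must be partitioned. Binding details
* are assumptions based on the generated V1_2 C++ API.
*
*   #include <android/hardware/neuralnetworks/1.2/IDevice.h>
*
*   #include <algorithm>
*
*   using ::android::sp;
*   using ::android::hardware::hidl_vec;
*   using ::android::hardware::neuralnetworks::V1_0::ErrorStatus;
*   using ::android::hardware::neuralnetworks::V1_2::IDevice;
*   using ::android::hardware::neuralnetworks::V1_2::Model;
*
*   // Returns true only if every operation in 'model' is reported supported.
*   bool supportsWholeModel(const sp<IDevice>& device, const Model& model) {
*       bool all = false;
*       device->getSupportedOperations_1_2(model,
*               [&](ErrorStatus status, const hidl_vec<bool>& supported) {
*           if (status != ErrorStatus::NONE) return;
*           all = std::all_of(supported.begin(), supported.end(),
*                             [](bool b) { return b; });
*       });
*       return all;
*   }
*/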
/**
* Gets the caching requirements of the driver implementation.
*
* There are two types of cache file descriptors provided to the driver: model cache
* and data cache.
*
* The data cache is for caching constant data, possibly including preprocessed
* and transformed tensor buffers. Any modification to the data cache should
* have no worse effect than generating bad output values at execution time.
*
* The model cache is for caching security-sensitive data such as compiled
* executable machine code in the device's native binary format. A modification
* to the model cache may affect the driver's execution behavior, and a malicious
* client could make use of this to execute beyond the granted permission. Thus,
* the driver must always check whether the model cache is corrupted before
* preparing the model from cache.
*
* getNumberOfCacheFilesNeeded returns how many cache files of each type the driver
* implementation needs to cache a single prepared model. Returning 0 for both types
* indicates compilation caching is not supported by this driver. The driver may
* still choose not to cache certain compiled models even if it reports that caching
* is supported.
*
* If the device reports that caching is not supported, the user may avoid calling
* IDevice::prepareModelFromCache or providing cache file descriptors to
* IDevice::prepareModel_1_2.
*
* @return status Error status of the call, must be:
* - NONE if successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* @return numModelCache An unsigned integer indicating how many files for model cache
* the driver needs to cache a single prepared model. It must
* be less than or equal to Constant::MAX_NUMBER_OF_CACHE_FILES.
* @return numDataCache An unsigned integer indicating how many files for data cache
* the driver needs to cache a single prepared model. It must
* be less than or equal to Constant::MAX_NUMBER_OF_CACHE_FILES.
*/
getNumberOfCacheFilesNeeded()
generates (ErrorStatus status, uint32_t numModelCache, uint32_t numDataCache);
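/*
* Illustrative, non-normative sketch (not part of the HAL): using
* getNumberOfCacheFilesNeeded() to decide whether to offer caching, and
* wrapping already-opened read/write cache file descriptors as HIDL handles
* for prepareModel_1_2 / prepareModelFromCache. native_handle_create() and
* hidl_handle::setTo() are standard libcutils/libhidl calls, but the flow as
* a whole is only an assumed usage pattern, not mandated by this interface.
*
*   #include <android/hardware/neuralnetworks/1.2/IDevice.h>
*   #include <cutils/native_handle.h>
*
*   using ::android::sp;
*   using ::android::hardware::hidl_handle;
*   using ::android::hardware::neuralnetworks::V1_0::ErrorStatus;
*   using ::android::hardware::neuralnetworks::V1_2::IDevice;
*
*   // Queries how many model/data cache files this driver wants; zero for
*   // both means compilation caching is unsupported.
*   bool cachingSupported(const sp<IDevice>& device,
*                         uint32_t* numModelCache, uint32_t* numDataCache) {
*       bool supported = false;
*       device->getNumberOfCacheFilesNeeded(
*               [&](ErrorStatus status, uint32_t model, uint32_t data) {
*           if (status != ErrorStatus::NONE) return;
*           *numModelCache = model;
*           *numDataCache = data;
*           supported = (model != 0 || data != 0);
*       });
*       return supported;
*   }
*
*   // Wraps one already-opened, read/write file descriptor in a hidl_handle
*   // holding exactly one fd, as the cache vectors expect.
*   hidl_handle wrapCacheFd(int fd) {
*       native_handle_t* nh = native_handle_create(1, 0);  // 1 fd, 0 ints
*       nh->data[0] = fd;
*       hidl_handle handle;
*       handle.setTo(nh, true);  // true: handle takes ownership of nh
*       return handle;
*   }
*/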
/**
* Asynchronously creates a prepared model for execution and optionally saves it
* into cache files.
*
* prepareModel is used to make any necessary transformations or alternative
* representations to a model for execution, possibly including
* transformations on the constant data, optimization on the model's graph,
* or compilation into the device's native binary format. The model itself
* is not changed.
*
* Optionally, caching information may be provided for the driver to save
* the prepared model to cache files for faster model compilation time
* when the same model preparation is requested in the future. There are
* two types of cache file handles provided to the driver: model cache
* and data cache. For more information on the two types of cache handles,
* refer to getNumberOfCacheFilesNeeded.
*
* The file descriptors must be opened with read and write permission. A file may
* have any size, and the corresponding file descriptor may have any offset. The
* driver must truncate a file to zero size before writing to that file. The file
* descriptors may be closed by the client once the asynchronous preparation has
* finished. The driver must dup a file descriptor if it wants to get access to
* the cache file later.
*
* The model is prepared asynchronously with respect to the caller. The
* prepareModel function must verify the inputs to the prepareModel function
* related to preparing the model (as opposed to saving the prepared model to
* cache) are correct. If there is an error, prepareModel must immediately invoke
* the callback with the appropriate ErrorStatus value and nullptr for the
* IPreparedModel, then return with the same ErrorStatus. If the inputs to the
* prepareModel function that are related to preparing the model are valid and
* there is no error, prepareModel must launch an asynchronous task
* to prepare the model in the background, and immediately return from
* prepareModel with ErrorStatus::NONE. If the asynchronous task fails to launch,
* prepareModel must immediately invoke the callback with
* ErrorStatus::GENERAL_FAILURE and nullptr for the IPreparedModel, then return
* with ErrorStatus::GENERAL_FAILURE.
*
* When the asynchronous task has finished preparing the model, it must
* immediately invoke the callback function provided as an input to
* prepareModel. If the model was prepared successfully, the callback object
* must be invoked with an error status of ErrorStatus::NONE and the
* produced IPreparedModel object. If an error occurred preparing the model,
* the callback object must be invoked with the appropriate ErrorStatus
* value and nullptr for the IPreparedModel.
*
* Optionally, the driver may save the prepared model to cache during the
* asynchronous preparation. Any error that occurs when saving to cache must
* not affect the status of preparing the model. Even if the input arguments
* related to the cache are invalid, or the driver fails to save to cache,
* the prepareModel function must finish preparing the model. The driver
* may choose not to save to cache even if the caching information is
* provided and valid.
*
* The only information that may be unknown to the model at this stage is
* the shape of the tensors, which may only be known at execution time. As
* such, some driver services may return partially prepared models, where
* the prepared model may only be finished when it is paired with a set of
* inputs to the model. Note that the same prepared model object may be
* used with different shapes of inputs on different (possibly concurrent)
* executions.
*
* Multiple threads may call prepareModel on the same model concurrently.
*
* @param model The model to be prepared for execution.
* @param preference Indicates the intended execution behavior of a prepared
* model.
* @param modelCache A vector of handles with each entry holding exactly one
* cache file descriptor for the security-sensitive cache. The length of
* the vector must either be 0 indicating that caching information is not provided,
* or match the numModelCache returned from getNumberOfCacheFilesNeeded. The cache
* handles will be provided in the same order when retrieving the
* preparedModel from cache files with prepareModelFromCache.
* @param dataCache A vector of handles with each entry holding exactly one
* cache file descriptor for the constants' cache. The length of
* the vector must either be 0 indicating that caching information is not provided,
* or match the numDataCache returned from getNumberOfCacheFilesNeeded. The cache
* handles will be provided in the same order when retrieving the
* preparedModel from cache files with prepareModelFromCache.
* @param token A caching token of length Constant::BYTE_SIZE_OF_CACHE_TOKEN
* identifying the prepared model. The same token will be provided when retrieving
* the prepared model from the cache files with prepareModelFromCache.
* Tokens should be chosen to have a low rate of collision for a particular
* application. The driver cannot detect a collision; a collision will result
* in a failed execution or in a successful execution that produces incorrect
* output values. If both modelCache and dataCache are empty indicating that
* caching information is not provided, this token must be ignored.
* @param callback A callback object used to return the error status of
* preparing the model for execution and the prepared model if
* successful, nullptr otherwise. The callback object's notify function
* must be called exactly once, even if the model could not be prepared.
* @return status Error status of launching a task which prepares the model
* in the background; must be:
* - NONE if preparation task is successfully launched
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* - INVALID_ARGUMENT if one of the input arguments related to preparing the
* model is invalid
*/
prepareModel_1_2(Model model, ExecutionPreference preference,
vec<handle> modelCache, vec<handle> dataCache,
uint8_t[Constant:BYTE_SIZE_OF_CACHE_TOKEN] token,
IPreparedModelCallback callback)
generates (ErrorStatus status);
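/*
* Illustrative, non-normative sketch (not part of the HAL): launching an
* asynchronous compilation without caching (empty cache vectors; the token
* contents are ignored in that case). The minimal IPreparedModelCallback
* implementation and the generated C++ binding details below are assumptions
* based on the V1_2 headers produced by hidl-gen; a real client would hold on
* to the prepared model delivered through the callback.
*
*   #include <android/hardware/neuralnetworks/1.2/IDevice.h>
*   #include <android/hardware/neuralnetworks/1.2/IPreparedModelCallback.h>
*
*   using ::android::sp;
*   using ::android::hardware::hidl_array;
*   using ::android::hardware::hidl_handle;
*   using ::android::hardware::hidl_vec;
*   using ::android::hardware::Return;
*   using ::android::hardware::Void;
*   using ::android::hardware::neuralnetworks::V1_0::ErrorStatus;
*   namespace V1_0 = ::android::hardware::neuralnetworks::V1_0;
*   namespace V1_1 = ::android::hardware::neuralnetworks::V1_1;
*   namespace V1_2 = ::android::hardware::neuralnetworks::V1_2;
*
*   using CacheToken = hidl_array<uint8_t,
*           static_cast<size_t>(V1_2::Constant::BYTE_SIZE_OF_CACHE_TOKEN)>;
*
*   struct Callback : public V1_2::IPreparedModelCallback {
*       Return<void> notify(ErrorStatus status,
*                           const sp<V1_0::IPreparedModel>& model) override {
*           // 1.0 callback path; status/model handling elided in this sketch.
*           return Void();
*       }
*       Return<void> notify_1_2(ErrorStatus status,
*                               const sp<V1_2::IPreparedModel>& model) override {
*           // Compilation finished; 'model' is non-null only when status is NONE.
*           return Void();
*       }
*   };
*
*   ErrorStatus compile(const sp<V1_2::IDevice>& device, const V1_2::Model& model) {
*       CacheToken token;  // contents ignored: both cache vectors below are empty
*       auto ret = device->prepareModel_1_2(
*               model, V1_1::ExecutionPreference::FAST_SINGLE_ANSWER,
*               hidl_vec<hidl_handle>(), hidl_vec<hidl_handle>(), token,
*               new Callback());
*       return ret.isOk() ? static_cast<ErrorStatus>(ret)
*                         : ErrorStatus::GENERAL_FAILURE;
*   }
*/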
/**
* Creates a prepared model from cache files for execution.
*
* prepareModelFromCache is used to retrieve a prepared model directly from
* cache files to avoid slow model compilation time. There are
* two types of cache file handles provided to the driver: model cache
* and data cache. For more information on the two types of cache handles,
* refer to getNumberOfCacheFilesNeeded.
*
* The file descriptors must be opened with read and write permission. A file may
* have any size, and the corresponding file descriptor may have any offset. The
* driver must truncate a file to zero size before writing to that file. The file
* descriptors may be closed by the client once the asynchronous preparation has
* finished. The driver must dup a file descriptor if it wants to get access to
* the cache file later.
*
* The model is prepared asynchronously with respect to the caller. The
* prepareModelFromCache function must verify the inputs to the
* prepareModelFromCache function are correct, and that the security-sensitive
* cache has not been modified since it was last written by the driver.
* If there is an error, or if compilation caching is not supported, or if the
* security-sensitive cache has been modified, prepareModelFromCache must
* immediately invoke the callback with the appropriate ErrorStatus value and
* nullptr for the IPreparedModel, then return with the same ErrorStatus. If
* the inputs to the prepareModelFromCache function are valid, the security-sensitive
* cache is not modified, and there is no error, prepareModelFromCache must launch an
* asynchronous task to prepare the model in the background, and immediately return
* from prepareModelFromCache with ErrorStatus::NONE. If the asynchronous task
* fails to launch, prepareModelFromCache must immediately invoke the callback
* with ErrorStatus::GENERAL_FAILURE and nullptr for the IPreparedModel, then
* return with ErrorStatus::GENERAL_FAILURE.
*
* When the asynchronous task has finished preparing the model, it must
* immediately invoke the callback function provided as an input to
* prepareModelFromCache. If the model was prepared successfully, the
* callback object must be invoked with an error status of ErrorStatus::NONE
* and the produced IPreparedModel object. If an error occurred preparing
* the model, the callback object must be invoked with the appropriate
* ErrorStatus value and nullptr for the IPreparedModel.
*
* The only information that may be unknown to the model at this stage is
* the shape of the tensors, which may only be known at execution time. As
* such, some driver services may return partially prepared models, where
* the prepared model may only be finished when it is paired with a set of
* inputs to the model. Note that the same prepared model object may be
* used with different shapes of inputs on different (possibly concurrent)
* executions.
*
* @param modelCache A vector of handles with each entry holding exactly one
* cache file descriptor for the security-sensitive cache. The length of
* the vector must match the numModelCache returned from getNumberOfCacheFilesNeeded.
* The cache handles will be provided in the same order as with prepareModel_1_2.
* @param dataCache A vector of handles with each entry holding exactly one
* cache file descriptor for the constants' cache. The length of the vector
* must match the numDataCache returned from getNumberOfCacheFilesNeeded.
* The cache handles will be provided in the same order as with prepareModel_1_2.
* @param token A caching token of length Constant::BYTE_SIZE_OF_CACHE_TOKEN
* identifying the prepared model. It is the same token provided when saving
* the cache files with prepareModel_1_2. Tokens should be chosen
* to have a low rate of collision for a particular application. The driver
* cannot detect a collision; a collision will result in a failed execution
* or in a successful execution that produces incorrect output values.
* @param callback A callback object used to return the error status of
* preparing the model for execution and the prepared model if
* successful, nullptr otherwise. The callback object's notify function
* must be called exactly once, even if the model could not be prepared.
* @return status Error status of launching a task which prepares the model
* in the background; must be:
* - NONE if preparation task is successfully launched
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if caching is not supported or if there is an
* unspecified error
* - INVALID_ARGUMENT if one of the input arguments is invalid
*/
prepareModelFromCache(vec<handle> modelCache, vec<handle> dataCache,
uint8_t[Constant:BYTE_SIZE_OF_CACHE_TOKEN] token,
IPreparedModelCallback callback)
generates (ErrorStatus status);
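/*
* Illustrative, non-normative sketch (not part of the HAL): the retrieval
* half of compilation caching, reusing the hypothetical wrapCacheFd() helper,
* Callback class, and CacheToken alias from the sketches above (same includes
* and using-declarations). The same token passed to prepareModel_1_2 must be
* passed here, and the vector lengths must match getNumberOfCacheFilesNeeded.
*
*   #include <vector>
*
*   // 'modelFds'/'dataFds' hold read-write descriptors for the cache files
*   // that the driver populated during prepareModel_1_2.
*   ErrorStatus compileFromCache(const sp<V1_2::IDevice>& device,
*                                const std::vector<int>& modelFds,
*                                const std::vector<int>& dataFds,
*                                const CacheToken& token) {
*       hidl_vec<hidl_handle> modelCache;
*       hidl_vec<hidl_handle> dataCache;
*       modelCache.resize(modelFds.size());
*       dataCache.resize(dataFds.size());
*       for (size_t i = 0; i < modelFds.size(); ++i) {
*           modelCache[i] = wrapCacheFd(modelFds[i]);
*       }
*       for (size_t i = 0; i < dataFds.size(); ++i) {
*           dataCache[i] = wrapCacheFd(dataFds[i]);
*       }
*       auto ret = device->prepareModelFromCache(modelCache, dataCache, token,
*                                                new Callback());
*       return ret.isOk() ? static_cast<ErrorStatus>(ret)
*                         : ErrorStatus::GENERAL_FAILURE;
*   }
*/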
};