NN HAL: Upgrade IPreparedModelCallback::notify to 1.3. am: cc47dffa57

am: 20b48731f8

Change-Id: I12ce2a3ada299b83ef786af2e1841d5c9cce99bb
Xusong Wang 2019-11-19 15:29:10 -08:00 committed by android-build-merger
commit 43bd7056e4
12 changed files with 436 additions and 16 deletions


@@ -589,7 +589,8 @@ fd65298e1e09e0e3c781ab18305920d757dbe55a3b459ce17814ec5cf6dfee99 android.hardwar
ce8dbe76eb9ee94b46ef98f725be992e760a5751073d4f4912484026541371f3 android.hardware.health@2.1::IHealth
26f04510a0b57aba5167c5c0a7c2f077c2acbb98b81902a072517829fd9fd67f android.hardware.health@2.1::IHealthInfoCallback
db47f4ceceb1f06c656f39caa70c557b0f8471ef59fd58611bea667ffca20101 android.hardware.health@2.1::types
-34515afa2bb792d3c6d8495a5f5d907d179c8507ca5e55c10050d02ae1d516ef android.hardware.neuralnetworks@1.3::IDevice
+9e59fffceed0dd72a9799e04505db5f777bbbea1af0695ba4107ef6d967c6fda android.hardware.neuralnetworks@1.3::IDevice
+fd5a2b723b75acbdd9f31bd07e0f83293c52f99f8d9b87bf58eeb6018f665fde android.hardware.neuralnetworks@1.3::IPreparedModelCallback
b74fe72cfe438f50e772e6a307657ff449d5bde83c15dd1f140ff2edbe73499c android.hardware.neuralnetworks@1.3::types
274fb1254a6d1a97824ec5c880eeefc0e410dc6d3a2a4c34052201169d2b7de0 android.hardware.radio@1.5::types
c8e81d912827a5d49b2ddcdc4eb4556c5d231a899a1dca879309e04210daa4a0 android.hardware.radio@1.5::IRadio


@@ -9,6 +9,7 @@ hidl_interface {
srcs: [
"types.hal",
"IDevice.hal",
"IPreparedModelCallback.hal",
],
interfaces: [
"android.hardware.neuralnetworks@1.0",


@@ -22,7 +22,7 @@ import @1.2::Constant;
import @1.2::DeviceType;
import @1.2::Extension;
import @1.2::IDevice;
-import @1.2::IPreparedModelCallback;
+import IPreparedModelCallback;
/**
* This interface represents a device driver.
@@ -134,18 +134,18 @@ interface IDevice extends @1.2::IDevice {
* not provided, or match the numModelCache returned from
* getNumberOfCacheFilesNeeded. The cache handles will be provided in
* the same order when retrieving the preparedModel from cache files
-* with prepareModelFromCache.
+* with prepareModelFromCache_1_3.
* @param dataCache A vector of handles with each entry holding exactly one
* cache file descriptor for the constants' cache. The length of the
* vector must either be 0 indicating that caching information is not
* provided, or match the numDataCache returned from
* getNumberOfCacheFilesNeeded. The cache handles will be provided in
* the same order when retrieving the preparedModel from cache files
-* with prepareModelFromCache.
+* with prepareModelFromCache_1_3.
* @param token A caching token of length Constant::BYTE_SIZE_OF_CACHE_TOKEN
* identifying the prepared model. The same token will be provided when
* retrieving the prepared model from the cache files with
-* prepareModelFromCache. Tokens should be chosen to have a low rate of
+* prepareModelFromCache_1_3. Tokens should be chosen to have a low rate of
* collision for a particular application. The driver cannot detect a
* collision; a collision will result in a failed execution or in a
* successful execution that produces incorrect output values. If both
@@ -168,4 +168,83 @@ interface IDevice extends @1.2::IDevice {
uint8_t[Constant:BYTE_SIZE_OF_CACHE_TOKEN] token,
IPreparedModelCallback callback)
generates (ErrorStatus status);
/**
* Creates a prepared model from cache files for execution.
*
* prepareModelFromCache_1_3 is used to retrieve a prepared model directly from
* cache files to avoid slow model compilation time. There are
* two types of cache file handles provided to the driver: model cache
* and data cache. For more information on the two types of cache handles,
* refer to getNumberOfCacheFilesNeeded.
*
* The file descriptors must be opened with read and write permission. A file may
* have any size, and the corresponding file descriptor may have any offset. The
* driver must truncate a file to zero size before writing to that file. The file
* descriptors may be closed by the client once the asynchronous preparation has
* finished. The driver must dup a file descriptor if it wants to get access to
* the cache file later.
*
* The model is prepared asynchronously with respect to the caller. The
* prepareModelFromCache_1_3 function must verify the inputs to the
* prepareModelFromCache_1_3 function are correct, and that the security-sensitive
* cache has not been modified since it was last written by the driver.
* If there is an error, or if compilation caching is not supported, or if the
* security-sensitive cache has been modified, prepareModelFromCache_1_3 must
* immediately invoke the callback with the appropriate ErrorStatus value and
* nullptr for the IPreparedModel, then return with the same ErrorStatus. If
* the inputs to the prepareModelFromCache_1_3 function are valid, the security-sensitive
* cache is not modified, and there is no error, prepareModelFromCache_1_3 must launch an
* asynchronous task to prepare the model in the background, and immediately return
* from prepareModelFromCache_1_3 with ErrorStatus::NONE. If the asynchronous task
* fails to launch, prepareModelFromCache_1_3 must immediately invoke the callback
* with ErrorStatus::GENERAL_FAILURE and nullptr for the IPreparedModel, then
* return with ErrorStatus::GENERAL_FAILURE.
*
* When the asynchronous task has finished preparing the model, it must
* immediately invoke the callback function provided as an input to
* prepareModelFromCache_1_3. If the model was prepared successfully, the
* callback object must be invoked with an error status of ErrorStatus::NONE
* and the produced IPreparedModel object. If an error occurred preparing
* the model, the callback object must be invoked with the appropriate
* ErrorStatus value and nullptr for the IPreparedModel.
*
* The only information that may be unknown to the model at this stage is
* the shape of the tensors, which may only be known at execution time. As
* such, some driver services may return partially prepared models, where
* the prepared model may only be finished when it is paired with a set of
* inputs to the model. Note that the same prepared model object may be
* used with different shapes of inputs on different (possibly concurrent)
* executions.
*
* @param modelCache A vector of handles with each entry holding exactly one
* cache file descriptor for the security-sensitive cache. The length of
* the vector must match the numModelCache returned from getNumberOfCacheFilesNeeded.
* The cache handles will be provided in the same order as with prepareModel_1_3.
* @param dataCache A vector of handles with each entry holding exactly one
* cache file descriptor for the constants' cache. The length of the vector
* must match the numDataCache returned from getNumberOfCacheFilesNeeded.
* The cache handles will be provided in the same order as with prepareModel_1_3.
* @param token A caching token of length Constant::BYTE_SIZE_OF_CACHE_TOKEN
* identifying the prepared model. It is the same token provided when saving
* the cache files with prepareModel_1_3. Tokens should be chosen
* to have a low rate of collision for a particular application. The driver
* cannot detect a collision; a collision will result in a failed execution
* or in a successful execution that produces incorrect output values.
* @param callback A callback object used to return the error status of
* preparing the model for execution and the prepared model if
* successful, nullptr otherwise. The callback object's notify function
* must be called exactly once, even if the model could not be prepared.
* @return status Error status of launching a task which prepares the model
* in the background; must be:
* - NONE if preparation task is successfully launched
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if caching is not supported or if there is an
* unspecified error
* - INVALID_ARGUMENT if one of the input arguments is invalid
*/
prepareModelFromCache_1_3(vec<handle> modelCache, vec<handle> dataCache,
uint8_t[Constant:BYTE_SIZE_OF_CACHE_TOKEN] token,
IPreparedModelCallback callback)
generates (ErrorStatus status);
};
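For readers following the launch/notify contract documented above, here is a minimal, framework-free sketch of it in plain C++. Everything in it (the `ErrorStatus` stand-in, the `validCache` flag, the function shape) is illustrative and not the HIDL API: invalid inputs invoke the callback exactly once with an error and return the same status, while valid inputs launch a background task and return `NONE` immediately, with the result delivered later through the callback.

```cpp
#include <functional>
#include <memory>
#include <thread>

// Simplified stand-ins for the HIDL types (illustrative only).
enum class ErrorStatus { NONE, DEVICE_UNAVAILABLE, GENERAL_FAILURE, INVALID_ARGUMENT };
struct PreparedModel {};
using Callback = std::function<void(ErrorStatus, std::shared_ptr<PreparedModel>)>;

// Sketch of the prepareModelFromCache_1_3 contract: on invalid input, the
// callback is invoked immediately with an error and nullptr, and the same
// status is returned; on valid input, an asynchronous task is launched and
// NONE is returned right away.
ErrorStatus prepareModelFromCache_1_3(bool validCache, const Callback& cb,
                                      std::thread* task) {
    if (!validCache) {
        cb(ErrorStatus::INVALID_ARGUMENT, nullptr);  // callback fires exactly once
        return ErrorStatus::INVALID_ARGUMENT;        // same status is returned
    }
    *task = std::thread([cb] {
        // Background preparation; on success, notify with NONE and the model.
        cb(ErrorStatus::NONE, std::make_shared<PreparedModel>());
    });
    return ErrorStatus::NONE;  // launch succeeded; result arrives via callback
}
```

Note the two distinct status channels this models: the synchronous return value only reports whether the preparation task was launched, while the callback reports the outcome of the preparation itself.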


@@ -0,0 +1,57 @@
/*
* Copyright (C) 2019 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package android.hardware.neuralnetworks@1.3;
import @1.0::ErrorStatus;
import @1.2::IPreparedModelCallback;
import @1.2::IPreparedModel;
/**
* IPreparedModelCallback must be used to return a prepared model produced by an
* asynchronous task launched from IDevice::prepareModel.
*/
interface IPreparedModelCallback extends @1.2::IPreparedModelCallback {
/**
* There are three notify methods declared for the IPreparedModelCallback
* interface: notify_1_3, notify_1_2, and notify. One of the three
* notify methods must be invoked immediately after the asynchronous
* task holding this callback has finished preparing the model. If the model was
* successfully prepared, one of the notify methods must be invoked with
* ErrorStatus::NONE and the prepared model. If the model was not able to be
* successfully prepared, one of the notify methods must be invoked with the
* appropriate ErrorStatus and nullptr as the IPreparedModel. If the asynchronous
* task holding this callback fails to launch or if the model provided to
* IDevice::prepareModel is invalid, one of the notify methods must be invoked
* with the appropriate error as well as nullptr for the IPreparedModel.
*
* @param status Error status returned from the asynchronous model
* preparation task; must be:
* - NONE if the asynchronous task successfully prepared the
* model
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if the asynchronous task resulted in an
* unspecified error
* - INVALID_ARGUMENT if one of the input arguments to
* prepareModel is invalid
* @param preparedModel A model that has been asynchronously prepared for
* execution. If the model was unable to be prepared
* due to an error, nullptr must be passed in place of
* the IPreparedModel object.
*/
oneway notify_1_3(ErrorStatus status, IPreparedModel preparedModel);
};


@@ -14,6 +14,24 @@
// limitations under the License.
//
cc_library_static {
name: "VtsHalNeuralNetworksV1_3Callbacks",
defaults: ["VtsHalTargetTestDefaults"],
export_include_dirs: ["include"],
srcs: [
"Callbacks.cpp",
],
static_libs: [
"android.hardware.neuralnetworks@1.0",
"android.hardware.neuralnetworks@1.1",
"android.hardware.neuralnetworks@1.2",
"android.hardware.neuralnetworks@1.3",
],
header_libs: [
"libbase_headers",
]
}
cc_test {
name: "VtsHalNeuralnetworksV1_3TargetTest",
defaults: ["VtsHalTargetTestDefaults"],
@@ -44,6 +62,7 @@ cc_test {
"libneuralnetworks_utils",
"VtsHalNeuralNetworksV1_0_utils",
"VtsHalNeuralNetworksV1_2Callbacks",
"VtsHalNeuralNetworksV1_3Callbacks",
],
whole_static_libs: [
"neuralnetworks_generated_V1_0_example",


@@ -0,0 +1,76 @@
/*
* Copyright (C) 2019 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#define LOG_TAG "Callbacks"
#include "1.3/Callbacks.h"
#include <android-base/logging.h>
#include <limits>
namespace android::hardware::neuralnetworks::V1_3::implementation {
using V1_0::ErrorStatus;
// PreparedModelCallback methods begin here
Return<void> PreparedModelCallback::notify(ErrorStatus errorStatus,
const sp<V1_0::IPreparedModel>& preparedModel) {
{
std::lock_guard<std::mutex> hold(mMutex);
// quick-return if object has already been notified
if (mNotified) {
return Void();
}
// store results and mark as notified
mErrorStatus = errorStatus;
mPreparedModel = preparedModel;
mNotified = true;
}
mCondition.notify_all();
return Void();
}
Return<void> PreparedModelCallback::notify_1_2(ErrorStatus errorStatus,
const sp<V1_2::IPreparedModel>& preparedModel) {
return notify(errorStatus, preparedModel);
}
Return<void> PreparedModelCallback::notify_1_3(ErrorStatus errorStatus,
const sp<V1_2::IPreparedModel>& preparedModel) {
return notify(errorStatus, preparedModel);
}
void PreparedModelCallback::wait() const {
std::unique_lock<std::mutex> lock(mMutex);
mCondition.wait(lock, [this] { return mNotified; });
}
ErrorStatus PreparedModelCallback::getStatus() const {
wait();
return mErrorStatus;
}
sp<V1_0::IPreparedModel> PreparedModelCallback::getPreparedModel() const {
wait();
return mPreparedModel;
}
} // namespace android::hardware::neuralnetworks::V1_3::implementation


@@ -28,7 +28,7 @@
#include <random>
#include <thread>
-#include "1.2/Callbacks.h"
+#include "1.3/Callbacks.h"
#include "GeneratedTestHarness.h"
#include "MemoryUtils.h"
#include "TestHarness.h"
@@ -48,12 +48,12 @@ const test_helper::TestModel& get_test_model();
namespace android::hardware::neuralnetworks::V1_3::vts::functional {
using namespace test_helper;
+using implementation::PreparedModelCallback;
using V1_0::ErrorStatus;
using V1_1::ExecutionPreference;
using V1_2::Constant;
using V1_2::IPreparedModel;
using V1_2::OperationType;
-using V1_2::implementation::PreparedModelCallback;
namespace float32_model {
@@ -231,7 +231,7 @@ class CompilationCachingTestBase : public testing::Test {
ASSERT_NE(kDevice.get(), nullptr);
// Create cache directory. The cache directory and a temporary cache file is always created
-// to test the behavior of prepareModelFromCache, even when caching is not supported.
+// to test the behavior of prepareModelFromCache_1_3, even when caching is not supported.
char cacheDirTemp[] = "/data/local/tmp/TestCompilationCachingXXXXXX";
char* cacheDir = mkdtemp(cacheDirTemp);
ASSERT_NE(cacheDir, nullptr);
@@ -370,7 +370,7 @@ class CompilationCachingTestBase : public testing::Test {
// Launch prepare model from cache.
sp<PreparedModelCallback> preparedModelCallback = new PreparedModelCallback();
hidl_array<uint8_t, sizeof(mToken)> cacheToken(mToken);
-Return<ErrorStatus> prepareLaunchStatus = kDevice->prepareModelFromCache(
+Return<ErrorStatus> prepareLaunchStatus = kDevice->prepareModelFromCache_1_3(
modelCache, dataCache, cacheToken, preparedModelCallback);
ASSERT_TRUE(prepareLaunchStatus.isOk());
if (static_cast<ErrorStatus>(prepareLaunchStatus) != ErrorStatus::NONE) {


@@ -29,6 +29,7 @@
#include <android/hardware/neuralnetworks/1.2/IPreparedModelCallback.h>
#include <android/hardware/neuralnetworks/1.2/types.h>
#include <android/hardware/neuralnetworks/1.3/IDevice.h>
#include <android/hardware/neuralnetworks/1.3/IPreparedModelCallback.h>
#include <android/hardware/neuralnetworks/1.3/types.h>
#include <android/hidl/allocator/1.0/IAllocator.h>
#include <android/hidl/memory/1.0/IMemory.h>
@@ -42,6 +43,7 @@
#include "1.0/Utils.h"
#include "1.2/Callbacks.h"
#include "1.3/Callbacks.h"
#include "ExecutionBurstController.h"
#include "MemoryUtils.h"
#include "TestHarness.h"
@@ -52,6 +54,7 @@ namespace android::hardware::neuralnetworks::V1_3::vts::functional {
using namespace test_helper;
using hidl::memory::V1_0::IMemory;
+using implementation::PreparedModelCallback;
using V1_0::DataLocation;
using V1_0::ErrorStatus;
using V1_0::OperandLifeTime;
@@ -65,7 +68,6 @@ using V1_2::OutputShape;
using V1_2::SymmPerChannelQuantParams;
using V1_2::Timing;
using V1_2::implementation::ExecutionCallback;
-using V1_2::implementation::PreparedModelCallback;
using HidlToken = hidl_array<uint8_t, static_cast<uint32_t>(Constant::BYTE_SIZE_OF_CACHE_TOKEN)>;
enum class OutputType { FULLY_SPECIFIED, UNSPECIFIED, INSUFFICIENT };


@@ -17,12 +17,13 @@
#define LOG_TAG "neuralnetworks_hidl_hal_test"
#include "1.0/Utils.h"
#include "1.2/Callbacks.h"
#include "1.3/Callbacks.h"
#include "GeneratedTestHarness.h"
#include "VtsHalNeuralnetworks.h"
namespace android::hardware::neuralnetworks::V1_3::vts::functional {
+using implementation::PreparedModelCallback;
using V1_0::ErrorStatus;
using V1_0::OperandLifeTime;
using V1_1::ExecutionPreference;
@@ -30,7 +31,6 @@ using V1_2::IPreparedModel;
using V1_2::OperationType;
using V1_2::OperationTypeRange;
using V1_2::SymmPerChannelQuantParams;
-using V1_2::implementation::PreparedModelCallback;
using HidlToken =
hidl_array<uint8_t, static_cast<uint32_t>(V1_2::Constant::BYTE_SIZE_OF_CACHE_TOKEN)>;


@@ -21,8 +21,8 @@
#include <hidl/ServiceManagement.h>
#include <string>
#include <utility>
#include "1.0/Callbacks.h"
#include "1.0/Utils.h"
#include "1.3/Callbacks.h"
#include "GeneratedTestHarness.h"
#include "TestHarness.h"
@@ -30,11 +30,11 @@ namespace android::hardware::neuralnetworks::V1_3::vts::functional {
using HidlToken =
hidl_array<uint8_t, static_cast<uint32_t>(V1_2::Constant::BYTE_SIZE_OF_CACHE_TOKEN)>;
+using implementation::PreparedModelCallback;
using V1_0::ErrorStatus;
using V1_0::Request;
using V1_1::ExecutionPreference;
using V1_2::IPreparedModel;
-using V1_2::implementation::PreparedModelCallback;
// internal helper function
void createPreparedModel(const sp<IDevice>& device, const Model& model,


@@ -22,7 +22,7 @@
#include <android/hardware/neuralnetworks/1.3/types.h>
#include <gtest/gtest.h>
#include "1.0/Utils.h"
-#include "1.2/Callbacks.h"
+#include "1.3/Callbacks.h"
namespace android::hardware::neuralnetworks::V1_3::vts::functional {
@@ -51,7 +51,7 @@ void createPreparedModel(const sp<IDevice>& device, const Model& model,
// Utility function to get PreparedModel from callback and downcast to V1_2.
sp<V1_2::IPreparedModel> getPreparedModel_1_2(
-const sp<V1_2::implementation::PreparedModelCallback>& callback);
+const sp<implementation::PreparedModelCallback>& callback);
} // namespace android::hardware::neuralnetworks::V1_3::vts::functional


@@ -0,0 +1,185 @@
/*
* Copyright (C) 2019 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef ANDROID_HARDWARE_NEURALNETWORKS_V1_3_CALLBACKS_H
#define ANDROID_HARDWARE_NEURALNETWORKS_V1_3_CALLBACKS_H
#include <android-base/thread_annotations.h>
#include <android/hardware/neuralnetworks/1.0/IPreparedModelCallback.h>
#include <android/hardware/neuralnetworks/1.2/IPreparedModelCallback.h>
#include <android/hardware/neuralnetworks/1.3/IPreparedModelCallback.h>
#include <hidl/Status.h>
#include <condition_variable>
#include <mutex>
/*
* The Callback classes are used internally by the NeuralNetworks runtime to
* synchronize between different threads. An asynchronous task is launched
* paired with a callback object. When a client thread requires the output being
* generated by the asynchronous task, the client thread can wait for the result
* and be blocked until it has completed. Any wait may safely be called
* concurrently, even on the same callback object. When the asynchronous task
* has finished its workload, it must immediately call "notify*". If the
* asynchronous task has failed to launch, the function that tried to launch the
* asynchronous task must immediately call "notify*". This "notify*" call
* awakens any client threads waiting on the callback object.
*
* These classes exist to enable synchronization across HIDL. When
* synchronization is only required in the same process, consider using
* std::future, std::mutex, std::condition_variable, or std::experimental::latch
* instead.
*/
namespace android::hardware::neuralnetworks::V1_3::implementation {
/**
* The PreparedModelCallback class is used to receive the error status of
* preparing a model as well as the prepared model from a task executing
* asynchronously with respect to the runtime. If a calling thread calls wait
* or get* on a PreparedModelCallback object and the corresponding asynchronous
* task has not finished preparing the model, the calling thread will block
* until the asynchronous task has called notify*.
*
* If the callback object is notified more than once, only the results of the
* first call to notify* are used, and the results from subsequent calls are
* discarded.
*
* This callback object is passed as an argument to IDevice::prepareModel*.
*/
class PreparedModelCallback : public IPreparedModelCallback {
public:
/**
* IPreparedModelCallback::notify marks the callback object with the return
* status of the asynchronous model preparation along with the prepared
* model, and allows all prior and future wait calls on the
* PreparedModelCallback object to proceed.
*
* One of IPreparedModelCallback::notify, IPreparedModelCallback::notify_1_2,
* or IPreparedModelCallback::notify_1_3 must be called on a given
* PreparedModelCallback object.
*
* If the callback object is notified more than once, only the results of
* the first call to notify* are used, and the results from subsequent calls
* are discarded.
*
* @param status Error status returned from asynchronously preparing the
* model; will be:
* - NONE if the asynchronous preparation was successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* - INVALID_ARGUMENT if the input model is invalid
* @param preparedModel Returned model that has been prepared for execution,
* nullptr if the model was unable to be prepared.
*/
Return<void> notify(V1_0::ErrorStatus status,
const sp<V1_0::IPreparedModel>& preparedModel) override;
/**
* IPreparedModelCallback::notify_1_2 marks the callback object with the
* return status of the asynchronous model preparation along with the
* prepared model, and allows all prior and future wait calls on the
* PreparedModelCallback object to proceed.
*
* One of IPreparedModelCallback::notify, IPreparedModelCallback::notify_1_2,
* or IPreparedModelCallback::notify_1_3 must be called on a given
* PreparedModelCallback object.
*
* If the callback object is notified more than once, only the results of
* the first call to notify* are used, and the results from subsequent calls
* are discarded.
*
* @param status Error status returned from asynchronously preparing the
* model; will be:
* - NONE if the asynchronous preparation was successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* - INVALID_ARGUMENT if the input model is invalid
* @param preparedModel Returned model that has been prepared for execution,
* nullptr if the model was unable to be prepared.
*/
Return<void> notify_1_2(V1_0::ErrorStatus status,
const sp<V1_2::IPreparedModel>& preparedModel) override;
/**
* IPreparedModelCallback::notify_1_3 marks the callback object with the
* return status of the asynchronous model preparation along with the
* prepared model, and allows all prior and future wait calls on the
* PreparedModelCallback object to proceed.
*
* One of IPreparedModelCallback::notify, IPreparedModelCallback::notify_1_2,
* or IPreparedModelCallback::notify_1_3 must be called on a given
* PreparedModelCallback object.
*
* If the callback object is notified more than once, only the results of
* the first call to notify* are used, and the results from subsequent calls
* are discarded.
*
* @param status Error status returned from asynchronously preparing the
* model; will be:
* - NONE if the asynchronous preparation was successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* - INVALID_ARGUMENT if the input model is invalid
* @param preparedModel Returned model that has been prepared for execution,
* nullptr if the model was unable to be prepared.
*/
Return<void> notify_1_3(V1_0::ErrorStatus status,
const sp<V1_2::IPreparedModel>& preparedModel) override;
/**
* PreparedModelCallback::wait blocks until notify* has been called on the
* callback object.
*/
void wait() const;
/**
* Retrieves the error status returned from the asynchronous task launched
* by IDevice::prepareModel*. If IDevice::prepareModel* has not finished
* asynchronously preparing the model, this call will block until the
* asynchronous task notifies the object.
*
* @return status Error status returned from asynchronously preparing the
* model; will be:
* - NONE if the asynchronous preparation was successful
* - DEVICE_UNAVAILABLE if driver is offline or busy
* - GENERAL_FAILURE if there is an unspecified error
* - INVALID_ARGUMENT if the input model is invalid
*/
V1_0::ErrorStatus getStatus() const;
/**
* Retrieves the model that has been prepared for execution from the
* asynchronous task launched by IDevice::prepareModel*. If
* IDevice::prepareModel* has not finished asynchronously preparing the
* model, this call will block until the asynchronous task notifies the
* object.
*
* @return preparedModel Returned model that has been prepared for
* execution, nullptr if the model was unable to be prepared.
*/
sp<V1_0::IPreparedModel> getPreparedModel() const;
private:
mutable std::mutex mMutex;
mutable std::condition_variable mCondition;
bool mNotified GUARDED_BY(mMutex) = false;
V1_0::ErrorStatus mErrorStatus = V1_0::ErrorStatus::GENERAL_FAILURE;
sp<V1_0::IPreparedModel> mPreparedModel;
};
} // namespace android::hardware::neuralnetworks::V1_3::implementation
#endif // ANDROID_HARDWARE_NEURALNETWORKS_V1_3_CALLBACKS_H
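The introductory comment in this header notes that purely in-process synchronization could use `std::future` instead of a HIDL callback. As a hedged sketch of that alternative (the names `Prepared` and `prepareAndWait` are illustrative, not part of the VTS code), a `std::promise` plays the role of `notify*` and a blocking `get()` plays the role of `wait()`/`getStatus()`:

```cpp
#include <future>
#include <thread>
#include <utility>

// In-process analogue of PreparedModelCallback: fulfilling the promise
// corresponds to notify(), and blocking on the future corresponds to
// wait()/getStatus(). A promise may be fulfilled at most once, which gives
// the same one-shot semantics for free.
struct Prepared { int status; };  // 0 standing in for ErrorStatus::NONE

int prepareAndWait() {
    std::promise<Prepared> promise;
    std::shared_future<Prepared> result = promise.get_future().share();
    // "Asynchronous task": fulfills the promise exactly once.
    std::thread task([p = std::move(promise)]() mutable {
        p.set_value(Prepared{0});
    });
    int status = result.get().status;  // blocks until set_value, like wait()
    task.join();
    return status;
}
```

A `std::shared_future` is used so that, as with the callback object, multiple threads could wait on the same result concurrently.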