# GPU acceleration delegate with Interpreter API
Using graphics processing units (GPUs) to run your machine learning (ML) models can dramatically improve the performance and the user experience of your ML-enabled applications. On Android devices, you can enable GPU-accelerated execution of your models using a delegate and one of the following APIs:

- Interpreter API - this guide
- Native (C/C++) API
This page describes how to enable GPU acceleration for TensorFlow Lite models in Android apps using the Interpreter API. For more information about using the GPU delegate for TensorFlow Lite, including best practices and advanced techniques, see the GPU delegates page.
## Use GPU with TensorFlow Lite with Google Play services
The TensorFlow Lite Interpreter API provides a set of general-purpose APIs for building machine learning applications. This section describes how to use the GPU accelerator delegate with these APIs in TensorFlow Lite with Google Play services.
TensorFlow Lite with Google Play services is the recommended path to use TensorFlow Lite on Android. If your application is targeting devices not running Google Play, see the GPU with Interpreter API and standalone TensorFlow Lite section.
### Add project dependencies
To enable access to the GPU delegate, add `com.google.android.gms:play-services-tflite-gpu` to your app's `build.gradle` file:
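For example, the dependency block might look like the following sketch (the version numbers are illustrative; use the latest available releases):

```groovy
dependencies {
    // TensorFlow Lite Java API delivered through Google Play services
    implementation 'com.google.android.gms:play-services-tflite-java:16.1.0'
    // GPU delegate support delivered through Google Play services
    implementation 'com.google.android.gms:play-services-tflite-gpu:16.1.0'
}
```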
### Enable GPU acceleration
Then check whether a GPU delegate is available for the device, and initialize TensorFlow Lite with Google Play services with GPU support:
```kotlin
val useGpuTask = TfLiteGpu.isGpuDelegateAvailable(context)
```
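The availability check returns a `Task`, which you can chain into the initialization call. A sketch, assuming the `TfLite` and `TfLiteInitializationOptions` classes from the `play-services-tflite-java` library:

```kotlin
// Chain the GPU availability check into TensorFlow Lite initialization.
val interpreterTask = useGpuTask.continueWith { task ->
    TfLite.initialize(
        context,
        TfLiteInitializationOptions.builder()
            // Enable the GPU delegate only if the device supports it.
            .setEnableGpuDelegateSupport(task.result)
            .build()
    )
}
```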
Finally, initialize the interpreter, passing a `GpuDelegateFactory` through `InterpreterApi.Options`:
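A sketch of the interpreter creation, assuming `model` is a model file or `ByteBuffer` loaded elsewhere in your app:

```kotlin
// Use the TF Lite runtime from Google Play services and add the GPU delegate factory.
val options = InterpreterApi.Options()
    .setRuntime(InterpreterApi.Options.TfLiteRuntime.FROM_SYSTEM_ONLY)
    .addDelegateFactory(GpuDelegateFactory())

// Create the interpreter; inference run with it can now use the GPU.
val interpreter = InterpreterApi.create(model, options)
```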
Note: The GPU delegate must be created on the same thread that runs it. Otherwise, you may see the following error: `TfLiteGpuDelegate Invoke: GpuDelegate must run on the same thread where it was initialized.`
The GPU delegate can also be used with ML model binding in Android Studio. For more information, see Generate model interfaces using metadata.
## Use GPU with standalone TensorFlow Lite {:#standalone}
If your application targets devices that are not running Google Play, it is possible to bundle the GPU delegate with your application and use it with the standalone version of TensorFlow Lite.
### Add project dependencies
To enable access to the GPU delegate, add `org.tensorflow:tensorflow-lite-gpu-delegate-plugin` to your app's `build.gradle` file:
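The relevant dependencies typically look like the following sketch (artifact versions omitted here; pin the latest stable releases in your project):

```groovy
dependencies {
    // Standalone TensorFlow Lite runtime
    implementation 'org.tensorflow:tensorflow-lite'
    // GPU delegate plugin and GPU library
    implementation 'org.tensorflow:tensorflow-lite-gpu-delegate-plugin'
    implementation 'org.tensorflow:tensorflow-lite-gpu'
}
```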
### Enable GPU acceleration
Then run TensorFlow Lite on GPU with `TfLiteDelegate`. In Java, you can specify the `GpuDelegate` through `Interpreter.Options`.
```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate
```
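With these imports, delegate setup can be sketched as follows, using `CompatibilityList` to fall back to CPU execution when no supported GPU is present (`model` is assumed to be loaded elsewhere):

```kotlin
val compatList = CompatibilityList()

val options = Interpreter.Options().apply {
    if (compatList.isDelegateSupportedOnThisDevice) {
        // The device has a supported GPU: add the GPU delegate with the
        // best delegate options for this device.
        val delegateOptions = compatList.bestOptionsForThisDevice
        addDelegate(GpuDelegate(delegateOptions))
    } else {
        // No supported GPU: fall back to running on 4 CPU threads.
        setNumThreads(4)
    }
}

val interpreter = Interpreter(model, options)
```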
## Quantized models {:#quantized-models}
Android GPU delegate libraries support quantized models by default. You do not have to make any code changes to use quantized models with the GPU delegate. The following section explains how to disable quantized support for testing or experimental purposes.
### Disable quantized model support
The following code shows how to disable support for quantized models.
```java
GpuDelegate delegate = new GpuDelegate(
    new GpuDelegate.Options().setQuantizedModelsAllowed(false));
Interpreter.Options options = (new Interpreter.Options()).addDelegate(delegate);
```
For more information about running quantized models with GPU acceleration, see GPU delegate overview.