GpuAccelerationConfig
public class GpuAccelerationConfig extends AccelerationConfig

Concrete class that represents GPU acceleration configs. For more details, see delegate.h.

Nested Class Summary

| Type  | Nested class                               | Description                                                              |
|-------|--------------------------------------------|--------------------------------------------------------------------------|
| class | GpuAccelerationConfig.Builder              | Builder class.                                                           |
| enum  | GpuAccelerationConfig.GpuBackend           | Which GPU backend to select.                                             |
| enum  | GpuAccelerationConfig.GpuInferencePriority | Relative priorities given by the GPU delegate to different client needs. |
| enum  | GpuAccelerationConfig.GpuInferenceUsage    | GPU inference preference for initialization time vs.                     |
Inherited Method Summary

From class com.google.android.gms.tflite.acceleration.AccelerationConfig

| Returns | Method                                                                |
|---------|-----------------------------------------------------------------------|
| String  | getAcceleratorName() — Returns the accelerator type of this config.   |
| byte[]  | serialize() — Serializes config as bytes.                             |

From class java.lang.Object

| Returns        | Method                    |
|----------------|---------------------------|
| Object         | clone()                   |
| boolean        | equals(Object arg0)       |
| void           | finalize()                |
| final Class<?> | getClass()                |
| int            | hashCode()                |
| final void     | notify()                  |
| final void     | notifyAll()               |
| String         | toString()                |
| final void     | wait(long arg0, int arg1) |
| final void     | wait(long arg0)           |
| final void     | wait()                    |
Public Methods
public String cacheDirectory ()
Returns the serialization cache directory.

public boolean enableQuantizedInference ()
Returns the enable-quantized-inference flag.

public GpuAccelerationConfig.GpuBackend forceBackend ()
Returns the selected GPU backend.

public GpuAccelerationConfig.GpuInferenceUsage inferencePreference ()
Returns the GPU inference preference.

public GpuAccelerationConfig.GpuInferencePriority inferencePriority1 ()
Returns GPU inference priority 1.

public GpuAccelerationConfig.GpuInferencePriority inferencePriority2 ()
Returns GPU inference priority 2.

public GpuAccelerationConfig.GpuInferencePriority inferencePriority3 ()
Returns GPU inference priority 3.

public String modelToken ()
Returns the unique model token string.
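As a rough usage sketch: the accessors below are the public methods documented on this page, but constructing the config via a no-argument `new GpuAccelerationConfig.Builder().build()` call is an assumption — the Builder's available setters and defaults are not documented here and should be checked against the `GpuAccelerationConfig.Builder` reference before use.

```java
import com.google.android.gms.tflite.acceleration.GpuAccelerationConfig;

public final class GpuConfigExample {
  public static void main(String[] args) {
    // Assumption: Builder can be built without setters and supplies defaults;
    // verify against the GpuAccelerationConfig.Builder documentation.
    GpuAccelerationConfig config = new GpuAccelerationConfig.Builder().build();

    // These accessors are the documented public methods of this class.
    System.out.println("backend:     " + config.forceBackend());
    System.out.println("preference:  " + config.inferencePreference());
    System.out.println("priority 1:  " + config.inferencePriority1());
    System.out.println("quantized:   " + config.enableQuantizedInference());
    System.out.println("cache dir:   " + config.cacheDirectory());
    System.out.println("model token: " + config.modelToken());
  }
}
```

The resulting config can then be passed wherever an `AccelerationConfig` is accepted, and `serialize()` (inherited from `AccelerationConfig`) turns it into bytes.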
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-10-31 UTC.