GpuAccelerationConfig.Builder

public static class GpuAccelerationConfig.Builder extends Object

Builder for creating a GpuAccelerationConfig.

Public Constructor Summary

Builder()
Creates the GPU acceleration config builder.

Public Method Summary

GpuAccelerationConfig
build()
Builds the GPU acceleration config.
GpuAccelerationConfig.Builder
setCacheDirectory(String value)
Sets the directory to use for serialization.
GpuAccelerationConfig.Builder
setEnableQuantizedInference(boolean value)
Enables inference on quantized models with the delegate.
GpuAccelerationConfig.Builder
setForceBackend(GpuAccelerationConfig.GpuBackend value)
Sets the GPU backend to use.
GpuAccelerationConfig.Builder
setInferencePreference(GpuAccelerationConfig.GpuInferenceUsage value)
Sets GPU inference preference for initialization time vs. inference time.
GpuAccelerationConfig.Builder
setInferencePriority1(GpuAccelerationConfig.GpuInferencePriority value)
Sets inference priority(1).
GpuAccelerationConfig.Builder
setInferencePriority2(GpuAccelerationConfig.GpuInferencePriority value)
Sets inference priority(2).
GpuAccelerationConfig.Builder
setInferencePriority3(GpuAccelerationConfig.GpuInferencePriority value)
Sets inference priority(3).
GpuAccelerationConfig.Builder
setModelToken(String value)
Sets the unique token string that acts as a 'namespace' for all serialization entries.

Inherited Method Summary

Public Constructors

public Builder ()

Creates the GPU acceleration config builder.

Public Methods

public GpuAccelerationConfig build ()

Builds the GPU acceleration config.

public GpuAccelerationConfig.Builder setCacheDirectory (String value)

Sets the directory to use for serialization. Whether serialization actually happens depends on the backend used and the validity of this directory.

NOTE: Users should ensure that this directory is private to the app to avoid data access issues.
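As a sketch of the app-private requirement: on Android, the canonical choice would be the path returned by Context#getCacheDir(), which is private to the app. The snippet below stands in for that with a temporary directory so it runs anywhere; the builder call at the end is shown as a comment because the real GpuAccelerationConfig.Builder is not on the classpath here, and the directory prefix is an arbitrary illustrative name.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class PrivateCacheDirDemo {
    public static void main(String[] args) throws Exception {
        // On Android, context.getCacheDir().getAbsolutePath() yields an
        // app-private directory; a temp dir stands in here so the sketch runs anywhere.
        Path cacheDir = Files.createTempDirectory("gpu-serialization-");
        String value = cacheDir.toAbsolutePath().toString();
        System.out.println(Files.isDirectory(cacheDir)); // the directory exists and is writable by us

        // Hypothetical call into the real builder:
        // new GpuAccelerationConfig.Builder().setCacheDirectory(value) ...
    }
}
```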

public GpuAccelerationConfig.Builder setEnableQuantizedInference (boolean value)

Enables inference on quantized models with the delegate. Defaults to true.

public GpuAccelerationConfig.Builder setForceBackend (GpuAccelerationConfig.GpuBackend value)

Sets the GPU backend to use. On Android, the default behaviour is to try OpenCL first and fall back to OpenGL if OpenCL is not available.

public GpuAccelerationConfig.Builder setInferencePreference (GpuAccelerationConfig.GpuInferenceUsage value)

Sets GPU inference preference for initialization time vs. inference time.

public GpuAccelerationConfig.Builder setInferencePriority1 (GpuAccelerationConfig.GpuInferencePriority value)

Sets inference priority(1). Ordered priorities provide better control over desired semantics, where priority(n) is more important than priority(n+1). See GpuAccelerationConfig.GpuInferencePriority for more details.

public GpuAccelerationConfig.Builder setInferencePriority2 (GpuAccelerationConfig.GpuInferencePriority value)

Sets inference priority(2). Ordered priorities provide better control over desired semantics, where priority(n) is more important than priority(n+1). See GpuAccelerationConfig.GpuInferencePriority for more details.

public GpuAccelerationConfig.Builder setInferencePriority3 (GpuAccelerationConfig.GpuInferencePriority value)

Sets inference priority(3). Ordered priorities provide better control over desired semantics, where priority(n) is more important than priority(n+1). See GpuAccelerationConfig.GpuInferencePriority for more details.

public GpuAccelerationConfig.Builder setModelToken (String value)

Sets the unique token string that acts as a 'namespace' for all serialization entries. Should be unique to a particular model (graph & constants). For an example of how to generate this from a TFLite model, see StrFingerprint().
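For illustration, a minimal sketch of how the methods above are typically chained. The nested stub types below only mirror the documented method surface so the example is self-contained and runnable; they are not the real library classes, and the enum constants, cache path, and token string are assumptions, not values from this API.

```java
// Self-contained sketch: stub types mirroring the documented builder surface.
// These are NOT the real library classes; the enum constants are assumptions.
public class GpuAccelerationConfigDemo {

    enum GpuBackend { OPENCL, OPENGL }                               // hypothetical constants
    enum GpuInferenceUsage { FAST_SINGLE_ANSWER, SUSTAINED_SPEED }   // hypothetical constants

    static final class GpuAccelerationConfig {
        final boolean quantizedInference;
        final GpuBackend backend;
        final GpuInferenceUsage preference;
        final String cacheDirectory;
        final String modelToken;

        private GpuAccelerationConfig(Builder b) {
            quantizedInference = b.quantizedInference;
            backend = b.backend;
            preference = b.preference;
            cacheDirectory = b.cacheDirectory;
            modelToken = b.modelToken;
        }

        static final class Builder {
            private boolean quantizedInference = true;               // documented default
            private GpuBackend backend;
            private GpuInferenceUsage preference;
            private String cacheDirectory;
            private String modelToken;

            Builder setEnableQuantizedInference(boolean value) { quantizedInference = value; return this; }
            Builder setForceBackend(GpuBackend value) { backend = value; return this; }
            Builder setInferencePreference(GpuInferenceUsage value) { preference = value; return this; }
            Builder setCacheDirectory(String value) { cacheDirectory = value; return this; }
            Builder setModelToken(String value) { modelToken = value; return this; }
            GpuAccelerationConfig build() { return new GpuAccelerationConfig(this); }
        }
    }

    public static void main(String[] args) {
        // Each setter returns the builder, so calls chain; build() produces the config.
        GpuAccelerationConfig config = new GpuAccelerationConfig.Builder()
                .setEnableQuantizedInference(true)
                .setForceBackend(GpuBackend.OPENCL)
                .setInferencePreference(GpuInferenceUsage.SUSTAINED_SPEED)
                .setCacheDirectory("/data/user/0/com.example.app/cache") // app-private dir (hypothetical path)
                .setModelToken("model-fingerprint-token")               // placeholder token
                .build();
        System.out.println(config.backend + " " + config.quantizedInference); // → OPENCL true
    }
}
```

With the real library, the same chained calls would be made on the actual GpuAccelerationConfig.Builder, and the resulting config passed to whatever runtime initialization API consumes it.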