Acceleration Service API
Public Method Summary
| Return type | Method and description |
| --- | --- |
| static AccelerationService | create(Context context) Creates an AccelerationService instance. |
| static AccelerationService | create(Context context, Executor executor) Creates an AccelerationService instance; validation tests run with the given executor. |
| Task&lt;ValidatedAccelerationConfigResult&gt; | generateBestConfig(Model model, ValidationConfig validationConfig) Generates a list of candidate AccelerationConfigs and runs the Mini-benchmark over them. |
| Task&lt;ValidatedAccelerationConfigResult&gt; | selectBestConfig(Model model, Iterable&lt;AccelerationConfig&gt; configs, ValidationConfig validationConfig) Runs the Mini-benchmark over a collection of configs. |
| Task&lt;ValidatedAccelerationConfigResult&gt; | validateConfig(Model model, AccelerationConfig accelerationConfig, ValidationConfig validationConfig) Runs the Mini-benchmark with the given model, accelerationConfig, and validationConfig. |
| Task&lt;Iterable&lt;ValidatedAccelerationConfigResult&gt;&gt; | validateConfigs(Model model, Iterable&lt;AccelerationConfig&gt; configs, ValidationConfig validationConfig) Runs the Mini-benchmark over a collection of AccelerationConfigs. |
Inherited Method Summary
Public Methods
public static AccelerationService create (Context context)
Creates an AccelerationService instance.
public static AccelerationService create (Context context, Executor executor)
Creates an AccelerationService instance. Validation tests will run with the given executor.
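The two factory overloads above can be sketched like this (an illustrative fragment, not a verified implementation; it assumes a standard Android `Context` is available and uses a `java.util.concurrent` executor):

```java
import android.content.Context;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

// Sketch under assumptions: `context` comes from an Activity or Application.
Executor validationExecutor = Executors.newSingleThreadExecutor();

// Default instance.
AccelerationService service = AccelerationService.create(context);

// Instance whose validation tests run on the supplied executor, keeping
// mini-benchmark work off threads you care about.
AccelerationService serviceWithExecutor =
    AccelerationService.create(context, validationExecutor);
```

Supplying your own executor is mainly useful when you want to control where the validation mini-benchmarks run, for example to keep them off the main thread.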
public Task<ValidatedAccelerationConfigResult> generateBestConfig (Model model, ValidationConfig validationConfig)
Generates a list of candidate AccelerationConfigs and runs the Mini-benchmark over them. Among the configs that pass the accuracy check, returns the one with the best performance. Returns a Task of null if no config passes the validation check, and a failed Task if benchmarking itself failed.
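The three documented outcomes (a best config, a null result, a failed Task) can be handled as sketched below. This is an illustrative fragment: it assumes `Task` is the Play services `Task` with `addOnSuccessListener`/`addOnFailureListener`, and `model`, `validationConfig`, and the helper methods are hypothetical names, not part of the API.

```java
// Illustrative sketch; `context`, `model`, `validationConfig`, and the
// helper methods below are assumptions, not part of the documented API.
AccelerationService service = AccelerationService.create(context);

service.generateBestConfig(model, validationConfig)
    .addOnSuccessListener(result -> {
        if (result == null) {
            // No candidate config passed the validation (accuracy) check.
            runOnCpu(model);
        } else {
            // Best-performing config among those that passed accuracy checks.
            applyAccelerationConfig(result);
        }
    })
    .addOnFailureListener(e -> {
        // Benchmarking itself failed.
        handleBenchmarkFailure(e);
    });
```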
public Task<ValidatedAccelerationConfigResult> selectBestConfig (Model model, Iterable<AccelerationConfig> configs, ValidationConfig validationConfig)
Runs the Mini-benchmark over the given collection of configs.
public Task<ValidatedAccelerationConfigResult> validateConfig (Model model, AccelerationConfig accelerationConfig, ValidationConfig validationConfig)
Runs the Mini-benchmark with the given model, accelerationConfig, and validationConfig. The benchmark result will also be cached by the acceleration service.
public Task<Iterable<ValidatedAccelerationConfigResult>> validateConfigs (Model model, Iterable<AccelerationConfig> configs, ValidationConfig validationConfig)
Runs the Mini-benchmark over a collection of AccelerationConfigs.
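Validating several candidates in one call might look like the sketch below. It again assumes the Play services `Task` listener API; `candidateConfigs`, `buildCandidateConfigs`, and `inspect` are hypothetical names introduced for illustration.

```java
// Illustrative sketch; helper names are assumptions, not part of the API.
AccelerationService service = AccelerationService.create(context);
Iterable<AccelerationConfig> candidateConfigs = buildCandidateConfigs();

service.validateConfigs(model, candidateConfigs, validationConfig)
    .addOnSuccessListener(results -> {
        for (ValidatedAccelerationConfigResult result : results) {
            // Each element reports the validation outcome for one config.
            inspect(result);
        }
    });
```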