mediapipe_model_maker.quantization.QuantizationConfig

Configuration for post-training quantization.

Refer to https://www.tensorflow.org/lite/performance/post_training_quantization for different post-training quantization options.

Args
optimizations A list of optimizations to apply when converting the model. If not set, [Optimize.DEFAULT] is used by default.
representative_data A representative ds.Dataset for post-training quantization.
quantization_steps Number of post-training quantization calibration steps to run (defaults to DEFAULT_QUANTIZATION_STEPS).
inference_input_type Target data type of real-number input arrays. Allows for a different type for input arrays. Defaults to None. If set, must be one of {tf.float32, tf.uint8, tf.int8}.
inference_output_type Target data type of real-number output arrays. Allows for a different type for output arrays. Defaults to None. If set, must be one of {tf.float32, tf.uint8, tf.int8}.
supported_ops Set of OpsSet options supported by the device. Used to set converter.target_spec.supported_ops.
supported_types List of types for constant values on the target device. Supported values are types exported by lite.constants. Frequently, an optimization choice is driven by the most compact (i.e. smallest) type in this list (defaults to [constants.FLOAT]).
experimental_new_quantizer Whether to enable the experimental new quantizer.

Raises
ValueError If inference_input_type or inference_output_type is set but not in {tf.float32, tf.uint8, tf.int8}.
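
For illustration, a config can be built directly from the constructor; the sketch below constructs one equivalent to float16 quantization. In practice the classmethods listed under Methods are the usual entry points.

```python
import tensorflow as tf

from mediapipe_model_maker import quantization

# Equivalent to QuantizationConfig.for_float16(): weights are stored as
# float16 while inputs and outputs stay float32.
config = quantization.QuantizationConfig(supported_types=[tf.float16])
```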

Methods

for_dynamic

Creates configuration for dynamic range quantization.
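
A minimal usage sketch, assuming model is a trained Model Maker task model (for example an image classifier) whose export_model accepts a quantization_config:

```python
from mediapipe_model_maker import quantization

# Dynamic range quantization needs no representative dataset.
quantization_config = quantization.QuantizationConfig.for_dynamic()

# `model` is assumed to be a trained Model Maker task model.
model.export_model(model_name='model_dynamic.tflite',
                   quantization_config=quantization_config)
```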

for_float16

Creates configuration for float16 quantization.
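
Float16 quantization roughly halves model size by storing weights as float16 while keeping float32 inputs and outputs. A minimal sketch:

```python
from mediapipe_model_maker import quantization

quantization_config = quantization.QuantizationConfig.for_float16()
# Pass the config to a task's export_model, or apply it to a converter via
# set_converter_with_quantization (see below).
```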

for_int8

Creates configuration for full integer quantization.

Args
representative_data Representative data used for post-training quantization.
quantization_steps Number of post-training quantization calibration steps to run.
inference_input_type Target data type of real-number input arrays.
inference_output_type Target data type of real-number output arrays.
supported_ops Set of tf.lite.OpsSet options, where each option represents a set of operators supported by the target device.

Returns
QuantizationConfig.
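
A minimal sketch, assuming validation_data is a Model Maker ds.Dataset (for example, a validation split returned by a task's dataset loader) and model is a trained task model whose export_model accepts a quantization_config:

```python
from mediapipe_model_maker import quantization

# Full integer quantization calibrates activation ranges on real samples,
# so a representative ds.Dataset is required.
quantization_config = quantization.QuantizationConfig.for_int8(
    representative_data=validation_data)

model.export_model(model_name='model_int8.tflite',
                   quantization_config=quantization_config)
```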

set_converter_with_quantization

Sets up the input TFLite converter with the quantization configuration.

Args
converter Input tf.lite.TFLiteConverter.
**kwargs Arguments used by ds.Dataset.gen_tf_dataset.

Returns
tf.lite.TFLiteConverter with quantization configurations.
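
A minimal sketch of applying a config to a converter directly, using a trivial stand-in Keras model for illustration (the flow is the same for any Keras model):

```python
import tensorflow as tf

from mediapipe_model_maker import quantization

quantization_config = quantization.QuantizationConfig.for_dynamic()

# Trivial stand-in Keras model for illustration only.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
keras_model = tf.keras.Model(inputs, outputs)

# Apply the quantization settings to the converter, then convert.
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter = quantization_config.set_converter_with_quantization(converter)
tflite_model = converter.convert()
```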