Configuration for post-training quantization.
Main alias: mediapipe_model_maker.quantization.QuantizationConfig. The class is also re-exported under many other module paths, e.g. mediapipe_model_maker.python.core.utils.quantization.QuantizationConfig and mediapipe_model_maker.image_classifier.image_classifier.quantization.QuantizationConfig.
mediapipe_model_maker.quantization.QuantizationConfig(
    optimizations: Optional[Union[tf.lite.Optimize, List[tf.lite.Optimize]]] = None,
    representative_data: Optional[mediapipe_model_maker.quantization.ds.Dataset] = None,
    quantization_steps: Optional[int] = None,
    inference_input_type: Optional[tf.dtypes.DType] = None,
    inference_output_type: Optional[tf.dtypes.DType] = None,
    supported_ops: Optional[Union[tf.lite.OpsSet, List[tf.lite.OpsSet]]] = None,
    supported_types: Optional[Union[tf.dtypes.DType, List[tf.dtypes.DType]]] = None,
    experimental_new_quantizer: bool = False
)
Refer to
https://www.tensorflow.org/lite/performance/post_training_quantization
for different post-training quantization options.
Args
  optimizations: A list of optimizations to apply when converting the model. If not set, defaults to [Optimize.DEFAULT].
  representative_data: A representative ds.Dataset for post-training quantization.
  quantization_steps: Number of post-training quantization calibration steps to run (defaults to DEFAULT_QUANTIZATION_STEPS).
  inference_input_type: Target data type of real-number input arrays, allowing a different type for input arrays. Defaults to None. If set, must be one of {tf.float32, tf.uint8, tf.int8}.
  inference_output_type: Target data type of real-number output arrays, allowing a different type for output arrays. Defaults to None. If set, must be one of {tf.float32, tf.uint8, tf.int8}.
  supported_ops: Set of OpsSet options supported by the device. Used to set converter.target_spec.supported_ops.
  supported_types: List of types for constant values on the target device. Supported values are types exported by lite.constants. Frequently, an optimization choice is driven by the most compact (i.e. smallest) type in this list (defaults to [constants.FLOAT]).
  experimental_new_quantizer: Whether to enable the experimental new quantizer.
Raises
  ValueError: If inference_input_type or inference_output_type is set but not in {tf.float32, tf.uint8, tf.int8}.
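The ValueError rule above can be sketched in isolation. This is a hypothetical, stdlib-only reimplementation of the documented check (the real class validates tf.dtypes.DType objects, not strings; `validate_inference_type` and `_ALLOWED_INFERENCE_TYPES` are illustrative names, not library API):

```python
# Allowed inference types per the Raises clause above, modeled as strings
# in place of tf.float32 / tf.uint8 / tf.int8 (assumption for illustration).
_ALLOWED_INFERENCE_TYPES = {"float32", "uint8", "int8"}

def validate_inference_type(dtype_name):
    """Raise ValueError if dtype_name is set but not an allowed type."""
    if dtype_name is not None and dtype_name not in _ALLOWED_INFERENCE_TYPES:
        raise ValueError(
            f"Unsupported inference type {dtype_name!r}; must be one of "
            f"{sorted(_ALLOWED_INFERENCE_TYPES)}"
        )
    return dtype_name
```

Leaving the type as None (the default) skips the check entirely, matching the "Defaults to None" behavior documented for both arguments.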
Methods

for_dynamic

@classmethod
for_dynamic() -> 'QuantizationConfig'

Creates configuration for dynamic range quantization.

for_float16

@classmethod
for_float16() -> 'QuantizationConfig'

Creates configuration for float16 quantization.

for_int8

@classmethod
for_int8(
    representative_data: mediapipe_model_maker.quantization.ds.Dataset,
    quantization_steps: int = DEFAULT_QUANTIZATION_STEPS,
    inference_input_type: tf.dtypes.DType = tf.uint8,
    inference_output_type: tf.dtypes.DType = tf.uint8,
    supported_ops: tf.lite.OpsSet = tf.lite.OpsSet.TFLITE_BUILTINS_INT8
) -> 'QuantizationConfig'

Creates configuration for full integer quantization.
Args
  representative_data: Representative data used for post-training quantization.
  quantization_steps: Number of post-training quantization calibration steps to run.
  inference_input_type: Target data type of real-number input arrays.
  inference_output_type: Target data type of real-number output arrays.
  supported_ops: Set of tf.lite.OpsSet options, where each option represents a set of operators supported by the target device.

Returns
  A QuantizationConfig for full integer quantization.
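The defaults that the for_int8 signature documents can be mirrored in a small stdlib-only sketch. Strings stand in for the tf.lite / tf.dtypes enum values, `for_int8_sketch` is a hypothetical name, and the concrete value of DEFAULT_QUANTIZATION_STEPS is deliberately left unresolved (it is a library constant not specified in this page):

```python
def for_int8_sketch(representative_data, quantization_steps=None):
    """Hypothetical mirror of the for_int8 defaults documented above.

    quantization_steps=None stands in for DEFAULT_QUANTIZATION_STEPS,
    whose value is defined by the library, not by this sketch.
    """
    return {
        "representative_data": representative_data,
        "quantization_steps": quantization_steps,
        "inference_input_type": "uint8",            # tf.uint8
        "inference_output_type": "uint8",           # tf.uint8
        "supported_ops": ["TFLITE_BUILTINS_INT8"],  # tf.lite.OpsSet.TFLITE_BUILTINS_INT8
    }

cfg = for_int8_sketch(representative_data="calibration-dataset-placeholder")
```

Note that full integer quantization is the only preset that requires representative_data, since calibration over real inputs is what determines the int8 scaling parameters.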
set_converter_with_quantization

set_converter_with_quantization(
    converter: tf.lite.TFLiteConverter, **kwargs
) -> tf.lite.TFLiteConverter

Sets up the input TFLite converter with the quantization configuration.

Args
  converter: Input tf.lite.TFLiteConverter.
  **kwargs: Arguments used by ds.Dataset.gen_tf_dataset.

Returns
  The tf.lite.TFLiteConverter with quantization configurations applied.
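Conceptually, configuring a converter means assigning the config's fields onto the converter object. The sketch below shows this shape using a stand-in object rather than a real tf.lite.TFLiteConverter; the attribute names (optimizations, target_spec.supported_ops, inference_input_type, inference_output_type, representative_dataset) follow TFLite's documented converter attributes, but `apply_quantization_sketch` itself is a hypothetical helper, not the library implementation:

```python
from types import SimpleNamespace

def apply_quantization_sketch(converter, optimizations, supported_ops,
                              inference_input_type=None,
                              inference_output_type=None,
                              representative_dataset_fn=None):
    """Hypothetical sketch of populating a TFLite converter from a config."""
    converter.optimizations = optimizations
    converter.target_spec.supported_ops = supported_ops
    # Inference types are optional, matching the config's None defaults.
    if inference_input_type is not None:
        converter.inference_input_type = inference_input_type
    if inference_output_type is not None:
        converter.inference_output_type = inference_output_type
    # A calibration generator is only needed for full integer quantization.
    if representative_dataset_fn is not None:
        converter.representative_dataset = representative_dataset_fn
    return converter

# Usage with a stand-in converter object (strings in place of TF enums):
mock = SimpleNamespace(target_spec=SimpleNamespace())
out = apply_quantization_sketch(
    mock,
    optimizations=["DEFAULT"],
    supported_ops=["TFLITE_BUILTINS_INT8"],
    inference_input_type="uint8",
    inference_output_type="uint8",
)
```

With a real converter, the **kwargs documented above would be forwarded to ds.Dataset.gen_tf_dataset to build the representative_dataset generator used during calibration.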