
mediapipe_model_maker.text_classifier.HParams

Hyperparameters used for training models.

A common set of hyperparameters shared by the training jobs of all model maker tasks.

Attributes

learning_rate: The learning rate to use for gradient descent training.
batch_size: Batch size for training.
epochs: Number of training iterations over the dataset.
steps_per_epoch: An optional integer indicating the number of training steps per epoch. If not set, the training pipeline calculates the default steps per epoch as the training dataset size divided by batch size.
shuffle: True if the dataset is shuffled before training.
export_dir: The location of the model checkpoint files.
distribution_strategy: A string specifying which distribution strategy to use. Accepted values are 'off', 'one_device', 'mirrored', 'parameter_server', 'multi_worker_mirrored', and 'tpu' (case insensitive). 'off' means not to use a distribution strategy; 'tpu' means to use TPUStrategy with the address given by tpu. See the tf.distribute.Strategy documentation for details: https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy.
num_gpus: How many GPUs to use at each worker with the DistributionStrategies API. The default is -1, which means to use all available GPUs.
tpu: The Cloud TPU to use for training. This should be either the name used when creating the Cloud TPU or a grpc://ip.address.of.tpu:8470 URL.
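
For example, a set of hyperparameters for a text classifier can be built by constructing an HParams instance and overriding the defaults. The sketch below assumes the surrounding MediaPipe Model Maker workflow; TextClassifierOptions and SupportedModels.MOBILEBERT_CLASSIFIER belong to that broader API rather than to this class, so treat those lines as illustrative.

from mediapipe_model_maker import text_classifier

# learning_rate, batch_size, and epochs have no listed defaults,
# so set them explicitly.
hparams = text_classifier.HParams(
    learning_rate=3e-5,
    batch_size=32,
    epochs=10,
    shuffle=True,                      # shuffle the dataset before training
    export_dir="exported_model",       # where checkpoint files are written
    distribution_strategy="mirrored",  # replicate training across local GPUs
    num_gpus=-1,                       # -1 means use all available GPUs
)

# Assumed usage: the hyperparameters are passed to the task through its
# options object, as in the Model Maker text classifier workflow.
options = text_classifier.TextClassifierOptions(
    supported_model=text_classifier.SupportedModels.MOBILEBERT_CLASSIFIER,
    hparams=hparams,
)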

Methods

__eq__
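
HParams behaves like a Python dataclass, so __eq__ compares instances field by field rather than by identity. A minimal illustration (the field values are arbitrary):

from mediapipe_model_maker import text_classifier

a = text_classifier.HParams(learning_rate=3e-5, batch_size=32, epochs=5,
                            export_dir="model_a")
b = text_classifier.HParams(learning_rate=3e-5, batch_size=32, epochs=5,
                            export_dir="model_a")
assert a == b      # equal field values compare equal
assert a is not b  # but they are distinct objects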

Class Variables

distribution_strategy: 'off'
export_dir: '/tmpfs/tmp/tmpqngo3zix' (a temporary directory generated at documentation build time)
num_gpus: -1
shuffle: False
steps_per_epoch: None
tpu: ''
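
Fields that are not set explicitly keep the defaults above; a quick check, assuming the same import as earlier:

hp = text_classifier.HParams(learning_rate=3e-5, batch_size=32, epochs=5)
print(hp.distribution_strategy)  # 'off'
print(hp.num_gpus)               # -1
print(hp.shuffle)                # False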