mp.tasks.audio.AudioClassifier

Class that performs audio classification on audio data.

This API expects a TFLite model with mandatory TFLite Model Metadata. The metadata must contain the AudioProperties of the single input audio tensor, and should optionally (but preferably) include category labels as AssociatedFiles with type TENSOR_AXIS_LABELS, one per output classification tensor.

Input audio tensor (kTfLiteFloat32) with:

  • input audio buffer of size [batch * samples].
  • batch inference is not supported (batch is required to be 1).
  • for multi-channel models, the channels must be interleaved.

At least one output tensor (kTfLiteFloat32) with:

  • a [1 x N] array, where N represents the number of categories.
  • optional (but recommended) category labels as AssociatedFiles with type TENSOR_AXIS_LABELS, containing one label per line. The first such AssociatedFile (if any) is used to fill the category_name field of the results. The display_name field is filled from the AssociatedFile (if any) whose locale matches the display_names_locale field of the AudioClassifierOptions used at creation time ("en" by default, i.e. English). If none of these are available, only the index field of the results will be filled.

Args
graph_config The mediapipe audio task graph config proto.
running_mode The running mode of the mediapipe audio task.
packet_callback The optional packet callback for getting results asynchronously in the audio stream mode.

Raises
ValueError If the packet callback is not properly set based on the task's running mode.

Methods

classify

Performs audio classification on the provided audio clip.

The audio clip is represented as a MediaPipe AudioData. The method accepts audio clips of various lengths and sample rates. The corresponding sample rate must be provided within the AudioData object.

The input audio clip may be longer than what the model is able to process in a single inference. When this occurs, the input audio clip is split into multiple chunks starting at different timestamps. For this reason, this function returns a vector of ClassificationResult objects, each associated with a timestamp corresponding to the start (in milliseconds) of the chunk data that was classified, e.g.:

ClassificationResult #0 (first chunk of data):
  timestamp_ms: 0 (starts at 0ms)
  classifications #0 (single head model):
    category #0:
      category_name: "Speech"
      score: 0.6
    category #1:
      category_name: "Music"
      score: 0.2
ClassificationResult #1 (second chunk of data):
  timestamp_ms: 800 (starts at 800ms)
  classifications #0 (single head model):
    category #0:
      category_name: "Speech"
      score: 0.5
    category #1:
      category_name: "Silence"
      score: 0.1
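The chunk start timestamps follow directly from the model's input length. A plain-Python sketch of the splitting logic (the 0.8 s window below is hypothetical, chosen only to match the 800 ms example above):

```python
def chunk_starts_ms(num_samples, sample_rate, model_input_samples):
    """Return the start timestamp (ms) of each chunk a long clip is split into."""
    starts = []
    offset = 0
    while offset < num_samples:
        starts.append(int(offset * 1000 / sample_rate))
        offset += model_input_samples
    return starts

# A 2-second clip at 16 kHz with a hypothetical 0.8 s model window
# is split into chunks starting at 0 ms, 800 ms, and 1600 ms.
print(chunk_starts_ms(32000, 16000, 12800))  # → [0, 800, 1600]
```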

Args
audio_clip MediaPipe AudioData.

Returns
An AudioClassifierResult object that contains a list of classification result objects, each associated with a timestamp corresponding to the start (in milliseconds) of the chunk data that was classified.

Raises
ValueError If any of the input arguments is invalid, such as the sample rate is not provided in the AudioData object.
RuntimeError If audio classification failed to run.

classify_async

Sends audio data (a block in a continuous audio stream) to perform audio classification.

Only use this method when the AudioClassifier is created with the audio stream running mode. The input timestamps should be monotonically increasing for adjacent calls of this method. This method will return immediately after the input audio data is accepted. The results will be available via the result_callback provided in the AudioClassifierOptions. The classify_async method is designed to process audio stream data such as microphone input.

The input audio data may be longer than what the model is able to process in a single inference. When this occurs, the input audio block is split into multiple chunks. For this reason, the callback may be called multiple times (once per chunk) for each call to this function.

The result_callback provides:

  • An AudioClassifierResult object that contains a list of classifications.
  • The input timestamp in milliseconds.

Args
audio_block MediaPipe AudioData.
timestamp_ms The timestamp of the input audio data in milliseconds.

Raises
ValueError If any of the followings:

1) The sample rate is not provided in the AudioData object, or the provided sample rate is inconsistent with that of the previously received audio data.
2) The current input timestamp is smaller than what the audio classifier has already processed.

close

Shuts down the mediapipe audio task instance.

Raises
RuntimeError If the mediapipe audio task failed to close.

create_audio_record

Creates an AudioRecord instance to record audio stream.

The returned AudioRecord instance is initialized, and the client needs to call the appropriate method to start recording.

Note that MediaPipe Audio tasks will automatically up/down-sample the input to fit the sample rate required by the model. The default sample rate of the MediaPipe pretrained audio model, YAMNet, is 16 kHz.

Args
num_channels The number of audio channels.
sample_rate The audio sample rate.
required_input_buffer_size The required input buffer size in number of float elements.

Returns
An AudioRecord instance.

Raises
ValueError If there's a problem creating the AudioRecord instance.

create_from_model_path

Creates an AudioClassifier object from a TensorFlow Lite model and the default AudioClassifierOptions.

Note that the created AudioClassifier instance is in audio clips mode, for classifying independent audio clips.

Args
model_path Path to the model.

Returns
AudioClassifier object that's created from the model file and the default AudioClassifierOptions.

Raises
ValueError If the AudioClassifier object failed to be created from the provided file, e.g., due to an invalid file path.
RuntimeError If other types of error occurred.

create_from_options

Creates the AudioClassifier object from audio classifier options.

Args
options Options for the audio classifier task.

Returns
AudioClassifier object that's created from options.

Raises
ValueError If the AudioClassifier object failed to be created from the AudioClassifierOptions, e.g., because the model is missing.
RuntimeError If other types of error occurred.

__enter__

Return self upon entering the runtime context.

__exit__

Shuts down the mediapipe audio task instance on exit of the context manager.

Raises
RuntimeError If the mediapipe audio task failed to close.