mp.tasks.audio.AudioEmbedder

Class that performs embedding extraction on audio clips or an audio stream.

This API expects a TFLite model with mandatory TFLite Model Metadata that contains the mandatory AudioProperties of the solo input audio tensor and the optional (but recommended) label items as AssociatedFiles with type TENSOR_AXIS_LABELS per output embedding tensor.

Exactly one input audio tensor with: (kTfLiteFloat32)

  • input audio buffer of size [batch * samples].
  • batch inference is not supported (batch is required to be 1).
  • for multi-channel models, the channels must be interleaved.

At least one output tensor with: (kTfLiteUInt8/kTfLiteFloat32)

  • N components corresponding to the N dimensions of the returned feature vector for this output layer.
  • Either 2 or 4 dimensions, i.e. [1 x N] or [1 x 1 x 1 x N].
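For multi-channel input, "interleaved" means the flat buffer alternates channels per audio frame. A minimal NumPy sketch of building such a buffer (the sample values are arbitrary illustrations):

```python
import numpy as np

# Four samples per channel for a hypothetical two-channel model.
left = np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32)
right = np.array([0.5, 0.6, 0.7, 0.8], dtype=np.float32)

# Interleave as L0, R0, L1, R1, ... to match the expected flat input buffer.
interleaved = np.empty(left.size + right.size, dtype=np.float32)
interleaved[0::2] = left
interleaved[1::2] = right
```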

Args
graph_config The mediapipe audio task graph config proto.
running_mode The running mode of the mediapipe audio task.
packet_callback The optional packet callback for getting results asynchronously in the audio stream mode.

Raises
ValueError The packet callback is not properly set based on the task's running mode.

Methods

close

Shuts down the mediapipe audio task instance.

Raises
RuntimeError If the mediapipe audio task failed to close.

create_audio_record

Creates an AudioRecord instance to record audio stream.

The returned AudioRecord instance is initialized, and the client needs to call the appropriate method to start recording.

Note that MediaPipe audio tasks will automatically up/down-sample the input to fit the sample rate required by the model. The default sample rate of the MediaPipe pretrained audio model, YAMNet, is 16 kHz.

Args
num_channels The number of audio channels.
sample_rate The audio sample rate.
required_input_buffer_size The required input buffer size in number of float elements.

Returns
An AudioRecord instance.

Raises
ValueError If there's a problem creating the AudioRecord instance.

create_from_model_path

Creates an AudioEmbedder object from a TensorFlow Lite model and the default AudioEmbedderOptions.

Note that the created AudioEmbedder instance is in audio clips mode, for embedding extraction on independent audio clips.

Args
model_path Path to the model.

Returns
AudioEmbedder object that's created from the model file and the default AudioEmbedderOptions.

Raises
ValueError If the AudioEmbedder object failed to be created from the provided file, e.g. due to an invalid file path.
RuntimeError If other types of error occurred.

create_from_options

Creates the AudioEmbedder object from audio embedder options.

Args
options Options for the audio embedder task.

Returns
AudioEmbedder object that's created from options.

Raises
ValueError If the AudioEmbedder object failed to be created from AudioEmbedderOptions, e.g. because the model is missing.
RuntimeError If other types of error occurred.

embed

Performs embedding extraction on the provided audio clips.

The audio clip is represented as a MediaPipe AudioData object. The method accepts audio clips of various lengths and sample rates; the corresponding sample rate must be provided within the AudioData object.

The input audio clip may be longer than what the model is able to process in a single inference. When this occurs, the input audio clip is split into multiple chunks starting at different timestamps. For this reason, this function returns a vector of EmbeddingResult objects, each associated with a timestamp corresponding to the start (in milliseconds) of the chunk data on which embedding extraction was carried out.

Args
audio_clip MediaPipe AudioData.

Returns
An AudioEmbedderResult object that contains a list of embedding result objects, each associated with a timestamp corresponding to the start (in milliseconds) of the chunk data on which embedding extraction was carried out.

Raises
ValueError If any of the input arguments is invalid, such as the sample rate is not provided in the AudioData object.
RuntimeError If audio embedding extraction failed to run.

embed_async

Sends audio data (a block in a continuous audio stream) to perform audio embedding extraction.

Only use this method when the AudioEmbedder is created with the audio stream running mode. The input timestamps should be monotonically increasing across adjacent calls of this method. This method returns immediately after the input audio data is accepted; the results will be available via the result_callback provided in the AudioEmbedderOptions. The embed_async method is designed to process audio stream data such as microphone input.

The input audio data may be longer than what the model is able to process in a single inference. When this occurs, the input audio block is split into multiple chunks. For this reason, the callback may be called multiple times (once per chunk) for each call to this function.

The result_callback provides:

  • An AudioEmbedderResult object that contains a list of embeddings.
  • The input timestamp in milliseconds.

Args
audio_block MediaPipe AudioData.
timestamp_ms The timestamp of the input audio data in milliseconds.

Raises
ValueError If any of the following:

1) The sample rate is not provided in the AudioData object, or the provided sample rate is inconsistent with the previously received one.
2) The current input timestamp is smaller than what the audio embedder has already processed.

__enter__

Return self upon entering the runtime context.

__exit__

Shuts down the mediapipe audio task instance on exit of the context manager.

Raises
RuntimeError If the mediapipe audio task failed to close.