MediaPipeTasksText Framework Reference

Classes

The following classes are available globally.

  • Holds the base options that are used for the creation of any type of task. It has fields with important information such as the acceleration configuration, the TFLite model source, etc.

    Declaration

    Swift

    class BaseOptions : NSObject, NSCopying
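    As an illustration, base options are usually configured through a task-specific options object rather than instantiated on their own. A minimal sketch, assuming a classifier model bundled with the app (the file name is a placeholder):

    ```swift
    import MediaPipeTasksText

    let options = TextClassifierOptions()
    // Placeholder model file; replace with a real .tflite asset in your bundle.
    options.baseOptions.modelAssetPath =
        Bundle.main.path(forResource: "text_classifier", ofType: "tflite") ?? ""
    ```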
  • ResultCategory is a utility class that contains a label, its display name, a float score, and the index of the label in the corresponding label file. Typically it’s used as the result of classification tasks.

    Declaration

    Swift

    class ResultCategory : NSObject
  • Represents the list of classifications for a given classifier head. Typically used as a result for classification tasks.

    Declaration

    Swift

    class Classifications : NSObject
  • Represents the classification results of a model. Typically used as a result for classification tasks.

    Declaration

    Swift

    class ClassificationResult : NSObject
  • Represents the embedding for a given embedder head. Typically used in embedding tasks.

    One and only one of the two fields floatEmbedding and quantizedEmbedding will contain data, based on whether or not the embedder was configured to perform scalar quantization.

    Declaration

    Swift

    class Embedding : NSObject
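    For example, a consumer of an Embedding can branch on which of the two properties is populated. This is a sketch; it assumes the floatEmbedding and quantizedEmbedding properties surface as Swift optionals, matching the description above:

    ```swift
    // `embedding` is an Embedding obtained from an embedding task result.
    if let floats = embedding.floatEmbedding {
        // The embedder was configured without scalar quantization.
        print("Float embedding with \(floats.count) dimensions")
    } else if let quantized = embedding.quantizedEmbedding {
        // The embedder was configured to perform scalar quantization.
        print("Quantized embedding with \(quantized.count) bytes")
    }
    ```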
  • Represents the embedding results of a model. Typically used as a result for embedding tasks.

    Declaration

    Swift

    class EmbeddingResult : NSObject
  • MediaPipe Tasks options base class. Any MediaPipe task-specific options class should extend this class.

    Declaration

    Swift

    class TaskOptions : NSObject, NSCopying
  • MediaPipe Tasks result base class. Any MediaPipe task result class should extend this class.

    Declaration

    Swift

    class TaskResult : NSObject, NSCopying
  • Performs classification on text.

    This API expects a TFLite model with (optional) TFLite Model Metadata that contains the mandatory input tensors and output tensor (described below), and the optional (but recommended) label items as AssociatedFiles with type TENSOR_AXIS_LABELS per output classification tensor.

    Metadata is required for models with int32 input tensors because it contains the input process unit for the model’s Tokenizer. No metadata is required for models with string input tensors.

    Input tensors

    • Three input tensors kTfLiteInt32 of shape [batch_size x bert_max_seq_len] representing the input ids, mask ids, and segment ids. This input signature requires a Bert Tokenizer process unit in the model metadata.
    • Or one input tensor kTfLiteInt32 of shape [batch_size x max_seq_len] representing the input ids. This input signature requires a Regex Tokenizer process unit in the model metadata.
    • Or one input tensor (kTfLiteString) that is shapeless or has shape [1] containing the input string.

    At least one output tensor (kTfLiteFloat32/kTfLiteBool) with:

    • N classes and shape [1 x N]
    • optional (but recommended) label map(s) as AssociatedFiles with type TENSOR_AXIS_LABELS, containing one label per line. The first such AssociatedFile (if any) is used to fill the categoryName field of the results. The displayName field is filled from the AssociatedFile (if any) whose locale matches the displayNamesLocale field of the MPPTextClassifierOptions used at creation time (“en” by default, i.e. English). If none of these are available, only the index field of the results will be filled.

    Declaration

    Swift

    class TextClassifier : NSObject
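    A minimal usage sketch, assuming a BERT-based classifier model (the model path is a placeholder and error handling is reduced to a do/catch):

    ```swift
    import MediaPipeTasksText

    do {
        let options = TextClassifierOptions()
        options.baseOptions.modelAssetPath = "bert_text_classifier.tflite" // placeholder
        options.displayNamesLocale = "en"

        let classifier = try TextClassifier(options: options)
        let result = try classifier.classify(text: "This movie was fantastic!")

        // Iterate over the categories of the first classifier head.
        if let categories = result.classificationResult.classifications.first?.categories {
            for category in categories {
                print(category.categoryName ?? "<unnamed>", category.score)
            }
        }
    } catch {
        print("Text classification failed: \(error)")
    }
    ```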
  • Options for setting up an MPPTextClassifier.

    Declaration

    Swift

    class TextClassifierOptions : TaskOptions, NSCopying
  • Represents the classification results generated by MPPTextClassifier.

    Declaration

    Swift

    class TextClassifierResult : TaskResult
  • Performs embedding extraction on text.

    This API expects a TFLite model with (optional) TFLite Model Metadata.

    Metadata is required for models with int32 input tensors because it contains the input process unit for the model’s Tokenizer. No metadata is required for models with string input tensors.

    Input tensors:

    • Three input tensors kTfLiteInt32 of shape [batch_size x bert_max_seq_len] representing the input ids, mask ids, and segment ids. This input signature requires a Bert Tokenizer process unit in the model metadata.
    • Or one input tensor kTfLiteInt32 of shape [batch_size x max_seq_len] representing the input ids. This input signature requires a Regex Tokenizer process unit in the model metadata.
    • Or one input tensor (kTfLiteString) that is shapeless or has shape [1] containing the input string.

    At least one output tensor (kTfLiteFloat32/kTfLiteUint8) with shape [1 x N] where N is the number of dimensions in the produced embeddings.

    Declaration

    Swift

    class TextEmbedder : NSObject
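    A minimal sketch comparing the embeddings of two texts. The model path is a placeholder, and cosineSimilarity is assumed to be the framework’s utility for comparing two embeddings:

    ```swift
    import MediaPipeTasksText

    do {
        let options = TextEmbedderOptions()
        options.baseOptions.modelAssetPath = "universal_sentence_encoder.tflite" // placeholder
        let embedder = try TextEmbedder(options: options)

        let first = try embedder.embed(text: "It's a charming journey.")
        let second = try embedder.embed(text: "What a delightful trip!")

        // Compare the first embedder head of each result.
        if let e1 = first.embeddingResult.embeddings.first,
           let e2 = second.embeddingResult.embeddings.first {
            let similarity = try TextEmbedder.cosineSimilarity(embedding1: e1, embedding2: e2)
            print("Cosine similarity: \(similarity)")
        }
    } catch {
        print("Text embedding failed: \(error)")
    }
    ```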
  • Options for setting up an MPPTextEmbedder.

    Declaration

    Swift

    class TextEmbedderOptions : TaskOptions, NSCopying
  • Represents the embedding results generated by MPPTextEmbedder.

    Declaration

    Swift

    class TextEmbedderResult : TaskResult