Custom models with ML Kit

By default, ML Kit’s APIs make use of Google-trained machine learning models. These models are designed to cover a wide range of applications. However, some use cases require models that are more targeted. That is why some ML Kit APIs now allow you to replace the default models with custom TensorFlow Lite models.

Both the Image Labeling and the Object Detection & Tracking APIs support custom image classification models. They are compatible with a selection of high-quality pre-trained models on TensorFlow Hub, as well as your own custom models trained with TensorFlow, AutoML Vision Edge or TensorFlow Lite Model Maker.

If you need a custom solution for other domains or use-cases, visit the On-device Machine Learning page for guidance on all of Google's solutions and tools for on-device machine learning.

Benefits of using ML Kit with custom models

The benefits of using a custom image classification model with ML Kit are:

  • Easy-to-use high-level APIs - No need to deal with low-level model input/output, handle image pre- and post-processing, or build a processing pipeline.
  • No need to worry about label mapping yourself; ML Kit extracts the labels from the TFLite model metadata and does the mapping for you.
  • Supports custom models from a wide range of sources, from pre-trained models published on TensorFlow Hub to new models trained with TensorFlow, AutoML Vision Edge or TensorFlow Lite Model Maker.
  • Supports models hosted with Firebase: reduce APK size by downloading models on demand, push model updates without republishing your app, and perform easy A/B testing with Firebase Remote Config.
  • Optimized for integration with Android’s Camera APIs.
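
As an illustration of these points, here is a minimal Kotlin sketch of labeling an image with a bundled custom classifier on Android. The asset path, confidence threshold, and result count are placeholder values, and it assumes the ML Kit image labeling custom-model dependency is on the classpath; see the API guides referenced below for the authoritative setup.

  import com.google.mlkit.common.model.LocalModel
  import com.google.mlkit.vision.common.InputImage
  import com.google.mlkit.vision.label.ImageLabeling
  import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

  fun labelWithCustomModel(image: InputImage) {
      // Point ML Kit at a TFLite classifier bundled in the app's assets.
      val localModel = LocalModel.Builder()
          .setAssetFilePath("custom_model.tflite") // placeholder asset path
          .build()

      // Confidence threshold and result count are illustrative values.
      val options = CustomImageLabelerOptions.Builder(localModel)
          .setConfidenceThreshold(0.5f)
          .setMaxResultCount(5)
          .build()

      ImageLabeling.getClient(options).process(image)
          .addOnSuccessListener { labels ->
              // Label text is read from the TFLite metadata; no manual mapping needed.
              labels.forEach { println("${it.text}: ${it.confidence}") }
          }
          .addOnFailureListener { e -> println("Labeling failed: $e") }
  }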

And, specifically for Object Detection and Tracking:

  • Improve classification accuracy by locating objects first and running the classifier only on the related image area.
  • Provide a real-time interactive experience by giving your users immediate feedback on objects as they are detected and classified.
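
A similarly hedged Kotlin sketch for the Object Detection and Tracking API is shown below; STREAM_MODE targets the real-time case described above, and the asset name, threshold, and label count are illustrative assumptions.

  import com.google.mlkit.common.model.LocalModel
  import com.google.mlkit.vision.common.InputImage
  import com.google.mlkit.vision.objects.ObjectDetection
  import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions

  fun detectAndClassify(image: InputImage) {
      val localModel = LocalModel.Builder()
          .setAssetFilePath("custom_model.tflite") // placeholder asset path
          .build()

      // STREAM_MODE is intended for frame-by-frame, real-time input.
      val options = CustomObjectDetectorOptions.Builder(localModel)
          .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
          .enableClassification()
          .setClassificationConfidenceThreshold(0.5f) // illustrative threshold
          .setMaxPerObjectLabelCount(3)
          .build()

      ObjectDetection.getClient(options).process(image)
          .addOnSuccessListener { objects ->
              objects.forEach { detected ->
                  // The classifier only sees the detected object's bounding-box region.
                  println("${detected.boundingBox}: ${detected.labels.map { it.text }}")
              }
          }
  }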

Use a pre-trained image classification model

You can use pre-trained TensorFlow Lite models, provided they meet a set of criteria. Through TensorFlow Hub, we offer a set of vetted models - from Google or other model creators - that meet these criteria.

Use a model published on TensorFlow Hub

TensorFlow Hub offers a wide range of pre-trained image classification models - from various model creators - that can be used with the Image Labeling and Object Detection and Tracking APIs. Follow these steps:

  1. Pick a model from the collection of ML Kit compatible models.
  2. Download the .tflite model file from the model details page. Where available, pick a model format with metadata.
  3. Follow our guides for the Image Labeling API or the Object Detection and Tracking API on how to bundle the model file with your project (see the build-configuration sketch below) and use it in your Android or iOS application.
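
For step 3 on Android, one detail worth sketching is the build configuration: when the .tflite file is bundled in the app's assets, it should be stored uncompressed so it can be memory-mapped. A Gradle (Kotlin DSL) sketch, assuming an Android application module; aaptOptions is the long-standing way to declare this, and newer Android Gradle Plugin versions expose an equivalent androidResources block.

  android {
      // Keep the bundled model uncompressed so ML Kit can memory-map it directly.
      aaptOptions {
          noCompress("tflite")
      }
  }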

Train your own image classification model

If no pre-trained image classification model fits your needs, there are various ways to train your own TensorFlow Lite model, some of which are outlined and discussed in more detail below.

Options to train your own image classification model
AutoML Vision Edge
  • Offered through Google Cloud AI
  • Create state-of-the-art image classification models
  • Easily evaluate the trade-off between performance and model size
TensorFlow Lite Model Maker
  • Re-train a model (transfer learning); this takes less time and requires less data than training a model from scratch
Convert a TensorFlow model to TensorFlow Lite
  • Train a model with TensorFlow and then convert it to TensorFlow Lite

AutoML Vision Edge

Image classification models trained using AutoML Vision Edge are supported by the custom model options in the Image Labeling and Object Detection and Tracking APIs. These APIs also support downloading models hosted with Firebase model deployment.
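
As a sketch of the Firebase-hosted path in Kotlin: the model name "my_automl_model", the download conditions, and the threshold are placeholders, and the com.google.mlkit:linkfirebase dependency is assumed to be included alongside the custom image labeling dependency.

  import com.google.mlkit.common.model.CustomRemoteModel
  import com.google.mlkit.common.model.DownloadConditions
  import com.google.mlkit.common.model.RemoteModelManager
  import com.google.mlkit.linkfirebase.FirebaseModelSource
  import com.google.mlkit.vision.label.ImageLabeling
  import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

  fun downloadAndUseRemoteModel() {
      // "my_automl_model" is a placeholder for the name used when publishing to Firebase.
      val remoteModel = CustomRemoteModel.Builder(
          FirebaseModelSource.Builder("my_automl_model").build()
      ).build()

      val conditions = DownloadConditions.Builder()
          .requireWifi() // illustrative download condition
          .build()

      RemoteModelManager.getInstance()
          .download(remoteModel, conditions)
          .addOnSuccessListener {
              // Once downloaded, the remote model plugs into the same options builder.
              val options = CustomImageLabelerOptions.Builder(remoteModel)
                  .setConfidenceThreshold(0.5f)
                  .build()
              val labeler = ImageLabeling.getClient(options)
              // ... call labeler.process(image) as in the bundled-model example
          }
  }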

To learn more about how to use a model trained with AutoML Vision Edge in your Android and iOS apps, follow the custom model guides for each API, depending on your use case.

TensorFlow Lite Model Maker

The TFLite Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. You can follow the Colab for Image classification with TensorFlow Lite Model Maker.

To learn more about how to use a model trained with Model Maker in your Android and iOS apps, follow our guides for the Image Labeling API or the Object Detection and Tracking API, depending on your use case.

Models created using the TensorFlow Lite converter

If you have an existing TensorFlow image classification model, you can convert it using the TensorFlow Lite converter. Please ensure the resulting model meets the compatibility requirements below.

To learn more about how to use a TensorFlow Lite model in your Android and iOS apps, follow our guides for the Image Labeling API or the Object Detection and Tracking API, depending on your use case.

TensorFlow Lite model compatibility

You can use any pre-trained TensorFlow Lite image classification model, provided it meets these requirements:

Tensors

  • The model must have only one input tensor with the following constraints:
    • The data is in RGB pixel format.
    • The data is UINT8 or FLOAT32 type. If the input tensor type is FLOAT32, it must specify the NormalizationOptions by attaching Metadata.
    • The tensor has 4 dimensions: BxHxWxC, where:
      • B is the batch size. It must be 1 (inference on larger batches is not supported).
      • H and W are the input height and width.
      • C is the number of expected channels. It must be 3.
  • The model must have at least one output tensor with N classes and either 2 or 4 dimensions:
    • (1xN)
    • (1x1x1xN)
  • Currently only single-head models are fully supported. Multi-head models may output unexpected results.
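
If you want to sanity-check a model against these constraints before wiring it into ML Kit, a rough Kotlin sketch using the TensorFlow Lite Interpreter API is shown below. It is not an official validator, just an inspection of the input and output tensors, and it assumes the org.tensorflow:tensorflow-lite runtime dependency is available.

  import org.tensorflow.lite.DataType
  import org.tensorflow.lite.Interpreter
  import java.io.File

  // Rough sanity check of the tensor constraints listed above.
  fun checkModelCompatibility(modelFile: File) {
      Interpreter(modelFile).use { interpreter ->
          require(interpreter.inputTensorCount == 1) { "Model must have exactly one input tensor" }

          val input = interpreter.getInputTensor(0)
          val inShape = input.shape() // expected [1, H, W, 3]
          require(inShape.size == 4 && inShape[0] == 1 && inShape[3] == 3) {
              "Input must be BxHxWxC with B=1 and C=3, got ${inShape.toList()}"
          }
          require(input.dataType() == DataType.UINT8 || input.dataType() == DataType.FLOAT32) {
              "Input type must be UINT8 or FLOAT32, got ${input.dataType()}"
          }

          val outShape = interpreter.getOutputTensor(0).shape() // expected [1, N] or [1, 1, 1, N]
          require(outShape.size == 2 || outShape.size == 4) {
              "Output must have 2 or 4 dimensions, got ${outShape.toList()}"
          }
      }
  }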

Metadata

You can add metadata to the TensorFlow Lite file as explained in Adding metadata to TensorFlow Lite models.

To use a model with FLOAT32 input tensor, you must specify the NormalizationOptions in the metadata.

We also recommend that you attach this metadata to the output tensor TensorMetadata: