ML Kit

Use on-device machine learning in your apps to easily solve real-world problems.

ML Kit is a mobile SDK that brings Google's on-device machine learning expertise to Android and iOS apps. Use our powerful yet easy-to-use Vision and Natural Language APIs to solve common challenges in your apps or create brand-new user experiences. All are powered by Google's best-in-class ML models and offered to you at no cost.

ML Kit's APIs all run on-device, enabling real-time use cases such as processing a live camera stream. It also means the functionality is available offline.

What's new

  • ML Kit is now generally available, with the exception of Pose Detection, Entity Extraction, and Selfie Segmentation, which are offered in beta.

  • The Selfie Segmentation API is in beta. This API allows developers to easily separate the background from users within a scene and focus on what matters. Adding cool effects to selfies or inserting your users into interesting background environments has never been easier.

  • The Entity Extraction API is in beta. This API allows you to detect and locate entities (e.g., addresses, dates and times) in raw text. It supports 11 entity types and 15 languages, using the same technology that powers smart text selection in Android Q+.

  • The Pose Detection API is in beta. This API is a CPU-based, lightweight, and versatile solution to help developers track a user's physical actions in real time within their apps. Pose Detection offers a 33-point skeletal match of a user's body in real time and has been tuned to work well for complex athletic use cases. Pose Detection comes in two optimized SDKs: base and accurate. The API also includes a Z coordinate, in addition to X and Y, to help with depth analysis.

  • ML Kit's on-device APIs are now offered through a standalone SDK, independent from Firebase. Learn more about this change in our migration guide. Cloud APIs and custom model deployment are still available via Firebase Machine Learning.
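On Android, the Selfie Segmentation API described above can be called as in the following Kotlin sketch. It is a minimal, illustrative example, assuming the ML Kit selfie-segmentation dependency is on the classpath; `bitmap` is a placeholder for an image your app already has:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.segmentation.Segmentation
import com.google.mlkit.vision.segmentation.selfie.SelfieSegmenterOptions

fun segmentSelfie(bitmap: Bitmap) {
    // Single-image mode; use STREAM_MODE for a live camera feed.
    val options = SelfieSegmenterOptions.Builder()
        .setDetectorMode(SelfieSegmenterOptions.SINGLE_IMAGE_MODE)
        .build()
    val segmenter = Segmentation.getClient(options)

    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0)
    segmenter.process(image)
        .addOnSuccessListener { mask ->
            // The mask buffer holds one float per pixel: the confidence
            // that the pixel belongs to the foreground (the user).
            Log.d("Segmentation", "Mask size: ${mask.width} x ${mask.height}")
        }
        .addOnFailureListener { e -> Log.e("Segmentation", "Segmentation failed", e) }
}
```

The returned mask can then be used to blend the user over a replacement background or to apply effects to the foreground only.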
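The Entity Extraction bullet above can be sketched in Kotlin as follows. This is a minimal example, assuming the ML Kit entity-extraction dependency is present; note that the language model is downloaded on demand, so availability is ensured before annotating:

```kotlin
import android.util.Log
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractionParams
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

fun extractEntities(text: String) {
    val extractor = EntityExtraction.getClient(
        EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
    )

    extractor.downloadModelIfNeeded()
        .onSuccessTask {
            extractor.annotate(EntityExtractionParams.Builder(text).build())
        }
        .addOnSuccessListener { annotations ->
            for (annotation in annotations) {
                for (entity in annotation.entities) {
                    // entity.type is one of the 11 supported entity types,
                    // e.g. Entity.TYPE_ADDRESS or Entity.TYPE_DATE_TIME.
                    Log.d("EntityExtraction", "\"${annotation.annotatedText}\" -> type ${entity.type}")
                }
            }
        }
        .addOnFailureListener { e -> Log.e("EntityExtraction", "Annotation failed", e) }
}
```

A call such as `extractEntities("Meet me at 123 Main Street tomorrow at 6pm")` would surface both an address entity and a date/time entity.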
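Likewise, the Pose Detection bullet can be sketched in Kotlin with the base SDK. A minimal example, assuming the ML Kit pose-detection dependency is on the classpath; `bitmap` is again a placeholder image:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

fun detectPose(bitmap: Bitmap) {
    // Base SDK; swap in AccuratePoseDetectorOptions for the accurate SDK.
    val options = PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.SINGLE_IMAGE_MODE)
        .build()
    val detector = PoseDetection.getClient(options)

    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0)
    detector.process(image)
        .addOnSuccessListener { pose ->
            // Each of the 33 landmarks carries an X, Y, and Z coordinate;
            // position3D exposes the Z value for depth analysis.
            val nose = pose.getPoseLandmark(PoseLandmark.NOSE)
            nose?.let { Log.d("Pose", "Nose at ${it.position3D}") }
        }
        .addOnFailureListener { e -> Log.e("Pose", "Pose detection failed", e) }
}
```

For live tracking of athletic movement, the same detector can be driven from a camera stream by building it with `STREAM_MODE` instead.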

Learn more