Use on-device machine learning in your apps to easily solve real-world problems.
ML Kit is a mobile SDK that brings Google's on-device machine learning expertise to Android and iOS apps. Use our powerful yet easy-to-use Vision and Natural Language APIs to solve common challenges in your apps or create brand-new user experiences. All are powered by Google's best-in-class ML models and offered to you at no cost.
ML Kit's APIs all run on-device, allowing for real-time use cases such as processing a live camera stream. This also means the functionality is available offline.
ML Kit is now out of beta and generally available, with the exception of Pose Detection and Entity Extraction, which are offered in beta.
Added a new Entity Extraction API (beta). This API detects and locates entities (e.g., addresses, dates and times) in raw text. It supports 11 entity types and 15 languages, using the same technology that powers smart text selection in Android Q+.
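A minimal Kotlin sketch of the Entity Extraction API on Android: the extractor's language model is downloaded on demand, then `annotate()` returns the entities found in a string. The sample input text is illustrative, and the snippet assumes it runs inside an Android component with `android.util.Log` available.

```kotlin
import android.util.Log
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractionParams
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

// Create an extractor for English text; the model downloads on first use.
val entityExtractor = EntityExtraction.getClient(
    EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
)

entityExtractor.downloadModelIfNeeded()
    .addOnSuccessListener {
        // Hypothetical input text for illustration.
        val params = EntityExtractionParams
            .Builder("Meet me at 123 Main Street tomorrow at 6pm")
            .build()
        entityExtractor.annotate(params)
            .addOnSuccessListener { annotations ->
                // Each annotation covers a text span and may carry several
                // entities (e.g. an address plus a date/time).
                for (annotation in annotations) {
                    for (entity in annotation.entities) {
                        Log.d("EntityExtraction",
                            "${annotation.annotatedText}: ${entity.type}")
                    }
                }
            }
    }
```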
Added a new Pose Detection API (beta), a CPU-based, lightweight, and versatile solution that helps developers track a user's physical actions in real time within their apps. Pose Detection offers a 33-point skeletal match of a user's body in real time and has been tuned to work well for complex athletic use cases. Pose Detection comes in two optimized SDKs: base and accurate.
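A sketch of the base Pose Detection SDK in Kotlin, configured for a live camera stream. The `bitmap` and `rotationDegrees` values are assumed to come from your own camera pipeline; the accurate SDK is configured the same way via `AccuratePoseDetectorOptions`.

```kotlin
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// Base SDK in streaming mode for real-time camera input.
val options = PoseDetectorOptions.Builder()
    .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
    .build()
val poseDetector = PoseDetection.getClient(options)

// `bitmap` and `rotationDegrees` are assumed to be supplied by your camera pipeline.
val image = InputImage.fromBitmap(bitmap, rotationDegrees)
poseDetector.process(image)
    .addOnSuccessListener { pose ->
        // Any of the 33 landmarks can be queried by name.
        val leftShoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER)
        leftShoulder?.let { Log.d("Pose", "Left shoulder at ${it.position}") }
    }
```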
ML Kit's on-device APIs are now offered through a standalone SDK, independent of Firebase. Learn more about this change in our migration guide. Cloud APIs and custom model deployment are still available via Firebase Machine Learning.
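With the standalone SDK, APIs are imported from `com.google.mlkit.*` packages and require no Firebase initialization. A small sketch using the language identification API illustrates this; the input string is illustrative.

```kotlin
import android.util.Log
import com.google.mlkit.nl.languageid.LanguageIdentification

// Standalone SDK: client creation needs no Firebase setup.
val languageIdentifier = LanguageIdentification.getClient()
languageIdentifier.identifyLanguage("Bonjour tout le monde")
    .addOnSuccessListener { languageCode ->
        // "und" is returned when the language cannot be determined.
        if (languageCode != "und") {
            Log.d("LangId", "Identified language: $languageCode")
        }
    }
```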
- Explore the ready-to-use APIs: text recognition, face detection, barcode scanning, image labeling, object detection and tracking, smart reply, text translation, and language identification.
- Learn how to use custom TensorFlow Lite image labeling models in your apps. Read Custom models with ML Kit.
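A sketch of image labeling with a custom TensorFlow Lite model, assuming a model file named `model.tflite` has been bundled in the app's assets (the filename and thresholds are illustrative):

```kotlin
import android.util.Log
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// Point ML Kit at a TensorFlow Lite model bundled in the app's assets.
// "model.tflite" is a hypothetical filename for this sketch.
val localModel = LocalModel.Builder()
    .setAssetFilePath("model.tflite")
    .build()

val options = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.5f)  // illustrative threshold
    .setMaxResultCount(3)
    .build()
val labeler = ImageLabeling.getClient(options)

// `bitmap` and `rotationDegrees` are assumed to come from your app.
val image = InputImage.fromBitmap(bitmap, rotationDegrees)
labeler.process(image)
    .addOnSuccessListener { labels ->
        for (label in labels) {
            Log.d("Labeling", "${label.text}: ${label.confidence}")
        }
    }
```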
Take a look at our sample apps and codelabs. They help you get started with all of the APIs.