Use on-device machine learning in your apps to easily solve real-world problems.
ML Kit is a mobile SDK that brings Google's on-device machine learning expertise to Android and iOS apps. Use our powerful yet easy-to-use Vision and Natural Language APIs to solve common challenges in your apps or create brand-new user experiences. All are powered by Google's best-in-class ML models and offered to you at no cost.
ML Kit's APIs all run on-device, enabling real-time use cases such as processing frames from a live camera stream. It also means the functionality is available offline.
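For example, a live camera stream can be fed to ML Kit by wrapping each frame in an `InputImage`. The sketch below assumes Android with CameraX; `FrameAnalyzer` is a hypothetical analyzer class, not part of the SDK.

```kotlin
import android.annotation.SuppressLint
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage

// Hypothetical CameraX analyzer that wraps each camera frame for ML Kit.
class FrameAnalyzer : ImageAnalysis.Analyzer {
    @SuppressLint("UnsafeOptInUsageError")
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
        // Wrap the frame, preserving its rotation, so any ML Kit detector
        // (pose, barcode, text, ...) can process it.
        val image = InputImage.fromMediaImage(
            mediaImage,
            imageProxy.imageInfo.rotationDegrees
        )
        // Pass `image` to an ML Kit detector here. In real code, close the
        // proxy in the detector's completion listener, not immediately.
        imageProxy.close()
    }
}
```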
Added a new Pose Detection API, a CPU-based, lightweight, and versatile solution to help developers track a user's physical actions in real time within their apps. Pose Detection offers a 33-point skeletal match of a user's body in real time and has been tuned to work well for complex athletic use cases. Pose Detection comes in two optimized SDKs: base and accurate.
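Using the base Pose Detection SDK on Android looks roughly like this. A minimal sketch, assuming `image` is an `InputImage` built from a camera frame or bitmap:

```kotlin
import android.util.Log
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// Configure the base (CPU, real-time) detector in stream mode for live video.
val options = PoseDetectorOptions.Builder()
    .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
    .build()
val poseDetector = PoseDetection.getClient(options)

// `image` is assumed to be an InputImage from a camera frame or bitmap.
poseDetector.process(image)
    .addOnSuccessListener { pose ->
        // Any of the 33 landmarks can be queried by type.
        val nose = pose.getPoseLandmark(PoseLandmark.NOSE)
        nose?.let { Log.d("Pose", "Nose at ${it.position}") }
    }
    .addOnFailureListener { e -> Log.e("Pose", "Detection failed", e) }
```

Swapping `PoseDetectorOptions` for the `pose-detection-accurate` SDK's `AccuratePoseDetectorOptions` trades latency for precision.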
ML Kit's on-device APIs are now available as a standalone SDK. Learn more about this change in our migration guide. Cloud APIs, AutoML Vision Edge, and custom model deployment are still available via Firebase Machine Learning.
- Read the migration guide.
- Explore the ready-to-use APIs: text recognition, face detection, barcode scanning, image labeling, object detection and tracking, smart reply, text translation, and language identification.
- Learn how to use custom TensorFlow Lite image labeling models in your apps. Read Custom models with ML Kit.
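The Natural Language APIs in the list above follow the same task-based pattern. As one example, language identification on Android is a few lines; this sketch assumes the `language-id` dependency is on the classpath:

```kotlin
import android.util.Log
import com.google.mlkit.nl.languageid.LanguageIdentification

val languageIdentifier = LanguageIdentification.getClient()
languageIdentifier.identifyLanguage("Bonjour tout le monde")
    .addOnSuccessListener { languageCode ->
        // "und" means the language could not be determined.
        if (languageCode == "und") {
            Log.i("LangID", "Can't identify language.")
        } else {
            Log.i("LangID", "Language: $languageCode")
        }
    }
    .addOnFailureListener { e -> Log.e("LangID", "Identification failed", e) }
```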
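For custom models, a TensorFlow Lite classifier bundled in your app's assets can back the image labeling API. A minimal sketch, assuming a model file named `model.tflite` in the assets folder (the filename is illustrative):

```kotlin
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// Point ML Kit at a bundled TensorFlow Lite model (illustrative path).
val localModel = LocalModel.Builder()
    .setAssetFilePath("model.tflite")
    .build()

// Build an image labeler backed by the custom model instead of the base one.
val options = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.5f)
    .setMaxResultCount(5)
    .build()
val labeler = ImageLabeling.getClient(options)
// labeler.process(inputImage) then returns labels from your own model.
```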
Take a look at our sample apps and codelabs to get started with all of the APIs.