Use on-device machine learning in your apps to easily solve real-world problems.
ML Kit is a mobile SDK that brings Google's on-device machine learning expertise to Android and iOS apps. Use our powerful yet easy-to-use Vision and Natural Language APIs to solve common challenges in your apps or create brand-new user experiences. All are powered by Google's best-in-class ML models and offered to you at no cost.
ML Kit's APIs all run on-device, enabling real-time use cases such as processing a live camera stream. It also means the functionality is available offline.
The new Digital Ink Recognition API recognizes text and shapes handwritten on a digital surface, such as a touch screen. It supports 300+ languages, as well as emojis, basic shapes, and AutoDraw. It uses the same technology that powers the handwriting recognition layouts in Gboard, the Google Translate apps, and the Quick, Draw! game.
ML Kit's on-device APIs are now available as a standalone SDK. Learn more about this change in our migration guide. Cloud APIs, AutoML Vision Edge, and custom model deployment are still available via Firebase Machine Learning.
- Read the migration guide.
- Explore the ready-to-use APIs: text recognition, face detection, barcode scanning, image labeling, object detection & tracking, smart reply, text translation, and language identification.
- Learn how to use custom TensorFlow Lite image labeling models in your apps. Read Custom models with ML Kit.
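As a minimal sketch of what using one of the ready-to-use APIs looks like on Android, the snippet below runs on-device text recognition on a `Bitmap` with the standalone ML Kit SDK. It assumes the `com.google.mlkit:text-recognition` dependency is on the classpath; the `Bitmap` source and the `println` handlers are placeholders for your own app logic.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Sketch: run on-device text recognition on a Bitmap.
// Assumes the com.google.mlkit:text-recognition dependency is declared in Gradle.
fun recognizeText(bitmap: Bitmap) {
    // Wrap the Bitmap; the second argument is the image's rotation in degrees.
    val image = InputImage.fromBitmap(bitmap, 0)

    // Latin-script recognizer; it runs fully on-device, so it also works offline.
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    // process() returns a Task that completes asynchronously.
    recognizer.process(image)
        .addOnSuccessListener { result ->
            // result.text is the full recognized string; result.textBlocks
            // exposes per-block text, bounding boxes, and recognized languages.
            println(result.text)
        }
        .addOnFailureListener { e ->
            println("Text recognition failed: $e")
        }
}
```

The other Vision APIs follow the same pattern: build an `InputImage`, get a client, and process the image asynchronously via a `Task`.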
Take a look at our sample apps and codelabs to get started with any of the APIs.