The Mobile Vision API is now a part of ML Kit. We strongly encourage you to try it out, as it comes with new capabilities like on-device image labeling! Also, note that we ultimately plan to wind down the Mobile Vision API, with all new on-device ML capabilities released via ML Kit. Feel free to reach out to Firebase support for help.

Get Started with the Mobile Vision API

The Mobile Vision API has detectors that let you find objects in photos and video.

This tutorial guides you through installing a sample Android application that detects faces in photos in real time.

Before you begin

  • Set up your Android development environment. If you are new to developing Android applications, see Building Your First App.
  • Have an Android device for testing that runs Android 2.3 (Gingerbread) or higher and includes the Google Play Store.

Download and run the sample app

For this exercise, you'll need to download the Photo Demo sample Android application.

To download and set up the sample application in Android Studio:

  1. Download the Vision samples from GitHub.

    You can either use the "Download ZIP" button on the GitHub page or clone the repository on the command line:

    git clone https://github.com/googlesamples/android-vision.git

  2. Import the photo-demo project in Android Studio:

    • Click File > New > Import Project.
    • In the "Select Eclipse or Gradle Project to Import" window, navigate to the directory where you downloaded the vision samples repository.
    • Select the "photo-demo" folder and click OK.
    • Android Studio may prompt you to install the latest version of various Android libraries, in particular com.google.android.gms:play-services. Click "Install Repository and sync project" and follow the instructions.
  3. Connect your device over USB. You should see a notification that says "USB Debugging Enabled". If you don't see this notification, follow Step 2 here to enable USB debugging on your device, then plug your device in again.

  4. Run the app either by clicking the green Run arrow in the toolbar, or by choosing Run > Run 'app'.

The app should show a face image with circles marking the eyes, nose, mouth and cheeks.
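Under the hood, the sample relies on the Mobile Vision FaceDetector API to find faces and their landmarks in a still image. The following is a minimal sketch (not the sample's actual code) of photo-based landmark detection, assuming a Bitmap has already been loaded and the play-services-vision dependency is on the classpath:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

public class FaceLandmarkSketch {

    // Detect faces in a still photo and read out each landmark position.
    static void detectFaces(Context context, Bitmap photo) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(false)               // tracking off for still images
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .build();

        if (!detector.isOperational()) {
            // The detector's native dependencies may still be downloading
            // the first time the app runs; bail out until they are ready.
            return;
        }

        Frame frame = new Frame.Builder().setBitmap(photo).build();
        SparseArray<Face> faces = detector.detect(frame);

        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            for (Landmark landmark : face.getLandmarks()) {
                // getType() distinguishes eyes, nose base, mouth, cheeks, etc.;
                // getPosition() gives the landmark's coordinates in the image.
                float x = landmark.getPosition().x;
                float y = landmark.getPosition().y;
            }
        }

        detector.release();  // free the detector's native resources when done
    }
}
```

Calling setTrackingEnabled(false) is recommended for unrelated individual photos; tracking only helps when detecting faces across consecutive video frames.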

Next Steps

Now that you have your environment set up to run with the Mobile Vision API, there are a few things you can do next:

  • For more information on setting up Google Play services to work with your projects, see Setting Up Google Play Services, and compile against com.google.android.gms:play-services-vision.
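To pull in only the vision detectors rather than all of Google Play services, your app module's build.gradle might declare the dependency like this (the version number is illustrative; use the latest Play services release available to you):

```groovy
dependencies {
    // Version shown is an example; check for the current Play services release.
    compile 'com.google.android.gms:play-services-vision:9.4.0'
}
```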