The Mobile Vision API is deprecated

Track Faces and Barcodes

This page walks through building an app that uses the rear camera to show a view of the detected faces and barcodes. The screenshot below shows it in action. We'll show you how to track all of the barcodes and draw an overlay around each, indicating its value, position, and size. Additionally, we'll draw a rectangle overlay around each face, indicating the face ID (assigned sequentially), position, and size.

If you want to follow along with the code, or just want to build and try out the app, build the sample MultiDetectorDemo application by following the instructions on the Getting Started page.

This tutorial will show you how to:

  1. Create barcode and face detectors.
  2. Track multiple faces and barcodes using a GMVMultiDetectorDataOutput.
  3. Create separate graphics for barcodes and faces.

Creating the Multi-Detector Pipeline

GoogleMVDataOutput contains several subclasses of AVCaptureVideoDataOutput that allow you to integrate feature detection and tracking with your AVFoundation video pipeline.

Import the GoogleMobileVision framework to use the detector API and the GoogleMVDataOutput framework to use the video tracking pipeline.

@import GoogleMobileVision;
@import GoogleMVDataOutput;

Creating a Barcode Detector

GMVDetector *barcodeDetector = [GMVDetector detectorOfType:GMVDetectorTypeBarcode
                                                   options:nil];

See the Barcode API Tutorial: Barcodes Overview for more information on the default barcode detection settings.
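If your app only needs certain barcode formats, you can pass an options dictionary instead of nil. A minimal sketch, assuming the GMVDetectorBarcodeFormats option key and format constants from GoogleMobileVision, that restricts detection to QR codes and Code 128:

```objc
// Restrict the barcode detector to the formats the app actually needs.
// Scanning for fewer formats is faster than scanning for all of them.
NSDictionary *barcodeOptions = @{
    GMVDetectorBarcodeFormats : @(GMVDetectorBarcodeFormatQRCode |
                                  GMVDetectorBarcodeFormatCode128)
};
GMVDetector *barcodeDetector = [GMVDetector detectorOfType:GMVDetectorTypeBarcode
                                                   options:barcodeOptions];
```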

Creating a Face Detector

  NSDictionary *faceOptions = @{
      GMVDetectorFaceTrackingEnabled : @(YES),
      GMVDetectorFaceMinSize : @(0.15)
  };
  GMVDetector *faceDetector = [GMVDetector detectorOfType:GMVDetectorTypeFace
                                                  options:faceOptions];

See the Face API Tutorial: Face Tracker for more information on the default face detection settings that are implied by creating the face detector in this way.
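The face detector accepts additional option keys beyond the two shown above. As a sketch using GMVDetector's face option keys, you could trade speed for accuracy and enable landmark and classification reporting like this:

```objc
// Additional face detector options. Accurate mode and landmark/classification
// reporting improve result quality at the cost of detection speed.
NSDictionary *faceOptions = @{
    GMVDetectorFaceTrackingEnabled : @(YES),
    GMVDetectorFaceMinSize : @(0.15),
    GMVDetectorFaceMode : @(GMVDetectorFaceAccurateMode),
    GMVDetectorFaceLandmarkType : @(GMVDetectorFaceLandmarkAll),
    GMVDetectorFaceClassificationType : @(GMVDetectorFaceClassificationAll)
};
GMVDetector *faceDetector = [GMVDetector detectorOfType:GMVDetectorTypeFace
                                                options:faceOptions];
```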

Creating a Multi-Detector DataOutput

A GMVMultiDetectorDataOutput is a concrete subclass of AVCaptureVideoDataOutput. It processes video frames and delegates detection to its associated detectors, then delivers the detection results to the appropriate GMVOutputTrackerDelegate obtained through its GMVMultiDetectorDataOutputDelegate.

Create an instance of GMVMultiDetectorDataOutput and associate the detectors with it:

NSArray *detectors = @[faceDetector, barcodeDetector];
GMVDataOutput *dataOutput = [[GMVMultiDetectorDataOutput alloc] initWithDetectors:detectors];
dataOutput.multiDetectorDataDelegate = self;

Implementing the GMVMultiDetectorDataOutputDelegate

The barcode and face detectors may detect multiple features in each frame. The GMVMultiDetectorDataOutput calls its delegate to create a FaceTracker or BarcodeTracker instance for each new feature it detects.

#pragma mark - GMVMultiDetectorDataOutputDelegate

- (id<GMVOutputTrackerDelegate>)dataOutput:(GMVDataOutput *)dataOutput
                              fromDetector:(GMVDetector *)detector
                         trackerForFeature:(GMVFeature *)feature {
  if ([feature.type isEqualToString:GMVFeatureTypeFace]) {
    FaceTracker *tracker = [[FaceTracker alloc] init];
    return tracker;
  } else if ([feature.type isEqualToString:GMVFeatureTypeBarcode]) {
    BarcodeTracker *tracker = [[BarcodeTracker alloc] init];
    return tracker;
  }
  return nil;
}

Conceptually, this unifies the barcode and face portions of the pipeline to look like the image below:

The multi-detector will receive a series of images from the video pipeline. Each image is submitted to the barcode detector and the face detector, resulting in both barcode detection and face detection on each frame.

Adding the DataOutput to the Video Pipeline

The code for setting up and executing barcode/face tracking is in ViewController.m, which is the main view controller for this app. Typically, the video pipeline and detectors are specified in the viewDidLoad method as shown here:

- (void)viewDidLoad {
  [super viewDidLoad];

  // Setup default camera settings.
  self.session = [[AVCaptureSession alloc] init];
  self.session.sessionPreset = AVCaptureSessionPresetMedium;
  [self setCameraSelection];
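The setCameraSelection helper is not shown on this page. A minimal sketch of what such a method might look like (the method body below is an assumption, not the sample's actual implementation), adding the default rear camera as the session's input:

```objc
// Hypothetical -setCameraSelection: adds the default (back-facing) camera
// as the capture session's input.
- (void)setCameraSelection {
  [self.session beginConfiguration];

  // Remove any existing inputs before adding a new one.
  for (AVCaptureInput *oldInput in self.session.inputs) {
    [self.session removeInput:oldInput];
  }

  NSError *error = nil;
  AVCaptureDevice *camera =
      [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
  AVCaptureDeviceInput *input =
      [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
  if (input && [self.session canAddInput:input]) {
    [self.session addInput:input];
  }

  [self.session commitConfiguration];
}
```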

The AVCaptureSession coordinates the data flow from the input to the output. Add the GMVMultiDetectorDataOutput instance to the session so that it receives video frames to process.

  // Setup the GMVDataOutput with the session.
  [self.session addOutput:dataOutput];

Instantiate an AVCaptureVideoPreviewLayer with the session to display the camera feed. In this example code, we have an overlay UIView that sits on top of the camera preview; the tracking graphics for faces and barcodes are drawn on this overlay.

  // Setup camera preview.
  self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
  [[self.view layer] addSublayer:self.previewLayer];
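The preview layer still needs a frame and video gravity, and the session has to be started before any frames flow through the pipeline. A minimal sketch of the standard AVFoundation pattern, typically placed in viewDidLayoutSubviews and viewDidAppear: respectively:

```objc
// Size the preview layer to fill the view, preserving aspect ratio.
- (void)viewDidLayoutSubviews {
  [super viewDidLayoutSubviews];
  self.previewLayer.frame = self.view.layer.bounds;
  self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
}

// Start the capture session once the view is on screen. Frame processing
// (and therefore detection) begins when the session starts running.
- (void)viewDidAppear:(BOOL)animated {
  [super viewDidAppear:animated];
  [self.session startRunning];
}
```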

Creating Separate Graphics for Barcodes and Faces

Each of the GMVOutputTrackerDelegate instances created in the earlier code snippets maintains graphics on a view that is overlaid upon the video camera preview:

As mentioned earlier, GMVMultiDetectorDataOutputDelegate creates an instance of GMVOutputTrackerDelegate, FaceTracker or BarcodeTracker in this example. The BarcodeTracker and FaceTracker classes implement the specifics of how to render the graphics associated with a barcode or face instance, respectively.

For example, the BarcodeTracker draws the detected barcode's raw value and bounding box:

#pragma mark - GMVOutputTrackerDelegate

- (void)dataOutput:(GMVDataOutput *)dataOutput detectedFeature:(GMVFeature *)feature {
  self.barcodeView = [[UIView alloc] initWithFrame:CGRectZero];
  self.barcodeView.backgroundColor = [UIColor blueColor];
  [self.overlay addSubview:self.barcodeView];

  self.valueLabel = [[UILabel alloc] initWithFrame:CGRectZero];
  [self.overlay addSubview:self.valueLabel];
}

- (void)dataOutput:(GMVDataOutput *)dataOutput
updateFocusingFeature:(GMVBarcodeFeature *)barcode
      forResultSet:(NSArray<GMVBarcodeFeature *> *)features {
  self.barcodeView.hidden = NO;
  CGRect rect = [self scaleRect:barcode.bounds];
  self.barcodeView.frame = rect;

  self.valueLabel.hidden = NO;
  self.valueLabel.text = barcode.rawValue;
  self.valueLabel.frame = CGRectMake(rect.origin.x, rect.origin.y + rect.size.height, 200, 20);
}

- (void)dataOutput:(GMVDataOutput *)dataOutput
updateMissingFeatures:(NSArray<GMVBarcodeFeature *> *)features {
  self.barcodeView.hidden = YES;
  self.valueLabel.hidden = YES;
}

- (void)dataOutputCompletedWithFocusingFeature:(GMVDataOutput *)dataOutput {
  [self.barcodeView removeFromSuperview];
  [self.valueLabel removeFromSuperview];
}
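The scaleRect: helper used above is not shown on this page. It converts a feature's bounding box from the processed image's coordinate space into the preview view's coordinates. A minimal sketch, assuming the tracker keeps a reference to the data output and uses the xScale, yScale, and offset values it publishes (the property names here are assumptions based on GMVDataOutput's scaling support):

```objc
// Hypothetical -scaleRect: helper: maps a feature's bounding box from image
// coordinates into preview-view coordinates using the scale and offset
// values published by the data output.
- (CGRect)scaleRect:(CGRect)rect {
  return CGRectMake(rect.origin.x * self.dataOutput.xScale + self.dataOutput.offset.x,
                    rect.origin.y * self.dataOutput.yScale + self.dataOutput.offset.y,
                    rect.size.width * self.dataOutput.xScale,
                    rect.size.height * self.dataOutput.yScale);
}
```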

Similarly, the FaceTracker draws the detected face ID and bounding box:

#pragma mark - GMVOutputTrackerDelegate

- (void)dataOutput:(GMVDataOutput *)dataOutput detectedFeature:(GMVFeature *)feature {
  self.faceView = [[UIView alloc] initWithFrame:CGRectZero];
  self.faceView.backgroundColor = [UIColor redColor];
  [self.overlay addSubview:self.faceView];

  self.idLabel = [[UILabel alloc] initWithFrame:CGRectZero];
  [self.overlay addSubview:self.idLabel];
}

- (void)dataOutput:(GMVDataOutput *)dataOutput
  updateFocusingFeature:(GMVFaceFeature *)face
           forResultSet:(NSArray<GMVFaceFeature *> *)features {
  self.faceView.hidden = NO;
  CGRect rect = [self scaleRect:face.bounds];
  self.faceView.frame = rect;

  self.idLabel.hidden = NO;
  self.idLabel.text = [NSString stringWithFormat:@"id : %lu", face.trackingID];
  self.idLabel.frame = CGRectMake(rect.origin.x, rect.origin.y + rect.size.height, 200, 20);
}

- (void)dataOutput:(GMVDataOutput *)dataOutput
  updateMissingFeatures:(NSArray<GMVFaceFeature *> *)features {
  self.faceView.hidden = YES;
  self.idLabel.hidden = YES;
}

- (void)dataOutputCompletedWithFocusingFeature:(GMVDataOutput *)dataOutput {
  [self.faceView removeFromSuperview];
  [self.idLabel removeFromSuperview];
}