Recognize text in images with ML Kit on iOS

You can use ML Kit to recognize text in images or video, such as the text of a street sign. The main characteristics of this feature are:

Text Recognition API
  Description        Recognize Latin-script text in images or videos.
  SDK name           GoogleMLKit/TextRecognition (version 2.2.0)
  Implementation     Assets are statically linked to your app at build time.
  App size impact    About 17 MB
  Performance        Real-time on most devices.

See the ML Kit quickstart sample on GitHub for an example of this API in use, or try the codelab.

Before you begin

  1. Include the following ML Kit pod in your Podfile:
    pod 'GoogleMLKit/TextRecognition', '2.2.0'
    
  2. After you install or update your project's Pods, open your Xcode project using its .xcworkspace. ML Kit is supported in Xcode version 12.4 or greater.

1. Create an instance of TextRecognizer

Create an instance of TextRecognizer by calling +textRecognizer:

Swift

let textRecognizer = TextRecognizer.textRecognizer()
      

Objective-C

MLKTextRecognizer *textRecognizer = [MLKTextRecognizer textRecognizer];
      

2. Prepare the input image

To recognize text in an image, you pass a VisionImage object to the TextRecognizer's process(_:completion:) method. Create the VisionImage object from either a UIImage or a CMSampleBuffer.

If you use a UIImage, follow these steps:

  • Create a VisionImage object with the UIImage. Make sure to specify the correct .orientation.

    Swift

    let visionImage = VisionImage(image: image)
    visionImage.orientation = image.imageOrientation

    Objective-C

    MLKVisionImage *visionImage = [[MLKVisionImage alloc] initWithImage:image];
    visionImage.orientation = image.imageOrientation;

If you use a CMSampleBuffer, follow these steps:

  • Specify the orientation of the image data contained in the CMSampleBuffer.

    To get the image orientation:

    Swift

    func imageOrientation(
      deviceOrientation: UIDeviceOrientation,
      cameraPosition: AVCaptureDevice.Position
    ) -> UIImage.Orientation {
      switch deviceOrientation {
      case .portrait:
        return cameraPosition == .front ? .leftMirrored : .right
      case .landscapeLeft:
        return cameraPosition == .front ? .downMirrored : .up
      case .portraitUpsideDown:
        return cameraPosition == .front ? .rightMirrored : .left
      case .landscapeRight:
        return cameraPosition == .front ? .upMirrored : .down
      case .faceDown, .faceUp, .unknown:
        return .up
      @unknown default:
        return .up
      }
    }
          

    Objective-C

    - (UIImageOrientation)
      imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                             cameraPosition:(AVCaptureDevicePosition)cameraPosition {
      switch (deviceOrientation) {
        case UIDeviceOrientationPortrait:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationLeftMirrored
                                                                : UIImageOrientationRight;
    
        case UIDeviceOrientationLandscapeLeft:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationDownMirrored
                                                                : UIImageOrientationUp;
        case UIDeviceOrientationPortraitUpsideDown:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationRightMirrored
                                                                : UIImageOrientationLeft;
        case UIDeviceOrientationLandscapeRight:
          return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationUpMirrored
                                                                : UIImageOrientationDown;
        case UIDeviceOrientationUnknown:
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
          return UIImageOrientationUp;
      }
    }
          
  • Create a VisionImage object using the CMSampleBuffer object and orientation:

    Swift

    let image = VisionImage(buffer: sampleBuffer)
    image.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: cameraPosition)

    Objective-C

     MLKVisionImage *image = [[MLKVisionImage alloc] initWithBuffer:sampleBuffer];
     image.orientation =
       [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                    cameraPosition:cameraPosition];
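
In a camera app, the CMSampleBuffer values typically arrive through an AVCaptureVideoDataOutput. The following Swift sketch shows one way to configure that output; it assumes the method lives in a view controller that conforms to AVCaptureVideoDataOutputSampleBufferDelegate, the queue label is a placeholder, and the camera input setup is omitted.

import AVFoundation

private func setUpVideoOutput(for session: AVCaptureSession) {
  let output = AVCaptureVideoDataOutput()
  // Deliver frames in a pixel format the recognizer can consume.
  output.videoSettings =
      [(kCVPixelBufferPixelFormatTypeKey as String): kCVPixelFormatType_32BGRA]
  // Drop frames that arrive while a previous frame is still being processed.
  output.alwaysDiscardsLateVideoFrames = true
  // Sample buffers are delivered to captureOutput(_:didOutput:from:) on this queue.
  output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "VideoDataOutputQueue"))
  if session.canAddOutput(output) {
    session.addOutput(output)
  }
}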

3. Process the image

Then, pass the image to the process(_:completion:) method:

Swift

textRecognizer.process(visionImage) { result, error in
  guard error == nil, let result = result else {
    // Error handling
    return
  }
  // Recognized text
}

Objective-C

[textRecognizer processImage:image
                  completion:^(MLKText *_Nullable result,
                               NSError *_Nullable error) {
  if (error != nil || result == nil) {
    // Error handling
    return;
  }
  // Recognized text
}];

4. Extract text from blocks of recognized text

If the text recognition operation succeeds, it returns a Text object. A Text object contains the full text recognized in the image and zero or more TextBlock objects.

Each TextBlock represents a rectangular block of text, which contains zero or more TextLine objects. Each TextLine object contains zero or more TextElement objects, which represent words and word-like entities such as dates and numbers.

For each TextBlock, TextLine, and TextElement object, you can get the text recognized in the region and the bounding coordinates of the region.

For example:

Swift

let resultText = result.text
for block in result.blocks {
    let blockText = block.text
    let blockLanguages = block.recognizedLanguages
    let blockCornerPoints = block.cornerPoints
    let blockFrame = block.frame
    for line in block.lines {
        let lineText = line.text
        let lineLanguages = line.recognizedLanguages
        let lineCornerPoints = line.cornerPoints
        let lineFrame = line.frame
        for element in line.elements {
            let elementText = element.text
            let elementCornerPoints = element.cornerPoints
            let elementFrame = element.frame
        }
    }
}

Objective-C

NSString *resultText = result.text;
for (MLKTextBlock *block in result.blocks) {
  NSString *blockText = block.text;
  NSArray<MLKTextRecognizedLanguage *> *blockLanguages = block.recognizedLanguages;
  NSArray<NSValue *> *blockCornerPoints = block.cornerPoints;
  CGRect blockFrame = block.frame;
  for (MLKTextLine *line in block.lines) {
    NSString *lineText = line.text;
    NSArray<MLKTextRecognizedLanguage *> *lineLanguages = line.recognizedLanguages;
    NSArray<NSValue *> *lineCornerPoints = line.cornerPoints;
    CGRect lineFrame = line.frame;
    for (MLKTextElement *element in line.elements) {
      NSString *elementText = element.text;
      NSArray<NSValue *> *elementCornerPoints = element.cornerPoints;
      CGRect elementFrame = element.frame;
    }
  }
}

Tips to improve performance

  • For processing video frames, use the results(in:) synchronous API of the detector. Call this method from the AVCaptureVideoDataOutputSampleBufferDelegate's captureOutput(_:didOutput:from:) function to synchronously get results from the given video frame. Keep AVCaptureVideoDataOutput's alwaysDiscardsLateVideoFrames set to true to throttle calls to the detector: if a new video frame becomes available while the detector is running, it is dropped. A sketch combining this and the following tip appears after this list.
  • If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each processed input frame. See the updatePreviewOverlayViewWithLastFrame method in the ML Kit quickstart sample for an example.
  • Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.
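
To make these tips concrete, here is a Swift sketch of a capture delegate that ties them together. It assumes the MLKitTextRecognition and MLKitVision module names from the CocoaPods integration, plus a hypothetical CameraViewController with cameraPosition and textRecognizer properties, the imageOrientation(deviceOrientation:cameraPosition:) helper from step 2, and an updateOverlay(with:for:) drawing method of your own.

import AVFoundation
import MLKitTextRecognition
import MLKitVision
import UIKit

extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
  func captureOutput(_ output: AVCaptureOutput,
                     didOutput sampleBuffer: CMSampleBuffer,
                     from connection: AVCaptureConnection) {
    let visionImage = VisionImage(buffer: sampleBuffer)
    visionImage.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: cameraPosition)

    // results(in:) runs synchronously on the delegate queue, so frames that
    // arrive while recognition is in progress are dropped by
    // alwaysDiscardsLateVideoFrames instead of piling up.
    guard let recognizedText = try? textRecognizer.results(in: visionImage) else {
      return
    }

    // Get the result first, then draw the camera frame and the text overlay
    // in a single pass on the main queue.
    DispatchQueue.main.async {
      self.updateOverlay(with: recognizedText, for: sampleBuffer)
    }
  }
}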