Prerequisites
Before you start to migrate your code, be sure you meet these requirements:
- ML Kit supports Xcode 13.2.1 or greater.
- ML Kit supports iOS version 15.5 or greater.
- ML Kit does not support 32-bit architectures (i386 and armv7); it supports only 64-bit architectures (x86_64 and arm64).
- The ML Kit library is provided only through CocoaPods. You can't mix frameworks and CocoaPods, so to use this library you first need to migrate to CocoaPods.
Update Cocoapods
Update the dependencies for the ML Kit iOS CocoaPods in your app’s Podfile:
API | Old pod name(s) | New pod name(s) |
---|---|---|
Barcode scanning | Firebase/MLVision<br>Firebase/MLVisionBarcodeModel | GoogleMLKit/BarcodeScanning |
Face detection | Firebase/MLVision<br>Firebase/MLVisionFaceModel | GoogleMLKit/FaceDetection |
Image labeling | Firebase/MLVision<br>Firebase/MLVisionLabelModel | GoogleMLKit/ImageLabeling |
Object detection and tracking | Firebase/MLVisionObjectDetection | GoogleMLKit/ObjectDetection |
Text recognition | Firebase/MLVision<br>Firebase/MLVisionTextModel | GoogleMLKit/TextRecognition |
AutoML image labeling (bundled model) | Firebase/MLVisionAutoML | GoogleMLKit/ImageLabelingCustom |
AutoML image labeling (model download from Firebase) | Firebase/MLVisionAutoML | GoogleMLKit/ImageLabelingCustom<br>GoogleMLKit/LinkFirebase |
Language ID | Firebase/MLNaturalLanguage<br>Firebase/MLNLLanguageID | GoogleMLKit/LanguageID |
Smart reply | Firebase/MLNaturalLanguage<br>Firebase/MLNLSmartReply | GoogleMLKit/SmartReply |
Translate | Firebase/MLNaturalLanguage<br>Firebase/MLNLTranslate | GoogleMLKit/Translate |
Update names of classes, enums, and types
In general, classes, enums, and types need to be renamed as follows:
- Swift: Remove the `Vision` prefix from class names and enums.
- Objective-C: Replace the `FIRVision` and `FIR` class name and enum prefixes with `MLK`.
For some class names and types, this general rule does not apply:
Swift
Old class or type | New class or type |
---|---|
AutoMLLocalModel | LocalModel |
AutoMLRemoteModel | CustomRemoteModel |
VisionBarcodeDetectionCallback | BarcodeScanningCallback |
VisionBarcodeDetector | BarcodeScanner |
VisionBarcodeDetectorOptions | BarcodeScannerOptions |
VisionImage | VisionImage (no change) |
VisionPoint | VisionPoint (no change) |
VisionOnDeviceAutoMLImageLabelerOptions | CustomImageLabelerOptions |
VisionOnDeviceImageLabelerOptions | ImageLabelerOptions |
Objective-C
Old class or type | New class or type |
---|---|
FIRAutoMLLocalModel | MLKLocalModel |
FIRAutoMLRemoteModel | MLKCustomRemoteModel |
FIRVisionBarcodeDetectionCallback | MLKBarcodeScanningCallback |
FIRVisionBarcodeDetector | MLKBarcodeScanner |
FIRVisionBarcodeDetectorOptions | MLKBarcodeScannerOptions |
FIRVisionImage | MLKVisionImage |
FIRVisionOnDeviceAutoMLImageLabelerOptions | MLKCustomImageLabelerOptions |
FIRVisionOnDeviceImageLabelerOptions | MLKImageLabelerOptions |
FIRVisionPoint | MLKVisionPoint |
Update method names
Update method names according to these rules:
- Domain entry point classes (`Vision`, `NaturalLanguage`) no longer exist. They have been replaced by task-specific classes. Replace calls to their various factory methods for getting detectors with direct calls to each detector's factory method.
- The `VisionImageMetadata` class has been removed, along with the `VisionDetectorImageOrientation` enum. Use the `orientation` property of `VisionImage` to specify the display orientation of an image.
- The `onDeviceTextRecognizer` method that gets a new `TextRecognizer` instance has been renamed to `textRecognizer`.
- The confidence property has been removed from text recognition result classes, including `TextElement`, `TextLine`, and `TextBlock`.
- The `onDeviceImageLabeler` and `onDeviceImageLabeler(options:)` methods to get a new `ImageLabeler` instance have been merged and renamed to `imageLabeler(options:)`.
- The `objectDetector` method to get a new `ObjectDetector` instance has been removed. Use `objectDetector(options:)` instead.
- The `type` property has been removed from `ImageLabeler`, and the `entityID` property has been removed from the image labeling result class, `ImageLabel`.
- The barcode scanning API `detect(in:completion:)` has been renamed to `process(_:completion:)` to be consistent with other vision APIs.
- The Natural Language APIs now use the term "language tag" (as defined by the BCP-47 standard) instead of "language code".
- `TranslateLanguage` now uses readable names (like `.english`) for its constants instead of language tags (like `.en`).
Here are some examples of old and new Swift methods:
Old
```swift
let options = VisionOnDeviceImageLabelerOptions()
options.confidenceThreshold = 0.75
let labeler = Vision.vision().onDeviceImageLabeler(options: options)

let detector = Vision.vision().faceDetector(options: options)

let localModel = AutoMLLocalModel(manifestPath: "automl/manifest.json")
let options = VisionOnDeviceAutoMLImageLabelerOptions(localModel: localModel)
options.confidenceThreshold = 0.75
let labeler = vision.onDeviceAutoMLImageLabeler(options: options)

let detector = Vision.vision().objectDetector()
```
New
```swift
let options = ImageLabelerOptions()
options.confidenceThreshold = NSNumber(value: 0.75)
let labeler = ImageLabeler.imageLabeler(options: options)

let detector = FaceDetector.faceDetector(options: options)

let localModel = LocalModel(manifestPath: "automl/manifest.json")
let options = CustomImageLabelerOptions(localModel: localModel)
options.confidenceThreshold = NSNumber(value: 0.75)
let labeler = ImageLabeler.imageLabeler(options: options)

let detector = ObjectDetector.objectDetector(options: ObjectDetectorOptions())
```
Here are some examples of old and new Objective-C methods:
Old
```objective-c
FIRVisionOnDeviceImageLabelerOptions *options = [[FIRVisionOnDeviceImageLabelerOptions alloc] init];
options.confidenceThreshold = 0.75;
FIRVisionImageLabeler *labeler = [[FIRVision vision] onDeviceImageLabelerWithOptions:options];

FIRVisionFaceDetector *detector = [[FIRVision vision] faceDetectorWithOptions:options];

FIRAutoMLLocalModel *localModel =
    [[FIRAutoMLLocalModel alloc] initWithManifestPath:@"automl/manifest.json"];
FIRVisionOnDeviceAutoMLImageLabelerOptions *options =
    [[FIRVisionOnDeviceAutoMLImageLabelerOptions alloc] initWithLocalModel:localModel];
options.confidenceThreshold = 0.75;
FIRVisionImageLabeler *labeler = [[FIRVision vision] onDeviceAutoMLImageLabelerWithOptions:options];

FIRVisionObjectDetector *detector = [[FIRVision vision] objectDetector];
```
New
```objective-c
MLKImageLabelerOptions *options = [[MLKImageLabelerOptions alloc] init];
options.confidenceThreshold = @(0.75);
MLKImageLabeler *labeler = [MLKImageLabeler imageLabelerWithOptions:options];

MLKFaceDetector *detector = [MLKFaceDetector faceDetectorWithOptions:options];

MLKLocalModel *localModel =
    [[MLKLocalModel alloc] initWithManifestPath:@"automl/manifest.json"];
MLKCustomImageLabelerOptions *options =
    [[MLKCustomImageLabelerOptions alloc] initWithLocalModel:localModel];
options.confidenceThreshold = @(0.75);
MLKImageLabeler *labeler = [MLKImageLabeler imageLabelerWithOptions:options];

MLKObjectDetectorOptions *options = [[MLKObjectDetectorOptions alloc] init];
MLKObjectDetector *detector = [MLKObjectDetector objectDetectorWithOptions:options];
```
API-specific changes
Object detection and tracking
If your app uses object classification, be aware that the new SDK changes the way the classification category of detected objects is returned: `VisionObjectCategory` in `VisionObject` is returned as `text` in an `ObjectLabel` object, instead of an integer. All possible string categories are included in the `DetectedObjectLabel` enum.
Note that the `.unknown` category has been removed. When the confidence of classifying an object is low, the classifier returns no label at all.
Here is an example of the old and new Swift code:
Old
```swift
if (object.classificationCategory == .food) { ... }
```
New
```swift
if let label = object.labels.first {
  if (label.text == DetectedObjectLabel.food.rawValue) { ... }
}
// or
if let label = object.labels.first {
  if (label.index == DetectedObjectLabelIndex.food.rawValue) { ... }
}
```
Here is an example of the old and new Objective-C code:
Old
```objective-c
if (object.classificationCategory == FIRVisionObjectCategoryFood) { ... }
```
New
```objective-c
if ([object.labels[0].text isEqualToString:MLKDetectedObjectLabelFood]) { ... }
// or
if (object.labels[0].index == MLKDetectedObjectLabelIndexFood) { ... }
```
Remove Firebase dependencies (Optional)
This step only applies when these conditions are met:
- Firebase ML Kit is the only Firebase component you use
- You only use on-device APIs
- You don't use model serving
If this is the case, you can remove Firebase dependencies after migration. Follow these steps:
- Remove the Firebase configuration file by deleting the GoogleService-Info.plist file from your app’s directory and your Xcode project.
- Remove any Firebase CocoaPod, such as `pod 'Firebase/Analytics'`, from your Podfile.
- Remove any FirebaseApp initialization, such as `FirebaseApp.configure()`, from your code.
- Delete your Firebase app in the Firebase console according to the instructions on the Firebase support site.
Getting Help
If you run into any issues, please check out our Community page where we outline the channels available for getting in touch with us.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-12-05 UTC.