Predictions using TensorFlow and Cloud AI Platform

TensorFlow is an open source ML platform that supports advanced ML methods such as deep learning. This page describes TensorFlow-specific features in Earth Engine. Although TensorFlow models are developed and trained outside Earth Engine, the Earth Engine API provides methods for exporting training and testing data in TFRecord format and for importing/exporting imagery in TFRecord format. See the TensorFlow examples page for more information about how to develop pipelines for using TensorFlow with data from Earth Engine. See the TFRecord page to learn more about how Earth Engine writes data to TFRecord files.
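As a minimal sketch of the export step using the Python API (the image asset, the feature collection of labeled points, the landcover property, and the Cloud Storage bucket below are illustrative placeholders, not part of this page):

import ee

ee.Initialize()

# Bands from a Landsat 8 TOA image; the asset ID and band list are placeholders.
image = ee.Image('LANDSAT/LC08/C02/T1_TOA/LC08_044034_20140318').select(
    ['B2', 'B3', 'B4', 'B5'])

# A feature collection of labeled points with a 'landcover' property (placeholder asset).
labels = ee.FeatureCollection('projects/your-project/assets/training-points')

# Sample the image at the labeled points to build training examples.
training = image.sampleRegions(collection=labels, properties=['landcover'], scale=30)

# Export the samples to Cloud Storage as TFRecord for use in a TensorFlow input pipeline.
task = ee.batch.Export.table.toCloudStorage(
    collection=training,
    description='demo_training_export',
    bucket='your-bucket',
    fileNamePrefix='training/demo',
    fileFormat='TFRecord',
    selectors=['B2', 'B3', 'B4', 'B5', 'landcover'])
task.start()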

ee.Model

The ee.Model package handles interaction with TensorFlow backed machine learning models.

Interacting with models hosted on AI Platform

A new ee.Model instance can be created with ee.Model.fromAiPlatformPredictor(). This is an ee.Model object that packages Earth Engine data into tensors, forwards them as predict requests to Google AI Platform, then automatically reassembles the responses into Earth Engine data types. Note that, depending on the size and complexity of your model and its inputs, you may wish to adjust the minimum number of nodes for your AI Platform model to accommodate a high volume of predictions.
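A minimal sketch of creating such a model with the Python API is shown below. The project, model, and version names are placeholders, and the tile size, projection, and output band description are assumptions that must match how your particular model was trained and how its outputs were named when it was prepared:

import ee

ee.Initialize()

# Connect to a model hosted on AI Platform (all names are placeholders).
model = ee.Model.fromAiPlatformPredictor(
    projectName='your-project',
    modelName='your-model',
    version='v1',
    region='us-central1',
    # Small tiles keep each predict request within AI Platform payload limits.
    inputTileSize=[8, 8],
    # Request inputs in the projection and scale the model was trained at.
    proj=ee.Projection('EPSG:4326').atScale(30),
    fixInputProj=True,
    # Describe the model's output tensor so Earth Engine can rebuild an image.
    outputBands={'output': {'type': ee.PixelType.float(), 'dimensions': 1}})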

Earth Engine requires AI Platform models to use TensorFlow's SavedModel format. Before a hosted model can interact with Earth Engine, its inputs and outputs need to be compatible with the TensorProto interchange format, specifically serialized TensorProtos in base64. To make this easier, the Earth Engine CLI provides the model prepare command, which wraps an existing SavedModel in the operations required to convert between these input/output formats.

To use a model with ee.Model.fromAiPlatformPredictor(), you must have sufficient permissions to use the model. Specifically, you (and anyone else who uses the model) need at least the ML Engine Model User role. You can inspect and set model permissions from the models page of the Cloud Console.

Regions

You should use regional endpoints for your models, specifying the region at model creation, at version creation, and in ee.Model.fromAiPlatformPredictor(). Any region will work (don't use global), but us-central1 is preferred. Don't specify the REGIONS parameter. If you are creating a model from the Cloud Console, ensure that the regional box is checked.

Costs

Image Predictions

Use model.predictImage() to make predictions on an ee.Image using a hosted model. The return type of predictImage() is an ee.Image which can be added to the map, used in other computations, exported, etc. Earth Engine will automatically tile the input bands and adjust the output projection for scale changes and overtiling as needed (see the TFRecord page for more information on how tiling works). Note that Earth Engine always forwards 3D tensors to your model, even when bands are scalar (the last dimension will be 1).
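A minimal sketch of running image predictions with the Python API, assuming the model object created above and an input image whose bands match the model's training data (the band names and the output band name are placeholders):

# 'image' is a placeholder input whose bands must match the model's training
# data; toArray() packs the bands into a single array-valued pixel.
input_image = image.select(['B2', 'B3', 'B4', 'B5']).float()

# predictImage() returns an array-valued ee.Image. arrayFlatten() unpacks the
# model's 1-D output (assumed here to be a single value) into a named band.
predictions = model.predictImage(input_image.toArray())
predicted = predictions.arrayFlatten([['prediction']])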

Nearly all convolutional models will have a fixed input projection (that of the data on which the model was trained). In this case, set the fixInputProj parameter to true in your call to ee.Model.fromAiPlatformPredictor(). When visualizing predictions from a model with a fixed input projection, use caution when zooming out to a large spatial extent: doing so can result in requests for too much data and may manifest as slowdowns or rejections by AI Platform.