Earth Engine provides ee.Model as a connector to models hosted on Vertex AI.
Earth Engine sends image or table data as online prediction requests to a trained model deployed on a Vertex AI endpoint. The model outputs are then available as Earth Engine images or tables.
TensorFlow Models
TensorFlow is an open-source machine learning (ML) platform that supports advanced ML methods such as deep learning. The Earth Engine API provides methods for importing and exporting imagery, training data, and test data in TFRecord format. See the ML examples page for demonstrations that use TensorFlow with data from Earth Engine, and the TFRecord page for details about how Earth Engine writes data to TFRecord files.
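As an illustration of the export path, the sketch below writes image patches to TFRecord files in Cloud Storage with the Earth Engine Python API; the collection, bands, region, and bucket name are placeholder assumptions, not values from this guide.

```python
import ee

ee.Initialize()

# Assumed inputs: a Sentinel-2 median composite and a small export region (placeholders).
image = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED').median().select(['B4', 'B3', 'B2'])
region = ee.Geometry.Rectangle([-122.6, 37.6, -122.3, 37.9])

# Export patches in TFRecord format; patchDimensions controls the patch size.
task = ee.batch.Export.image.toCloudStorage(
    image=image,
    description='tfrecord_patches_demo',
    bucket='your-bucket',              # placeholder bucket name
    fileNamePrefix='demo/patches',
    region=region,
    scale=10,
    fileFormat='TFRecord',
    formatOptions={'patchDimensions': [256, 256], 'compressed': True},
)
task.start()
```

The export produces TFRecord shards plus a JSON "mixer" sidecar that describes the georeferencing of the patches, which is needed when re-importing predictions.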
ee.Model
The ee.Model package handles interaction with hosted machine learning models.
Hosted Models on Vertex AI
A new ee.Model instance can be created with ee.Model.fromVertexAi. This is an ee.Model object that packages Earth Engine data into tensors, forwards them as predict requests to Vertex AI, then reassembles the responses into Earth Engine images or tables.
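As a minimal sketch of what that looks like in the Python API (the endpoint path, tile size, projection, and band names below are placeholder assumptions):

```python
import ee

ee.Initialize()

# Placeholder path of a model endpoint already deployed on Vertex AI.
ENDPOINT = 'projects/your-project/locations/us-central1/endpoints/1234567890'

model = ee.Model.fromVertexAi(
    endpoint=ENDPOINT,
    inputTileSize=[8, 8],  # pixels sent to the model per request tile
    proj=ee.Projection('EPSG:4326').atScale(10),
    fixInputProj=True,
    outputBands={'output': {'type': ee.PixelType.float(), 'dimensions': 1}},
)

# Image prediction: band names and order must match what the model was trained on.
inputs = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED').median().select(['B4', 'B3', 'B2'])
predictions = model.predictImage(inputs.float())

# For table data, model.predictProperties() plays the same role for a FeatureCollection.
```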
Earth Engine supports TensorFlow (for example, the SavedModel format), PyTorch, and AutoML models. To prepare a model for hosting, save it, import it to Vertex AI, then deploy the model to an endpoint.
Input Formats
To interact with Earth Engine, a hosted model's inputs and outputs must be compatible with a supported interchange format. The default is the TensorProto interchange format, specifically TensorProtos serialized in base64 (see the Vertex AI base64 reference). This can be done programmatically, as shown on the ML examples page, after training and before saving, or by loading the model, adding the input and output transformations, and re-saving it. Other supported payload formats include JSON with RAW_JSON and multi-dimensional arrays with ND_ARRAYS. See the payload format documentation for more details.
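One way to add those transformations to a TensorFlow model is to wrap the trained model with small serialization layers before saving. The sketch below assumes a Keras model whose inputs are named float32 tensors; it is illustrative only, not the exact code from the ML examples.

```python
import tensorflow as tf

class DeserializeInput(tf.keras.layers.Layer):
  """Decodes base64 strings into float32 tensors, one entry per named input."""

  def call(self, inputs_dict):
    return {
        name: tf.map_fn(
            lambda x: tf.io.parse_tensor(x, tf.float32),
            tf.io.decode_base64(value),
            fn_output_signature=tf.float32)
        for name, value in inputs_dict.items()
    }

class ReserializeOutput(tf.keras.layers.Layer):
  """Serializes the prediction tensor back into base64 strings."""

  def call(self, outputs):
    return tf.map_fn(
        lambda x: tf.io.encode_base64(tf.io.serialize_tensor(x)),
        outputs,
        fn_output_signature=tf.string)

# `trained_model` is an assumed, already-trained Keras model with named inputs.
# input_names = [...]  # the band/tensor names the model expects
# inputs = {name: tf.keras.Input(shape=[], dtype=tf.string, name=name)
#           for name in input_names}
# wrapped = tf.keras.Model(
#     inputs, ReserializeOutput()(trained_model(DeserializeInput()(inputs))))
# wrapped.save('gs://your-bucket/wrapped_model')  # placeholder path
```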
Endpoint IAM Permissions
To use a model with ee.Model.fromVertexAi(), you must have sufficient permissions to use the model. Specifically, you, and anyone else who uses the model, need at least the Vertex AI User role on the Cloud project where the model is hosted. You control permissions for your Cloud project with Identity and Access Management (IAM) controls.
Regions
When deploying your model to an endpoint, you need to specify which region to deploy to. The us-central1 region is recommended, since it will likely perform best due to its proximity to the Earth Engine servers, but almost any region will work. See the Vertex AI locations documentation for details about Vertex AI regions and which features each one supports.
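For illustration, a deployment with the Vertex AI Python SDK might pin the region when initializing the client; the project ID, artifact path, serving container, and machine type below are assumptions for the sketch.

```python
from google.cloud import aiplatform

# Placeholder project ID; the region is set once here for the client.
aiplatform.init(project='your-project', location='us-central1')

model = aiplatform.Model.upload(
    display_name='ee-hosted-model',
    artifact_uri='gs://your-bucket/wrapped_model',  # saved, wrapped model directory
    serving_container_image_uri=(
        'us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest'),  # example prebuilt container
)

endpoint = model.deploy(machine_type='n1-standard-4')

# The endpoint resource name is what ee.Model.fromVertexAi() expects.
print(endpoint.resource_name)
```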
If you are migrating from AI Platform, note that Vertex AI does not have a global endpoint, and ee.Model.fromVertexAi() does not have a region parameter.
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["필요한 정보가 없음","missingTheInformationINeed","thumb-down"],["너무 복잡함/단계 수가 너무 많음","tooComplicatedTooManySteps","thumb-down"],["오래됨","outOfDate","thumb-down"],["번역 문제","translationIssue","thumb-down"],["샘플/코드 문제","samplesCodeIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2024-10-01(UTC)"],[[["\u003cp\u003eEarth Engine can connect to models hosted on Vertex AI using \u003ccode\u003eee.Model\u003c/code\u003e, enabling online prediction requests with Earth Engine data.\u003c/p\u003e\n"],["\u003cp\u003eSupported model types include TensorFlow, PyTorch, and AutoML, with inputs and outputs needing to be compatible with the specified interchange format.\u003c/p\u003e\n"],["\u003cp\u003eUsers need Vertex AI user role permissions for the Cloud Project to utilize the model with \u003ccode\u003eee.Model.fromVertexAi()\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eWhile \u003ccode\u003eus-central1\u003c/code\u003e region is recommended for optimal performance, other regions are supported for model deployment on Vertex AI endpoints.\u003c/p\u003e\n"],["\u003cp\u003eAssociated costs for Vertex AI, Cloud Storage, and Earth Engine should be considered, with detailed pricing information available on respective pages.\u003c/p\u003e\n"]]],[],null,["# Predictions from Hosted Models\n\nEarth Engine provides `ee.Model` as a connector to models hosted on\n[Vertex AI](https://cloud.google.com/vertex-ai/docs/start/introduction-unified-platform).\nEarth Engine will send image or table data as online prediction requests to a\ntrained model deployed on a Vertex AI endpoint. The model outputs are then\navailable as Earth Engine images or tables.\n\nTensorFlow Models\n-----------------\n\n[TensorFlow](https://www.tensorflow.org/) is an open source machine learning\n(ML) platform that supports advanced ML methods such as deep learning. The Earth\nEngine API provides methods for importing and or exporting imagery, training and\ntesting data in TFRecord format. See the\n[ML examples page](/earth-engine/guides/ml_examples) for demonstrations that use\nTensorFlow with data from Earth Engine. See the\n[TFRecord page](/earth-engine/guides/tfrecord) for details about how Earth\nEngine writes data to TFRecord files.\n\n`ee.Model`\n----------\n\nThe `ee.Model` package handles interaction with hosted machine learning models.\n\n### Hosted Models on Vertex AI\n\nA new `ee.Model` instance can be created with\n[ee.Model.fromVertexAi](/earth-engine/apidocs/ee-model-fromvertexai). This is an\n`ee.Model` object that packages Earth Engine data into tensors, forwards them as\npredict requests to [Vertex AI](https://cloud.google.com/vertex-ai) then\nreassembles the responses into Earth Engine.\n\nEarth Engine supports TensorFlow (e.g. a\n[SavedModel](https://www.tensorflow.org/guide/saved_model#save_and_restore_models)\nformat), PyTorch, and AutoML models. To prepare a model for hosting,\n[save it](https://cloud.google.com/vertex-ai/docs/training/exporting-model-artifacts),\n[import it to Vertex AI](https://cloud.google.com/vertex-ai/docs/model-registry/import-model),\nthen\n[deploy the model to an endpoint](https://cloud.google.com/vertex-ai/docs/predictions/get-predictions#deploy_a_model_to_an_endpoint).\n\n### Input Formats\n\nTo interact with Earth Engine, a hosted model's inputs and outputs need to be\ncompatible with a supported interchange format. 
The default is the TensorProto\ninterchange format, specifically serialized TensorProtos in base64\n([reference](https://cloud.google.com/vertex-ai/docs/general/base64)). This can\nbe done programmatically, as shown on the\n[ML examples page](/earth-engine/guides/ml_examples), after training and before\nsaving, or by loading, adding the input and output transformation, and\nre-saving. Other supported payload formats include\nJSON with `RAW_JSON` and multi-dimensional arrays with `ND_ARRAYS`. See our\n[payload format documentation](/earth-engine/guides/ee-vertex-payload-formats)\nfor more details.\n\n### Endpoint IAM Permissions\n\nTo use a model with `ee.Model.fromVertexAi()`, you must have sufficient\npermissions to use the model. Specifically, you (or anyone who uses the model)\nneeds at least the\n[Vertex AI user role](https://cloud.google.com/vertex-ai/docs/general/access-control#aiplatform.user)\nfor the Cloud Project where the model is hosted. You control permissions for\nyour Cloud Project using\n[Identify and Access Management (IAM)](https://cloud.google.com/iam) controls.\n\n### Regions\n\nWhen deploying your model to an endpoint, you will need to specify which region\nto deploy to. The `us-central1` region is recommended since it will likely\nperform best due to proximity to Earth Engine servers, but almost any region\nwill work. See the\n[Vertex AI location docs](https://cloud.google.com/vertex-ai/docs/general/locations)\nfor details about Vertex AI regions and what features each one supports.\n\nIf you are migrating from AI Platform, note that Vertex AI does not have a\nglobal endpoint, and `ee.Model.fromVertexAi()` does not have a `region`\nparameter.\n\n### Costs\n\n| **Warning:** These guides use billable components of Google Cloud.\n\nFor detailed information on costs, see each product's associated pricing page.\n\n- Vertex AI ([pricing](https://cloud.google.com/vertex-ai/pricing))\n- Cloud Storage ([pricing](https://cloud.google.com/storage/pricing))\n- Earth Engine ([pricing (commercial)](https://earthengine.google.com/commercial))\n\nYou can use the\n[Pricing Calculator](https://cloud.google.com/products/calculator) to generate a\ncost estimate based on your projected usage.\n\n### Further Reading\n\nFor more details on how to use a hosted model with Earth Engine see our\n[Image Prediction page](/earth-engine/guides/ee-vertex-image-predictions) for\nimage prediction, or our\n[Properties Prediction page](/earth-engine/guides/ee-vertex-property-predictions)"]]