Get started with CXR Foundation
You can get started in 4 ways:
Run it locally
--------------

Download the model
[from Hugging Face](https://huggingface.co/google/cxr-foundation) and run it
locally.
This is the recommended option if you want to experiment with the model and
don't need to handle a high volume of data. Our GitHub repository includes a
[notebook](https://github.com/google-health/cxr-foundation/blob/master/notebooks/quick_start_with_hugging_face.ipynb)
that you can use to explore the model.
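As a rough illustration of the first step, the snippet below fetches the model snapshot from Hugging Face with `huggingface_hub`. This is a sketch only: the repo ID comes from the link above, but the cache location and how you load the downloaded files afterwards are assumptions, so follow the quick-start notebook for the authoritative workflow.

```python
# Sketch: fetch the CXR Foundation snapshot from Hugging Face for local use.
# `snapshot_download` is the real huggingface_hub API; the cache directory
# choice and what you do with the files afterwards are assumptions -- see
# the quick-start notebook for the supported loading steps.
from pathlib import Path


def model_cache_dir(base: str = "~/.cache/cxr-foundation") -> str:
    """Return the local directory where the snapshot will be stored."""
    return str(Path(base).expanduser())


def download_model(local_dir: str) -> str:
    """Download google/cxr-foundation (requires network and, if the repo is
    gated, a Hugging Face login via `huggingface-cli login`)."""
    from huggingface_hub import snapshot_download  # pip install huggingface_hub

    return snapshot_download(repo_id="google/cxr-foundation",
                             local_dir=local_dir)


if __name__ == "__main__":
    print(download_model(model_cache_dir()))
```

The download call is kept behind `__main__` so the helpers can be imported and reused without triggering a network fetch.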
Deploy your own online service
------------------------------

CXR Foundation can be deployed as a highly available and scalable HTTPS endpoint
on [Vertex AI](https://cloud.google.com/vertex-ai). The easiest way is through
[Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/cxr-foundation).
This option is ideal for production-grade online applications with low latency,
high scalability, and high availability requirements. Refer to
[Vertex AI's service level agreement (SLA)](https://cloud.google.com/vertex-ai/sla)
and [pricing model](https://cloud.google.com/vertex-ai/pricing) for online
predictions.
Read the
[API specification](/health-ai-developer-foundations/cxr-foundation/serving-api)
to learn how to create online clients that interact with the service. A sample
[notebook](https://github.com/google-health/cxr-foundation/blob/master/notebooks/quick_start_with_model_garden.ipynb)
is available to help you get started quickly.
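A minimal online client can be sketched with the Vertex AI Python SDK. `aiplatform.Endpoint.predict` is the real SDK call, but the instance field name `input_bytes` and the response shape are assumptions here; the API specification above is authoritative.

```python
# Sketch of an online client for an already-deployed CXR Foundation endpoint.
# The instance schema ({"input_bytes": <base64 image>}) is an assumption --
# consult the API specification for the actual request format.
import base64


def make_instance(image_bytes: bytes) -> dict:
    """Wrap raw image bytes as one prediction instance (assumed schema)."""
    return {"input_bytes": base64.b64encode(image_bytes).decode("utf-8")}


def predict(project: str, location: str, endpoint_id: str, image_bytes: bytes):
    """Send one image to the endpoint (requires GCP credentials)."""
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform

    aiplatform.init(project=project, location=location)
    endpoint = aiplatform.Endpoint(endpoint_id)
    return endpoint.predict(instances=[make_instance(image_bytes)])
```

Keeping the payload construction in its own helper makes it easy to swap in the correct schema once you have confirmed it against the specification.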
For custom requirements, you can also adapt our
[model serving implementation](https://github.com/google-health/cxr-foundation/tree/master/python/serving)
and host it yourself on any API management system.
Launch a batch job
------------------

For larger datasets in a batch workflow, it's best to launch a
[Vertex AI batch prediction job](https://cloud.google.com/vertex-ai/docs/predictions/get-batch-predictions#request_a_batch_prediction).
Note that [Vertex AI's SLA](https://cloud.google.com/vertex-ai/sla) and
[pricing model](https://cloud.google.com/vertex-ai/pricing) are different for
batch prediction jobs.
Refer to the "Get batch predictions" section in the
[Quick start with Model Garden notebook](https://github.com/google-health/cxr-foundation/blob/master/notebooks/quick_start_with_model_garden.ipynb)
to get started.
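The batch flow above can be sketched with the Vertex AI SDK as well. `BatchPredictionJob.create` is the real SDK call; the JSONL line schema (`gcs_uri` per image) is an assumption, so defer to the "Get batch predictions" notebook section for the actual input format.

```python
# Sketch: launch a batch prediction job over images already in Cloud Storage.
# The per-line instance schema ({"gcs_uri": ...}) is an assumption -- the
# Model Garden quick-start notebook documents the real format.
import json


def to_jsonl(gcs_uris: list) -> str:
    """Serialize one assumed instance per line for the batch input file."""
    return "\n".join(json.dumps({"gcs_uri": uri}) for uri in gcs_uris)


def launch_batch_job(project: str, location: str, model_name: str,
                     input_uri: str, output_prefix: str):
    """Create the batch prediction job (requires GCP credentials)."""
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform

    aiplatform.init(project=project, location=location)
    return aiplatform.BatchPredictionJob.create(
        job_display_name="cxr-foundation-batch",
        model_name=model_name,
        gcs_source=input_uri,          # gs://... path to the JSONL input
        gcs_destination_prefix=output_prefix,
        instances_format="jsonl",
        predictions_format="jsonl",
    )
```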
Try out our online service
--------------------------

You can test out the online service before committing to deploying your own
using our
[research endpoint](/health-ai-developer-foundations/model-serving/research-endpoints).

This endpoint is for research purposes only.
Contact
-------

You can reach out in several ways:

- Start or join a conversation on [GitHub Discussions](https://github.com/google-health/cxr-foundation/discussions).
- File a Feature Request or Bug at [GitHub Issues](https://github.com/google-health/cxr-foundation/issues).
- Send us feedback at `hai-def@google.com`.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-02-11 UTC.