Get started with MedGemma
You can get started in four ways:
Run it locally
Download the MedGemma model you are interested in from [Hugging Face](https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4) and run it locally.
This is the recommended option if you want to experiment with MedGemma and don't need to handle a high volume of data. Note that running the 27B model without quantization requires [Colab Enterprise](https://cloud.google.com/colab/docs/introduction). Our GitHub repository includes a [notebook](https://github.com/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb) that you can use to explore the model.
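For reference, here is a minimal sketch of what local inference can look like with the Transformers `pipeline` API. The `google/medgemma-4b-it` checkpoint and the chat message format are assumptions based on the release collection; check the model card and the notebook above for exact, up-to-date usage.

```python
# A minimal local-inference sketch using the Transformers pipeline API.
# Assumptions: the "google/medgemma-4b-it" checkpoint and the chat message
# format shown on its Hugging Face model card.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",          # MedGemma 4B is multimodal; text-only prompts also work
    model="google/medgemma-4b-it",
    torch_dtype=torch.bfloat16,
    device="cuda",
)

messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful medical assistant."}]},
    {"role": "user", "content": [{"type": "text", "text": "How do you differentiate bacterial from viral pneumonia?"}]},
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```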
Deploy your own online service
MedGemma can be deployed as a highly available and scalable HTTPS endpoint on [Vertex AI](https://cloud.google.com/vertex-ai). The easiest way is through [Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/medgemma).
This option is ideal for production-grade online applications that require low latency, high scalability, and high availability. Refer to [Vertex AI's service level agreement (SLA)](https://cloud.google.com/vertex-ai/sla) and [pricing model](https://cloud.google.com/vertex-ai/pricing) for online predictions.
A sample [notebook](https://github.com/google-health/medgemma/blob/main/notebooks/quick_start_with_model_garden.ipynb) is available to help you get started quickly.
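Once an endpoint is up, calling it can look roughly like the sketch below, using the Vertex AI Python SDK. The project ID, region, endpoint ID, and instance schema are placeholders; the exact request format depends on the serving container selected at deployment.

```python
# A rough sketch of calling a MedGemma endpoint deployed from Model Garden.
# Project, region, endpoint ID, and the instance schema are placeholders;
# the request format depends on the serving container you deploy with.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

endpoint = aiplatform.Endpoint("your-endpoint-id")  # numeric ID from the Vertex AI console

response = endpoint.predict(
    instances=[{"prompt": "List common contraindications for MRI.", "max_tokens": 256}]
)
print(response.predictions)
```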
Fine-tune MedGemma
MedGemma can be fine-tuned on your own medical data to optimize its performance for your use case. A sample [notebook](https://github.com/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb) is available that you can adapt to your data.
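As a rough illustration of the approach in that notebook, a parameter-efficient (LoRA) supervised fine-tuning run with Hugging Face TRL might look like the following sketch. The model ID, dataset file, and hyperparameters are placeholder assumptions to adapt to your data and hardware.

```python
# A rough LoRA fine-tuning sketch with Hugging Face TRL, in the spirit of the
# sample notebook. The model ID, dataset file (assumed to hold chat-formatted
# examples in a "messages" field), and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="my_medical_chats.jsonl", split="train")

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="google/medgemma-4b-it",  # assumed checkpoint; pick the variant you need
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="medgemma-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```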
Launch a batch job
For larger datasets in a batch workflow, it's best to launch a [Vertex AI batch prediction job](https://cloud.google.com/vertex-ai/docs/predictions/get-batch-predictions#request_a_batch_prediction). Note that [Vertex AI's SLA](https://cloud.google.com/vertex-ai/sla) and [pricing model](https://cloud.google.com/vertex-ai/pricing) are different for batch prediction jobs.
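Submitting such a job with the Vertex AI Python SDK might look roughly like this sketch. The model resource, bucket paths, input JSONL schema, and machine configuration are placeholders; consult the batch prediction docs for the exact request format.

```python
# A rough batch-prediction sketch with the Vertex AI SDK. The model resource,
# bucket paths, input JSONL schema, and machine configuration are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

model = aiplatform.Model("your-model-resource-id")  # MedGemma uploaded via Model Garden

job = model.batch_predict(
    job_display_name="medgemma-batch",
    gcs_source="gs://your-bucket/prompts.jsonl",       # one JSON instance per line
    gcs_destination_prefix="gs://your-bucket/output/",
    machine_type="g2-standard-24",                     # assumed GPU machine type
    accelerator_type="NVIDIA_L4",
    accelerator_count=2,
)
print(job.state)  # blocks until completion by default, then reports job state
```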
Contact

You can reach out in several ways:

- Start or join a conversation on [GitHub Discussions](https://github.com/google-health/medgemma/discussions).
- File a feature request or bug at [GitHub Issues](https://github.com/google-health/medgemma/issues).
- Send us feedback at `hai-def@google.com`.