Get started with TxGemma
You can get started in three ways:
Run it locally
Select which TxGemma variant you want to use, download it from Hugging Face, and
run it locally.

This is the recommended option if you want to experiment with the model and
don't need to handle a high volume of data. Our GitHub repository includes a
notebook
that you can use to get started with TxGemma.
We have two other notebooks:

- Finetune with Hugging Face, which shows how you can finetune TxGemma.
- Agentic-Tx Demo, which has an example agentic implementation of TxGemma. You'll need a Gemini API key to run this.
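Before running locally, it helps to see what a request to the model looks like. TxGemma's predict variants expect structured, instruction-style prompts (based on TDC task templates). The sketch below builds an illustrative prompt and shows, in comments, how generation would look with the Hugging Face `transformers` library; the prompt layout and the model ID `google/txgemma-2b-predict` are assumptions here, so check the model card and the quickstart notebook for the exact templates and names.

```python
def build_prompt(instruction: str, context: str, question: str) -> str:
    """Assemble an instruction-style prompt for a TxGemma predict variant.

    This layout is an illustrative assumption, not the official TDC template.
    """
    return (
        f"Instructions: {instruction}\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )


prompt = build_prompt(
    "Answer the following question about drug properties.",
    "Drug SMILES: CC(=O)Oc1ccccc1C(=O)O",
    "Does this compound cross the blood-brain barrier? (A) Yes (B) No",
)
print(prompt)

# With `transformers` installed and enough memory, generation would look
# roughly like this (model ID is an assumption -- check the Hugging Face
# collection for the variants actually published):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("google/txgemma-2b-predict")
#   model = AutoModelForCausalLM.from_pretrained(
#       "google/txgemma-2b-predict", device_map="auto")
#   ids = tok(prompt, return_tensors="pt").to(model.device)
#   out = model.generate(**ids, max_new_tokens=8)
#   print(tok.decode(out[0], skip_special_tokens=True))
```

The quickstart notebook linked above uses the real prompt templates shipped with the model, so prefer those over this hand-rolled format for actual experiments.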
Deploy your own online service
TxGemma can be deployed as a highly available and scalable HTTPS endpoint on
Vertex AI. The easiest way is through
Model Garden.
This option is ideal for production-grade online applications with low-latency,
high-scalability, and high-availability requirements. Refer to
Vertex AI's service level agreement (SLA)
and pricing model for online
predictions.
A sample notebook
is available to help you get started quickly.
Launch a batch job
For larger datasets in a batch workflow, it's best to launch TxGemma as a
Vertex AI batch prediction job.
Note that Vertex AI's SLA and
pricing model are different for
batch prediction jobs.
Refer to the "Get batch predictions" section in the
Quick start with Model Garden
notebook to get started.
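Batch prediction jobs typically read their inputs from Cloud Storage, commonly as a JSONL file with one request per line. The sketch below writes such a file locally and shows, in comments, how the job launch might look; the `prompt` field name, model name, and bucket paths are assumptions, so match them to the schema used in the Model Garden quickstart notebook.

```python
import json
import pathlib
import tempfile


def write_jsonl(prompts, path):
    """Write one JSON request object per line -- the usual batch input format.

    The "prompt" field name is an assumption; check the quickstart notebook.
    """
    with open(path, "w") as f:
        for p in prompts:
            f.write(json.dumps({"prompt": p}) + "\n")
    return path


path = pathlib.Path(tempfile.mkdtemp()) / "txgemma_batch_input.jsonl"
write_jsonl(["Example question 1", "Example question 2"], path)
print(path.read_text())

# After uploading the file to Cloud Storage, launch the job (all names and
# URIs below are placeholders):
#
#   from google.cloud import aiplatform
#   aiplatform.init(project="your-project", location="us-central1")
#   job = aiplatform.BatchPredictionJob.create(
#       model_name="MODEL_RESOURCE_NAME",
#       job_display_name="txgemma-batch",
#       gcs_source="gs://your-bucket/txgemma_batch_input.jsonl",
#       gcs_destination_prefix="gs://your-bucket/output/",
#   )
```

Because batch jobs are asynchronous, results land in the output prefix when the job completes rather than being returned inline.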
Contact

You can reach out in several ways:

- File a feature request or bug report at GitHub Issues.
- Send us feedback at hai-def@google.com.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-03-25 UTC.