Get started with TxGemma

  • TxGemma can be run locally for experimentation or deployed as an online service on Vertex AI for production applications.

  • For large datasets, TxGemma can be used with Vertex AI batch prediction jobs.

  • Additional resources include notebooks for quickstarts, finetuning, and an agentic demo.

  • Support and feedback can be provided via GitHub Issues or email.

You can get started in three ways:

Run it locally

Select the TxGemma variant you want to use, download it from Hugging Face, and run it locally.

This is the recommended option if you want to experiment with the model and don't need to handle a high volume of data. Our GitHub repository includes a quickstart notebook that you can use to get started with TxGemma.
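
Before opening the notebook, the sketch below shows one way to load and query a TxGemma variant with Hugging Face transformers. The model ID, prompt, and generation settings are illustrative assumptions; substitute the variant you selected and the prompt format from its model card.

```python
# Minimal sketch: run a TxGemma variant locally with Hugging Face transformers.
# Assumptions: the "google/txgemma-2b-predict" variant, `transformers` and
# `accelerate` installed, and a Hugging Face token with access to the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/txgemma-2b-predict"  # assumed variant; swap in your choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only; follow the task prompt format from the model card.
prompt = "Instructions: Answer the following question about drug properties.\nQuestion: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```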

We have two other notebooks:

  • A notebook for finetuning TxGemma.

  • A demo of an agentic workflow powered by TxGemma.

Deploy your own online service

TxGemma can be deployed as a highly available and scalable HTTPS endpoint on Vertex AI. The easiest way is through Model Garden.

This option is ideal for production-grade online applications with low-latency, high-scalability, and high-availability requirements. Refer to Vertex AI's service level agreement (SLA) and pricing model for online predictions.

A sample notebook is available to help you get started quickly.
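
As a rough sketch of what calling such an endpoint looks like, the snippet below queries an already-deployed endpoint with the Vertex AI Python SDK. The project, region, endpoint ID, and request payload shape are placeholders and assumptions; the exact instance schema depends on the serving container you deploy from Model Garden.

```python
# Minimal sketch: query a TxGemma endpoint deployed via Model Garden.
# Requires `pip install google-cloud-aiplatform`; all IDs are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Full resource name or numeric ID of your deployed endpoint.
endpoint = aiplatform.Endpoint("YOUR_ENDPOINT_ID")

# The instance fields below are an assumed payload shape; match them to
# the schema expected by your deployed serving container.
response = endpoint.predict(
    instances=[{"prompt": "Instructions: ...", "max_tokens": 8}]
)
print(response.predictions)
```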

Launch a batch job

For larger datasets in a batch workflow, it's best to run TxGemma as a Vertex AI batch prediction job. Note that Vertex AI's SLA and pricing model differ for batch prediction jobs.

Refer to the "Get batch predictions" section in the Quick start with Model Garden notebook to get started.
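
For orientation, the sketch below launches a batch prediction job with the Vertex AI Python SDK against a TxGemma model already uploaded to your project. The model resource name, bucket paths, and machine configuration are placeholder assumptions; the notebook covers the exact setup.

```python
# Minimal sketch: launch a Vertex AI batch prediction job for TxGemma.
# Assumes a JSONL file of prompts on Cloud Storage; all names are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

model = aiplatform.Model("YOUR_MODEL_ID")  # model uploaded to Vertex AI

job = model.batch_predict(
    job_display_name="txgemma-batch-demo",
    gcs_source="gs://your-bucket/prompts.jsonl",        # one instance per line
    gcs_destination_prefix="gs://your-bucket/output/",
    machine_type="g2-standard-24",   # assumed machine type; size to your variant
    accelerator_type="NVIDIA_L4",    # assumption; match your deployment
    accelerator_count=2,
)
# batch_predict blocks until the job finishes (sync=True by default).
print(job.output_info)
```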

Contact

You can reach out in several ways:

  • File an issue in our GitHub repository.

  • Send us an email.