July 30, 2025 Introducing LangExtract: A Gemini-Powered Information Extraction Library
LangExtract is an open-source Python library that uses large language models (LLMs) to reliably and programmatically extract structured information from unstructured text. It is designed to address the challenges of manually processing large volumes of documents such as clinical notes or legal reports. We showcase radiology use cases using Gemini. Read More
July 09, 2025 Expanding the MedGemma Collection with 27B Multimodal and MedSigLIP
In May of this year, we expanded the HAI-DEF collection with MedGemma, a collection of generative models based on Gemma 3 designed to accelerate healthcare and life sciences AI development. Today, we're proud to announce two new models in this collection. Read More
July 05, 2025 Concluding the Paris Hackathon
We invited doctors, developers, and researchers to come together at Google Paris and prototype new medical solutions using our open models. Read More
May 29, 2025 MedGemma at Out Of Pocket (OOP) Hackathon
The weekend before its official debut at Google I/O, we provided a select group of developers in San Francisco with early access to MedGemma, challenging them to solve real-world problems in healthcare. Read More
May 22, 2025 Introducing MedGemma
At Google I/O, we announced MedGemma, Google's most capable open model for multimodal medical text and image comprehension. It is a collection of Gemma 3 variants (4B multimodal and 27B text-only), fine-tuned for medical tasks. Read More
March 25, 2025 Introducing TxGemma
TxGemma is Google's first open LLM for predicting properties of therapeutics. It is a collection of Gemma 2 variants (2B, 9B, and 27B), fine-tuned for therapeutic tasks using data from the Therapeutic Data Commons. It includes TxGemma-Predict for direct task outputs and TxGemma-Chat for conversational applications. Read More
March 17, 2025 Expanding HAI-DEF with HeAR
Our Health Acoustic Representations (HeAR) model, which we published last August, is now available as an open model in the HAI-DEF collection. Read More
March 14-16, 2025 HAI-DEF at MIT GrandHack 2025
We challenged researchers, developers, clinicians, and entrepreneurs to use HAI-DEF models to solve real-world problems. Read More
Nov 25, 2024 Introducing Health AI Developer Foundations (HAI-DEF)
We are open-sourcing our foundation models under flexible terms of use, enabling developers and researchers to use our pretrained models at no cost, customize them, and use them for product research and development. The models are available in Google Model Garden and Hugging Face. While they are not exclusive to Google Cloud, they are production-ready when deployed through Google Model Garden. Read More
Oct 21, 2024 Experimental API for 3D Computed Tomography Foundation Model
We've created Computed Tomography (CT) Foundation, a new AI tool that encodes 3D CT scans into small, information-rich numerical embeddings. These embeddings allow researchers to build powerful new models for CT studies with significantly less data and compute than before. We've made the model available to researchers as an experimental API. Read More
Aug 19, 2024 Exploring Disease Detection Based on Bioacoustics
We've developed Health Acoustic Representations (HeAR), an AI model that learns to extract health insights from bioacoustic signals such as coughs. We show that, using these embeddings, models for a wide range of diseases can be developed with less data and compute. We've made the model available to researchers as an experimental API. Read More
May 15, 2024 Exploring Medical AI with Med-Gemini
In this research, we explore Med-Gemini, a family of models based on Gemini that have been fine-tuned for medical tasks. Read More
Mar 8, 2024 Experimental APIs for Dermatology and Pathology Foundation Models
We've developed new deep learning models in dermatology and pathology that encode medical images into compressed numerical vectors called embeddings. These embeddings capture an image's key features, allowing new models for various medical tasks to be developed with far less data and compute than traditional methods. We've made these models available to researchers as experimental APIs. Read More
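Several of the entries above describe the same workflow: a foundation model encodes raw medical data (CT volumes, audio, skin or pathology images) into compact embedding vectors, and a small downstream classifier is then trained on those vectors with far less data and compute than end-to-end training would need. The sketch below illustrates that downstream step with a simple logistic-regression "linear probe" in NumPy. The 64-dimensional embeddings here are synthetic stand-ins, not outputs of any HAI-DEF model, and the class labels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate precomputed embeddings for two classes
# (e.g. "finding" vs "no finding"). In practice these vectors
# would come from a foundation model's embedding API.
n, dim = 200, 64
X0 = rng.normal(loc=-0.5, scale=1.0, size=(n, dim))
X1 = rng.normal(loc=+0.5, scale=1.0, size=(n, dim))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic-regression linear probe trained with plain gradient descent.
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(300):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probabilities
    grad_w = X.T @ (p - y) / len(y)   # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
print(f"train accuracy: {accuracy:.2f}")
```

Because the embeddings already capture the salient features, the trainable model is just `dim + 1` parameters, which is why these announcements emphasize reduced data and compute requirements for downstream tasks.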
Aug 3, 2023 Exploring Multimodal Medical AI
In this research, we explore integrating sources such as images, lab results, and patient notes to build a more comprehensive and assistive medical AI. Read More
July 19, 2022 Simplified transfer learning for CXR model development
We've developed a new machine learning method to create pretrained neural networks that convert chest X-ray (CXR) images into data-rich vectors (embeddings). We show that with this technique, researchers can train powerful new models using far less data and computational power than traditional methods. Read More