Get started with HeAR
You can get started in 4 ways:
Run it locally
Download the model from Hugging Face and
run it locally.
This is the recommended option if you want to experiment with the model and
don't need to handle a high volume of data. Our GitHub repository includes a
notebook
that you can use to explore the model.
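If you want a quick feel for local inference before opening the notebook, here is a minimal sketch. It assumes the Hugging Face repo id is `google/hear`, that the repo ships a TensorFlow SavedModel, and that the model expects 2-second, 16 kHz mono clips; verify all of these against the model card and the quick-start notebook.

```python
# Minimal local-inference sketch (not the official quick-start code).
# Assumptions to verify: repo id, SavedModel packaging, signature name,
# and the 2-second / 16 kHz input format. The repo may be gated, so you
# might need to accept the model terms and run `huggingface-cli login`.
import numpy as np
import tensorflow as tf
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="google/hear")  # download model files
model = tf.saved_model.load(local_dir)

# Look up the serving signature and its input name instead of guessing it.
infer = model.signatures["serving_default"]           # adjust if named differently
_, input_specs = infer.structured_input_signature
input_name = next(iter(input_specs))                  # single audio input assumed

# Dummy batch of one 2-second clip at 16 kHz; replace with real audio samples.
audio = np.zeros((1, 32000), dtype=np.float32)
outputs = infer(**{input_name: tf.constant(audio)})
print({k: v.shape for k, v in outputs.items()})       # expect an embedding tensor
```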
Deploy your own online service
HeAR can be deployed as a highly available and scalable HTTPS endpoint on
Vertex AI. The easiest way is through
Model Garden.
This option is ideal for production-grade online applications that require low
latency, high scalability, and high availability. Refer to
Vertex AI's service level agreement (SLA)
and pricing model for online
predictions.
Read the API specification
to learn how to create online clients that interact with the service. A sample
notebook
is available to help you get started quickly.
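As a starting point for an online client, the sketch below calls a deployed endpoint with the Vertex AI Python SDK. The project, location, endpoint ID, and especially the instance payload are placeholders; the API specification linked above defines the exact request and response schema.

```python
# Online-prediction sketch, assuming HeAR is already deployed to a Vertex AI
# endpoint from Model Garden. The instance schema (a GCS URI to a WAV file)
# is hypothetical; use the request format from the API specification.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")  # placeholders

endpoint = aiplatform.Endpoint("your-endpoint-id")  # numeric ID or full resource name
response = endpoint.predict(
    instances=[{"gcs_uri": "gs://your-bucket/sample_cough.wav"}]  # hypothetical key
)
print(response.predictions[0])  # e.g. the embedding for the input clip
```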
For custom requirements, you can also adapt our
model serving implementation
and host it yourself on any API management system.
Launch a batch job
For larger datasets in a batch workflow, it's best to launch HeAR as a
Vertex AI batch prediction job.
Note that Vertex AI's SLA and
pricing model are different for
batch prediction jobs.
Refer to the "Get batch predictions" section in the
Quick start with Model Garden
notebook to get started.
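The sketch below shows one way to submit such a job with the Vertex AI Python SDK. The model resource name, Cloud Storage paths, machine type, and JSONL instance format are placeholders; the notebook's "Get batch predictions" section has the exact values to use.

```python
# Batch-prediction sketch using the Vertex AI Python SDK. All resource names,
# paths, and the machine type below are assumptions, not documented values.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

model = aiplatform.Model("your-model-resource-name")  # HeAR model uploaded to Vertex AI
job = model.batch_predict(
    job_display_name="hear-batch-embeddings",
    gcs_source="gs://your-bucket/instances.jsonl",     # one JSON instance per line
    gcs_destination_prefix="gs://your-bucket/output/",
    machine_type="n1-standard-4",                      # assumption; size per your quota
)
job.wait()                # blocks until the job finishes
print(job.output_info)    # where the prediction results were written
```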
Try out our online service
You can try out the online service with our research endpoint before
committing to deploying your own.
This endpoint is for research purposes only.
Contact
You can reach out in several ways:
- Start or join a conversation on GitHub Discussions.
- File a Feature Request or Bug at GitHub Issues.
- Send us feedback at hai-def@google.com.