You can get started in 4 ways:
Run it locally
Download the model from Hugging Face and run it locally.
This is the recommended option if you want to experiment with the model and don't need to handle a high volume of data. Our GitHub repository includes a notebook that you can use to explore the model.
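For example, a minimal local zero-shot sketch using the Hugging Face transformers SigLIP API is shown below. The model ID, file name, and labels are assumptions for illustration; check the Hugging Face model card and the GitHub notebook for the exact usage.

```python
# Minimal local zero-shot sketch with Hugging Face transformers.
# "google/medsiglip-448", the image file, and the labels are assumptions.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("google/medsiglip-448")
processor = AutoProcessor.from_pretrained("google/medsiglip-448")

image = Image.open("chest_xray.png").convert("RGB")  # hypothetical local file
texts = ["a chest X-ray with pleural effusion", "a normal chest X-ray"]

inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# SigLIP-style models score each image-text pair independently with a sigmoid.
probs = torch.sigmoid(outputs.logits_per_image)
for label, p in zip(texts, probs[0]):
    print(f"{label}: {p:.3f}")
```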
Deploy your own online service
MedSigLIP can be deployed as a highly available and scalable HTTPS endpoint on Vertex AI. The easiest way is through Model Garden.
This option is ideal for production-grade online applications with low-latency, high-scalability, and high-availability requirements. Refer to Vertex AI's service level agreement (SLA) and pricing model for online predictions.
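Once deployed, the endpoint can be called with the Vertex AI Python SDK. The sketch below assumes hypothetical project, region, and endpoint IDs, and an assumed instance schema; the exact request format depends on the serving container you deploy from Model Garden.

```python
# Minimal sketch of calling a deployed MedSigLIP endpoint on Vertex AI.
# Project, region, endpoint ID, and the instance fields are assumptions.
import base64
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

with open("chest_xray.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = endpoint.predict(instances=[{
    "image": {"b64": image_b64},                      # assumed field names
    "text": "a chest X-ray with pleural effusion",
}])
print(response.predictions)
```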
Fine-tune MedSigLIP
MedSigLIP can be fine-tuned using your own medical data to optimize its performance for your use cases. A sample notebook is available that you can adapt for your data and use case.
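One common adaptation strategy, sketched below, is a linear probe trained on frozen MedSigLIP image embeddings. This is not necessarily what the sample notebook does; the model ID, label count, and training loop are assumptions to be adapted to your data.

```python
# Minimal fine-tuning sketch: a linear probe on frozen MedSigLIP image embeddings.
# The model ID and label count are assumptions for illustration.
import torch
from torch import nn
from transformers import AutoModel, AutoProcessor

model_id = "google/medsiglip-448"           # assumed Hugging Face model ID
model = AutoModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
model.eval()                                 # keep the backbone frozen

num_labels = 2                               # e.g. finding present vs. absent
embed_dim = model.config.vision_config.hidden_size
classifier = nn.Linear(embed_dim, num_labels)
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: list of PIL images, labels: tensor of class indices."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)   # frozen embeddings
    logits = classifier(feats)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```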
Launch a batch job
For larger datasets in a batch workflow, it's best to launch MedSigLIP as a Vertex AI batch prediction job. Note that Vertex AI's SLA and pricing model are different for batch prediction jobs.
Refer to the "Get batch predictions" section in the Quick start with Model Garden notebook to get started.
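A batch job can also be launched from the Vertex AI Python SDK, as in the sketch below. Resource names, GCS paths, and machine configuration are assumptions, and the input JSONL schema must match the serving container described in the notebook.

```python
# Minimal sketch of launching a Vertex AI batch prediction job for MedSigLIP.
# Model resource name, bucket paths, and machine/accelerator types are assumptions.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/1234567890"
)
job = model.batch_predict(
    job_display_name="medsiglip-batch",
    gcs_source="gs://my-bucket/medsiglip/requests.jsonl",
    gcs_destination_prefix="gs://my-bucket/medsiglip/outputs/",
    machine_type="g2-standard-8",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
    sync=True,                               # wait for the job to finish
)
print(job.resource_name)
```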
Contact
You can reach out in several ways:
- Start or join a conversation on GitHub Discussions.
- File a Feature Request or Bug at GitHub Issues.
- Send us feedback at hai-def@google.com.