This page provides an overview of best practices and recommended tools for working with Large Language Models (LLMs) to develop solutions for Google Workspace.
When developing on Google Workspace, LLMs can help you in the following ways:
- Generate or troubleshoot code for calling Google Workspace APIs.
- Build a solution based on the latest Google Workspace developer documentation.
- Access Google Workspace resources from the command line or your integrated development environment (IDE).
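As a concrete sketch of the first use case, the following Python snippet builds (but doesn't send) a Google Drive API `files.list` request. The access token is a placeholder; real requests require OAuth 2.0 credentials obtained through Google's authorization flow.

```python
from urllib.parse import urlencode

# Placeholder token for illustration only; real calls need OAuth 2.0
# credentials from Google's authorization flow.
ACCESS_TOKEN = "ya29.placeholder-token"

def build_drive_list_request(page_size: int = 10) -> tuple[str, dict]:
    """Build (without sending) a Drive API v3 files.list request."""
    params = urlencode({"pageSize": page_size, "fields": "files(id,name)"})
    url = f"https://www.googleapis.com/drive/v3/files?{params}"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    return url, headers

url, headers = build_drive_list_request(page_size=5)
print(url)
```

An LLM can generate or troubleshoot request-building code like this, but you should verify endpoint paths and parameters against the current API reference.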
## Use the Model Context Protocol (MCP) for Google Workspace
The Model Context Protocol (MCP) is a standardized open protocol that provides context to LLMs and AI agents so that they can return higher-quality information in multi-turn conversations.
Google Workspace has an MCP server that provides a schema for an LLM to access and search developer documentation. You can use this server when you're building or using AI agents to do any of the following:
- Retrieve up-to-date information about Google Workspace APIs and services.
- Build and preview user interfaces (UIs) that extend Google Workspace applications. You can use these UIs to build Google Workspace add-ons, Google Chat apps, Google Drive apps, and more.
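Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. The following sketch shows the shape of a tool-call request; the tool name `search_docs` and its arguments are hypothetical, not the schema of the Google Workspace server, so query the server's tool list for the tools it actually exposes.

```python
import json

def mcp_tools_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# "search_docs" is a hypothetical tool name for illustration.
message = mcp_tools_call("search_docs", {"query": "Google Chat app setup"})
print(message)
```

In practice your MCP client or agent framework builds these messages for you; the point is that the protocol gives the LLM a structured, standardized way to request documentation context.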
To deploy the server, visit the Google Workspace GitHub repository:
View Google Workspace MCP Developer Assist on GitHub
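After deploying the server, you typically register it in your MCP client's configuration. Many MCP clients read an `mcpServers` map like the following sketch; the server name, command, and arguments here are placeholders, so check the repository's README for the actual values.

```json
{
  "mcpServers": {
    "workspace-developer-assist": {
      "command": "npx",
      "args": ["-y", "example-workspace-mcp-server"]
    }
  }
}
```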
## Use AI code assistants
We recommend incorporating the following AI code assistance tools into your Google Workspace development workflow:
- Google AI Studio: Generate code for your Google Workspace solutions, including code for Google Apps Script projects.
- Gemini Code Assist: Lets you use LLMs right from your IDE and includes the `@googledocs` command to access Google Docs documents.