This page provides an overview of best practices and recommended tools for working with Large Language Models (LLMs) to develop solutions for Google Workspace.
When developing on Google Workspace, LLMs can help you in the following ways:
- Generate or troubleshoot code for calling Google Workspace APIs.
- Build a solution based on the latest Google Workspace developer documentation.
- Access Google Workspace resources from the command line or your integrated development environment (IDE).
Use the Model Context Protocol (MCP) for Google Workspace development
The Model Context Protocol (MCP) is a standardized open protocol that provides context to LLMs and AI agents so that they can return higher-quality information in multi-turn conversations.
Google Workspace has an MCP server that provides tools for an LLM to access and search developer documentation. You can use this server when you're building or using AI agents to do any of the following:
- Retrieve up-to-date information about Google Workspace APIs and services.
- Fetch official Google Workspace documentation and snippets.
To get started, add this server to your MCP client configuration. For example, to add the server to Gemini Code Assist, add the following to your `settings.json` file:
```json
{
  "mcpServers": {
    "workspace-developer": {
      "httpUrl": "https://workspace-developer.goog/mcp",
      "trust": true
    }
  }
}
```
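Under the hood, MCP clients such as Gemini Code Assist talk to a server like this one using JSON-RPC 2.0. As a minimal sketch of what a client sends, the following builds a `tools/list` request body (the `make_mcp_request` helper is an illustrative assumption, not part of any documented client API; `tools/list` is the standard MCP method for discovering a server's tools):

```python
import json

# Endpoint from the configuration above.
MCP_URL = "https://workspace-developer.goog/mcp"

def make_mcp_request(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body for an MCP server.

    This is a sketch of the wire format only; a real MCP client also
    performs an "initialize" handshake before calling tools.
    """
    body = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        body["params"] = params
    return body

# Ask the server which tools it exposes (e.g. documentation search).
request = make_mcp_request("tools/list")
print(json.dumps(request))
# A client would POST this body to MCP_URL with
# Content-Type: application/json and parse the JSON-RPC response.
```

In practice your MCP client handles this exchange for you; the sketch only shows why the `httpUrl` field above is all the configuration the client needs.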
To improve tool usage, it may be necessary to add instructions similar to the following to a rules file such as `GEMINI.md`:
Always use the `workspace-developer` tools when using Google Workspace APIs.
Use AI code assistants
We recommend the following AI code assist tools to incorporate into your workflow for Google Workspace development:
- Google AI Studio: Generates code for your Google Workspace solutions, including code for Google Apps Script projects.
- Gemini Code Assist: Lets you use LLMs right from your IDE and includes the `@googledocs` command to access Google Docs documents.