Learning how Google's user agents interact with your website can help your most important pages appear in more places across the Google ecosystem without overloading your servers. Whether you're a site owner or a developer, this documentation gives you better control over how your site is crawled.
You can verify whether a web crawler or fetcher accessing your server, such as Googlebot, really comes from Google.
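Google's documented procedure for this is a reverse DNS lookup on the accessing IP address, followed by a forward DNS lookup to confirm the resulting hostname maps back to the same IP. (Google also publishes machine-readable lists of its crawler IP ranges as an alternative.) Below is a minimal Python sketch of that two-step check; the IP address in the last line is illustrative only.

```python
import socket

def is_google_crawler(ip: str) -> bool:
    """Reverse-then-forward DNS check, per Google's documented procedure."""
    try:
        # Step 1: reverse DNS. Google's crawlers resolve to googlebot.com or
        # google.com; some user-triggered fetchers use googleusercontent.com.
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com",
                              ".googleusercontent.com")):
        return False
    try:
        # Step 2: forward DNS. The hostname must resolve back to the same IP,
        # which rules out spoofed reverse DNS records.
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
    return ip in forward_ips

# Illustrative address from a published Googlebot range.
print(is_google_crawler("66.249.66.1"))
```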
Google's crawlers automatically adjust to your website and server so they can efficiently find and surface your best and freshest content. If crawling is putting urgent strain on your server, you can also proactively reduce the crawl rate.
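One documented way to handle such a situation is to temporarily return 500, 503, or 429 HTTP status codes, which prompt Google's crawlers to back off; Google advises against doing this for more than a day or two, since pages that serve errors for longer may be dropped from the index. Below is a minimal sketch using Python's standard library, where server_is_overloaded() is a hypothetical stand-in for a real load check.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def server_is_overloaded() -> bool:
    # Hypothetical placeholder; a real check would inspect server metrics.
    return True

class ThrottlingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if server_is_overloaded():
            # 503 signals crawlers to slow down; Retry-After hints
            # (in seconds) when they should come back.
            self.send_response(503)
            self.send_header("Retry-After", "3600")
            self.end_headers()
            return
        body = b"<html><body>OK</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), ThrottlingHandler).serve_forever()
```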
You can use a robots.txt file to allow or disallow crawling of individual pages or of your whole website.
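For example, a robots.txt file like the following keeps all compliant crawlers out of one directory while leaving the rest of the site crawlable (the /private/ path is illustrative):

```
# Keep compliant crawlers out of a hypothetical /private/ directory;
# everything else stays crawlable.
User-agent: *
Disallow: /private/
```

Note that robots.txt controls crawling, not indexing: a disallowed URL can still be indexed if other pages link to it.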

How crawling preferences affect where your site appears on Google

Google's crawling infrastructure is shared across a variety of Google products, so following best practices helps your web content be discovered more efficiently and featured across Google. Each user agent below serves a different product; the robots.txt sketch after this list shows how to address them individually.
Google Search uses Googlebot to crawl your site to find relevant content for users.
Google-Extended is used by Gemini apps and the Vertex AI API for Gemini.
Crawling preferences addressed to the Storebot-Google user agent affect all surfaces of Google Shopping.
The AdSense crawler visits participating sites to provide them with relevant ads.
Crawling preferences addressed to the Googlebot-News user agent affect the Google News product, including news.google.com and the Google News app.
The Google-NotebookLM fetcher requests individual URLs that NotebookLM users have provided as sources for their projects.
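Because crawling preferences are addressed per user agent, a single robots.txt file can treat these products differently. For example, this sketch leaves Google Search crawling unrestricted while opting the whole site out of Google-Extended:

```
# Let Googlebot (Google Search) crawl everything.
User-agent: Googlebot
Allow: /

# Opt the entire site out of Google-Extended
# (Gemini apps and the Vertex AI API for Gemini).
User-agent: Google-Extended
Disallow: /
```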