Googlebot is the generic name for Google's two types of web crawlers:

- Googlebot Smartphone: a mobile crawler that simulates a user on a mobile device.
- Googlebot Desktop: a desktop crawler that simulates a user on desktop.

You can identify the subtype of Googlebot by looking at the user agent string in the request. However, both crawler types obey the same product token (user agent token) in robots.txt, and so you cannot selectively target either Googlebot Smartphone or Googlebot Desktop using robots.txt.
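As a minimal sketch of that check: the subtype can usually be inferred from whether the token "Mobile" appears alongside the Googlebot product token in the user agent string. The heuristic and the example strings in the usage note are illustrative assumptions, not an authoritative parser, and a user agent string alone can be spoofed.

```python
# Classify a Googlebot request by its user agent string.
# Heuristic sketch: Googlebot Smartphone's user agent contains "Mobile",
# while the desktop variant's does not. This is an assumption about the
# common shapes of the strings; always verify the source IP as well.

def classify_googlebot(user_agent):
    if "Googlebot" not in user_agent:
        return None  # not a Googlebot user agent at all
    return "smartphone" if "Mobile" in user_agent else "desktop"
```

For example, a string such as `Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) ... Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)` would classify as `smartphone`, while the same product token without `Mobile` would classify as `desktop`.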

For most sites, Google primarily indexes the mobile version of the content. As a result, the majority of Googlebot crawl requests are made using the mobile crawler, and a minority using the desktop crawler.

How Googlebot accesses your site

For most sites, Googlebot shouldn't access your site more than once every few seconds on average. However, due to delays it's possible that the rate will appear to be slightly higher over short periods.

Googlebot was designed to be run simultaneously by thousands of machines to improve performance and scale as the web grows. Also, to cut down on bandwidth usage, we run many crawlers on machines located near the sites that they might crawl. Therefore, your logs may show visits from several IP addresses, all with the Googlebot user agent. Our goal is to crawl as many pages from your site as we can on each visit without overwhelming your server. If your site is having trouble keeping up with Google's crawling requests, you can reduce the crawl rate.
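To see those visits from several IP addresses in your own logs, you can tally requests whose user agent mentions Googlebot by source IP. The sketch below assumes the common Apache/nginx combined log format, where the client IP is the first field and the user agent is the last quoted field; adjust the pattern for your log format.

```python
import re
from collections import Counter

# Count requests per source IP for log lines whose user agent mentions
# Googlebot. Assumes combined log format: client IP first, user agent
# in the final quoted field.
LOG_LINE = re.compile(r'^(\S+) .* "([^"]*)"$')

def googlebot_hits_by_ip(log_lines):
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line.strip())
        if m and "Googlebot" in m.group(2):
            counts[m.group(1)] += 1
    return counts
```

Note that user agent strings can be spoofed, so a tally like this identifies candidates to verify, not confirmed Googlebot traffic.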

Googlebot crawls primarily from IP addresses in the United States. If Googlebot detects that a site is blocking requests from the United States, it may attempt to crawl from IP addresses located in other countries. The list of IP address blocks that Googlebot currently uses is available in JSON format.
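A sketch of matching a request's source IP against that published list. The JSON is assumed to follow the shape `{"prefixes": [{"ipv4Prefix": ...} or {"ipv6Prefix": ...}, ...]}`, and the sample prefix below is illustrative; fetch the real list from Google's published JSON file.

```python
import ipaddress

# Check whether an IP falls inside any prefix from Googlebot's published
# JSON list of IP ranges. Assumed structure:
# {"prefixes": [{"ipv4Prefix": "..."} , {"ipv6Prefix": "..."}, ...]}
def ip_in_ranges(ip, ranges):
    addr = ipaddress.ip_address(ip)
    for entry in ranges.get("prefixes", []):
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        if prefix and addr in ipaddress.ip_network(prefix):
            return True
    return False

# Illustrative data only -- not the real, current list.
sample = {"prefixes": [{"ipv4Prefix": "66.249.64.0/19"}]}
```

`ipaddress.ip_network` handles both IPv4 and IPv6 prefixes, and membership tests across mismatched IP versions simply return False, so a mixed list is safe to scan in one pass.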

Googlebot crawls over HTTP/1.1 and, if supported by the site, HTTP/2. There's no ranking benefit based on which protocol version is used to crawl your site; however, crawling over HTTP/2 may save computing resources (for example, CPU and RAM) for your site and Googlebot.
To opt out from crawling over HTTP/2, instruct the server that's hosting your site to respond with a 421 HTTP status code when Googlebot attempts to crawl your site over HTTP/2. If that's not feasible, you can send a message to the Googlebot team (however this solution is temporary).
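As one way to implement that opt-out, a server running nginx might return 421 for any request that arrives over HTTP/2. This is a sketch under the assumption that nginx terminates TLS for the site; the exact directive placement depends on your configuration.

```nginx
# Respond with 421 (Misdirected Request) to requests made over HTTP/2,
# so Googlebot falls back to crawling over HTTP/1.1.
# Sketch only; adapt server_name, certificates, etc. to your setup.
server {
    listen 443 ssl http2;
    server_name example.com;

    if ($server_protocol = "HTTP/2.0") {
        return 421;
    }
}
```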

Googlebot can crawl the first 15MB of an HTML file or supported text-based file. Each resource referenced in the HTML, such as CSS and JavaScript, is fetched separately, and each fetch is bound by the same file size limit. After the first 15MB of the file, Googlebot stops crawling and only considers the first 15MB of the file for indexing. The file size limit is applied to the uncompressed data. Other Google crawlers, for example Googlebot Video and Googlebot Image, may have different limits.
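Because the limit applies to uncompressed data, you can check a page against it by measuring the raw response body before compression. The sketch below treats 15MB as 15 * 1024 * 1024 bytes; the exact byte count Google uses is an assumption here.

```python
# Warn when a resource's uncompressed size exceeds Googlebot's crawl limit.
# The precise cutoff Google applies is not specified down to the byte;
# 15 MiB is an assumption for this check.
CRAWL_LIMIT_BYTES = 15 * 1024 * 1024

def within_crawl_limit(uncompressed_body):
    """Return True if Googlebot would consider the whole file."""
    return len(uncompressed_body) <= CRAWL_LIMIT_BYTES
```

Run this against the uncompressed HTML and against each referenced CSS or JavaScript file individually, since each fetch is bound by the limit separately.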

Blocking Googlebot from visiting your site

It's almost impossible to keep a web server secret by not publishing links to it. For example, as soon as someone follows a link from your "secret" server to another web server, your "secret" URL may appear in the referrer tag and can be stored and published by the other web server in its referrer log. Similarly, the web has many outdated and broken links. Whenever someone publishes an incorrect link to your site or fails to update links after changes on your server, Googlebot will attempt to crawl that incorrect link.

If you want to prevent Googlebot from crawling content on your site, you have a number of options. Be aware of the difference between preventing Googlebot from crawling a page, preventing Googlebot from indexing a page, and preventing a page from being accessible at all to both crawlers and users.

Verifying Googlebot

Before you decide to block Googlebot, be aware that the user agent string used by Googlebot is often spoofed by other crawlers. It's important to verify that a problematic request actually comes from Google. The best way to verify that a request actually comes from Googlebot is to use a reverse DNS lookup on the source IP of the request, or to match the source IP against the Googlebot IP ranges.
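A sketch of that reverse-then-forward DNS check in Python: reverse-resolve the source IP, check that the hostname belongs to a Google domain, then forward-resolve that hostname and confirm it maps back to the same IP. The accepted suffixes below (googlebot.com, google.com) are an assumption about the documented hostname pattern; confirm them against Google's current guidance.

```python
import socket

# Domain suffixes a genuine Googlebot reverse lookup is expected to
# resolve to. Assumed list; check Google's documentation for the
# current set of valid domains.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def is_google_hostname(hostname):
    return hostname.endswith(GOOGLE_SUFFIXES)

def verify_googlebot(ip):
    """Reverse-resolve the IP, check the domain, then forward-resolve
    the hostname and confirm it maps back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse DNS lookup
    except socket.herror:
        return False
    if not is_google_hostname(hostname):
        return False
    try:
        # Forward-confirm: the hostname must resolve back to the same IP,
        # otherwise the reverse DNS record could itself be spoofed.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
```

The forward-confirmation step matters: anyone who controls reverse DNS for an IP block can make it resolve to a googlebot.com name, but they cannot make Google's forward DNS point that name back at their IP.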