Googlebot is the generic name for Google's two types of web crawlers:

- Googlebot Smartphone: a mobile crawler that simulates a user on a mobile device.
- Googlebot Desktop: a desktop crawler that simulates a user on a desktop computer.

You can identify the subtype of Googlebot by looking at the HTTP User-Agent request header. However, both crawler types obey the same product token (user agent token) in robots.txt, so you cannot selectively target either Googlebot Smartphone or Googlebot Desktop with robots.txt.
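As a sketch of such a server-side check, the heuristic below assumes (as in Googlebot's published user agent strings) that both subtypes carry the "Googlebot" token and that only the smartphone crawler's string also contains "Mobile"; the function name is illustrative, not part of any API:

```python
def googlebot_subtype(user_agent: str):
    """Classify a request's User-Agent header (illustrative heuristic).

    Returns "smartphone", "desktop", or None if the header does not
    look like Googlebot at all. Note that the header can be spoofed,
    so this identifies the claimed subtype only; see "Verifying
    Googlebot" below for confirming the request's origin.
    """
    if "Googlebot" not in user_agent:
        return None
    # The smartphone crawler's user agent string includes "Mobile".
    return "smartphone" if "Mobile" in user_agent else "desktop"
```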

For most sites, Google primarily indexes the mobile version of the content. As such, the majority of Googlebot crawl requests are made using the mobile crawler, and a minority using the desktop crawler.

How Googlebot accesses your site

For most sites, Googlebot shouldn't access your site more than once every few seconds on average. However, due to delays it's possible that the rate will appear to be slightly higher over short periods.

Googlebot was designed to be run simultaneously by thousands of machines to improve performance and scale as the web grows. Also, to cut down on bandwidth usage, we run many crawlers on machines located near the sites that they might crawl. Therefore, your logs may show visits from several IP addresses, all with the Googlebot user agent. Our goal is to crawl as many pages from your site as we can on each visit without overwhelming your server. If your site is having trouble keeping up with Google's crawling requests, you can reduce the crawl rate.

Googlebot crawls primarily from IP addresses in the United States. When Googlebot detects that a site is blocking requests from the United States, it may attempt to crawl from IP addresses located in other countries. The list of IP address ranges used by Googlebot is published in JSON format.
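The published list is a JSON object with a `prefixes` array whose entries carry an `ipv4Prefix` or `ipv6Prefix` key. The sketch below uses a hypothetical two-entry subset in that shape (not the live data) and Python's standard `ipaddress` module to test whether a source IP falls inside one of the ranges:

```python
import ipaddress
import json

# Illustrative sample with the same shape as Google's published list;
# the real file contains many more prefixes and is updated regularly.
SAMPLE_RANGES = json.loads("""
{
  "prefixes": [
    {"ipv4Prefix": "66.249.64.0/27"},
    {"ipv6Prefix": "2001:4860:4801:10::/64"}
  ]
}
""")

def networks(ranges):
    """Yield each prefix entry as an ip_network object."""
    for entry in ranges["prefixes"]:
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        yield ipaddress.ip_network(prefix)

def is_in_googlebot_ranges(ip, ranges=SAMPLE_RANGES):
    """Return True if ip falls inside any listed prefix.

    Membership tests between mismatched IP versions are simply False,
    so mixed IPv4/IPv6 prefixes are safe to check together.
    """
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks(ranges))
```

In practice you would load the current JSON file from Google rather than a hard-coded sample, and cache it, since the ranges change over time.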

Googlebot crawls over HTTP/1.1 and, if supported by the site, HTTP/2. There's no ranking benefit based on which protocol version is used to crawl your site; however, crawling over HTTP/2 may save computing resources (for example, CPU, RAM) for your site and Googlebot.
To opt out from crawling over HTTP/2, instruct the server that's hosting your site to respond with a 421 HTTP status code when Googlebot attempts to crawl your site over HTTP/2. If that's not feasible, you can send a message to the Googlebot team (however, this is a temporary solution).
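As one hypothetical way to implement this on nginx (a sketch to adapt to your own configuration, not a drop-in rule), you can return 421 only when a request arrives over HTTP/2 with a Googlebot user agent, leaving HTTP/1.1 crawling untouched:

```nginx
# Hypothetical sketch: place inside the relevant server block.
# Serves 421 (Misdirected Request) to Googlebot over HTTP/2 only.
set $reject_h2 "";
if ($server_protocol = "HTTP/2.0") {
    set $reject_h2 "h2";
}
if ($http_user_agent ~* "Googlebot") {
    set $reject_h2 "${reject_h2}+bot";
}
if ($reject_h2 = "h2+bot") {
    return 421;
}
```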

Googlebot can crawl the first 15MB of an HTML file or supported text-based file. Each resource referenced in the HTML, such as CSS and JavaScript, is fetched separately, and each fetch is bound by the same file size limit. After the first 15MB of the file, Googlebot stops crawling and only sends the first 15MB of the file for indexing consideration. The file size limit is applied to the uncompressed data. Other Google crawlers, for example Googlebot Video and Googlebot Image, may have different limits.
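The cutoff can be sketched as follows; `indexable_portion` and the constant name are illustrative, and the key point is that any decompression happens before the limit is applied:

```python
import gzip

GOOGLEBOT_FETCH_LIMIT = 15 * 1024 * 1024  # 15MB, applied to uncompressed bytes

def indexable_portion(raw: bytes, content_encoding=None) -> bytes:
    """Return the slice of a response body Googlebot would consider.

    Illustrative sketch: because the limit applies to uncompressed
    data, a gzip-encoded body is decompressed first, then truncated.
    """
    if content_encoding == "gzip":
        raw = gzip.decompress(raw)
    return raw[:GOOGLEBOT_FETCH_LIMIT]
```

A practical consequence: content that must be indexed (titles, structured data, primary text) should appear early in the HTML, since anything past the uncompressed 15MB mark is never seen.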

When crawling from IP addresses in the US, the timezone of Googlebot is Pacific Time.

Blocking Googlebot from visiting your site

It's almost impossible to keep a site secret by not publishing links to it. For example, as soon as someone follows a link from your "secret" site to another site, your "secret" URL may appear in the Referer header, and the other site may store and publish it in its referrer logs.

If you want to prevent Googlebot from crawling content on your site, you have a number of options. Remember there's a difference between crawling and indexing; blocking Googlebot from crawling a page doesn't prevent that page's URL from appearing in search results.
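For example, a robots.txt rule stops Googlebot from fetching the matching pages, but a disallowed URL can still be indexed if other sites link to it. Keeping a page out of search results entirely instead requires letting it be crawled and serving a noindex rule (a robots meta tag or an `X-Robots-Tag` response header):

```
# robots.txt at the site root: blocks crawling of /private/ for Googlebot
# (both Smartphone and Desktop), but does not by itself prevent indexing.
User-agent: Googlebot
Disallow: /private/
```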

Verifying Googlebot

Before you decide to block Googlebot, be aware that the HTTP user-agent request header used by Googlebot is often spoofed by other crawlers. It's important to verify that a problematic request actually comes from Google. The best way to verify that a request actually comes from Googlebot is to use a reverse DNS lookup on the source IP of the request, or to match the source IP against the Googlebot IP ranges.
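The reverse-then-forward lookup can be sketched in Python. `verify_googlebot` is a hypothetical helper that performs live DNS lookups; the accepted domains (googlebot.com and google.com) follow Google's documented verification guidance:

```python
import socket

def is_google_hostname(hostname: str) -> bool:
    # Googlebot's reverse-DNS hostnames end in googlebot.com or google.com,
    # e.g. crawl-66-249-66-1.googlebot.com.
    host = hostname.rstrip(".").lower()
    return host.endswith(".googlebot.com") or host.endswith(".google.com")

def verify_googlebot(ip: str) -> bool:
    """Reverse-then-forward DNS check (performs live lookups).

    1. Resolve the source IP to a hostname (reverse lookup).
    2. Confirm the hostname is in a Google crawler domain.
    3. Resolve that hostname back to addresses (forward lookup) and
       confirm the original IP is among them, so an attacker cannot
       simply point their own reverse DNS at googlebot.com.
    """
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
        if not is_google_hostname(hostname):
            return False
        _name, _aliases, forward_ips = socket.gethostbyname_ex(hostname)
        return ip in forward_ips
    except (socket.herror, socket.gaierror):
        return False
```

The domain check alone is not sufficient; the forward-confirming step (or matching against the published IP ranges) is what defeats spoofed reverse DNS.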