Google Compute Engine allows you to choose the region and zone where certain resources live, giving you control over where your data is stored and used. For example, when you create an instance or disk, you are prompted to select the zone from which that resource should serve traffic. Other resources, such as static IPs, live in regions, and you must select the region where each static IP should live.
Resources that are specific to a zone or a region can only be used by other resources in the same zone or region. For example, disks and instances are both zonal resources. If you want to attach a disk to an instance, both resources must reside in the same zone. Similarly, if you want to assign a static IP address to an instance, your instance must reside in the same region as the static IP.
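As an illustration of this co-location rule, a script could check that a disk and an instance share a zone before attempting an attach. The zone values below are illustrative, not real resources:

```shell
# Illustrative zones for a disk and an instance; a disk can only be
# attached to an instance that lives in the same zone.
instance_zone="us-central1-a"
disk_zone="us-central1-a"

if [ "$instance_zone" = "$disk_zone" ]; then
  echo "same zone: attach is allowed"
else
  echo "zones differ: recreate one resource in the other's zone" >&2
fi
```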
Each region is completely isolated from other regions, and each zone is completely independent of other zones. If a zone or a region suffers a failure, other zones and regions won't be affected.
Note: Only certain resources are region- or zone-specific. Other resources, such as images, are global resources that can be used by any other resources across any location.
You can see a list of available zones by running:
gcutil listzones --project=<project-id>
Each region in Google Compute Engine contains one or more zones. To determine which region a zone belongs to, look at the fully-qualified name of the zone. Each zone name has two parts: the first part is the region, and the second part identifies the zone within that region:
The region of a zone describes the geographic location where your resources are stored. Choose a region that makes sense for your scenario. For example, if you only have customers on the east coast of the US, or if you have specific needs that require your data to live in the US, it makes sense to store your resources in a zone in a us-east region. A region contains one or more zones.
A zone is an isolated location within a region, independent of other zones in the same region. Zones are designed to support instances and applications with high availability requirements: because zones are fault-tolerant and independent, you can distribute instances and resources across multiple zones to protect against the failure of a single zone. This keeps your application available in the face of both expected and unexpected failures. The fully-qualified name of a zone is made up of <region>-<zone>. For example, the zone named us-east1-a is zone a in region us-east1.
Depending on how widely you want to distribute your resources, you may choose to create instances across multiple zones within one region, or across multiple regions and multiple zones.
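Because the region name is embedded in the zone name, you can recover a zone's region by stripping the final zone-letter suffix. A minimal shell sketch (the zone name is just an example):

```shell
# Strip the trailing "-<zone letter>" from a zone name to get its region.
zone="us-east1-a"
region="${zone%-*}"   # removes the final "-a"
echo "$region"        # prints: us-east1
```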
The following diagram provides some examples of how regions and zones relate to each other. Notice that each region is independent of other regions and each zone is isolated from other zones in the same region.
Note: This diagram is an example to demonstrate zones and does not reflect actual available zones.
Available Regions & Zones
The following is a list of available regions and zones:
Note: The selection of a zone does not guarantee that project data at rest is kept only in that zone. See the FAQ for more details.
Caution: us-central2-a has been deprecated and will be permanently turned down by December 31st, 2013. You should move all resources to us-central1-a and/or us-central1-b and ensure that you are no longer using any resources in us-central2-a after December 31st, 2013.
To view a list of available zones, you can always run:
gcutil --project=<project-id> listzones
To view a list of available regions using gcutil, use the listregions command. The command lists all available regions and provides information such as any relevant deprecation status and the status of the region itself.
gcutil --project=<project-id> listregions
+-----------------+------------------------+--------+-------------+
| name            | description            | status | deprecation |
+-----------------+------------------------+--------+-------------+
| example-region  | Description of region  | UP     |             |
| example-region2 | Description of region2 | UP     |             |
+-----------------+------------------------+--------+-------------+
To get information about a single region, use the getregion command:
gcutil --project=<project-id> getregion example-region
+---------------+----------------------------------------+
| property      | value                                  |
+---------------+----------------------------------------+
| name          | example-region                         |
| description   | Description of region                  |
| creation-time | 2013-04-29T11:18:01.821-07:00          |
| status        | UP                                     |
| zones         | zones/example-zone,zones/example-zone2 |
| deprecation   |                                        |
| replacement   |                                        |
|               |                                        |
| usage         |                                        |
+---------------+----------------------------------------+
Google periodically performs scheduled maintenance on its infrastructure: patching systems with the latest software, performing routine tests and preventative maintenance, and generally ensuring that our infrastructure is as fast and efficient as we know how to make it.
There are currently two types of scheduled maintenance:
- Transparent Maintenance
Transparent maintenance affects only a small piece of the infrastructure in a given zone and Google Compute Engine automatically moves your instances elsewhere in the zone, out of the way of the maintenance work.
- Scheduled Zone Maintenance Windows
For scheduled zone maintenance windows, Google takes an entire zone offline for roughly two weeks to perform various disruptive maintenance tasks.
The type of scheduled maintenance your instances will experience currently depends on the zone your instances are running in:
- In the us-central1-a and us-central1-b zones, only transparent maintenance will occur. This means that scheduled zone maintenance windows will no longer happen in either of these zones and, as long as your instances are set to live migrate, they will not be taken offline for these events.
- In all other zones, scheduled zone maintenance windows will still occur. We are in the process of planning and rolling out the hardware and software required to support transparent maintenance in all of our zones, so check back periodically for updates.
During transparent maintenance, Google Compute Engine automatically moves your instances away from maintenance events so that maintenance work is transparent to your applications and workloads. Your instance continues to run within the same zone with no action on your part.
Transparent maintenance is only available in us-central1-a and us-central1-b zones, in place of scheduled zone maintenance windows. This means that any instance set to live migrate in either of these two zones will not experience downtime due to scheduled zone maintenance windows.
During transparent maintenance, you can set Google Compute Engine to handle your instances in two ways:
- Live migrate
Google Compute Engine can automatically migrate your running instance. The migration process will impact guest performance to some degree but your instance remains online throughout the migration process. The exact guest performance impact and duration depend on many factors, but we expect most applications and workloads won't even notice.
- Terminate and reboot
Google Compute Engine automatically signals your instance to shut down, waits a short time for it to shut down cleanly, and then restarts it away from the scheduled maintenance event.
For more information on how to set the options above for your instances, see Setting Instance Scheduling Options.
Scheduled zone maintenance windows
For all zones other than us-central1-a and us-central1-b, there will be periods of time when a zone is taken offline for maintenance tasks, such as software upgrades. When a zone is taken down for maintenance, the following happens:
- All VM instances in that zone are terminated and deleted from your project. This means all scratch disk data is also lost.
- All persistent disks will be preserved, but are unavailable until the maintenance window ends.
When a zone comes back online, you need to recreate your instances in the affected zone. The Google Compute Engine team will notify users of upcoming maintenance windows in a timely manner so that users can perform any tasks necessary before the zone is taken offline.
Although maintenance windows are an inconvenient and unavoidable part of the service, you can use the tips from the How to Design Robust Systems section to design a system that can withstand maintenance windows, zone failures, and unexpected interruptions.
Certain resources, such as static IPs, images, firewall rules, and networks, have defined project-wide and per-region quota limits. When you create one of these resources, it counts toward your project-wide quota or your per-region quota, as applicable. If you exceed any of the affected quota limits, you won't be able to add more resources of the same type in that project or region.
For example, if your project-wide target pool quota is 50 and you create 25 target pools in example-region and 25 target pools in example-region2, you reach your project-wide quota and won't be able to create more target pools in any region within your project until you free up quota. Similarly, if you have a per-region quota of 7 reserved IP addresses, you can only reserve up to 7 IP addresses in a single region. Once you hit that limit, you will either need to reserve IP addresses in a different region or release some IP addresses.
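The arithmetic in the example above can be sketched as a quick shell calculation. The quota figures are the hypothetical ones from the example, not real limits:

```shell
# Hypothetical project-wide target pool quota from the example above.
project_quota=50
pools_in_region1=25   # target pools created in example-region
pools_in_region2=25   # target pools created in example-region2

used=$((pools_in_region1 + pools_in_region2))
echo "remaining: $((project_quota - used))"   # prints: remaining: 0
```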
When selecting zones, here are some things to keep in mind:
- Communication within and across regions will incur different costs.
Communication within a region will generally be cheaper and faster than communication across different regions.
- Design important systems with redundancy across multiple zones.
At some point in time, your instances may be terminated because of maintenance windows or because of an unexpected failure. To mitigate the effects of these events, duplicate important systems in multiple zones, in case a zone hosting your instances goes offline or is taken down for servicing. For example, if you host VM instances in zones us-east1-a and us-east1-b, and us-east1-b is taken down for maintenance or fails unexpectedly, your instances in zone us-east1-a will still be available. However, if you host all your instances in us-east1-b, you won't be able to access any of your instances if us-east1-b ever falls offline. For more tips on how to design systems for availability, see Designing Robust Systems.
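One simple way to apply this advice is to alternate new instances between two zones. The sketch below only prints the placement decisions (the instance and zone names are assumed); each echo could be replaced with the corresponding gcutil addinstance call:

```shell
# Spread four instances round-robin across two zones in the same region.
i=0
for name in web-1 web-2 web-3 web-4; do
  if [ $((i % 2)) -eq 0 ]; then
    zone="us-east1-a"
  else
    zone="us-east1-b"
  fi
  echo "create $name in $zone"
  i=$((i + 1))
done
```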