Google Compute Engine

Printable Page

This is an auto-generated page of all the compute docs. Notice that this does not include reference docs.

Contents

    1. docs
    2. pricing
    3. signup
    4. getting-started
    5. quickstart
    6. authentication
    7. authorization
    8. overview
    9. resource-quotas
    10. disks
    11. images
    12. instances
    13. instances-and-network
    14. load-balancing
    15. machine-types
    16. metadata
    17. networking
    18. projects
    19. protocol-forwarding
    20. zones
    21. startupscript
    22. getting-started
    23. api-rate-limits
    24. robustsystems
    25. access
    26. sending-mail
    27. performance
    28. release-notes
    29. libraries
    30. java
    31. python
    32. quickstart-tool
    33. python-guide
    34. javascript-guide
    35. console
    36. gcutil
    37. tips
    38. samples-and-videos
    39. faq
    40. troubleshooting
    41. security-bulletins
    42. filereport

    Page: docs

    Google Compute Engine is a service that provides virtual machines that run on Google infrastructure. Google Compute Engine provides a command-line tool called gcutil that you can use to configure, deploy, and manage multiple virtual machines in multiple datacenter locations. These virtual machines can be used as distributed computing power on their own, or in conjunction with other services such as Google App Engine.

    For each of your virtual machines, you can select and configure your VM image and choose from a variety of hardware configurations, depending on your RAM, CPU, and disk space requirements. You have administrator privileges on your machine and can install new packages and configure it as you like, with very few restrictions.

    What Google Compute Engine is Best For

    Google Compute Engine is designed to run high performance, computationally intensive virtual machine arrays in the Google Cloud. It is designed for network and computation intensive data processing operations and can be used alone for computationally intensive tasks. However, it is best used as a generalized computing resource in combination with other Google Cloud technologies such as Google App Engine or Google Storage.

    What Google Compute Engine is (Currently) Not Good For

    As Google Compute Engine is currently in beta, there are some operating restrictions. As such, you should keep the following points in mind:

    • We don't recommend using the system to run production workloads, or to store important data.
    • We don't recommend using the service for workloads that require highly available, long running individual jobs.
    • There will be periods where some zones will be out of service for update and maintenance.

    Accessing Google Compute Engine

    The most common way to manage Google Compute Engine resources is using the command-line tool.

    To administer or manage instances, you can also ssh in directly without the command-line tool, although the command-line tool makes it easier to ssh in to your instances.

    Note: Google Compute Engine does not guarantee 100% uptime, so you should take steps to make sure that your service can easily regenerate the state on an instance whenever a failure occurs. If you do not, your service will be adversely affected when your instances lose their state and scratch data. For more information, see tips on designing robust systems.

    This is a beta preview of the product; the specifications and usage may change. Additionally, you will be required to recreate your instances periodically to support maintenance windows and software upgrades.

    Get Started

    Google Compute Engine usage is currently by invitation only. If you have been invited to use compute, here is how we suggest you get started:

    1. Try the Hello World example.
    2. Read the Google Compute Engine Overview page, and the child topics for details about the components of a virtual machine on Google Compute Engine.
    3. Read the Developer's Guide to learn the different ways to manage and configure machines.

    Page: pricing

    Google Compute Engine charges for usage on a monthly basis, using the following price sheet. A bill is sent out at the beginning of each month for the previous month's usage.

    Prices are effective April 1, 2014.

    Machine Type Pricing

    Google Compute Engine currently offers the following machine types in the US, Europe, and Asia. The billing model for machine types is listed below, but Compute Engine also provides automatic discounts off these prices for sustained use. You can also use our Google Cloud Pricing Calculator to better understand price for different configurations.

    Machine Type Billing Model

    1. All machine types are charged a minimum of 10 minutes.

      For example, if you run your virtual machine for 2 minutes, you will be billed for 10 minutes of usage.

    2. After 10 minutes, instances are charged in 1 minute increments, rounded up to the nearest minute.

      An instance that lives for 11.25 minutes will be charged for 12 minutes of usage.
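
    As a rough illustration of the billing model above, the following sketch computes the number of billed minutes and the resulting charge for a single run; the function names and the example n1-standard-1 rate are assumptions made for this example, and actual charges are always determined by the price sheet.

    import math

    def billed_minutes(uptime_minutes):
        """Apply the 10-minute minimum, then round up to the nearest minute."""
        if uptime_minutes <= 0:
            return 0
        return max(10, int(math.ceil(uptime_minutes)))

    def instance_charge(uptime_minutes, hourly_rate):
        """Charge for one instance run at the given hourly rate (USD)."""
        return billed_minutes(uptime_minutes) * hourly_rate / 60.0

    # 2 minutes of uptime is billed as 10 minutes; 11.25 minutes is billed as 12.
    print(billed_minutes(2))                      # 10
    print(billed_minutes(11.25))                  # 12
    print(round(instance_charge(11.25, 0.07), 4)) # 12 minutes of an n1-standard-1 in the US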

    US

    Standard Machine Types

    Configuration | Virtual Cores | Memory (GB)¹ | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-standard-1 | 1 | 3.75 | 2.75 | 0 | $0.07
    n1-standard-2 | 2 | 7.50 | 5.50 | 0 | $0.14
    n1-standard-4 | 4 | 15 | 11 | 0 | $0.28
    n1-standard-8 | 8 | 30 | 22 | 0 | $0.56
    n1-standard-16 (Preview) | 16 | 60 | 44 | 0 | $1.12

    Shared-Core Machine Types

    Shared-core machine types are ideal for applications that don't require a lot of resources. Shared-core machine types are more cost-effective for running small, non-resource intensive applications than standard, high-memory or high-CPU instance types.

    f1-micro Bursting

    f1-micro machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    f1-micro | 1 | 0.60 | Shared CPU, not guaranteed | 0 | $0.013
    g1-small | 1 | 1.70 | 1.40 | 0 | $0.035

    High Memory Machine Types²

    High memory instances are ideal for tasks that require more memory relative to virtual cores.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-highmem-2 | 2 | 13 | 5.50 | 0 | $0.164
    n1-highmem-4 | 4 | 26 | 11 | 0 | $0.328
    n1-highmem-8 | 8 | 52 | 22 | 0 | $0.656
    n1-highmem-16 (Preview) | 16 | 104 | 44 | 0 | $1.312

    High CPU Machine Types³

    High CPU machine types are ideal for tasks that require more virtual cores relative to memory.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-highcpu-2 | 2 | 1.80 | 5.50 | 0 | $0.088
    n1-highcpu-4 | 4 | 3.60 | 11 | 0 | $0.176
    n1-highcpu-8 | 8 | 7.20 | 22 | 0 | $0.352
    n1-highcpu-16 (Preview) | 16 | 14.40 | 44 | 0 | $0.704

    Europe

    Standard Machine Types

    Configuration | Virtual Cores | Memory (GB)¹ | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-standard-1 | 1 | 3.75 | 2.75 | 0 | $0.077
    n1-standard-2 | 2 | 7.50 | 5.50 | 0 | $0.154
    n1-standard-4 | 4 | 15 | 11 | 0 | $0.308
    n1-standard-8 | 8 | 30 | 22 | 0 | $0.616
    n1-standard-16 (Preview) | 16 | 60 | 44 | 0 | $1.232

    Shared-Core Machine Types

    Shared-core machine types are ideal for applications that don't require a lot of resources. Shared-core machine types are more cost-effective for running small, non-resource intensive applications than standard, high-memory or high-CPU machine types.

    f1-micro Bursting

    f1-micro machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    f1-micro | 1 | 0.60 | Shared CPU, not guaranteed | 0 | $0.014
    g1-small | 1 | 1.70 | 1.40 | 0 | $0.0385

    High Memory Machine Types²

    High memory machine types are ideal for tasks that require more memory relative to virtual cores.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-highmem-2 | 2 | 13 | 5.50 | 0 | $0.18
    n1-highmem-4 | 4 | 26 | 11 | 0 | $0.36
    n1-highmem-8 | 8 | 52 | 22 | 0 | $0.72
    n1-highmem-16 (Preview) | 16 | 104 | 44 | 0 | $1.44

    High CPU Machine Types³

    High CPU machine types are ideal for tasks that require more virtual cores relative to memory.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-highcpu-2 | 2 | 1.80 | 5.50 | 0 | $0.096
    n1-highcpu-4 | 4 | 3.60 | 11 | 0 | $0.192
    n1-highcpu-8 | 8 | 7.20 | 22 | 0 | $0.384
    n1-highcpu-16 (Preview) | 16 | 14.40 | 44 | 0 | $0.768

    Asia

    Standard Machine Types

    Configuration | Virtual Cores | Memory (GB)¹ | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-standard-1 | 1 | 3.75 | 2.75 | 0 | $0.077
    n1-standard-2 | 2 | 7.50 | 5.50 | 0 | $0.154
    n1-standard-4 | 4 | 15 | 11 | 0 | $0.308
    n1-standard-8 | 8 | 30 | 22 | 0 | $0.616
    n1-standard-16 (Preview) | 16 | 60 | 44 | 0 | $1.232

    Shared-Core Machine Types

    Shared-core machine types are ideal for applications that don't require a lot of resources. Shared-core machine types are more cost-effective for running small, non-resource intensive applications than standard, high-memory or high-CPU machine types.

    f1-micro Bursting

    f1-micro machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    f1-micro | 1 | 0.60 | Shared CPU, not guaranteed | 0 | $0.014
    g1-small | 1 | 1.70 | 1.40 | 0 | $0.0385

    High Memory Machine Types²

    High memory machine types are ideal for tasks that require more memory relative to virtual cores.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-highmem-2 | 2 | 13 | 5.50 | 0 | $0.18
    n1-highmem-4 | 4 | 26 | 11 | 0 | $0.36
    n1-highmem-8 | 8 | 52 | 22 | 0 | $0.72
    n1-highmem-16 (Preview) | 16 | 104 | 44 | 0 | $1.44

    High CPU Machine Types³

    High CPU machine types are ideal for tasks that require more virtual cores relative to memory.

    Configuration | Virtual Cores | Memory (GB) | GCEUs | Local Disk (GB) | Price (USD)/Hour
    n1-highcpu-2 | 2 | 1.80 | 5.50 | 0 | $0.096
    n1-highcpu-4 | 4 | 3.60 | 11 | 0 | $0.192
    n1-highcpu-8 | 8 | 7.20 | 22 | 0 | $0.384
    n1-highcpu-16 (Preview) | 16 | 14.40 | 44 | 0 | $0.768

    ¹ 1 GB is defined as 2^30 bytes.
    ² High memory machine types have 6.50 GB of RAM per virtual core.
    ³ High CPU machine types have one virtual core for every 0.90 GB of RAM.

    Sustained Use Discounts

    If you run an instance for a significant portion of the billing month, you may qualify for a sustained use discount. When you use an instance for more than 25% of a month, Compute Engine automatically gives you a discount for every incremental minute you use for that instance. The discount increases with usage and you can get up to a 30% net discount for instances that run the entire month. Sustained use discounts are calculated and applied to your bill at the end of the month.

    For computing sustained use, equivalently provisioned machines running non-concurrently are treated as Inferred Instances. This gives you the flexibility to start and stop instances freely and still receive the maximum sustained use discount available across all your instances.

    Sustained use discounts are given on incremental use after certain usage thresholds are reached. This means that you still only pay for the hours (or minutes!) that you use an instance for, and Compute Engine automatically gives you the best price. There’s no reason to run an instance for longer than you need it.

    The table below describes the discount at each usage level. These discounts apply for all instance types.

    Usage Level (% of month) | % at which incremental is charged | Example incremental rate (USD/per hour) for an n1-standard-1 instance
    0%-25% | 100% of base rate | $0.07
    25%-50% | 80% of base rate | $0.056
    50%-75% | 60% of base rate | $0.042
    75%-100% | 40% of base rate | $0.028

    For example, if you run an n1-standard-1 instance for 75% of the month, your charges are calculated as follows:

    • The first 25% will be charged at the full on-demand rate.
    • The next 25% will be charged at a 20% discount off the on-demand rate.
    • The last 25% will be charged at a 40% discount off the on-demand rate.

    For this example, sustained use discounts resulted in a net discount of 20% for this instance. The table below demonstrates this example:

    Usage level (% of month) for an n1-standard-1 instance % of base rate usage charge Example rate (USD/per hour) Calculated charges, assuming 30-day month
    First 0-25% of usage 100% $0.07 (base rate)
    • 30 days x .25 = 7.5 days
    • 7.5 days x 24 hours = 180 hours
    • 180 hours x $0.07 USD/per hour = $12.60
    Usage between 25%-50% of the month 80% $0.056
    • 180 hours x $0.056 USD/per hour = $10.08
    Usage between 50%-75% of the month 60% $0.042
    • 180 hours x $0.042 USD/per hour = $7.56

    The total charge for this instance is $12.60 + $10.08 + $7.56 = $30.24.

    This charge will appear on your bill at the end of the month. Without sustained use discounts, the same n1-standard-1 instance running for the same amount of time would cost $37.80 (the 20% net discount saves $7.56):

    • 30 days per month x .75 = 22.5 days
    • 22.5 days x 24 hours per day = 540 hours
    • 540 hours x $0.07 USD/per hour = $37.80
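
    The worked example above can be reproduced with a short calculation. The sketch below applies the tiered incremental rates from the discount table to an arbitrary fraction of a 30-day month; the function name and the 720-hour month are assumptions made for illustration, not part of the billing system.

    # Incremental rate multipliers from the sustained use discount table.
    TIERS = [(0.25, 1.00), (0.50, 0.80), (0.75, 0.60), (1.00, 0.40)]

    def monthly_charge(fraction_of_month, base_hourly_rate, hours_in_month=720):
        """Charge for one inferred instance running a fraction of a 30-day month."""
        charge = 0.0
        lower = 0.0
        for upper, multiplier in TIERS:
            if fraction_of_month <= lower:
                break
            used = min(fraction_of_month, upper) - lower
            charge += used * hours_in_month * base_hourly_rate * multiplier
            lower = upper
        return charge

    # n1-standard-1 (US) running 75% of a 30-day month:
    # 180h at $0.07 + 180h at $0.056 + 180h at $0.042 = $30.24
    print(round(monthly_charge(0.75, 0.07), 2))  # 30.24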

    The following graph demonstrates how your effective discount increases with use.

    For example, if you use a virtual machine for 50% of the month, you get an effective discount of 10%; if you use it for 75% of the month, you get an effective discount of 20%; and if you use it for 100% of the month, you get an effective discount of 30%. You can also use the Google Cloud Pricing Calculator to estimate your sustained use discount for any arbitrary workload.

    Viewing Sustained Use Discounts

    Sustained use discounts will automatically appear on your bill at the end of each month as separate line items that are credited to your account. You can view this discount in the Google Developers Console billing history. These discounts will appear as credits, with “Sustained Use” in the description. For example, the billing history might look like the following for an n1-standard-1 instance with sustained use discounts:

    Date | Description | Debits ($) | Credits ($)
    Mar 1 - Mar 31 | Google Compute Standard Intel N1 1 VCPU running in NA: 13567 Minutes (Project:1234567895) | $37.80 |
    Mar 1 - Mar 31 | Google Compute Standard Intel N1 1 VCPU running in NA Sustained Use Discount: 3381 Minutes (Project:1234567895) | | $7.56

    Inferred Instances

    When computing sustained use discounts, Compute Engine will give you the maximum discount available using inferred instances. An inferred instance combines multiple, non-overlapping instances of the same instance type in the same zone into a single instance for billing. With inferred usage, you are more likely to qualify for sustained use discounts.

    The example below shows how a customer’s usage (in this case, five distinct instances) is combined to find the smallest number of simultaneously running instances (in this example, three instances), each with the longest possible duration. We call each of these an “inferred instance”. Compute Engine then calculates sustained use discounts based on the percentage of the time each of these inferred instances was running.

    [Diagram: inferred instances]

    Instance Uptime

    Instance uptime is measured as the number of minutes between when you start an instance and when you stop it, that is, when the instance state becomes TERMINATED. In some cases, your instance may suffer a failure and be marked as TERMINATED by the system; in these cases, you will not be charged for usage once the instance reaches the TERMINATED state. If an instance is idle but still has a state of RUNNING, it is charged for instance uptime. The easiest way to determine the status of an instance is to run gcutil listinstances --project=<project-id> or to visit the Google Developers Console.

    Instance uptime is rounded up to the nearest minute. For example, if you run two separate n1-standard-1 virtual instances for 14.5 minutes each, you will be billed for 15 minutes per instance, at the n1-standard-1 machine type pricing.

    Note that Google Compute Engine bills for a minimum of 10 minutes of usage, so if you run an instance for 2 minutes of uptime, you are billed for 10 minutes. After 10 minutes, your instance is billed on a per minute basis. For more information, see the billing model.

    Premium Operating Systems

    Pricing for premium operating systems differs based on the machine type where the premium operating system image is used. For example, an f1-micro instance will be charged $0.02 per hour for a SUSE image, while an n1-standard-8 instance will be charged $0.11 per hour. All prices for premium operating systems are in addition to charges for using a machine type. For example, the total price of using an n1-standard-8 instance (US pricing) with a SUSE image would be the sum of the machine type cost and the image cost:

    n1-standard-8 cost + image cost = $0.56 + $0.11 = $0.67 per hour

    Note: Pricing for premium operating systems is the same worldwide and does not differ based on zones or regions, as machine type prices do.

    Red Hat Enterprise Linux (RHEL) images

    • $0.06 USD/per hour for instance types with less than 8 virtual cores
    • $0.13 USD/per hour for instance types with 8 virtual cores or more

    All RHEL images are charged a 1 hour minimum. After 1 hour, RHEL images are charged by 1 hour increments, rounded up to the nearest hour. For example, if you run a RHEL image for 45 minutes, you will be billed for an hour. If you run an RHEL image for 63 minutes, you will be billed for 2 hours.

    SUSE images

    • $0.02 USD/per hour for f1-micro and g1-small machine types
    • $0.11 USD/per hour for all other machine types

    All SUSE images are charged a 10 minute minimum. After 10 minutes, SUSE images are charged by 1 minute increments, rounded up to the nearest minute.

    Windows Server images

    Note: Google Compute Engine does not currently charge for Windows Server images but will begin charging on May 1st, 2014, using the prices below.

    • $0.02 USD/per hour for f1-micro and g1-small machine types
    • $0.04 USD per core/hour for all other machine types

    Standard machine types, high CPU machine types, and high memory machine types are charged based on the number of cores. For example, n1-standard-4, n1-highcpu-4, and n1-highmem-4 are 4-core machines, and are charged at $0.16 USD/per hour (4 x $0.04 USD/per hour).

    All Windows images are charged a 10 minute minimum. After 10 minutes, Windows images are charged by 1 minute increments, rounded up to the nearest minute.
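
    To make the three sets of rules above concrete, here is a rough sketch of how a per-run image surcharge could be computed; the function, its parameters, and the use of the prices listed above are illustrative assumptions rather than an official billing formula.

    import math

    def premium_os_charge(os_name, machine_type, cores, uptime_minutes):
        """Approximate image surcharge (USD) for one run, per the rules above."""
        shared_core = machine_type in ('f1-micro', 'g1-small')
        if os_name == 'rhel':
            # 1-hour minimum, then billed in 1-hour increments, rounded up.
            hours = max(1, int(math.ceil(uptime_minutes / 60.0)))
            rate = 0.06 if cores < 8 else 0.13
            return hours * rate
        # SUSE and Windows: 10-minute minimum, then per-minute, rounded up.
        minutes = max(10, int(math.ceil(uptime_minutes)))
        if os_name == 'suse':
            rate = 0.02 if shared_core else 0.11
        elif os_name == 'windows':
            rate = 0.02 if shared_core else 0.04 * cores
        else:
            return 0.0  # non-premium images carry no surcharge
        return minutes * rate / 60.0

    # A 4-core Windows instance is charged 4 x $0.04 = $0.16 per hour.
    print(round(premium_os_charge('windows', 'n1-standard-4', 4, 60), 2))  # 0.16
    # A RHEL image run for 63 minutes is billed for 2 hours.
    print(premium_os_charge('rhel', 'n1-standard-1', 1, 63))  # 0.12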

    Network Pricing

    General Pricing

    Traffic type | Price
    Ingress | No charge
    Egress to the same Zone | No charge
    Egress to different Google Cloud service within the same Region | No charge
    Egress to Google products (such as YouTube, Maps, Drive)* | No charge
    Egress to a different Zone in the same Region (per GB) | $0.01
    Egress to a different Region within the US (per GB) | $0.01*
    Inter-continental Egress | At internet egress rate

    *Promotional pricing

    Internet Egress (Americas/EMEA Destination) | Price (per GB)
    0-1 TB | $0.12
    1-10 TB | $0.11
    10+ TB | $0.08

    Internet Egress (APAC Destination) | Price (per GB)
    0-1 TB | $0.21
    1-10 TB | $0.18
    10+ TB | $0.15
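
    As an illustration of how the egress tiers combine, the sketch below prices a month of Americas/EMEA egress; the function name, the assumption that 1 TB = 1024 GB, and the per-GB rates copied from the table are for this example only.

    # (tier ceiling in GB, price per GB) for Americas/EMEA destinations.
    EGRESS_TIERS = [(1024, 0.12), (10240, 0.11), (float('inf'), 0.08)]

    def egress_cost(total_gb):
        """Cost of internet egress, charging each tier only for the GB that fall in it."""
        cost, floor = 0.0, 0
        for ceiling, price in EGRESS_TIERS:
            if total_gb <= floor:
                break
            cost += (min(total_gb, ceiling) - floor) * price
            floor = ceiling
        return cost

    # 2 TB of egress: the first 1 TB at $0.12/GB, the next 1 TB at $0.11/GB.
    print(round(egress_cost(2048), 2))  # 1024*0.12 + 1024*0.11 = 235.52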

    Load Balancing and Protocol Forwarding

    Type | Hourly Service Charge (US$) | Price (per GB) of Data Processed (US$)
    US | $0.025 (up to 5 rules* included); $0.010 for each additional rule* | $0.008
    Europe | $0.028 (up to 5 rules* included); $0.011 for each additional rule* | $0.009
    Asia | $0.028 (up to 5 rules* included); $0.011 for each additional rule* | $0.009

    *A forwarding rule that is created for either load balancing or protocol forwarding counts towards this limit.

    Calculating Load Balancing and Protocol Forwarding Hourly Service Charges

    Compute Engine charges for each forwarding rule that is created for load balancing or protocol forwarding. A load balancing rule is a forwarding rule that is used to load balance a pool of instances, and a protocol forwarding rule is a forwarding rule that is used to route packets of selected protocols to a single virtual machine instance. The first 5 forwarding rules you create are covered by a flat charge of $0.025/per hour. For example, if you create one forwarding rule, you will be charged $0.025/per hour; if you have 3 forwarding rules, you will still be charged $0.025/per hour. However, if you have 10 rules, you will be charged:

    • 5 forwarding rules rules = $0.025/per hour
    • Each additional forwarding rule = $0.01/per hour

    So the calculation for 10 forwarding rules would be:

    $0.025 for 5 rules/per hour + 5 additional rules @ $0.01/per hour = $0.075/per hour

    For an example price calculation, see the Pricing Example.
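
    A minimal sketch of this calculation, assuming the US rates from the table above and a function name chosen only for this example:

    def forwarding_rule_hourly_charge(num_rules, base=0.025, extra=0.01):
        """Hourly charge: a flat fee covers the first 5 rules, then a per-rule fee applies."""
        if num_rules <= 0:
            return 0.0
        return base + max(0, num_rules - 5) * extra

    print(round(forwarding_rule_hourly_charge(3), 3))   # 0.025
    print(round(forwarding_rule_hourly_charge(10), 3))  # 0.025 + 5 * 0.01 = 0.075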

    Persistent Disk Pricing

    Persistent disks are charged for the amount of provisioned space per disk. Persistent disk I/O operations are included in the charges for provisioned space, and persistent disk performance grows linearly with the size of the persistent disk volume, so you may want to create a larger or smaller persistent disk to account for your I/O needs. For more information, see the persistent disk documentation and the v1 transition guide.

    Persistent disks are prorated based on a granularity of seconds. For example, a 200GB volume would cost $8.00 for the whole month. If you only provisioned a 200GB volume for half a month, it would cost $4.00.
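
    A rough sketch of this proration, assuming the $0.04 per GB/month provisioned-space price from the table below and a helper name invented for this example:

    def provisioned_disk_cost(size_gb, days_provisioned, days_in_month=30,
                              price_per_gb_month=0.04):
        """Prorated charge for provisioned persistent disk space."""
        return size_gb * price_per_gb_month * (days_provisioned / float(days_in_month))

    print(provisioned_disk_cost(200, 30))  # 8.0 for a full month
    print(provisioned_disk_cost(200, 15))  # 4.0 for half a month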

    Once you successfully delete a persistent disk, you will no longer be charged for that disk. If your persistent disk isn't accessible due to maintenance events, we won't charge for the time during which your persistent disk was inaccessible.

    Persistent disk snapshots

    Persistent disk snapshots are only charged for the total size of the snapshot. For example, if you only used 2TB of disk space on a 5TB persistent disk, your snapshot size will be around 2TB, rather than the full 5TB of provisioned disk space. Google Compute Engine also provides differential snapshots, which means that after the initial snapshot, subsequent snapshots only contain data that has changed since the previous snapshot, providing for a generally lower cost for snapshot storage.

    Type Price
    Provisioned Space (per GB/month) $0.04
    Snapshot Storage (per GB/month) $0.125
    IO Operations No additional charge

    Image Storage

    Type Price
    Image Storage (per GB/month) $0.085

    IP Address Pricing

    Type Price/Hour
    Static IP Address (assigned but unused) $0.01
    Static IP Address (assigned and in use) No charge
    Ephemeral IP Address (attached to instance or forwarding rule) No charge

    Note: If you want to remove a static IP address from your project, you can do so using the Addresses collection.

    Viewing Usage

    The Google Developers Console provides a transaction history for each of your projects. This history describes your current balance and estimated resource usage for that particular project. To view a project's transaction history:

    1. Log into the Google Developers Console.
    2. Select the project for which you would like to see the current usage and balance.
    3. Click on Billing in the left-hand navigation menu.

    Pricing Calculator

    To help you understand how your bill is calculated, you can use our Google Cloud Pricing Calculator.

    Page: signup

    1. Sign up for a Google account.

      If you don't already have one, sign up for a Google account.

    2. Go to the Google Developers Console.
    3. Select your desired project to enable Google Compute Engine.

      If you want to create a new project, click on Create Project.

    4. Activate Google Compute Engine.

      Click on Compute Engine to activate the service.

    5. Set up billing.

      Google Compute Engine will prompt you to set up billing before you can use the service. It is not possible to use Google Compute Engine without activating billing.

    That's it! You can now start using Google Compute Engine.

    Using Google Compute Engine

    Now that you are signed up with Google Compute Engine, you can start using the service; see the getting-started and quickstart pages that follow.

    Page: getting-started

    This page describes some ways to get started using Google Compute Engine.

    Accessing Google Compute Engine

    There are several methods and tools you can use to interact with Google Compute Engine:

    • gcutil

      gcutil is a command-line tool that you can download and install on your machine and use to interact with Google Compute Engine. gcutil can perform any action that you can do using the RESTful API directly. It is an easy and simple way to start using Google Compute Engine.

      A majority of the examples and code snippets provided in the documentation use gcutil commands.

    • Google Developers Console

      Google Developers Console is a web application that lets you create and manage resources through an easy-to-use graphical interface.

    • Google APIs Client Libraries

      Use the Google APIs Client Libraries to build applications and tools to interact with the Google Compute Engine API.

    • The RESTful API

      Make requests to the API directly using REST methods.

    Authorizing Requests

    In order to access the Google Compute Engine API, you need to authenticate using OAuth 2.0. To learn how to authorize requests to Google Compute Engine from your applications, see Authorizing Requests to Google Compute Engine.

    Launching an Instance

    To start learning how to use Google Compute Engine, follow the Quickstart guide, which discusses how to use gcutil to launch an instance, set up firewalls, and start an Apache server.

    Page: quickstart

    It's easy to get started with Google Compute Engine. This quickstart provides step-by-step instructions, and by the end of it you should know how to:

    1. Start a virtual machine

      To start, you will create a virtual machine instance with a machine type, an image, and a root persistent disk.

    2. Configure a firewall to allow traffic to and from the Internet

      In step 2 of this exercise, you will create a firewall rule to allow external traffic to and from the Internet.

    3. Install an Apache server to serve web pages

      Next, you will ssh into your virtual machine and install Apache.

    4. Lastly and optionally, we'll discuss how to delete your resources once you're done with them.

    You can also watch the video on this page to walk you through the steps of this tutorial. The video covers the same concepts and shows you how to perform each step of this quickstart.

    This guide is intended as a beginner's tutorial to Google Compute Engine. If you would like to access Google Compute Engine programmatically, you might consider using the Python or JavaScript client libraries.

    Contents

    1. Setup
    2. Start an instance
    3. Add a firewall
    4. Log in to an instance
    5. Serve web pages
    6. Delete your instance and persistent disk
    7. Next steps

    Setup

    Before you can run this exercise, you must set up your environment as described here:

    1. Sign up for Google Compute Engine, if you haven't already.
    2. Download and Install the gcutil command line tool.
    3. If you didn't set your project ID in the installation procedure, you can still set a default gcutil project ID by running:
      $ gcutil getproject --project=<project-id> --cache_flag_values

      You were prompted to create a project ID when you enabled the service in the Google Developers Console. If your project belongs to a specific domain, your project ID would be in the form <domain>:<your-chosen-project-id>.

      Optional: For this tutorial, we are also using the --cache_flag_values flag, which tells gcutil to store all the flags in this command in a file at ~/.gcutil.flags. All subsequent gcutil commands can omit these flags, such as the --project flag, because gcutil reuses the initial flag value. If you do not include the --cache_flag_values flag, you need to provide all required flags for every gcutil action. The rest of this quickstart guide omits the --project flag because we have set the --cache_flag_values flag above.

      Similarly, you can also use the --cache_flag_values flag to store flags such as --zone and --machine_type, which might simplify instance creation if you plan to create many instances in the same zone or with the same machine types. You can always override the cached flag values by providing a new flag value on the command line:

      gcutil addinstance --zone=<new-zone> ...

    Create an instance

    First, you will need to create a new virtual machine instance. Your instances run on the Compute Engine infrastructure and you can ssh in to and configure them, install and run software, create a network, create a server farm, and much more.

    When you create an instance, Google Compute Engine will automatically create a root persistent disk for you, using the image that you specified. The root persistent disk stores the root filesystem and OS image that your instance needs to boot.

    Here are the steps to creating your first instance:

    1. Create and Start Your Instance

      To create an instance, you need to execute the addinstance command:

      gcutil addinstance my-first-instance --machine_type=n1-standard-2 \
      --image=debian-7 --zone=us-central1-a --wait_until_running --auto_delete_boot_disk

      where:

      • my-first-instance is a name that you choose for your instance. Your instance name must adhere to the restrictions described on the Instance resource page. For this example, use my-first-instance.
      • --machine_type=n1-standard-2 sets your desired machine types. A machine type determines the memory, number of virtual cores, and persistent disk limits that your instance will have.
      • --image=debian-7 sets the image and operating system to use. If you provide debian-7, gcutil automatically resolves the image to the latest debian-7 image. Image versions are marked by dates so the newest version of the image will be the one with the most recent date. An image contains the operating system and root file system that is necessary for starting an instance. Currently, Google Compute Engine offers Debian and CentOS images. This image will be applied to a root persistent disk which is automatically created if you do not specify a disk on instance creation. The newly-created persistent boot disk is named after your instance, in the format <instance-name>.
      • --zone=us-central1-a determines where your instance should live. In this example, the instance will be created in the us-central1-a zone.
      • --wait_until_running flag tells gcutil to block until the instance is running. By default, gcutil exits after you run the addinstance command even if your instance has not yet started running; you must then check the status of your instance and wait for the status to change to RUNNING before you can use it. To avoid this extra step, you can use the --wait_until_running flag, which forces the addinstance command to wait until the instance is running before it exits, indicating that the instance is ready to use.
      • --auto_delete_boot_disk tells Compute Engine that the associated root persistent disk should be deleted if the instance is deleted. This helps you remove unused resources and saves you an extra API call to delete the root disk separately. If you omit this flag, the default for this property is false.

      View a list of important flags and parameters for more information or run gcutil --help for the full list of available flags.

    2. Select a passphrase for ssh keys

      If this is your first time using gcutil to add an instance, you will be asked to create a passphrase to protect your ssh keys.

    3. (Optional) Check Instance Status

      After calling gcutil addinstance, Google Compute Engine will launch your instance. Before you can use your instance, however, you must wait for your instance to report its status as RUNNING. If you didn't use the --wait_until_running flag with the addinstance command, you can check on the status of the instance by querying Google Compute Engine. To check the status of your instance, execute the following command:

      $ gcutil getinstance my-first-instance
    4. Make note of your instance's external IP

      By default, your instance is assigned a new ephemeral external IP. Make note of this IP when you perform a getinstance command:

      $ gcutil getinstance my-first-instance
      +------------------------+---------------------+
      | name                   | my-first-instance   |
      | ...                    | ...                 |
      | description            |                     |
      |   ip                   | 10.207.85.170       |
      |   access-configuration | External NAT        |
      |   type                 | ONE_TO_ONE_NAT      |
      |   external-ip          | 192.158.28.53       |
      +------------------------+---------------------+
      

      You need this IP later to browse to your new web server.

    Add a firewall

    By default, Google Compute Engine blocks all connections to and from an instance to the Internet. To install Apache and serve web pages, you need to create a firewall rule that permits incoming HTTP traffic on port 80.

    Every project comes with two default firewalls:

    • A firewall that allows SSH access to any instance.
    • A firewall that allows all communication between instances in the same network.

    You must manually create a firewall rule that allows HTTP requests to your instance. For this example, create a new firewall using the following gcutil command:

    $ gcutil addfirewall http2 --description="Incoming http allowed." --allowed="tcp:http"
    INFO: Waiting for asynchronous operation to complete. Current status: RUNNING. Sleeping for 5s.
    
    +---------------------+------------------------------------------------+
    |      property       |                     value                      |
    +---------------------+------------------------------------------------+
    | name                | operation-1340145141786-4c2dadb1ed2a0-578a4b5e |
    | creation time       |                                                |
    | status              | DONE                                           |
    | progress            | 100                                            |
    | statusMessage       |                                                |
    | target              | http2                                          |
    | target id           | 12918719030692922187                           |
    | client operation id |                                                |
    | insertTime          | YYYY-MM-DDT22:32:21.786                        |
    | startTime           | YYYY-MM-DDT22:32:21.964                        |
    | endTime             | YYYY-MM-DDT22:32:22.976                        |
    | operationType       | insert                                         |
    | error code          |                                                |
    | error message       |                                                |
    +---------------------+------------------------------------------------+
    

    By performing this command, you have:

    • Created a new firewall named http2 that allows tcp:http traffic
    • Assigned the firewall to the default network in the project. Since you didn't specify a network for the firewall rule, it was automatically applied to the default network.
    • Allowed all sources inside and outside the network (including over the Internet) to make requests to the server. We didn't specify a permitted source for the firewall, so all sources are allowed to make requests to instances assigned to the default network (source 0.0.0.0/0 is the default setting, meaning all sources are allowed).
    • Applied this firewall rule to all instances on the network. Because we did not specify a target for your firewall, the firewall applies this rule to all instances in the network.

    To review information about your firewall at any time, perform a gcutil getfirewall request:

    $ gcutil getfirewall http2
    
    +---------------+-------------------------+
    |   property    |          value          |
    +---------------+-------------------------+
    | name          | http2                   |
    | description   | Incoming http allowed.  |
    | creation time | YYYY-MM-DDT22:32:22.347 |
    | network       | default                 |
    | source IPs    | 0.0.0.0/0               |
    | source tags   |                         |
    | target tags   |                         |
    | allowed       | tcp: 80                 |
    +---------------+-------------------------+

    Whenever you create a firewall, you can restrict the sources and targets to specific callers and instances using appropriate addfirewall flags. To see a complete list of supported flags, run gcutil help addfirewall. See Networking and Firewalls for more information about how networking works in Google Compute Engine.

    Log in

    The gcutil tool has a built-in ssh command that enables you to ssh into an instance using the instance name.

    To log in to your instance, execute the following command:

    $ gcutil ssh my-first-instance

    The SSH keys you created previously will be used to authenticate your SSH session. You should now be at the command prompt in your instance's home directory.

    Once you have logged in, you can do anything you could do on any other standard Linux machine, including installing applications. You have root permissions on your instance and full control over everything.

    If you need to log out of your instance, you can execute the following command:

    me@my-first-instance$ exit

    Serve web pages

    It is easy to configure your instance to serve HTTP requests from outside its network. Once your instance is running, install Apache on your instance, as follows.

    1. Install Apache HTTP Server

      You have full administrator privileges on any instance that you start in Google Compute Engine. Instances have access to package repositories by default.

      Debian

      Within your instance, run the following commands:

      me@my-first-instance$ sudo apt-get update
      me@my-first-instance$ sudo apt-get install apache2
      ...output omitted...
      CentOS

      1. Make sure the operating system firewall is disabled.
        # Save your iptable settings
        user@my-first-instance:~$ sudo service iptables save
        
        # Stop the iptables service
        user@my-first-instance:~$ sudo service iptables stop
        
        # Disable iptables on startup.
        user@my-first-instance:~$ sudo chkconfig iptables off
      2. Install Apache using the following commands:
        me@my-first-instance:~$ sudo yum install httpd
        ...
        Installed size: 3.6 M
        Is this ok [y/N]: y
        
        me@my-first-instance:~$ sudo service httpd start
        Starting httpd:                                            [  OK  ]
        
    2. Create a New Home Page

      The Apache web server comes with a default web page that we can overwrite to prove that we really control our server. ssh into your instance and execute the following command to overwrite the existing Apache home page:

      Debian
      me@my-first-instance$ echo '<!doctype html><html><body><h1>Hello World!</h1></body></html>' | sudo tee /var/www/index.html
      CentOS
      me@my-first-instance$ echo '<!doctype html><html><body><h1>Hello World!</h1></body></html>' | sudo tee /var/www/html/index.html

      You can also use the gcutil push command from your local terminal to copy files from your local machine to your instance. However, push does not allow you to save in a directory that requires root permissions, which is necessary in this scenario because the default location above for web server files requires root permissions. You can change the default location to somewhere that doesn't require root access, or open the existing file for edit as root. See the Apache HTTP Server Docs for more information.

    3. Browse to Your Home Page

      Browse to the external IP address listed previously when you called gcutil getinstance. The full URL of your page is http://<IP address>

    Delete your instance and root persistent disk

    When you are finished using your instance, you can delete it and the associated root persistent disk. Deleting an instance stops the virtual machine and removes it from the project.

    To delete your instance and root persistent disk, execute the following command:

    $ gcutil deleteinstance my-first-instance [--delete_boot_pd]

    When you create a persistent disk, it counts towards your persistent disk quota and also incurs monthly persistent disk charges. To make sure you're not charged for persistent disks that you aren't using, delete your persistent disks when you no longer need them. This can be done either by specifying the --auto_delete_boot_disk flag, as you did earlier, or by providing the --delete_boot_pd flag. If you don't provide either flag, gcutil prompts you to decide whether you want to delete the root persistent disk.

    If you wanted to keep your root persistent disk and skip the prompt, you can also provide the --nodelete_boot_pd flag instead.

    For more information about quotas, review the quota page. For information about persistent disk pricing, review the pricing page.

    Next Steps

    Congratulations! You've just run your first Google Compute Engine instance. Learn more about Google Compute Engine on the following pages:

    1. Read the Overview page to get an idea of the basic elements of the Google Compute Engine architecture.
    2. Read the details pages for the various resources including instances, networks and firewalls, and disks.
    3. Read the gcutil page to learn more about the tool.

    Page: authentication

    You can authenticate from Google Compute Engine to other Google services using OAuth 2.0 authentication. Depending on your use case, you can either perform the standard OAuth 2.0 flow or use service accounts, as described here.

    Not what you're looking for? Are you looking for documentation on how to authorize requests to Google Compute Engine?

    Overview

    When you write applications that need to talk to other Google services, you need to authenticate your application to these services before you can perform any operations or access any data. Often, you can authenticate these applications using the standard OAuth 2.0 flow, but Google Compute Engine makes this process simpler by providing service accounts.

    When you create a Google Compute Engine project, Google Compute Engine also creates a service account for that project. This service account authenticates the project to other Google services but you can also use the service account to authenticate your instances to other APIs. This is ideal for cases when you have applications running within instances that need to programmatically talk between services but don't need access to user data.

    For example, let's say you want to write an application that runs within a VM instance. This application saves and retrieves log files to and from Google Cloud Storage, but in order for the application to talk to Google Cloud Storage, it needs to authenticate to the Google Cloud Storage API. You can use the standard OAuth 2.0 flow (as briefly described in the table below) to do so, or you can save yourself several steps if you set up your instance to use service accounts. Your application can then authenticate seamlessly to Google Cloud Storage and other Google APIs without having to perform the full OAuth 2.0 flow, which is handled by the instance. Here is a table that briefly outlines the steps required for each authentication process:

    Standard OAuth 2.0 Flow:
    1. Set up your application to construct an OAuth 2.0 authorization request to the Google OAuth 2.0 authorization server.
    2. Log in and consent to the access.
    3. The server returns an authorization code.
    4. Exchange your authorization code for a token.
    5. Use the token to make requests to the API.

    Service Accounts:
    1. Create an instance with access to a service account.
    2. Set up your application to query the metadata server with a simple REST request to obtain a token.
    3. Use the token to make requests to the API.

    If you don't need access to user information, using a service account is a quicker, easier way to set up your application to access other APIs. This document discusses:

    • Using service accounts with your applications

      If you want to run an application from within your instance that needs to access other Google services, you can set up your application to use service accounts.

    • Using service accounts with tools and libraries

      When you set up a service account with your instance, it automatically allows tools and libraries within that instance, such as gsutil, to use the service account credentials to access the respective API, without requiring additional authentication on your end. See Using Service Accounts with Tools and Client Libraries for more information.

    Note: Google Compute Engine service accounts are different than Developers Console service accounts. For more information on how they are different and how to use Developers Console service accounts instead of Google Compute Engine service accounts, see Developers Console service accounts.

    Service Account Names

    All Google Compute Engine projects can have one Google Compute Engine service account associated with them and multiple Developers Console service accounts. In the API, the Google Compute Engine service account is referred to as the default account, but you can find out the actual service account name by querying the metadata server. Generally, the account name has the format:

    123845678986@project.gserviceaccount.com

    Preparing an Instance to Use Service Accounts

    Before you can use a service account for your application or tools, you must first set up your instance with a service account. To do so, provide the --service_account_scopes=<scope> flag during instance creation. The OAuth2 scope you provide determines the level of access your instance has to that service:

    gcutil --project=<project-id> addinstance <instance name> --service_account_scopes=<scope>

    Behind the scenes, Google Compute Engine uses these scopes to request a refresh token that it holds on to. Although you cannot directly access the refresh token yourself, you can use the metadata server to request an access token that you can use in requests.

    Note: A Google Compute Engine project can only hold a certain number of refresh tokens. If you reach this limit, you will need to free up refresh tokens before you can create additional instances with service account scopes. For more information, see Service Account Token Limits.

    For example, you can create a new instance and authorize it to use a service account to access Google Cloud Storage with full control like so:

    gcutil --project=<project-id> addinstance <instance name> \
      --service_account_scopes=https://www.googleapis.com/auth/devstorage.full_control

    To specify multiple scopes, so your instance can access multiple services, provide a list of comma-separated scopes:

    gcutil --project=<project-id> addinstance <instance name> \
      --service_account_scopes=https://www.googleapis.com/auth/devstorage.full_control,https://www.googleapis.com/auth/compute

    Keep in mind the following as you're setting up your instance to use service accounts:

    • Once you have created an instance with a service account and specified scopes, you cannot change or expand the list of scopes.

      You should specify all desired scopes at instance creation time.

    • Each Google service has its own unique scopes and you'll need to find the right scope for your desired service.

      Generally, scopes are listed in the API documentation of the service. For example, see Google Cloud Storage Developer's Guide for more specific scope information.

    • Currently, it is only possible to authorize your instance for the service account of the project it belongs to.

      It is not possible to specify multiple service accounts or service accounts of other projects.

    Service Account Scope Aliases

    Usually, OAuth 2.0 scopes are long URIs that can be difficult to remember. For example, the Google Cloud Storage OAuth 2.0 scopes all begin with the URI:

    https://www.googleapis.com/auth/devstorage.<scope>

    For convenience, gcutil provides a list of aliases that you can use in place of the longer scope URIs. Here is the current list of available aliases:

    Service | Full Scope | Alias | Description
    Google BigQuery | https://www.googleapis.com/auth/bigquery | bigquery | Access to Google BigQuery API
    Google Cloud SQL | https://www.googleapis.com/auth/sqlservice | cloudsql | Access to Google Cloud SQL API
    Google Compute Engine | https://www.googleapis.com/auth/compute.readonly | compute-ro | Read-only access to Google Compute Engine
    Google Compute Engine | https://www.googleapis.com/auth/compute | compute-rw | Read-write access to Google Compute Engine
    Google Cloud Storage | https://www.googleapis.com/auth/devstorage.read_only | storage-ro | Read-only access to Google Cloud Storage
    Google Cloud Storage | https://www.googleapis.com/auth/devstorage.read_write | storage-rw | Read-write access to Google Cloud Storage
    Google Cloud Storage | https://www.googleapis.com/auth/devstorage.full_control | storage-full | Full access to Google Cloud Storage
    Google App Engine Task Queue | https://www.googleapis.com/auth/taskqueue | taskqueue | Access to Google App Engine Task Queue API

    Specify the alias the same way you would specify the normal scope URI:

    gcutil --project=<project-id> addinstance <instance name> \
      --service_account_scopes=storage-full

    Note: These aliases are only recognized by gcutil and aren't recognized by the API or other libraries and tools. For those situations, you would need to specify the full scope URI.

    Using Service Accounts with Applications

    You can use service accounts with your applications to authenticate to other Google services. To do so:

    1. Make sure your instance has been set up to use a service account

      If not, follow the instructions provided above under Preparing an Instance to Use Service Accounts.

    2. Query the metadata server for an access token

      Once you have created your instance, you need to query the metadata server for an access token. Here's how to do so using curl:

      curl "http://metadata/computeMetadata/v1/instance/service-accounts/default/token" -H "X-Google-Metadata-Request: True"

      Your token automatically has access to any of the scopes you specified when you initially started the instance. It is not possible to ask for a token outside of the scopes you originally indicated.

      The server returns something similar to the following:

      {
        "access_token":"ya29.AHES6ZRN3-HlhAPya30GnW_bHSb_QtAS08i85nHq39HE3C2LTrCARA",
        "expires_in":3599,
        "token_type":"Bearer"
      }
    3. Use the access token in requests to the desired API

      Use the access_token provided by the metadata server to perform requests to the API. Note that you can't use the access token for a scope that you didn't specify. For example, you can not use the access token above to access the Google Prediction API because the scope was only authorized for the Google Cloud Storage API and the Google Compute Engine API.

    As a security feature, access tokens are designed to expire after a short period of time. Once an access token reaches 5 minutes of its expiration window, the metadata server no longer caches it. Within the 5 minute expiration window, you can request a new access token to use.

    If you need a refresh token, which never expires and is used to request new access tokens, you need to perform the standard OAuth 2.0 flow.

    Python Example

    Here is a basic Python example that demonstrates how to request a token and access the Google Cloud Storage API from within an instance. This sample:

    1. Connects to the metadata server
    2. Provides a scope for access (in this case, the scope is for read-write access to Google Cloud Storage)
    3. Queries the metadata server
    4. Extracts the access token from the response
    5. Uses the access token to make a request to Google Cloud Storage

    Note that we didn't need to construct an OAuth 2.0 authorization request to the Google OAuth 2.0 authorization server.

    #!/usr/bin/python
    
    import json
    import urllib
    import httplib2
    
    METADATA_SERVER = 'http://metadata/computeMetadata/v1/instance/service-accounts'
    SERVICE_ACCOUNT = 'default'
    GOOGLE_STORAGE_PROJECT_NUMBER = 'YOUR_GOOGLE_STORAGE_PROJECT_NUMBER'
    
    """This script grabs a token from the metadata server, and queries the
    Google Cloud Storage API to list buckets for the desired project."""
    
    def main():
      token_uri = '%s/%s/token' % (METADATA_SERVER, SERVICE_ACCOUNT)
      http = httplib2.Http()
      resp, content = http.request(token_uri, method='GET', body=None, headers={'X-Google-Metadata-Request': 'True'}) # Make request to metadata server
    
      if resp.status == 200:
        d = json.loads(content)
        access_token = d['access_token'] # Save the access token
    
        # Construct the request to Google Cloud Storage
        resp, content = http.request('https://storage.googleapis.com', \
                                      body=None, \
                                      headers={'Authorization': 'OAuth ' + access_token, \
                                               'x-goog-api-version': '2', \
                                               'x-goog-project-id': GOOGLE_STORAGE_PROJECT_NUMBER })
    
        if resp.status == 200:
           print content
        else:
           print resp.status
    
      else:
        print resp.status
    
    if __name__ == '__main__':
      main()

    Using Service Accounts with Tools and Client Libraries

    By default, tools that are provided in your instance, such as gsutil and gcutil, recognize and use a service account if it is available. For gsutil (and access to Google Cloud Storage), use one of the scopes defined at the Google Cloud Storage developer's guide and for gcutil, use one of the following scopes:

    Scope | Meaning
    https://www.googleapis.com/auth/compute | Read-write access to Google Compute Engine methods.
    https://www.googleapis.com/auth/compute.readonly | Read-only access to Google Compute Engine methods.

    This only applies to tools that are provided automatically with the instance. If you created new tools, or added custom tools, you need to authorize your application as described in Using Service Accounts with your Application.

    Checking if an Instance Uses Service Accounts

    To find out if an instance is using service accounts, you can do either of the following:

    • Query the metadata server from within that instance

      To query the metadata server from within the instance itself, make a request to http://metadata/computeMetadata/v1/instance/service-accounts/. Here's how you would use curl to do so:

      user@myinst:~$ curl "http://metadata/computeMetadata/v1/instance/service-accounts/" -H "X-Google-Metadata-Request: True"
      761881434886@project.gserviceaccount.com/
      default/
    • Use gcutil from your local machine

      Run the following command from your local machine:

      gcutil getinstance <instance-name> --print_json --project=<project-id>

    The response returned from either method should contain the following:

    {
       "serviceAccounts":[
          {
             "email":"123845678986@project.gserviceaccount.com",
             "scopes":[
                "https://www.googleapis.com/auth/devstorage.full_control"
             ]
          }
       ]
    }
    • If your instance is using a Google Compute Engine service account:

      The service account name will appear in the response. In this example, the service account 123845678986@project.gserviceaccount.com is associated with the Google Cloud Storage scope.

    • If your instance isn't using a Google Compute Engine service account:

      You will receive an empty serviceAccounts list.
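
    If you would rather run this check from code on the instance instead of curl, here is a minimal Python sketch that queries the same metadata endpoint and header shown above; the script is purely illustrative and not part of any provided tooling.

    import httplib2

    METADATA_URI = 'http://metadata/computeMetadata/v1/instance/service-accounts/'


    def list_service_accounts():
      http = httplib2.Http()
      # The metadata server requires this header on every request.
      resp, content = http.request(METADATA_URI, method='GET',
                                   headers={'X-Google-Metadata-Request': 'True'})
      if resp.status == 200:
        # Each non-empty line is a service account entry, e.g. "default/".
        return [line for line in content.splitlines() if line]
      return []


    if __name__ == '__main__':
      accounts = list_service_accounts()
      if accounts:
        print 'Service accounts on this instance:', accounts
      else:
        print 'No service accounts are associated with this instance.'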

    Back to top

    Service Account Token Limits

    When you set up an instance to use a service account, Google Compute Engine automatically creates OAuth2 refresh tokens for your service account to use for authenticating to other Google services. Google Compute Engine creates OAuth2 refresh tokens for each VM instance that uses a unique set of {service_account, service_account_scopes} and reuses refresh tokens for each VM instance that uses an existing set of {service_account, service_account_scopes}. For example, if you have three instances:

    Instance name service_account service_account_scopes
    instance1 default scope1, scope2
    instance2 default scope2, scope1
    instance3 default scope1, scope2, scope3

    Google Compute Engine creates two refresh tokens: one for the set {default, scope1, scope2} and one for the set {default, scope1, scope2, scope3}. Google Compute Engine does not create an additional refresh token for instance2 because service_account_scopes are unordered sets, so {scope1, scope2} is equivalent to {scope2, scope1} and can use the same refresh token.

    There is a limit to the total number of refresh tokens that your service account can have at any one point in time. Currently, this limit is 600. If this limit is reached, Google Compute Engine will not be able to create an instance that requires a new refresh token, and you will get a SERVICE_ACCOUNT_TOO_MANY_TOKENS error. For example, if you have reached the refresh token limit and you attempt to create an instance with a new, unique set {default, scope1, scope2, scope3, scope4}, the action fails and you receive the SERVICE_ACCOUNT_TOO_MANY_TOKENS error.

    However, if Google Compute Engine already has a refresh token for an existing set, you can create new instances with that same set and they will reuse the same refresh token. For example, if you create an instance4 with the same set as instance1 {default, scope1, scope2}, the instance creation succeeds because you are not creating a new refresh token.
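
    To make the set arithmetic concrete, here is a small, illustrative Python sketch (using the hypothetical instances from the table above) that counts how many distinct {service_account, service_account_scopes} sets, and therefore refresh tokens, a group of instance configurations requires.

    # Hypothetical instance configurations from the table above.
    instances = {
        'instance1': ('default', ['scope1', 'scope2']),
        'instance2': ('default', ['scope2', 'scope1']),
        'instance3': ('default', ['scope1', 'scope2', 'scope3']),
    }

    # service_account_scopes are an unordered set, so {scope1, scope2} is
    # the same set as {scope2, scope1} and shares a refresh token.
    distinct_sets = set()
    for account, scopes in instances.values():
      distinct_sets.add((account, frozenset(scopes)))

    print 'Refresh tokens required:', len(distinct_sets)  # Prints: Refresh tokens required: 2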

    How to Free Up Refresh Tokens

    If you reach the refresh token limit and cannot create new refresh tokens, you must delete some refresh tokens associated with the service account. To do so, delete Google Compute Engine instances that are using the service account, in order to reduce the number of distinct service_account_scopes sets used by the service account.

    Note: If you add the “default” service account as a member to other projects, and it has created refresh tokens in those projects, you may have to track down and delete instances in those projects. In addition, it is possible that other Google Services may be using the service account, in which case you must track down those uses and free them.

    Back to top

    Google Developers Console Service Accounts

    Currently, Google offers two different types of service accounts that you can use. We are working on clarifying the terminology here, but until then, here is how the two types of service accounts differ:

    • Developers Console Service Accounts

      Developers Console service accounts let you write applications that can authenticate themselves to a compatible Google API using a JSON Web Token (JWT). The JWT can be used to acquire OAuth2 access tokens that can access Google APIs. You can write applications on Google Compute Engine that use Developers Console service accounts by following the Developers Console Service Accounts documentation.

    • Google Compute Engine Service Accounts

      Google Compute Engine service accounts work without the need to create or manage a JWT. Instead, programs running within your Google Compute Engine instances can automatically acquire OAuth2 access tokens with credentials to access any service in your Google API project (and any other services that have granted access to that service account).

    Page: authorization

    About authorization protocols

    Your application must use OAuth 2.0 to authorize requests. No other authorization protocols are supported.

    Authorizing requests with OAuth 2.0

    The details of the authorization process, or "flow," for OAuth 2.0 vary somewhat depending on what kind of application you're writing. The following general process applies to all application types:

    1. When you create your application, you register it using the Google Developers Console. Google then provides information you'll need later, such as a client ID and a client secret.
    2. Activate the Google Compute Engine API in the Google Developers Console. (If the API isn't listed in the Developers Console, then skip this step.)
    3. When your application needs access to user data, it asks Google for a particular scope of access.
    4. Google displays a consent screen to the user, asking them to authorize your application to request some of their data.
    5. If the user approves, then Google gives your application a short-lived access token.
    6. Your application requests user data, attaching the access token to the request.
    7. If Google determines that your request and the token are valid, it returns the requested data.

    Some flows include additional steps, such as using refresh tokens to acquire new access tokens. For detailed information about flows for various types of applications, see Google's OAuth 2.0 documentation.

    Here's the OAuth 2.0 scope information for the Google Compute Engine API:

    Scope Meaning
    https://www.googleapis.com/auth/compute Read-write access to Google Compute Engine methods.
    https://www.googleapis.com/auth/compute.readonly Read-only access to Google Compute Engine methods.

    To request access using OAuth 2.0, your application needs the scope information, as well as information that Google supplies when you register your application (such as the client ID and the client secret).
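
    As a minimal sketch of the first step in the standard web-server flow, the snippet below constructs the URL that sends a user to Google's OAuth 2.0 endpoint with the read-write Compute Engine scope. The client ID and redirect URI are placeholders you would replace with the values from your own registration, and in practice the client libraries mentioned in the tip below can build this URL for you.

    import urllib

    OAUTH2_AUTH_URI = 'https://accounts.google.com/o/oauth2/auth'

    params = {
        'response_type': 'code',  # Ask for an authorization code.
        'client_id': 'YOUR_CLIENT_ID',  # Supplied when you register your application.
        'redirect_uri': 'https://example.com/oauth2callback',  # Placeholder.
        'scope': 'https://www.googleapis.com/auth/compute',  # Read-write Compute Engine scope.
    }

    # The user visits this URL, consents, and Google redirects back with a code
    # that your application exchanges for access and refresh tokens.
    print '%s?%s' % (OAUTH2_AUTH_URI, urllib.urlencode(params))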

    Tip: The Google APIs client libraries can handle some of the authorization process for you. They are available for a variety of programming languages; check the page with libraries and samples for more details.

    If you need to create and add custom images, you also need to provide Google Compute Engine with the scopes to Google Cloud Storage. Google Compute Engine uses Google Cloud Storage to store and retrieve images and can only do so using the proper Google Cloud Storage scopes. The Google Cloud Storage scopes are as follows:

    Scope Meaning
    https://www.googleapis.com/auth/devstorage.read_only Read-only access to Google Cloud Storage.
    https://www.googleapis.com/auth/devstorage.write_only Write-only access to Google Cloud Storage.
    https://www.googleapis.com/auth/devstorage.read_write Read and write access to Google Cloud Storage.
    https://www.googleapis.com/auth/devstorage.full_control Full control access to Google Cloud Storage.

    Page: overview

    At the core of Google Compute Engine are virtual machine instances that run on Google's infrastructure. Each virtual machine instance is considered an Instance resource and part of the Instance collection. When you create a virtual machine instance, you are creating an Instance resource that uses other resources, such as Disk resources, Network resources, Image resources, and so on. Each resource performs a different function. For example, a Disk resource functions as data storage for your virtual machine, similar to a physical hard drive, and a Network resource helps regulate traffic to and from your instances.

    All resources belong to the global, regional, or zonal plane. For example, images are a global resource so they can be accessed from all other resources. Static IPs are a regional resource, and only resources that are part of the same region can use the static IPs in that region. Google Cloud Platform resources are hosted in multiple locations world-wide. These locations are composed of regions and zones within those regions. Putting resources in different zones in a region provides isolation for many types of infrastructure, hardware, and software failures. Putting resources in different regions provides an even higher degree of failure independence. This allows you to design robust systems with resources spread across different control planes.


    Google Developers Console Projects

    Before you can start using Google Compute Engine, you must enable the service for a Google Developers Console project. The Developers Console is designed to be a one-stop shop for you to create and manage multiple API "projects" at once. Each project is a totally compartmentalized world. Projects do not share resources, can have different owners and users, are billed separately, and are managed separately.

    Once Google Compute Engine is enabled, any resources you create or use in Google Compute Engine belong to the project. It is possible to have many projects with Google Compute Engine enabled. To differentiate between them, Google Compute Engine requires that you always identify the project you're working in when interacting with the Google Compute Engine service.

    Identifying Projects

    In order to interact with Google Compute Engine resources, you must provide identifying project information for every request.

    A project can be identified in two ways: using a project ID or a project number. A project ID is the customized name you chose when you created the project, or when you activated an API that required you to create a project ID. It can be found in the Dashboard of the project.

    If you are creating a project within a certain domain, such as a company or organization, the Developers Console will automatically prepend your chosen project ID with the correct domain. For example, say you are creating a project that belongs to the domain example.com; when the Console prompts you, choose a name like you would normally. After you're done and you've saved your changes, notice that your project ID is now in the format:

    domain:project_id

    For example, if you chose the name mysampleproject, the full project ID would be:

    example.com:mysampleproject

    When you specify your project ID, you need to include the full project ID, including any domain. For example, when using gcutil, you can specify your project ID like so:

    gcutil addinstance mynewinstance --project=example.com:mysampleproject [--cache_flag_values]

    When you choose your project ID (or any resource names), avoid providing sensitive information in your names.

    Alternatively, you can also use the project number to identify your project to Google Compute Engine. Your project number is unique to the project and can be found in the URL of the project.

    Generally, we recommend using the project ID to identify your projects because it is easier to remember than the numeric ID.

    Project Team Members

    Projects have team members that can collaborate on and access the project to varying degrees. Team members can be added as an owner, editor, or viewer. Every project can have one or more owners, editors, and viewers. Depending on their role, team members can access Google Compute Engine resources for that project accordingly:

    • To add or modify Google Compute Engine resources in a project, you need to be an owner or editor of that project
    • To list information about certain resources within a project, you need to be a viewer, owner, or editor of that project

    To add team members to a project, see Managing project members.

    For more information, see the Projects documentation.

    Global Resources

    Global resources are accessible by any resource in any zone within the same project. When you create a global resource, you do not need to provide a scope specification. Global resources include:

    • Images - Images can be used by any instance or disk resource in the same project as the image. Google also provides preconfigured images that you can use to boot your instance, or you can customize an image to use instead.
    • Snapshots - Persistent disk snapshots are available to all disks within the same project as the snapshot.
    • Network - A network can be used by any instance in the same project.
    • Firewalls - Firewalls apply to a single network, but are considered a global resource because they can be used by any network in the same project.
    • Routes - Routes allow you to create complicated networking scenarios by letting you manage how traffic destined for a certain IP range should be routed, similar to how a router directs traffic within a local area network. Routes apply to networks within a Google Compute Engine project and are considered a global resource.
    • Global Operations - Operations are a per-zone, per-region, and global resource. If you are performing an operation on a global resource, the operation is considered a global operation. For example, inserting an image would be considered a global operation, because images are a global resource.

      Note: Operations are unique in that they span all three scopes (global resources, regional operations, and zonal operations). As such, a request to list operations automatically returns operations across all three scopes.

    Most of the global resources are briefly described below.

    Image Resources

    When you start an instance, you must select an image to use. An image resource contains an operating system and root file system necessary for starting your instance. Google maintains and provides images that are ready-to-use or you can customize an image and use that as your image of choice for creating instances. Depending on your needs, you can also apply an image to a persistent disk and use the persistent disk as your root file system.

    Images are a global resource, so you can use any image with an instance or disk. All your custom images are also global. For more information, see Images.

    Snapshot Resources

    Persistent disk snapshots let you copy data from an existing persistent disk and apply it to new persistent disks. This is especially useful for creating backups of your persistent disk data in case of unexpected failures and zone maintenance events. Since snapshots are a global resource, you can apply a snapshot to any disk in any zone. If a persistent disk in a zone is taken offline, you can use snapshots to recreate the same disk in another zone of your choice.

    For more information, see Persistent Disk Snapshots.

    Network Resources

    A project has one or more Network resources that define how instances communicate with each other, with other networks, and with the outside world. Each instance belongs to a single network and any communication between instances in different networks must be through a public IP address.

    A network defines the address range and gateway address of all instances connected to it, which you can configure to suit your needs. Networks are associated with Firewall resources, which allow you to specify the types of connections that are permitted into an instance. For example, you can configure the network and firewall resources of a specific instance so that the instance can have an externally visible IP address that lets it act as an HTTP server, or handle SSH, UDP, or other requests as defined by the network and firewall settings.

    Networks belong to a single project but are a global resource; any instance within the same project as the network may use the network.

    The default Network

    Every project comes preconfigured with a single Network resource named "default". The default network includes two firewalls: a firewall that allows all instances in the network to communicate over TCP/UDP/ICMP, and a firewall that supports ssh into the network from outside. No other connections are supported by default. You can modify or delete the default firewalls or add new firewalls to your project's default network to customize how your instances communicate with each other and the world.

    Most users will not need to create any Networks above and beyond the default Network.

    For more information, see Networking and Firewalls

    Firewall Resources

    A Firewall resource contains one or more rules that permit connections into instances. Every firewall resource is associated with one and only one network. It is not possible to associate one firewall with multiple networks.

    No communication is allowed into an instance unless a Firewall resource permits the network traffic, even between instances on the same network. However, an instance is always allowed to communicate out, unless it is trying to communicate through one of the blocked traffic sources. In other words, firewalls only apply to incoming connections. A firewall resource consists of:

    • A set of allowed sources. This can either be explicit IP address ranges or a set of instances defined by a tag on the instance.
    • A set of target VMs, defined by tags on the instances
    • A set of protocols and ports

    With these primitives, Google Compute Engine provides a flexible configuration to allow connections from any source or to any target. To get started, here are some firewall examples:

    • A firewall that allows incoming TCP connections to port 80 and 443 on instances tagged 'frontend' from anywhere.
    • A firewall that allows SSH requests into any instance from just your workstation's IP.
    • A firewall that allows all TCP or UDP requests from instances labeled "frontend" to any instances labeled "backend" over port 118.
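
    As an illustration of the first example above, here is a hedged Python sketch of the kind of request body you could POST to the Firewalls collection to allow incoming TCP connections on ports 80 and 443 to instances tagged 'frontend'. The resource name, project, and exact field layout are assumptions based on the v1 API; check the Firewalls reference (or use gcutil) for the authoritative format.

    import json

    # Hypothetical firewall: allow TCP ports 80 and 443 from anywhere to
    # instances tagged 'frontend' on the default network.
    firewall = {
        'name': 'allow-http-https-frontend',
        'network': ('https://www.googleapis.com/compute/v1/projects/'
                    '<project-id>/global/networks/default'),
        'sourceRanges': ['0.0.0.0/0'],
        'targetTags': ['frontend'],
        'allowed': [{'IPProtocol': 'tcp', 'ports': ['80', '443']}],
    }

    # This body would be POSTed to:
    #   https://www.googleapis.com/compute/v1/projects/<project-id>/global/firewalls
    print json.dumps(firewall, indent=2)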

    For more information, see Networking and Firewalls.

    Route Resources

    Google Compute Engine offers a routing table that lets you manage how traffic destined for a certain IP range should be routed. Similar to a physical router in your local area network, all outbound traffic is compared to your routes table and appropriately forwarded if the outbound packet matches any rules in the routes table. For example, you can route traffic destined for the Internet through the nearest VPN gateway. Routes apply to networks inside your Google Compute Engine project and are considered a global resource.

    Regional Resources

    Regional resources are accessible by any resources within the same region. For example, if you reserve a static IP in a specific region, only instances within that region can use that static IP. Each region also has one or more zones and you can find out which zone belongs to which region by performing a gcutil getregion request.

    Regional resources include:

    • Addresses - Reserved addresses are static external IP addresses that you reserve and manage for your instances. Addresses are a regional resource and can only be used by instances that are in the same region as the address.
    • Regional Operations - Operations are a per-zone resource, a per-region resource, and a global resource. If you are performing an operation on a resource that lives within a region, the operation is considered a per-region operation. For example, reserving an address is considered a regional operation, because the operation is being performed on a region-specific resource, an address.

      Note: Operations are unique in that they span all three scopes (global resources, regional operations, and zonal operations). As such, a request to list operations automatically returns operations across all three scopes.

    Address resources are discussed in detail below.

    Address Resources

    When you create an instance, an ephemeral external IP address is automatically assigned to your instance by default. This address is attached to your instance for the life of the instance and is released once the instance has been terminated. If you want to reserve a static IP address instead, you can use the Addresses collection, a self-service API that allows you to reserve, release, and manage static IPs for your project. Using the Addresses collection, you can also promote an ephemeral IP address to a static IP address. For more information, review the Reserved IP Addresses documentation.

    As a regional resource, an Address resource is only available to instances that are in a zone that is hosted in the same region as the Address resource.

    Traffic between an instance and the Internet requires that the instance has an external IP address assigned to it. If an instance doesn't have an external IP address, it can only access other instances and cannot reach the Internet.

    Zone Resources

    A zone is an independent entity in a specific geographical location where you can run your resources. For example, a zone named us-central1-a indicates a location in the central United States. Choosing a zone is important for several reasons:

    • Handling failures

      It is important to distribute your resources across multiple zones to plan for scheduled or unscheduled zone outages. Since each zone is an independent entity, zone failures should not affect other zones. If a zone becomes unavailable, you can transfer traffic to another zone, allowing your services to remain running in the face of failures. For more information about distributing your resources and designing a robust system, see Designing Robust Systems.

    • Decreased latency

      To decrease latency, you may want to choose a zone that is close to your point of service. For example, if you mostly have customers on the West Coast of the US, then you may want to choose a zone that is close to that area, in order to decrease latency between your virtual machine instances and your customers.

    Resources that are hosted in a zone are called per-zone resources. Zone-specific resources, or per-zone resources, are unique to that zone and are only usable by other resources in the same zone. For example, an instance is a per-zone resource. When you create an instance, you must provide the zone where the instance should live. The instance can access other resources within the same zone, and can access global resources, but it cannot access other per-zone resources in a different zone, such as a Disk resource.

    Note: As an exception, instances can talk to instances in another zone if they belong to the same network.

    Per-zone resources include:

    • Instances - A virtual machine instance must reside within a zone and can access global resources or resources within the same zone.
    • Disks - A Disk resource can only be accessed by other instances within the same zone. For example, you can only attach a disk in the same zone as the instance; you cannot attach a disk to an instance in another zone.
    • Machine Types - Machine types are per-zone resources. Instances can only use machine types that are in the same zone.
    • Per-zone Operations - Operations are a per-zone, per-region, and global resource. If you are performing an operation on a resource that lives within a zone, the operation is considered a per-zone operation. For example, inserting an instance is considered a per-zone operation, because the operation is being performed on a zone-specific resource, an instance.

      Note: Operations are unique in that they span all three scopes (global resources, regional operations, and zonal operations). As such, a request to list operations automatically returns operations across all three scopes.

    Most zonal resources are described briefly below.

    Instance Resource

    Instances (virtual machine instances) are the heart of Google Compute Engine. A Google Compute Engine instance is a virtual machine running on a Linux configuration. You can choose to customize as little or as much of your instances as you would like, including customizing the hardware, OS, disk, and other configuration options. You can start and customize instances individually with very few restrictions and you have root privileges on any instance that you start. An instance is a member of a single project and a single zone. Instances cannot be shared across zones or projects.

    Instances are a per-zone resource, so all requests to an instance require a zone specification.

    For more information, see Instances.

    Machine Type Resources

    An instance's machine type determines the number of cores, the amount of memory, and the number of persistent disks allowed for your virtual machine instance. Machine types are a zonal resource, so you can only use machine types that are in the same zone as your instances. Some machine types are not available in all zones.

    For more information, see Machine types.

    Disk Resources

    A persistent disk is persistent network storage that you can attach to and detach from instances. Persistent disks live as long as you need them to: their lifetime can be tied to the life of an instance or extend beyond it. Instances must store their data on a persistent disk, and you can create both root persistent disks and data persistent disks.

    All instances must use a root persistent disk which contains an operating system and root filesystem from which the instance boots. Data persistent disks do not contain a root filesystem but act as additional storage for an instance. You can attach multiple persistent disks to an instance or attach a single persistent disk to multiple instances in read-only mode.

    Disk Encryption

    All information stored on persistent disks is encrypted before being written to physical media, and the keys are tightly controlled by Google.

    For more information, see Disks.

    Aggregate Lists

    By default, list requests to resource collections return a list of resources in a particular control plane. For example, when you query the API for a list of Instance resources, you must provide the zone for which you want to list instances. To list resources across all zones or regions, you can perform an aggregate list query. Each per-region and per-zone resource has an aggregate list URI that can be queried to list all resources of that type. For example, to list all instances across all zones, you can make a request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/aggregated/instances

    Similarly, to list all addresses across all regions, make a request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/aggregated/addresses
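
    As a sketch of how such a request might look from within an instance, the following Python snippet (modeled on the service account example earlier in this document) fetches an access token from the metadata server and performs an aggregated list of instances. The project ID is a placeholder, and the instance is assumed to have been started with a Compute Engine scope.

    import json

    import httplib2

    TOKEN_URI = ('http://metadata/computeMetadata/v1/instance/'
                 'service-accounts/default/token')
    PROJECT = '<project-id>'  # Placeholder.
    AGGREGATED_URI = ('https://www.googleapis.com/compute/v1/projects/%s'
                      '/aggregated/instances' % PROJECT)

    http = httplib2.Http()

    # Fetch an access token from the metadata server.
    resp, content = http.request(TOKEN_URI,
                                 headers={'X-Google-Metadata-Request': 'True'})
    access_token = json.loads(content)['access_token']

    # List instances across all zones in the project.
    resp, content = http.request(AGGREGATED_URI,
                                 headers={'Authorization': 'OAuth ' + access_token})
    print content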

    For more information, review the aggregateList method for that resource.

    Page: resource-quotas

    Resource quotas describe the default quota for each resource on a per-project basis. Each Google Developers Console project is subject to default quotas on the number of resources that a project can have.

    Each project is subject to the following Google Compute Engine global resource limits:

    • Networks: 5
    • Firewalls: 100
    • Images: 100
    • Snapshots: 5000
    • Routes: 100
    • Forwarding Rules: 50
    • Target Pools: 50
    • Health Checks: 50

    Regional Quotas

    Each region within a project is subject to the following regional quotas:

    • CPUs: 24
    • Maximum total aggregate disk space: 5TB
    • In-use IP addresses (both ephemeral and reserved): 23
    • Reserved IP addresses: 7

    To check how much of each resource you have used at any time, perform the appropriate list() calls for each resource, either through the API or through gcutil. For example, you can list all your networks for a specific project by running:

    gcutil listnetworks --project=<project-id>
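
    If you prefer to make the equivalent API call, here is a hedged Python sketch that lists the networks in a project and compares the count against the default quota of 5. The project ID and access token are placeholders; see the metadata server example earlier in this document for one way to obtain a token from within an instance.

    import json

    import httplib2

    PROJECT = '<project-id>'          # Placeholder.
    ACCESS_TOKEN = '<access-token>'   # For example, fetched from the metadata server.
    NETWORKS_URI = ('https://www.googleapis.com/compute/v1/projects/%s'
                    '/global/networks' % PROJECT)
    NETWORK_QUOTA = 5                 # Default per-project network quota.

    http = httplib2.Http()
    resp, content = http.request(NETWORKS_URI,
                                 headers={'Authorization': 'OAuth ' + ACCESS_TOKEN})
    networks = json.loads(content).get('items', [])
    print 'Networks in use: %d of %d' % (len(networks), NETWORK_QUOTA)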

    If you need more than the default resource quotas listed here, you can request more quota for resources using the quota change request form.

    Page: disks

    A Persistent Disk resource provides disk space for your instances and contains the root filesystem that your instance boots from. You can create additional persistent disks to store data. A single persistent disk can be used across multiple instances, and an instance can attach multiple persistent disks. Before your virtual machine instance can use a persistent disk, the persistent disk must exist beforehand.

    A persistent disk's lifespan is independent of an instance's. Disks are bound to a zone rather than an instance. If your instance suffers a failure, or is moved out of the way of a maintenance event, all data on a persistent disk is preserved. Persistent disks can only be used by instances residing in the same zone. To use a non-root persistent disk for the first time, you must format and mount the disk. If you are attaching an already formatted persistent disk to a new instance, you only need to mount it to the instance (you can skip formatting). If you are attaching a root persistent disk, you do not need to format or mount it, as Google Compute Engine does that for you.

    Instances that use persistent disks can be live migrated out of the way of scheduled maintenance or impending failures of underlying infrastructure. For more information about scheduled maintenance and instance migration, see Scheduled Maintenance.

    Persistent disks are per-zone resources.

    Useful gcutil commands:

    • gcutil push to copy data from your local computer to an instance.
    • gcutil pull to copy data from an instance to your local computer.
    • gcutil listdisks to list persistent disks.
    • gcutil getdisk to get information about a specific persistent disk.
    • gcutil adddisk to add a new persistent disk to the project.
    • gcutil deletedisk to remove a persistent disk.


    Disk Encryption

    Google Compute Engine protects the confidentiality of persistent disks by encrypting them using AES-128-CBC; this encryption is applied before the data leaves the virtual machine monitor and hits the disk. We also protect the integrity of persistent disks via an HMAC scheme.

    Google Compute Engine generates and tightly controls access to a unique encryption key for each disk. Encryption is always enabled and is transparent to Google Compute Engine users.

    Persistent Disk Performance

    Persistent disk performance depends on the size of the volume. Larger volumes can achieve higher I/O levels than smaller volumes. There are no separate I/O charges as the cost of the I/O capability is included in the price of the space. Persistent disk performance is determined as follows:

    • Input/output operations per second (IOPS) performance caps grow linearly with the size of the persistent disk volume all the way to the maximum size of 10TB.
    • Throughput caps also grow linearly up to the maximum bandwidth for the virtual machine.

    This model is a more granular version of what people would see with a RAID set. The more disks in a RAID set, the more I/O it can perform and the finer a RAID set is carved up, the less I/O there is per partition. However, instead of growing volumes in increments of entire disks, persistent disk gives Compute Engine customers granularity at the GB level for their volumes.

    The pricing and performance model provides three main benefits:

    • Operational simplicity

      In the previous persistent disk model (before Compute Engine's general availability announcement in December 2013), to increase I/O to a virtual machine, customers needed to create multiple small volumes and then stripe them together inside the virtual machine. This created unnecessary complexity at volume creation time and throughout the volume’s lifetime because it required complicated management of snapshots. Under the covers, persistent disk stripes data across a very large number of physical drives, making it redundant for users to also stripe data across separate disk volumes. In this current model, a single 1TB volume performs the same as 10 x 100GB volumes striped together.

    • Predictable pricing

      Volumes are priced only on a per GB basis. This price pays for both the volume’s space and all the I/O that the volume is capable of. Customers’ bills do not vary with usage of the volume.

    • Predictable performance

      This model allows more predictable performance than other possible models for HDD-based storage while still keeping the price very low.

    Volume I/O caps distinguish between read and write I/O and between IOPS and bandwidth.

    Maximum Sustained IOPS / TB (scales linearly up to 10 TB) Maximum Sustained throughput / TB Maximum Sustained throughput / VM
    Read 300 IOPS 120 MB/s 180 MB/s
    Write 1500 IOPS 90 MB/s 120 MB/s

    Please note the following points when considering the information in the chart:

    • The caps in the chart are for the maximum sustained IO.

      To better serve the many cases where I/O spikes, Compute Engine allows virtual machines to save up I/O capability and burst over the numbers listed here. In this way, smaller volumes can be used for use cases where I/O is typically low but periodically bursts well above the average. For example, boot volumes tend to be small and infrequently accessed, but sometimes need to perform heavy I/O for tasks like booting and package installation.

    • Performance depends on I/O pattern and volume size.

      IOPS and throughput caps have per TB values. These numbers need to be multiplied by a volume’s size to determine the cap for that volume. There is a throughput cap per virtual machine that comes from the virtual machine itself, not the volume. Observed throughput caps for a volume will be the lower of the volume’s cap and the virtual machine cap.

    • Larger virtual machines tend to have higher performance levels than smaller virtual machines.

      Generally, the performance levels for virtual machines are as follows:

      • 4 and 8 core virtual machines will reach the top performance levels listed in the chart above.
      • 1 and 2 core virtual machines perform at lower levels than the 4 and 8 core virtual machines.
      • Shared core virtual machines tend to have low I/O capability and should be used only where high I/O is not required.

    As a concrete example of how to use this chart, a 500 GB (0.5 TB) volume can sustain up to 150 small random read IOPS, 750 small random write IOPS, 60 MB/s of streaming reads, and 45 MB/s of streaming writes.
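
    The arithmetic behind that example is simply a linear scaling of the per-TB caps in the chart, limited by the per-VM throughput caps. Here is a small, illustrative Python sketch of that calculation; it ignores bursting and the lower limits that apply to smaller machine types.

    # Per-TB sustained caps from the chart above.
    READ_IOPS_PER_TB = 300
    WRITE_IOPS_PER_TB = 1500
    READ_MBPS_PER_TB = 120
    WRITE_MBPS_PER_TB = 90

    # Per-VM throughput caps from the chart above.
    READ_MBPS_PER_VM = 180
    WRITE_MBPS_PER_VM = 120


    def disk_caps(size_gb):
      tb = size_gb / 1000.0
      return {
          'read_iops': READ_IOPS_PER_TB * tb,
          'write_iops': WRITE_IOPS_PER_TB * tb,
          'read_mbps': min(READ_MBPS_PER_TB * tb, READ_MBPS_PER_VM),
          'write_mbps': min(WRITE_MBPS_PER_TB * tb, WRITE_MBPS_PER_VM),
      }


    # A 500 GB volume: 150 read IOPS, 750 write IOPS, 60 MB/s reads, 45 MB/s writes.
    print disk_caps(500)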

    To determine what size of volume is required to have the same optimal performance as a typical 7200 RPM SATA drive, you must first identify the I/O pattern of the volume. Use the chart below to determine the best volume size for a specific I/O pattern.

    IO Pattern Size of volume to approximate a typical 7200 RPM SATA drive
    Small random reads 250 GB
    Small random writes 50 GB
    Streaming large reads 1000 GB
    Streaming large writes 1333 GB

    Persistent disk volumes perform much better than physical hard drives, matching the speed of a single drive at a much smaller volume size than you would have to buy as an off-the-shelf drive.

    Caution: Not all configurations of virtual machines and applications can reach all the performance numbers in the charts above. For example, the default Linux images provided by Compute Engine are tuned for random I/O and transactional storage and will easily reach the IOPS limits. Conversely, these images are not well tuned for sequential I/O, and some common operations, such as large file copies, can be much slower than the performance caps suggest.

    As a workaround, the easiest way to tune for sequential I/O and speed up operations like cp is to increase the size of the operating system readahead cache. To do so, add the following command to the /etc/rc.local file on your virtual machine instance:

    /sbin/blockdev --setra 16384 <drive-name>

    where <drive-name> is the drive name inside the virtual machine, such as /dev/sda.

    You should selectively add this only to virtual machines that perform sequential I/O operations regularly, because it will have adverse effects on transactional workloads. You can find more details about performance tuning in the Compute Engine Disks: Price, Performance, and Persistence technical article, but one tip will help you get good performance quickly: do nothing for random I/O, but for sequential I/O, run the command above.

    Disk Interface

    By default, Google Compute Engine uses a SCSI interface for attaching persistent disks. Images provided on or after 20121106 have virtio SCSI enabled by default. Images using Google-provided kernels older than 20121106 only support a virtio block interface. If you're currently using images that have a block interface, you should consider switching to a newer image that uses SCSI.

    If you are using the latest Google images, they should already be set to use SCSI.

    Creating a New Persistent Disk

    Before setting up a persistent disk, keep in mind the following restrictions:

    • A persistent disk can only be used by one instance in a read-write capacity.

      While you can attach a persistent disk to multiple instances, the persistent disk can only be accessed in read-only mode when it is being used by more than one instance. If the persistent disk is attached to just one instance, it can be used by that instance with read-write capacity.

    • Generally, persistent disks are not mounted or formatted when they are first created and attached.

      Root persistent disks are mounted on instance start up, but for additional persistent disks that are not used for booting an instance, you must mount and format a disk explicitly the first time you use it. After the initial formatting, you do not need to format the disk again (unless you would like to). If your instance fails or if you reboot it, you need to remount the disk. You can remount persistent disks automatically by modifying the /etc/fstab file.

    Every region has a quota of the total persistent disk space that you can request. Call gcutil getregion to see your quotas for that region:

    $ gcutil --project=myproject getregion us-central1
    +--------------------+-------------------------------------------------+
    | name               | us-central1                                     |
    | description        | Region for zones in us-central1-a               |
    | creation-time      | 2013-09-03T12:05:02.515-07:00                   |
    | status             | UP                                              |
    | zones              | zones/us-central1-a,zones/us-central1-b         |
    | deprecation        |                                                 |
    | replacement        |                                                 |
    | usage              |                                                 |
    |   cpus             | 0.0/24.0                                        |
    |   disks-total-gb   | 0.0/5120.0                                      |
    |   static-addresses | 0.0/7.0                                         |
    |   in-use-addresses | 0.0/23.0                                        |
    +--------------------+-------------------------------------------------+

    Persistent disk size guidelines

    To decide how big to make your persistent disk, determine what performance you require and then use the persistent disk performance chart to choose the right persistent disk size for your needs.

    If you aren't sure what your performance or I/O needs are, we recommend profiling your application and learning your I/O patterns so you can size your disks correctly and optimize the cost of your volumes. To get started, the following table provides some general recommendations based on your current setup:

    If you are currently using... ...with the following I/O pattern We recommend the following persistent disk size ...with the following monthly cost
    A single SATA hard drive in a physical machine.. Small transactional I/Os 250GB $10
    A single SATA hard drive in a physical machine.. Streaming I/O 1500GB $60
    A Compute Engine virtual machine mounting a persistent disk volume Small transactional I/Os 500GB if mostly writes, 1000GB if mostly reads $20 for 500GB, $40 for 1000GB
    A Compute Engine virtual machine mounting a persistent disk volume Streaming I/Os 1500GB $60

    Create a disk

    To create a new persistent disk in your project, call gcutil adddisk with the following syntax:

    gcutil --project=<project-id> adddisk <disk-name> [--size_gb=<size> --zone=<zone-name> \
    --source_snapshot=<snapshot-name> --source_image=<image-name>]

    Important flags and parameters:

    ‑‑project=<project‑id>
    [Required] The ID of the project where this persistent disk should live.
    <disk‑name>
    [Required] The name for the persistent disk, when managing it as a Google Compute Engine resource using gcutil. The name must start with a lowercase letter, followed by 1-62 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
    ‑‑size_gb=<size>
    [Optional] When specified alone, creates an empty persistent disk with the specified size, in GB. This should be an integer value, up to the remaining disk size quota for the project. You can also specify this field alongside the ‑‑source_image or ‑‑source_snapshot parameters, which creates a disk using the image or snapshot provided, that is the size of ‑‑size_gb. ‑‑size_gb must also be equal to or larger than the size of the image (10GB) or the size of the snapshot.

    Any root persistent disk created from a source image or source snapshot with an explicit ‑‑size_gb that is larger than the filesystem needs to be re-partitioned from within an instance before the additional space can be used. For empty disks, you can mount the entire disk to an instance without creating a partition. For more information about partitioning a persistent disk, see Repartitioning a Root Persistent Disk. For information about formatting and mounting non-root persistent disks, see Formatting Disks.

    If you are creating a non-root persistent disk from a snapshot that is larger than the original snapshot size, you will need to follow the instructions to restore your snapshot to a larger size.

    ‑‑zone=<zone‑name>
    [Optional] The zone where this persistent disk should reside. If you don't specify this flag, gcutil prompts you to select a zone from a list.
    ‑‑source_snapshot=<snapshot‑name>
    [Optional] The persistent disk snapshot from which to create the persistent disk. You can also use this alongside ‑‑size_gb to explicitly set the size of the persistent disk. ‑‑size_gb must be larger than or equal to the size of a snapshot.
    ‑‑source_image=<image‑name>
    [Optional] The image to apply to this persistent disk. This option creates a root persistent disk. You can also specify ‑‑size_gb to explicitly choose the size of the root persistent disk, or omit the flag to create a root persistent disk with enough space to store your root filesystem files. If you choose to specify ‑‑size_gb, the value must be greater than or equal to the size of an image, which is 10GB.

    Note: Since v1, you can no longer specify an image with a Google-provided kernel for your sourceImage. For more information, see the v1 transition guide.

    When you run the gcutil adddisk command above without the ‑‑zone flag, Google Compute Engine gives you a list of zones to choose from for where your persistent disk should live. If you plan to attach this persistent disk to an instance, the persistent disk must be in the same zone as the instance that uses it.

    You can check on the status of the disk creation process by running gcutil getdisk <disk‑name>. Your disk can have one of the following statuses:

    • CREATING - This disk is in the process of being created.
    • FAILED - This disk was not created successfully.
    • READY - This disk was created successfully and is ready to be used.

    Once the disk status is READY, you can use your new persistent disk by attaching it to your instance as described in Attaching a Persistent Disk to an Instance.
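
    Because disk creation is asynchronous, a script can poll until the disk reaches the READY status described above. Here is a minimal Python sketch of such a poll against the v1 API; the project, zone, disk name, and access token are placeholders, and the URI form is an assumption based on the v1 Disks resource, so check the API reference before relying on it.

    import json
    import time

    import httplib2

    PROJECT = '<project-id>'         # Placeholder.
    ZONE = '<zone-name>'             # Placeholder.
    DISK = '<disk-name>'             # Placeholder.
    ACCESS_TOKEN = '<access-token>'  # For example, fetched from the metadata server.

    DISK_URI = ('https://www.googleapis.com/compute/v1/projects/%s/zones/%s/disks/%s'
                % (PROJECT, ZONE, DISK))

    http = httplib2.Http()
    while True:
      resp, content = http.request(DISK_URI,
                                   headers={'Authorization': 'OAuth ' + ACCESS_TOKEN})
      status = json.loads(content).get('status')
      print 'Disk status:', status
      if status in ('READY', 'FAILED'):
        break
      time.sleep(3)  # Still CREATING; wait and poll again.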

    Attaching a Persistent Disk to an Instance

    After you have created your persistent disk, you must attach it to your instance to use it. You can attach a persistent disk in two ways: during instance creation, or to an instance that is already running. Both methods are described in the sections below.

    To use a persistent disk with an instance, the persistent disk must live in the same zone as your desired instance. For example, if you want to create an instance in zone us-central1-a and you want to attach a persistent disk to the instance, the persistent disk must also reside in us-central1-a.

    Persistent Disk Size Limits

    Before you attach a persistent disk to an instance, note that your persistent disks are subject to certain size and quantity restrictions. Standard, high memory, and high CPU machine types can attach up to 16 persistent disks. Shared-core machine types can attach up to 4 persistent disks.

    Additionally, machine types have a restriction on the total maximum amount of persistent disk space that can be mounted at a given time. If you reach the total maximum size for that instance, you won't be able to attach more persistent disks until you unmount some persistent disks from your instance. By default, you can mount up to 10TB of persistent disk space for standard, high memory, and high CPU machine types, or you can mount up to 3TB for shared-core machine types.

    For example, if you are using an n1-standard-1 machine type, you can choose to attach up to 16 persistent disks whose combined size is equal to or less than 10TB or you can attach one 10TB disk. Once you've reached that 10TB limit, you cannot mount additional persistent disks until you unmount some space.

    Note: The default limit of aggregate persistent disk space for a region is 5TB. This is significantly less than the 10TB limit mentioned above. As such, if you were hoping to attach up to 10TB or more of persistent disks to instances, you will also need to request a quota increase for the total amount of aggregate disk space available to your project. For more information, see resource quotas.

    To find out an instance's machine type, run gcutil getinstance <instance-name> ‑‑project=<project-id>.

    Attaching a Disk During Instance Creation

    To attach a persistent disk to an instance during instance creation, follow the instructions described below. Note that if you are attaching a root persistent disk that is larger than the original source (such as the image or snapshot), you need to repartition the persistent disk before you can use the extra space.

    If you attach a data persistent disk that was originally created using a snapshot, and you created the data disk to be larger than the original size of the snapshot, you will need to resize the filesystem to the full size of the disk. For more information, see Restoring a Snapshot to a Larger Size.

    1. Create the persistent disk by calling gcutil adddisk <disk‑name> ‑‑project=<project‑id>
    2. Create the instance where you would like to attach the disk, and assign the disk using the ‑‑disk=<disk‑name> flag.

      Here is the abbreviated syntax to attach a persistent disk to an instance:

      gcutil --project=<project-id> addinstance --disk=<disk-name>[,deviceName=<alias-name>,mode=<mode>,boot] <instance-name> \
      [--auto_delete_boot_disk]

      Important Flags and Parameters:

      ‑‑disk=<disk‑name>
      [Required] The disk resource name used when you created the disk. If you don't specify <alias‑name>, the disk is exposed to the instance as a block device at /dev/disk/by‑id/google‑<disk‑name>. Learn more about the difference between disk names and alias names.
      deviceName=<alias‑name>
      [Optional] An optional alias name for the disk device. If specified, the disk will be exposed to the instance as a block device at /dev/disk/by‑id/google‑<alias‑name>. Learn more about the difference between disk names and alias names.
      mode=<mode>,boot
      [Optional] The mode in which you want to attach this persistent disk, and whether to attach this disk as a root persistent disk, as indicated by the boot flag.

      Valid mode values are:

      • read_write or rw: Attach this disk in read-write mode. This is the default behavior. If you are attaching a root persistent disk with a Google-provided image, you must attach the disk in read-write mode. If a persistent disk is already attached to an instance in read-write mode and you try to attach this disk to another instance, this command will fail with the following error:
        error   | RESOURCE_IN_USE
        message | The disk resource '<disk‑name>' is already being used
        in read-write mode
      • read_only or ro: Attach this disk in read-only capacity. Specify this if you're attaching this disk to multiple instances.
      <instance‑name>
      [Required] The name of the instance you are creating and attaching the persistent disk to.
      ‑‑project=<project‑id>
      [Required] The name of the project where this instance should live.
      ‑‑[no]auto_delete_boot_disk
      [Optional] If this is a root persistent disk, determines if the root persistent disk should be deleted automatically when the instance is deleted. The default is false.
      Disk Names vs. Alias Names

      When you attach a persistent disk to an instance, you have the option of setting an alias name for the disk. Aliases are useful for giving functional names to persistent disks that otherwise have unrecognizable names. For example, if you have a persistent disk named pd20120326, you can choose to attach it under an alias name, such as "databaselogs," which provides more information about the actual disk than the original disk name. Note that this doesn't change the name of the persistent disk but simply exposes it to the instance with another name:

      gcutil --project=my-project addinstance --disk=pd20120326,deviceName=databaselogs mynewinstance 

      Setting an alias name is optional, so you can choose to simply use the disk name if you would like. If you do set an alias name, note that your persistent disk is exposed to the instance at:

      /dev/disk/by-id/google-<alias-name>

      If you do not specify an alias name, your disk is exposed at:

      /dev/disk/by-id/google-<disk-name>

      To attach multiple disks to an instance, you can specify multiple ‑‑disk flags. For instance:

      gcutil addinstance --disk=disk1 --disk=disk2 my-multi-disk-instance --project=my-project
      
    3. ssh into your instance

      You can do this using gcutil ‑‑project=<project‑id> ssh <instance‑name>.

    4. Create your disk mount point, if it doesn't already exist

      For example, if you want to mount your disk at /mnt/pd0, create that directory:

      me@my-instance:~$ sudo mkdir -p /mnt/pd0
    5. Determine the /dev/* location of your persistent disk by running:
      me@my-instance:~$ ls -l /dev/disk/by-id/google-*
      lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
      lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-pd0 -> ../../sdb #pd0 is mounted at /dev/sdb
    6. Format your persistent disk:
      me@my-instance:~$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/<disk-or-alias-name> <mount-point>

      Specify the local disk alias if you assigned one, or the disk's resource name if you haven't. In this case, the disk alias is /dev/sdb:

      me@my-instance:~$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /mnt/pd0

      In this example, you are mounting your persistent disk at /mnt/pd0, but you can choose to mount your persistent disk anywhere, e.g. /home. If you have multiple disks, you can specify a different mount point for each disk.

    That's it! You have mounted your persistent disk and can start using it immediately. To demonstrate this process from start to finish, the following example attaches a previously created persistent disk named pd1 to an instance named diskinstance, formats it, and mounts it:

    1. Create an instance and attach the pd1 persistent disk.
      $ gcutil --project=my-project addinstance --disk=pd1 diskinstance  --auto_delete_boot_disk
      ...select a zone, image, and machine type for your instance...
      INFO: Waiting for insert of diskinstance. Sleeping for 3s.
      INFO: Waiting for insert of diskinstance. Sleeping for 3s.
      
      Table of resources:
      
      +--------------+---------------+----------------------------------------------------------------+---------+----------------+--------------+----------------------+-------------+------------------+----------------------+---------+----------------+
      |     name     |  machine-type |                              image                             | network |   network-ip   | external-ip  |       disks          |    zone     | tags-fingerprint | metadata-fingerprint | status  | status-message |
      +--------------+---------------+----------------------------------------------------------------+---------+----------------+--------------+----------------------+-------------+------------------+----------------------+---------+----------------+
      | diskinstance | n1-standard-1 | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD  | default | 00.000.000.000 | 000.000.0.00 |   <zone>/disks/pd1   |    <zone>   | 42WmSpB8rSM=     | 42WmSpB8rSM=         | RUNNING |
      +--------------+---------------+----------------------------------------------------------------+---------+----------------+--------------+----------------------+-------------+------------------+----------------------+---------+----------------+
      
      Table of operations:
      
      +------------------------------------------------+--------+----------------+---------------------------------+----------------+-------+---------+
      |                      name                      | status | status-message |             target              | operation-type | error | warning |
      +------------------------------------------------+--------+----------------+---------------------------------+----------------+-------+---------+
      | operation-1358529117225-4d3933570d191-0fc3d1da | DONE   |                |  <zone>/instances/diskinstance  |     insert     |       |         |
      +------------------------------------------------+--------+----------------+---------------------------------+----------------+-------+---------+
      
    2. ssh into the instance.
      $ gcutil --project=my-project ssh diskinstance
      ...
      The programs included with this system are free software;
      the exact distribution terms for each program are described in the
      individual files in /usr/share/doc/*/copyright.
      
      This software comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
      applicable law.
      
      
    3. Change to root:
      user@diskinstance:~$ sudo -s
    4. Create a directory to mount the new persistent disk.
      root@diskinstance:~# mkdir /mnt/pd0
    5. Determine where pd1 is currently mounted by getting a list of available persistent disks on the instance.
      root@diskinstance:~$ ls -l /dev/disk/by-id/google-*
      lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
      lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-pd1 -> ../../sdb #pd1 is mounted at /dev/sdb
    6. Run the safe_format_and_mount tool.
      root@diskinstance:~# /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /mnt/pd0
      mke2fs 1.41.11 (14-Mar-2010)
      Filesystem label=
      OS type: Linux
      Block size=4096 (log=2)
      Fragment size=4096 (log=2)
      Stride=8 blocks, Stripe width=0 blocks
      655360 inodes, 2621440 blocks
      131072 blocks (5.00%) reserved for the super user
      First data block=0
      Maximum filesystem blocks=2684354560
      80 block groups
      ...
    7. Give all users write access to the drive.
      root@diskinstance:~# chmod a+w /mnt/pd0
    8. Create a new file called hello.txt on the new mounted persistent disk.
      root@diskinstance:~# echo 'Hello, World!' > /mnt/pd0/hello.txt
    9. Print out the contents of the file to demonstrate that the new file is accessible and lives on the persistent disk.
      root@diskinstance:~# cat /mnt/pd0/hello.txt
      Hello, World!

    Attaching a Disk to a Running Instance

    You can attach an existing persistent disk to a running instance using the attachdisk command in gcutil or attachDisk in the API. Persistent disks can be attached to multiple instances at the same time in read-only mode (with the exception of root persistent disks, which should only be attached to one instance at a time). If you've already attached a disk to an instance in read-write mode, that disk cannot be attached to any other instance. You also cannot attach the same disk to the same instance multiple times, even in read-only mode.

    Note: If you attach a data persistent disk that was originally created using a snapshot, and you created the disk to be larger than the original size of the snapshot, you will need to resize the filesystem to the full size of the disk. For more information, see Restoring a Snapshot to a Larger Size.

    To attach a persistent disk to an existing instance in gcutil:

    gcutil --project=<project-id> attachdisk --zone=<zone> --disk=<disk-name>[,deviceName=<alias-name>][,mode=<mode>] <instance-name>

    Important Flags and Parameters:

    ‑‑project=<project‑id>
    [Required] The project ID for this request.
    ‑‑zone=<zone>
    [Optional] The zone where the instance and disk reside. Although this is optional, it is preferable that you specify the zone with your request so that gcutil won't have to use additional API calls to find out the zone of your instance and disk.
    ‑‑disk=<disk‑name>
    [Required] The disk resource name used when you created the disk. If you don't specify <alias‑name>, the disk is exposed to the instance as a block device at /dev/disk/by‑id/google‑<disk‑name>. Learn more about the difference between disk names and alias names.

    You can only attach one persistent disk per attachdisk command. If you specify multiple ‑‑disk flags, only the persistent disk in the last ‑‑disk argument is attached to the instance.

    deviceName=<alias‑name>
    [Optional] An optional alias name for the disk device. If specified, the disk will be exposed to the instance as a block device at /dev/disk/by‑id/google‑<alias‑name>. If not specified, the disk will be exposed to the instance using the disk‑name. Learn more about the difference between disk names and alias names.
    mode=<mode>
    [Optional] The mode for which you want to attach this persistent disk. Valid mode values are:
    • read_write or rw: Attach this disk in read-write capacity. This is the default behavior. If a persistent disk is already attached to an instance in read-write mode and you try to attach this disk to another instance, this command will fail with the following error:
      error   | RESOURCE_IN_USE
      message | The disk resource '<disk‑name>' is already being used
      in read-write mode
    • read_only or ro: Attach this disk in read-only capacity. Specify this if you're attaching this disk to multiple instances.
    <instance‑name>
    [Required] The name of the instance to attach this disk to.

    To attach a persistent disk to a running instance through the API, perform a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>/attachDisk

    Your request body must contain the following:

    bodyContent = {
        "type": "persistent",
        "mode": "<mode>",
        "source": "https://www.googleapis.com/compute/v1/projects/<project-id>/zone/<zone>/disks/<disk>"
      }
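
    In the client libraries, this corresponds to an instances().attachDisk call. The following is a minimal sketch: the attachDiskToInstance name is illustrative, gce_service is assumed to be an authorized Compute Engine client, and the body mirrors the request body shown above.

    def attachDiskToInstance(auth_http, gce_service):
      # Mirrors the bodyContent shown above; <mode> is one of the mode
      # values described for the gcutil attachdisk command.
      body = {
        'type': 'persistent',
        'mode': '<mode>',
        'source': ('https://www.googleapis.com/compute/v1/projects/'
                   '<project-id>/zones/<zone>/disks/<disk>')
      }
      request = gce_service.instances().attachDisk(project=PROJECT_ID,
        zone='<zone>', instance='<instance-name>', body=body)
      response = request.execute(auth_http)

      print response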

    For more information, see the attachDisk reference documentation.

    Detaching a Persistent Disk

    You can detach a persistent disk from a running instance by using the gcutil detachdisk command.

    To detach a disk using gcutil:

    gcutil --project=<project-id> detachdisk --zone=<zone> --device_name=<disk-name> <instance-name>

    Important Flags and Parameters:

    ‑‑project=<project‑id>
    [Required] The project ID that this instance belongs to.
    ‑‑zone=<zone>
    [Optional] The zone where the instance and disk reside. Although this is optional, it is preferable that you specify the zone with your request so that gcutil won't have to use additional API calls to find out the zone of your instance and disk.
    ‑‑device_name=<disk‑name>
    [Required] The name of the disk to detach. If you attached the disk using an alias name, specify the alias name instead of the disk name.
    <instance‑name>
    [Required] The instance from which you want to detach the disk.

    To detach a disk in the API, perform an empty POST request to the following URL:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>/detachDisk?deviceName=<disk-name>
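
    In the client libraries, the corresponding call is instances().detachDisk. The following is a minimal sketch; the detachDiskFromInstance name is illustrative and gce_service is assumed to be an authorized client.

    def detachDiskFromInstance(auth_http, gce_service):
      # deviceName is the name under which the disk was attached (the alias
      # name, if one was specified).
      request = gce_service.instances().detachDisk(project=PROJECT_ID,
        zone='<zone>', instance='<instance-name>', deviceName='<disk-name>')
      response = request.execute(auth_http)

      print response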

    For more information, see the detachDisk reference documentation.

    Root Persistent Disk

    Each instance has an associated root persistent disk where the filesystem for the instance is stored. It is possible to create and attach a root persistent disk to an instance during instance creation or by creating the disk separately and attaching it to a new instance. All persistent disk features and limitations apply to root persistent disks.

    Create a Root Persistent Disk During Instance Creation

    When you start an instance without specifying a ‑‑disk flag, gcutil automatically creates a root persistent disk for you using the image that you provided in your request. The new root persistent disk is named after the instance by default. For example, if you create an instance using the following command:

    user@local~:$ gcutil --project=my-project addinstance awesomeinstance --image=debian-7 --auto_delete_boot_disk

    gcutil automatically creates a persistent boot disk using the latest Debian 7 image, with the name awesomeinstance, and boots your instance off the new persistent disk. The ‑‑auto_delete_boot_disk flag also indicates that Compute Engine should automatically delete the root persistent disk when the instance is deleted. You can also change the auto-delete state later on.

    You can also create multiple instances and root persistent disks by providing more than one instance name:

    gcutil addinstance --project=<project-id> <instance-name-1> ... <instance-name-n> --auto_delete_boot_disk

    Create a Stand-alone Root Persistent Disk

    You can create a stand-alone root persistent disk outside of instance creation and attach it to an instance afterwards. In gcutil, this is possible using the standard gcutil adddisk command. You can create a root persistent disk from an image or a snapshot, using the ‑‑source_image or ‑‑source_snapshot flag, respectively.

    gcutil adddisk --project=<project-id> <disk-name> [--source_image=<image-name> \
    --source_snapshot=<snapshot-name> --size_gb=<size>]

    Important Flags and Parameters:

    ‑‑size_gb=<size>
    [Optional] The size of the disk, in GB. This should be an integer value, up to the remaining disk size quota for the project. Note that if you create a root persistent disk from an image or a snapshot and the disk is larger than the source filesystem, the disk needs to be re-partitioned from within an instance before the additional space can be used. For non-root persistent disks, you can mount the entire disk to an instance without creating a partition.

    For more information, see documentation about how to repartition a root persistent disk. For information about formatting and mounting non-root persistent disks, see Formatting Disks.

    <disk‑name>
    [Required] The name for the persistent disk, when managing it as a Google Compute Engine resource using gcutil. The name must start with a lowercase letter, followed by 1-62 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
    ‑‑project=<project‑id>
    [Required] The ID of the project where this persistent disk should live.
    ‑‑source_image=<image‑name>
    [Optional] The image to apply to this persistent disk. This option creates a persistent disk that can be used as a root persistent disk. You can also specify ‑‑size_gb to explicitly choose the size of the root persistent disk, or omit the flag to create a root persistent disk with enough space to store your root filesystem files. If you choose to specify ‑‑size_gb, the value must be greater than or equal to the size of the image, which is around 10GB.
    ‑‑source_snapshot=<snapshot‑name>
    [Optional] The snapshot to apply to this disk. Snapshots can be used to create root persistent disks, in addition to regular data disks. You can also specify ‑‑size_gb to explicitly choose the size of the root persistent disk, or omit the flag to create a root persistent disk that is the same size as the snapshot. If you choose to specify ‑‑size_gb, the value must be greater than or equal to the size of the snapshot.

    In the API, create a new persistent disk with the sourceImage query parameter in the following URI:

    https://www.googleapis.com/compute/v1/projects/<project>/zones/<zone>/disks?sourceImage=<source-image>
    sourceImage=<source‑image>
    [Required] The URL-encoded, fully-qualified URI of the source image to apply to this persistent disk.
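
    In the client libraries, this corresponds to a disks().insert call with the sourceImage parameter. The following is a minimal sketch; the addRootDisk name is illustrative and gce_service is assumed to be an authorized client.

    def addRootDisk(auth_http, gce_service):
      # sourceImage is the URL-encoded, fully-qualified URI of the image.
      request = gce_service.disks().insert(project=PROJECT_ID,
        zone='<zone>', sourceImage='<source-image>',
        body={'name': '<disk-name>'})
      response = request.execute(auth_http)

      print response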

    Using an Existing Root Persistent Disk

    To start an instance with an existing root persistent disk in gcutil, provide the boot parameter when you attach the disk. When you create a root persistent disk using a Google-provided image, you must attach it to your instance in read-write mode. If you try to attach it in read-only mode, your instance may be created successfully, but it won't boot up correctly.

    In the API, insert an instance with a populated boot field:

    {
      ...
      "disks": [{
        "deviceName": "<disk-name>",
        "source": "<disk-uri>",
        "boot": true,
        ...
      }]
    }

    When you are using the API to specify a root persistent disk:

    • You can only specify the boot field on one disk. You may attach many persistent disks but only one can be the root persistent disk.
    • You must attach the root persistent disk as the first disk for that instance.
    • When the source field is specified, you cannot specify the initializeParams field, as they conflict with each other. Providing a source indicates that the root persistent disk exists already, whereas specifying initializeParams indicates that Compute Engine should create the root persistent disk.
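
    Putting these rules together, the following is a minimal sketch of such an insert request using the Python client library. The addBootInstance name and the machine type and network placeholders are illustrative, and gce_service is assumed to be an authorized client.

    def addBootInstance(auth_http, gce_service):
      body = {
        'name': '<instance-name>',
        'machineType': '<fully-qualified-machine-type-url>',
        # The root persistent disk must be listed first and is the only
        # disk with the boot field set.
        'disks': [{
          'deviceName': '<disk-name>',
          'source': '<disk-uri>',
          'boot': True
        }],
        'networkInterfaces': [{
          'network': '<fully-qualified-network-url>'
        }]
      }
      request = gce_service.instances().insert(project=PROJECT_ID,
        zone='<zone>', body=body)
      response = request.execute(auth_http)

      print response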

    Repartitioning a Root Persistent Disk

    By default, when you create a root persistent disk with a source image or a source snapshot, your disk is automatically partitioned with enough space for the root filesystem. It is possible to create a root persistent disk with more disk space using the sizeGb field but the additional persistent disk space won't be recognized until you repartition your persistent disk. Follow these instructions to repartition a root persistent disk with additional disk space, using fdisk and resize2fs:

    1. If you haven't already, create your root persistent disk:
      user@local:~$ gcutil --project=<project-id> adddisk <disk-name> --source_image=<image> --size_gb=<size-larger-than-10gb>
    2. Start an instance using the root persistent disk.
      user@local:~$ gcutil --project=<project-id> addinstance <instance-name> --disk=<disk-name>,boot
    3. Check the size of your disk.

      Although you have specified a size larger than 10GB for your persistent disk, notice that only 10GB of root disk space appears:

      user@mytestinstance:~$ df -h
      Filesystem      Size  Used Avail Use% Mounted on
      rootfs           10G  641M  8.9G   7% /
      /dev/root        10G  641M  8.9G   7% /
      none            1.9G     0  1.9G   0% /dev
      tmpfs           377M  116K  377M   1% /run
      tmpfs           5.0M     0  5.0M   0% /run/lock
      tmpfs           753M     0  753M   0% /run/shm
    4. Run fdisk.
      user@mytestinstance:~$ sudo fdisk /dev/sda
      Note: If you are running a CentOS image, you will need to turn off DOS-compatibility mode and change the display units to sectors to complete this step. This is because CentOS starts fdisk in a deprecated DOS-compatible mode that can cause an alignment error on your persistent disks. Enter the following commands at the prompt to turn off DOS-compatibility mode and switch to sectors:
      Command (m for help): c
      DOS Compatibility flag is not set
      
      Command (m for help): u
      Changing display/entry units to sectors

      When prompted, enter p to print the current state of /dev/sda, which displays the actual size of your root persistent disk. For example, this root persistent disk has ~50GB of space:

      The device presents a logical sector size that is smaller than
      the physical sector size. Aligning to a physical sector (or optimal
      I/O) size boundary is recommended, or performance may be impacted.
      
      Command (m for help): p
      
      Disk /dev/sda: 53.7 GB, 53687091200 bytes
      4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x000d975a
      
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1            2048    20971519    10484736   83  Linux

      Make note of the partition type ID (the Id column) for later steps. In this example, the ID is 83.

    5. Next, enter d at the prompt to delete the existing partition on /dev/sda so that you can recreate it at the full size of the disk. This won't delete any files on the system.
      Command (m for help): d
      Selected partition 1

      Enter p at the prompt to review and confirm that the original partition has been deleted (notice the empty lines after Device Boot where the partition used to be):

      Command (m for help): p
      
      Disk /dev/sda: 53.7 GB, 53687091200 bytes
      4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x000d975a
      
         Device Boot      Start         End      Blocks   Id  System
      
      
    6. Next, type n at the prompt to create a new partition. Select the default values for partition type, number, and the first sector when prompted:
      Command (m for help): n
      Partition type:
         p   primary (0 primary, 0 extended, 4 free)
         e   extended
      Select (default p): p
      Partition number (1-4, default 1): 1
      First sector (2048-104857599, default 2048):
      Using default value 2048
      Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599):
      Using default value 104857599

      Confirm that your partition was created:

      Command (m for help): p
      
      Disk /dev/sda: 53.7 GB, 53687091200 bytes
      4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x000d975a
      
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1            2048   104857599    52427776   83  Linux
    7. Check that the partition type ID is the same ID that you noted in step 4. In this example, it matches the original ID of 83.
    8. Commit your changes by entering w at the prompt:
      Command (m for help): w
      The partition table has been altered!
      
      Calling ioctl() to re-read partition table.
      
      WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
      The kernel still uses the old table. The new table will be used at
      the next reboot or after you run partprobe(8) or kpartx(8)
      Syncing disks.
    9. Reboot your instance. This will close your current SSH connection. Wait a couple of minutes before attempting another ssh connection.
      user@mytestinstance:~$ sudo reboot
    10. SSH back into your instance.
      user@local:~$ gcutil --project=<project-id> ssh <instance-name>
    11. Resize your filesystem to the full size of the partition:
      user@mytestinstance:~$ sudo resize2fs /dev/sda1
      resize2fs 1.42.5 (29-Jul-2012)
      Filesystem at /dev/sda1 is mounted on /; on-line resizing required
      old_desc_blocks = 1, new_desc_blocks = 4
      The filesystem on /dev/sda1 is now 13106944 blocks long.
    12. Verify that your filesystem is now the correct size.
      user@mytestinstance:~$ df -h
      Filesystem      Size  Used Avail Use% Mounted on
      rootfs           50G  1.3G   47G   3% /
      /dev/root        50G  1.3G   47G   3% /
      none            1.9G     0  1.9G   0% /dev
      tmpfs           377M  116K  377M   1% /run
      tmpfs           5.0M     0  5.0M   0% /run/lock
      tmpfs           753M     0  753M   0% /run/shm
      

    Persistent Disk Snapshots

    Google Compute Engine offers the ability to take snapshots of your persistent disk and create new persistent disks from that snapshot. This can be useful for backing up data, recreating a persistent disk that might have been lost, or copying a persistent disk. Google Compute Engine provides differential snapshots, which allow for better performance and lower storage charges for users. Differential snapshots work in the following manner:

    1. The first successful snapshot of a persistent disk is a full snapshot that contains all the data on the persistent disk.
    2. The second snapshot only contains any new data or modified data since the first snapshot. Data that hasn't changed since snapshot 1 isn't included. Instead, snapshot 2 contains references to snapshot 1 for any unchanged data.
    3. Snapshot 3 contains any new or changed data since snapshot 2 but won't contain any unchanged data from snapshot 1 or 2. Instead, snapshot 3 contains references to blocks in snapshot 1 and snapshot 2 for any unchanged data.

    This repeats for all subsequent snapshots of the persistent disk.

    Note: Snapshots are always created based on the last successful snapshot taken. For example, if you start the creation process for snapshot A, and also start a creation process for snapshot B before snapshot A is completed, snapshot B won't be based on snapshot A, and instead will be a full snapshot without any references to A. If you create snapshot A and snapshot B, but snapshot B fails to create successfully or is corrupted, when you attempt to create snapshot C, snapshot C will be based on snapshot A rather than snapshot B, since A was the last successful snapshot taken.

    The diagram below attempts to illustrate this process:

    Diagram describing how to create a snapshot

    Snapshots are a global resource. Because they are geo-replicated, they will survive maintenance windows. It is not possible to share a snapshot across projects. You can see a list of snapshots available to a project by running:

    gcutil listsnapshots --project=<project-id>

    To list information about a particular snapshot:

    gcutil getsnapshot <snapshot-name> --project=<project-id>

    Creating a Snapshot

    Before you create a persistent disk snapshot, you should ensure that you are taking a snapshot that is consistent with the desired state of your persistent disk. If you take a snapshot of your persistent disk in an "unclean" state, it may force a disk check and possibly lead to data loss. To help with this, Google Compute Engine encourages you to make sure that your disk buffers are flushed before you take your snapshot. For example, if your operating system is writing data to the persistent disk, it is possible that your disk buffers are not yet cleared. Follow these instructions to clear your disk buffers:

    Linux
    • Unmount the filesystem

      This is the safest, most reliable way to ensure your disk buffers are cleared. To do this:

      1. ssh into your instance.
      2. Run sudo umount <disk‑location>.
      3. Create your snapshot.
      4. Remount your persistent disk.
    • Alternatively, you can also sync your filesystem

      If unmounting your persistent disk is not an option, such as in scenarios where an application might be writing data to the disk, you can sync your filesystem to flush the disk buffers. To do this:

      1. ssh into your instance.
      2. Stop your applications from writing to your persistent disk.
      3. Run sudo sync.
      4. Create your snapshot.
    Windows

    Caution: Taking a snapshot of a persistent disk attached to a Windows instance requires that the instance is terminated. Make sure you have saved all your data before you continue with this process. If you will be using the snapshot to start multiple virtual machines, Compute Engine recommends that you sysprep the disk.

    1. Log onto your Windows instance.
    2. Make a copy of the following file:
      C:\Program Files\Google Compute Engine\sysprep\unattended.xml
    3. Edit the copied file to provide a password. Look for the following fields: AdministratorPassword and AutoLogon. Provide a generic password for both fields. This password is only used temporarily during the setup process and will be changed by the instance setup script to the password provided by the metadata server. This is necessary so that you can log into future virtual machine instances that use this image.
    4. Run the following command in a cmd window:
      gcesysprep -unattend unattended-copy.xml

    Once you run the gcesysprep command, your Windows instance will terminate. Afterwards, you can take a snapshot of the root persistent disk.

    Create your snapshot using the gcutil addsnapshot command:

    gcutil --project=<project-id> addsnapshot <snapshot-name> --source_disk=<source-disk>

    Important Flags and Parameters:

    <snapshot‑name>
    [Required] The name for this new snapshot.
    ‑‑zone=<zone>
    [Optional] The zone of the persistent disk.
    ‑‑source_disk=<source‑disk>
    [Required] The persistent disk from which to create the snapshot.
    ‑‑project=<project‑id>
    [Required] The project ID of the source disk. The snapshot is created within the same project. For example, if the source disk belongs to a project named myfirstproject, the snapshot will also belong to that project.

    gcutil waits until the operation returns a status of READY or FAILED, or until it reaches the maximum timeout, and then returns the last known details of the snapshot in JSON format.

    Caution: If you create a snapshot from a persistent disk and the snapshot creation failed, you won't be able to delete the original persistent disk until you clean up the failed snapshot. This prevents accidentally removing source data if the snapshot did not successfully copy your persistent disk information.
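
    In the API client libraries, you can create a snapshot with the disks().createSnapshot method. The following is a minimal sketch; the addSnapshot name is illustrative and gce_service is assumed to be an authorized client.

    def addSnapshot(auth_http, gce_service):
      # The body names the snapshot to create from the source disk.
      request = gce_service.disks().createSnapshot(project=PROJECT_ID,
        zone='<zone>', disk='<source-disk>',
        body={'name': '<snapshot-name>'})
      response = request.execute(auth_http)

      print response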

    Creating a New Persistent Disk from a Snapshot

    After creating a persistent disk snapshot, you can apply data from that snapshot to new persistent disks. It is only possible to apply data from a snapshot when you first create a persistent disk. You cannot apply a snapshot to an existing persistent disk, or apply a snapshot to persistent disks that belong to a different project than that snapshot.

    To apply data from a persistent disk snapshot, run the gcutil adddisk command with the ‑‑source_snapshot flag:

    gcutil adddisk <disk-name> --project=<project-id> --source_snapshot=<snapshot-name> [--size_gb=<size>]

    Important Flags and Parameters:

    <disk‑name>
    [Required] The name for the persistent disk. The name must start with a lowercase letter, followed by 1-62 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
    ‑‑project=<project‑id>
    [Required] The ID of the project where this persistent disk should live.
    ‑‑source_snapshot=<snapshot‑name>
    [Required] The persistent disk snapshot whose data should be applied to this disk.
    ‑‑size_gb=<size>
    [Optional] The size of the persistent disk. This must be equal to or larger than the size of the snapshot. If you create a non-root persistent disk that is larger than the original size of the snapshot, you will need to follow the instructions to restore a snapshot to a larger size in order to use the additional space.

    If not specified, the persistent disk size will be the same size as the original disk from which the snapshot was created.
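
    In the client libraries, a comparable request is a disks().insert call with a sourceSnapshot field in the disk body. The following is a minimal sketch; the addDiskFromSnapshot name and placeholder values are illustrative.

    def addDiskFromSnapshot(auth_http, gce_service):
      body = {
        'name': '<disk-name>',
        # Fully-qualified URI of the snapshot to restore from.
        'sourceSnapshot': '<snapshot-uri>',
        # Optional; must be equal to or larger than the snapshot size.
        'sizeGb': '<size>'
      }
      request = gce_service.disks().insert(project=PROJECT_ID,
        zone='<zone>', body=body)
      response = request.execute(auth_http)

      print response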

    Restoring a Snapshot to a Larger Size

    You can restore a non-root persistent disk snapshot to a larger size than the original snapshot but you must run some extra commands from within the instance for the additional space to be recognized by the instance. For example, if your original snapshot is 500GB, you can choose to restore it to a persistent disk that is 600GB or more. However, the extra 100GB won't be recognized by the instance until you mount and resize the filesystem.

    The instructions that follow discuss how to mount and resize your persistent disk using resize2fs as an example. Depending on your operating system and filesystem type, you may need to use a different filesystem resizing tool. Please refer to your operating system documentation for more information.

    Note: This only works for non-root persistent disk snapshots. If you are restoring a root persistent disk snapshot that is larger than the original snapshot size, you must follow the instructions to Repartition the root persistent disk for the extra space to be recognized by the instance.

    1. Create a new persistent disk from your non-root snapshot that is larger than the snapshot size.

      Provide the ‑‑size_gb flag to specify a larger persistent disk size. For example:

      me@local~:$ gcutil --project=exampleproject adddisk newdiskname --source_snapshot=my-data-disk-snapshot --size_gb=600
    2. Attach your persistent disk to an instance.
      me@local~:$ gcutil --project=exampleproject attachdisk my-instance-name --disk=newdiskname
    3. ssh into your instance.
      me@local~:$ gcutil --project=exampleproject ssh my-instance-name
    4. Determine the /dev/* location of your persistent disk by running:
      me@my-instance-name:~$ ls -l /dev/disk/by-id/google-*
      lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
      lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-newdiskname -> ../../sdb # newdiskname is located at /dev/sdb
    5. Mount your new persistent disk.

      Create a new mount point. For example, you can create a mount point called /mnt/pd1.

      user@my-instance-name:~$ sudo mkdir /mnt/pd1

      Mount your persistent disk:

      me@my-instance-name:~$ sudo mount /dev/sdb /mnt/pd1
    6. Resize your persistent disk using resize2fs.
      me@my-instance-name:~$ sudo resize2fs /dev/sdb
    7. Check that your persistent disk reflects the new size.
      me@my-instance-name:~$ df -h
      Filesystem                                              Size  Used Avail Use% Mounted on
      rootfs                                                  296G  671M  280G   1% /
      udev                                                     10M     0   10M   0% /dev
      tmpfs                                                   3.0G  112K  3.0G   1% /run
      /dev/disk/by-uuid/36fd30d4-ea87-419f-a6a4-a1a3cf290ff1  296G  671M  280G   1% /
      tmpfs                                                   5.0M     0  5.0M   0% /run/lock
      /dev/sdb                                                593G  198M  467G   1% /mnt/pd1 # The persistent disk is now ~600GB

    Deleting a Snapshot

    Google Compute Engine provides differential snapshots so that each snapshot only contains data that has changed since the previous snapshot. For unchanged data, snapshots use references to the data in previous snapshots. When you delete a snapshot, Google Compute Engine goes through the following procedures:

    1. The snapshot is immediately marked as DELETED in the system.
    2. If the snapshot has no dependent snapshots, it is deleted outright.
    3. If the snapshot has dependent snapshots:
      1. Any data that is required for restoring other snapshots will be moved into the next snapshot. The size of the next snapshot will increase.
      2. Any data that is not required for restoring other snapshots will be deleted. This lowers the total size of all your snapshots.
      3. The next snapshot will no longer reference the snapshot marked for deletion but will instead reference the existing snapshot before it.

    The diagram below attempts to illustrate this process:

    Diagram describing the process for deleting a snapshot

    To delete a snapshot, run:

    gcutil --project=<project-id> deletesnapshot <snapshot-name>

    Important Flags and Parameters:

    ‑‑project=<project‑id>
    [Required] The project ID for this request.
    <snapshot‑name>
    [Required] The name of the snapshot to delete.
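
    In the client libraries, you can delete a snapshot with the snapshots().delete method. Because snapshots are a global resource, no zone is specified. A minimal sketch, assuming gce_service is an authorized client:

    def deleteSnapshot(auth_http, gce_service):
      request = gce_service.snapshots().delete(project=PROJECT_ID,
        snapshot='<snapshot-name>')
      response = request.execute(auth_http)

      print response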

    Attaching Multiple Persistent Disks to One Instance

    To attach more than one disk to an instance, run addinstance with multiple ‑‑disk flags, one for each disk to attach.

    Attaching a Persistent Disk to Multiple Instances

    It is possible to attach a persistent disk to more than one instance. However, if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode. It is not possible to attach the persistent disk to multiple instances in read-write mode.

    If you attach a persistent disk in read-write mode and then try to attach the disk to subsequent instances, Google Compute Engine returns an error similar to the following:

    error   | RESOURCE_IN_USE
    message | The disk resource '<disk‑name>' is already being used
    in read-write mode

    To attach a persistent disk to an instance in read-only mode, review instructions for attaching a persistent disk and set the <mode> to read_only.

    Getting Persistent Disk Information

    To see a list of persistent disks in the project:

    gcutil --project=<project-id> listdisks

    Important Flags and Parameters:

    ‑‑project=<project‑id>
    [Required] The project ID for this request.

    To see detailed information about a specific persistent disk:

    gcutil --project=<project-id> getdisk <disk-name>

    Important Flags and Parameters:

    ‑‑project=<project‑id>
    [Required] The project ID for this request.
    <disk‑name>
    [Required] The disk for which you want to get more information.

    By default, gcutil provides an aggregate listing of all your resources across all available zones. If you want a list of resources from just a single zone, provide the ‑‑zone flag in your request.

    $ gcutil --project=<project-id> listdisks --zone=<zone>

    Important Flags and Parameters:

    ‑‑project=<project‑id>
    [Required] The project ID for this request.
    ‑‑zone=<zone>
    [Required] The zone from which you want to list disks.

    In the API, you need to make requests to two different methods to get a list of aggregate resources or a list of resources within a zone. To make a request for an aggregate list, make a GET request to that resource's aggregatedList URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/aggregated/disks

    In the client libraries, make a request to the disks().aggregatedList function:

    def listAllDisks(auth_http, gce_service):
      request = gce_service.disks().aggregatedList(project=PROJECT_ID)
      response = request.execute(auth_http)
    
      print response

    To make a request for a list of disks within a zone, make a GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks

    In the API client libraries, make a disks().list request:

    def listDisks(auth_http, gce_service):
      request = gce_service.disks().list(project=PROJECT_ID,
        zone='<zone>')
      response = request.execute(auth_http)
    
      print response

    Migrating a Persistent Disk to a Different Instance in the Same Zone

    To migrate a persistent disk from one instance to another, you can detach the persistent disk from an instance and reattach it to another instance (either a running instance or a new instance). If you merely want to migrate the information, you can take a persistent disk snapshot and apply it to the new disk.

    Persistent disks retain all their information indefinitely until they are deleted, even if they are not attached to a running instance.

    Migrating a Persistent Disk to a Different Zone

    You cannot attach a persistent disk to an instance in another zone. If you want to migrate your persistent disk data to another zone, you can use persistent disk snapshots. To do so:

    1. Create a snapshot of the persistent disk you would like to migrate
    2. Apply the snapshot to a new persistent disk in your desired zone

    Deleting a Persistent Disk

    When you delete a persistent disk, all its data is destroyed and you will not be able to recover it.

    You cannot delete a persistent disk that is attached to an instance. To check whether a disk is attached, run gcutil listinstances, which will list all persistent disks in use by each instance.

    To delete a disk:

    gcutil deletedisk <disk-name> --project=<project-id>

    Important Flag and Parameters:

    ‑‑project=<project‑id>
    [Required] The project ID of the disk.
    <disk‑name>
    [Required] The name of the disk to delete.
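
    In the client libraries, the corresponding call is disks().delete. A minimal sketch, assuming gce_service is an authorized client:

    def deleteDisk(auth_http, gce_service):
      request = gce_service.disks().delete(project=PROJECT_ID,
        zone='<zone>', disk='<disk-name>')
      response = request.execute(auth_http)

      print response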

    Setting the Auto-delete State of a Persistent Disk

    Read-write persistent disks can be automatically deleted when the associated virtual machine instance is deleted. This behavior is controlled by the autoDelete property on the virtual machine instance for a given attached persistent disk and can be updated at any time. You can also prevent a persistent disk from being automatically deleted by setting the autoDelete value to false.

    Note: You can only set the auto-delete state of a persistent disk that is attached in read-write mode.

    To set the auto delete state of a persistent disk in gcutil, use the gcutil setinstancediskautodelete command:

    gcutil --project=<project-id> setinstancediskautodelete <instance-name> --device_name=<device-name> --zone=<zone> --[no]auto_delete

    Important flags and parameters:

    ‑‑project=<project-id>
    [Required] The project ID for this request.
    <instance-name>
    [Required] The name of the instance for which you want to update the auto delete status of the persistent disk.
    ‑‑device_name=<device-name>
    [Required] The device name of the persistent disk. This is the device name specified at instance creation time, if applicable, and may not be the same as the disk name. If you aren't sure, use the persistent disk name.
    ‑‑zone=<zone>
    [Required] The zone for this request.
    ‑‑[no]auto_delete
    [Required] The auto-delete state to set.

    In the API, make a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>/setDiskAutoDelete?deviceName=<device-name>&autoDelete=true

    Using the client library, use the instances().setDiskAutoDelete method:

    def setAutoDelete(gce_service, auth_http):
      # autoDelete is a boolean query parameter on setDiskAutoDelete.
      request = gce_service.instances().setDiskAutoDelete(project=PROJECT_ID,
        zone=ZONE, deviceName=DEVICE_NAME, instance=INSTANCE, autoDelete=True)
      response = request.execute(http=auth_http)
    
      print response

    Formatting Disks

    Before you can use non-root persistent disks in Google Compute Engine, you need to format and mount them. We provide the safe_format_and_mount tool in our images to assist in this process. The safe_format_and_mount tool can be found at the following location on your virtual machine instance:

    /usr/share/google/safe_format_and_mount

    The tool performs the following actions:

    • Format the disk (only if it is unformatted)
    • Mount the disk

    This can be helpful if you need to use a non-root persistent disk from a startup script, because the tool prevents your script from accidentally reformatting your disks and erasing your data.

    safe_format_and_mount works much like the standard mount tool:

    $ sudo mkdir <mount-point>
    $ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" <disk-name> <mount-point>

    You can alternatively format and mount disks using standard tools such as mkfs and mount.

    Caution: If you are formatting disks from a startup script, remember that the startup script runs on every boot (due to a reboot or unexpected failure), and you risk data loss if you do not take precautions to prevent reformatting your data on boot. You should also back up all important data and set up data recovery systems as a precaution.

    Checking an Instance's Available Disk Space

    If you're not sure how much disk space you have, you can check the disk space of an instance's mounted disks using the following command:

    me@my-instance:~$ sudo df -h

    To match a disk name to its filesystem device, run:

    me@my-instance:~$ ls -l /dev/disk/by-id/google-*
    ...
    lrwxrwxrwx 1 root root 3 MM  dd 07:44 /dev/disk/by-id/google-mypd -> ../../sdb  # google-mypd corresponds to /dev/sdb
    lrwxrwxrwx 1 root root 3 MM  dd 07:44 /dev/disk/by-id/google-pd0 -> ../../sdc
    
    me@my-instance:~$ sudo df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1             9.4G  839M  8.1G  10% /
    ....
    /dev/sdb              734G  197M  696G   1% /mnt/pd0 # sdb has 696GB of available space left

    Page: images

    An Image resource contains a boot loader, an operating system, and a root file system that is necessary for starting an instance. You must always specify an image when you create an instance or when you create a root persistent disk. By default, Google Compute Engine installs the root filesystem defined by the image on a root persistent disk.

    Images belong to a project and can either be only accessible by that project (private) or accessible by any project (public). The project owner controls how accessible the image is. Depending on the contents of the image, some images cost money to use, while others are free. For example, Google-provided images of Red Hat Enterprise Linux (RHEL) and SUSE cost an additional fee to use.

    This page describes the Image resource in detail, how to create an image from a root persistent disk, and how to create a new image altogether.

    Contents

    Accessing images

    Images can be either public or private. Public images are available to all users, while private images are only available to the project that the image resides in. Accessing an image depends on whether an image is public or private and whether you have the right permissions.

    Public images

    Public images are visible by everybody and can be used by any Compute Engine project. Public images can be provided and maintained by Google, open-source communities, and third-party vendors. These images contain preloaded operating systems that can run on Compute Engine instances.

    It is not currently possible for users to create public images. Users who want to share an image publicly should grant public access to the original Google Cloud Storage bucket where the source image file is stored.

    Private images

    Users create private images by default. Private images are only accessible from within the project that owns the image. As a user, you can create a new image from an existing root persistent disk or you can create a new image from scratch. To create a private image, see Creating an image from a root persistent disk.

    Listing all images

    You can see a list of all images available to your project by performing a gcutil listimages command. This lists all public and private images that your project can use.

    gcutil --project=<project-id> listimages
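
    In the client libraries, you can retrieve a similar listing with the images().list method. Note that this call lists only images in the project you specify; to see Google-provided images, you would repeat the call against a project such as debian-cloud or centos-cloud. A minimal sketch, assuming gce_service is an authorized client:

    def listImages(auth_http, gce_service):
      # Lists images owned by PROJECT_ID; repeat with project='debian-cloud'
      # or project='centos-cloud' to list Google-provided images.
      request = gce_service.images().list(project=PROJECT_ID)
      response = request.execute(auth_http)

      print response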

    Starting an instance from an image

    Using an image differs if you are using gcutil or using the API.

    Starting an instance in gcutil

    To use an image in gcutil, you can either provide the fully-qualified URL to a specific image (for private images) or use prefix matching to select the latest version of an image offered by Compute Engine. To use prefix matching, provide one of the following values with your --image flag in gcutil:

    • debian-7 - Fetches the latest Debian 7 image.
    • centos-6 - Fetches the latest CentOS 6 image.
    • rhel-6 - Fetches the latest Red Hat Enterprise Linux 6 image.
    • sles-11 - Fetches the latest SUSE Linux Enterprise Server 11 image.

    For example, if you pass in --image=debian-7, gcutil first looks in your private images for a matching image name, and if it fails to find a match, it moves on to look in the Debian and CentOS projects. Once it finds a match in the image name, it chooses the latest image version to use. A request to add an instance using prefix matching looks like the following:

    gcutil --project=<project-id> addinstance mytestinstance --image=debian-7

    If you want to unambiguously specify a certain image, you can pass in the full image name with the image project, in the following format:

    projects/<project-id>/global/images/<image-name>

    To use a specific image in your requests, provide the full image name with the image project using the --image flag. For example, to use a Debian image, make a request like so:

    gcutil --project=<project-id> addinstance myinstance --image=projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD

    Starting an instance in the API

    In the API, prefix matching does not occur and you must provide the full URL to your desired image in your request. For example, to create a root persistent disk in the API, you must provide the sourceImage query parameter with the full URI to your desired image, like so:

    https://www.googleapis.com/compute/v1/projects/<project>/zones/<zone>/disks?sourceImage=<image-url>

    <image-url> must have the following format. As an example, this URL points to a CentOS image:

    https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-6-vYYYYMMDD

    For more information, see the API reference.

    Deprecating an image

    Google Compute Engine lets you deprecate a private image that you own by setting the deprecation status on the image. Each deprecation status causes a different response from the server, helping you transition users away from unsupported images in a manageable way. Possible deprecation statuses are:

    • DEPRECATED - This image is considered deprecated. When users attempt to use this image, the request will succeed but Google Compute Engine also returns a warning. New links to this image are still allowed.
    • OBSOLETE and DELETED - This image is obsolete or deleted and new users cannot use it. Google Compute Engine returns an error if users try to use the image in their requests. Existing links to this image are still allowed. Note that marking an image as deleted only restricts its usage; users still need to manually delete their images to remove them from the image list.

    To set the deprecation status of an image in gcutil, use the gcutil deprecateimage command:

    gcutil --project=<project-id> deprecateimage --state=<deprecation-state> --replacement=<fully-qualified-image-url> [--deleted_on=<date-time> --deprecated_on=<date-time> --obsolete_on=<date-time>] <image-name>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID where this image lives.
    --state=<deprecation-state>
    [Required] The state of the image. Valid values are:
    • DEPRECATED - This image is considered deprecated. When users attempt to use this image, the request will succeed but Google Compute Engine also returns a warning. New links to this image are still allowed.
    • OBSOLETE - This image is obsolete and new users cannot use it. Google Compute Engine returns an error if users try to use the image in their requests. Existing links to this image are still allowed.
    • DELETED - This image is deleted and users cannot use it. Google Compute Engine returns an error if users attempt to use the image.
    --replacement=<fully-qualified-image-url>
    [Required] Fully-qualified URL to a recommended replacement image.
    --deprecated_on=<date-time>
    [Optional] A valid RFC3339 timestamp indicating when the image was or will be deprecated. This is purely informational. Google Compute Engine does not enforce these dates and times (e.g. Google Compute Engine won't automatically mark an image as deprecated once the date-time is reached).
    --deleted_on=<date-time>
    [Optional] A valid RFC3339 timestamp indicating when the image was or will be deleted. This is purely informational. Google Compute Engine does not enforce these dates and times (e.g. Google Compute Engine won't automatically mark an image as deleted once the date-time is reached). Instead, you will need to run gcutil deleteimage to delete your image. For more information, see Deleting an image.
    --obsolete_on=<date-time>
    [Optional] A valid RFC3339 timestamp indicating when the image was or will be obsolete. This is purely informational. Google Compute Engine does not enforce these dates and times (e.g. Google Compute Engine won't automatically mark an image as obsolete once the date-time is reached).
    <image-name>
    [Required] The name of the image to deprecate.
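
    In the client libraries, the corresponding call is images().deprecate, which takes a deprecation-status body. The following is a minimal sketch; the deprecateImage name is illustrative and gce_service is assumed to be an authorized client.

    def deprecateImage(auth_http, gce_service):
      body = {
        'state': 'DEPRECATED',
        # Fully-qualified URL of the recommended replacement image.
        'replacement': '<fully-qualified-image-url>'
      }
      request = gce_service.images().deprecate(project=PROJECT_ID,
        image='<image-name>', body=body)
      response = request.execute(auth_http)

      print response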

    Deleting an image

    You can only delete images that you or someone who has access to the project has added. To delete an image, run:

    gcutil --project=<project-id> deleteimage <image-name>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID where this image lives.
    <image-name>
    [Required] The name of the image to delete.
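
    In the client libraries, the corresponding call is images().delete. A minimal sketch, assuming gce_service is an authorized client:

    def deleteImage(auth_http, gce_service):
      request = gce_service.images().delete(project=PROJECT_ID,
        image='<image-name>')
      response = request.execute(auth_http)

      print response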

    Creating an image from a root persistent disk

    These instructions describe how to create a new private image based on an existing image running on a root persistent disk. This is ideal for situations where you may have created and then modified a persistent boot disk to a certain state and would like to save that state to be reused.

    Note: These instructions describe how to create a new instance with a public or shared image, customize the root disk, and then create a new image based on that instance. If you already have an instance and root persistent disk for which you want to make an image, skip to step 3.

    To create a new image:

    1. Start a new instance and log into it.
    2. Customize your instance by adding packages, configuring startup daemons, and so on.
    3. Create an image of your instance using the gcimagebundle tool that comes installed on the default Google Compute Engine image.
    4. Upload an image into Compute Engine through Google Cloud Storage.
    1. (Optional) Start a new instance or log into an existing instance

      If you want to start a new instance and customize a public or shared image, follow steps 1 and 2. If you already have an existing instance and root persistent disk from which you want to make an image, skip to step 3.

      Review the details about starting a new instance if necessary. Start a new instance as follows:

      $ gcutil --project=<project-id> addinstance --service_account_scopes=storage-rw,compute-rw --image=<image-name> myinst

      Note that you're passing in service account scopes so that you can upload your file to Google Cloud Storage in later steps.

      ssh into your instance:

      $ gcutil --project=<project-id> ssh myinst
    2. (Optional) Customize your instance

      Customize your instance as you like. The following example demonstrates logging in and installing Apache on an instance:

      $ gcutil --project=<project-id> ssh myinst
      ...snip ssh startup info...
      me@myinst:~$ sudo apt-get install apache2
      ...snip installation process...
      
    3. Create a tar file of your image

      Note: It is not currently possible to create an image based on a Windows instance. However, you can take a persistent disk snapshot of the instance's root persistent disk.

      When you are finished customizing your OS, create an image and bundle it into a .tar.gz file. The gcimagebundle tool included in Google Compute Engine images will create an image of the instance for you. The image is stored as a raw block file, packaged and compressed using gzip and tar. The raw block file contains the OS and all installed packages, plus all files in the root persistent disk. It does not include files or packages in a non-root persistent disk.

      The gcimagebundle tool can be accessed by calling gcimagebundle on instances created from Google-provided images. For help, call python gcimagebundle --help.

      To save an image of your instance, run the following command:

      me@myinst:~$ sudo gcimagebundle \
        -d /dev/sda -o /tmp/ --log_file=/tmp/abc.log
      

      This command will create an image of the instance in the following location:

      /tmp/<long-hex-number>.image.tar.gz
    4. Upload an image into Compute Engine

      To use a custom image in Compute Engine virtual machine instances, you must first upload the image to Google Cloud Storage, add the image to your Compute Engine project, and apply that image to your instances. Use the following instructions to help you add your image to Compute Engine.

      1. Upload your tar file to Google Cloud Storage

        You must store your tar file on Google Cloud Storage. You will later load the file from that location when adding your new image to your project.

        To upload the tar file to Google Cloud Storage, use the gsutil command line tool that comes preinstalled on your instance.

        1. Set up gsutil.

          If this is your first time using gsutil on this instance AND you didn't set up your instance to use a service account to talk to Google Cloud Storage, run gsutil config and follow the instructions. Otherwise, you can skip this step.

          Note: If your instance is set up with a service account scope to Google Cloud Storage and you run gsutil config, the command creates user credentials that are used by subsequent gsutil commands instead of the service account credentials that were previously used.

        2. Create a bucket.

          The following restrictions apply when you create a Cloud Storage bucket to store your image file:

          Create your bucket using the following command:

          me@myinst:~$ gsutil mb gs://<bucket-name>

          Note: You must enable billing for your Google Cloud Storage account before you can use the Google Cloud Storage service. To understand how Google Cloud Storage bills for usage, see their pricing table.


        3. Copy your file to your new bucket.
          me@myinst:~$ gsutil cp /tmp/<your-image>.image.tar.gz gs://<bucket-name>
      2. Add your customized image to the Images collection

        Log out of your instance by typing exit. Add your saved image to your project's Images list using gcutil on your local machine, as described below. When you add your image to the Images collection, you must choose an image name that is unique among all images in the project and specify the URI of your image in Google Cloud Storage, using the URI scheme shown below.

        $ gcutil --project=<project-id> addimage <image-name> <image-uri>

        Important Flags and Parameters:

        --project=<project-id>
        [Required] The project ID where this image should live.
        <image-name>
        [Required] A name for this new image. Your <image-name> must start with a lowercase letter, be followed by 0-62 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
        <image-uri>
        [Required] A fully-qualified Google Cloud Storage URI in either of the following formats:
        • gs://<bucket-name>/<your-image>.image.tar.gz
        • https://storage.googleapis.com/<bucket-name>/<your-image>.image.tar.gz

        You can check if the image is ready to use by performing a gcutil getimage call, which returns the image state as well. Once the image is READY, you can use it for your instances.

      3. Use your custom image

        Whenever you want to start a new instance with your custom image, specify the --image flag with the name that you assigned your image in the previous step:

        $ gcutil --project=my-project addinstance --image=<image-name> <instance-1> <instance-2> <instance-n>

        This differs slightly from using an unaltered Debian or CentOS image. You only need to specify the image name for your custom image when adding an instance, as opposed to the fully-qualified image name you would use for an unaltered Debian or CentOS image from the debian-cloud or centos-cloud project.

        If you choose to create multiple instances with one command, gcutil uses the same image for all of the instances and each instance will have its own unique IP address.

    Building an image from scratch

    Google Compute Engine is capable of running a variety of operating systems and you can build an image from scratch with the operating system of your choice and use it on Google Compute Engine virtual machines. However, Google Compute Engine is a unique environment, with certain requirements to ensure that all images run optimally. This section describes these requirements and is intended as an advanced topic geared towards users who would like to use their own images instead of relying on public images.

    Building an image is an advanced task, and Compute Engine suggests that only users who specifically need a custom image should try to build an image from scratch.

    Note: This section provides general recommendations and information for building an image on Google Compute Engine. Depending on your operating system, the steps to build your image may differ and you should refer to your operating system's documentation for specific instructions.

    Basic operating system configuration

    This section describes how to configure a very basic image that can boot within an instance. These are general recommendations and may differ depending on your operating system. Refer to the operating system's documentation for specific instructions.

    Recommended

    • Set the timezone to UTC:
      sudo ln -sf /usr/share/zoneinfo/UTC /etc/localtime
    • Use the Google NTP server, metadata.google.internal (169.254.169.254), and remove all other NTP servers.
    • Use tune2fs to check disks at desired intervals.
    • Log syslog messages to /dev/ttyS0, so you can debug with gcutil getserialportoutput.

    Optional

    • Enable FIPS mode.
    • Consider updating packages while building images, and then add logic to update packages on first boot.

    Kernel configurations

    You can choose to build and run arbitrary OS kernels on Google Compute Engine if they are compliant with the hardware manifest described below. The following guidelines will help you build your own kernel binaries.

    Hardware Manifest

    The following is a list of devices that your kernel must support:

    • vCPU:
      • Intel(R) Xeon(R) CPU @ 2.60GHz stepping 07
      • CPUID: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc up nopl xtopology pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes avx hypervisor lahf_lm
    • PCI Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
    • ISA bridge: Intel 82371AB/EB/MB PIIX4 ISA (rev 03)
    • Ethernet controller:
      • Virtio-Net Ethernet Adapter
      • vendor = 0x1AF4 (Qumranet/Red Hat), device id = 0x1000. Subsystem ID 0x1.
      • We support Checksum offload
      • We support TSO v4
      • We support UFO
    • SCSI Storage Controller:
      • Virtio-SCSI Storage Controller
      • vendor = 0x1AF4 (Qumranet/Red Hat)
      • device id = 0x1004. Subsystem ID 0x8.
      • We support SCSI Primary Commands 4, SCSI Block Commands 3
      • We support one request queue only
      • Persistent disks report 4 KiB physical sectors / 512 byte logical sectors;
      • We only support block devices (disks)
      • We support the Hotplug / Events feature bit
    • Serial Ports:
      • two 16550A ports
      • ttyS0 on IRQ 4
      • ttyS1 on IRQ 3

    Required Linux Kernel Options

    # to enable paravirtualization functionality.
    CONFIG_KVM_GUEST=y
    
    # to enable the paravirtualized clock.
    CONFIG_KVM_CLOCK=y
    
    # to enable paravirtualized PCI devices.
    CONFIG_VIRTIO_PCI=y
    
    # to enable access to paravirtualized disks.
    CONFIG_SCSI_VIRTIO=y
    
    # to enable access to the networking.
    CONFIG_VIRTIO_NET=y
    

    Network configuration

    In order to ensure high performance network capability, we strongly recommend the following configurations. For more information about Google Compute Engine networks, read the Networking and Firewalls documentation.

    Strongly recommended

    • Use ISC DHCP client. Other clients may be used but have not been tested.
    • For best performance, set MTU to 1460. Google Compute Engine's DHCP server will serve this parameter as the interface-mtu option, which most clients respect.

    Recommended

    • Disable IPv6, as it is not supported.
    • Delete the hostname file. It will be set by startup script, /usr/share/google/set-hostname. To delete the hostname file:
      $ rm /etc/hostname
    • Make sure /etc/hosts includes these entries and add them if they aren't included:
      127.0.0.1 localhost
      169.254.169.254 metadata.google.internal metadata
    • Prevent the instance from remembering its MAC address.
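
    For example, on a Debian-style image the first and last recommendations above might be handled like this (the file locations are assumptions and vary by distribution):

    # Disable IPv6 via sysctl.
    echo "net.ipv6.conf.all.disable_ipv6 = 1" | sudo tee /etc/sysctl.d/70-disable-ipv6.conf

    # Prevent udev from pinning interface names to the instance's MAC address.
    sudo rm -f /etc/udev/rules.d/70-persistent-net.rules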

    Optional

    • Disable the operating system firewall, unless you want to restrict outbound traffic.
      • Google Compute Engine provides a firewall for inbound traffic. For more information on firewalls, read the Networking and Firewalls documentation.

    Installing packages

    Google Compute Engine requires a small set of packages to be installed to make sure that the operating system runs smoothly.

    Required packages

    • Python 2.6 or higher
    • sshd

    Strongly recommended

    When setting up your packages, make sure to:

    • Enable automatic updates
    • Update all packages to the latest version
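
    On a Debian-based image, for example, these two steps might look like the following (the package names are distribution-specific assumptions):

    # Bring every installed package up to date.
    sudo apt-get update && sudo apt-get -y upgrade

    # Enable unattended security updates.
    sudo apt-get install -y unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades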

    Optional

    Google Compute Engine image packages

    Google provides a suite of tools that configure the operating system to work properly with Google Compute Engine. The source code for these tools is hosted on GitHub. For every release, we provide a tarball for each tool, as well as a tarball for the whole repository.

    For information about each package, review the respective README.md files.

    Note that some of these scripts may require customization to work with your distribution. If so, we recommend that you clone the GitHub repository and make your changes there. This keeps your customizations separate from the canonical scripts while still letting you merge in future improvements. Patches are welcome but require a third-party contributor agreement to be signed.

    SSH configurations

    Users typically log into instances via SSH. It is important to run your images with a secure SSH configuration.

    Recommended

    • Disable root ssh login.
    • Disable password authentication.
    • Disable host based authentication.
    • Enable strict host key checking.
    • Use ServerAliveInterval to keep connections open.

    Next, create an sshd stub file /etc/ssh/sshd_not_to_be_run with just the contents “GOOGLE\n”. The Google daemon will generate ssh keys based on the ssh metadata.
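
    For example, a minimal way to create that stub file is:

    printf 'GOOGLE\n' | sudo tee /etc/ssh/sshd_not_to_be_run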

    Remove ssh host keys

    Don't use ssh host keys with your instance. Remove them as follows:

    rm /etc/ssh/ssh_host_key
    rm /etc/ssh/ssh_host_rsa_key*
    rm /etc/ssh/ssh_host_dsa_key*
    rm /etc/ssh/ssh_host_ecdsa_key*

    As an example, the following SSH configuration files provide a good starting point for defining your own configuration files. For your /etc/ssh/ssh_config file, start with the following configurations:

    Host *
    Protocol 2
    ForwardAgent no
    ForwardX11 no
    HostbasedAuthentication no
    StrictHostKeyChecking no
    Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
    Tunnel no
    
    # Google Compute Engine times out connections after 10 minutes of inactivity.
    # Keep alive ssh connections by sending a packet every 7 minutes.
    ServerAliveInterval 420

    For your /etc/ssh/sshd_config file, start with the following configurations:

    # Disable PasswordAuthentication as ssh keys are more secure.
    PasswordAuthentication no
    
    # Disable root login, using sudo provides better auditing.
    PermitRootLogin no
    
    PermitTunnel no
    AllowTcpForwarding yes
    X11Forwarding no
    
    # Compute times out connections after 10 minutes of inactivity.  Keep alive
    # ssh connections by sending a packet every 7 minutes.
    ClientAliveInterval 420
    
    # Restrict sshd to just IPv4 for now as sshd gets confused for things
    # like X11 forwarding.

    Security recommendations

    You should always provide a secure operating system environment but it can be difficult to strike a balance between a secure and accessible environment. While many of the precautions below are not absolutely required, we strongly urge you to enable them. Insecure virtual machines are vulnerable to attack and can consume expensive resources.

    OS security best practices

    • Minimize the amount of software installed by default (e.g. perform a minimal install of the OS).
    • Enable automatic updates.
    • By default, disable all network services except for ssh, dhcp, and ntpd. You can allow a mail server, such as postfix, to run if it only accepts connections from localhost.
    • Do not allow externally listening ports except for sshd.
    • Install the denyhosts package to help prevent SSH brute-force login attempts.
    • Remove all unnecessary non-user accounts from the default install.
    • Set the shell of all non-user accounts to /sbin/nologin or /usr/sbin/nologin (depending on where your OS installed nologin) in /etc/passwd.
    • Configure your OS to use salted SHA512 for passwords in /etc/shadow.
    • Set up and configure pam_cracklib for strong passwords.
    • Set up and configure pam_tally to lock out accounts for 5 minutes after 3 failures.
    • Configure the root account to be locked by default in /etc/shadow. Run the following command to lock the root account:
      usermod -L root
    • Deny root in /etc/ssh/sshd_config by adding the following line:
      PermitRootLogin no
    • Create AppArmor or SELinux profiles for all default running network-facing services.
    • Use filesystem capabilities where possible to remove the need for the S*ID bit and to provide more granular control.
    • Enable compiler and runtime exploit mitigations when compiling network-facing software. For example, here are some of the mitigations that the GNU Compiler Collection (GCC) offers and how to enable them (a combined compile line is sketched after this list):
      • Stack smash protection: Enable this with -fstack-protector. By default, this option protects functions with a stack-allocated buffer longer than eight bytes. To increase protection by covering functions with buffers of at least four bytes, add --param=ssp-buffer-size=4.
      • Address space layout randomization (ASLR): Enable this by building a position-independent executable with -fPIC -pie.
      • Glibc protections: Enable these protections with -D_FORTIFY_SOURCE=2.
      • Global Offset Table (GOT) protection: Enable this runtime loader feature with -Wl,-z,relro,-z,now.
      • Compile-time errors for missing format strings: -Wformat -Wformat-security -Werror=format-security
    • Disable CAP_SYS_MODULE, which allows for loading and unloading of kernel modules. This feature is deprecated in the Linux kernel. To disable this feature:
      echo 1 > /proc/sys/kernel/modules_disabled
    • Remove the kernel symbol table:
      sudo rm /boot/System.map
    • Disable booting a new kernel from the current one via the kexec() system call and KEXEC option in reboot().
      • Add LOAD_KEXEC=false in /etc/default/kexec
      • Uninstall kexec-tools.
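
    As an illustration, the GCC mitigations listed above might be combined into a single compile invocation like the following sketch (the source and binary names are hypothetical; -O2 is included because _FORTIFY_SOURCE only takes effect with optimization enabled):

    gcc -O2 -fstack-protector --param=ssp-buffer-size=4 \
        -fPIC -pie \
        -D_FORTIFY_SOURCE=2 \
        -Wformat -Wformat-security -Werror=format-security \
        -Wl,-z,relro,-z,now \
        -o server server.c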

    Kernel build options

    The following options are security settings that Google Compute Engine recommends when building your own kernel:

    Strongly recommended

    It is strongly recommended that you set the following options when building your kernel.

    • CONFIG_STRICT_DEVMEM=y
      • Restrict /dev/mem to allow access to only PCI space, BIOS code and data regions.
    • CONFIG_DEVKMEM=n
      • Disable support for /dev/kmem.
      • Block access to kernel memory.
    • CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
      • Set low virtual memory that is protected from userspace allocation. To set this value, run the following:
        echo "vm.mmap_min_addr = 0" > /etc/sysctl.d/mmap_min_addr.conf

        For more information, see DEFAULT_MMAP_MIN_ADDR and mmap_min_addr.

    • CONFIG_DEBUG_RODATA=y
      • Mark the kernel read-only data as write-protected in the pagetables, in order to catch accidental (and incorrect) writes to such const data. This option may have a slight performance impact because a portion of the kernel code won't be covered by a 2MB TLB anymore.
    • CONFIG_DEBUG_SET_MODULE_RONX=y
      • Catches unintended modifications to loadable kernel module's text and read-only data. This option also prevents execution of module data.
    • CONFIG_CC_STACKPROTECTOR=y
      • Enables the -fstack-protector GCC feature. This feature puts a canary value at the beginning of critical functions, on the stack just before the return address, and validates the value just before actually returning. This also causes stack-based buffer overflows (that need to overwrite this return address) to overwrite the canary, which gets detected and the attack is then neutralized using a kernel panic.
    • CONFIG_COMPAT_VDSO=n
      • Ensures the VDSO isn’t at a predictable address to strengthen ASLR. If enabled, this feature would map the VDSO to the predictable old-style address, providing a predictable location for exploit code to jump to. Say N here if you are running a sufficiently recent glibc version (2.3.3 or later), to remove the high-mapped VDSO mapping and to exclusively use the randomized VDSO.
    • CONFIG_COMPAT_BRK=n
      • Don’t disable heap randomization.
    • CONFIG_X86_PAE=y
      • Set this option for a 32 bit kernel, as PAE is required for NX support. This also enables larger swapspace support for non-overcommit purposes.
    • CONFIG_SYN_COOKIES=y
      • Provides some protection against SYN flooding.
    Recommended
    • CONFIG_SECURITY_YAMA=y
      • This selects Yama, which extends DAC support with additional system-wide security settings beyond regular Linux discretionary access controls. Currently, the setting is ptrace scope restriction.
    • CONFIG_SECURITY_YAMA_STACKED=y
      • This option forces Yama to stack with the selected primary LSM when Yama is available.
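
    As a sketch, from inside a kernel source tree you could apply the options in this document with the kernel's scripts/config helper before building (option names and availability depend on your kernel version):

    scripts/config --enable KVM_GUEST --enable KVM_CLOCK \
                   --enable VIRTIO_PCI --enable SCSI_VIRTIO --enable VIRTIO_NET \
                   --enable STRICT_DEVMEM --disable DEVKMEM \
                   --enable DEBUG_RODATA --enable DEBUG_SET_MODULE_RONX \
                   --enable CC_STACKPROTECTOR --enable SYN_COOKIES \
                   --disable COMPAT_VDSO --disable COMPAT_BRK \
                   --set-val DEFAULT_MMAP_MIN_ADDR 65536
    make oldconfig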

    Kernel security settings

    A great way to harden the security of the kernel is through the kernel settings file, /etc/sysctl.conf. The following settings are recommended when setting up the file:

    Strongly recommended
    # enables syn flood protection
    net.ipv4.tcp_syncookies = 1
    
    # ignores source-routed packets
    net.ipv4.conf.all.accept_source_route = 0
    
    # ignores source-routed packets
    net.ipv4.conf.default.accept_source_route = 0
    
    # ignores ICMP redirects
    net.ipv4.conf.all.accept_redirects = 0
    
    # ignores ICMP redirects
    net.ipv4.conf.default.accept_redirects = 0
    
    # ignores ICMP redirects from non-GW hosts
    net.ipv4.conf.all.secure_redirects = 1
    
    # ignores ICMP redirects from non-GW hosts
    net.ipv4.conf.default.secure_redirects = 1
    
    # don't allow traffic between networks or act as a router
    net.ipv4.ip_forward = 0
    
    # don't allow traffic between networks or act as a router
    net.ipv4.conf.all.send_redirects = 0
    
    # don't allow traffic between networks or act as a router
    net.ipv4.conf.default.send_redirects = 0
    
    # reverse path filtering - IP spoofing protection
    net.ipv4.conf.all.rp_filter = 1
    
    # reverse path filtering - IP spoofing protection
    net.ipv4.conf.default.rp_filter = 1
    
    # ignores ICMP broadcasts to avoid participating in Smurf attacks
    net.ipv4.icmp_echo_ignore_broadcasts = 1
    
    # ignores bad ICMP errors
    net.ipv4.icmp_ignore_bogus_error_responses = 1
    
    # logs spoofed, source-routed, and redirect packets
    net.ipv4.conf.all.log_martians = 1
    
    # log spoofed, source-routed, and redirect packets
    net.ipv4.conf.default.log_martians = 1
    
    # implements RFC 1337 fix
    net.ipv4.tcp_rfc1337 = 1
    
    # randomizes addresses of mmap base, heap, stack and VDSO page
    kernel.randomize_va_space = 2
    Recommended
    # provides protection from ToCToU races
    fs.protected_hardlinks=1
    
    # provides protection from ToCToU races
    fs.protected_symlinks=1
    
    # makes locating kernel addresses more difficult
    kernel.kptr_restrict=1
    
    # set ptrace protections
    kernel.yama.ptrace_scope=1
    
    # set perf only available to root
    kernel.perf_event_paranoid=2
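
    After writing these settings to /etc/sysctl.conf (or a file under /etc/sysctl.d/), they can be applied immediately without a reboot:

    sudo sysctl -p /etc/sysctl.conf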

    Packaging the image

    A Google Compute Engine image is a gzipped tarball that contains the raw disk file. When an image is added, Google Compute Engine validates the contents of the image to ensure it can be used. Google Compute Engine offers the gcimagebundle tool which can package your image for you. To use gcimagebundle to package your image, run:

    sudo gcimagebundle -d <boot-device> -o <output-directory>

    If you would like to package the image yourself, use the following requirements for the image format.

    Requirements

    Your archive must satisfy these requirements before it can be added to Google Compute Engine.

    • The archive must be a valid gzipped tarball with the extension .tar.gz.
    • The tar format must be GNU or old-GNU format.
    • Operating system (boot) images must have a disk.raw file that is no larger than 10 GB uncompressed.
    • Non-boot images have an uncompressed size limit of 30 GB for disk.raw.
    • The disk.raw file must have an MS-DOS (MBR) partition table.

    Strongly recommended

    The following recommendations prevent the image from using more space than necessary, which reduces the overall storage cost of the image.

    • Create your tarball as sparse.
    • Create your disk.raw as sparse.

    See the Create a blank image file example below.
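
    For example, one quick way to confirm that disk.raw is actually sparse is to compare its apparent size with the space it really occupies; the apparent size should be much larger:

    du -h --apparent-size disk.raw
    du -h disk.raw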

    Partitions

    Images can have any primary and secondary partitioning scheme that is compatible with the MS-DOS partition table. Google-supplied images typically have the following partitioning scheme:

    • MS-DOS partition table
    • 1 primary ext4 formatted partition

    Example: Create a blank image file

    The following is an example script that shows how to create a blank image file. Note that this script is only an example and the exact commands may vary for your environment.

    # Create a sparse blank disk file.
    $ truncate -s ${DISK_SIZE_MB}M disk.raw
    
    # If you can’t use truncate, use dd instead.
    # dd if=/dev/zero of=disk.raw bs=1 count=0 seek=${DISK_SIZE_MB}M
    
    # Create MS-DOS partition table.
    $ parted disk.raw mklabel msdos
    
    # Create primary partition as ext4
    $ parted disk.raw mkpart primary ext4 1 ${DISK_SIZE_MB}
    
    # Create compressed and sparse image tarball
    $ tar -Szcf image.tar.gz disk.raw
    

    Upload an image into Compute Engine

    To use a custom image in Compute Engine virtual machine instances, you must first upload the image to Google Cloud Storage, add the image to your Compute Engine project, and apply that image to your instances. Use the following instructions to help you add your image to Compute Engine.

    1. Upload your tar file to Google Cloud Storage

      You must store your tar file on Google Cloud Storage. You will later load the file from that location when adding your new image to your project.

      To upload the tar file to Google Cloud Storage, use the gsutil command line tool that comes preinstalled on your instance.

      1. Set up gsutil.

        If this is your first time using gsutil on this instance AND you didn't set up your instance to use a service account to talk to Google Cloud Storage, run gsutil config and follow the instructions. Otherwise, you can skip this step.

        Note: If your instance is set up with a service account scope to Google Cloud Storage and you run gsutil config, the command creates user credentials that are used by subsequent gsutil commands instead of the service account credentials that were previously used.

      2. Create a bucket.

        Create a Cloud Storage bucket to store your image file using the following command:

        me@myinst:~$ gsutil mb gs://<bucket-name>

        Note: You must enable billing for your Google Cloud Storage account before you can use the Google Cloud Storage service. To understand how Google Cloud Storage bills for usage, see their pricing table.


      3. Copy your file to your new bucket.
        me@myinst:~$ gsutil cp /tmp/<your-image>.image.tar.gz gs://<bucket-name>
    2. Add your customized image to the Images collection

      Log out of your instance by typing exit. Add your saved image to your project's Images list using gcutil on your local machine, as described below. When you add your image to the Images collection, you must choose an image name that is unique among all images in the project and specify the URI of your image in Google Cloud Storage, using the URI scheme shown below.

      $ gcutil --project=<project-id> addimage <image-name> <image-uri>

      Important Flags and Parameters:

      --project=<project-id>
      [Required] The project ID where this image should live.
      <image-name>
      [Required] A name for this new image. Your <image-name> must start with a lowercase letter, be followed by 0-62 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
      <image-uri>
      [Required] A fully-qualified Google Cloud Storage URI in either of the following formats:
      • gs://<bucket-name>/<your-image>.image.tar.gz
      • https://storage.googleapis.com/<bucket-name>/<your-image>.image.tar.gz

      You can check whether the image is ready to use by running gcutil getimage, which returns the image state. Once the image is READY, you can use it for your instances.
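
      For example, with a hypothetical bucket named my-bucket and image file my-image.image.tar.gz, the add-and-verify flow might look like:

      $ gcutil --project=my-project addimage my-image gs://my-bucket/my-image.image.tar.gz
      $ gcutil --project=my-project getimage my-image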

    3. Use your custom image

      Whenever you want to start a new instance with your custom image, specify the --image flag with the name that you assigned your image in the previous step:

      $ gcutil --project=my-project addinstance --image=<image-name> <instance-1> <instance-2> <instance-n>

      This differs slightly from using an unaltered Debian or CentOS image. You only need to specify the image name for your custom image when adding an instance, as opposed to the fully-qualified image name you would use for an unaltered Debian or CentOS image from the debian-cloud or centos-cloud projects.

      If you choose to create multiple instances with one command, gcutil uses the same image for all of the instances and each instance will have its own unique IP address.

    Page: instances

    An instance is a virtual machine hosted on Google's infrastructure. Instances can run the Linux images provided by Google, or run any customized versions of these images. You can also build and run images of other operating systems.

    Google Compute Engine also lets you specify the machine properties of your instance, such as the number of CPUs and the amount of RAM, based on the machine type you use.

    Instances are a per-zone resource.

    Overview

    At the core of Google Compute Engine is the Instance resource. Every instance is a virtual machine that is customizable and manageable by you; there are few restrictions on how you use your instance.

    You can perform basic instance configuration and management using either the gcutil tool, the Google Developers Console, or the REST API, but to perform any advanced configuration, you must ssh into the instance. By default, all instances support ssh capability for the instance creator, and optionally for other users.

    As an instance creator you have full root privileges on any instances you have started. An instance administrator can also add system users using standard Linux commands.

    To start an instance using gcutil, call the gcutil addinstance command. This reserves the instance, starts it, and then runs any startup scripts that you specify. When you add an instance, you specify a set of properties for it, such as the desired hardware, image, zone, and optionally any startup scripts that you want to run. You can check the status of an instance by running gcutil --project=<project-id> getinstance <instance-name> and looking for a status of RUNNING. Currently, adding and removing an instance is the same as starting and stopping an instance; you cannot add an instance to a project without starting it, or remove it without stopping it.

    A project holds one or more instances but an instance can be a member of one and only one project. When you start an instance, you must specify which project and zone it should belong to. When you stop an instance, it is removed from the project. Project information can be viewed using the gcutil tool, but you must use the Google Developers Console to create and manage projects.

    Instances can communicate with other instances in the same network and with the rest of the world through the Internet. A Network object is restricted to a single project, and cannot communicate with other Network objects. See Networks and Communication for more information about network communication to and from an instance.

    Useful gcutil commands:

    • gcutil listinstances
    • gcutil getinstance
    • gcutil addinstance
    • gcutil deleteinstance
    $ gcutil --project=my-project getinstance myinstance
    +------------------------+--------------------------------------------------------------------------------------------+
    |        property        |                                       value                                                |
    +------------------------+--------------------------------------------------------------------------------------------+
    | name                   | myinstance                                                                                 |
    | description            |                                                                                            |
    | creation-time          | 2013-01-18T11:15:54.054-08:00                                                              |
    | machine                | n1-standard-1                                                                              |
    | image                  | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD                              |
    | zone                   | <zone>                                                                                     |
    | tags-fingerprint       | 42WmSpB8rSM=                                                                               |
    | metadata-fingerprint   | 42WmSpB8rSM=                                                                               |
    | status                 | RUNNING                                                                                    |
    | status-message         |                                                                                            |
    |                        |                                                                                            |
    | disk                   | 0                                                                                          |
    |   type                 | PERSISTENT                                                                                 |
    |   mode                 | READ_WRITE                                                                                 |
    |   deviceName           | pd1                                                                                        |
    |   source               | http://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks/<disk> |
    |                        |                                                                                            |
    | network-interface      |                                                                                            |
    |   network              | default                                                                                    |
    |   ip                   | 00.000.000.000                                                                             |
    |   access-configuration | External NAT                                                                               |
    |     type               | ONE_TO_ONE_NAT                                                                             |
    |     external-ip        | 000.000.00.000                                                                             |
    |                        |                                                                                            |
    | metadata               |                                                                                            |
    | fingerprint            | 42WmSpB8rSM=                                                                               |
    |                        |                                                                                            |
    | tags                   |                                                                                            |
    | fingerprint            | 42WmSpB8rSM=                                                                               |
    +------------------------+--------------------------------------------------------------------------------------------+

    Creating and Starting an Instance

    An instance is created and started in a single step. Google Compute Engine does not currently allow you to add an instance to a project without starting it. An instance takes a few moments to start up, so you must check the instance status to learn when it is actually running. See Checking Instance Status for more information.

    Each instance must have a root persistent disk that stores the instance's root filesystem. You can create this root persistent disk when you create your instance, or create it separately and attach it to the instance.

    Start an Instance Using gcutil

    To start an instance using gcutil, run the following command.

    gcutil --project=<project-id> addinstance <instance-name> <other-flags>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The ID of the project in which to add the instance. This flag is required for every gcutil command except help, unless you have previously specified the --cache_flag_values flag to store your project ID information.
    instance-name
    [Required] The name to assign to the instance. You will use this name to ssh into the instance, or attach resources to it. The name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    --external_ip_address=<external-ip>
    [Optional] Specifies which externally visible IP address, if any, to assign to this instance. See details on supported flag values. If you omit this flag, an external ephemeral IP address will be assigned to the instance.
    --machine_type=<machine-type>
    [Optional] What machine type to host the instance. Call gcutil listmachinetypes for a list and description of available host hardware, or gcutil getmachinetype for details about a specific hardware configuration. If you do not specify a machine type, gcutil prompts you to select one from a list.
    --image=<fully-qualified-image-name>
    [Optional] The name of the image to install, from the project's images collection. Call gcutil listimages for a list of available images, or gcutil getimage for a complete description of a particular image. If you do not specify an image, gcutil prompts you to select one from a list.
    --metadata=startup-script-url and --metadata_from_file=startup-script
    [Optional] These flags are used to specify any startup scripts that you want associated with this instance. If the server is restarted due to a failure, these scripts will be rerun. See Specifying a Startup Script for more information.
    --disk=<disk-name>[,deviceName=<alias-name>,mode=<mode>,boot]
    [Optional] A persistent disk to associate with this instance. If you do not provide this flag, a root persistent disk will be created for your instance, with the same name as the instance (e.g. if an instance is named newinstance, the associated root persistent disk created in this manner will also be named newinstance). Non-root persistent disks must be created and attached to an instance separately. Alternatively, you can also attach a persistent disk to a running instance. You can choose to attach a disk in read-only or read-write mode; by default, disks are attached in read-write mode. For details about creating a persistent disk resource for an Instance, see Creating and Using Disks.
    --[no]auto_delete_boot_disk
    [Optional] Determines if the root persistent disk for this instance should be deleted automatically when the instance is deleted. The default is false.
    --service_account_scope=<scopes>
    [Optional] Set up this instance to use a service account to access other Google services.
    Other flags
    To see all the flags that you can set with addinstance, run the gcutil help addinstance command.

    Tip: To save your flag values for quick reuse, use the --cache_flag_values=True and optional --cached_flags_file=<somefile> flags in your startup call. This lets you reuse frequently used flags.

    The following example shows starting a new Instance named vm1.

    $ gcutil --project=my-project addinstance vm1 --auto_delete_boot_disk
    ... select a zone, machine type, and image...
    INFO: Waiting for insert of vm1. Sleeping for 3s.
    INFO: Waiting for insert of vm1. Sleeping for 3s.
    
    Table of resources:
    
    +--------------+---------------+---------------------------------------------------------------+---------+----------------+--------------+----------------+-------------+------------------+----------------------+---------+----------------+
    |     name     |  machine-type |                            image                              | network |   network-ip   | external-ip  |     disks      |    zone     | tags-fingerprint | metadata-fingerprint | status  | status-message |
    +--------------+---------------+---------------------------------------------------------------+---------+----------------+--------------+----------------+-------------+------------------+----------------------+---------+----------------+
    |     vm1      | n1-standard-1 | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD | default | 00.000.000.000 | 000.000.0.00 |                |    <zone>   | 42WmSpB8rSM=     | 42WmSpB8rSM=         | RUNNING |                |
    +--------------+---------------+---------------------------------------------------------------+---------+----------------+--------------+----------------+-------------+------------------+----------------------+---------+----------------+
    
    Table of operations:
    
    +------------------------------------------------+--------+----------------+----------------------------------+----------------+-------+---------+
    |                      name                      | status | status-message |              target              | operation-type | error | warning |
    +------------------------------------------------+--------+----------------+----------------------------------+----------------+-------+---------+
    | operation-1358529117225-4d3933570d191-0fc3d1da | DONE   |                |     <zone>/instances/vm1         |     insert     |       |         |
    +------------------------------------------------+--------+----------------+----------------------------------+----------------+-------+---------+
    

    Starting an Instance in the API

    To start an instance in the API, construct a request with a source image:

      body = {
        'name': NEW_INSTANCE_NAME,
        'machineType': <fully-qualified-machine-type-url>,
        'networkInterfaces': [{
          'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
           }],
          'network': <fully-qualified-network-url>
        }],
        'disk': [{
           'autoDelete': 'true',
           'boot': 'true',
           'type': 'PERSISTENT',
           'initializeParams': {
              'diskName': 'my-root-disk',
              'sourceImage': '<fully-qualified-image-url>',
           }
         }]
      }

    By providing the following in your request:

    'disk': [{
      'autoDelete': 'true',
      'boot': 'true',
      'type': 'PERSISTENT',
      'initializeParams': {
        'diskName': 'my-root-disk',
        'sourceImage': '<fully-qualified-image-url>',
      }
    }]

    Compute Engine also creates a root persistent disk with the source image you specify. You can only provide initializeParams for a root persistent disk, and can only provide it once per instance creation request. Note that the autoDelete flag also indicates to Compute Engine that the root persistent disk should be automatically deleted when the instance is deleted.

    When you are using the API to specify a root persistent disk:

    • You can only specify the boot field on one disk. You may attach multiple persistent disks but only one can be the root persistent disk.
    • You must attach the root persistent disk as the first disk for that instance.
    • When the source field is specified, you cannot specify the initializeParams field, because the two conflict: providing a source indicates that the root persistent disk already exists, whereas specifying initializeParams indicates that Compute Engine should create the root persistent disk.

    If you're using the API client library, you can start a new instance using the instances().insert function. Here is a snippet from the Python client library:

    def addInstance(auth_http, gce_service):
      # Construct the request body
      body = {
        'name': NEW_INSTANCE_NAME,
        'machineType': <fully-qualified-machine_type_url>,
        'networkInterfaces': [{
          'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
           }],
          'network': <fully-qualified-network_url>
        }],
        'disk': [{
           'autoDelete': 'true',
           'boot': 'true',
           'type': 'PERSISTENT',
           'initializeParams': {
              'diskName': 'my-root-disk',
              'sourceImage': '<fully-qualified-image-url>',
           }
        }]
      }
    
      # Create the instance
      request = gce_service.instances().insert(
           project=PROJECT_ID, body=body, zone=DEFAULT_ZONE)
      response = request.execute(auth_http)
      response = _blocking_call(gce_service, auth_http, response)
    
      print response

    You could also make a request to the API directly by sending a POST request to the instances URI with the same request body:

    from json import dumps

    def addInstance(http, listOfHeaders):
      url = 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances'
    
      bodyContent = {
        'name': NEW_INSTANCE_NAME,
        'machineType': <fully-qualified-machine_type_url>,
        'networkInterfaces': [{
          'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
           }],
          'network': <fully-qualified-network_url>
        }],
        'disk': [{
           'autoDelete': 'true',
           'boot': 'true',
           'type': 'PERSISTENT',
           'initializeParams': {
              'diskName': 'my-root-disk',
              'sourceImage': '<fully-qualified-image-url>',
           }
         }]
      }
    
      # Send the instance description as a JSON request body.
      resp, content = http.request(uri=url, method="POST", body=dumps(bodyContent), headers=listOfHeaders)
      print resp
      print content

    Checking Instance Status

    When you first create an instance, check the instance status to see whether it is running before you expect it to respond to requests. It can take a few seconds after the initial addinstance request before your instance is fully up and running. You can also check the status of an instance at any time after instance creation.

    To check the status of an instance, call gcutil listinstances or gcutil getinstance <instance-name>.

    The following states are returned:

    • PROVISIONING - Resources are being reserved for the instance. The instance isn't running yet.
    • STAGING - Resources have been acquired and the instance is being prepared for launch.
    • RUNNING - The instance is booting up or running. You should be able to ssh into the instance soon, though not immediately, after it enters this state.
    • STOPPING - The instance is being stopped either due to a failure, or the instance being shut down. This is a temporary status and the instance will move to either PROVISIONING or TERMINATED.
    • TERMINATED - The instance either failed for some reason or was shutdown. This is a permanent status, and the only way to repair the instance is to delete and recreate it.

    Example

    $ gcutil --project=my-project getinstance myinstance
    +------------------------+--------------------------------------------------------------------------------------------+
    |        property        |                                       value                                                |
    +------------------------+--------------------------------------------------------------------------------------------+
    | name                   | myinstance                                                                                 |
    | description            |                                                                                            |
    | creation-time          | 2013-01-18T11:15:54.054-08:00                                                              |
    | machine                | n1-standard-1                                                                              |
    | image                  | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD                              |
    | zone                   | <zone>                                                                                     |
    | tags-fingerprint       | 42WmSpB8rSM=                                                                               |
    | metadata-fingerprint   | 42WmSpB8rSM=                                                                               |
    | status                 | RUNNING                                                                                    |
    | status-message         |                                                                                            |
    |                        |                                                                                            |
    | disk                   | 0                                                                                          |
    |   type                 | PERSISTENT                                                                                 |
    |   mode                 | READ_WRITE                                                                                 |
    |   deviceName           | pd1                                                                                        |
    |   source               | http://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks/<disk> |
    |                        |                                                                                            |
    | network-interface      |                                                                                            |
    |   network              | default                                                                                    |
    |   ip                   | 00.000.000.000                                                                             |
    |   access-configuration | External NAT                                                                               |
    |     type               | ONE_TO_ONE_NAT                                                                             |
    |     external-ip        | 000.000.00.000                                                                             |
    |                        |                                                                                            |
    | metadata               |                                                                                            |
    | fingerprint            | 42WmSpB8rSM=                                                                               |
    |                        |                                                                                            |
    | tags                   |                                                                                            |
    | fingerprint            | 42WmSpB8rSM=                                                                               |
    +------------------------+--------------------------------------------------------------------------------------------+

    Connecting to an Instance Using ssh

    By default, you can always connect to an instance using ssh. This is useful so you can manage and configure your instances beyond the basic configuration enabled by gcutil or the REST API. The easiest way to ssh into an instance is to run gcutil --project=<project-id> ssh <instance-name> from your local computer. For example, the following command ssh'es into an instance named myinst:

    $ gcutil --project=my-project ssh myinst

    You can use ssh directly without using the gcutil wrapper, if you want, as described in "Using standard ssh", although this is usually less convenient.

    Setting Up Your ssh Keys

    Before you can access your instance using ssh, you need to set up your ssh keys. Every time you run gcutil addinstance or gcutil ssh, gcutil checks for key files in the location listed below. If you already set up ssh keys when you added another instance in the same project, you won't need to set up keys again. If there are no existing key files, gcutil will:

    1. Prompt you for a passphrase to encrypt your new keys

      As part of the key generation process, gcutil asks for a passphrase to encrypt the keys:

      WARNING: You don't have a public ssh key for Google Compute Engine. Creating one now...
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      

      You are not required to provide a passphrase but if you don't, the keys on your local machine will be unencrypted. We recommend providing a passphrase to encrypt your keys.

    2. Generate your keys using ssh-keygen

      gcutil creates local files to store your public and private key, and copies your public key to the project. By default, gcutil stores ssh keys in the following files on your local computer:

      • $HOME/.ssh/google_compute_engine - Your private key
      • $HOME/.ssh/google_compute_engine.pub - Your public key

      Once gcutil copies your public key to the project, the new key will be added to the VM shortly thereafter. If you want to use existing keys that are stored in a different location, specify the files using the --private_key_file and --public_key_file flags. Note that your instance must have these additional keys installed.

      Note: You need editor permissions on the containing project in order to set public keys for that project. For more information, review the documentation for user access permissions.

    You can also install multiple public keys during instance creation by calling:

    gcutil addinstance --authorized_ssh_keys=username1:/path/to/keyfile1,username2:/path/to/keyfile2,...
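
    For example, to start an instance that two hypothetical users, alice and bob, can both ssh into (the usernames and key paths are placeholders):

    gcutil --project=my-project addinstance vm1 \
      --authorized_ssh_keys=alice:/home/alice/.ssh/id_rsa.pub,bob:/home/bob/.ssh/id_rsa.pub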

    If you want to call the API directly, you can specify multiple keys using the metadata field, with each user separated by a newline character:

    "metadata": {
          "kind": "compute#metadata",
          "items": [
            {
              "key": "sshKeys",
              "value": "user1:ssh-rsa 123456787..\nuser2:ssh-rsa abcdef0123..\nuser3:ssh-rsa 456789123"
            },...
    

    Being able to add multiple keys is useful for adding multiple users to an instance at startup time, but it also limits the set of ssh keys to be exactly those you specified.

    That's it! You have set up ssh access to your instances. Note that this process sets up ssh keys for your local computer, but if you want to ssh in from more than one computer, you should read ssh'ing from different clients.

    ssh'ing from different clients

    If you are not using the gcutil ssh command to connect to your instance or if you are connecting to the instance from a different computer, then you must copy your local public/private keypair to the machine from which you want to ssh into that instance. If you forget your passphrase, or are working on a machine that does not have a copy of your keys, you will not be able to ssh into the instance.

    ssh'ing Using standard ssh

    You can use standard ssh rather than gcutil ssh with the following syntax:

    ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o  StrictHostKeyChecking=no \
     -i <key-file> -o LogLevel=QUIET -A  -p 22 <username>@<ipaddress>

    Important Flags and Parameters:

    key-file
    [Required] The file where the keys are stored on the computer e.g. ~/.ssh/google_compute_engine
    username
    [Required] The username to log in that instance. Typically, this is the username of the local user running gcutil.
    ipaddress
    [Required] The external IP address of the instance.

    ssh'ing From One Instance to Another

    If your instance doesn't have an externally-visible IP address, you can still ssh into it by ssh'ing into an instance on the network with an external address, then from there ssh'ing into the internal-only instance from your externally-visible instance. You might need to do this if you've started an instance without an external IP address either intentionally or by accident.

    To ssh from one instance to another:

    1. Start ssh-agent using the following command to manage your keys for you:

      eval `ssh-agent`

    2. Call ssh-add to load the gcutil keys from your local computer into the agent, and use them for all ssh commands for authentication:

      ssh-add ~/.ssh/google_compute_engine

    3. Log into an instance with an external IP address:

      gcutil --project=<project-id> ssh <instance-name>

    4. From this externally-addressable instance, you can now log into any other instance on the same network by calling ssh <instance-name>
    5. When you are done, call exit repeatedly to log out of each instance in turn.
    6. You can continue to simply ssh into your internal instances through your external instance until you close your command window, which will close the ssh-agent context.

    Example

    me@local:~$ eval `ssh-agent`
    Agent pid 17666
    me@local:~$ ssh-add ~/.ssh/google_compute_engine
    Identity added: /home/user/.ssh/google_compute_engine (/home/user/.ssh/google_compute_engine)
    me@local:~$ gcutil --project=my-project ssh myinst
    INFO: Running command line: ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/user/.ssh/google_compute_engine -o LogLevel=QUIET -A -p 22 user@255.255.122.238 --
    Linux myinst 2.6.39-gcg-yyyymmdd1612 #18 SMP .... x86_64 GNU/Linux
    ....
    me@myinst:~$ ssh myinst2
    ....
    1 package can be updated.
    0 updates are security updates.
    me@myinst2:~$ exit
    logout
    Connection to myinst2 closed.
    me@myinst:~$
    

    Root Access and Instance Administrators

    For security reasons, the standard Google images do not provide the ability to ssh in directly as root. The instance creator and any users that were added using the --authorized_ssh_keys flag or the metadata sshKeys value are automatically administrators on the instance, with the ability to run sudo without requiring a password.

    Although it is not recommended, advanced users can modify /etc/ssh/sshd_config and restart sshd to change this policy.

    Setting Instance Scheduling Options

    By default, Google Compute Engine automatically manages the scheduling decisions for your instances. For example, if your instance is terminated due to a system or hardware failure, Compute Engine automatically restarts that instance. Instance scheduling options let you change this automatic behavior.

    Maintenance Behavior

    Periodically, Google performs scheduled maintenance on the infrastructure underlying your running instance. Compute Engine automatically moves your instance away from the underlying infrastructure before it is taken offline for the scheduled maintenance. For more information, see Scheduled Maintenance.

    Compute Engine has two ways to move your instances during maintenance events:

    • Live migrate

      Note: This option is only available for instances in US and Asia zones.

      If you choose this option, Google Compute Engine will automatically migrate your instance away from the maintenance event, and your instance remains running during the migration. Your instance may experience a short period of decreased performance, although generally most instances should not notice any difference. This option is ideal for instances that require constant uptime, and can tolerate a short period of decreased performance.

      When Google Compute Engine migrates your instance, it reports a system event that is published to your list of zone operations. You can review this event by running gcutil listzoneoperations, by viewing the list of operations in the Google Developers Console, or through an API request. The event will appear with the following text:

      compute.instances.migrateOnHostMaintenance
    • Terminate and Restart

      If you choose this option, Google Compute Engine will signal your instance to shut down, wait for a short period of time for your instance to shut down cleanly, terminate the instance, and restart it away from the maintenance event. This option is ideal for instances that demand constant, maximum performance, and your overall application is built to handle instance failures or reboots.

      When Google Compute Engine terminates and reboots your instances, it reports a system event that is published to the list of zone operations. You can review this event by running gcutil listzoneoperations, by viewing the list of operations in the Google Developers Console, or through an API request. The event will appear with the following text:

      compute.instances.terminateOnHostMaintenance

      You should also be aware that if you select this option, your instances may experience more frequent terminate-and-reboot events, even outside the normal two week maintenance windows, and plan accordingly. When your instance reboots, it will use the same persistent boot disk as before.

    Persistent disks are preserved in both migrate or terminate cases. For the terminate and reboot case, your persistent disk will be briefly detached from the instance while it is being rebooted, and then reattached once the instance is restarted.

    See How to Set Scheduling Options below for the default maintenance behavior values and also how to change this setting on existing instances.

    Automatic Restart

    You can set up Google Compute Engine to automatically restart an instance if it is taken offline by a system event, such as a hardware failure or scheduled maintenance event, using the automaticRestart setting. This setting does not apply if the instance is taken offline through a user action, such as calling sudo shutdown.

    When Google Compute Engine automatically restarts your instance, it reports a system event that is published to the list of zone operations. You can review this event by running gcutil listzoneoperations, by viewing the list of operations in the Google Developers Console, or through an API request. The event will appear with the following text:

    compute.instances.automaticRestart 

    How to Set Scheduling Options

    All instances are configured with default values for the onHostMaintenance and automaticRestart settings. The default for onHostMaintenance is migrate, in which case Google Compute Engine migrates the instance away from scheduled maintenance events.

    If you want to manually set scheduling options of an instance, you can do so when first creating the instance or after the instance is created, using the setScheduling method.

    Specifying scheduling options during instance creation

    To specify the maintenance behavior and automatic restart settings of a new instance in gcutil, use the --on_host_maintenance and --automatic_restart flags:

    gcutil --project=<project-id> addinstance <instance-name> ... [--on_host_maintenance=<behavior>] [--automatic_restart=<restart>]

    Important flags and parameters:

    <project-id>
    [Required] The project ID for this request.
    <instance-name>
    [Required] The name of the instance for which you would like to update the scheduling options.
    --on_host_maintenance=<behavior>
    [Optional] Sets the new maintenance behavior for this instance. Valid values are migrate or terminate. migrate indicates that Google Compute Engine should migrate this instance away from scheduled maintenance windows. terminate means that Google Compute Engine should terminate and reboot this instance in response to scheduled maintenance windows. To make sure Google Compute Engine automatically restarts the instance, set the --automatic_restart flag. Note that instances may be restarted multiple times and could be restarted outside of scheduled maintenance windows.
    --automatic_restart=<restart>
    [Optional] Indicates if Google Compute Engine should restart an instance if the instance is taken offline due to any system event (such as a maintenance event).

    This is a boolean parameter, with either true or false values. This feature won't apply for situations where an instance is taken offline by a user (such as someone running sudo shutdown).
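
    For example, to create an instance (named my-vm here, a placeholder) that should be terminated around maintenance windows and automatically restarted afterwards, you might run:

    gcutil --project=my-project addinstance my-vm --on_host_maintenance=terminate --automatic_restart=true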

    In the API, make a POST request to:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances

    with the onHostMaintenance and automaticRestart parameters as part of the request body:

    {
      "kind": "compute#instance",
      "name": "vm1",
      "description": "Front-end for real-time ingest; don't migrate.",
    ...
      // User options for influencing this Instance’s life cycle.
      "scheduling": {
        "onHostMaintenance": "migrate",
        "automaticRestart": "true" # specifies that Google Compute Engine should automatically restart your instance
      }
    }

    For more information, see the instances reference documentation.

    Updating scheduling options for an existing instance

    To update the scheduling options of an instance, use the gcutil setscheduling command with the same parameters and flags used in the instance creation command above:

    gcutil --project=<project-id> setscheduling <instance-name> [--on_host_maintenance=<behavior>] [--automatic_restart=<restart>]

    In the API, you can make a request to the following URL:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>/setScheduling

    The body of your request must contain the new value for the scheduling options:

    {
      "onHostMaintenance": "migrate"
      "automaticRestart": "true" # specifies that Google Compute Engine should automatically restart your instance
    }
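
    In the client libraries, the equivalent call is the instances().setScheduling method. The following is an illustrative sketch in the same style as the other Python snippets in this document; it assumes the PROJECT_ID, ZONE_NAME, and INSTANCE_NAME variables are defined as in the earlier examples:

    def setInstanceScheduling(auth_http, gce_service):
      # New scheduling options: migrate during maintenance and restart automatically.
      body = {
        "onHostMaintenance": "migrate",
        "automaticRestart": True
      }
      request = gce_service.instances().setScheduling(project=PROJECT_ID,
        zone=ZONE_NAME, instance=INSTANCE_NAME, body=body)
      response = request.execute(auth_http)

      print response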

    For more information, see the instances : setScheduling reference documentation.

    Installing packages and Configuring an Instance

    The instance creator has administrator privileges on any instance she adds to a project, and is automatically added to the sudoers list.

    When you are logged into an instance as the administrator, you can install packages and configure the instance the same way you would a normal Linux box. For example, you can install Apache, as shown here:

    rufus@myinst:~$ sudo apt-get update
    rufus@myinst:~$ sudo apt-get install apache2
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
    [...]
    

    You can move files between your local computer and instance using gcutil push and gcutil pull as described in Copying Files To/From an Instance.

    Note that your machine needs access to the Internet to be able to run apt-get. This means that it needs either an external IP address, or access to an Internet proxy.

    Scheduled maintenance notice

    It is possible to detect when a maintenance event is about to happen through an instance's virtual machine metadata server. A special metadata attribute, maintenance-event, updates its value shortly before the start of a maintenance event and again at its end, allowing you to detect when a scheduled maintenance event is going to happen and when it ends. You can use this information to help automate any scripts or commands you want to run at those times.

    For more information, see the Scheduled maintenance notice section on the Metadata server documentation.
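
    As an illustration, the following Python sketch polls the maintenance-event attribute. It assumes the v1 metadata endpoint and the Metadata-Flavor request header described in the metadata server documentation, and that the attribute reads NONE when no event is pending; adjust the URL and header if your images use an older metadata server version:

    import time
    import urllib2

    MAINTENANCE_URL = ('http://metadata.google.internal/computeMetadata/'
                       'v1/instance/maintenance-event')

    def getMaintenanceEvent():
      # Every request to the metadata server must carry this header.
      request = urllib2.Request(MAINTENANCE_URL,
                                headers={'Metadata-Flavor': 'Google'})
      return urllib2.urlopen(request).read()

    while True:
      if getMaintenanceEvent() != 'NONE':
        # A maintenance event is about to start; run your own commands here.
        print 'Maintenance event pending'
      time.sleep(10)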

    Copying Files To/From an Instance

    Use gcutil push to send files to an instance from your local machine, and gcutil pull to copy files from your instance to your local machine. Note that when calling push, the target directory must exist on the instance.

    Note: An instance must have an external IP address to be able to push or pull files to/from it.

    gcutil --project=<project-id> push <instance-name> <local-file> <remote-target-path>
    gcutil --project=<project-id> pull <instance-name> <file1> <file2> ... <local-directory>

    Example

    # Copy local file named readme.txt up to my instance named myinst
    $ gcutil --project=my-project push myinst readme.txt /home/user/.
    $ gcutil --project=my-project ssh myinst
    
    The programs included with the Debian GNU/Linux system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
    permitted by applicable law.
    
    user@myinst$ ls /home/user
    readme.txt

    Detecting if you are running in Google Compute Engine

    It is common for systems to want to detect whether they are running within a specific cloud environment. To enable this, you can query the metadata server for a specific header that indicates you are running within Compute Engine. For more information, see Detecting if you are running in Compute Engine.

    For images v20131120 and newer, you can request more explicit confirmation using the dmidecode tool, which can read the DMI/SMBIOS information directly. Run the following command; the dmidecode tool should print "Google" to indicate that you are running in Google Compute Engine:

    my@myinst:~$ sudo dmidecode -s bios-vendor | grep Google
    Google
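
    Alternatively, you can check the metadata server from code. The short Python sketch below assumes the metadata server hostname metadata.google.internal and the Metadata-Flavor response header it returns; on a machine outside Compute Engine the hostname typically fails to resolve and the function returns False:

    import urllib2

    def runningOnComputeEngine():
      try:
        response = urllib2.urlopen('http://metadata.google.internal', timeout=1)
        # The metadata server identifies itself with this response header.
        return response.info().getheader('Metadata-Flavor') == 'Google'
      except IOError:
        return False

    print runningOnComputeEngine()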

    Enabling network traffic

    By default, all new instances have the following connections enabled:

    • Traffic between instances in the same network, over any port and any protocol.
    • Incoming ssh connections (port 22) from anywhere.

    Any other incoming traffic to an instance is blocked. You must explicitly add firewall rules to the network to enable other connections. See Connecting to an instance using ssh to learn how to ssh in to your instance, or Networks and Firewalls to learn how instances communicate with each other over IP, and how to set up an externally accessible HTTP connection to an instance.

    Using instance tags

    You can assign tags to your instances to help coordinate or group instances that share common traits or attributes. For example, if there are several instances that perform the same task, such as serving a large website, you might consider tagging these instances with a shared word or term. Instance tags are also used by networks and firewalls to identify which instances a firewall rule applies to. Tags are also reflected in the metadata server, so you can use them for applications running on your instances.

    To assign tags to a running instance using gcutil, use the gcutil setinstancetags command:

    gcutil --project=<project-id> setinstancetags <instance-name> --tags=<tag-1,tag-2,..,tag-n> --fingerprint=<current-fingerprint-hash>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID of the instance.
    <instance-name>
    [Required] The name of the instance to set tags.
    --tags=<tag-1,tag-2,tag-3>
    [Required] A list of tags to apply to the instance. Updating tags is done in a batch request, so you must update the entire list of tags, even if you are only modifying a single tag. For example, if you have a list of tags with values mustard,ketchup,romaine, and you want to remove ketchup, you must respecify the entire tag list, without the ketchup tag:
    --tags=mustard,romaine
    --fingerprint=<current-fingerprint-hash>
    [Required] The current fingerprint hash of the tags. You can grab the fingerprint hash by performing a gcutil getinstance <instance-name> command, and copying the value of the tags-fingerprint field. The fingerprint you supply must match the current fingerprint on the instance. This provides optimistic locking, so that only one user may update the tag list at any one time.

    The following example demonstrates how to update the instance tags for an instance:

    1. Get information about this instance and note the tags fingerprint value:
      $ gcutil --project=myproject getinstance myinstance
      +------------------------+--------------------------------------------------------------------------------------------+
      |        property        |                                       value                                                |
      +------------------------+--------------------------------------------------------------------------------------------+
      | name                   | myinstance                                                                                 |
      | description            |                                                                                            |
      | creation-time          | 2013-01-18T11:15:54.054-08:00                                                              |
      | machine                | n1-standard-1                                                                              |
      | image                  | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD                              |
      | zone                   | <zone>                                                                                    |
      | tags-fingerprint       | 42WmSpB8rSM=                                                                               |
      | metadata-fingerprint   | 42WmSpB8rSM=                                                                               |
      | status                 | RUNNING                                                                                    |
      | status-message         |                                                                                            |
      |                        |                                                                                            |
      | disk                   | 0                                                                                          |
      |   type                 | PERSISTENT                                                                                 |
      |   mode                 | READ_WRITE                                                                                 |
      |   deviceName           | pd1                                                                                        |
      |   source               | http://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks/<disk> |
      |                        |                                                                                            |
      | network-interface      |                                                                                            |
      |   network              | default                                                                                    |
      |   ip                   | 00.000.000.000                                                                             |
      |   access-configuration | External NAT                                                                               |
      |     type               | ONE_TO_ONE_NAT                                                                             |
      |     external-ip        | 000.000.00.000                                                                             |
      |                        |                                                                                            |
      | metadata               |                                                                                            |
      | fingerprint            | 42WmSpB8rSM=                                                                               |
      |                        |                                                                                            |
      | tags                   |                                                                                            |
      | fingerprint            | kFyURcqFcPg=                                                                               |
      |                        | cheese                                                                                     |
      |                        | mustard                                                                                    |
      |                        | romaine                                                                                    |
      +------------------------+--------------------------------------------------------------------------------------------+
      
    2. Next, use that fingerprint to update the tags:
      $ gcutil --project=myproject setinstancetags --tags=mustard,romaine --fingerprint=kFyURcqFcPg= myinstance
      INFO: Waiting for setTags of instance myinstance. Sleeping for 3s
      ....
    3. Run gcutil getinstance again to see your updated tags:
      $ gcutil --project=myproject getinstance myinstance
      +------------------------+--------------------------------------------------------------------------------------------+
      |        property        |                                       value                                                |
      +------------------------+--------------------------------------------------------------------------------------------+
      | name                   | myinstance                                                                                 |
      | description            |                                                                                            |
      | creation-time          | 2013-01-18T11:15:54.054-08:00                                                              |
      | machine                | n1-standard-1                                                                              |
      | image                  | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD                              |
      | zone                   | <zone>                                                                                     |
      | tags-fingerprint       | 42WmSpB8rSM=                                                                               |
      | metadata-fingerprint   | 42WmSpB8rSM=                                                                               |
      | status                 | RUNNING                                                                                    |
      | status-message         |                                                                                            |
      |                        |                                                                                            |
      | disk                   | 0                                                                                          |
      |   type                 | PERSISTENT                                                                                 |
      |   mode                 | READ_WRITE                                                                                 |
      |   deviceName           | pd1                                                                                        |
      |   source               | http://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks/<disk> |
      |                        |                                                                                            |
      | network-interface      |                                                                                            |
      |   network              | default                                                                                    |
      |   ip                   | 00.000.000.000                                                                             |
      |   access-configuration | External NAT                                                                               |
      |     type               | ONE_TO_ONE_NAT                                                                             |
      |     external-ip        | 000.000.00.000                                                                             |
      |                        |                                                                                            |
      | metadata               |                                                                                            |
      | fingerprint            | 42WmSpB8rSM=                                                                               |
      |                        |                                                                                            |
      |                        |                                                                                            |
      | tags                   |                                                                                            |
      | fingerprint            | wHsRKmfQcPg=                                                                               |
      |                        | mustard                                                                                    |
      |                        | romaine                                                                                    |
      +------------------------+--------------------------------------------------------------------------------------------+
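
    The same update can be made through the API client libraries with the instances().setTags method. This is a sketch in the style of the other snippets in this document; it assumes PROJECT_ID, ZONE_NAME, and INSTANCE_NAME are defined, and the fingerprint must be the current tags fingerprint read beforehand with getinstance or instances().get:

    def setInstanceTags(auth_http, gce_service):
      # Send the complete tag list along with the current tags fingerprint.
      body = {
        "items": ["mustard", "romaine"],
        "fingerprint": "kFyURcqFcPg="
      }
      request = gce_service.instances().setTags(project=PROJECT_ID,
        zone=ZONE_NAME, instance=INSTANCE_NAME, body=body)
      response = request.execute(auth_http)

      print response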

    Resetting an instance

    You can perform a hard reset on an instance by using the gcutil resetinstance command or by making a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>/reset

    Performing a reset on your instance is similar to pressing the reset button on your computer. Note that your instance remains in RUNNING mode through the reset.

    To reset your instance using gcutil:

    gcutil --project=<project-id> resetinstance <instance-name>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    <instance-name>
    [Required] The name of the instance to reset.

    To reset your instance using the client libraries, construct a request to the instances().reset method:

    def resetInstance(auth_http, gce_service):
      request = gce_service.instances().reset(project=PROJECT_ID, zone=ZONE_NAME, instance=INSTANCE_NAME)
      response = request.execute(auth_http)
    
      print response

    For more information on this method, see the instances().reset reference documentation.

    Shutting down an instance

    When you shut down an instance, the following things will happen:

    • The instance will disappear but persistent disk data will be retained until the persistent disk is explicitly deleted.
    • If you perform gcutil deleteinstance and your instance still has resources attached to it (such as a persistent disk, or an ephemeral or static IP address), those resources are released so that they can be used by other instances. Additionally, if you have a root persistent disk attached to the instance, gcutil prompts you to decide whether you want to keep the persistent disk or delete it with the instance.
    • In contrast, if you run sudo poweroff, Google Compute Engine keeps these resources attached to the instance, even if the instance is marked as TERMINATED. You must release these resources manually or run gcutil deleteinstance to release them.

    There are two ways to shut down an instance:

    • gcutil deleteinstance (Recommended) - Shuts down the instance and removes it from the list of instances. Requires at least write rights on the project.
      gcutil --project=<project-id> deleteinstance <instance-name-1> [<instance-name-2> <instance-name-3> ...  <instance-name-n>]
    • sudo poweroff - Called when ssh'ed into an instance. Does not require any project rights, but you must still call gcutil deleteinstance at some point afterward, or else the instance will continue to be listed as a member of the project, but will not be accessible.
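
    In the API client libraries, the equivalent of gcutil deleteinstance is the instances().delete method. A minimal sketch, assuming the same PROJECT_ID, ZONE_NAME, and INSTANCE_NAME variables used in the other snippets:

    def deleteInstance(auth_http, gce_service):
      # Shuts down the instance and removes it from the list of instances.
      request = gce_service.instances().delete(project=PROJECT_ID,
        zone=ZONE_NAME, instance=INSTANCE_NAME)
      response = request.execute(auth_http)

      print response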

    Restarting an instance

    There are two valid ways to restart a currently running instance: manually, through gcutil or from within the instance, or automatically, using the automaticRestart setting.

    Manually restarting an instance

    To manually restart an instance, use one of the following two methods:

    • sudo reboot - Called when ssh'ed into an instance. Wipes the memory and re-initializes the instance with the original metadata, image, and persistent disks. It will not pick up any updated versions of the image, and the instance will retain the same ephemeral IP address.
    • gcutil deleteinstance followed by gcutil addinstance - This is a completely destructive restart, and will initialize the instance with any information passed into gcutil addinstance. You can then select any new images or other resources you'd like to use. The restarted instance will probably have a different IP address. This method potentially swaps the physical machine hosting the instance.

    Listing all running instances

    You can see a list of all instances in a project by calling gcutil listinstances.

    $ gcutil --project=<project-id> listinstances
    
    +------------+---------------+---------------------------------------------------------------+---------+----------------+----------------+--------------------+--------+---------+----------------+
    |    name    | machine-type  |                             image                             | network |   network-ip   |  external-ip   |       disks        |  zone  | status  | status-message |
    +------------+---------------+---------------------------------------------------------------+---------+----------------+----------------+--------------------+--------+---------+----------------+
    | myinstance | n1-standard-1 | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD | default | 00.000.000.000 | 000.000.00.000 |  <zone>/disks/pd1  | <zone> | RUNNING |                |
    +------------+---------------+---------------------------------------------------------------+---------+----------------+----------------+--------------------+--------+---------+----------------+
    

    By default, gcutil provides an aggregate listing of all your resources across all available zones. If you want a list of resources from just a single zone, provide the --zone flag in your request.

    $ gcutil --project=<project-id> listinstances --zone=<zone>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    --zone=<zone>
    [Required] The zone from which you want to list instances.

    In the API, you need to make requests to two different methods to get a list of aggregate resources or a list of resources within a zone. To make a request for an aggregate list, make a GET request to that resource's aggregatedList URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/aggregated/instances

    In the client libraries, make a request to the instances().aggregatedList function:

    def listAllInstances(auth_http, gce_service):
      request = gce_service.instances().aggregatedList(project=PROJECT_ID)
      response = request.execute(auth_http)
    
      print response

    To make a request for a list of instances within a zone, make a GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances

    In the API client libraries, make an instances().list request:

    def listInstances(auth_http, gce_service):
      request = gce_service.instances().list(project=PROJECT_ID,
        zone='<zone>')
      response = request.execute(auth_http)
    
      print response

    Handling instance failures

    Unfortunately, individual instances will experience failures from time to time. This can be due to a variety of reasons, including scheduled maintenance, scheduled zone outages (in zones that do not yet support transparent maintenance), unexpected outages, hardware error, or other system failures. As a way to mitigate such situations, you should use persistent disks and back up your data routinely.

    If an instance fails, it will be restarted automatically, with the same root persistent disk, metadata, and instance settings as when it failed. To control the automatic restart behavior for an instance, see How to Set Scheduling Options. However, if an instance is terminated for a zone maintenance window it will stay terminated and will not be restarted when the zone exits the maintenance window.

    In general, you should design your system to be robust enough that the failure of a single instance should not be catastrophic to your application. For more information, see Designing Robust Systems.

    Creating a Custom Image

    You can create a custom instance image by customizing a provided image, and then load it onto new instances as they are brought up. See Creating and applying a custom image for details.

    Page: instances-and-network

    Every instance is assigned to a single network that defines how that instance can communicate with other instances, other networks, or the Internet. Instances do not need to be in the same zone to share a network. By default, all projects have a default network named default. New instances are automatically assigned to this network, which has the following default firewalls:

    • All traffic between all instances in the same network is allowed, over any port and any protocol
    • Incoming ssh connections (port 22) from anywhere are allowed

    Any other incoming traffic to an instance is blocked, unless you add additional firewall rules that allow more connections. Additionally, you can assign arbitrary tags to an instance, such as "frontend" or "sqlserver", and use those as permitted firewall sources/targets. This is a more flexible and scalable system than specifying the source by an IP address. You can assign instance tags when you create your instance using the --tags flag.

    For more information about networks, see Networks and Firewalls.


    Instance IP Addresses

    Instances support one or two IP addresses: a required network address, which is used for communicating within the network, and an optional external IP address, which is used to communicate with callers outside the network. Addresses can be assigned at instance creation time or after an instance has been created. Refer to Assigning IP Addresses to Existing Instances to see how to assign external IP addresses after the instance has been created.

    You can see the network and external IP addresses assigned to your instances by calling gcutil listinstances or gcutil getinstance.

    Instances can address each other by network or external address, or by instance name (DNS). Here is a comparison of these three techniques:

    • By instance name - The network transparently resolves an instance name to a network address for instances within that network. Addressing by instance names rather than network address is useful because network addresses can change each time an instance is deleted and recreated. Also, communication is free between instances in the same zone. However, instance names are addressable only within the same network, or when calling gcutil ssh from your local computer.
    • By network address - You can address packets to an instance by network address, but network addresses can change when an instance is restarted, unless you explicitly assign a network address to your instance. The network address is only addressable from another computer within the same network. For more information, see Network Addresses.
    • By external IP address - Only use an external IP address if you must communicate with an instance on another network. Packets sent to an instance using an external address are billed as external traffic, even if the sender is in the same network. For more information, see External IP Addresses.

    The following example demonstrates how to use gcutil ssh and ping to address your instances by instance name, external IP address, and network IP address.

    $ gcutil --project=myproject listinstances
    +-------------+-----------------+----------------------------------------------------------------+---------+---------------+-----------------+-------+--------+-------+---------+---------------+
    |     name    |     machine     |                              image                             | network |   network_ip  |   external_ip   | disks |  zone  | tags  |  status | statusMessage |
    +-------------+-----------------+----------------------------------------------------------------+---------+---------------+-----------------+-------+--------+-------+---------+---------------+
    | instance-1  | n1-standard-2   | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD  | default | 10.35.61.220  | 203.135.113.022 |       | <zone> |       | RUNNING |               |
    | instance-2  | n1-standard-2-d | projects/debian-cloud/global/images/debian-6-squeeze-vYYYYMMDD | default | 10.39.222.105 | 255.255.122.238 |       | <zone> |       | RUNNING |               |
    +-------------+-----------------+----------------------------------------------------------------+---------+---------------+-----------------+-------+--------+-------+---------+---------------+
    
    $ gcutil --project=myproject addfirewall icmpfirewall --allowed=icmp # allow ICMP traffic
    
    $ ping -c 1 203.135.113.022
    PING 203.135.113.022 (203.135.113.022) 56(84) bytes of data.
    64 bytes from 203.135.113.022: icmp_seq=1 ttl=55 time=17.5 ms
    
    --- 203.135.113.022 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 17.568/17.568/17.568/0.000 ms
    
    $ gcutil --project=myproject ssh instance-1
    Welcome to GCE Linux 12.04 LTS (...)
    Running a Google Compute Engine VM Instance
     * Documentation: https://developers.google.com/compute/
     * You are running on an EPHEMERAL root disk, which is NOT PERSISTENT.
       For persistent data, use Persistent Disks:
         https://developers.google.com/compute/docs/disks#persistentdisks
    
    
    The programs included with this system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    This software comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    
    Last login: Tue Aug 30 11:16:43 ...
    rufus@instance-1:~$ ping -c 1 instance-2
    PING instance-2.my-project.google (10.39.222.105) 56(84) bytes of data.
    64 bytes from instance-2.my-project.google (10.39.222.105): icmp_seq=1 ttl=64 time=1.00 ms
    
    --- instance-2.my-project.google ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 1.005/1.005/1.005/0.000 ms
    
    rufus@instance-1:~$ ping -c 1 10.39.222.105
    PING 10.39.222.105 (10.39.222.105) 56(84) bytes of data.
    64 bytes from 10.39.222.105: icmp_seq=1 ttl=64 time=1.47 ms
    
    --- 10.39.222.105 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 1.473/1.473/1.473/0.000 ms
    

    Network Addresses

    Every instance has a network address that is unique to the network. This address is automatically assigned by Google Compute Engine when you create the instance.

    Google Compute Engine resolves instance names to network addresses when called within the instance's network. For instance, from a VM running inside Google Compute Engine, you can address other instances using ping, curl, or any other program that expects a DNS name.

    Configuring a static network address

    Although Compute Engine doesn't allow creating an instance with a user-defined local IP address, you can use a combination of routes and an instance's --can_ip_forward setting to add a local IP address as a static network address that maps to your desired virtual machine instance.

    For example, if you want to assign 10.1.1.1 specifically as a network address to a virtual machine instance, you can create a static network route that sends traffic from 10.1.1.1 to your instance, even if the instance's network address assigned by Compute Engine doesn't match your desired network address.

    Use the following instructions to configure your network address using a static network route.

    Note: The following example assumes that your project just has the default network.

    1. Choose an IP address that doesn't belong to any network in your project.

      For this example, we are using 10.1.1.1.

    2. Create a new virtual machine instance and enable IP forwarding.

      By default, Compute Engine won't deliver packets whose destination IP address is different than the IP address of the instance receiving the packet. To disable this destination IP check, you can enable IP forwarding for the instance.

      Other than the zone, you can choose the other attributes of your instance, such as the machine type and image.

      $ gcutil --project=<project-id> addinstance my-configurable-instance --can_ip_forward --zone=us-central1-a
    3. Create a static network route to direct packets destined for 10.1.1.1 to your instance.
      $ gcutil --project=<project-id> addroute ip-10-1-1-1 --next_hop_instance=us-central1-a/instances/my-configurable-instance 10.1.1.1/32
    4. Add a new virtual interface to your instance.

      These instructions are available for Debian and CentOS instances. Select your desired operating system for the right instructions.

      Debian
      1. ssh into your instance.
        $ gcutil --project=<project-id> ssh my-configurable-instance
      2. Append the following lines to the /etc/network/interfaces file.
        # Change to root first
        user@my-configurable-instance:~$ sudo su -
        
        # Append the following lines
        root@my-configurable-instance:~$ cat <<EOF >>/etc/network/interfaces
        iface eth0 inet static
        address 10.1.1.1
        netmask 255.255.255.255
        EOF
      3. Restart the network.
        root@my-configurable-instance:~$ /etc/init.d/networking restart
      CentOS 6
      1. ssh into your instance.
        $ gcutil --project=<project-id> ssh my-configurable-instance
      2. Add the following lines to a new file named /etc/sysconfig/network-scripts/ifcfg-eth0:0
        # Change to root first
        user@my-configurable-instance:~$ sudo su -
        
        # Add the following lines
        root@my-configurable-instance:~$ cat <<EOF >>/etc/sysconfig/network-scripts/ifcfg-eth0:0
        DEVICE="eth0:0"
        BOOTPROTO="static"
        IPADDR=10.1.1.1
        NETMASK=255.255.255.255
        ONBOOT="yes"
        EOF
      3. Bring up your new interface.
        root@my-configurable-instance:~$ ifup eth0:0
    5. Check that your virtual machine instance interface is up by pinging 10.1.1.1 from inside your instance.
      user@my-configurable-instance:~$ ping 10.1.1.1

      You can also try pinging the interface from another instance in your project.
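
    The static route created in step 3 can also be created through the API client libraries with the routes().insert method. The following is an illustrative sketch only; the route name, network URL, next hop instance URL, and priority are placeholder values that you would replace with your own:

    def addStaticRoute(auth_http, gce_service):
      body = {
        "name": "ip-10-1-1-1",
        # Route all traffic destined for 10.1.1.1 to the forwarding instance.
        "destRange": "10.1.1.1/32",
        "network": ("https://www.googleapis.com/compute/v1/projects/"
                    "<project-id>/global/networks/default"),
        "nextHopInstance": ("https://www.googleapis.com/compute/v1/projects/"
                            "<project-id>/zones/us-central1-a/instances/"
                            "my-configurable-instance"),
        "priority": 1000
      }
      request = gce_service.routes().insert(project=PROJECT_ID, body=body)
      response = request.execute(auth_http)

      print response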

    External Addresses

    You can assign an optional externally visible IP address to specific instances. Outside callers can address a specific instance by external IP if the network firewalls allow it. Only instances with an external address can send and receive traffic from outside the network.

    Note: Google Compute Engine bills for external IP address usage, even for traffic between instances in the same network. Any traffic through an external IP address will be billed. An instance must have an external address in order to make calls outside the network. A firewall rule is not required to allow outgoing calls, because packets sent by an instance are never blocked unless they use an explicitly blocked port. Return traffic on any open connection is always allowed until the connection reaches 10 minutes of inactivity, at which point the connection is considered abandoned and return traffic is blocked again.

    Google Compute Engine supports two types of externally visible IP addresses: static IPs, which are assigned to a project until explicitly released, and ephemeral IPs, which are assigned for the lifetime of the instance. Once an instance is terminated, its ephemeral IP is released back into the general Google Compute Engine pool and becomes available for use by other projects.

    Ephemeral IP Addresses

    When you create an instance using gcutil, your instance is automatically assigned an ephemeral external IP address. You can choose not to assign an external IP address in gcutil by providing the --external_ip_address=none flag. If you are creating an instance using the API, you need to explicitly provide an accessConfig specification to request an external IP address. If you have an existing instance that doesn't have an external IP address, but would like to assign one, see Assigning IP Addresses to Existing Instances. For information on defining an accessConfig for your instance, see the API reference.

    Reserved IP Addresses

    If you need a static IP address that is assigned to your project until you explicitly release it, you can reserve a new IP address, or promote an ephemeral IP address to a static one, using gcutil reserveaddress or by making a POST request to the appropriate regional Addresses collection. Static IP addresses are a regional resource, and you must select a region where your IP address will live.

    Static IP addresses can only be used by one resource at a time. You cannot assign an IP address to multiple resources. There is also no way to tell whether an IP address is static or ephemeral after it has been assigned to a resource, except to compare it against the list of static IP addresses assigned to the project. Use gcutil listaddresses to see a list of static IP addresses available to the project.

    Note: An external IP address can only be used by a single instance through the access config, but it is possible that your instance may receive traffic from multiple forwarding rules, which may serve external IP addresses that are different than the IP address assigned to the instance. In summary, a virtual machine instance can:

    • Have one external IP address attached using the instance's accessConfig. Packets for this IP will have their destination IP translated to the instance's internal address.
    • Have any number of external IP addresses referencing the instance through forwarding rules and target pools.

    For more information, review the load balancing documentation.

    Reserving an IP Address

    To reserve a static IP address or promote an ephemeral address using gcutil:

    gcutil --project=<project-id> reserveaddress --region=<region> [--source_address=<ephemeral-address>] <address-name>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    --region=<region>
    [Required] The region for this IP address.
    --source_address=<ephemeral-address>
    [Optional] The ephemeral IP address to promote to a static IP address. If you do not specify this flag, Google Compute Engine automatically assigns a static IP address.
    <address-name>
    [Required] The name to use for this address. For example, myownip.

    With the API client libraries, use the addresses().insert function, passing in the region for the request and a request body that contains the name of the address and, if desired, the ephemeral IP address to promote. The following is a snippet from the Python client library:

    def reserveIpAddress(auth_http, gce_service):
      addressName = 'testaddress'
      # Ephemeral IP address to promote; omit the "address" field below to
      # reserve a brand new static IP address instead.
      ipAddressToPromote = 'x.x.x.x'
      body = {
        "name": addressName,
        "address": ipAddressToPromote
      }
      request = gce_service.addresses().insert(project=PROJECT_ID,
        region='<region>', body=body)
      response = request.execute(auth_http)
    
      print response

    To make a request to the API directly, make a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/addresses

    Your request body should contain the following (omit the address field if you are not promoting an ephemeral IP address):

    body = {
      "name": "<address-name>",
      "address": "<address-to-promote>"
    }

    If this is a new static IP address, you can assign it to an instance using the --external_ip_address flag during instance creation:

    gcutil --project=<project-id> addinstance myinst --external_ip_address=x.x.x.x

    where x.x.x.x is your static IP address. You can also assign an IP address to an existing instance. If you promoted an ephemeral IP address, it remains attached to its current instance.

    Listing Reserved IP Addresses

    To list the reserved IP addresses, run gcutil listaddresses or make a GET request to the API:

    gcutil --project=<project-id> listaddresses

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this request.

    With the API client libraries, use the addresses().list function. Here's a snippet from the Python client library that lists addresses from one region:

    def listIpAddresses(auth_http, gce_service):
      request = gce_service.addresses().list(project=PROJECT_ID,
        region='<region>')
      response = request.execute(auth_http)
    
      print response

    To make a request to the API directly, perform a GET request to the following URI with an empty request body:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/addresses

    To list all addresses in all regions, make a request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/aggregated/addresses

    In the client libraries, use the addresses().aggregatedList function:

    def listAllIpAddresses(auth_http, gce_service):
      request = gce_service.addresses().aggregatedList(project=PROJECT_ID)
      response = request.execute(auth_http)
    
      print response

    Getting Information about an IP Address

    To get information about a single IP address, use gcutil getaddress or make a GET request to the API.

    To use gcutil getaddress:

    gcutil --project=<project-id> getaddress --region=<region> <address-name>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    --region=<region>
    [Required] The region where this IP address lives.
    <address-name>
    [Required] The IP address name to get.

    With the API client libraries, use the addresses().get method and specify the address for which you want to get more information:

    def getIpAddress(auth_http, gce_service):
      addressName = "testaddress"
      request = gce_service.addresses().get(project=PROJECT_ID,
        region='<region>', address=addressName)
      response = request.execute(auth_http)
    
      print response

    To make a request to the API directly, make a GET request with an empty request body to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/addresses/<address-name>

    Releasing an IP Address

    To release an IP address, use the gcutil releaseaddress command or send a DELETE request to the API.

    gcutil --project=<project-id> releaseaddress --region=<region> <address-name>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    --region=<region>
    [Required] The region where this IP address resides.
    <address-name>
    [Required] The IP address to release.

    With the API client libraries, use the addresses().delete method, providing the address name and the region:

    def releaseAddress(auth_http, gce_service):
      addressName = "testaddress"
      request = gce_service.addresses().delete(project=PROJECT_ID,
        region='<region>', address=addressName)
      response = request.execute(auth_http)
    
      print response

    To make a request to the API directly, make a DELETE request to the following URI with an empty request body:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/addresses/<address-name>

    Assigning IP Addresses to Existing Instances

    gcutil allows you to assign external IP addresses to existing instances using the access configuration portion of the instance's network interface. External IP addresses, both ephemeral and static, can be assigned to existing instances in this manner.

    To add an external IP address to an existing instance, use the gcutil addaccessconfig command:

    gcutil --project=<project-id> addaccessconfig  <instance-name> --access_config_name=<name> --access_config_nat_ip=<ip-address> --access_config_type=<access-type> --network_interface_name=<interface-name>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    --access_config_name=<name>
    [Required] The name of the access configuration to add. The default value is External NAT.
    --access_config_nat_ip=<ip-address>
    [Required] The external NAT IP of the access configuration. The value may be a static external IP that is reserved by the project or an ephemeral IP address. The default value is ephemeral, which specifies that Google Compute Engine should assign an available unreserved ephemeral external IP.
    --access_config_type=<access-type>
    [Required] The access configuration type. The default value is ONE_TO_ONE_NAT. In the current release, ONE_TO_ONE_NAT is the only supported type.
    --network_interface_name=<interface-name>
    [Required] The name of the instance's network interface to add the access configuration. The default value is nic0.
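
    In the API client libraries, the same operation is exposed as the instances().addAccessConfig method. A minimal sketch, assuming PROJECT_ID, ZONE_NAME, and INSTANCE_NAME are defined as in the earlier snippets; omit the natIP field to have an ephemeral external IP address assigned automatically:

    def addAccessConfig(auth_http, gce_service):
      body = {
        "name": "External NAT",
        "type": "ONE_TO_ONE_NAT",
        # A reserved static IP; omit this field for an ephemeral address.
        "natIP": "x.x.x.x"
      }
      request = gce_service.instances().addAccessConfig(project=PROJECT_ID,
        zone=ZONE_NAME, instance=INSTANCE_NAME, networkInterface='nic0',
        body=body)
      response = request.execute(auth_http)

      print response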

    To delete an access configuration from an instance's network interface, use the gcutil deleteaccessconfig command:

    gcutil --project=<project-id> deleteaccessconfig  <instance-name> --access_config_name=<name> --network_interface_name=<interface-name>

    Important Flags and Parameters:

    --access_config_name=<name>
    [Required] The name of the access configuration to delete. The default value is External NAT.
    --network_interface_name=<interface-name>
    [Required] The name of the network interface from where to delete the access configuration. The default value is nic0.

    Page: load-balancing

    Google Compute Engine offers server-side load balancing so you can distribute incoming network traffic across multiple virtual machine instances. Load balancing is useful because it helps you support heavy traffic and provides redundancy to avoid failures.

    Google Compute Engine load balancing allows you to create forwarding rule objects that match and direct certain types of traffic to a load balancer. A target pool object controls the load balancer and contains the set of instances to send traffic to. Target pools also contain a health check object to determine the health of instances in the corresponding target pool. For example, a forwarding rule may match TCP traffic to port 80 on a public IP address of 1.2.3.4. Any traffic destined to that IP, protocol, and port is matched by this forwarding rule and directed to the virtual machine instances in a target pool associated with the forwarding rule.

    A target pool contains a set of virtual machine instances that handle incoming traffic from a forwarding rule. Google Compute Engine routes any traffic destined for an IP and port that is being served by a forwarding rule to the instances in the specified target pool, and the traffic is then spread across all instances in the pool. A target pool can only contain virtual machines in the same region, and that region must be the same as that of any forwarding rules that reference it. Because instances in a target pool can be spread across different zones in the same region, a single target pool can provide protection from planned or unplanned zone outages.

    The following diagram is an example load balancing setup, where there are two forwarding rule objects that direct traffic to a single target pool.


    Load distribution algorithm

    By default, to distribute traffic to instances, Google Compute Engine picks an instance based on a hash of the source IP and port and the destination IP and port. Incoming TCP connections are spread across instances and each new connection may go to a different instance. All packets for a connection are directed to the same instance until the connection is closed.

    It is also possible to change the default hash method that determines how traffic is distributed. For more information, see sessionAffinity.


    Prerequisites

    In order to use load balancing, you must:

    1. Use an image that is dated 2013-07-23 or newer for your instances.

      If your instances are using an older image, you must recreate your instances using a newer image, or, if you're using a custom image and do not want to use a new image, you will need to explicitly add the IP address from your forwarding rules to your operating system's virtual interface so it can accept connections destined to that load balanced IP. You will need to do this for every instance that should receive load balanced traffic on that IP address. For more details, see using an older image section.

    2. Use gcutil version 1.8.3 or newer.

      Download the newest version if you have not already.

    Quickstart

    Note: This quickstart assumes you are familiar with bash.

    This quickstart guide provides step-by-step instructions on how to set up the Google Compute Engine load balancing service to dispatch traffic to a few virtual machines. In addition to this quickstart, you can also start using the service with the Google Developers Console.

    The rest of this quickstart discusses how to:

    1. Set up your instances to run Apache.
    2. Configure the load balancing service.
    3. Send some traffic to your instances.

    This quickstart demonstrates each of these steps by setting up an Apache server on a virtual machine instance that is used by a target instance to handle traffic from a forwarding rule.

    Set Up Your instances to run Apache

    To begin this quickstart, we're going to create some instances with Apache installed.

    1. Create some startup scripts for your new instances.

      Depending on your operating system, your startup script contents might differ:

      • If you are planning to use Debian on your instances, run the following command:
        me@local:~$ echo "apt-get update && apt-get install -y apache2 && hostname > /var/www/index.html" > \
        $HOME/lb_startup.sh
      • If you are planning to use CentOS for your instances, run the following command:
        me@local:~$ echo "yum -y install httpd && service httpd restart && hostname > /var/www/html/index.html" > \
        $HOME/lb_startup.sh
    2. Create a tag for your future virtual machines, so you can apply a firewall to them later:
      me@local:~$ TAG="www-tag"
    3. Choose a zone and a region for your virtual machines:
      me@local:~$ ZONE="us-central1-b"
      me@local:~$ REGION="us-central1"
    4. Create three new virtual machines:
      me@local:~$ gcutil --project=<project-id> addinstance www1 www2 www3 --zone=$ZONE \
                         --tags=$TAG --metadata_from_file=startup-script:$HOME/lb_startup.sh
    5. Create a firewall rule to allow external traffic to these virtual machine instances:
      me@local:~$ gcutil --project=<project-id> addfirewall www-firewall --target_tags=$TAG --allowed=tcp:80

    Now that your virtual machine instances are prepared, you can start setting up your load balancing configuration. You can verify that your instances are running by sending a curl request to each instance's external IP address:

    me@local:~$ curl <ip-address>

    To get the IP addresses of your instances, use gcutil listinstances.

    Configure the load balancing service

    Next, you will set up the load balancing service.

    1. Add an HTTP health check object.

      For this example, you are going to use the default settings for the health check mechanism, but you can customize this on your own.

      me@local:~$ gcutil --project=<project-id> addhttphealthcheck basic-check
    2. Add a target pool in the same region as your virtual machine instances.

      The region of your virtual machine instances is us-central1. You are also going to use your newly-created health check object for this target pool.

      me@local:~$ gcutil --project=<project-id> addtargetpool www-pool --region=$REGION --health_checks=basic-check \
                         --instances=$ZONE/instances/www1,$ZONE/instances/www2,$ZONE/instances/www3

      You can also add or remove instances after you've created the target pool. Instances within a target pool must belong to the same region but can be spread out across different zones in the same region. For example, you can have instances in zone us-central1-a and instances in zone us-central1-b in one target pool because they are in the same region, us-central1.

    3. Add a forwarding rule serving on behalf of an external IP and port range, that points to your target pool.

      You can choose to use a reserved static IP address or an ephemeral IP address assigned by Google Compute Engine for your forwarding rules. For this example, you are going to use an ephemeral IP address by leaving out the --ip flag in the command:

      me@local:~$ gcutil --project=<project-id> addforwardingrule www-rule --region=$REGION --port_range=80 --target=www-pool

    Now that you have configured your load balancing service, you can start sending traffic to the forwarding rule and watch the traffic be dispersed to different instances.

    Send traffic to your instances

    To start sending traffic to your forwarding rule, you need to grab the forwarding rule's external IP address:

    me@local:~$ gcutil --project=<project-id> listforwardingrules
    +----------+-------------+-------------+---------+----------+------------+----------------------------------+
    |   name   | description |   region    |   ip    | protocol | port-range |              target              |
    +----------+-------------+-------------+---------+----------+------------+----------------------------------+
    | www-rule |             | us-central1 | 1.2.3.4 | TCP      | 80-80      | us-central1/targetPools/www-pool |
    +----------+-------------+-------------+---------+----------+------------+----------------------------------+
    

    Assign the external IP address for the www-rule forwarding rule to a shell variable:

    me@local:~$ IP="1.2.3.4"

    Next, use curl to access the IP address. The response will alternate randomly among the three instances. If your response is initially unsuccessful, wait 30 seconds or so for the configuration to be fully loaded and for your instances to be marked healthy before trying again:

    me@local:~$ while true; do curl -m1 $IP; done

    Forwarding Rules

    Forwarding rules work in conjunction with target pools and target instances to support load balancing and protocol forwarding features. To use load balancing and protocol forwarding, you must create a forwarding rule that directs traffic to specific target pools (for load balancing) or target instances (for protocol forwarding). It is not possible to use either of these features without a forwarding rule.

    Forwarding Rule resources live in the Forwarding Rules collection. Each forwarding rule matches a particular IP address, protocol, and optionally, port range to a single target pool or target instance. When traffic is sent to an external IP address that is served by a forwarding rule, the forwarding rule directs that traffic to the corresponding target pool or target instances. You can create up to 50 forwarding rule objects per project.

    A forwarding rule object contains the following properties:

    • name - [Required] The name of the forwarding rule. The name must be unique in this project, from 1-63 characters long and match the regular expression: [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    • region - [Required] The region where this forwarding rule resides. For example:
      "region" : "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region-name>"
    • IPAddress - [Optional] A single IP address this forwarding rule matches to. All traffic directed to this IP address will be handled by this forwarding rule. The IP address must be a static reserved IP address or, if left empty, an ephemeral IP address is assigned to the forwarding rule upon creation. For example:
      "IPAddress" : "1.2.3.4"
    • target [Required] - The Target Pool or Target Instance resource that this forwarding rule directs traffic to. Must be a fully-qualified URL such as:
      https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<target-pool-name>

      For target instances, the URL will look like:

      https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/targetInstances/<target-instance-name>

      The target pool or target instance must exist before you create your forwarding rule and must reside in the same region as the forwarding rule.

    • IPProtocol - [Optional] The type of protocol that this forwarding rule matches. Valid values are TCP, UDP, ESP, AH, and SCTP.

      If left empty, this field will default to TCP. Also note that certain protocols can only be used with target pools or target instances:

      • If you use ESP, AH, or SCTP, you must specify a target instance. It is not possible to specify a target pool when using these protocols.
      • If you use TCP or UDP, you can specify either a target pool or a target instance.
    • portRange - [Optional] A single port or a single contiguous port range, from low to high, that this forwarding rule matches. Packets of the specified protocol sent to these ports will be forwarded on to the appropriate target pool or target instance. If this field is left empty, the forwarding rule matches traffic for all ports of the specified protocol. For example:
      "portRange" : "200-65535"

      You can only specify this field for TCP, UDP, and SCTP protocols.
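
    Putting these properties together, a complete forwarding rule resource might look like the following sketch (the project ID, names, and IP address are placeholders):

    {
      "name": "example-forwarding-rule",
      "region": "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/us-central1",
      "IPAddress": "1.2.3.4",
      "IPProtocol": "TCP",
      "portRange": "80",
      "target": "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/us-central1/targetPools/<target-pool-name>"
    }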

    Adding a Forwarding Rule

    To add a new forwarding rule, you can use the gcutil addforwardingrule command or create a POST request to the ForwardingRules collection. To create a forwarding rule using gcutil:

    gcutil --project=<project-id> addforwardingrule <forwarding-rule-name> \
           [--description=<description-text>] --ip=<external-ip-address> \
           [--target_pool=<target-pool> --target_instance=<target-instance> \
           --protocol=<protocol>] [--port_range=<port-range>]  \
           --region=<region>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this forwarding rule.
    <forwarding-rule-name>
    [Required] The name for this forwarding rule.
    --description=<description-text>
    [Optional] The description for this forwarding rule.
    --ip=<external-ip-address>
    [Optional] An external static IP that this forwarding rule serves on behalf of. This can be a reserved static IP, or if left blank or unspecified, the default is to assign an ephemeral IP address. Multiple forwarding rules can use the same IP address as long as their port range and protocol do not overlap. For example, --ip="1.2.3.106".
    --target_pool=<target-pool>
    [Optional] You must specify only one of --target_pool or --target_instance. Specifies a target pool that handles traffic from this forwarding rule. The target pool must already exist before you can use it for a forwarding rule and it must reside in the same region as the forwarding rule. This is specifically for load balancing. For example: 'mytargetpool'.
    --target_instance=<target-instance>
    [Optional] You must specify only one of --target_pool or --target_instance. Specifies a target instance that handles traffic from this forwarding rule. This is specifically for protocol forwarding.
    --protocol=<protocol>
    [Optional] The protocol that this forwarding rule is handling. If left empty, this field will default to TCP. Also note that certain protocols can only be used with target pools or target instances:
    • If you use ESP, AH, or SCTP, you must specify a target instance. It is not possible to specify a target pool when using these protocols.
    • If you use TCP or UDP, you can specify either a target pool or a target instance.
    --port_range=<port-range>
    [Optional] The port or range of ports for which this forwarding rule is responsible. Packets of the specified protocol sent to these ports will be forwarded on to the appropriate target pool or target instance. If this field is left empty, the forwarding rule sends traffic for all ports of the specified protocol. This can be a single port or a range of ports. You can only set this field for the TCP, UDP, and SCTP protocols.
    --region=<region>
    [Required] The region where this forwarding rule should reside. For example, us-central1. This must be the same region as the target pool.
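
    For example, assuming a target pool named www-pool already exists in us-central1, a forwarding rule for HTTP traffic might be created with a command like the following (names and the IP address are placeholders):

    gcutil --project=my-project addforwardingrule www-rule \
           --region=us-central1 --ip="1.2.3.106" --protocol=TCP \
           --port_range=80 --target_pool=www-pool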

    To add a forwarding rule using the API, perform a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules

    Your request body should contain the following fields:

     bodyContent = {
       "name": <name>,
       "IPAddress": <external-ip>,
       "IPProtocol": <tcp-or-udp>,
       "portRange": <port-range>,
       "target": <uri-to-target-resource>
     }
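
    For example, you could issue the request with curl. This is a sketch that assumes you have a valid OAuth 2.0 access token in the TOKEN environment variable and a target pool named www-pool in us-central1:

    curl -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
         -d '{"name": "www-rule", "IPProtocol": "TCP", "portRange": "80", "target": "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/us-central1/targetPools/www-pool"}' \
         "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/us-central1/forwardingRules"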

    Listing Forwarding Rules

    To get a list of forwarding rules, use gcutil listforwardingrules.

    gcutil --project=<project-id> listforwardingrules [--region=<region>]

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project for which you want to list your forwarding rules.
    --region=<region>
    [Optional] The region for which you want to list forwarding rules. If not specified, all forwarding rules across all regions are listed.

    In the API, make an empty GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules

    Getting Forwarding Rules

    To get information about a single forwarding rule, use gcutil getforwardingrule.

    gcutil --project=<project-id> getforwardingrule <forwarding-rule-name> --region=<region>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project for which you want to get your forwarding rule.
    --region=<region>
    [Required] The region where the forwarding rule resides.
    <forwarding-rule-name>
    [Required] The forwarding rule name.

    In the API, make an empty GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules/<forwarding-rule-name>

    Updating the Forwarding Rule Target

    If you have already created a forwarding rule but want to change the target pool that the forwarding rule is using, you can do so using the gcutil setforwardingruletarget command:

    gcutil --project=<project-id> setforwardingruletarget <forwarding-rule-name> \
           --region=<region> [--target_pool=<target-pool-name> --target_instance=<target-instance-name>]

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project for this request.
    <forwarding-rule-name>
    [Required] The forwarding rule name.
    --region=<region>
    [Required] The region where the forwarding rule resides.
    --target_pool=<target-pool>
    [Optional] You must specify only one of --target_pool or --target_instance. Specifies a target pool to add or update. The target pool must already exist before you can use it for a forwarding rule and it must reside in the same region as the forwarding rule. This is specifically for load balancing. For example: 'mytargetpool'.
    --target_instance=<target-instance>
    [Optional] You must specify only one of --target_pool or --target_instance. Specifies a target instance to add or update for this forwarding rule. This is specifically for protocol forwarding.

    In the API, make a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules/<forwardingRule>/setTarget

    Your request body should contain the URL to the target instance or target pool resource you want to set. For instance, for target pools, the URI format should be:

    body = {
      "target": "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<target-pool-name>"
    }

    Deleting Forwarding Rules

    To delete a forwarding rule, use the gcutil deleteforwardingrule command:

    gcutil --project=<project-id> deleteforwardingrule [-f] <forwarding-rule-name> [--region=<region>]

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID where this forwarding rule lives.
    -f, --force
    [Optional] Bypass the confirmation prompt to delete this forwarding rule.
    <forwarding-rule-name>
    [Required] The forwarding rule to delete.
    --region=<region>
    [Optional] The region of this forwarding rule. If you do not specify this flag, gcutil performs an extra API request to determine the region for your forwarding rule.

    To delete a forwarding rule from the API, make a DELETE request to the following URI, with an empty request body:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules/<forwarding-rule-name>

    Target pools

    Note: If you intend for your target pool to contain a single virtual machine instance, you should consider using the Protocol Forwarding feature instead.

    A Target Pool resource defines a group of instances that should receive incoming traffic from forwarding rules. When a forwarding rule directs traffic to a target pool, Google Compute Engine picks an instance from these target pools based on a hash of the source IP and port and the destination IP and port. See the Load Distribution Algorithm for more information about how traffic is distributed to instances.

    Target pools can only be used with forwarding rules that handle TCP and UDP traffic. For all other protocols, you must create a target instance. You must create a target pool before you can use it with a forwarding rule. Each project can have up to 50 target pools. A target pool is made up of the following properties:

    name
    [Required] The name of this target pool. The name must be unique in this project, from 1-63 characters long and match the regular expression: [a-z]([-a-z0-9]*[a-z0-9])?, which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    description
    [Optional] A user-defined description of this target pool.
    region
    [Required] The fully-qualified URL to the region where this target pool should live. This should be the same region where your desired instances will live. For example:
    "region" : "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region-name>"
    healthChecks[ ]
    [Optional] An optional list of health checks for this target pool. See Health Checking for more information.
    instances[ ]
    [Required] A list of instance URLs that should handle traffic for this target pool. All instances must reside in the same region as the target pool, but instances can belong to different zones within a single region. For example:
    "instances" : [
      "https://www.googleapis.com/compute/v1/projects/<project-id>/zones/us-central1-a/instances/<instance-name>",
      "https://www.googleapis.com/compute/v1/projects/<project-id>/zones/us-central1-b/instances/<instance-name>"
    ]
    sessionAffinity
    [Optional] Describes the method used to select a backend virtual machine instance. You can only set this value during the creation of the target pool. Once set, you cannot modify this value. By default, a 5-tuple method is used and the default value for this field is NONE. The 5-tuple hash method selects a backend based on:
    • Layer 4 Protocol (e.g. TCP, UDP)
    • Source / Destination IP
    • Source / Destination Port

    5-tuple hashing provides a good distribution of traffic across many virtual machines. However, if you want to 'stick' a client to a single backend virtual machine instance, you can specify one of the following options instead:

    CLIENT_IP_PROTO
    3-tuple hashing, which includes the source / destination IP and network protocol
    CLIENT_IP
    2-tuple hashing, which includes the source / destination IP

    In general, a 3-tuple or 2-tuple method provides better session affinity than the default 5-tuple method, at the cost of a possibly less even distribution of traffic.

    Caution: If a large portion of your clients are behind a proxy server, you should not use the sessionAffinity feature because it would force all clients behind the proxy to be pinned to a specific backend.

    backupPool
    [Optional] A fully-qualified URL to another target pool resource. You must also define failoverRatio to use this feature. If the ratio of healthy virtual machines in your primary target pool falls below the failoverRatio, Google Compute Engine sends traffic to your backup pool. You can only provide one backup pool per primary target pool. The backup pool must be in the same region as the primary target pool. If the ratio of healthy instances in your primary target pool falls below your configured failover ratio, Google Compute Engine uses the following rules to route your traffic:
    1. If a primary target pool is declared unhealthy (falls below the failover ratio), traffic will be sent to healthy instances in the backup pool.
    2. If the primary target pool is declared unhealthy, but there are no remaining healthy instances in the backup pool, traffic is sent to the remaining healthy instances in the primary pool.
    3. If the primary pool is declared unhealthy and there are no remaining healthy instances in either pool, traffic will be sent to all instances in the primary pool so as not to drop traffic.
    4. If the primary pool doesn't contain any instances, and none of the instances in the backup pool are healthy, traffic will be sent to all instances in the backup pool so as not to drop any traffic.

    At most, one level of failover is supported. For example, if target pool A has backup pool B, and backup pool B has backup pool C, then traffic intended for target pool A can fail over only to backup pool B, never to C.

    Note: If you intend to use backup target pools, you should set up health checks because backup target pools will not work correctly without health checks enabled.

    failoverRatio - [Optional]
    A float between 0.0 and 1.0, which determines when this target pool is declared unhealthy. For example, if this value is set to .1, then this target pool is declared unhealthy when the ratio of healthy instances falls below .1 (10%). You must define this field if you define the backupPool field.

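    Taken together, a target pool resource that uses a backup pool might look like the following sketch (all names are placeholders, and the backup pool and health check are assumed to already exist):

    {
      "name": "www-pool",
      "region": "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/us-central1",
      "healthChecks": [
        "https://www.googleapis.com/compute/v1/projects/<project-id>/global/httpHealthChecks/basic-check"
      ],
      "instances": [
        "https://www.googleapis.com/compute/v1/projects/<project-id>/zones/us-central1-a/instances/www1",
        "https://www.googleapis.com/compute/v1/projects/<project-id>/zones/us-central1-b/instances/www2"
      ],
      "sessionAffinity": "NONE",
      "backupPool": "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/us-central1/targetPools/www-backup-pool",
      "failoverRatio": 0.1
    }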

    Adding a target pool

    To add a target pool, use the gcutil addtargetpool command or make a request to the target pool URI. To use gcutil:

    gcutil --project=<project-id> addtargetpool <target-pool-name> \
           --region=<region> [--health_checks=<health-checks> \
           --backup_pool=<backup-pool> --failover_ratio=<failover-ratio> \
           --session_affinity=<session-affinity>] --instances=<instance-names>

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this target pool.
    --region=<region>
    [Required] The region to create this target pool in. For example, --region=us-central1.
    <target-pool-name>
    [Required] The name for this target pool. The name must be unique in this project, from 1-63 characters long and match the regular expression: [a-z]([-a-z0-9]*[a-z0-9])?, which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    --health_checks=<health-checks>
    [Optional] The HTTP health check resource to use for instances in this target pool. The load balancing service supports, at most, one health check attached to a target pool. If this is left empty, which is the default value, traffic is sent to all instances within the pool as if the instances were all healthy. The health status for this pool will appear as unhealthy, as a warning that the target pool is not protected from an instance failure or application failure.
    --backup_pool=<backup-pool>
    [Optional] Specifies the backup target pool to use if the ratio of healthy instances in this target pool falls under the --failover_ratio. The backup pool must be in the same region as the target pool. If you set this flag, you must also set the --failover_ratio flag. See backupPools for more information.

    Note: If you intend to use backup target pools, you should set up health checks because backup target pools will not work correctly without health checks enabled.

    --failover_ratio=<failover-ratio>
    [Optional] Specifies the ratio of the healthy instances in your target pool before it is considered unhealthy. You must declare a float between 0.0 and 1.0. For example, if you set this to .1 (10%), and the percentage of healthy instances in this target pool falls below 10%, Google Compute Engine will start using the backup pool. You must specify this flag if you set the --backup_pool flag. See failoverRatio for more information.
    --session_affinity=<session-affinity>
    [Optional] Specifies the session affinity option for the load balancing service. Session affinity describes how the load balancing service should connect external clients to backend machines. By default, this is NONE, which means that Google Compute Engine uses a 5-tuple hashing method to connect traffic from external clients to backend instances. Other available values are CLIENT_IP, which means that the service uses a 2-tuple hashing method to connect external clients to a single backend, and CLIENT_IP_PROTO, which means that the service uses a 3-tuple hashing method to connect external clients to a backend. For more information, see sessionAffinity.
    --instances=<instance-names>
    [Optional] A comma-separated list of instances to use for this target pool. For example, --instances=us-central1-a/instances/myinst,us-central1-b/instances/testinstance,us-central1-a/instances/testinstance2. All instances in one target pool must belong to the same region as the target pool. Instances do not need to exist at the time the target pool is created and can be created afterwards.

    If you try to specify duplicate instance names, such as us-central1-a/instances/foo,us-central1-b/instances/bar,us-central1-a/instances/foo, Google Compute Engine ignores the duplicate entries and accepts us-central1-a/instances/foo, us-central1-b/instances/bar.
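
    For example, assuming an HTTP health check named basic-check already exists, you might create a target pool spanning two zones with a command like this (all names are placeholders):

    gcutil --project=my-project addtargetpool www-pool \
           --region=us-central1 --health_checks=basic-check \
           --session_affinity=NONE \
           --instances=us-central1-a/instances/www1,us-central1-b/instances/www2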

    To create a target pool in the API, make a HTTP POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools

    Your request body must contain the required properties described above:

    bodyContent = {
        "name": name,
        "instances": [
           "https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>",
           "https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>",
        ]
      }

    Listing target pools

    To list target pools, run gcutil listtargetpools.

    gcutil --project=<project-id> listtargetpools

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for which you want to list target pools.

    In the API, make an empty HTTP GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools

    Getting a target pool

    To get information about a single target pool, use the gcutil gettargetpool command:

    gcutil --project=<project-id> gettargetpool --region=<region> <target-pool-name>

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this target pool.
    --region=<region>
    [Required] The region this target pool is from. For example, --region=us-central1.
    <target-pool-name>
    [Required] The target pool to get.

    In the API, send an empty HTTP GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<target-pool-name>

    Getting the health status of instances

    To check the current health status of an instance in your target pool, or of all instances in the target pool, you can use the gcutil gettargetpoolhealth command. The command returns the health status as determined by the configured health check, either healthy or unhealthy.

    To use gcutil:

    gcutil --project=<project-id> gettargetpoolhealth <target-pool-name> \
           [--instances=<zone-1>/instances/<instance-1>,<zone-2>/instances/<instance-2>,<zone-n>/instances/<instance-n>] \
           --region=<region>

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    <target-pool>
    [Required] The target pool for which you want to query the instance.
    --instances=<zone-1>/instances/<instance-1>,<zone-2>/instances/<instance-2>,<zone-n>/instances/<instance-n>
    [Optional] Required if you want to get the health status of specific instances within a target pool. The instance or instances to query must be in the format <zone>/instances/<instance>. For example, us-central1-a/instances/instance1,us-central1-b/instances/instance2.

    If you do not define any instances, Google Compute Engine will check the health status of all instances in the target pool.

    --region=<region>
    [Required] The region for the target pool.

    In the API, make a HTTP POST request to the following URI with the instance specified in the request body:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<targetPool>/getHealth

    The request body should contain the following:

    body = {
      "instance": "http://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>"
    }

    Deleting a target pool

    To delete a target pool, you must first make sure that the target pool is not being referenced by any forwarding rules. If a forwarding rule is currently referencing a target pool, you must delete the forwarding rule to remove the reference.

    After you remove a target pool from being referenced by any forwarding rules, delete it using the gcutil deletetargetpool command:

    gcutil --project=<project-id> deletetargetpool --region=<region> [-f] <target-pool-name>

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this target pool.
    --region=<region>
    [Required] The region to delete this target pool from. For example, --region=us-central1.
    -f, --force
    [Optional] Bypass the confirmation prompt to delete this target pool.
    <target-pool-name>
    [Required] The target pool to delete.

    In the API, make an empty HTTP DELETE request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<target-pool-name>

    Updating target pools

    If you have already created your target pool and want to update the target pool specifications, such as adding or removing instances from the instance list, or adding or removing a health check object, you can use the following custom verbs to do so.

    Adding and removing an instance from a target pool

    You can add or remove instances from an existing target pool using the gcutil addtargetpoolinstance and gcutil removetargetpoolinstance commands.

    In the API, you can make a HTTP POST request to the following URIs:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<targetPool>/removeInstance
    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<targetPool>/addInstance

    The body of your request should include the fully-qualified URIs to the instances that you want to add or remove:

    body = {
    ...
     "instances": [
        {
          "instance": "http://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>"
        },
        {
          "instance": "http://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>"
        },
        ...
      ]
    ...
    }

    To use the gcutil addtargetpoolinstance command:

    gcutil --project=<project-id> addtargetpoolinstance <target-pool> \
           --instances=<zone-1>/instances/<instance-1>,<zone-2>/instances/<instance-2>,<zone-n>/instances/<instance-n> \
           --region=<region>

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    <target-pool>
    [Required] The target pool for which you want to add an instance.
    --instances=<zone-1>/instances/<instance-1>,<zone-2>/instances/<instance-2>,<zone-n>/instances/<instance-n>
    [Required] The instance or instances to add to this target pool, in the format <zone>/instances/<instance>. For example, us-central1-a/instances/instance1. The instances must be in the same region as the target pool.
    --region=<region>
    [Required] The region for the target pool.

    Note: If you add an instance that already exists in the target pool, nothing will happen, although the operation will return as successful.
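
    For example, to add a new instance to an existing pool (names are placeholders):

    gcutil --project=my-project addtargetpoolinstance www-pool \
           --instances=us-central1-a/instances/www3 --region=us-central1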

    To use the gcutil removetargetpoolinstance command:

    gcutil --project=<project-id> removetargetpoolinstance <target-pool> \
           --instances=<zone-1>/instances/<instance-1>,<zone-2>/instances/<instance-2>,<zone-n>/instances/<instance-n> \
           --region=<region>

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    <target-pool>
    [Required] The target pool for which you want to remove an instance.
    --instances=<zone-1>/instances/<instance-1>,<zone-2>/instances/<instance-2>,<zone-n>/instances/<instance-n>
    [Required] The instance or instances to remove from this target pool, in the format <zone>/instances/<instance>. For example, us-central1-a/instances/instance1,us-central1-b/instances/instance2.
    --region=<region>
    [Required] The region for the target pool.

    For more information, see the API reference documentation for targetPools:addInstance and targetPools:removeInstance.

    Adding and removing a health check from a target pool

    Health check objects are standalone, global resources that can be associated with or disassociated from any target pool. If you'd like to associate or disassociate a health check with an existing target pool, you can do so using the gcutil addtargetpoolhealthcheck and gcutil removetargetpoolhealthcheck commands. You can also use the API directly.

    If you disassociate all health checks from a target pool, Google Compute Engine will treat all instances as healthy and send traffic to all instances in the target pool. However, if you query for the health status of a target pool without a health check, the status returns as unhealthy to indicate that the target pool does not have a health check. We recommend that your target pools have associated health checks to help you manage your instances.

    To associate or disassociate a health check using the API, make a HTTP POST request to the appropriate URIs:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<targetPool>/removeHealthCheck
    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<targetPool>/addHealthCheck

    The body of your request should contain the health check to associate or disassociate:

    body = {
      "healthCheck": "http://www.googleapis.com/compute/v1/projects/<project-id>/global/httpHealthChecks/<httpHealthCheck>"
      }
    

    For more information, see the API reference documentation for targetPools:addHealthCheck and targetPools:removeHealthCheck.

    To use the gcutil addtargetpoolhealthcheck command:

    gcutil --project=<project-id> addtargetpoolhealthcheck <target-pool> \
           --health_check=<health-check-name> --region=<region>

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    <target-pool>
    [Required] The target pool for which you want to add a health check.
    --health_check=<health-check-name>
    [Required] The health check to add to this target pool.
    --region=<region>
    [Required] The region for the target pool.

    To use the gcutil removetargetpoolhealthcheck command:

    gcutil --project=<project-id> removetargetpoolhealthcheck <target-pool> \
           --health_check=<health-check-name> --region=<region>

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    <target-pool>
    [Required] The target pool for which you want to remove a health check.
    --health_check=<health-check-name>
    [Required] The health check to remove from this target pool.
    --region=<region>
    [Required] The region for the target pool.

    Adding and removing a backup target pool

    When you first create a target pool, you can choose to apply a backup target pool that receives traffic if your primary target pool becomes unhealthy. If you didn't specify a backup target pool when you created your primary target pool and would like to add one, or if you wanted to remove a backup target pool from an existing target pool, you can do so using gcutil settargetpoolbackup or using the setBackup() method in the API. If you have never set up a backup target pool before, note that you should also set up health checks for the feature to work correctly.

    Caution: If your target pool currently has the sessionAffinity field set, resizing the target pool could cause requests that come from the same IP to go to a different instance initially. Eventually, all connections from the IP will go to the same virtual machine, as the old connections close.

    To update a backup pool resource using gcutil:

    gcutil --project=<project-id> settargetpoolbackup <primary-target-pool> \
                --backup_pool=<backup-pool> --failover_ratio=<failover-ratio>

    Important flags and parameters

    --project=<project-id>
    [Required] The project ID for this request.
    <primary-target-pool>
    [Required] The primary target pool for which you want to add or remove a backup pool.
    --backup_pool=<backup-pool>
    [Optional] Sets the backup pool. If you do not set this flag, Google Compute Engine removes the backup pool from your primary target pool. If you set this flag, you must also set the --failover_ratio flag; additionally, your backup target pool must reside in the same region as your primary target pool.
    --failover_ratio=<failover-ratio>
    [Optional] Specifies the ratio of the healthy instances in your target pool before it is considered unhealthy. You must declare a float between 0.0 and 1.0. For example, if you set this to .1 (10%), and the percentage of healthy instances in this target pool falls below 10%, Google Compute Engine will start using the backup pool. You must specify this flag if you set the --backup_pool flag. If you do not set this value, Google Compute Engine removes any existing failover ratio for this target pool, disabling the backup target pool.
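
    For example, to set a backup pool on an existing primary target pool (names are placeholders; both pools are assumed to exist in the same region):

    gcutil --project=my-project settargetpoolbackup www-pool \
           --backup_pool=www-backup-pool --failover_ratio=0.1

    To remove the backup pool again, you could run the same command without the --backup_pool and --failover_ratio flags, as described above.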

    To make a request to update or remove a backup pool through the API, make a HTTP POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<targetPool>/setBackup?failoverRatio=<failover>

    Your resource body must contain the URL to your backup pool:

    body = {
      "target": "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<backup-pool-name>"
    }

    If you define an empty target, or do not define a failover ratio, the backup pool behavior will be disabled for this target pool.

    For more information, see the API reference documentation for targetPools:setBackup.

    Health checking

    When adding a target pool, you can also specify a health check object to use to determine the health of your instances. Health check objects are global resources that can be associated with any target pools.

    To perform health checks, Compute Engine sends health check requests from the IP 169.254.169.254 to each instance within a target pool, and the result of the health check is used to determine whether the instance receives new connections. Compute Engine forwards new connections only to instances that are marked healthy. If an instance becomes unhealthy, it continues to receive packets for its existing connections until they are terminated or the instance terminates, but it will not receive new connections. This allows instances to shut down gracefully without abruptly breaking TCP connections. To take advantage of this, your application should fail its health check for a period of time before shutting down or before you remove the instance from the target pool.

    Although health checks are optional, we recommend setting up health checks for your target pools to protect against failures.

    Steps to set up health checks

    To set up health checking, you must:

    1. Create a health check object.

      A health check object must first exist before you can use it. For the full details for creating a health check object, see adding a health check. You can quickly add a health check with the default settings by running:

      $ gcutil --project=<project-id> addhttphealthcheck my-new-healthcheck
    2. Add a health check object to a target pool.

      You can add a health check to an existing target pool, or to a new target pool. This example adds the health check object to an existing target pool:

      $ gcutil --project=<project-id> --region=<region> addtargetpoolhealthcheck my-pool-name --health_check=my-new-healthcheck
    3. Allow connections from the Compute Engine health check URL.

      Compute Engine sends health check requests to each instance from the IP 169.254.169.254. You will need to ensure that the correct firewall rules are in place to allow 169.254.169.254 to connect to your instances (see the sketch below).
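
      A minimal sketch of such a firewall rule follows. It assumes gcutil's addfirewall command and its --allowed and --allowed_ip_sources flags, which are not otherwise covered on this page, and a backend serving on port 80; adjust the port to match your configuration:

      # Assumed flags: allow health check probes from 169.254.169.254 to TCP port 80.
      gcutil --project=<project-id> addfirewall allow-health-checks \
             --allowed="tcp:80" --allowed_ip_sources="169.254.169.254/32"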

    Adding a health check

    Before you can associate a health check with a target pool, you must first create an HTTP Health Check resource. Keep in mind that creating a health check resource does not automatically apply the health check resource to a target pool. You must manually associate a health check resource with a target pool before it can perform health checking for you. For more information, see the Adding a Target Pool section, which describes how to associate a health check during the creation of new target pool, or review the Adding or removing a health check from a target pool section to add a health check to an existing target pool.

    A health check resource is made up of the following properties:

    name
    [Required] The name for this health check.
    host
    [Optional] The value of the host header used in this HTTP health check request. For example, if you are serving webpages from an instance using the domain www.mydomain.com, you may want to set the hostname as www.mydomain.com so that a health check for that host is performed. By default, this is empty and Google Compute Engine automatically sets the host header in health requests to the same external IP address as the forwarding rule associated with this target pool. For example, if a forwarding rule has an external IP address of 1.2.3.4 and is directing traffic to a target pool named tp1 that has a health check object with a default host setting, the host header would be set to 1.2.3.4.
    requestPath
    [Optional] The request path for this health check. For example, /healthcheck. The default value is /.
    port
    [Optional] The TCP port number for this health check request. The default value is 80.
    checkIntervalSec
    How often, in seconds, to perform a health check for an instance. The default value is 5 seconds.
    timeoutSec
    If Google Compute Engine doesn't receive a HTTP 200 response from the instance by the timeoutSec, the health check request is considered a failure. The default is 5 seconds.
    unhealthyThreshold
    The number of consecutive health check failures before a healthy instance is marked as unhealthy. The default is 2.
    healthyThreshold
    The number of consecutive successful health check attempts before an unhealthy instance is marked as healthy. The default is 2.

    Note: It is not possible to define different health check parameters for each instance. You can only define health check parameters that apply to all instances in that target pool.

    To add a health check object, use the gcutil addhttphealthcheck command or make a HTTP POST request to the health check URI. To use the gcutil addhttphealthcheck command:

    gcutil --project=<project-id> addhttphealthcheck <health-check-name> \
                [--check_interval_sec=<interval-in-secs> --check_timeout_sec=<timeout-secs> \
                --description=<description> --healthy_threshold=<healthy-threshold> \
                --unhealthy_threshold=<unhealthy-threshold> --host=<host> \
                --request_path=<path> --port=<port>]

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    <health-check-name>
    [Required] Name of the health check.
    --check_interval_sec=<interval-in-secs>
    [Optional] How often, in seconds, to perform a health check. The default value is 5 seconds.
    --check_timeout_sec=<timeout-secs>
    [Optional] If Google Compute Engine does not receive a HTTP 200 response from the instance by timeoutSec, the health check request is considered a failure. The default is 5 seconds.
    --description=<description>
    [Optional] A description for this health check.
    --healthy_threshold=<healthy-threshold>
    [Optional] The number of consecutive successful health check attempts before an unhealthy instance is considered healthy. The default is 2.
    --unhealthy_threshold=<unhealthy-threshold>
    [Optional] The number of consecutive health check failures before a healthy instance is considered unhealthy. The default is 2.
    --host=<host>
    [Optional] The value of the host header used in this HTTP health check request. For example, if you are serving webpages from an instance using the domain www.mydomain.com, you may want to set the hostname as www.mydomain.com so that a health check for that host is performed. By default, this is empty and Google Compute Engine automatically sets the host header in health requests to the same external IP address as the forwarding rule associated with this target pool. For example, if a forwarding rule has an external IP address of 1.2.3.4 and is directing traffic to a target pool named tp1 that has a health check object with a default host setting, the host header would be set to 1.2.3.4.
    --request_path=<path>
    [Optional] The request path to use for this health check. By default, this is /.
    --port=<port>
    [Optional] The port to use for this health check. This can differ from the ports configured on your forwarding rule. By default, this is port 80.
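
    For example, a health check that probes port 8080 at /healthcheck every 10 seconds might be created like this (the name and values are illustrative):

    gcutil --project=my-project addhttphealthcheck basic-check \
           --request_path=/healthcheck --port=8080 \
           --check_interval_sec=10 --unhealthy_threshold=3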

    To make a request to the API, make a HTTP POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/global/httpHealthChecks

    The body of your request should contain, at a minimum, the following fields:

    body = {
      "name" : <health-check-name>,
    }
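
    A more complete request body could also set the optional fields described above (the values shown are illustrative):

    body = {
      "name" : "basic-check",
      "requestPath" : "/healthcheck",
      "port" : 8080,
      "checkIntervalSec" : 10,
      "timeoutSec" : 5,
      "unhealthyThreshold" : 3,
      "healthyThreshold" : 2
    }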

    Refer to the httpHealthChecks reference documentation for more information on constructing an HTTP health check API request.

    Updating health checks

    If you need to update the properties of an existing health check, you can use the gcutil updatehttphealthcheck command or make a HTTP PUT or HTTP PATCH request to the appropriate URI. To use gcutil updatehttphealthcheck, provide all the same flags as if you're adding a health check. The values of your request will replace the existing values of the health check:

    gcutil --project=<project-id> updatehttphealthcheck <health-check-name> \
                --check_interval_sec=<interval-in-secs> --check_timeout_sec=<timeout-secs> \
                --description=<description> --healthy_threshold=<healthy-threshold> \
                --unhealthy_threshold=<unhealthy-threshold> --host=<host> \
                --request_path=<path> --port=<port>

    See Adding a health check for a detailed description of the above flags.

    In the API, you can choose to update your health check using the standard HTTP PUT request or use HTTP PATCH to partially update your health check. Make either a HTTP PUT, or HTTP PATCH request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project>/global/httpHealthChecks/<health-check-name>

    Your request body should contain your desired fields and values to update for this health check. See the httpHealthCheck resource representation to see all available fields for this resource.
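
    For example, a PATCH request that changes only the request path could use a body as small as the following sketch:

    body = {
      "requestPath" : "/newpath"
    }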

    Listing health checks

    To list health checks, perform a gcutil listhttphealthchecks command:

    gcutil --project=<project-id> listhttphealthchecks

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this request.

    In the API, make a HTTP GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/global/httpHealthChecks

    Getting a health check

    To get information about a health check, use gcutil gethttphealthcheck:

    gcutil --project=<project-id> gethttphealthcheck <health-check-name>

    Important parameters and flags:

    --project=<project-id>
    [Required] The project ID for this request.
    <health-check-name>
    [Required] The name of the health check to get.

    In the API, make a HTTP GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/global/httpHealthChecks/<health-check-name>

    Deleting a health check

    In order to delete a health check object, you must remove references to the health check from any existing target pools, or delete the target pools that are using the health check object altogether. If you just want to update the health check resource, you can do so using the gcutil updatehttphealthcheck command.

    To delete a health check object, use gcutil deletehttphealthcheck or make a HTTP DELETE request to the appropriate URI. To use gcutil deletehttphealthcheck:

    gcutil --project=<project-id> deletehttphealthcheck [-f] <health-check-1> <health-check-2> <health-check-n>

    Important parameters and flags:

    --project=<project-id>
    [Required] The project ID for this request.
    -f, --force
    [Optional] Bypass the confirmation prompt to delete these health checks.
    <health-check-n>
    [Required] The name of the health check or health checks to delete.

    In the API, make an empty HTTP DELETE request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/global/httpHealthChecks/<health-check-name>

    Using an older image

    In order to use load balancing, your instances must use an image dated later than 2013-07-18. If you would prefer to use an older image or a custom image, you need to explicitly add the IP address from your forwarding rules to your operating system's virtual interface so it can accept connections that are destined to that load balanced IP. You will need to do this for all instances that should receive load balanced traffic on that IP address. Without this, your operating system's network stack will reject packets destined to that IP since the load balancing service does not translate load-balanced addresses to your internal instances' IP addresses.

    For example, if a forwarding rule is load balancing on behalf of the external IP address 1.2.3.4, you would need to add that IP address to all the virtual machine instances that the forwarding rule is routing traffic to. To do that, run the following command on each of your instances:

    sudo ip route add to local 1.2.3.4/32 dev eth0 proto 66

    You can replace eth0 with your desired virtual interface. If you would like to programmatically get the list of external IPs pointing at your virtual machine instance, you can query the metadata server at the following URI:

    http://metadata/computeMetadata/v1beta1/instance/network-interfaces/0/forwarded-ips/?recursive=true

    Page: machine-types

    Machine types determine the physical specifications of your machines, such as the amount of memory, virtual cores, and persistent disk limits an instance will have. All machine types are currently managed by Google Compute Engine.

    Machine types are divided into different classes, including:

    • Standard machine types
    • High CPU machine types
    • High memory machine types
    • Shared-core machine types

    Each machine type is billed differently. For pricing information, review the price sheet.

    Available Machine Types

    Standard Machine Types

    Machine Name | Description | Virtual CPUs1 | Memory (GB) | GCEUs | Max Number of Persistent Disks (PDs)* | Max Total PD Size (TB)
    n1-standard-1 | Standard 1 CPU machine type with 1 virtual CPU and 3.75 GB of memory. | 1 | 3.75 | 2.75 | 16 | 10
    n1-standard-2 | Standard 2 CPU machine type with 2 virtual CPUs and 7.5 GB of memory. | 2 | 7.50 | 5.50 | 16 | 10
    n1-standard-4 | Standard 4 CPU machine type with 4 virtual CPUs and 15 GB of memory. | 4 | 15 | 11 | 16 | 10
    n1-standard-8 | Standard 8 CPU machine type with 8 virtual CPUs and 30 GB of memory. | 8 | 30 | 22 | 16 | 10
    n1-standard-16 (Preview) | Standard 16 CPU machine type with 16 virtual CPUs and 60 GB of memory. | 16 | 60 | 44 | 16 | 10

    *Persistent disk usage is charged separately from machine type pricing.
    1For the 'n1' series of machine types, a virtual CPU is implemented as a single hyperthread on a 2.6GHz Intel Sandy Bridge Xeon or Intel Ivy Bridge Xeon (or newer) processor. This means that the 'n1-standard-2' machine type will see a whole physical core.

    High Memory Machine Types*

    High memory machine types are ideal for tasks that require more memory relative to virtual cores.

    Machine Name | Description | Virtual CPUs1 | Memory (GB) | GCEUs | Max Number of Persistent Disks (PDs)** | Max Total PD Size (TB)
    n1-highmem-2 | High memory 2 CPU machine type with 2 virtual CPUs and 13 GB of memory. | 2 | 13 | 5.50 | 16 | 10
    n1-highmem-4 | High memory 4 CPU machine type with 4 virtual CPUs and 26 GB of memory. | 4 | 26 | 11 | 16 | 10
    n1-highmem-8 | High memory 8 CPU machine type with 8 virtual CPUs and 52 GB of memory. | 8 | 52 | 22 | 16 | 10
    n1-highmem-16 (Preview) | High memory 16 CPU machine type with 16 virtual CPUs and 104 GB of memory. | 16 | 104 | 44 | 16 | 10

    *High memory machine types have 6.50GB of RAM per virtual core.
    **Persistent disk usage is charged separately from machine type pricing.
    1For the 'n1' series of machine types, a virtual CPU is implemented as a single hyperthread on a 2.6GHz Intel Sandy Bridge Xeon or Intel Ivy Bridge Xeon (or newer) processor. This means that the 'n1-standard-2' machine type will see a whole physical core.

    High CPU Machine Types*

    High CPU machine types are ideal for tasks that require more virtual cores relative to memory.

    Machine Name | Description | Virtual CPUs1 | Memory (GB) | GCEUs | Max Number of Persistent Disks (PDs)** | Max Total PD Size (TB)
    n1-highcpu-2 | High CPU machine type with 2 virtual CPUs and 1.80 GB of memory. | 2 | 1.80 | 5.50 | 16 | 10
    n1-highcpu-4 | High CPU machine type with 4 virtual CPUs and 3.60 GB of memory. | 4 | 3.60 | 11 | 16 | 10
    n1-highcpu-8 | High CPU machine type with 8 virtual CPUs and 7.20 GB of memory. | 8 | 7.20 | 22 | 16 | 10
    n1-highcpu-16 (Preview) | High CPU machine type with 16 virtual CPUs and 14.4 GB of memory. | 16 | 14.4 | 44 | 16 | 10

    *High CPU machine types have one virtual core for every 0.90 GB of RAM.
    **Persistent disk usage is charged separately from machine type pricing.
    1For the 'n1' series of machine types, a virtual CPU is implemented as a single hyperthread on a 2.6GHz Intel Sandy Bridge Xeon or Intel Ivy Bridge Xeon (or newer) processor. This means that the 'n1-standard-2' machine type will see a whole physical core.

    Shared-Core Machine Types

    Shared-core machine types are ideal for applications that don't require a lot of resources. Shared-core instances are more cost-effective for running small, non-resource intensive applications than standard, high-memory or high-CPU instance types.

    f1-micro Bursting

    f1-micro machine types offer bursting capabilities that allow instances to use additional physical CPU for short periods of time. Bursting happens automatically when your instance requires more physical CPU than originally allocated. During these spikes, your instance will opportunistically take advantage of available physical CPU in bursts. Note that bursts are not permanent and are only possible periodically.

    Machine Name | Description | Virtual CPUs | Memory (GB) | GCEUs | Max Number of Persistent Disks (PDs)* | Max Total PD Size (TB)
    f1-micro | Micro machine type with 1 virtual CPU, 0.60 GB of memory (no scratch disk), backed by a shared physical core. | 1 | 0.60 | Shared CPU, not guaranteed | 4 | 3
    g1-small | Shared-core machine type with 1 virtual CPU, 1.70 GB of memory (no scratch disk space), backed by a shared physical core. | 1 | 1.70 | 1.38 | 4 | 3

    *Persistent disk usage is charged separately from machine type pricing.

    What are Google Compute Engine Units (GCEUs)?

    GCEU (Google Compute Engine Unit), pronounced GQ, is a unit of CPU capacity that we use to describe the compute power of our instance types. We chose 2.75 GCEUs to represent the minimum power of one logical core (a hardware hyper-thread) on our Sandy Bridge or Ivy Bridge platform.


    To view a list of available machine types, you can always run:

    gcutil --project=<project-id> listmachinetypes

    Requesting access to 16-core machine types

    Currently, 16-core machine types are available in limited preview. This means that users can request access to the machine types but it is not yet available to everyone. If you would like to request access to these machine types, fill out the Limited Preview request form.

    Over time, we are working to make these machine types generally available.

    Using a Machine Type

    In gcutil, you can specify your desired machine type by choosing it from the prompt:

    user@local:~$ gcutil --project=<project-id> addinstance <instance-name>
    ....
    13: n1-highcpu-8        8 vCPUs, 7.2 GB RAM
    15: n1-highmem-2        2 vCPUs, 13 GB RAM
    17: n1-highmem-4        4 vCPUs, 26 GB RAM
    19: n1-highmem-8        8 vCPUs, 52 GB RAM
    20: n1-highmem-16       16 vCPUs, 104 GB RAM
    21: f1-micro    1 vCPU (shared physical core) and 0.6 GB RAM
    22: g1-small    1 vCPU (shared physical core) and 1.7 GB RAM
    >>> 20

    You can also provide it using the --machine_type flag:

    gcutil --project=<project-id> addinstance <instance-name> --machine_type=n1-standard-1

    In the API, provide your machine type as part of the request body to create an instance:

     body = {
        'name': NEW_INSTANCE_NAME,
        'machineType': <fully-qualified-machine_type_url>,
        'networkInterfaces': [{
          'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
           }],
          'network': <fully-qualified-network_url>
        }],
        'disks': [{
           'source': <fully-qualified-boot-disk-url>,
           'boot': 'true',
         }]
      }
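
    The machine type value is the fully-qualified URL of the machine type resource. For example, assuming the n1-standard-1 machine type in a given zone, the URL takes this form:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/machineTypes/n1-standard-1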

    Page: metadata

    Every instance has metadata associated with it. Some of this metadata can be defined by the user, and other information, such as the host name, is assigned by Google Compute Engine during the startup process.

    Compute Engine might offer more than one metadata version at a single time, but it is recommended that you always use the newest metadata server version available. At any time, Google Compute Engine may add new entries to the metadata server and add new fields to responses. Check back periodically for changes!

    Current version: v1

    Metadata server

    Every instance stores its metadata on the metadata server. You can query this metadata server programmatically for information such as the instance's host name, instance ID, startup scripts, and custom metadata. It also provides access to service account information. Your instance automatically has access to the metadata server API without any additional authorization.

    Metadata is stored in the format key:value. There is a default set of metadata entries that every instance has access to and you can also choose to create custom metadata. To query for certain metadata, you can construct a request to the metadata server using the URL or the IP address.

    • Full URL: http://metadata.google.internal/computeMetadata/v1/
    • Shorthand URL: http://metadata/computeMetadata/v1/
    • IP address: http://169.254.169.254/computeMetadata/v1/

    For more information, see Querying Metadata.

    Default metadata

    Google Compute Engine defines a set of default metadata entries that provide information about your instance or project. Default metadata is always defined and set by the server. You cannot manually edit any of these metadata pairs.

    The following is a list of default metadata available to a project. Some metadata entries are directories that contain other metadata keys. This difference is marked by a trailing slash in the metadata name. For example, attributes/ is a directory that contains other keys, while numeric-project-id is a metadata key or endpoint that maps to a value.

    Note: This document uses "metadata keys" and "metadata endpoints" interchangeably; both phrases refer to a metadata key that directly maps to one or more values.

    Relative to http://metadata.google.internal/computeMetadata/v1/project/
    Metadata Entry Description
    attributes/ A directory of custom metadata values passed to the project.
    attributes/sshKeys A list of ssh keys that can be used to connect to instances in the project.
    numeric-project-id The numeric project ID of the instance, which is not the same as the project name visible in the Google Developers Console. Do not use this with the --project flag for any gcutil calls; instead use the project-id property value.
    project-id The project ID.

    The following is a list of default metadata available to an instance:

    Relative to http://metadata.google.internal/computeMetadata/v1/instance/
    Metadata Entry Description
    attributes/ A directory of custom metadata values passed to the instance during startup. See Specifying Custom Metadata below.
    description The free-text description of an instance, assigned using the --description flag, or set in the API.
    disks/ A directory of disks attached to this instance.
    hostname The host name of the instance.
    id The ID of the instance. This is a unique, numerical ID that is generated by Google Compute Engine. This is useful for identifying instances if you do not want to use instance names.
    image The fully-qualified image name.
    machine-type The fully-qualified machine type name of the instance's host machine.
    network-interfaces/ A directory of network interfaces for the instance.
    network-interfaces/<index>/forwarded-ips/ A directory of any external IPs that are currently pointing to this virtual machine instance, for the network interface at <index>. Specifically, provides a list of external IPs served by forwarding rules that direct packets to this instance.
    scheduling/ A directory with the scheduling options for the instance.
    scheduling/on-host-maintenance The instance's scheduled maintenance event behavior setting. This value is set with the --on_host_maintenance flag or via the API.
    scheduling/automatic-restart The instance's automatic restart setting. This value is set with the --automatic_restart flag or via the API.
    maintenance-event The path that indicates that a scheduled maintenance event is affecting this instance. See Scheduled maintenance notice for details.
    project-id The instance's project ID. You can use this ID in the --project flag for any gcutil calls.
    service-accounts/ A directory of service accounts associated with the instance.
    tags Any tags associated with the instance.
    zone The instance's zone.

    Querying metadata

    You can query a metadata server only from its associated instance. You cannot query an instance's metadata from another instance or directly from your local computer. For example, you would send a curl or wget command from the instance to its metadata server.

    You can query the metadata server using the root metadata server URL or the IP address. All of these URLs will work for your metadata requests.

    • Full URL: http://metadata.google.internal/computeMetadata/v1/
    • Shorthand URL: http://metadata/computeMetadata/v1/
    • IP address: http://169.254.169.254/computeMetadata/v1/

    These are the root URLs for all instance and project metadata. Specific metadata values are defined as sub-paths below these root URLs.

    When you query the metadata server, you must also provide the following header in all of your requests:

    Metadata-Flavor: Google

    This header indicates that the request was sent with the intention of retrieving metadata values, rather than unintentionally from an insecure source, and allows the metadata server to return the data you requested. If you do not provide this header, the metadata server denies your request.

    Note: Previously, the X-Google-Metadata-Request: True header was required in requests. Both of these headers are still supported, but we recommend using the Metadata-Flavor header rather than the X-Google-Metadata-Request header.

    Depending on the type of query you make, the metadata server can return data in a number of ways.

    Querying directory listings

    The metadata server uses directories to organize certain metadata keys. Any metadata entry ending in a trailing slash is a directory. For example, the disks/ entry is a directory of disks attached to that instance:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"
    0/
    1/
    2/
    

    Similarly, if you wanted more information about the 1/ directory, you can query the specific URL for that directory:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/1/" -H "Metadata-Flavor: Google"
    device-name
    index
    mode
    type

    Querying metadata endpoints

    Other metadata entries are keys or endpoints that return one or more values. To query for data from a metadata endpoint, send a query to that particular endpoint. For example, to query the mode of a specific disk, query the following endpoint:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/1/mode" -H "Metadata-Flavor: Google"
    READ_WRITE
    

    Each endpoint defines the default format for its returned data. Some endpoints return data in JSON format by default, while other endpoints return data as a string. You can override the default format by using the alt=json or alt=text query parameter, which returns the data as a JSON string or as plain text, respectively.

    For example, the tags key automatically returns data in JSON format. You can choose to return data in text format instead, by specifying the alt=text query parameter:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/tags" -H "Metadata-Flavor: Google"
    ["bread","butter","cheese","cream","lettuce"]
    
    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/tags?alt=text" -H "Metadata-Flavor: Google"
    bread
    butter
    cheese
    cream
    lettuce

    Querying recursive contents

    If you want to return all contents underneath a directory, use the recursive=true query parameter with your request:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/?recursive=true" -H "Metadata-Flavor: Google"
    [{"deviceName":"boot","index":0,"mode":"READ_WRITE","type":"PERSISTENT"},
    {"deviceName":"persistent-disk-1","index":1,"mode":"READ_WRITE","type":"PERSISTENT"},
    {"deviceName":"persistent-disk-2","index":2,"mode":"READ_ONLY","type":"PERSISTENT"}]

    By default, recursive contents are returned in JSON format. If you want to return these contents in text format, append the alt=text query parameter:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/?recursive=true&alt=text" -H "Metadata-Flavor: Google"
    0/device-name boot
    0/index 0
    0/mode READ_WRITE
    0/type PERSISTENT
    1/device-name persistent-disk-1
    1/index 1
    1/mode READ_WRITE
    1/type PERSISTENT
    2/device-name persistent-disk-2
    2/index 2
    2/mode READ_ONLY
    2/type PERSISTENT
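
    Because recursive queries return JSON by default, their output is easy to consume from code. The following Python sketch (illustrative only) fetches the disks/ directory recursively and prints each disk's device name and mode:

    import json
    import urllib2

    DISKS_URL = ('http://metadata.google.internal/computeMetadata/v1/'
                 'instance/disks/?recursive=true')

    request = urllib2.Request(DISKS_URL, headers={'Metadata-Flavor': 'Google'})
    disks = json.loads(urllib2.urlopen(request).read())

    for disk in disks:
      # Each entry carries the same fields shown in the JSON output above.
      print '%s: index=%d mode=%s' % (disk['deviceName'], disk['index'], disk['mode'])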

    Detecting if you are running in Compute Engine

    You can easily detect if your applications or scripts are running within a Compute Engine instance by using the metadata server. When you make a request to the server, any response from the metadata server will contain the Metadata-Flavor: Google header. You can look for this header to reliably detect if you are running in Compute Engine.

    For example, the following curl request returns a Metadata-Flavor: Google header, indicating that the request is being made from within a Compute Engine instance.

    me@my-inst:~$ curl metadata.google.internal -i
    HTTP/1.1 200 OK
    Metadata-Flavor: Google
    Content-Type: application/text
    Date: Thu, 10 Apr 2014 19:24:27 GMT
    Server: Metadata Server for VM
    Content-Length: 22
    X-XSS-Protection: 1; mode=block
    X-Frame-Options: SAMEORIGIN
    
    0.1/
    computeMetadata/
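
    You can make the same check programmatically. The sketch below (illustrative, not an official library call) requests the metadata root and treats the presence of the Metadata-Flavor: Google response header as confirmation that the code is running on a Compute Engine instance; a connection error or a missing header means it is not:

    import socket
    import urllib2

    def running_on_compute_engine():
      try:
        response = urllib2.urlopen('http://metadata.google.internal', timeout=1)
      except (urllib2.URLError, socket.error):
        # The metadata server is not reachable outside Compute Engine.
        return False
      return response.info().getheader('Metadata-Flavor') == 'Google'

    print running_on_compute_engine()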

    Specifying custom metadata

    You can set custom metadata for an instance or project outside of the server-defined metadata. This is useful for passing in arbitrary values to your project or instance that can be queried by your code on the instance.

    When you specify custom metadata, the metadata server stores it in the attributes/ directory for that instance or project. To query for all the custom metadata available to an instance or project, query the attributes/ directory:

    curl "http://metadata.google.internal/computeMetadata/v1/<instance|project>/attributes/" -H "Metadata-Flavor: Google"

    Note: Google Compute Engine limits the length of each custom metadata value to 32768 bytes. If your metadata exceeds this limit, you won't be able to specify it on the command line. Instead, create startup scripts, store them on Google Cloud Storage, and run them at instance creation time. See startup scripts for more information.

    Setting custom instance metadata

    You can set custom metadata for an instance either using the --metadata flag during instance creation, or by using the gcutil setinstancemetadata method on a running instance.

    Setting Metadata during Instance Creation

    To pass in custom metadata during instance creation, provide the --metadata flag with your request. You can provide this flag as many times as you would like, and pass in multiple metadata pairs. The following example demonstrates starting a new instance with the key "bread" and the value "butter", and querying it from the instance.

    $ gcutil --project=myproject addinstance myinstance --metadata=bread:butter
    ... select a zone, machine type, and image...
    INFO: Waiting for insert of myinstance. Sleeping for 3s.
    INFO: Waiting for insert of myinstance. Sleeping for 3s.
    ...
    $ gcutil --project=myproject ssh myinstance
    ...omit ssh startup info...
    user@myinstance:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/bread" -H "Metadata-Flavor: Google"
    butter

    Updating Or Setting Metadata on a Running Instance

    If you want to set or update the metadata entries for a running instance, you can use the gcutil setinstancemetadata command:

    gcutil --project=<project-id> setinstancemetadata <instance-name> --metadata=<key-1:value-1> --metadata=<key-2:value-2> --metadata=<key-n:value-n> --fingerprint=<current-fingerprint-hash> 

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID of the instance.
    <instance-name>
    [Required] The instance name for which you want to update metadata.
    --metadata=<key-n:value-n>
    [Required] Metadata entries to update. All metadata updates are done in a batch request. This means that you must set all metadata entries in every update request, even if you are just updating one or two entries. For example, assume your instance metadata looks like this:
    | metadata               |                |
    | fingerprint            | K5IUL75tSYQ=   |
    |   bread                | mayo           |
    |   cheese               | cheddar        |
    |   lettuce              | butter         |
    

    If you want to remove the lettuce:butter entry, you need to update the entire list, omitting the entry you want to delete:

    gcutil --project=myproject setinstancemetadata ... --metadata=bread:mayo --metadata=cheese:cheddar

    --fingerprint=<current-fingerprint-hash>
    [Required] The current fingerprint hash of the metadata list. You can grab the fingerprint hash by performing a gcutil getinstance <instance-name> command, and copying the value of the metadata fingerprint field. The fingerprint you supply must match the current fingerprint on the instance. This performs optimistic locking, so that only one user may update the metadata list at any one time.

    For example:

    $ gcutil --project=myproject getinstance myinstance
    +------------------------+--------------------------------------------------------------------------------------------+
    |        property        |                                       value                                                |
    +------------------------+--------------------------------------------------------------------------------------------+
    | name                   | myinstance                                                                                 |
    | description            |                                                                                            |
    | creation-time          | 2013-01-18T11:15:54.054-08:00                                                              |
    | machine                | n1-standard-1                                                                              |
    | image                  | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD                              |
    | zone                   | <zone>                                                                                     |
    | tags-fingerprint       | kFyURcqFcPg=                                                                               |
    | metadata-fingerprint   | 42WmSpB8rSM=                                                                               |
    | status                 | RUNNING                                                                                    |
    | status-message         |                                                                                            |
    |                        |                                                                                            |
    | disk                   | 0                                                                                          |
    |   type                 | PERSISTENT                                                                                 |
    |   mode                 | READ_WRITE                                                                                 |
    |   deviceName           | pd1                                                                                        |
    |   source               | https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks/<disk> |
    |                        |                                                                                            |
    | network-interface      |                                                                                            |
    |   network              | default                                                                                    |
    |   ip                   | 00.000.000.000                                                                             |
    |   access-configuration | External NAT                                                                               |
    |     type               | ONE_TO_ONE_NAT                                                                             |
    |     external-ip        | 000.000.00.000                                                                             |
    |                        |                                                                                            |
    | metadata               |                                                                                            |
    | fingerprint            | 42WmSpB8rSM=                                                                               |
    |   foo                  | bar                                                                                        |
    |   baz                  | bat                                                                                        |
    |   fe                   | fi                                                                                         |
    |   fo                   | fum                                                                                        |
    | tags                   |                                                                                            |
    | fingerprint            | kFyURcqFcPg=                                                                               |
    |                        | cheese                                                                                     |
    |                        | mustard                                                                                    |
    |                        | romaine                                                                                    |
    +------------------------+--------------------------------------------------------------------------------------------+
    
    $ gcutil --project=myproject setinstancemetadata --metadata=foo:bar --metadata=baz:bat --fingerprint=42WmSpB8rSM= myinstance
    INFO: Waiting for setMetadata of instance myinstance. Sleeping for 3s
    ....
    
    $ gcutil --project=myproject getinstance myinstance
    +------------------------+--------------------------------------------------------------------------------------------+
    |        property        |                                       value                                                |
    +------------------------+--------------------------------------------------------------------------------------------+
    | name                   | myinstance                                                                                 |
    | description            |                                                                                            |
    | creation-time          | 2013-01-18T11:15:54.054-08:00                                                              |
    | machine                | n1-standard-1                                                                              |
    | image                  | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD                              |
    | zone                   | <zone>                                                                                     |
    | tags-fingerprint       | kFyURcqFcPg=                                                                               |
    | metadata-fingerprint   | 76YdAlA9rSL=                                                                               |
    | status                 | RUNNING                                                                                    |
    | status-message         |                                                                                            |
    |                        |                                                                                            |
    | disk                   | 0                                                                                          |
    |   type                 | PERSISTENT                                                                                 |
    |   mode                 | READ_WRITE                                                                                 |
    |   deviceName           | pd1                                                                                        |
    |   source               | https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks/<disk> |
    |                        |                                                                                            |
    | network-interface      |                                                                                            |
    |   network              | default                                                                                    |
    |   ip                   | 00.000.000.000                                                                             |
    |   access-configuration | External NAT                                                                               |
    |     type               | ONE_TO_ONE_NAT                                                                             |
    |     external-ip        | 000.000.00.000                                                                             |
    |                        |                                                                                            |
    | metadata               |                                                                                            |
    | fingerprint            | 76YdAlA9rSL=                                                                               |
    |   foo                  | bar                                                                                        |
    |   baz                  | bat                                                                                        |
    | tags                   |                                                                                            |
    | fingerprint            | kFyURcqFcPg=                                                                               |
    |                        | cheese                                                                                     |
    |                        | mustard                                                                                    |
    |                        | romaine                                                                                    |
    +------------------------+--------------------------------------------------------------------------------------------+
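
    The same read-modify-write pattern applies if you set metadata through the API rather than gcutil: read the instance to obtain its current metadata fingerprint, then send back the complete new list along with that fingerprint. The following Python sketch assumes the google-api-python-client library and an already-authorized httplib2.Http object (neither is covered on this page), so treat it as an outline rather than a drop-in script:

    from apiclient.discovery import build

    def set_instance_metadata(authorized_http, project, zone, instance, items):
      # items is a dict of key/value pairs that will replace the instance's
      # entire custom metadata list.
      compute = build('compute', 'v1', http=authorized_http)

      # Read the instance to get the current metadata fingerprint.
      inst = compute.instances().get(
          project=project, zone=zone, instance=instance).execute()
      fingerprint = inst['metadata']['fingerprint']

      # Send the complete new list back along with that fingerprint.
      body = {
          'fingerprint': fingerprint,
          'items': [{'key': k, 'value': v} for k, v in items.items()],
      }
      return compute.instances().setMetadata(
          project=project, zone=zone, instance=instance, body=body).execute()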

    Applying a Startup Script using Custom Metadata

    The custom metadata option is especially useful for specifying startup scripts that run during instance boot. Startup scripts can be used to install software, check and start services, or set custom environment variables. Using gcutil, you can pass in startup scripts directly using the --metadata flag or from a local file using the --metadata_from_file flag. For example:

    • Passing in a startup script from a local file:
      gcutil addinstance test-instance --metadata_from_file=startup-script:<file> --project=<project-id>
    • Passing in a startup script from Google Cloud Storage:
      gcutil addinstance test-instance --metadata=startup-script-url:<url> --project=<project-id>
    • Passing in your startup script directly:
      gcutil addinstance test-instance --metadata=startup-script:"#! /bin/bash
      > # Installs apache and a custom homepage
      > apt-get update
      > apt-get install -y apache2
      > cat <<EOF > /var/www/index.html
      > <html><body><h1>Hello World</h1>
      > <p>This page was created from a simple start up script!</p>
      > </body></html>
      > EOF"

    For more information about startup scripts and how to use them, see Using Startup Scripts.

    Setting project-wide custom metadata

    If you want to set project-level custom metadata, which is accessible by all instances in that project, you can do so using the setcommoninstancemetadata command. For example, if you define a project-wide metadata pair of baz:bat, the metadata pair is automatically available to all instances at the project/attributes/ directory:

    http://metadata.google.internal/computeMetadata/v1/project/attributes/

    To set project-wide metadata using the gcutil command tool, use the gcutil setcommoninstancemetadata command. For example:

    $ gcutil --project=myproject setcommoninstancemetadata --metadata foo:bar --metadata baz:bat [-f]
    $ gcutil --project=myproject getproject
    +--------------------------+---------------------------------------+
    | name                     | myproject                             |
    | description              |                                       |
    | creation-time            | 2012-01-11T17:45:37.812-08:00         |
    | usage                    |                                       |
    |   snapshots              | 1.0/1000.0                            |
    |   networks               | 4.0/5.0                               |
    |   firewalls              | 4.0/100.0                             |
    |   images                 | 3.0/100.0                             |
    |   routes                 | 8.0/100.0                             |
    |   forwarding-rules       | 1.0/50.0                              |
    |   target-pools           | 2.0/50.0                              |
    |   health-checks          | 2.0/50.0                              |
    | common-instance-metadata |                                       |
    |   foo                    | bar                                   |
    |   baz                    | bat                                   |
    +--------------------------+---------------------------------------+

    Important Flags and Parameters:

    --project=<project-id>
    [Required] This flag is required for every gcutil command except help, unless you have previously specified the --cache_flag_values flag to store your project ID information.
    --metadata
    [Optional] Specifies a single entry specified as a colon-separated key value pair. This flag can be specified multiple times on the command line.
    --metadata_from_file
    [Optional] Specifies a single entry specified as a key and a file from which to read the key. The two parts are separated by a colon. This flag can be specified multiple times on the command line.
    -f
    [Optional] Specifies that the update should be forced even if it will remove existing metadata entries. Since project-wide metadata is updated in a single batch operation, this flag is required if any of the existing metadata keys are not specified in the command. This helps prevent deleting metadata accidentally.

    Updating Project-Wide Metadata

    Similar to instance metadata, project metadata updates are done in batch requests. This means that you must set all metadata entries in every update request, even if you are just updating one or two entries. For example, let's assume you have the following metadata entries:

    +-----------------------------------------+---------------------------------+
    | common-instance-metadata                |                                 |
    |   baz                                   | bat                             |
    |   foo                                   | bar                             |
    |   try                                   | eat                             |
    |   car                                   | saw                             |
    +-----------------------------------------+---------------------------------+

    If you wanted to update the metadata value for baz and foo, you must also set the values of try and car, even if their values are the same. If you don't explicitly set every metadata value, gcutil safely blocks the operation, unless you provide the -f flag. With the -f flag, gcutil removes existing metadata entries that aren't set in your request. To update baz and foo, you would need to do the following:

    $ gcutil setcommoninstancemetadata --metadata baz:new --metadata foo:new --metadata try:eat --metadata car:saw --project=my-project

    If you're sure you want to erase the metadata entries that aren't explicitly set in your update request, rerun your command with the -f flag.

    If you're updating metadata that may be several strings long, you may want to use the --metadata_from_file flag, which reads in the contents of a file as the key value:

    gcutil setcommoninstancemetadata --metadata_from_file=<key>:<file> --project=my-project

    This is especially useful for setting metadata attributes like sshKeys, which is usually a random combination of several strings. Rather than providing the raw string value, you could just save the string in the file and provide the file with the --metadata_from_file flag.

    Note: Setting and updating project metadata is slightly different than setting instance metadata because you're not required to provide a fingerprint hash with your request. However, you must explicitly define every metadata entry or provide the -f flag in your request to erase metadata entries you didn't redefine.

    Waiting for change

    Given that metadata values can change while your instance is running, the metadata server offers the ability to be notified of metadata changes using the wait-for-change feature. This feature allows you to perform hanging GET requests that only return when your specified metadata has changed. You can use this feature on custom metadata or server-defined metadata, so if anything changes about your instance or project, or if someone updates a custom metadata value, you can programmatically react to the change. For example, you can perform a request on the tags key so that the request only returns when the contents of the tags metadata have changed. When the request returns, it provides the new value of that metadata key.

    Note: You can only perform a wait-for-change request on a metadata endpoint or recursively on the contents of a directory. It is not possible to perform a wait-for-change request on a directory listing and the metadata server fails your request if you try to do so.

    It is also not possible to perform a wait-for-change request for a service account token. If you try to make a wait-for-change request to the service account token URL, the request fails immediately.

    In both these cases, Google Compute Engine returns a 400 Invalid Request error.

    To perform a wait-for-change request, query a metadata key and append the ?wait_for_change=true query parameter:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/tags?wait_for_change=true" -H "Metadata-Flavor: Google"

    Once there is a change to the specified metadata key, the query returns with the new value. In this example, if a request is made to the setInstanceTags method, the request returns with the new values:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/tags?wait_for_change=true" -H "Metadata-Flavor: Google"
    cheese
    lettuce

    You can also perform a wait-for-change request recursively on the contents of a directory:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=true&wait_for_change=true" -H "Metadata-Flavor: Google"

    The metadata server returns the new contents if there is any change:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=true&wait_for_change=true" -H "Metadata-Flavor: Google"
    {"cheese":"lettuce","cookies":"cream"}

    The wait-for-change feature also lets you match ETags with your request and set timeouts.

    Using ETags

    If you submit a simple wait-for-change query, the metadata server returns when anything in the contents of that metadata changes. However, there is an inherent race condition between a metadata update and a wait-for-change request being issued, so it is useful to have a reliable way to know you are getting the latest metadata value. To help with this, you can use the last_etag query parameter, which compares the ETag value you provide with the ETag value saved on the metadata server. If the ETag values match, the wait-for-change request is accepted and waits for the next change. If the ETag values do not match, the contents of the metadata have changed since the last time you retrieved the ETag value, and the metadata server returns immediately with this latest value.

    To grab the current ETag value for a metadata key, make a request to that key and print the headers. In curl, you can do this with the -v flag:

    user@myinst:~$ curl -v "http://metadata.google.internal/computeMetadata/v1/instance/tags" -H "Metadata-Flavor: Google"
    * About to connect() to metadata port 80 (#0)
    *   Trying 169.254.169.254... connected
    * Connected to metadata (169.254.169.254) port 80 (#0)
    > GET /computeMetadata/v1/instance/tags HTTP/1.1
    > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
    > Host: metadata
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Content-Type: application/text
    < ETag: 411261ca6c9e654e
    < Date: Wed, 13 Feb 2013 22:43:45 GMT
    < Server: Metadata Server for VM
    < Content-Length: 26
    < X-XSS-Protection: 1; mode=block
    < X-Frame-Options: SAMEORIGIN
    <
    cheese
    lettuce

    You can also grab the ETag programmatically. The following example uses Python to extract the ETag value from the metadata response:

    import httplib2
    
    def main():
      http = httplib2.Http()
      response, content = http.request('http://metadata.google.internal/computeMetadata/v1/instance/tags', headers={'Metadata-Flavor': 'Google'})
      etag = response['etag']
    
      print etag
    
    if __name__ == '__main__':
      main()

    Use that ETag value in your wait-for-change request:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/tags?wait_for_change=true&last_etag=411261ca6c9e654e" -H "Metadata-Flavor: Google"

    The metadata server matches your specified ETag value against its current value; when the metadata changes, the request returns with the new contents of your metadata key.

    Using 0 As Your ETag Value

    The metadata server will never return 0 as an ETag. You could use that information to simplify some of your code. For example, the following code example sets the initial ETag value as 0, performs a request to the server, which immediately returns with the initial data and the current ETag value, and then uses that information to wait for a change. This method would save you from writing additional lines of code to grab the first initial ETag.

    import httplib2
    
    METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1/'
    
    def main():
      http = httplib2.Http()
      tagsUrl = METADATA_URL + 'instance/tags?wait_for_change=true'
    
      # set the first last_etag as 0
      last_etag = 0
      while True:
        # returns immediately on the initial request because 0 is invalid
        # otherwise wait until something changes
        resp, content = http.request(uri=tagsUrl + '&last_etag=' + str(last_etag), method='GET', body='', headers={'Metadata-Flavor': 'Google'})
    
        if resp.status != 500:
          last_etag = resp['etag']
          print content
    
    if __name__ == '__main__':
      main()

    Setting timeouts

    If you would like your wait-for-change request to time out after a certain number of seconds, you can set the timeout_sec=<timeout-in-seconds> query parameter. The timeout_sec parameter limits the wait time of your request to the number of seconds you specified and once the request reaches that limit, it returns the current contents of the metadata key. Here is an example of a wait-for-change request that is set to time out after 360 seconds:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/tags?wait_for_change=true&timeout_sec=360" -H "Metadata-Flavor: Google"

    When you set the timeout_sec parameter, the request always returns after the specified number of seconds, whether or not the metadata value has actually changed. It is only possible to set an integer value for your timeout.
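
    The timeout_sec parameter combines naturally with the ETag loop shown earlier. The following Python sketch (illustrative only, in the same httplib2 style as the examples above) waits up to 120 seconds per request; if nothing changes within that window, the server returns the current value and the loop simply issues another hanging GET:

    import time
    import httplib2

    TAGS_URL = ('http://metadata.google.internal/computeMetadata/v1/'
                'instance/tags?wait_for_change=true&timeout_sec=120')

    http = httplib2.Http()
    last_etag = 0
    while True:
      # Returns when the tags change or after 120 seconds, whichever comes first.
      resp, content = http.request(
          TAGS_URL + '&last_etag=' + str(last_etag),
          headers={'Metadata-Flavor': 'Google'})
      if resp.status == 200:
        last_etag = resp['etag']
        print content
      else:
        # Back off briefly on temporary errors before retrying.
        time.sleep(1)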

    Status codes

    When you perform a wait-for-change request, the metadata server returns standard HTTP status codes to indicate success or failure. Network conditions may cause the metadata server to fail your request and return an error code; in these cases, you should design your application to be fault-tolerant and able to recognize and handle these errors.

    The possible states that the metadata server returns are:

    Status Description
    HTTP 200 Success! A value was changed, or you reached your specified timeout_sec and the request returned successfully.
    Error 400 Your request was invalid. Please fix your query and retry the request.
    Error 404 The metadata value you specified no longer exists. This error also returns if your metadata is deleted while you are waiting on a change.
    Error 500 There was a temporary server error. Please retry your request.

    Scheduled maintenance notice

    The metadata server provides information about an instance's scheduling options and settings through the scheduling/ directory and the maintenance-event attribute. You can use these attributes to learn about a virtual machine instance's scheduling options and, through the maintenance-event attribute, to be notified when a maintenance event is about to happen.

    The maintenance-event attribute changes its value to indicate the start and end of a maintenance event. The initial value of the attribute is NONE, which indicates that no maintenance event is starting. 60 seconds before a scheduled maintenance event, the maintenance-event value changes from NONE to MIGRATE_ON_HOST_MAINTENANCE. Throughout the duration of the maintenance event, the value remains the same. Once the maintenance event ends, the value returns to NONE.

    Caution: To receive notification of maintenance events through the metadata server, your instance's scheduling option must be set to migrate. The maintenance-event attribute will only update for virtual machine instances set to the migrate option. Virtual machine instances that are set to terminate will experience a power button push and won't be notified of maintenance events through this attribute.

    To query the maintenance-event attribute, make a request like so:

    user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/maintenance-event" -H "Metadata-Flavor: Google"
    NONE

    You can use the maintenance-event attribute with the Waiting for change feature to notify your scripts and applications when a maintenance event is about to start and end. This lets you automate any actions that you might want to run before or after the event. The following Python sample provides an example of how you might implement these two features together.

    Note: During the maintenance event, the metadata server might briefly return a 503 Service Unavailable code. If your application receives a 503 error code, you should retry your request.

    import httplib
    import sys
    import time
    import urllib
    import urllib2
    
    METADATA_URL = 'http://metadata/computeMetadata/v1/'
    
    
    class Error(Exception):
      pass
    
    
    class UnexpectedStatusException(Error):
      pass
    
    
    class UnexpectedMaintenanceEventException(Error):
      pass
    
    
    def WatchMetadata(metadata_key, handler, initial_value=None):
      """Watches for a change in the value of metadata.
    
      Args:
        metadata_key: The key identifying which metadata to watch for changes.
        handler: A callable to call when the metadata value changes. Will be passed
          a single parameter, the new value of the metadata.
        initial_value: The expected initial value for the metadata. The handler will
          not be called on the initial metadata request unless the value differs
          from this.
    
      Raises:
        UnexpectedStatusException: If the http request is unsuccessful for an
          unexpected reason.
      """
      params = {
          'wait_for_change': 'true',
          'last_etag': 0,
          }
    
      value = initial_value
      while True:
        # start a hanging-GET request for maintenance change events.
        url = '{base_url}{key}?{params}'.format(
            base_url=METADATA_URL,
            key=metadata_key,
            params=urllib.urlencode(params)
            )
        req = urllib2.Request(url, headers={'Metadata-Flavor': 'Google'})
    
        try:
          response = urllib2.urlopen(req)
          content = response.read()
          status = response.getcode()
        except urllib2.HTTPError as e:
          content = None
          status = e.code
    
        if status == httplib.SERVICE_UNAVAILABLE:
          time.sleep(1)
          continue
        elif status == httplib.OK:
          # Extract new maintenance-event value and latest etag.
          new_value = content
          headers = response.info()
          params['last_etag'] = headers['ETag']
        else:
          raise UnexpectedStatusException(status)
    
        # If the maintenance value changed, call the appropriate handler.
        if value != new_value:
          value = new_value
          handler(value)
    
    
    def HandleMaintenance(on_maintenance_start, on_maintenance_end):
      """Watches for and responds to maintenance-event status changes.
    
      Args:
        on_maintenance_start: a callable to call before host maintenance starts.
        on_maintenance_end: a callable to call after host maintenance ends.
    
      Raises:
        UnexpectedStatusException: If the http request is unsuccessful for an
          unexpected reason.
        UnexpectedMaintenanceEventException: If the maintenance-event value is not
          either NONE or MIGRATE_ON_HOST_MAINTENANCE.
    
      Note: Instances that are set to TERMINATE_ON_HOST_MAINTENANCE will receive a
      power-button push and will not be notified through this script.
      """
      maintenance_key = 'instance/maintenance-event'
    
      def Handler(event):
        if event == 'MIGRATE_ON_HOST_MAINTENANCE':
          on_maintenance_start()
        elif event == 'NONE':
          on_maintenance_end()
        else:
          raise UnexpectedMaintenanceEventException(event)
    
      WatchMetadata(maintenance_key, Handler, initial_value='NONE')
    
    
    def OnMaintenanceStart():
      # Add commands to perform before maintenance starts here.
      pass
    
    
    def OnMaintenanceEnd():
      # Add commands to perform after maintenance is complete here.
      pass
    
    
    if __name__ == '__main__':
      # Perform actions when maintenance events occur.
      HandleMaintenance(OnMaintenanceStart, OnMaintenanceEnd)
    
      # An example of watching for changes in a different metadata field.
      # Replace 'foo' with an existing custom metadata key of your choice
      #
      # WatchMetadata('instance/attributes/foo',
      #               lambda val: sys.stdout.write('%s\n' % val))

    Transitioning to v1

    The v1 metadata server functions slightly differently than the previous v1beta1 server. Here are some of the changes you need to make for the new metadata server:

    • Update metadata requests to include the Metadata-Flavor: Google header

      The new metadata server requires that all requests provide the Metadata-Flavor: Google header, which indicates that the request was made with the intention of retrieving metadata values. Update your requests to include this new header. For example, a request to the disks/ attribute now looks like the following:

      user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"
    • Update requests that use the X-Forwarded-For header

      These requests are automatically rejected by the server, as it generally indicates that the requests are proxied. Update your requests so they do not contain this header.

    Page: networking

    To manage traffic to and from your instances, use Network and Firewall resources to create rules that dictate how your instances interact with each other and with the Internet.

    Networks and firewalls are global resources.

    Overview

    Google Compute Engine offers a configurable and flexible networking system that enables you to specify permitted connections between the outside world and instances. You can manage your Google Compute Engine network by configuring three objects: the Network object, Firewall objects, and individual instance settings.

    Networks

    A project can contain multiple networks and each network can have multiple instances attached to it. A network object allows you to define a gateway IP and the network range for the instances attached to that network. By default, every project is provided with a default network with preset configurations and firewall rules. You can choose to customize the default network by adding or removing firewall rules, or you can also choose to create new network objects. Generally, most users only need one network, although you can create up to five networks by default.

    A network belongs to only one project and each instance can only belong to one network. All Google Compute Engine networks use the IPv4 protocol. Google Compute Engine currently does not support IPv6. However, Google is a major advocate of IPv6 and it is an important future direction.

    For more information, see the Networks section.

    Firewalls

    By default, all incoming traffic from outside a network is blocked and no packet is allowed into an instance without an appropriate firewall rule. To allow incoming network traffic, you need to set up firewalls to permit these connections. Each firewall represents a single rule that determines what traffic is permitted into the network. It is possible to have many firewall rules and to be as general or specific as you would like. For example, you can create a firewall that allows all traffic through port 80 to all instances, or you can create a rule that only allows traffic from one specific IP or IP range to one specific instance.

    Firewalls only regulate incoming traffic to an instance. Once a connection has been established with an instance, traffic is permitted in both directions over that connection. To prevent an instance from sending outgoing packets, use another technology such as iptables.

    If the instance has an external IP address, it can also send outgoing packets outside the network. By default, gcutil assigns an ephemeral IP address to all new instances, unless otherwise specified. If you are using the API to create new instances, you can explicitly request an external IP address. All traffic through an external IP address, including traffic between instances in the same network, will be billed according to the price sheet.

    Every instance also has a network IP that is addressable only within the network. Within the network, instances can also be addressed by instance name; the network will resolve an instance name into a network address transparently for you.
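
    For example, from inside one instance you can reach another instance in the same network by name, without knowing its network IP in advance. A minimal, illustrative Python sketch (other-instance is a placeholder for a real instance name):

    import socket

    # The network resolves instance names to their internal IP addresses.
    print socket.gethostbyname('other-instance')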

    For more information, see the Firewalls and Instances and Networks sections.

    Blocked Traffic

    Google Compute Engine blocks or restricts traffic through all of the following ports/protocols between the Internet and virtual machines, as well as between two virtual machines when traffic is addressed to their public IP addresses (this also includes load-balanced addresses).

    Note: These restrictions do not apply for traffic between two virtual machines through their private addresses.

    • All outgoing traffic to port 25 (SMTP) is blocked.
    • Most outgoing traffic to port 465 or 587 (SMTP over SSL) is blocked, except for traffic to known Google IP addresses.
    • All traffic that uses a protocol other than TCP, UDP, and ICMP is blocked.

    Routes

    Every Google Compute Engine project has a Routes collection that contains all routes for that project. A route specifies how packets leaving a virtual machine instance should be handled. For example, a route may specify that packets destined for a particular network range should be handled by a gateway virtual machine instance that you configure and operate.

    When you add a route to the Routes collection, you must also specify which instances the route should apply to. Each virtual machine instance in your project then pulls from this centralized Routes collection to create a read-only individual routes table which Google Compute Engine uses to direct outgoing packets for those instances.

    A single route is made up of a route name, a destination range, a next-hop specification, any instance tags, and a priority value. By default, every network has two default routes: a route that directs traffic to the Internet and a route that directs traffic to other instances within the network.

    Routes allow you to implement more advanced networking functions in your virtual machines, such as creating VPNs (virtual private networks), setting up many-to-one NAT (networking address translation), and setting up transparent proxies. If you do not need any advanced routing solutions, the default routes should be sufficient for handling most outgoing traffic.

    For more information, see the Routes Collection section.

    Networks

    Every instance is a member of a single network. A network performs the same function that a router does in a home network: it describes the network range and gateway IP address, handles communication between instances, and serves as a gateway between instances and callers outside the network. A network is constrained to a single project; it cannot span projects. Any communication between instances in different networks, even within the same project, must be through external IP addresses. In the API, a network is represented by the Network object.

    Note: Google Compute Engine networks only support point-to-point IPv4 traffic. Broadcast and multicast are not supported.

    The Network object exposes the following properties, which can also be set using the gcutil addnetwork command.

    • IPv4Range - The network address range for this Network object, in CIDR notation. You can change this range, but you must use one of the standard private network address ranges. For example:
      "IPv4Range": "10.0.0.0/16"
    • gatewayIPv4 - The gateway address for the network. The default value is 10.0.0.1. For example:
      "gatewayIPv4": "10.0.0.1"
    • name - The name must be unique in this project, from 1-63 characters long and match the regular expression: [a-z]([-a-z0-9]*[a-z0-9])? which can be restated as this:
      • The first character must be a lowercase letter,
      • All following characters must be a dash, lowercase letter, or digit,
      • The last character must be a lowercase letter or digit.
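
    If you generate network names programmatically, you can check a candidate name against the same rule before calling gcutil addnetwork. The following short Python sketch is illustrative only:

    import re

    # 1-63 characters matching the pattern described above.
    NAME_RE = re.compile(r'^[a-z]([-a-z0-9]*[a-z0-9])?$')

    def valid_network_name(name):
      return len(name) <= 63 and bool(NAME_RE.match(name))

    print valid_network_name('my-network-1')   # True
    print valid_network_name('1bad-name')      # False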

    All projects include a default Network object named default with the following values:

    • IPv4Range - 10.240.0.0/16
    • gatewayIPv4 - 10.240.0.1
    • name - default

    The default network also has the following firewalls:

    • default-allow-internal - Allows network connections of any protocol and port between instances on the network.
    • default-ssh - Allows TCP connections from any source to any instance on the network, over port 22.

    If you do not explicitly assign your instances to a network, all instances added to the project are automatically assigned this default network in the instance's networkInterfaces.network property. Currently, an instance can be assigned only one network. However, you can create up to five networks per project by default (including the default network) and assign instances to different networks using the gcutil --network=<network-name> flag:

    gcutil addnetwork <network-name> --project=<project-id>
    gcutil addinstance <instance-name> --network=<network-name> --project=<project-id>

    You can also choose to request more quota if you need more than five networks for your project.

    Several firewalls can be applied to the same network to describe which instances can accept what kinds of incoming connection requests. A Firewall object has a reference to the Network object but a Network object does not have a list of Firewall objects.

    Example

    The following example requests information about the network named default. It uses the --print_json flag to show the complete JSON description of the Network resource.

    $ gcutil --print_json getnetwork default --project=<project-id>
    {
      "IPv4Range": "10.240.0.0/16",
      "description": "Default network for kermit",
      "gatewayIPv4": "10.240.0.1",
      "id": "18446744031643681077",
      "kind": "cloud#network",
      "name": "projects/kermit/networks/default",
      "selfLink": "https://www.googleapis.com/compute/v1/projects/kermit/global/networks/default"
    }

    Useful gcutil commands:

    • gcutil addnetwork
    • gcutil listnetworks
    • gcutil getnetwork
    • gcutil deletenetwork

    Note that there is no way to modify an existing network using gcutil.

    Adding a Network

    Every project starts with one network called default. The default network is automatically created for you, and you do not need to take any action to initialize it. For many Google Compute Engine users, the default network is all that is needed. The default network functions just like any network you would create, so you can add or remove firewalls from it or delete it altogether.

    However, you can create up to four additional networks for a single project to help manage your instances. Creating multiple networks for a project can help you isolate sets of instances, so that instances in different networks can only communicate with each other through external IP addresses. For example, you might want to isolate instances in your testing and production systems from one another. Establishing those instances on different networks is an effective way of achieving this protection.

    When creating instances in multiple networks, note that instance names are unique across a project. You cannot use the same instance name for two instances in the same project, even if they are in different networks. To add a new network to a project, use the gcutil addnetwork command:

    gcutil addnetwork <network-name> --project=<project-id>

    For example, to add a network called production, execute the following:

    gcutil addnetwork production --project=<project-id>

    To list the networks for your project, execute the following:

    $ gcutil listnetworks --project=<project-id>
    +------------+---------------------------------+---------------+------------+
    |    name    |           description           |   addresses   |  gateway   |
    +------------+---------------------------------+---------------+------------+
    | default    | Default network for the project | 10.240.0.0/16 | 10.240.0.1 |
    | production |                                 | 10.0.0.0/8    | 10.0.0.1   |
    +------------+---------------------------------+---------------+------------+
    

    When adding a network, you can optionally specify the address range and/or gateway address for the network with the gcutil addnetwork command. To see a full list of values that you can set, run gcutil help addnetwork. If you don't specify a gateway address or range, gcutil uses the default IPv4 gateway (10.0.0.1) and IPv4 range (10.0.0.0/16).

    Each network has its own set of firewalls controlling connectivity. When you create a new network, there are no firewalls permitting connections of any type. After creating a new network, you should create firewalls for it. To create a firewall for the testing network that allows all traffic between the instances on the network, execute the following:

    gcutil addfirewall internal --network=testing --allowed_ip_sources=10.0.0.0/16 --allowed=tcp,udp,icmp --project=<project-id>

    Note that the address range supplied to the --allowed_ip_sources flag matches the network's address range, as exposed by gcutil listnetworks above.

    Let's also create a firewall that allows HTTP traffic to all instances on the testing network. Execute the following:

    gcutil addfirewall web --network=testing --allowed="tcp:http" --project=<project-id>

    For more information on firewalls, see the Firewalls section.

    In order for your network to be useful, you need to add instances to it. To add an instance to a network, use the --network flag during instance creation. For example, to add an instance named test-instance to the testing network, execute the following:

    gcutil addinstance test-instance --network=testing --project=<project-id>

    Note: Many gcutil commands take an optional --network flag that has a default value of default. When using multiple networks per project, be sure to specify the network for these commands.

    Deleting a Network

    Before you can delete a network, you must first delete all firewalls attached to that network, all instances that use that network, and all manually-created routes that apply to the network.

    If you try to delete a network that still has firewalls, manually-created routes, and/or instances attached, Google Compute Engine returns a RESOURCE_IN_USE_BY_ANOTHER_RESOURCE error.

    Once you have deleted all firewalls and instances attached to the network, run the following command to delete the network:

    gcutil deletenetwork <network-name> --project=<project-id>
    <network-name>
    The name or names of the networks to delete.
    <project-id>
    The name of the project that the networks belong to.

    Setting Up a Network Proxy

    You can design your network so that only one instance has external access, and all other instances in the network use that instance as a proxy server to the outside world. This is useful if you want to control access into or out of your network, or reduce the cost of paying for multiple external IP addresses.

    This particular example discusses how to set up a network proxy on Google Compute Engine instances that use a Debian image. It uses a gateway instance as a Squid proxy server but this is only one way of setting up a proxy server.

    To set up a Squid proxy server:

    1. Set up one instance with an external (static or ephemeral) IP address. For this example we'll call this instance gateway.
    2. Set up one or more instances without external IP addresses by specifying addinstance ... --external_ip_address=none. For this example, we'll call this instance hidden.
    3. Learn how to ssh from one instance to another, because you will not be able to ssh directly into your internal-only instances.
    4. Add a firewall to allow traffic on tcp:3128:
      gcutil addfirewall <firewall-name> --network=<network-name> --allowed="tcp:3128" --project=<project-id>
    5. Install Squid on gateway, and configure it to allow access from any machines on the shared network. This assumes that gateway and hidden are both connected to the default network, which enables them to connect:
      user@gateway:~$ sudo apt-get install squid3
      
      # Enable any machine on the local network to use the Squid3 server
      sudo sed -i 's:#\(http_access allow localnet\):\1:' /etc/squid3/squid.conf
      sudo sed -i 's:#\(http_access deny to_localhost\):\1:' /etc/squid3/squid.conf
      sudo sed -i 's:#\(acl localnet src 10.0.0.0/8.*\):\1:' /etc/squid3/squid.conf
      sudo sed -i 's:#\(acl localnet src 172.16.0.0/12.*\):\1:' /etc/squid3/squid.conf
      sudo sed -i 's:#\(acl localnet src 192.168.0.0/16.*\):\1:' /etc/squid3/squid.conf
      sudo sed -i 's:#\(acl localnet src fc00\:\:/7.*\):\1:' /etc/squid3/squid.conf
      sudo sed -i 's:#\(acl localnet src fe80\:\:/10.*\):\1:' /etc/squid3/squid.conf
      
      # Prevent proxy access to metadata server
      user@gateway:~$ sudo tee -a /etc/squid3/squid.conf <<EOF
      acl to_metadata dst 169.254.169.254
      http_access deny to_metadata
      EOF
      
      # Start Squid
      user@gateway:~$ sudo service squid3 start
    6. Configure hidden to use gateway as its proxy. ssh into hidden and define its proxy URL addresses to point to gateway on port 3128 (the default Squid configuration) as shown here:
      user@gateway:~$ ssh hidden
      user@hidden:~$ sudo -s
      echo "export http_proxy=\"http://gateway.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
      echo "export https_proxy=\"http://gateway.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
      echo "export ftp_proxy=\"http://gateway.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
      echo "export no_proxy=169.254.169.254,metadata,metadata.google.internal" >> /etc/profile.d/proxy.sh
      
      # Update sudoers to pass these env variables through
      cp /etc/sudoers /tmp/sudoers.new
      chmod 640 /tmp/sudoers.new
      echo "Defaults env_keep += \"ftp_proxy http_proxy https_proxy no_proxy"\" >>/tmp/sudoers.new
      chmod 440 /tmp/sudoers.new
      visudo -c -f /tmp/sudoers.new && cp /tmp/sudoers.new /etc/sudoers
      

      Note: VM instances that use a proxy server won't be able to access the metadata server by default, as all requests to the metadata server will be forwarded to the proxy. A Python sketch showing how a script can bypass the proxy for metadata requests appears after these steps.

    7. Exit sudo, load the variables, and run apt-get on hidden. It should now work using gateway as a proxy. If gateway were not serving as a proxy, apt-get would not work because hidden has no direct connection to the Internet.
      root@hidden:~# exit
      user@hidden:~$ source ~/.profile
      user@hidden:~$ sudo apt-get update
      ....
      

    Setting Up an External HTTP Connection

    The default network does not include a firewall that enables HTTP connections to your instances. However, it is fairly simple to add a firewall to your network that allows HTTP connections. Note that an instance must have an external IP address before it can receive traffic from outside its network.

    The following command creates a firewall that allows incoming http requests from anywhere, to any instance connected to this network. If you want to restrict which instances can accept these connections, you can assign specific targets or target tags (a tagged variant follows the example below).

    gcutil --project=<project-id> addfirewall <firewall-name> --description="Incoming http allowed." --allowed="tcp:http"

    The gcutil tool will automatically specify ports for well-known protocols; for example, in the above firewall, port 80 will be added to the firewall rule.

    Example

    gcutil --project=myproject addfirewall samplehttp --description="Incoming http allowed." --allowed="tcp:http"
    +---------------+------------------------+
    | name          | samplehttp             |
    | description   | Incoming http allowed. |
    | network       | default                |
    |               |                        |
    | rule          |                        |
    |   source      | 0.0.0.0/0              |
    |   source_tags |                        |
    |   target      |                        |
    |   target_tags |                        |
    |   protocol    | tcp                    |
    |   ports       | 80                     |
    +---------------+------------------------+
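
    If you want the same firewall to apply only to instances that carry a particular tag rather than to every instance on the network, you can add the --target_tags flag (described in detail later on this page). This is only a sketch; samplehttp-web and the web tag are placeholder names:

    gcutil --project=myproject addfirewall samplehttp-web --description="Incoming http to tagged web servers." --allowed="tcp:http" --target_tags=web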
    

    Sending E-mail Through the Mail Gateway

    This topic has moved.

    Advanced Networking Details

    This section provides some low-level details not covered in the previous sections. You do not need to read this for typical usage, but it provides more insight about how networking works in Google Compute Engine.

    The following image describes a similar network to the overview diagram above, but with some additional details:

    A more detailed diagram of the Google Compute Engine network

    Network

    • Stores a lookup table that tracks every active connection. When a packet arrives that is part of an active connection, it will be sent to the destination without consulting any other firewall rules.
    • Stores a lookup table that associates external IP addresses with instances. All packets sent to an external IP are routed to the network, which looks up the internal IP corresponding to that address and forwards the packet to the instance.
    • Performs MAC address lookups (proxy ARP) for a given IP address.
    • Routes packets between instances on the same network.
    • Routes packets externally (billing involved).

    Firewalls

    • Google Compute Engine uses firewalls that can block packets into an instance; they cannot block outgoing packets. If you want to block outgoing packets from an instance, you must configure another technology, such as iptables, on your instances.
    • Every instance has a "hidden" firewall rule saying that once a connection has been made, all calls and replies will be permitted over that source+target+port+protocol connection until it expires after about 10 minutes of inactivity. However, if a reply is sent to a different port, for example if an FTP request asks for a response on another port, the response is not automatically allowed; there must be a firewall rule permitting a connection over the new port.

    Instances

    • Each instance's metadata server also acts as a DNS lookup service (DNS resolver) for that instance. DNS lookups are performed for instance names. The metadata server itself stores all DNS information for the local network, and queries Google's public DNS server for any addresses outside of the local network.
    • An instance is not aware of any external IP address assigned to it; the Network stores a lookup table that lists the network IP of every instance with an external IP address, and the corresponding external IP address.

    Who Handles What

    Under the covers, different networking features are handled by different parts of the Google Compute Engine system. Some of these are standard networking features that are well documented, and some of them are specific to Google Compute Engine. Some features you can configure, and some you cannot. Google Compute Engine uses Linux's VIRTIO network module to model Ethernet card and router functionality, but higher levels of the networking stack, such as ARP lookups, are handled using standard networking software.

    • ARP lookup

      The instance kernel issues ARP requests; the Network issues ARP replies. The mapping between MAC addresses and IP addresses is handled by the instance kernel.

    • MAC lookup table, IP lookup table, active connection table

      These tables are hosted on the underlying network and cannot be inspected or configured.

    • DNS server

      Each instance's metadata server acts as a DNS server. It stores locally the DNS entries for all network addresses in the local network, and calls into Google's public DNS server for entries outside the network. You cannot configure this DNS server; however, you can set up your own DNS server if you like and configure your instances to use that server instead by editing the /etc/resolv.conf file (see the sketch after this list).

    • Packet handling between the network and the outside

      Packets coming into or out of the network are handled by network code that examines the packet against firewalls, against the IP lookup table, and against the active connections table. The network also performs NAT on packets coming into and out of the network.

    • Packets received by an instance

      These packets are received and turned into a stream by the instance kernel in standard fashion.

    • Packets sent by an instance

      Packets are sent using the instance kernel to Google's network implementation.
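
    As noted under "DNS server" above, you can point an instance at your own resolver by editing /etc/resolv.conf. A minimal sketch, assuming 10.240.0.5 is a placeholder for the network IP of a DNS server you run; note that the DHCP client may rewrite this file, so you may need to make the change persistent for your distribution:

      # /etc/resolv.conf
      nameserver 10.240.0.5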

    Detailed Connection Walkthroughs

    Here are more details about what happens when an instance makes a network call.

    An instance makes a call:

    1. If the target address is an instance name or a URL such as www.google.com, the instance calls the DNS service on its metadata server and gets back the equivalent IP address. You can configure your instance to consult another DNS service, although then you will not be able to resolve instance names.
    2. The destination IP address is examined against the network's IP address range, which every instance knows.
      1. If the IP address is outside the network address range:
        1. The instance sends the packet to the network's gateway MAC address with the destination set to the packet's final destination. The instance might need to make an ARP request to resolve the gateway's MAC address.
        2. The network receives the packet, and rewrites the IP header to declare the instance's external IP address as the source. If the instance has no external IP address, the call is not allowed, and the network drops the packet without informing the sender.
        3. The network records the outgoing packet, and adds the source and destination to the active connections table.
        4. The network sends the packet on to its destination.
        5. The destination gets the packet and responds if it chooses.
        6. The network receives the response, consults the active connections table, notes that this is an active connection, and allows it. The network consults its network/external IP lookup table, replaces the instance's external IP address with the equivalent network address, and sends the packet to the source instance.
        7. The instance receives the packet.
      2. If the destination IP address is within the network address range:
        1. The instance makes a standard ARP request to the network to learn the equivalent MAC address for this network IP, unless it has cached the MAC address for this IP as part of a previous request.
        2. The instance sends the packet to the network, with the target MAC and IP address set to the destination instance.
        3. The network checks for a firewall rule that permits a connection to this target: if so, it sends it on; if not, it simply drops the packet without informing the sender.
        4. If the packet is allowed, the network records or updates the status of this connection in a table of active connections.
        5. The target instance gets the packet, and optionally sends a reply to the network, addressed to the source instance. It performs a standard ARP request similar to that done by the source (if the MAC address is not cached).
        6. The network consults the active connection table, sees that the reply is part of an active connection, and routes it on to the destination. If the reply had been over a different port – for example, if this had been an FTP request that specified a different reply port – the reply would not match a known connection and the network would again examine the firewalls for a rule allowing the connection, and if a firewall rule exists, the network logs the connection and sends the reply to the source.
        7. The source instance receives the reply.

    An external instance or computer calls an instance:

    1. The external caller sends a packet to an instance's external IP address, which is owned by the network.
    2. The network compares the packet against the active connections table to see whether this is an existing connection:
      1. If it is not an existing connection, then the network looks for a firewall rule to allow the connection.
      2. If there is no firewall, the network drops the packet without informing the sender.
    3. If there is an existing connection or valid firewall, the network examines its lookup table and replaces the external IP with the corresponding network IP in the packet, logs the incoming packet in the active connections table and sends the packet to the target instance.
    4. The instance receives the packet and responds as described in step 2.1 above (sending a packet outside the network range).
    5. The network receives the reply, finds the matching incoming request in the active connections table, and allows the packet through. Before sending, it modifies the source IP address by replacing the instance's network IP with the corresponding external IP from its lookup table.

    Firewalls

    Note: If you are experiencing issues receiving traffic to your virtual machine instance, even after you have configured your Firewall rules, please check that your operating system firewall is configured to permit the traffic.

    A firewall is a rule that defines what incoming connections are accepted by which instances. Each firewall contains one rule, which specifies a permitted incoming connection request, defined by source, destination, ports, and protocol. When a request is sent to an instance, whether internally or from another network or the Internet, Google Compute Engine allows the request if any firewall in the network permits the connection.

    Firewalls do not restrict outgoing packets between instances within the same network. Any instance can send packets to any other instance in its network. However, whether that packet is accepted is determined by the firewalls associated with the target instance. To regulate outgoing requests, use another system, such as iptables.
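
    For example, a minimal iptables sketch, run as root on the sending instance, that drops outgoing traffic to one destination range; the 198.51.100.0/24 range is only an illustration:

    user@instance:~$ sudo iptables -A OUTPUT -d 198.51.100.0/24 -j DROP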

    Only instances with an external IP address can send packets outside the network. Only instances with an external IP address and a permitting firewall rule can be addressed from outside the network (or you could route packets through an externally-addressable proxy server).

    You cannot share firewalls between projects. You cannot modify an existing firewall or overwrite a firewall with the same name; to change a firewall, you must delete the existing firewall and then add a new one.

    A Firewall resource exposes the following properties, which can also be set using the gcutil addfirewall command. Each rule is composed of several elements, most of which must be defined before a connection is allowed:

    • network - [Required] The network that this firewall is assigned to. Each firewall can be associated with one and only one network. You must provide a network when making an API call to create a firewall. However, if you use gcutil and do not specify a network, the firewall will automatically be assigned to the default network.
    • sourceRanges - [Required if sourceTags is not specified] Identifies permitted callers by a list of IP address blocks expressed in CIDR notation. If not specified, the default is 0.0.0.0/0, which means that all incoming connections will be accepted from inside or outside the network. For example:
      "sourceRanges": [ "198.51.100.0/24", "203.0.113.0/25" ]
    • sourceTags - [Required if sourceRanges is not specified] If the source is within this network and has one of the specified tags, the connection will be accepted. For example:
      "sourceTags": [ "management" ]
    • targetTags [Optional] - A list of instance tags that specify which instances on the network can accept requests from the specified sources. If not specified, this firewall rule applies to all instances on this network. For example:
      "targetTags": [ "web", "database" ]
    • allowed - [Required] - An array of allowed connections permitted by this firewall rule. Each object contains an IP protocol and an optional range of ports (for TCP and UDP traffic) that should be allowed to the instances specified by targetTags.
      • IPProtocol - [Required] The protocols allowed over this connection. This can be the (case-sensitive) string values "tcp", "udp", "icmp", or any IP protocol number.
      • ports - [Optional] An array of target ports allowed for this connection. This is only applicable for TCP and UDP connections. Each value is a string that is either a single port or a port range. If not specified, all ports are allowed. Example: ["80", "160", "300-500"]
      • For example:

         "allowed": [
            {
              "IPProtocol": "tcp",
              "ports": [ "22" ],
            },
            {
              "IPProtocol": "17",
              "ports": [ "161" ],
            }
          ]

        Note: Certain types of traffic are not allowed between virtual machines and the Internet, regardless of the firewall settings. Read the documentation on blocked traffic for more information.

      • name - [Required] The firewall name. The name must be unique in this project, from 1-63 characters long and match the regular expression: [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.

      Google Compute Engine provides the following standard firewalls in every project, assigned to the default network. You can add or delete your own firewalls as desired (remember that an existing firewall cannot be modified in place).

      • default-allow-internal - Allows network connections of any protocol and port between any two instances.
      • default-ssh - Allows TCP connections from any source to any instance on the network, over port 22.

      Example

      Here is the description of the default-ssh firewall. The source CIDR of 0.0.0.0/0 means that the connection can come from anywhere, inside or outside of the network.

      gcutil --print_json getfirewall default-ssh --project=<project-id>
      {
        "allowed": [
          {
            "IPProtocol": "tcp",
            "ports": [
              "22"
            ]
          }
        ],
        "creationTimestamp": "1234-56-78T09:12:34.567",
        "description": "SSH allowed from anywhere",
        "id": "12AAA5704BBBB656CCCC",
        "kind": "compute#firewall",
        "name": "default-ssh",
        "network": "https://www.googleapis.com/compute/v1/projects/myproject/global/networks/default",
        "selfLink": "https://www.googleapis.com/compute/v1/projects/myproject/global/firewalls/default-ssh",
        "sourceRanges": [
          "0.0.0.0/0"
        ]
      }

      Useful gcutil commands:

      • gcutil listfirewalls
      • gcutil getfirewall
      • gcutil addfirewall
      • gcutil deletefirewall
      • There is no way to modify an existing firewall using gcutil.
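
      Because a firewall cannot be modified in place, the usual workaround is to delete it and re-add it with the new settings. A hedged sketch, where allowhttp and the web tag are placeholder names:

      gcutil --project=<project-id> deletefirewall allowhttp
      gcutil --project=<project-id> addfirewall allowhttp --description="Incoming http allowed." --allowed="tcp:http" --target_tags=web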

      Adding a Firewall

      To add a firewall, create a Firewall object describing the permitted source, target, protocol, and network. Any instance connected to the specified network will have that firewall applied. Here is the syntax in gcutil:

      gcutil addfirewall <firewall-name> <flags>
      firewall-name
      [Required] A name for the firewall that is unique within this project. The name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
      --allowed
      [Required] A combination of the protocol and the port (or port range) permitted in this connection. gcutil will try to guess the correct protocol or port, if only one or the other is given for a well-known protocol. Examples:
      Setting Explanation
      tcp:80 or tcp:http Allow incoming TCP traffic on port 80, which is used for HTTP connections.
      :22 or :ssh Allow incoming TCP and UDP traffic on port 22. This will also allow incoming SSH connections on port 22.
      :443 or :https Allow incoming TCP and UDP traffic on port 443, used by HTTPS connections.
      1 or icmp Allow all incoming ICMP traffic.
      --allowed_ip_sources
      [Optional] A list of IP sources that are allowed to talk to instances within the network, through the connections described by the --allowed flag. The default is 0.0.0.0/0, which means any source, internal or external, is allowed. To limit sources to the network, use the network range (10.240.0.0/16 is the default network range). Network addresses can only be used to identify sources within the same network.
      --allowed_tag_sources
      [Optional] A comma-delimited list of instance tags that are allowed to talk to instances in the network, using the protocols and ports described by the --allowed flag. Instances must be within the current network, and tags can only identify instances within the same network as the firewall.
      --network
      [Optional] The network this firewall is applied to. The default is the network named default.
      --target_tags
      [Optional] A list of instance tags that this firewall is applied to. Instances must be in the same network as the firewall.
      Other flags
      To see all the flags that you can set with addfirewall, call gcutil help addfirewall

      Example

      The following example adds a firewall that supports HTTP connections over port 80 from any source to any instance in the network that exposes an external IP address.

      gcutil addfirewall allowhttp --description="Incoming http allowed." --allowed="tcp:http" --print_json --project=<project-id>
      {
        "allowed": [
          {
            "IPProtocol": "tcp",
            "ports": [
              "80"
            ]
          }
        ],
        "creationTimestamp": "1234-56-78T09:12:34.567",
        "description": "Incoming http allowed.",
        "id": "13AAA70BBBB5639CCCC9",
        "kind": "compute#firewall",
        "name": "allowhttp",
        "network": "https://www.googleapis.com/compute/v1/projects/myproject/global/networks/default",
        "selfLink": "https://www.googleapis.com/compute/v1/projects/myproject/global/firewalls/allowhttp",
        "sourceRanges": [
          "0.0.0.0/0"
        ]
      }
      

      Deleting a Firewall

      To delete a firewall, run the following command:

      gcutil deletefirewall <firewall-name> --project=<project-id>
      <firewall-name>
      The name or names of the firewalls to delete.
      <project-id>
      The name of the project the firewalls belong to.

      Routes Collection

      Every project has a Routes collection that determines how packets leaving an instance should be handled. The Routes collection contains all routes for that project, and each route is a single rule that specifies which network object should handle a packet leaving a virtual machine instance. For example, a route may specify that packets leaving any instance in the default network whose destination matches 0.0.0.0/0 should be routed to another virtual machine instance first, before being sent on to their final destination.

      By default, every network has two default routes: a route that directs traffic to the Internet and a route that directs traffic to other instances within the network. These default routes are generally sufficient for most projects.

      Routes allow you to implement more advanced networking functions in your virtual machines, such as creating VPNs (virtual private networks), setting up many-to-one NAT (networking address translation), and setting up transparent proxies. If you do not need any advanced routing solutions, the default routes should be sufficient for handling most outgoing traffic.

      Instance Routing Tables

      Each route in the Routes collection may apply to one or more instances. A route applies to an instance if the network and instance tags match. If the network matches and there are no instance tags specified, the route applies to all instances in that network. Google Compute Engine then uses the Routes collection to create individual read-only routing tables for each instance.

      A good way to visualize this is to imagine a massively scalable virtual router at the core of each network. Every virtual machine instance in the network is directly connected to this router, and all packets leaving a virtual machine instance are first handled at this layer before they are forwarded on to their next hop. The virtual network router selects the next hop for a packet by consulting the routing table for that instance. The diagram below describes this relationship, where the green boxes are virtual machine instances, the router is at the center, and the individual routing tables are indicated by the tan boxes.

      The Routes collection for the network in the diagram might look like this:

      +--------------------------------+------------------+-----------+-------------------+-------------------------------+-------------+-----------------------------------+------------------+----------+
      |              name              |     network      |    tags   | destination-range |       next-hop-instance       | next-hop-ip |         next-hop-gateway          | next-hop-network | priority |
      +--------------------------------+------------------+-----------+-------------------+-------------------------------+-------------+-----------------------------------+------------------+----------+
      | default-route-68079898SAMPLEe7 | networks/default |           | 0.0.0.0/0         |                               |             | gateways/default-internet-gateway |                  | 1000     |
      | default-route-78SAMPLEd2bc5762 | networks/default |           | 10.100.0.0/16     |                               |             |                                   | networks/default | 1000     |
      | vpngateway                     | networks/default |    vpn    | 172.12.0.0/16     | <zone>/instances/vpngateway   |             |                                   |                  | 1000     |
      +--------------------------------+------------------+-----------+-------------------+-------------------------------+-------------+-----------------------------------+------------------+----------+

      Any instance with the vpn tag automatically has a routing table that contains the vpngateway route and the two default routes. In the diagram, both vm1 and vm2 have these routes in their routing table, so all outgoing traffic destined for the 172.12.0.0/16 external IP range is handled by the vpngateway instance.

      An instance's routing table is a read-only entity. You cannot directly edit these tables. If you want to add, remove, or edit a route, you must do so through the Routes collection.
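
      For example, the vpngateway route in the table above could be created with gcutil addroute (described under Adding a Route below). This is only a sketch; myproject and <zone-name> are placeholders:

      gcutil --project=myproject addroute vpngateway 172.12.0.0/16 \
             --network=default \
             --next_hop_instance=https://www.googleapis.com/compute/v1/projects/myproject/zones/<zone-name>/instances/vpngateway \
             --tags=vpn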

      Individual Routes

      A single route is made up of the following:

      • name - [Required] The user-friendly name for this route. For example, internetroute for a route that allows access to the Internet.
      • network - [Required] The name of the network this route applies to. For example, the default network.
      • destRange - [Required] The destination IP range that this route applies to. If the destination IP of a packet falls in this range, it matches this route. For example, 0.0.0.0/0. See the Route Selection section to understand how Google Compute Engine uses all matching routes to select a single next hop for a packet.
      • instanceTags - [Required] The list of instance tags this route applies to. If this is empty, this route applies to all instances within the specified network. In the API, this is a required field. In gcutil, this is an optional field and gcutil assumes an empty list if this field is not specified.
      • Exactly one of the following next hop specifications:
        • nextHopInstance - The fully-qualified URL of the instance that should handle matching packets. The instance must already exist and have IP forwarding enabled. For example:
          https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone-name>/instances/<instance-name>

          If a next hop instance crashes and is restarted by the system, or if you delete an instance and recreate it with the same name, in the same zone, Google Compute Engine will continue to route matching packets to the new instance.

        • nextHopIp - The network IP address of an instance that should handle matching packets. The IP address must lie within the address space of the network. For example, if your network is 10.240.0.0/16, you cannot specify nextHopIp=1.1.1.1. The instance must already exist and have IP forwarding enabled. If the next hop instance crashes and is later restarted by the system with the same IP address or if the user deletes the instance and recreates it with the same IP address, Google Compute Engine continues routing matching packets to the new instance.
        • nextHopNetwork - [Read-Only] The URL of the local network handling matching packets. This is only available to the default local route. You cannot manually set this field.
        • nextHopGateway - The URL of a gateway that should handle matching packets. Currently, there is only the Internet gateway available:
          /projects/<project-id>/global/gateways/default-internet-gateway
      • priority - [Required] The priority of this route. Priority is used to break ties in the case where there is more than one matching route of maximum length. A lower value is higher priority; a priority of 100 is higher than 200. For example, the following routes are tied because the destination range is the same, and they are in the same network:
        +-----------------+------------------+-------------+-------------------+---------------------------------------+-------------+-----------------------------------+------------------+----------+
        |       name      |     network      |    tags     | destination-range |            next-hop-instance          | next-hop-ip |         next-hop-gateway          | next-hop-network | priority |
        +-----------------+------------------+-------------+-------------------+---------------------------------------+-------------+-----------------------------------+------------------+----------+
        | vpnroute        | networks/default |             | 192.168.0.0/16    | <zone>/instances/vpninstance          |             |                                   |                  | 1000     |
        | vpnroute-backup | networks/default |             | 192.168.0.0/16    | <zone>/instances/vpninstance-backup   |             |                                   |                  | 2000     |
        +-----------------+------------------+-------------+-------------------+---------------------------------------+-------------+-----------------------------------+------------------+----------+

        Under this configuration, VPN traffic would normally be handled by vpninstance, but would fall back to vpninstance-backup if vpnroute is deleted.

        In the API, this is a required field. In gcutil, this is an optional field and gcutil assumes default priority of 1000 if the field is not specified.
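
        For example, the two tied routes above could be created with explicit priorities using gcutil addroute. This is only a sketch; myproject, <zone-name>, and the instance names are placeholders:

        gcutil --project=myproject addroute vpnroute 192.168.0.0/16 --network=default \
               --next_hop_instance=https://www.googleapis.com/compute/v1/projects/myproject/zones/<zone-name>/instances/vpninstance \
               --priority=1000
        gcutil --project=myproject addroute vpnroute-backup 192.168.0.0/16 --network=default \
               --next_hop_instance=https://www.googleapis.com/compute/v1/projects/myproject/zones/<zone-name>/instances/vpninstance-backup \
               --priority=2000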

      Default Routes

      By default, every network you create comes with two routes in the Routes collection:

      • A local route that handles packets destined within the network.
      • An Internet route that sends all other packets to the Internet gateway.

      These default routes are assigned unique names generated by the server and may look similar to the following:

      +--------------------------------+------------------+----------+-------------------+-----------------------+-------------+-----------------------------------+------------------+----------+
      |              name              |     network      |   tags   | destination-range |   next-hop-instance   | next-hop-ip |         next-hop-gateway          | next-hop-network | priority |
      +--------------------------------+------------------+----------+-------------------+-----------------------+-------------+-----------------------------------+------------------+----------+
      | default-route-68079898SAMPLEe7 | networks/default |          | 0.0.0.0/0         |                       |             | gateways/default-internet-gateway |                  | 1000     |
      | default-route-78SAMPLEd2bc5762 | networks/default |          | 10.240.0.0/16     |                       |             |                                   | networks/default | 1000     |
      +--------------------------------+------------------+----------+-------------------+-----------------------+-------------+-----------------------------------+------------------+----------+

      The Internet route is identifiable by the next hop Internet gateway, while the local route is identifiable by the next hop network field. You can choose to edit or delete the default Internet route for a network, but you cannot delete the local route. It is also not possible to create routes that override the local route. For example, if your network address space is 10.240.0.0/16, you can create a route with a destination range of 10.0.0.0/8 because it is less specific than the local route and therefore doesn't override it. However, you cannot create routes that are as specific or more specific than the local route, such as a route with a destination range of 10.240.128.0/17.

      If you have multiple networks in your project, there will also be multiple sets of default routes:

      +--------------------------------+--------------------+---------+-------------------+---------------------------+-------------+-----------------------------------+--------------------+----------+
      |              name              |      network       |   tags  | destination-range |     next-hop-instance     | next-hop-ip |         next-hop-gateway          |  next-hop-network  | priority |
      +--------------------------------+--------------------+---------+-------------------+---------------------------+-------------+-----------------------------------+--------------------+----------+
      | default-route-5a37SAMPLE3a19b6 | networks/mynetwork |         | 10.0.0.0/8        |                           |             |                                   | networks/mynetwork | 1000     |
      | default-route-68079898SAMPLEe7 | networks/default   |         | 0.0.0.0/0         |                           |             | gateways/default-internet-gateway |                    | 1000     |
      | default-route-78SAMPLEd2bc5762 | networks/default   |         | 10.240.0.0/16     |                           |             |                                   | networks/default   | 1000     |
      | default-route-9cebSAMPLE2dd35c | networks/mynetwork |         | 0.0.0.0/0         |                           |             | gateways/default-internet-gateway |                    | 1000     |
      +--------------------------------+--------------------+---------+-------------------+---------------------------+-------------+-----------------------------------+--------------------+----------+
      

      The example above shows two sets of default routes. Each set applies to a different network. For example, the first two routes apply to the custom network mynetwork, and the last two routes apply to the default network. The Routes collection lists all routes that apply to your project, so it is possible to see multiple routes with similar criteria, such as the same destination range and next hop, that apply to different networks.

      Route Selection

      When an outgoing packet leaves a virtual machine instance, Google Compute Engine uses the following steps to decide which route to use and where to forward the packet:

      1. Google Compute Engine discards all but the most specific routes that match the packet's destination address. For example, if the packet's destination address is 10.1.1.1 and there is a route for 10.1.1.0/24 and a route for 10.0.0.0/8, Google Compute Engine selects the 10.1.1.0/24 route because it is more specific.
      2. If multiple matching routes remain, Google Compute Engine discards all but the routes with the smallest priority value (the smallest priority value indicates the highest priority).
      3. Google Compute Engine computes a hash value of the IP protocol field, the source and destination IP addresses, and the source and destination port (if applicable). Google Compute Engine uses this hash value to select a single next hop from the remaining ties.
      4. If a next hop is found, Google Compute Engine forwards the packet. If a next hop is not found, the packet is dropped and Google Compute Engine replies with an ICMP destination or network unreachable error.

      It is important to note that Google Compute Engine does not consider network distance when selecting a next hop. The next hop instance or gateway could be in a different zone than the instance sending the packet, so you should engineer your routing tables to control locality. For example, you can use instance tags to direct packets from instances in different zones to prefer a local transparent proxy or VPN gateway. By tagging instances by zone, you can ensure that packets leaving an instance in one zone are only sent to a next hop in the same zone.

      Listing Routes

      To see a list of routes for a project, run gcutil listroutes:

      gcutil --project=<project-id> listroutes [--filter=<expression>] [--sort_by=<sort-criteria>]

      Important flags and parameters:

      --project=<project-id>
      [Required] The project ID for which you want to list routes.
      --filter=<expression>
      [Optional] Filter your list results.
      --sort_by=<sort-criteria>
      [Optional] Sort output results by the given field name. Field names starting with a "-" return results in descending order. Valid values include:
      • name or -name
      • network or -network
      • tags or -tags
      • destination-range or -destination-range
      • next-hop-instance or -next-hop-instance
      • next-hop-ip or -next-hop-ip
      • next-hop-gateway or -next-hop-gateway
      • next-hop-network or -next-hop-network
      • priority or -priority

      For example:

      $ gcutil --project=myproject listroutes
      +--------------------------------+--------------------+---------+-------------------+---------------------------+-------------+-----------------------------------+--------------------+----------+
      |              name              |      network       |   tags  | destination-range |     next-hop-instance     | next-hop-ip |         next-hop-gateway          |  next-hop-network  | priority |
      +--------------------------------+--------------------+---------+-------------------+---------------------------+-------------+-----------------------------------+--------------------+----------+
      | default-route-5a37SAMPLE3a19b6 | networks/mynetwork |         | 10.0.0.0/8        |                           |             |                                   | networks/mynetwork | 1000     |
      | default-route-68079898SAMPLEe7 | networks/default   |         | 0.0.0.0/0         |                           |             | gateways/default-internet-gateway |                    | 1000     |
      | default-route-78SAMPLEd2bc5762 | networks/default   |         | 10.240.0.0/16     |                           |             |                                   | networks/default   | 1000     |
      | default-route-9cebSAMPLE2dd35c | networks/mynetwork |         | 0.0.0.0/0         |                           |             | gateways/default-internet-gateway |                    | 1000     |
      +--------------------------------+--------------------+---------+-------------------+---------------------------+-------------+-----------------------------------+--------------------+----------+
      

      Adding a Route

      To add a route to the routing table using gcutil, use the gcutil addroute command:

      gcutil --project=<project-id> addroute <route-name> <destination-range> \
             --network=<network> <next-hop> \
             [--tags=<instance-tags>]  \
             [--priority=<route-priority>]

      Important flags and parameters:

      --project=<project-id>
      [Required] The project ID where you want to create this route.
      <route-name>
      [Required] The user-friendly name of this route. The name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
      <destination-range>
      [Required] The destination IP range this route applies to. For example, 0.0.0.0/0.
      --network=<network>
      [Required] The network this route applies to.
      <next-hop>
      [Required] You must provide exactly one next hop from the following list:
      • --next_hop_ip=<next-hop-ip>

        The network IP address of the instance that should handle matching packets. The instance must already exist and have IP forwarding enabled. If the next hop instance crashes and is later restarted by the system with the same IP address or if the user deletes the instance and recreates it with the same IP address, Google Compute Engine will continue routing matching packets to the new instance.

      • --next_hop_instance=<next-hop-instance>

        The fully-qualified URL of the instance that should handle matching packets. The instance must already exist and have IP forwarding enabled. For example:

        https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone-name>/instances/<instance-name>

        If the next hop instance crashes and is later restarted by the system or if the user deletes the instance and recreates it with the same name, Google Compute Engine routes matching packets to the new instance.

      • --next_hop_gateway=<next-hop-gateway>

        The URL of a gateway that should handle matching packets. Currently, only the Internet gateway is available:

        /projects/<project-id>/global/gateways/default-internet-gateway

      --tags=<instance-tags>
      [Optional] A list of tags that indicate which instances this particular route applies to. If this is not specified, an empty list is sent and this route will apply to all instances in the network.
      --priority=<route-priority>
      [Optional] Specify the priority of this route. If two or more routes apply to the same destination range, the priority breaks ties between these routes. The route with the lower value is used. By default, the priority of a route is 1000 but can be an integer between 0 and 4294967295.

      To add a route through the RESTful API, construct a POST request to the following URI:

      https://www.googleapis.com/compute/v1/projects/<project-id>/global/routes

      Your request body must be constructed similar to the following (replacing nextHopGateway with your desired next hop):

      { "name": "mynewroute",
        "description": "new,
        "tags": [],
        "destRange": "192.168.0.0/16",
        "priority": 1000,
        "nextHopGateway": "https://www.googleapis.com/compute/v1/projects/<project-id>/global/gateway/default-internet-gateway",
        "network": "https://www.googleapis.com/compute/v1/projects/<project-id>/global/networks/<network-name>"
      }
      

      Deleting a Route

      To delete a route, run the gcutil deleteroute command:

      gcutil --project=<project-id> deleteroute <route-name>

      Important flags and parameters:

      --project=<project-id>
      [Required] The project ID where this route lives.
      <route-name>
      [Required] The name of the route to delete.

      To delete a route in the API, make a DELETE request to the following URI, with an empty request body:

      https://www.googleapis.com/compute/v1/projects/<project-id>/global/routes/<route-name>
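
      For example, a hedged sketch of that DELETE request using curl, assuming $TOKEN holds a valid OAuth 2.0 access token for the project:

      curl -X DELETE -H "Authorization: Bearer $TOKEN" \
           "https://www.googleapis.com/compute/v1/projects/<project-id>/global/routes/<route-name>"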

      Consistency of Route Operations

      Similar to firewalls, when you make changes to the Routes collection, these changes are eventually consistent. This means that after you update, add, or remove a route, the changes are only guaranteed to have taken effect on all applicable instances once your operation object returns a status of DONE. A PENDING or RUNNING operation means that the change may have taken effect on some instances but has not taken effect on all instances.

      If you make a sequence of changes, these changes may be applied to your instances in any order. There is no guarantee that the order in which you make your requests will be the order in which these requests are processed. Since routing changes do not take effect instantaneously, different instances may observe different changes at different times. However, your operation object will move to a DONE state once the changes have been observed by all affected instances.

      Enabling IP Forwarding for Instances

      By default, Google Compute Engine instances are not allowed to send packets whose source IP address does not match the IP address of the instance sending the packet. Similarly, Google Compute Engine won't deliver a packet whose destination IP address is different from the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets. To disable this source and destination IP check, enable the canIpForward field, which allows an instance to send and receive packets with non-matching destination or source IPs.

      To set the canIpForward field in gcutil, use the --can_ip_forward flag when creating your instance:

      gcutil --project=<project-id> addinstance <instance-name> .... --can_ip_forward=true

      In the API, set the canIpForward field to true when you construct the request body to create a new instance:

       {
        "name": "myinstance",
        "description": "",
        "image": "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-X-wheezy-vYYYYMMDD",
        "machineType": "https://www.googleapis.com/compute/v1/projects/<project-id>/global/machineTypes/n1-standard-2",
        "networkInterfaces": [
            ...
        ],
        "canIpForward": true
      }

      You can only set this field at instance creation time. After an instance is created, the field becomes read-only.

      Interacting with Firewall Rules

      Just creating a route does not ensure that your packets will be received by the specified next hop. Firewall rules still determine whether incoming traffic is allowed into a network or instance. For example, if you create a route that sends packets through multiple instances, each instance must have an associated firewall rule to accept packets from the previous instance.
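
      For example, a hedged sketch of a firewall rule that lets an instance tagged hop2 accept all TCP traffic from instances tagged hop1 (both tags are placeholder names):

      gcutil --project=<project-id> addfirewall allow-hop1-to-hop2 --allowed_tag_sources=hop1 --target_tags=hop2 --allowed="tcp:1-65535"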

      For tag-based firewall rules, the source tag list will continue to be matched against the virtual machine instance sending the packet, and the target tag list will be matched against the virtual machine instance receiving the packet. For IP address matching, only the source IP address of the packet is used, rather than the IP address of the instance sending the packet. For example, if you have a firewall rule that specifies only packets from 10.240.2.3 are accepted, all packets whose source IP address matches the rule are accepted, regardless of the IP address of the instance that sends the packet.

      For more information, see Firewalls.

      Routing Packets to the Internet

      Currently, any packets sent to the Internet must be sent by an instance that has an external IP address. If you create a route that sends packets to the Internet from a particular instance, that instance must also have an external IP. If you create a route that sends packets to the Internet gateway, but the source instance doesn't have an external IP address, the packet will be dropped.

      Setting Up VPN Gateways

      You can create more complicated networking scenarios by making changes to the Routes collection. This section describes an example scenario that connects a VPN gateway instance in Google Compute Engine to a Debian-based VPN gateway in your non-Cloud network. This scenario is just an example and may or may not be suitable for your network. Consult with your network administrator for more information.

      Warning: The following set up is purely an example of how you could potentially set up your own VPN gateways. Google Compute Engine does not validate or endorse third-party software and does not guarantee that the following example will work for all scenarios.

      1. To start, create a Google Compute Engine network to connect via VPN.
        $ gcutil --project=myproject addnetwork gce-network --range 10.120.0.0/16 --gateway 10.120.0.1

        Caution: Make sure the network range you choose for your Google Compute Engine network does not overlap the network range of your local network or this example may not work correctly.

      2. Create a VPN gateway virtual machine on gce-network.
        $ IMAGE="https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-7-wheezy-v20130507"
        $ gcutil --project=myproject addinstance vpn-gateway --can_ip_forward=true \
                 --network gce-network --external_ip ephemeral \
                 --zone us-central1-a --image $IMAGE --tags vpn
        
      3. Make note of your newly-created virtual machine's internal IP address by performing the following command:
        gcutil --project=myproject getinstance vpn-gateway

        Note the address under network-ip. It should begin with 10.120.x.x or 10.240.x.x.

      4. Create a "plain old" non-VPN gateway virtual machine to talk to your local network.
        gcutil --project=myproject addinstance povm-1 --network gce-network --image $IMAGE --zone us-central1-a
        
      5. Create a route in gce-network to route traffic through vpn-gateway if it is destined for your local network.
        gcutil --project=myproject addroute gce-network-via-gateway '<your-local-network-address-space>' --next_hop_ip <vpn-gateway-network-ip> --network gce-network --tags=vpn

        The <vpn-gateway-network-ip> value should be the network-ip you noted from step 3.

      6. Add the following Google Compute Engine firewall rules for your Google Compute Engine network to accept incoming traffic.
        $ gcutil --project=myproject addfirewall ssh --allowed_ip_sources 0.0.0.0/0 --allowed 'tcp:22' --network gce-network
        $ gcutil --project=myproject addfirewall allow-internal --allowed_ip_sources 10.0.0.0/8 --allowed 'tcp:1-65535,udp:1-65535,icmp' \
                 --network gce-network --target_tags vpn
        $ gcutil --project=myproject addfirewall allow-ipsec-nat --allowed_ip_sources <public-ip-of-your-local-vpn-gateway-machine>/32 \
                 --allowed 'udp:4500' --network gce-network --target_tags vpn
        $ gcutil --project=myproject addfirewall allow-all-peer --allowed_ip_sources <your-local-network-address-space> \
                 --allowed 'tcp:1-65535,udp:1-65535,icmp' --network gce-network --target_tags vpn

        You may also need to set up firewall settings in your local network to accept incoming traffic from the Google Compute Engine network. Depending on your network, this process may vary.

      7. Install VPN software and configure gateway guest OS.

        To install VPN software and configure your guest OS on your gateway virtual machine, vpn-gateway, you need your VPN gateway machine's external IP address. Run the following command:

        $ gcutil --project=myproject getinstance vpn-gateway

        Copy the external IP address and create a file named ipsec.conf on your virtual machine gateway instance. Populate it with the following contents:

        conn myconn
          authby=psk
          auto=start
          dpdaction=hold
          esp=aes128-sha1!
          forceencaps=yes
          ike=aes128-sha1-modp2048!
          keyexchange=ikev2
          mobike=no
          type=tunnel
          left=%any
          leftid=<vpn-vm-gateway-external-address>
          leftsubnet=<internal-ip-subnet>
          leftauth=psk
          leftikeport=4500
          right=<public-ip-of-your-local-vpn-gateway-machine>
          rightsubnet=<your-local-network-address-space>
          rightauth=psk
          rightikeport=4500

        Your <internal-ip-subnet> value should be either 10.120.0.0/16 or 10.240.0.0/16, based on the network-ip value you noted in step 3.

        Then, run the following commands, replacing <secret-key> with a secret key you choose:

        $ sudo apt-get install strongswan -y
        $ echo "%any : PSK \"<secret-key>\"" | sudo tee /etc/ipsec.secrets > /dev/null
        $ sudo sysctl -w net.ipv4.ip_forward=1
        $ sudo cp ipsec.conf /etc
        $ sudo ipsec restart
        $ sudo ipsec up myconn

        Assuming your local gateway machine is running a Debian-based operating system, you can use the same steps to install VPN on your local machine. Make a copy of your ipsec.conf file with the following changes on your local gateway machine:

        conn myconn
          authby=psk
          auto=start
          dpdaction=hold
          esp=aes128-sha1!
          forceencaps=yes
          ike=aes128-sha1-modp2048!
          keyexchange=ikev2
          mobike=no
          type=tunnel
          left=%any
          leftid=<public-ip-of-local-VPN-gateway-machine>
          leftsubnet=<your-local-network-address-space>
          leftauth=psk
          leftikeport=4500
          rightid=<vpn-vm-gateway-external-address>
          rightsubnet=10.120.0.0/16
          rightauth=psk
          rightikeport=4500

        Run the same commands described above on your local VPN gateway machine.

      8. Try it out!
        $ gcutil ssh povm-1 'ping -c 3 <your-local-network-external-address>'

      Troubleshooting

      If you are experiencing issues with your VPN setup based on the instructions above, try these tips to troubleshoot your setup:

      • Determine whether the two VPN endpoints are able to communicate at all.

        Use netcat to send VPN-like traffic (UDP, port 4500). Run the following command on your local VPN endpoint:

        echo | nc -u <vpn-vm-gateway-external-address> 4500

        Run tcpdump on the receiving end to determine that your Google Compute Engine instance can receive the packet on port 4500:

        tcpdump -nn -n host <public-ip-of-local-VPN-gateway-machine> -i any
      • Turn on more verbose logging.

        Turn on verbose logging for more logging information by adding the following lines to your ipsec.conf files:

        config setup
          charondebug="ike 3, mgr 3, chd 3, net 3"
        
        conn myconn
          authby=psk
          auto=start
          ...

        Next, retry your connection. Although the connection should still fail, you can check the log for errors. The log file should be located at /var/log/charon.log on your Google Compute Engine instance.

      Glossary

      Connection
      A communication channel between a specific address and port on one computer and a specific address and port on a second computer (the address and port do not need to be the same), where all packets use a common protocol, for example TCP or UDP.
      DNS Resolver
      A service on a computer that converts a friendly name (for example, instance-one.project.google.internal) into an IP address. Sometimes called a DNS lookup service.
      Host
      A computer that sends a packet.

    Page: projects

    All Google Compute Engine resources belong to a project. Projects form the basis for enabling and using the Google Compute Engine service, including managing resources, enabling billing, adding and removing collaborators, and enabling other Google services.

    Contents

    1. Overview
    2. Getting Project Information

    Overview

    A Project resource is the root collection and settings resource for all Google Compute Engine resources.

    The Project resource is created using the Google Developers Console when you activate Google Compute Engine for a project. Some administration tasks can only be done from the Console, and others must be done from within Google Compute Engine: for instance, adding team members, listing projects, and setting ACLs can only be done within the Console; creating instances, disks, or other resources for a project can only be done directly in the API.

    You must use the Developers Console to manage all non-Google Compute Engine-specific properties, such as project members or billing information.

    You must have read, write, or owner permissions on a project to be able to use gcutil. You do not need to be a project member to be able to ssh into an instance and manage it; however, if you do not have read permissions on a project you must use raw ssh, not gcutil ssh.
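
    A hedged sketch of a raw ssh invocation, assuming the instance has an external IP address, your public key has already been added to the project or instance metadata, and your private key lives at the default path used by gcutil:

    ssh -i ~/.ssh/google_compute_engine <username>@<external-ip-address>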

    Useful gcutil commands:

    Note: There is no Google Compute Engine command to list projects; you must use the Developers Console to list projects of which you are a member.

    Getting Project Information

    You can get information about a project, such as the different quota usage amounts and limits, by running the command gcutil getproject, which returns information similar to the following:

    $ gcutil getproject --project=myproject
    +--------------------------+--------------------------------------------------+
    |         property         |                         value                    |
    +--------------------------+--------------------------------------------------+
    | name                     | myproject                                        |
    | description              |                                                  |
    | creation-time            | 2012-01-11T17:45:37.812-08:00                    |
    | ips                      |                                                  |
    |                          |                                                  |
    | usage                    |                                                  |
    |   instances              | 4.0/8.0                                          |
    |   cpus                   | 6.0/8.0                                          |
    |   ephemeral-addresses    | 4.0/8.0                                          |
    |   disks                  | 7.0/8.0                                          |
    |   disks-total-gb         | 57.0/100.0                                       |
    |   snapshots              | 0.0/1000.0                                       |
    |   networks               | 2.0/5.0                                          |
    |   firewalls              | 14.0/100.0                                       |
    |   images                 | 3.0/100.0                                        |
    |                          |                                                  |
    | common-instance-metadata |                                                  |
    |   hello                  | there                                            |
    |   sshKeys                | <key>                                            |
    +--------------------------+--------------------------------------------------+
    

    Important Flags:

    --project=<project-id>
    This flag is required for every gcutil command except help, unless you have previously specified the --cache_flag_values flag to store your project ID information.
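    For example, the following commands are a sketch of that workflow (the project ID is a placeholder): the first call stores the project ID along with the other flag values, so later calls can omit --project.

    $ gcutil --project=myproject --cache_flag_values getproject
    $ gcutil getproject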

    Page: protocol-forwarding

    Google Compute Engine supports Protocol Forwarding, which lets you create forwarding rule objects that can send packets to a non-NAT’ed target instance. Each target instance contains a single virtual machine instance that receives and handles traffic from the corresponding forwarding rules.

    Protocol forwarding can be used in a number of scenarios, including:

    • Virtual hosting by IPs

      You can set up multiple forwarding rules to point to a single target instance, allowing you to use multiple external IP addresses with one virtual machine instance. You can use this in scenarios where you may want to serve data from just one virtual machine instance, but through different external IP addresses. This is especially useful for setting up SSL virtual hosting.

    • Virtual private network (VPN) connection setup

      You can send IP Authentication Header (AH) and IP Encapsulating Security Payload (ESP) protocols from the Internet to a Compute Engine network to create a VPN network setup between your local gateway and a Compute Engine network.

      For example, with the introduction of ESP and AH, you could use the new protocol forwarding feature alongside the advanced routing feature to create a virtual private network (VPN) from your local network to Google Compute Engine.

    Google Compute Engine supports protocol forwarding for the following protocols: AH, ESP, SCTP, TCP, and UDP.

    Protocol forwarding is charged at the same rates as the load balancing service. Read the pricing page for more information.

    Contents

    Prerequisites

    Before using protocol forwarding, you should:

    Quickstart

    Note: This quickstart assumes you are familiar with bash.

    To get started using protocol forwarding, you must:

    1. Create a target instance.

      Your target instance will contain a single virtual machine instance, but this virtual machine instance can exist at the time you create the target instance, or can be created afterwards.

    2. Create a forwarding rule.

      Your target instance must exist before you create a forwarding rule. If incoming packets match the IP, protocol, and (if applicable) the port range that is being served by your forwarding rule, the forwarding rule will direct that traffic to your target instance.

    The rest of this quickstart demonstrates the above steps end-to-end by:

    1. Setting up an Apache server on a virtual machine instance.
    2. Creating a target instance and corresponding forwarding rules.
    3. Sending traffic to a single target instance.

    At the end of this quickstart, you should know how to set up protocol forwarding from multiple forwarding rules to a single target instance.

    Set up a virtual machine instance and install Apache

    To begin, let's create a single virtual machine instance with Apache installed.

    1. Create some startup scripts for your new instance.

      Depending on your operating system, your startup script contents might differ:

      • If you're planning to use Debian on your instance, run the following command:
        me@local:~$ echo "apt-get update && apt-get -y install apache2 && mkdir -p /var/www1 &&
        mkdir -p /var/www2 && mkdir -p /var/www3 && hostname > /var/www/index.html &&
        echo w1 > /var/www1/index.html && echo w2 > /var/www2/index.html && echo w3 > /var/www3/index.html" \
        > $HOME/pf_startup.sh
      • If you're planning to use CentOS for your instance, run the following command:
        me@local:~$ echo "yum -y install httpd && service httpd restart && mkdir -p /var/www1 &&
        mkdir -p /var/www2 && mkdir -p /var/www3 && hostname > /var/www/html/index.html &&
        echo w1 > /var/www1/index.html && echo w2 > /var/www2/index.html && echo w3 > /var/www3/index.html" > \
        $HOME/pf_startup.sh
    2. Create a tag for your future virtual machine, so we can apply a firewall to it later:
      me@local:~$ TAG="www-tag"
    3. Choose a zone and a region for your virtual machine and set your project ID.
      me@local:~$ ZONE="us-central1-a"
      me@local:~$ REGION="us-central1"
      me@local:~$ PROJECT="<project-id>"
    4. Create a new virtual machine instance to handle traffic for your forwarding rules.
      me@local:~$ gcutil --project=$PROJECT addinstance pf-instance --image=debian-7 --tags=$TAG \
      --zone=$ZONE --metadata_from_file=startup-script:$HOME/pf_startup.sh
    5. Create a firewall rule to allow external traffic to this virtual machine instance:
      me@local:~$ gcutil --project=$PROJECT addfirewall www-firewall --target_tags=$TAG --allowed=tcp

    Great, you have successfully set up a virtual machine instance. Now, you can start setting up your protocol forwarding configuration.

    Create a target instance and corresponding forwarding rules

    1. Create a target instance.

      Target instances contain a single virtual machine instance that receives and handles traffic from a forwarding rule. Target instances do not have a NAT policy, so you can use them to set up your own VPN connections using IPSec protocols directly.

      You must create a target instance before you can create a forwarding rule object because forwarding rules must reference an existing target resource. It is not possible to create a forwarding rule that directs traffic to a non-existing target resource. For this example, create a target instance as follows:

      me@local:~$ gcutil --project=$PROJECT addtargetinstance pf-target-instance --zone=$ZONE --instance=pf-instance
    2. Create your forwarding rule objects.

      A forwarding rule object directs traffic that matches the IP protocol and port to a specified target instance. For more details, review the Forwarding Rules documentation.

      For this example, the following commands will create three forwarding rules, each with an ephemeral IP address that forwards TCP traffic to your target instance. Optionally, if you have some static reserved IP addresses, you can use them with these forwarding rules by specifying the --ip=<ip-address> flag.

      # Add forwarding rules
      me@local:~$ gcutil --project=$PROJECT addforwardingrule pf-rule1 --region=$REGION --protocol=TCP --port=80 --target_instance=pf-target-instance
      me@local:~$ gcutil --project=$PROJECT addforwardingrule pf-rule2 --region=$REGION --protocol=TCP --port=80 --target_instance=pf-target-instance
      me@local:~$ gcutil --project=$PROJECT addforwardingrule pf-rule3 --region=$REGION --protocol=TCP --port=80 --target_instance=pf-target-instance
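
      For example, if you had already reserved a static external address in $REGION, you could bind an additional rule to it as in the following sketch (the address shown is just a placeholder):

      me@local:~$ gcutil --project=$PROJECT addforwardingrule pf-rule-static --region=$REGION \
        --protocol=TCP --port=80 --ip="203.0.113.10" --target_instance=pf-target-instance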

    That's it! You can start sending traffic to your target instance.

    Send traffic to your instance

    1. Get the external IP addresses of your new forwarding rules.

      Run gcutil listforwardingrules to get the external IP addresses of your forwarding rules. For example, the following table lists the ephemeral IP addresses that were allocated for the forwarding rules created earlier. Your external IP addresses will be different from the ones listed below.

      If you opted to use reserved IP addresses, they will be listed here in place of the ephemeral IP addresses.

      user@local:~$ gcutil --project=$PROJECT listforwardingrules --columns=name,ip
      
      +----------+-----------+
      | name     | ip        |
      +----------+-----------+
      | pf-rule1 | 1.2.3.4   |
      +----------+-----------+
      | pf-rule2 | 1.2.3.5   |
      +----------+-----------+
      | pf-rule3 | 1.2.3.6   |
      +----------+-----------+

      Save the IP addresses for the next step.

    2. Configure the virtual machine instance's Apache virtual hosts to serve different information based on the destination URL.

      First, SSH into your instance:

      user@local:~$ gcutil --project=$PROJECT ssh pf-instance

      Then, edit the /etc/apache2/ports.conf file and add the following lines:

      <VirtualHost 1.2.3.4>
        DocumentRoot /var/www1
      </VirtualHost>
      <VirtualHost 1.2.3.5>
        DocumentRoot /var/www2
      </VirtualHost>
      <VirtualHost 1.2.3.6>
        DocumentRoot /var/www3
      </VirtualHost>
      

      Lastly, restart Apache:

      user@myinst:~$ sudo /etc/init.d/apache2 restart
    3. Try sending some traffic to your instance.

      On your local machine, we are going to make a request to the external IP addresses served by the forwarding rules we created.

      Assign the external IP addresses of the forwarding rules to the following shell variables:

      user@local:~$ IP1="1.2.3.4"
      user@local:~$ IP2="1.2.3.5"
      user@local:~$ IP3="1.2.3.6"

      Next, use curl to send traffic to the IP addresses. The response will return w1, w2, or w3, depending on which IP address you query.

      me@local:~$ curl $IP1
      w1
      me@local:~$ curl $IP2
      w2
      me@local:~$ curl $IP3
      w3

      That's it! You have set up your first protocol forwarding configuration!

    Forwarding Rules

    Forwarding rules work in conjunction with target pools and target instances to support load balancing and protocol forwarding features. To use load balancing and protocol forwarding, you must create a forwarding rule that directs traffic to specific target pools (for load balancing) or target instances (for protocol forwarding). It is not possible to use either of these features without a forwarding rule.

    Forwarding Rule resources live in the Forwarding Rules collection. Each forwarding rule matches a particular IP address, protocol, and optionally, port range to a single target pool or target instance. When traffic is sent to an external IP address that is served by a forwarding rule, the forwarding rule directs that traffic to the corresponding target pool or target instances. You can create up to 50 forwarding rule objects per project.

    A forwarding rule object contains the following properties:

    • name - [Required] The name of the forwarding rule. The name must be unique in this project, from 1-63 characters long and match the regular expression: [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    • region - [Required] The region where this forwarding rule resides. For example:
      "region" : "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region-name>"
    • IPAddress - [Optional] A single IP address this forwarding rule matches to. All traffic directed to this IP address will be handled by this forwarding rule. The IP address must be a static reserved IP address or, if left empty, an ephemeral IP address is assigned to the forwarding rule upon creation. For example:
      "IPAddress" : "1.2.3.4"
    • target [Required] - The Target Pool or Target Instance resource that this forwarding rule directs traffic to. Must be a fully-qualified URL such as:
      https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<target-pool-name>

      For target instances, the URL will look like:

      https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/targetInstances/<target-instance-name>

      The target pool or target instance must exist before you create your forwarding rule and must reside in the same region as the forwarding rule.

    • IPProtocol - [Optional] The type of protocol that this forwarding rule matches. Valid values are TCP, UDP, ESP, AH, and SCTP.

      If left empty, this field will default to TCP. Also note that certain protocols can only be used with target pools or target instances:

      • If you use ESP, AH, or SCTP, you must specify a target instance. It is not possible to specify a target pool when using these protocols.
      • If you use TCP or UDP, you can specify either a target pool or a target instance.
    • portRange - [Optional] A single port or a single contiguous port range, from low to high, that this forwarding rule matches. Packets of the specified protocol sent to these ports will be forwarded on to the appropriate target pool or target instance. If this field is left empty, then the forwarding rule matches traffic for all ports for the specified protocol. For example:
      "portRange" : "200-65535"

      You can only specify this field for TCP, UDP, and SCTP protocols.

    Adding a Forwarding Rule

    To add a new forwarding rule, you can use the gcutil addforwardingrule command or create a POST request to the ForwardingRules collection. To create a forwarding rule using gcutil:

    gcutil --project=<project-id> addforwardingrule <forwarding-rule-name> \
           [--description=<description-text>] [--ip=<external-ip-address>] \
           [--target_pool=<target-pool> | --target_instance=<target-instance>] \
           [--protocol=<protocol>] [--port_range=<port-range>] \
           --region=<region>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID for this forwarding rule.
    <forwarding-rule-name>
    [Required] The name for this forwarding rule.
    --description=<description-text>
    [Optional] The description for this forwarding rule.
    --ip=<external-ip-address>
    [Optional] An external static IP that this forwarding rule serves on behalf of. This can be a reserved static IP, or if left blank or unspecified, the default is to assign an ephemeral IP address. Multiple forwarding rules can use the same IP address as long as their port range and protocol do not overlap. For example, --ip="1.2.3.106".
    --target_pool=<target-pool>
    [Optional] You must specify only one of --target_pool or --target_instance. Specifies a target pool that handles traffic from this forwarding rule. The target pool must already exist before you can use it for a forwarding rule and it must reside in the same region as the forwarding rule. This is specifically for load balancing. For example: 'mytargetpool'.
    --target_instance=<target-instance>
    [Optional] You must specify only one of --target_pool or --target_instance. Specifies a target instance that handles traffic from this forwarding rule. This is specifically for protocol forwarding.
    --protocol=<protocol>
    [Optional] The protocol that this forwarding rule is handling. If left empty, this field will default to TCP. Also note that certain protocols can only be used with target pools or target instances:
    • If you use ESP, AH, or SCTP, you must specify a target instance. It is not possible to specify a target pool when using these protocols.
    • If you use TCP or UDP, you can specify either a target pool or a target instance.
    --port_range=<port-range>
    [Optional] The port or port range that this forwarding rule is responsible for. Packets of the specified protocol sent to these ports will be forwarded on to the appropriate target pool or target instance. If this field is left empty, then the forwarding rule sends traffic for all ports for the specified protocol. Can be a single port, or a range of ports. You can only set this field for TCP, UDP, and SCTP protocols.
    --region=<region>
    [Required] The region where this forwarding rule should reside. For example, us-central1. This must be the same region as the target pool or target instance.
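
    For example, a rule that forwards UDP traffic on ports 5000-5100 to an existing target instance might look like the following (all names are placeholders):

    gcutil --project=my-project addforwardingrule udp-rule --region=us-central1 \
           --protocol=UDP --port_range=5000-5100 --target_instance=my-target-instance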

    To add a forwarding rule using the API, perform a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules

    Your request body should contain the following fields:

     bodyContent = {
       "name": <name>,
       "IPAddress": <external-ip>,
       "IPProtocol": <tcp-or-udp>,
       "portRange": <port-range>,
       "target": <uri-to-target-resource>
     }
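
    As a sketch, you could issue the same request with curl. This example assumes you already hold an OAuth 2.0 access token in $TOKEN; the project, region, and resource names are placeholders:

     curl -X POST \
          -H "Authorization: Bearer $TOKEN" \
          -H "Content-Type: application/json" \
          -d '{
                "name": "pf-rule4",
                "IPProtocol": "TCP",
                "portRange": "80",
                "target": "https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/targetInstances/pf-target-instance"
              }' \
          https://www.googleapis.com/compute/v1/projects/myproject/regions/us-central1/forwardingRules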

    Listing Forwarding Rules

    To get a list of forwarding rules, use gcutil listforwardingrules.

    gcutil --project=<project-id> listforwardingrules [--region=<region>]

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project for which you want to list your forwarding rules.
    --region=<region>
    [Optional] The region for which you want to list forwarding rules. If not specified, all forwarding rules across all regions are listed.

    In the API, make an empty GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules

    Getting Forwarding Rules

    To get information about a single forwarding rule, use gcutil getforwardingrule.

    gcutil --project=<project-id> getforwardingrule <forwarding-rule-name> --region=<region>

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project for which you want to get your forwarding rule.
    --region=<region>
    [Required] The region where the forwarding rule resides.
    <forwarding-rule-name>
    [Required] The forwarding rule name.

    In the API, make an empty GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules/<forwarding-rule-name>

    Updating the Forwarding Rule Target

    If you have already created a forwarding rule but want to change the target pool or target instance that the forwarding rule directs traffic to, you can do so using the gcutil setforwardingruletarget command:

    gcutil --project=<project-id> setforwardingruletarget <forwarding-rule-name> \
           --region=<region> [--target_pool=<target-pool-name> | --target_instance=<target-instance-name>]

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project for this request.
    <forwarding-rule-name>
    [Required] The forwarding rule name.
    --region=<region>
    [Required] The region where the forwarding rule resides.
    --target_pool=<target-pool>
    [Optional] You must specify only one of --target_pool or --target_instance. Specifies a target pool to add or update. The target pool must already exist before you can use it for a forwarding rule and it must reside in the same region as the forwarding rule. This is specifically for load balancing. For example: 'mytargetpool'.
    --target_instance=<target-instance>
    [Optional] You must specify only one of --target_pool or --target_instance. Specifies a target instance to add or update for this forwarding rule. This is specifically for protocol forwarding.
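
    For example, to point an existing rule at a different target instance (names here are placeholders):

    gcutil --project=my-project setforwardingruletarget pf-rule1 --region=us-central1 \
           --target_instance=pf-target-instance-2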

    In the API, make a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules/<forwardingRule>/setTarget

    Your request body should contain the URL to the target instance or target pool resource you want to set. For instance, for target pools, the URI format should be:

    body = {
      "target": "https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/targetPools/<target-pool-name>"
    }

    Deleting Forwarding Rules

    To delete a forwarding rule, use the gcutil deleteforwardingrule command:

    gcutil --project=<project-id> deleteforwardingrule [-f] <forwarding-rule-name> [--region=<region>]

    Important Flags and Parameters:

    --project=<project-id>
    [Required] The project ID where this forwarding rule lives.
    -f, --force
    [Optional] Bypass the confirmation prompt to delete this forwarding rule.
    <forwarding-rule-name>
    [Required] The forwarding rule to delete.
    --region=<region>
    [Optional] The region of this forwarding rule. If you do not specify this flag, gcutil performs an extra API request to determine the region for your forwarding rule.

    To delete a forwarding rule from the API, make a DELETE request to the following URI, with an empty request body:

    https://www.googleapis.com/compute/v1/projects/<project-id>/regions/<region>/forwardingRules/<forwarding-rule-name>

    Target Instances

    A Target Instance resource contains one virtual machine instance that handles traffic from one or more forwarding rules and is ideal for forwarding certain types of protocol traffic that should be managed by a single source (e.g. ESP and AH), but you can also use a target instance for the TCP and UDP protocols. Target instances do not have a NAT policy applied to them, so they can handle the non-NAT'ed IPsec traffic required for virtual private networks (VPNs).

    A target instance must live in the same region as its forwarding rule, and in the same zone as the virtual machine instance it contains. For example, if your forwarding rule lives in us-central1 and the instance you want to use lives in us-central1-a, the target instance must live in us-central1-a. If the instance lived in us-central1-b instead, the target instance would also have to live in us-central1-b.

    Adding a Target Instance

    To add a target instance in gcutil, use the gcutil addtargetinstance command:

    gcutil --project=<project-id> addtargetinstance <name> --zone=<zone> --instance=<instance-name> \
           [--nat_policy=NO_NAT]

    Important flags and parameters

    --project=<project-id>
    [Required] Project ID for this request.
    <name>
    [Required] Specifies the name for this target instance resource. The name must be unique in this project, from 1-63 characters long and match the regular expression: [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    --zone=<zone>
    [Required] The zone for this target instance. This must be the same zone as the virtual machine instance that will be used for this target instance. For example, us-central1-a.
    --instance=<instance-name>
    [Required] The name of the virtual machine instance you would like to use for this target instance. This does not need to exist before you create the target instance, but it must live in the same zone as the target instance if you decide to create it later.
    --nat_policy=NO_NAT
    [Optional] Defines the NAT policy for this target instance. Currently, the only value available is NO_NAT, indicating that no NAT policy is used for this target instance. It is not possible to set this flag to any other value.

    In the API, make a POST request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project>/zones/<zone>/targetInstances

    With the following request body:

    body = {
      "name": <name-of-target-instance-object>,
      "instance": <fully-qualified-url-to-virtual-machine-instance>
    }

    Getting a Target Instance

    To get information about a single target instance, run the gcutil gettargetinstance command:

    gcutil --project=<project-id> gettargetinstance <target-instance-name> --zone=<zone>

    Important flags and parameters

    --project=<project-id>
    [Required] Project ID for this request.
    <target-instance-name>
    [Required] The target instance to get.
    --zone=<zone>
    [Required] The zone of the target instance.

    In the API, make an empty GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/targetInstances/<target-instance-name>

    Listing Target Instances

    To list your target instances, use the gcutil listtargetinstances command:

    gcutil --project=<project-id> listtargetinstances [--zone=<zone>]

    Important flags and parameters

    --project=<project-id>
    [Required] Project ID for this request.
    --zone=<zone>
    [Optional] Specifies the zone for which you want to list target instances. If this flag isn't provided, gcutil lists target instances across all zones.

    In the API, make an empty GET request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/aggregated/targetInstances

    Deleting a Target Instance

    To delete a target instance, you must first make sure that it is not being referenced by any forwarding rules. If a forwarding rule is currently referencing the target instance you want to delete, you must delete the forwarding rule to remove the reference.

    Once you've removed a target instance from being referenced by any forwarding rules, delete it using the gcutil deletetargetinstance command:

    gcutil --project=<project-id> deletetargetinstance <target-instance-name> --zone=<zone> [-f]

    Important flags and parameters

    --project=<project-id>
    [Required] Project ID for this request.
    <target-instance-name>
    [Required] The target instance to delete.
    --zone=<zone>
    [Required] The zone of the target instance to delete.
    -f, --force
    [Optional] Bypass the confirmation prompt to delete this target instance.

    In the API, make an empty DELETE request to the following URI:

    https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/targetInstances/<target-instance-name>

    Page: zones

    Google Compute Engine allows you to choose the region and zone where certain resources live, giving you control over where your data is stored and used. For example, when you create an instance or disk, you are prompted to select the zone where that resource should live and serve traffic. Other resources, such as static IPs, live in regions, and you must select the region where each static IP should live.

    Resources that are specific to a zone or a region can only be used by other resources in the same zone or region. For example, disks and instances are both zonal resources. If you want to attach a disk to an instance, both resources must reside in the same zone. Similarly, if you want to assign a static IP address to an instance, your instance must reside in the same region as the static IP.

    Google Cloud Platform resources are hosted in multiple locations world-wide. These locations are composed of regions and zones within those regions. Putting resources in different zones in a region provides isolation for many types of infrastructure, hardware, and software failures. Putting resources in different regions provides an even higher degree of failure independence.

    Note: Only certain resources are region- or zone-specific. Other resources, such as images, are global resources that can be used by any other resources across any location.

    Contents

    Overview

    Each region in Compute Engine contains any number of zones. To determine what zones belong to what region, review the fully qualified name of the zone. Each zone name contains two parts that describe each zone in detail. The first part of the zone name is the region and the second part of the name describes the zone in the region:

    • Region

      Regions are collections of zones. Zones have high-bandwidth, low-latency network connections to other zones in the same region. In order to deploy fault-tolerant applications that have high availability, Google recommends deploying applications across multiple zones in a region. This helps protect against unexpected failures of components, up to and including a single zone.

      Choose a region that makes sense for your scenario. For example, if you only have customers in the US, or if you have specific needs that require your data to live in the US, it makes sense to store your resources in a zone in the us-central1 region.

    • Zone

      A zone is an isolated location within a region. The fully-qualified name for a zone is made up of <region>-<zone>. For example, the fully-qualified name for zone a in region us-central1 is us-central1-a.

      Depending on how widely you want to distribute your resources, you may choose to create instances across multiple zones in multiple regions.

    The following diagram provides some examples of how regions and zones relate to each other. Notice that each region is independent of other regions and each zone is isolated from other zones in the same region.

    Note: This diagram is an example to demonstrate zones and may not reflect actual available zones.

    Available regions & zones

    The following is a list of available regions and zones.

    Region   Available zones                  Supported processor types
    US       us-central1-a, us-central1-b     Sandy Bridge
    Europe   europe-west1-a, europe-west1-b   Sandy Bridge
    Asia     asia-east1-a, asia-east1-b       Ivy Bridge

    Note: The selection of a location does not guarantee that your data at rest is kept only in that specific location. See the FAQ for more details.

    Each zone supports either Ivy Bridge or Sandy Bridge processors. When you create an instance in the zone, your instance will use the processor supported in that zone. For example, if you create an instance in an Asia zone, your instance will use an Ivy Bridge processor. If you create an instance in a US or Europe zone, your instance will use a Sandy Bridge processor.

    To view a list of available zones, you can always run:

    $ gcutil --project=<project-id> listzones

    To view a list of available regions using gcutil, use the gcutil listregions command. The command lists all available regions and provides information such as any relevant deprecation status and the status of the region itself.

    $ gcutil --project=<project-id> listregions
    +-----------------+------------------------+--------+-------------+
    |       name      |      description       | status | deprecation |
    +-----------------+------------------------+--------+-------------+
    | example-region  | Description of region  | UP     |             |
    | example-region2 | Description of region2 | UP     |             |
    +-----------------+------------------------+--------+-------------+

    To get information about a single region, use the gcutil getregion command:

    $ gcutil --project=<project-id> getregion example-region
    +---------------+----------------------------------------+
    |   property    |                  value                 |
    +---------------+----------------------------------------+
    | name          | example-region                         |
    | description   | Description of region                  |
    | creation-time | 2013-04-29T11:18:01.821-07:00          |
    | status        | UP                                     |
    | zones         | zones/example-zone,zones/example-zone2 |
    | deprecation   |                                        |
    | replacement   |                                        |
    |               |                                        |
    | usage         |                                        |
    +---------------+----------------------------------------+

    Scheduled maintenance

    Google regularly maintains its infrastructure by patching systems with the latest software, performing routine tests and preventative maintenance, and generally ensuring that Google infrastructure is as fast and efficient as Google knows how to make it.

    Compute Engine currently has two types of zones: those that have transparent maintenance and those that are subject to occasional scheduled maintenance windows.

    Zones with transparent maintenance remain operational throughout all maintenance operations. Google uses a combination of datacenter innovations, operational best practices, and live migration technology to move running virtual machine instances out of the way of maintenance that is being performed.

    Zones with scheduled maintenance windows are occasionally taken offline for various disruptive maintenance tasks (e.g. power maintenance). During scheduled maintenance windows, which last up to approximately two weeks, the entire zone is unavailable.

    Compute Engine is in the process of updating all of its zones to transparent maintenance. The table below lists each zone and its maintenance mode:

    Zone             Maintenance Mode
    us-central1-a    Transparent maintenance
    us-central1-b    Transparent maintenance
    asia-east1-a     Transparent maintenance
    asia-east1-b     Transparent maintenance
    europe-west1-a   Scheduled maintenance
    europe-west1-b   Scheduled maintenance

    Transparent maintenance

    During transparent maintenance, Compute Engine automatically moves your instances away from maintenance events so that maintenance work is transparent to your applications and workloads. Your instance continues to run within the same zone with no action on your part.

    During transparent maintenance, you can configure Compute Engine to handle your instances in two ways:

    • Live migrate

      Compute Engine can automatically migrate your running instance. The migration process will impact guest performance to some degree, but your instance remains online throughout the migration process. The exact guest performance impact and duration depend on many factors, but it is expected that most applications and workloads will not notice.

    • Terminate and reboot

      Compute Engine automatically signals your instance to shut down, waits a short time for it to shut down cleanly, and then restarts it away from the scheduled maintenance event.

    For more information on how to set the options above for your instances, see Setting Instance Scheduling Options.

    Scheduled zone maintenance windows

    For zones with scheduled maintenance windows, there will be periods of time when these zones are taken offline for maintenance tasks, such as software upgrades. When a zone is taken down for maintenance, the following happens:

    • All virtual machine instances in that zone are terminated and deleted from your project.
    • All persistent disks will be preserved, but are unavailable until the maintenance window ends.

    When a zone comes back online, you need to recreate your instances in the affected zone. The Compute Engine team will notify users of upcoming maintenance windows in a timely manner so that users can perform any tasks necessary before the zone is taken offline.

    Although maintenance windows are an inconvenient and unavoidable part of the service, you can use the tips from the How to Design Robust Systems section to design a system that can withstand maintenance windows, zone failures, and unexpected interruptions.

    Quotas

    Certain resources, such as static IPs, images, firewall rules, and networks, have defined project-wide quota limits and per-region quota limits. When you create one of these resources, it counts towards your total project-wide quota or your per-region quota, as applicable. If any of the affected quota limits are exceeded, you won't be able to add more resources of the same type in that project or region.

    For example, if your global target pools quota is 50 and you create 25 pools in example-region and 25 pools in example-region2, you reach your project-wide quota and won't be able to create more target pools in any region within your project until you delete some to free up quota. Similarly, if you have a per-region quota of 7 reserved IP addresses, you can only reserve up to 7 IP addresses in a single region. Once you hit that limit, you will either need to reserve IP addresses in a new region or release some IP addresses.
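
    To check your current usage against these limits, you can use the commands shown earlier in this document:

    $ gcutil --project=<project-id> getproject            # project-wide quotas and usage
    $ gcutil --project=<project-id> getregion <region>    # per-region quotas and usage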

    Tips

    When selecting zones, here are some things to keep in mind:

    • Communication within and across regions will incur different costs.

      Generally, communication within regions will always be cheaper and faster than communication across different regions.

    • Design important systems with redundancy across multiple zones.

      At some point in time, your instances may be terminated because of scheduled zone maintenance windows or because of an unexpected failure. To mitigate the effects of these events, duplicate important systems in multiple zones, in case a zone hosting your instance goes offline or is taken down for servicing.

      For example, if you host virtual machine instances in zones europe-west1-a and europe-west1-b, if europe-west1-b is taken down for maintenance or fails unexpectedly, your instances in zone europe-west1-a will still be available. However, if you host all your instances in europe-west1-b, you will not be able to access any of your instances if europe-west1-b goes offline. For more tips on how to design systems for availability, see Designing Robust Systems.
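
      For example, a minimal sketch of that setup (instance names are placeholders, using the debian-7 image shown elsewhere in this document) creates one instance in each of the two Europe zones:

      $ gcutil --project=my-project addinstance www-a --zone=europe-west1-a --image=debian-7
      $ gcutil --project=my-project addinstance www-b --zone=europe-west1-b --image=debian-7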

    Page: startupscript

    This page discusses how to use startup scripts with Google Compute Engine.

    Contents

    Overview

    You can choose to specify a startup script that will run when your instance boots up or restarts. Startup scripts can be used to install software and updates, and to ensure that services are running within the VM. The script can live on your local computer or be stored on Google Cloud Storage or another URL-accessible location. It will automatically be run whenever your instance is restarted.

    The same mechanism that enables startup scripts also enables you to specify custom name/value pairs in the command-line that will be persistently available to your instance whenever it starts. See Storing and Retrieving Instance Metadata for information about passing arbitrary values to an instance on startup.

    Note: Google Compute Engine enforces a metadata value length limit of 32768 bytes. If your startup script exceeds this limit, you won't be able to load it locally. Instead, you should save your file to Google Cloud Storage and specify the script URL at instance creation time. See Storing Your Script on Google Cloud Storage for more information.

    Using a Startup Script

    To use a startup script, you just need to create the script and start an instance that uses the startup script. You can either store your startup script locally or you can store your startup script on Google Cloud Storage. Here is an example of creating and running a startup script that installs Apache and creates a custom homepage.

    1. Create the script

      The following script installs Apache and creates a custom home page. Your script can perform as many actions as you would like. For this example, save the following file locally as install-apache.sh. Choose your operating system to view the correct startup script:

      Debian
      #! /bin/bash
      # Installs apache and a custom homepage
      
      apt-get update
      apt-get install -y apache2
      cat <<EOF > /var/www/index.html
      <html><body><h1>Hello World</h1>
      <p>This page was created from a simple startup script!</p>
      </body></html>
      EOF
      CentOS
      #! /bin/bash
      # Installs apache and a custom homepage
      
      yum install -y httpd
      service httpd start
      cat <<EOF > /var/www/html/index.html
      <html><body><h1>Hello World</h1>
      <p>This page was created from a simple startup script!</p>
      </body></html>
      EOF
    2. Optional: Store your script on Google Cloud Storage

      If you don't want to store your script locally, or if your script exceeds the metadata value length limit of 32768 bytes, you can choose to store your file on Google Cloud Storage. To do so, you need to:

      1. Sign up for Google Cloud Storage.
      2. Upload your file using the Google Cloud Storage manager.

      You can then run your startup script directly from Google Cloud Storage, as described in the next step.

    3. Start a VM with the startup script

      You can run a locally-stored startup script or startup script stored on Google Cloud Storage. Both methods are described below.

      Using a startup script from a local file

      To run a startup script from a local file, use the --metadata_from_file flag with gcutil addinstance to specify the path to your script file. This flag uploads the file and installs it on the server in the same location relative to the path that you pass in. The default instance image looks for this parameter, and if present, will run the specified file.

      $ gcutil addfirewall http2 --description="Incoming http allowed." --allowed="tcp:http" --project=<project-id>
      $ gcutil addinstance simple-apache --metadata_from_file=startup-script:install-apache.sh --project=<project-id>

      The startup script will be stored in the instance's metadata server, and will automatically re-run if the server crashes and is automatically restarted.

      Using a startup script from Google Cloud Storage

      It is possible to store your script on Google Cloud Storage and specify the startup script URL when you start an instance, instead of uploading a local file. This is ideal if you don't want to store your script locally or if your script's metadata value exceeds the metadata value limit of 32768 bytes.

      When you specify a startup script URL, Google Compute Engine downloads the script to a temporary file and runs it. When you update your startup script, the instances using this startup script will automatically be able to use the updated script.

      To run a startup script from Google Cloud Storage:

      1. Set up your instance to have internet access.

        Your instance must have internet access to load a script by URL; to enable this, you must launch your instance with an external IP address. Use the --external_ip_address flag to assign an external IP address.

      2. Set up permissions and run your startup script.

        Before you can specify a startup script from Google Cloud Storage, your Google Compute Engine instance needs permissions to the startup script. You can do this two ways:

        • Using service accounts (Recommended): Set up your instance to use service accounts with Google Cloud Storage scopes. Service accounts are ideal for server-to-server interactions that do not need explicit user authorization:
          gcutil addinstance simple-apache --service_account_scope=storage-ro \
           --metadata=startup-script-url:<url> --project=<project-id>

          Note: gcutil provides shorthand aliases for OAuth 2.0 scopes that may be useful for users. In this example, the shorthand for the read-only Google Cloud Storage scope is used, storage-ro. The full scope URI is https://www.googleapis.com/auth/devstorage.read_only, which you can also use if you prefer. For convenience, Google Compute Engine provides a list of aliases for common scopes which you can use with gcutil.


        • Using anonymous access: Set up public-read access on your script for anonymous access
          1. Set the access control list for your startup script to be publicly-accessible. To do this using gsutil, run:
            gsutil setacl public-read <startup_script>

            Warning: Setting up your script for anonymous access means that anyone on the Internet may be able to access it. If you don't want this, you should set up your instance to use service accounts instead (see above).

          2. Start your instance like so:
            gcutil addinstance simple-apache \
             --metadata=startup-script-url:<url>  --project=<project-id>

        <url> can be any publicly readable URL, or, if the startup script is not publicly readable, it can be a Google Storage URL, in the format:

        gs://<bucket>/<file>

        For example:

        gs://mybucket/install-apache.sh

        *Although this command uses a read-only scope to Google Cloud Storage, you can set up the instance to use any of the Google Cloud Storage scopes.

      You can find startup script logging at /var/log/startupscript.log

      You can also view the logging information through the instance's serial console output for any image type:

      gcutil --project=<project-id> getserialportoutput <instance-name>

      The serial console port output can also be viewed at the Google Compute Engine console or through the getSerialOutput() method.

      If your startup script is less than 32768 bytes, you can choose to pass in your startup script as pure metadata, although this is generally not as convenient as saving your startup script in a file. To pass in the script as pure metadata, run a command like the following:

      Debian
      gcutil addinstance mystartupscript --metadata=startup-script:'#! /bin/bash
      apt-get update
      apt-get install -y apache2
      cat <<EOF > /var/www/index.html
      <html><body><h1>Hello World</h1>
      <p>This page was created from a simple startup script!</p>
      </body></html>
      EOF'
      CentOS
      gcutil addinstance mystartupscript --metadata=startup-script:'#! /bin/bash
      yum install -y httpd
      service httpd start
      cat <<EOF > /var/www/html/index.html
      <html><body><h1>Hello World</h1>
      <p>This page was created from a simple startup script!</p>
      </body></html>
      EOF'
      Note: If you specify both the startup-script-url and startup-script metadata values, Google Compute Engine only uses the startup script specified by the startup-script-url and ignores the startup script specified by the startup-script value.

    4. View the page

      Check your instance's status, and when it is listed as RUNNING, browse to http://<your_external_ip>/index.html to see your default page.

    Rerunning a Startup Script

    You can force your startup scripts to rerun on your VM by ssh'ing in and running the following command:

    $ sudo /usr/share/google/run-startup-scripts
    google Running startup script...
    google Finished running startup script...

    You can view a log for all the times you have run your startup scripts at /var/log/startupscript.log.

    Passing in Custom Values

    When you run startup scripts across instances, there may be situations where you would like to use custom values for different instances. For example, say you want to run the previous startup script on different instances and have each instance print out a custom message. You can specify these custom values as custom metadata using key/value pairs during instance creation time. Your startup script can then use these custom values however you see fit.

    For example, the following command creates an instance with a custom metadata key/value pair of foo:bar where foo is the key and bar is the value:

    gcutil addinstance example-instance --metadata='foo:bar' --project=<project-id>

    Now, you can access the value of foo from within an instance by ssh'ing into the instance and querying the metadata server:

    user@example-instance:~$ curl http://metadata/computeMetadata/v1beta1/instance/attributes/foo
    bar

    Similarly, you can also query the metadata server from within your startup script. To do so:

    1. Modify your startup script to query for custom metadata as shown here:
      Debian
      #! /bin/bash
      # Installs apache and a custom homepage
      VALUE_OF_FOO=$(curl http://metadata/computeMetadata/v1beta1/instance/attributes/foo)
      apt-get update
      apt-get install -y apache2
      cat <<EOF > /var/www/index.html
      <html><body><h1>Hello World</h1>
      <p>The value of foo: $VALUE_OF_FOO</p>
      </body></html>
      EOF
      CentOS
      #! /bin/bash
      # Installs apache and a custom homepage
      VALUE_OF_FOO=$(curl http://metadata/computeMetadata/v1beta1/instance/attributes/foo)
      yum install -y httpd
      service httpd start
      cat <<EOF > /var/www/html/index.html
      <html><body><h1>Hello World</h1>
      <p>The value of foo: $VALUE_OF_FOO</p>
      </body></html>
      EOF
    2. Pass your metadata to your instance's metadata server in the startup script as shown here:
      $ gcutil addinstance simple-apache --metadata_from_file=startup-script:install-apache.sh --metadata='foo:bar' --project=<project-id>

    Page: getting-started

    Here are some quick ways to start using the Google Compute Engine API.

    The one-minute experience

    To play around and see what the API can do, without writing any code, visit the OAuth 2 playground to create and try out REST requests. Instant gratification!

    How-Tos

    If you already understand the basics and there are particular things you want to do, see the How Tos pages.

    Reference

    To look up a particular resource type or method, see the Reference.

    Page: api-rate-limits

    API rate limits define the number of requests that can be made to the Google Compute Engine API. API rate limits also apply on a per-project basis. When you use gcutil or the Google Compute Engine console tool, you are also making requests to the API and these requests count towards your API rate limit. If you make a request to the Google Compute Engine API from within an instance (such as an application using a service account to make requests to Google Compute Engine), it also counts towards your API rate limits.

    Each project is subject to the following Google Compute Engine API rate limits:

    • 250,000 requests/day
    • 20 requests/second

    If you need more quota for API requests, you can request more quota using the API rate limits change form.

    Page: robustsystems

    Designing a robust system is important to help mitigate instance downtime and to be prepared for times where your instances fall into a maintenance window or you suffer an unexpected failure.

    Content

    Scheduled Maintenance

    Google periodically performs scheduled maintenance on its infrastructure: patching systems with the latest software, performing routine tests and preventative maintenance, and generally ensuring that our infrastructure is as fast and efficient as possible.

    There are currently two types of scheduled maintenance:

    • Transparent Maintenance

      Transparent maintenance affects only a small piece of the infrastructure in a given zone and Google Compute Engine automatically moves your instances elsewhere in the zone, out of the way of the maintenance work. For more information, see Transparent Maintenance.

    • Scheduled Zone Maintenance Windows

      For scheduled zone maintenance windows, Google takes an entire zone offline for roughly two weeks to perform various, disruptive maintenance tasks. For more information, see Scheduled Zone Maintenance Windows.

    The type of scheduled maintenance your instances will experience currently depends on the zone your instances are running in. Currently, only US and Asia zones support transparent maintenance. All other zones still have scheduled maintenance windows where the whole zone is taken offline for two weeks. We are in the process of planning and rolling out the hardware and software required to support transparent maintenance for all of our zones, so check back periodically for updates. For more information, see Maintenance Events.

    Types of Failures

    At some point, one or more of your instances will be lost due to system or hardware failures, or due to a scheduled zone maintenance window. Some of the failures you may experience include:

    • Unexpected Single Instance Failure

      Unexpected single instance losses can be due to hardware or system failure. We are working to make this as rare as possible but you should expect a higher level of single instance losses during the preview period. To mitigate these events, use persistent disks and start up scripts.

    • Unexpected Single Instance Reboot

      At some point in time, you will experience an unexpected single instance failure and reboot. Unlike unexpected single instance losses, your instance fails and is automatically rebooted by the Google Compute Engine service. To help mitigate these events, back up your data, use persistent disks and start up scripts.

    • Zone maintenance and failures
      • Zone failures - Zone failures are rare, unexpected failures within a zone that can cause your instances to go down.
      • Zone maintenance - Scheduled zone maintenance windows are planned periods where a zone is taken offline for servicing. In these cases, you will receive prior notification of the maintenance window. For zones that support transparent scheduled maintenance events, you can keep your instances running through maintenance events. You can do this by configuring instance scheduling options so that Google Compute Engine automatically migrates your instances away from maintenance events. For more information, see Setting Instance Scheduling Options.


      To mitigate zone failures and maintenance windows, create diversity across zones and implement load balancing. You should also back up your data or migrate your persistent disk data to another zone.

    How to Design Robust Systems

    To help mitigate instance failures, you should design your application on the Google Compute Engine service to be robust against failures, network interruptions, and unexpected disasters. A robust system should be able to gracefully handle failures, including redirecting traffic from a downed instance to a live instance or automating tasks on reboot.

    Here are some general tips to help you design a robust system against failures.

    Distribute your instances

    Create instances across many zones so that you have alternative VM instances to point to if a zone containing one of your instances is taken down for maintenance or fails. If you host all your instances in the same zone, you won’t be able to access any of these instances if that zone is unreachable.

    Use Google Compute Engine load balancing

    Google Compute Engine offers a load balancing service that helps you support periods of heavy traffic so that you don't overload your instances. With the load balancing service, you can pick a region with multiple zones and deploy your application on instances within these zones. Then, you can configure a forwarding rule that can spread traffic across all virtual machine instances in all zones within the region. Each forwarding rule can define one entry point to your application using an external IP address.

    Lastly, when a zone maintenance window approaches, you can add more virtual machines in the available zones to prepare for an increased load. While the zone is offline during the maintenance window, the load balancing service will automatically direct traffic away from the terminated instances and instead use instances in healthy zones. Once the maintenance window is over and the zone is back online, you can choose to migrate your virtual machines back, or keep them in the new zone. In this way, your external clients can access your application without any service disruptions, even if some of your instances are taken offline during maintenance windows. In addition, the load balancing service also offers instance health checking, providing support in detecting and handling instance failures.

    Alternatively, if you have already replicated your instances across many zones and many regions, you can create a forwarding rule for each of the regions and use it as the entry point for instances in that region. Then, you can use DNS-based load balancing to distribute the load over these entry points into each region. When a maintenance window approaches, you can adjust your DNS settings or increase the number of virtual machine instances in other zones in the same region. Once the maintenance window is over, you can recreate the virtual machines in the old zone, re-adjust the DNS setting, and tear down any backup virtual machines that are no longer needed.

    For more information, see Google Compute Engine load balancing and Round-robin DNS.
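
    As a rough sketch of this setup (the resource names are placeholders, and the exact target pool flags are documented on the load-balancing page), you might create a target pool spanning instances in two zones and a forwarding rule as the single entry point:

    $ gcutil --project=my-project addtargetpool www-pool --region=us-central1 \
        --instances=us-central1-a/instances/www-a,us-central1-b/instances/www-b
    $ gcutil --project=my-project addforwardingrule www-rule --region=us-central1 \
        --protocol=TCP --port_range=80 --target_pool=www-pool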

    Use startup scripts

    Startup scripts are an efficient and invaluable way to bootstrap your instances. If an instance fails, it can bring itself back up using startup scripts, installing and accessing the appropriate resources as if it had never gone down. Instead of configuring your VM instances via custom images, it can be beneficial to configure them using startup scripts. Startup scripts run whenever the VM is rebooted or restarted due to failures, and can be used to install software and updates, and to ensure that services are running within the VM. Codifying the changes to configure a VM in a startup script is easier than figuring out what files or bytes have changed on a custom image.

    You can run startup scripts using the gcutil tool by specifying the --metadata_from_file=startup-script:<script> flag with the gcutil addinstance command:

    $ gcutil addinstance simple-apache --metadata_from_file=startup-script:install-apache.sh --project=my-project

    For more information, see startup scripts.

    Back up your data

    If you need access to data on a VM instance or persistent disk that is in a zone scheduled to be taken offline, you can back up your files to Google Cloud Storage, your local computer, or migrate your data to another persistent disk in another zone.

    To copy files from a VM instance to Google Cloud Storage:

    1. Log in to your instance using gcutil:
      $ gcutil ssh my-first-instance --project=my-project
    2. If you have never used gsutil on this VM instance, set up your credentials.
      $ gsutil config

      Alternatively, if you have set up your instance to use a service account with a Google Cloud Storage scope, you can skip this step and the next step.

    3. Follow the instructions to authenticate to Google Cloud Storage.
    4. Copy your data to Google Cloud Storage by using the following command:
      $ gsutil cp <file1> <file2> <file3> ...  gs://<your bucket>

    You can also use the gcutil tool to copy files to a local computer. For more information, see Copying Files To/From an Instance.
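
    For example, a sketch using the gcutil pull subcommand (assuming it is available in your gcutil release; see Copying Files To/From an Instance for the authoritative syntax) might copy a file from the instance to your local machine like this, where the file paths are placeholders:

    $ gcutil pull my-first-instance /home/me/backup.tar.gz /tmp/ --project=my-project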

    Page: access

    You can give other users access to Google Compute Engine by adding them to your project or by setting up ssh access.

    Contents

    1. Adding Users to Your Projects
    2. Letting Users ssh into Your Instances

    Adding Users to Your Projects

    You can add a user to your project using the Google Developers Console. When you add a user to your Google Compute Engine project, they get access to all Google Compute Engine resources in that project, as determined by their role (such as Can View, Can Edit, or Is Owner). For example, if you add a user as an owner, they will be able to add and modify Google Compute Engine resources in the project.

    To get to the Teams page for your project:

    1. Log into Google Developers Console.
    2. Click on the project where you want to manage team members.
    3. Click on the wrench icon in the top-right corner of the page.

    4. Select Teams from the drop-down menu.
    5. Add a new team member by clicking on the Add Member button.
    6. To delete a team member, hover over their email and click on the trash can symbol that appears.

    You can assign one of three different user roles in the Google Developers Console, each of which maps to the following permissions:

    Can View: Provides READ access.
    • Can see the state of your instances
    • Can list and get any resource type

    Can Edit: Provides "Can View" access, plus:
    • Can modify instances
    • On standard images released after March 22, 2012, can ssh into the project's instances

    Is Owner: Provides "Can Edit" access, plus:
    • Can change membership of the project

    Letting Users ssh into Your Instances

    You can authorize users to ssh into your Google Compute Engine instances by adding them to your project with the Can Edit or Is Owner role. As described above, this also lets the user access all resources within the project.

    Page: sending-mail

    Google Compute Engine does not allow outbound connections on ports 25, 465, and 587, but you can still set up your instances to send mail by relaying through partner services, such as SendGrid, on an alternate SMTP port. This document discusses how to set up your instances to send email using SendGrid.

    SendGrid is a partner service that provides Google Compute Engine customers with a free or paid SendGrid account that you can use to send mail from Google Compute Engine instances. SendGrid offers a number of advantages:

    • A free tier* to Google Compute Engine customers that includes 25,000 transactional email messages per month
    • Ability to send emails from addresses other than @gmail.com
    • No daily limit on the number of transactional email messages

    Prerequisites

    Before you can use SendGrid, you must first sign up for the service from SendGrid's Google partner page.* When signing up, provide the domain and email address from which you would like to send email messages. This may or may not be the same domain and email you used to sign up for Google Compute Engine.

    If you use an email account that doesn't match your specified domain (for example, a Gmail account), you will need to provide more information to SendGrid before they can provision your account. Complete the sign-up process and SendGrid will contact you for more information, if necessary.

    Once you have signed up and your account has been provisioned by SendGrid, follow one of the examples below to set up your mail configuration. If you don't see your email solution in this list of examples, SendGrid provides extensive documentation for integration with most common SMTP servers, libraries, and frameworks. See SendGrid's documentation for more examples.


    Here are the SendGrid-specific SMTP settings that are used to configure clients, for your reference:

  • Host: smtp.sendgrid.net
  • Port: 2525

    Note that this port is different from the standard ports 25 and 587.

    Postfix

    To set up Postfix on your instances to use SendGrid, follow the instructions below.

    Postfix on Debian Wheezy
    1. ssh into your instances:
      gcutil --project=<project-id> ssh <instance-name>
    2. Become a superuser and set a safe umask:

      user@test-wheezy:~$ sudo su -
      root@test-wheezy:~# umask 077
      
    3. Install the Postfix Mail Transport Agent. When prompted, accept the default choices for domain names but select the Local Only configuration.

      root@test-wheezy:~# apt-get install postfix
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following extra packages will be installed:
        ssl-cert
      Suggested packages:
        procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre sasl2-bin dovecot-common resolvconf postfix-cdb mail-reader ufw postfix-doc openssl-blacklist
      The following NEW packages will be installed:
        postfix ssl-cert
      0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
      Need to get 1611 kB of archives.
      After this operation, 3653 kB of additional disk space will be used.
      Do you want to continue [Y/n]? Y
      
      ...
      
      [ ok ] Stopping Postfix Mail Transport Agent: postfix.
      [ ok ] Starting Postfix Mail Transport Agent: postfix.
      
    4. Create a file named /etc/postfix/sasl_passwd containing the credentials to be used for authentication. It should be a single line with [smtp.sendgrid.net]:2525 followed by your username and password, separated by a colon:

      root@test-wheezy:~# cat > /etc/postfix/sasl_passwd << EOF
      [smtp.sendgrid.net]:2525 YOUR_SENDGRID_USERNAME:YOUR_SENDGRID_PASSWORD
      EOF
      
    5. Use the postmap utility to generate a .db file from the plaintext file you just created:

      root@test-wheezy:~# postmap /etc/postfix/sasl_passwd
      root@test-wheezy:~# ls -l /etc/postfix/sasl_passwd*
      -rw------- 1 root root    68 Jun  1 11:42 /etc/postfix/sasl_passwd
      -rw------- 1 root root 12288 Jun  1 11:42 /etc/postfix/sasl_passwd.db
      
    6. Edit /etc/postfix/main.cf and comment out the following lines:

      default_transport = error
      relay_transport = error

      Next, add the following contents to the file:

      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = static:<yourSendGridUsername>:<yourSendGridPassword>
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = may
      header_size_limit = 4096000
      relayhost = [smtp.sendgrid.net]:2525
      

      Save your changes.

    7. Reload Postfix to pick up the configuration changes:

      root@test-wheezy:~# postfix reload
      postfix/postfix-script: refreshing the Postfix mail system
      
    8. Test that mail delivery is working by sending a message to an external address (replace EMAIL@EXAMPLE.COM with your own address):

      root@test-wheezy:~# printf 'Subject: test\r\n\r\npassed' | sendmail EMAIL@EXAMPLE.COM
      root@test-wheezy:~# tail -n 5 /var/log/syslog
      Aug 13 07:44:55 sendgrid postfix/pickup[17927]: 5CB31B6: uid=0 from=<root>
      Aug 13 07:44:55 sendgrid postfix/cleanup[18762]: 5CB31B6: message-id=<20130813074455.5CB31B6@sendgrid.c.testproject121.internal>
      Aug 13 07:44:55 sendgrid postfix/qmgr[17926]: 5CB31B6: from=<root@sendgrid.c.myproject.internal>, size=325, nrcpt=1 (queue active)
      Aug 13 07:44:56 sendgrid postfix/smtp[18764]: 5CB31B6: to=<EMAIL@EXAMPLE.COM>, relay=smtp.sendgrid.net[50.97.69.148]:2525, delay=0.66, delays=0.03/0/0.44/0.18, dsn=2.0.0, status=sent (250 Delivery in progress)
      Aug 13 07:44:56 sendgrid postfix/qmgr[17926]: 5CB31B6: removed

      Note the status=sent and the successful server response code (250).

    9. Remove your sasl_passwd file.

      Now that the .db file has been generated, the plaintext sasl_passwd file is no longer needed.

      root@test-wheezy:~# rm -f /etc/postfix/sasl_passwd
    Postfix on CentOS
    1. ssh into your instance:

      gcutil --project=<project-id> ssh <instance-name>
    2. Become a superuser and set a safe umask:

      [user@test-centos ~]$ sudo su -
      [root@test-centos ~]# umask 077
      
    3. Create a file named /etc/postfix/sasl_passwd containing the credentials to be used for authentication. It should be a single line with [smtp.sendgrid.net]:2525 followed by your SendGrid username and password, separated by a colon:

      [root@test-centos ~]# cat > /etc/postfix/sasl_passwd << EOF
      [smtp.sendgrid.net]:2525 YOUR_SENDGRID_USERNAME:YOUR_SENDGRID_PASSWORD
      EOF
      
    4. Use the postmap utility to generate a .db file from the plaintext file you just created:

      [root@test-centos ~]# postmap /etc/postfix/sasl_passwd
      [root@test-centos ~]# ls -l /etc/postfix/sasl_passwd*
      -rw------- 1 root root    68 Jun  1 10:50 /etc/postfix/sasl_passwd
      -rw------- 1 root root 12288 Jun  1 10:51 /etc/postfix/sasl_passwd.db
      
    5. Append the following block of configuration to the end of /etc/postfix/main.cf:

      [root@test-centos ~]# cat >> /etc/postfix/main.cf << EOF
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = static:<yourSendGridUsername>:<yourSendGridPassword>
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = may
      header_size_limit = 4096000
      relayhost = [smtp.sendgrid.net]:2525
      EOF
      
    6. Reload Postfix to pick up the configuration changes:

      [root@test-centos ~]# postfix reload
      postfix/postfix-script: refreshing the Postfix mail system
      
    7. You should be all set. Test that mail delivery is working by sending a message to an external address (replace EMAIL@EXAMPLE.COM with your own address):

      [root@test-centos ~]# echo test | mail -s test EMAIL@EXAMPLE.COM
      [root@test-centos ~]# tail -n 5 /var/log/maillog
      Aug 13 08:10:23 sendgridcentos postfix/cleanup[2167]: D043824EE: message-id=<20130813081023.D043824EE@sendgridcentos.localdomain>
      Aug 13 08:10:23 sendgridcentos postfix/qmgr[2160]: D043824EE: from=<root@sendgridcentos.localdomain>, size=454, nrcpt=1 (queue active)
      Aug 13 08:10:24 sendgridcentos postfix/smtp[2169]: D043824EE: to=<EMAIL@EXAMPLE.COM>, relay=smtp.sendgrid.net[75.126.83.211]:2525, delay=0.91, delays=0.08/0.24/0.41/0.18, dsn=2.0.0, status=sent (250 Delivery in progress)
      Aug 13 08:10:24 sendgridcentos postfix/qmgr[2160]: D043824EE: removed
      ...
      

      Note the status=sent and the successful server response code (250).

    8. Remove your sasl_passwd file.

      Now that the .db file has been generated, the plaintext sasl_passwd file is no longer needed.

      [root@test-centos ~]# rm -f /etc/postfix/sasl_passwd

    Java

    The following Java sample uses the javax.mail library to construct and send an email message through SendGrid. The SMTP_HOST_NAME, SMTP_AUTH_USER, SMTP_AUTH_PWD, and SMTP_PORT constants hold the settings for your SendGrid account.

    import java.util.Properties;
    
    import javax.mail.Authenticator;
    import javax.mail.BodyPart;
    import javax.mail.Message;
    import javax.mail.Multipart;
    import javax.mail.PasswordAuthentication;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeBodyPart;
    import javax.mail.internet.MimeMessage;
    import javax.mail.internet.MimeMultipart;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    
    public class SendGridDemoHandler extends HttpServlet {
    
      @Override
      public void doPost(HttpServletRequest request, HttpServletResponse response) {
        System.out.println(getClass().getResource("/javax/mail/Address.class"));
        try {
          send(request);
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
    
      private static final String SMTP_HOST_NAME = "smtp.sendgrid.net";
      private static final String SMTP_AUTH_USER = "<YOUR_SENDGRID_USERNAME>";
      private static final String SMTP_AUTH_PWD = "<YOUR_SENDGRID_PASSWORD>";
      private static final int SMTP_PORT = 2525;
    
      private void send(HttpServletRequest request) throws Exception {
        Properties props = new Properties();
        props.put("mail.transport.protocol", "smtp");
        props.put("mail.smtp.auth", "true");
    
        Authenticator auth = new SMTPAuthenticator();
        Session mailSession = Session.getDefaultInstance(props, auth);
        Transport transport = mailSession.getTransport();
    
        MimeMessage message = new MimeMessage(mailSession);
    
        Multipart multipart = new MimeMultipart("alternative");
    
        // Sets up the contents of the email message
        BodyPart part1 = new MimeBodyPart();
        part1.setText(request.getParameter("Message"));
    
        multipart.addBodyPart(part1);
    
        message.setContent(multipart);
        message.setFrom(new InternetAddress("me@yourdomain.com"));
        message.setSubject(request.getParameter("Subject"));
        message.addRecipient(
            Message.RecipientType.TO, new InternetAddress(request.getParameter("To")));
    
        // Sends the email
        transport.connect(SMTP_HOST_NAME, SMTP_PORT, SMTP_AUTH_USER, SMTP_AUTH_PWD);
        transport.sendMessage(message, message.getRecipients(Message.RecipientType.TO));
        transport.close();
      }
    
      // Authenticates to SendGrid
      private class SMTPAuthenticator extends javax.mail.Authenticator {
        @Override
        public PasswordAuthentication getPasswordAuthentication() {
          String username = SMTP_AUTH_USER;
          String password = SMTP_AUTH_PWD;
          return new PasswordAuthentication(username, password);
        }
      }
    }

    node.js

    The following instructions describe how to use SendGrid with node.js on Debian Wheezy and CentOS.

    Debian Wheezy
    1. ssh into your instance:
      gcutil --project=<project-id> ssh <instance-name>
    2. Become a superuser and set a safe umask:
      user@test-wheezy:~$ sudo su -
      root@test-wheezy:~# umask 077
    3. Update your package repositories:
      root@test-wheezy:~# sudo apt-get update
    4. Install node.js dependencies:
      root@test-wheezy:~# sudo apt-get install git-core curl build-essential openssl libssl-dev -y
    5. Clone node.js repo from github:
      root@test-wheezy:~# git clone https://github.com/joyent/node.git
    6. Change directory to the node.js source tree:
      root@test-wheezy:~# cd node
    7. Configure node software for this OS and virtual machine:
      root@test-wheezy:~# ./configure
    8. Build node.js, npm, and related objects:
      root@test-wheezy:~# make
    9. Install node.js, npm, and other software in the default location:
      root@test-wheezy:~# sudo make install
    10. Install the mailer package:
      root@test-wheezy:~# npm install mailer
    11. In the node directory, create a new file named sendmail.js containing the following JavaScript:
      var email = require('mailer');
      
      email.send({
          host: 'smtp.sendgrid.net',
          port : '2525',
          domain: 'smtp.sendgrid.net',
          authentication: 'login',
          username: '<YOUR_SENDGRID_USERNAME>',
          password: '<YOUR_SENDGRID_PASSWORD>',
          to : 'EMAIL@EXAMPLE.COM',
          from : 'ANOTHER_EMAIL@ANOTHER_EXAMPLE.COM',
          subject : 'test email from node.js on a Compute Engine VM',
          body : 'Hello!\n\nThis is a test email from node.js on a VM.',
        },
        // Callback function in case of error.
        function(err, result){
          if(err){ console.log(err); }
        });
    12. Execute the program to send an email message through SendGrid:
      root@test-wheezy:~# node sendmail
      
    CentOS
    1. ssh into your instance:
      gcutil --project=<project-id> ssh <instance-name>
    2. Become a superuser and set a safe umask:
      user@test-centos:~$ sudo su -
      root@test-centos:~# umask 077
    3. Update package repositories:
      root@test-centos:~# yum update
      ...
      Install       9 Package(s)
      Upgrade     218 Package(s)
      
      Total download size: 124 M
      Is this ok [y/N]: y
    4. Install node.js dependencies:
      root@test-centos:~# yum install git-core curl openssl openssl-devel -y
      ...
      root@test-centos:~# yum groupinstall "Development Tools"
      ...
    5. Clone node.js repository from github:
      root@test-centos:~# git clone https://github.com/joyent/node.git
    6. Change directory to the node.js source tree:
      root@test-centos:~# cd node
    7. Configure node software for this OS and virtual machine:
      root@test-centos:~# ./configure
    8. Build node.js, npm, and related objects:
      root@test-centos:~# make
    9. Install node.js, npm, and other software in the default location:
      root@test-centos:~# sudo make install
    10. Install the mailer package:
      root@test-centos:~# npm install mailer
    11. In the node directory, create a new file named sendmail.js containing the following JavaScript:
      var email = require('mailer');
      
      email.send({
          host: 'smtp.sendgrid.net',
          port : '2525',
          domain: 'smtp.sendgrid.net',
          authentication: 'login',
          username: '<YOUR_SENDGRID_USERNAME>',
          password: '<YOUR_SENDGRID_PASSWORD>',
          to : 'EMAIL@EXAMPLE.COM',
          from : 'ANOTHER_EMAIL@ANOTHER_EXAMPLE.COM',
          subject : 'test email from node.js on a Compute Engine VM',
          body : 'Hello!\n\nThis is a test email from node.js on a VM.',
        },
        // Callback function in case of error.
        function(err, result){
          if(err){ console.log(err); }
        });
    12. Execute the program to send an email message through SendGrid:
      root@test-centos:~# node sendmail
      
    * Google will be compensated for customers who sign up for a non-free account.

    Page: performance

    This document covers some techniques you can use to improve the performance of your application. In some cases, examples from other APIs or generic APIs are used to illustrate the ideas presented. However, the same concepts are applicable to the Google Compute Engine API.

    Contents

    1. Using gzip
    2. Working with partial resources
      1. Partial response

    Using gzip

    An easy and convenient way to reduce the bandwidth needed for each request is to enable gzip compression. Although this requires additional CPU time to uncompress the results, the tradeoff with network costs usually makes it very worthwhile.

    In order to receive a gzip-encoded response you must do two things: Set an Accept-Encoding header, and modify your user agent to contain the string gzip. Here is an example of properly formed HTTP headers for enabling gzip compression:

    Accept-Encoding: gzip
    User-Agent: my program (gzip)
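
    For example, with curl you could request a gzip-encoded response from an API endpoint like this (the URL is the fictional Demo API used later on this page, and --compressed tells curl to decompress the result automatically):

    curl --compressed -H "Accept-Encoding: gzip" -H "User-Agent: my program (gzip)" "https://www.googleapis.com/demo/v1?key=YOUR-API-KEY"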
    

    Working with partial resources

    Another way to improve the performance of your API calls is by requesting only the portion of the data that you're interested in. This lets your application avoid transferring, parsing, and storing unneeded fields, so it can use resources including network, CPU, and memory more efficiently.

    Partial response

    By default, the server sends back the full representation of a resource after processing requests. For better performance, you can ask the server to send only the fields you really need and get a partial response instead.

    To request a partial response, use the fields request parameter to specify the fields you want returned. You can use this parameter with any request that returns response data.

    Example

    The following example shows the use of the fields parameter with a generic (fictional) "Demo" API.

    Simple request: This HTTP GET request omits the fields parameter and returns the full resource.

    https://www.googleapis.com/demo/v1?key=YOUR-API-KEY
    

    Full resource response: The full resource data includes the following fields, along with many others that have been omitted for brevity.

    {
      "kind": "demo",
      ...
      "items": [
      {
        "title": "First title",
        "comment": "First comment.",
        "characteristics": {
          "length": "short",
          "accuracy": "high",
          "followers": ["Jo", "Will"],
        },
        "status": "active",
        ...
      },
      {
        "title": "Second title",
        "comment": "Second comment.",
        "characteristics": {
          "length": "long",
          "accuracy": "medium"
          "followers": [ ],
        },
        "status": "pending",
        ...
      },
      ...
      ]
    }
    

    Request for a partial response: The following request for this same resource uses the fields parameter to significantly reduce the amount of data returned.

    https://www.googleapis.com/demo/v1?key=YOUR-API-KEY&fields=kind,items(title,characteristics/length)
    

    Partial response: In response to the request above, the server sends back a response that contains only the kind information along with a pared-down items array that includes only the title and the length characteristic in each item.

    200 OK
    
    {
      "kind": "demo",
      "items": [
      {
        "title": "First title",
        "characteristics": {
          "length": "short"
        }
      },
      {
        "title": "Second title",
        "characteristics": {
          "length": "long"
        }
      },
      ...
      ]
    }

    Note that the response is a JSON object that includes only the selected fields and their enclosing parent objects.

    Details on how to format the fields parameter are covered next, followed by more details about what exactly gets returned in the response.

    Fields parameter syntax summary

    The format of the fields request parameter value is loosely based on XPath syntax. The supported syntax is summarized below, and additional examples are provided in the following section.

    • Use a comma-separated list to select multiple fields.
    • Use a/b to select a field b that is nested within field a; use a/b/c to select a field c nested within b.

      Exception: For API responses that use "data" wrappers, where the response is nested within a data object that looks like data: { ... }, do not include "data" in the fields specification. Including the data object with a fields specification like data/a/b causes an error. Instead, just use a fields specification like a/b.

    • Use a sub-selector to request a set of specific sub-fields of arrays or objects by placing expressions in parentheses "( )".

      For example: fields=items(id,author/email) returns only the item ID and author's email for each element in the items array. You can also specify a single sub-field, where fields=items(id) is equivalent to fields=items/id.

    • Use wildcards in field selections, if needed.

      For example: fields=items/pagemap/* selects all objects in a pagemap.

    More examples of using the fields parameter

    The examples below include descriptions of how the fields parameter value affects the response.

    Note: As with all query parameter values, the fields parameter value must be URL encoded. For better readability, the examples in this document omit the encoding.

    Identify the fields you want returned, or make field selections.
    The fields request parameter value is a comma-separated list of fields, and each field is specified relative to the root of the response. Thus, if you are performing a list operation, the response is a collection, and it generally includes an array of resources. If you are performing an operation that returns a single resource, fields are specified relative to that resource. If the field you select is (or is part of) an array, the server returns the selected portion of all elements in the array.

    Here are some collection-level examples:

    • items: Returns all elements in the items array, including all fields in each element, but no other fields.
    • etag,items: Returns both the etag field and all elements in the items array.
    • items/title: Returns only the title field for all elements in the items array.

      Whenever a nested field is returned, the response includes the enclosing parent objects. The parent fields do not include any other child fields unless they are also selected explicitly.

    • context/facets/label: Returns only the label field for all members of the facets array, which is itself nested under the context object.
    • items/pagemap/*/title: For each element in the items array, returns only the title field (if present) of all objects that are children of pagemap.

    Here are some resource-level examples:

    • title: Returns the title field of the requested resource.
    • author/uri: Returns the uri sub-field of the author object in the requested resource.
    • links/*/href: Returns the href field of all objects that are children of links.

    Request only parts of specific fields using sub-selections.

    By default, if your request specifies particular fields, the server returns the objects or array elements in their entirety. You can specify a response that includes only certain sub-fields. You do this using "( )" sub-selection syntax, as in the example below.

    • items(title,author/uri): Returns only the values of the title and the author's uri for each element in the items array.
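
    As a Compute Engine-specific sketch, you could combine the fields parameter with an instances list request like the following; the project and zone are placeholders, the access token is assumed to have been obtained separately through OAuth 2.0, and the fields value is shown unencoded for readability, as with the other examples on this page:

    curl -H "Authorization: Bearer <access-token>" "https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances?fields=items(name,status)"

    This asks the server to return only the name and status of each instance instead of the full resource representation.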

    Handling partial responses

    After a server processes a valid request that includes the fields query parameter, it sends back an HTTP 200 OK status code, along with the requested data. If the fields query parameter has an error or is otherwise invalid, the server returns an HTTP 400 Bad Request status code, along with an error message telling the user what was wrong with their fields selection (for example, "Invalid field selection a/b").

    Here is the partial response example shown in the introductory section above. The request uses the fields parameter to specify which fields to return.

    https://www.googleapis.com/demo/v1?key=YOUR-API-KEY&fields=kind,items(title,characteristics/length)
    

    The partial response looks like this:

    200 OK
    
    {
      "kind": "demo",
      "items": [
      {
        "title": "First title",
        "characteristics": {
          "length": "short"
        }
      },
      {
        "title": "Second title",
        "characteristics": {
          "length": "long"
        }
      },
      ...
      ]
    }

    Note: For APIs that support query parameters for data pagination (maxResults and nextPageToken, for example), use those parameters to reduce the results of each query to a manageable size. Otherwise, the performance gains possible with partial response might not be realized.

    Page: release-notes

    This page contains release notes for each version of the Google Compute Engine API.

    Current version: v1

    Release History

    April 17, 2014

    API

    • Updated default Compute Engine API rate limit from 50,000 requests/day to 250,000 requests/day. See API rate limits for more information.

    Metadata Server

    • Introduced new Metadata-Flavor: Google header to replace the X-Google-Metadata-Request: True header. This also allows users to easily detect if they are running in Compute Engine by querying for the new header. For more information, see Metadata Server.
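
      As a quick sketch, a request using the new header might look like this (the path shown, instance/hostname, is one of the standard metadata entries):

      curl -H "Metadata-Flavor: Google" "http://metadata/computeMetadata/v1/instance/hostname"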

    April 14, 2014

    Zones

    • Introduced an Asia Pacific region (asia-east1) and two new supported zones, asia-east1-a and asia-east1-b.

    April 09, 2014

    Images & Kernels

    • Released new images v20140408 to address the OpenSSL security bulletin (CVE-2014-0160). New images include:
      • debian-7-wheezy-v20140408
      • backports-debian-7-wheezy-v20140408
      • centos-6-v20140408
      • rhel-6-v20140408

    April 07, 2014

    Images & Kernels

    • RHEL images have moved to General Availability status and are open to all users and projects.

      Note that there is an additional fee for using premium operating systems, including RHEL. Please review the price sheet for more information.

    • Added new Red Hat Cloud Access feature, which allows users to use their RHEL licenses on Compute Engine virtual machine instances.

    API

    • Removed support for v1beta16. Please transition to using v1 if you haven't already.

    April 02, 2014

    gcutil

    Release 1.15.0

    • New features
      • Added feature where gcutil prompts the user to set an initial Windows password in the addinstance command if the source image is from a Google Windows project.

    March 25, 2014

    Introduced sustained use discounts

    Sustained use discounts lower the effective price of your instances as your usage goes up. When you use a virtual machine for an entire month, this amounts to an additional 30% discount. For more information, see the price sheet.

    Sustained use discounts are effective starting April 1st, 2014.

    Images & Kernels

    • Windows Server images are now available in limited preview.

      Although we do not currently charge for use, you can review the price sheet for the intended Windows Server image pricing.

    • SUSE images are now generally available to all users.

      Note that Compute Engine will start charging for SUSE images on April 1st, 2014. See the price sheet for more information.

    Replica Pool

    Introduced new Replica Pool service which allows you to create a managed pool of virtual machines based on a reusable template. For more information, see the Replica Pool documentation, or the Replica Pool API reference.

    March 19, 2014

    Images & Kernels

    • RHEL images are now in open preview with a new image version, v20140318.

      RHEL images are available to all users at no extra cost until April 1, 2014. On April 1, 2014, Compute Engine will start charging for use of these images according to the price sheet.

    • Released new Debian, CentOS, and Debian Backports images, v20140318.
      • For Debian images, network time protocol (NTP) is now configured to use Google services instead of the public NTP pool.
    • Updated image packages
      • Google Daemon now syncs ssh keys immediately instead of at per-minute intervals.
      • Improved systemd integration.
      • Fixed Google Daemon data corruption bug.
      • Startup scripts are now downloaded with curl instead of wget.
      • Removed harmless warnings.

    March 14, 2014

    gcutil

    Release 1.14.2

    • Bug fixes
      • Fixed issue where performing gcutil moveinstances with instances with disks whose autoDelete status is set to true would lead to loss of user data. gcutil moveinstances is now compatible with Compute Engine API v1 only.

    March 10, 2014

    Instances

    • Temporarily disabled support for Advanced Vector Extensions (AVX).

      Compute Engine has disabled support for AVX due to a stability issue that we are actively investigating. We will re-enable AVX support as soon as we find and fix the root cause.

    March 06, 2014

    Images & Kernels

    • SUSE images are now in open preview.

      This means that SUSE images are available to all users at no extra cost until April 1, 2014. On April 1, 2014, Compute Engine will start charging for use of these images according to the price sheet.

    March 05, 2014

    API

    • Added the ability to create and delete a root persistent disk when a virtual machine instance is created or deleted. See the Instances documentation for more information.

    Persistent Disks

    • Added support for restoring persistent disk snapshots to a persistent disk of a user-specified size.

      It is now possible to use the sizeGb parameter when restoring a snapshot. This can be used to create a persistent disk that is larger than the persistent disk snapshot. See Restoring snapshots to a Larger Size for more information.

    • Added support for setting the auto-delete state of a read-write persistent disk.

    gcutil

    Release 1.14.0

    • New Features
      • Switched to new, single API call for creating a virtual machine instance with a root persistent disk.
      • Added new command, setinstancediskautodelete, that sets the auto-delete option for persistent disks attached to virtual machine instances.
      • Added support for specifying a disk size when creating a disk using a snapshot.
      • Decreased the time spent waiting for SSH keys to propagate during initial instance creation from 120 seconds to 10 seconds.

    February 20, 2014

    Instances

    • Added support for Advanced Vector Extensions (AVX) in new virtual machine instances.

      All virtual machine instances created after February 11, 2014 have this feature enabled. To check if your virtual machine instance has this enabled, run the following command in your virtual machine instance:

      me@my-inst:~$ cat /proc/cpuinfo | grep avx
      <output should contain 'avx', 'xsave', and 'xsaveopt'>

      If you need to update your instance to use AVX, you must delete and recreate the instance.

    December 17, 2013

    Networking

    Released new Protocol Forwarding feature

    Protocol forwarding allows you to forward traffic to a single virtual machine instance by using a target instance, and provides support for a number of additional features.

    See Protocol forwarding for more information.

    December 03, 2013

    Google Compute Engine is now generally available!

    Users can now feel confident using Compute Engine to support mission-critical workloads with 24/7 support and a 99.95% monthly SLA. The move to general availability also comes with a host of new features and changes, detailed below. For a full list, review our transition guide.

    API

    Released new v1 API

    v1beta16 is now deprecated and customers should switch to v1. v1beta16 will remain available until March 04, 2014 and v1beta15 will be discontinued on January 03, 2014. For the full details, read our transition guide.

    Changes in v1 include (but are not limited to):

    • New support for custom kernels and removed support for Google-provided kernels

      Users can now use custom kernels with their images and no longer need to use Google-built kernels. The Kernels collection has been removed from v1 and all new images will include embedded kernel binaries as part of the image.

    • Removed scratch boot disks from v1.

      All scratch boot disks have been deprecated and we recommend transitioning to using persistent disks. In the v1 API, it is not possible to create a scratch boot disk.

    • Deprecated *-d machine types.

      All *-d machine types have been deprecated and are no longer supported. Although you can still create instances with these machine types, we do not recommend this and will eventually remove these machine types completely.

    Machine Types

    • New Machine Types

      We've added new 16-core machine types that are now available for your instances. For more information, review machine types and pricing.

    Persistent Disks

    We've introduced a new persistent disk model. Persistent disk performance now scales linearly with the size of the disk. Additionally, we are removing I/O charges for persistent disks completely and lowering the price of persistent disk storage. For more information, review the pricing documentation.

    Metadata Server

    Released new metadata server version v1

    The following are new changes with the v1 metadata server:

    • Requests to the metadata server will now require a security header.

      All requests to the metadata server will require the following header:

      X-Google-Metadata-Request: True
    • Requests containing the header X-Forwarded-For will automatically be rejected.

    gcutil

    Release 1.12.0

    • New Features
      • Added awareness of deprecated machine types to listmachinetypes and the machine type prompt when creating instances.
      • Made --persistent_boot_disk the default setting for the addinstance subcommand since scratch disks were removed from the v1 API. The --nopersistent_boot_disk flag can only be specified using the v1beta16 API.
      • Deprecated all kernel-related subcommands and flags when using the v1 API.
    • Other Changes
      • Updated gcutil to be distributed with the Cloud SDK.
      • Raised the default size of persistent disks to 500GB.
      • Made v1 the default API version.

    Images & Kernels

    As part of the Google Compute Engine move to using full disk operating system images, we have made the following changes:

    • Released new backports-debian-wheezy image, which allows users to access new features and bug fixes from the backports kernel. See Using backports images for more information.
    • Deprecated Kernels collection.
    • Remove all support for kernels from the v1 API.

    Additionally, FreeBSD, SELinux, and CoreOS images are now known to be functional on Compute Engine instances with the move to full disk operating system images.

    New Premium Operating Systems limited preview program

    The new premium OS limited preview program lets you use SUSE or Red Hat Enterprise Linux (RHEL) images built explicitly for Compute Engine instances. Users who are interested in the program can review the documentation and sign up for the program on the Premium OS page.

    November 25, 2013

    Images & Kernels

    • Released new Debian 7 and CentOS 6 images, v20131120.
      • New images now contain embedded kernels rather than Google-built kernels. For instructions on how to upgrade your persistent disk to use an embedded kernel, review the documentation. Similarly, you can also upgrade your custom image to use an embedded kernel.
      • New images allow you to use dmidecode to determine if you are running on Google Compute Engine. See the documentation for more information.
    • Deprecated the Kernel resource.
      • Google will no longer provide custom kernels and will instead use community-provided kernels in Google-provided images.

    November 12, 2013

    Instance Migration and Transparent Scheduled Maintenance

    • Google Compute Engine now offers transparent scheduled maintenance in us-central1-a and us-central1-b; these zones will no longer go offline for scheduled maintenance and Google Compute Engine will automatically move your instances out of the way of any scheduled maintenance activity. For more information, visit Maintenance Events.

    gcutil

    Release 1.11.0

    • New Features
      • Added a new subcommand, gcutil whoami, that prints out the email of the currently-authenticated user to standard out.
      • Added two new scope aliases: datastore and userinfo-email.
      • Added flags to gcutil addinstance and a new subcommand, gcutil setscheduling, for controlling instance scheduling parameters.
    • Other Changes
      • Disabled host key checking for commands that rely on ssh because there is no secure channel to pass the host key to the client for the first time.

    Images & Kernels

    • Marked all Debian 6 images as deprecated.
    • Marked Debian 7 images older than debian-7-wheezy-v20130926 as deprecated.

    October 22, 2013

    Regions & Zones

    • Deprecated us-central2-a zone.

      us-central2-a has been deprecated and will be permanently turned down by December 31st, 2013. You should move all resources to us-central1-a and/or us-central1-b (after November 11, 2013) and ensure that you are no longer using any resources in us-central2-a after December 31st, 2013.

    October 10, 2013

    Images & Kernels

    • Added new kernel, gce-no-conn-track-v20130813, and images v20130926.
      • The gce-no-conn-track-v20130813 kernel is identical to the gce-v20130813 kernel except that connection tracking is no longer enabled.
      • Images v20130926 will use the new gce-no-conn-track kernel. To use a kernel with connection tracking turned on, specify the --kernel flag with a previous kernel version, such as gce-v20130813.

    October 07, 2013

    Maintenance Windows

    • Reduced the duration of two upcoming maintenance windows for the us-central2-a and us-central1-b zones.

      The new maintenance window durations are as follows:

      • us-central2-a: Oct 12, 2013 12:00:00 PM - Oct 22, 2013 10:00:00 AM
      • us-central1-b: Nov 2, 2013 12:00:00 PM - Nov 10, 2013 12:00:00 PM

    gcutil

    Release 1.9.1

    • Bug fixes
      • Fixed a bug in which the tilde in the authentication file path was not being expanded properly.

    October 3rd, 2013

    Networking

    • Added new features to load balancing:
      • New sessionAffinity feature allows users to determine the hashing method used to select backend machines that receive traffic.
      • New backupPools and failoverRatio feature allows users to specify a backup target pool, in case a primary target pool becomes unhealthy.

    API

    Released new API version v1beta16

    v1beta15 is now deprecated and customers should switch to v1beta16. v1beta15 will remain available until January 03, 2014. For more information on how to transition to v1beta16, see our transition guide.

    Changes in v1beta16 include:

    • Removed zone quotas.
    • Added new regional quotas.
    • Updated the global default quotas with new default limits.
    • Changed addresses().user field from a string to a list and renamed the field to addresses().users.
    • Added new setBackup method to set backup target pools for existing primary target pools.
    • Updated TargetPools resource representation to describe backup pools, failover ratios, and session affinity.

    gcutil

    Release 1.9.0

    • New features
      • Added gcutil settargetpoolbackup command.
      • Added new --backup_pool and --failover_ratio flags for the gcutil addtargetpool command.
    • Other
      • Removed usage field from gcutil getzone response.
      • Added new usage field to gcutil getregion response.
      • gcutil now outputs tables that respect the terminal width. This feature can be turned off using the --respect_terminal_width flag.
      • gcutil deleteinstance with the --force flag now requests users to explicitly provide --[no]delete_boot_pd if any of the instances have a boot disk.

    Projects

    • Stopped allowing cross-project resource references, such as the ability to create a disk from a snapshot in another project. Previously, it was allowed for projects whose access control lists (ACLs) allowed it, such as situations where multiple projects were owned by one user.

    September 10, 2013

    gcutil

    Release 1.8.4

    • Bug Fixes
      • Fixed an issue whereby reserved IP addresses were not preserved in the gcutil moveinstances subcommand.
      • Fixed a bug where global flags were not being displayed by gcutil --help.
    • Other
      • Updated gcutil help text.

    September 05, 2013

    Images & Kernels

    • Added new Debian images v20130816.
      • Updated images to use the latest kernel.
      • Updated images to use the latest gcutil tool.

    September 04, 2013

    API

    • Removed support for v1beta14. Please transition to using v1beta15 if you haven't already.
      • (Updated 09/09/2013) Removed support for cross-region external IP address assignment.

    August 26, 2013

    Persistent Disks

    Networking

    Images & Kernels

    • Added new CentOS image v20130813 with the following updates:
      • Updated image to use the latest kernel.
      • Updated image to use the latest gcutil tool.
    • Added new kernels v20130813 with the following updates:
      • Added multiqueue support.
      • Fixed an issue in the scheduler that impacted Hadoop.
      • Added backport pvclock enlightenment for the softlockup detector.

    August 6th, 2013

    Networking

    Launched new load balancing service

    Google Compute Engine has launched a load balancing feature that lets you distribute traffic across your instances. Load balancing is especially useful for supporting heavy traffic to your instances and to provide redundancy to avoid failures.

    For more information, visit the load balancing documentation. Additionally, you can review the load balancing reference documentation.

    gcutil

    Release 1.8.3

    • New Features
      • Added new prompt to select a persistent or scratch boot disk when using gcutil addinstance.

        Caution: Customers with scripts that programmatically create instances in gcutil will need to update their script to use the --[no]persistent_boot_disk flag to continue programmatically creating instances. For more information, see gcutil addinstance.


      • Changed naming of persistent boot disks that are created during instance creation from boot-<instance-name> to <instance-name>.
      • Added prompt to delete attached persistent disk when using gcutil deleteinstance.
      • Added support for load balancing.

    Images & Kernels

    • Added the source code for custom tools used by Google Compute Engine images to GitHub. The list of tools includes:
      • Image Bundle - Creates an image file out of a disk attached to a virtual machine instance.
      • Google Startup Scripts - Scripts and configuration files that set up a Linux-based image to work smoothly with Google Compute Engine.
      • Google Daemon - A service that manages user accounts, maintains SSH login keys, and syncs public endpoint IP addresses.
    • Added new Debian and CentOS images v20130723, with the following updates:
      • Added the latest gsutil version, which addresses issues where gsutil was not working properly.
      • Fixed a typo that caused an erroneous startup-script-url error.

    July 15th, 2013

    Kernels

    • Marked kernels older than gce-v20130603 as DEPRECATED.
    • Marked deprecated kernels gce-v20120912 and older as OBSOLETE.

    For a list of kernels and their deprecation states, run the following command:

    gcutil --project=<project-id> listkernels

    For more information about kernel deprecation states, see the reference documentation.

    June 26th, 2013

    Machine Types

    • Added bursting for f1-micro instances. See machine types for more information.

    API

    gcutil

    Release 1.8.2

    • New Features
      • Added new gcutil resetinstance command that allows resetting virtual machine instances.
    • Bug Fixes
      • Fixed region detection when releasing addresses from multiple regions.
      • Fixed aggregated resource listing with --format=names.
    • Other Changes
      • Fixed the usage help string for gcutil addroute command.

    June 19th, 2013

    Images & Kernels

    • Added new Debian images v20130617.
    • Added the following updates for Debian 6 and 7 images v20130617:
      • Updated gsutil to 3.31 and gcutil to 1.8.1.
      • Disabled IPv6 by default via /etc/sysctl.d for an optimal user experience; Google Compute Engine does not currently support IPv6.
    • Added the following updates for Debian 7 image v20130617:
      • Upgraded pre-installed packages to Debian 7.1, incorporating security updates and miscellaneous important bug fixes. For more information, see the Debian announcements.

    June 18th, 2013

    Images & Kernels

    • Added new images v20130522 and kernels v20130603.
    • Patched new kernel versions gcg-3.3.8-201305211623 and gcg-3.3.8-201305291443 to address a vulnerability in previous kernels. See Security Bulletins for more information.
    • Fixed kernel warning printed on boot about virtio net multiqueue.
    • Made ext4 kernel fixes (for xfstest).

    May 21st, 2013

    Disks

    • Increased default per-project total disk quota to 1TB.

    gcutil

    • Updated documentation for gcutil moveinstances to provide a warning of possible failures during the moving process.
    • Improved error detection in the gcutil moveinstances command.
    • Fixed behavior where gcutil attempted to use an existing persistent disk when recreating an instance with the same name and the --persistent_boot_disk flag.
    • Machine type prompts in gcutil now provide a description of the machine types, and gcutil listimages now displays only the name and description of images.

    May 15th, 2013

    Google Compute Engine is available for open signups!

    We're excited to announce that Google Compute Engine is now available for open signups and anyone can sign up for the service. For signup instructions, see the signup page.

    API

    Released new API version v1beta15

    v1beta14 is now deprecated and customers should switch to v1beta15. v1beta14 will remain available until August 15, 2013 and v1beta13 will be discontinued on May 31, 2013. For more information on how to transition to v1beta15, see our transition guide.

    Changes in v1beta15 include:

    • Introduced new region scope and regional resources.
      • Added new regional resource URIs to access regional resources, in the form:
        https://www.googleapis.com/compute/v1beta15/projects/<project-id>/regions/<region-name>/<resource-type>/<resource-name>

        For example, to access regional reserved IPs, use the following regional URI:

        https://www.googleapis.com/compute/v1beta15/projects/example.com:myproject/regions/example-region/addresses
      • Updated reserved IP addresses to a regional resource.

        External static IPs are now referred to as reserved IP addresses and are no longer a global resource. Reserved IPs are now a regional resource that can be managed through the Addresses collection.

        You can also provision, promote, and release external IP addresses through the Addresses collection, without having to manually request one. For more information, see the Reserved Addresses documentation.

    • Converted Machine Type resources to per-zone resources.

      To use a machine type, you must now specify the zone in which that machine type lives:

      https://www.googleapis.com/compute/v1beta15/projects/example.com:myproject/zones/example-zone/machineTypes/machineTypeName
    • Changed method of creating Snapshot resources to use a custom verb on the Disk resource.

      To create a Snapshot resource, you must now make a request to the following URI:

      https://www.googleapis.com/compute/v1beta15/projects/<project-id>/zones/<zone>/disks/<disk-name>/createSnapshot

      Snapshots are still accessible by making requests to the Snapshot collection.

    • Removed ability to assign an internal IP address.

      The internalIp field on a virtual machine instance is now read-only and you can no longer manually assign internal IPs to your instances. Google Compute Engine will assign internal IPs automatically.

    • Added a number of new features.
      • Added new Routes collection that lets you set up and manage a virtual machine's routing table.
      • Added ability to reserve and release static IPs, and to promote ephemeral IPs to static IPs.
      • Added the ability to request aggregate lists for per-zone and per-region resources. You can request aggregate lists for the following resources:
        • Instance resources
        • Disk resources
        • Address resources
        • Machine type resources

        For example, you can list instances across all zones by making a request to the following URI:

        https://www.googleapis.com/compute/v1beta15/projects/example.com:myproject/aggregated/instances

    Machine Types

    • Introduced new shared-core machine types.

      Shared-core machine types are more cost-effective for running applications that don't require a lot of resources. New available machine types are g1-small and f1-micro.

    • Updated maximum total persistent disk size that can be attached to a machine type.

      Standard, high memory, and high CPU machine types now have an updated maximum total disk size of 10 TB. See machine types for more information.

    Billing

    • Updated billing model for instances.

      Google Compute Engine has updated our billing model so that instances are billed based on per-minute usage. All instances that run for 10 minutes or less will be charged for 10 minutes of usage. After the first 10 minutes, usage is charged on a per-minute basis.

    Images

    • Added new images and kernels v20130515.
    • Removed Google-specific repositories from images. The only package repository configured in images is now the Debian archive. Google Compute Engine still installs Google-specific packages at build time but has removed Google-specific repositories for various reasons.
    • Removed default installation of the apiclient library.
    • Changed log location of startup script output to /var/log/startupscript.log. Also, added startup script log output to the instance's serial port console so you can also run gcutil getserialportoutput to retrieve startup script log information.
    • Improved instance creation and deletion time for Debian.
    • Fixed an issue that prevented a startup script specified in metadata from being downloaded from Google Cloud Storage.
    • Removed dist-upgrade from starting on instance boot.
    • Removed google_storage_download script.

    gcutil

    Release 1.8.0

    • New Features
      • Added support for the v1beta15 Google Compute Engine API (addresses, regions, per-zone machine types, aggregated lists).
      • Added gcutil config command, an alias for gcutil auth.
      • When prompting the user to select an image, gcutil will include standard images (CentOS, Debian).
      • With the v1beta15 API, gcutil uses the aggregated list API call by default. The aggregated list method aggregates all resources across all scopes in which resources of that type exist (for example, an aggregated list of instances lists instances in all zones).
      • Users can specify an image from the standard project by specifying an image name prefix. For example: gcutil addinstance my-instance --image=debian-7.
    • Bug Fixes
      • When moving instances using gcutil moveinstances, if some of the instances depend on deprecated resources (image, kernel), gcutil will warn before it proceeds with the migration (migration would fail). New flag --replace_deprecated will create instances in the destination zone with dependencies on deprecated resources updated to recommended replacement resources.
    • Other Changes
      • List commands now display all resources by default. The number of resources listed may be limited using the --max_results flag. The --fetch_all_pages flag is now deprecated.
      • Improved display of the images and kernels lists. By default, only the newest kernels and images are displayed when listed or when the user is prompted to select an image or kernel. Use --old_images or --old_kernels to list all images or kernels, respectively.
      • When listing images, the standard images (CentOS, Debian) are listed in addition to images from the specified project. To list images in the specified project only, use the --nostandard_images flag.
      • When prompting the user to select a machine type, gcutil displays the machine type description in addition to the name.
      • Removed support for v1beta13 Google Compute Engine API.

    gcelib

    gcelib is no longer available. If you haven't already, we strongly encourage you to transition to the Google APIs Python Client Library.

    May 7th, 2013

    Images

    Released new Debian images

    Google Compute Engine is happy to announce that Debian images for Google Compute Engine are now available for your instances. To view a list of Debian images available to your project, run the following gcutil command:

    gcutil --project=debian-cloud listimages

    For information about Debian images, see the Debian wiki.

    Similarly, you can see a list of CentOS images like so:

    gcutil --project=centos-cloud listimages

    Deprecated gcel images

    gcel images are now deprecated and we encourage users to transition to either Debian or CentOS images.

    April 4th, 2013

    Google Compute Engine available for Gold signups!

    We're excited to announce that Google Compute Engine is now available for users who sign up for Gold Support for the Google Cloud Platform! Visit the signup page to get started.

    Google Compute Engine Console

    March 29th, 2013

    Metadata Server

    • Changed service account token cache period
      • The metadata server no longer serves cached service account tokens that are within 5 minutes of their expiration. If you need to ensure you always have a valid access token, you can fetch a new one at any time within 5 minutes of the current token's expiration.
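
        As a rough sketch (the token path and request header shown here are taken from the metadata server and service account documentation, not from this release note), an instance can fetch a fresh access token for its default service account with a request like:

        curl "http://metadata/computeMetadata/v1beta1/instance/service-accounts/default/token" -H "X-Google-Metadata-Request: True"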

    Bug Fixes

    • Fixed a bug where operations created using v1beta13 could not be retrieved using v1beta14.
    • Fixed a bug where the device names of attached persistent disks could collide with those of scratch disks.

    March 8th, 2013

    Metadata Server

    Released new metadata server version v1beta1

    See the transition guide to help transition your code away from the previous metadata version. v1beta1 changes include:

    • New metadata server URL: http://metadata/computeMetadata/v1beta1/
    • New metadata tree structure where metadata now lives under a project/ or instance/ directory.
    • New URL query parameters (example requests follow this list):
      • wait_for_change: Perform a hanging GET request that returns when the value of the specified metadata key changes
      • recursive: retrieve all content from underneath a directory
      • alt: specify the format of the response
    • Updated or added new default metadata keys.
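
    For example (a minimal sketch based on the URL and query parameters above; the X-Google-Metadata-Request: True header and the instance/tags key come from related metadata documentation, not from this list), you could recursively fetch everything under the instance/ directory, or block until the instance's tags change:

    curl "http://metadata/computeMetadata/v1beta1/instance/?recursive=true" -H "X-Google-Metadata-Request: True"
    curl "http://metadata/computeMetadata/v1beta1/instance/tags?wait_for_change=true" -H "X-Google-Metadata-Request: True"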

    Disks

    Images & Kernels

    • Added new images and kernels v20130225.
    • Patched kernels 3.3x to address a security vulnerability in kernels 2.6x.
    • Released new security bulletins page that lists known security issues and their associated fixes.
    • Removed /dev/<disk> paths; users should reference their disks using the /dev/disk/by-id/ aliases (see the example below).
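
    For example, to see the aliases available on an instance, you could run:

    ls -l /dev/disk/by-id/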

    gcutil

    Release 1.7.2

    • New Features
      • Added two new commands, attachDisk and detachDisk, which can be used to attach or detach a persistent disk to or from a running virtual machine instance.
    • Bug Fixes
      • Fixed an issue where list operations were incorrectly capped at a maximum of 100 results.
    • Other Changes
      • Improved display of project's IP addresses in gcutil getproject.
      • Deprecation information is now printed for deprecated resources.
      • Removed support for v1beta12 Google Compute Engine API.

    February 19th, 2013

    gcelib

    gcelib is now deprecated

    Downloads and documentation of gcelib will continue to be available for three months, until May 15, 2013. During that time, gcelib will work with the v1beta13 API only (it won’t be upgraded to work with v1beta14). Between now and May 15, developers using gcelib are strongly encouraged to migrate their applications to use an alternative client library, such as the Google APIs Python Client Library.

    Disks

    • Enabled billing for persistent disk snapshots
      • For more information on snapshot pricing, see the price sheet.

    February 8th, 2013

    gcutil

    Release 1.7.0

    • New features
      • Added a new subcommand, gcutil moveinstances, for moving instances (and their persistent disks) from one zone to another.
    • Bug Fixes
      • Added --zone flag to gcutil listdisks.
      • Fixed a bug where gcutil addsnapshot would crash if the --zone flag was not specified.
    • Other Changes
      • Added zone column to the table output of gcutil listoperations.
      • Increased the timeout of synchronous operations from 2 minutes to 4 minutes.

    January 30th, 2013

    API

    Released new API version v1beta14

    v1beta13 is now deprecated and customers should switch to v1beta14. v1beta13 will remain available until April 30, 2013, and v1beta12 will be discontinued February 11, 2013.

    Changes in v1beta14 include:

    • Introduced per-zone and global resources
      • Added new per-zone resource URIs to access per-zone resources, in the form:
        https://www.googleapis.com/compute/v1beta14/projects/<project-id>/zones/<zone>/<resource-type>/<resource-name>

        For example, accessing a Disk resource requires the following per-zone URI:

        https://www.googleapis.com/compute/v1beta14/projects/example.com:myproject/zones/some-example-zone/disks/mydisk
      • Added new global resource URIs for accessing global resources, in the form:
        https://www.googleapis.com/compute/v1beta14/projects/<project-id>/global/<resource-type>/<resource-name>

        For example, accessing a Machine Type resource requires the following global URI:

        https://www.googleapis.com/compute/v1beta14/projects/example.com:myproject/global/machineTypes/somemachinetype
    • Added a number of new features
      • Added a new setTags method, which allows you to update instance tags for a running instance.
      • Added new setMetadata method which allows you to update metadata for a running instance.
      • Added new deprecate method which allows you to set the deprecation status for an image.
      • Added new root from Persistent Disk feature which allows you to store an operating system image on a persistent disk so that it persists through the life of the instance. Multiple instances can also attach to a root persistent disk in read-only mode.
    • Updated existing resource properties
      • Removed kind property from instance.networkInterfaces and instance.serviceAccounts.
      • Removed support for using default images and default kernels when creating an instance or an image through the API. Users must now explicitly specify an image or kernel.
      • Added new deprecate status to resources.
    • Updated response codes
      • Changed error response for inserting an existing instance from HTTP 400 to HTTP 409.
      • Changed server response for accepting an asynchronous request from HTTP 200 to HTTP 202.

    gcutil

    Release 1.6.0

    • New Features
    • Other Changes
      • Changed the ordering of the machine type prompt when creating instances so the standard machine types show up first, followed by the highcpu and highmem machine types.

    January 24th, 2013

    Images

    • Added new VM images centos-6-v20130104, gcel-12-04-v20130104, and gcel-10-04-v20130104
      • No significant changes.

    December 14th, 2012

    Disks

    • New Persistent Disk Snapshot Feature
      • Added Persistent Disk Snapshot feature which allows you to create snapshots of existing persistent disks and apply them to new disks.

        Note: Although persistent disk snapshot rates are available on the price sheet, billing for snapshots is not yet enabled. We expect to enable snapshot billing in January 2013.

    API

    • Other Changes
      • Added new error message when querying the metadata server for a service account token that has not been authorized for that instance.
      • Added new operation types for instance restarts and shutdowns

    gcutil

    Release 1.5.0

    • New Features
      • Added subcommands for interacting with snapshots.

    December 6th, 2012

    Machine Types

    Zones

    November 9th, 2012

    gcutil

    Release 1.4.1

    • New Features
      • Added new subcommand, gcutil getserialportoutput, for getting the serial port output from an instance.
    • Bug Fixes
      • Fixed an issue where gcutil waited for instances that failed to be created.
    • Other changes
      • Changed the zone selection feature to display maintenance window information next to the zone names.
      • Changed the display of operation resources to show the user responsible for the operation.

    Images

    • New VM images and kernel for v20121106
      • All new images that use a Debian package manager are now named gcel-<version>. Current images 'ubuntu-12-04-vYYYYMMDD' and 'ubuntu-10-04-vYYYYMMDD' are deprecated and will remain available until Feb. 9th, 2013.
      • Updated /etc/lsb-release file to reflect new distribution information.
      • Added support for SCSI disk interface; for information on how to convert your instances, see Disks Interfaces.

    Google Compute Engine Console

    • Added ability to clone instances
      • It is now possible to clone an instance by visiting the instance's details page and clicking the Clone button.

    October 11th, 2012

    API

    Released new API Version v1beta13

    v1beta12 is now deprecated and customers should switch to v1beta13. v1beta12 will remain available until January 11, 2013. Changes in v1beta13 include:

    • Removed hostCpus field from the machineType resource
    • Changed API nouns and verbs to use camelCase, specifically:
      • machine-types is now machineTypes
      • add-access-config and delete-access-config is now addAccessConfig and deleteAccessConfig
      • set-common-instance-metadata is now setCommonInstanceMetadata
    • Made setCommonInstanceMetadata an asynchronous operation, returning an operation resource to track completion of the request
    • Added a serial port output API
    • Fixed metadata key validation to prevent duplicate metadata keys
    • PENDING and RUNNING states of long-running operations now reflect the full lifetime of the request
    • Delete operations now guarantee that the DONE state is not reached until after the resource has been completely torn down

    To update your application code to v1beta13:

    1. Change all URIs from v1beta12 to v1beta13. For example:
      https://www.googleapis.com/compute/v1beta13/disks
    2. Update API nouns and verbs that have a dash to use camelCase (e.g. machineTypes instead of machine-types)
    3. Update your application code to reflect the following changes, if necessary:
      • setCommonInstanceMetadata now returns an Operations resource
      • New metadata keys must match the regex [a-zA-Z0-9-_]{1,128} and be less than 128 bytes in length. Metadata values cannot be longer than 32768 bytes

        Note: If your metadata value exceeds 32768 bytes, consider using a startup script

      • Operations may take longer to complete as they now reflect the total time it takes to roll out and confirm the request
      • Delete operations only return DONE after the resource has been completely torn down
      • Instances have a new additional STOPPING state, which means that the instance is currently in the process of being stopped

    gcutil

    Release 1.3.4

    • New Features
      • Implemented batch adddisk. It is now possible to add multiple disks with a single call to gcutil adddisk.
      • Implemented batch delete operations for additional resources. It is now possible to delete multiple disks, firewalls, images, instances, networks, operations, and snapshots.
      • Added a --format flag for the list subcommands. The flag accepts the following values: table, sparse, json, csv, and names. --format=names allows gcutil to be used with Unix tool pipelines:
        gcutil listinstances --format=names | xargs gcutil deleteinstance --force
    • Bug Fixes
      • Fixed the sorting in list subcommands. Instead of sorting each page individually, gcutil now sorts all results before displaying them to the user.
      • Changed --cache_flag_values to not cache flags when the underlying command fails.
    • Other Changes
      • Deprecated --project_id in favor of --project. --project_id still works, but will produce a warning.
      • Reconfigured the version checking to take place when gcutil exits.
      • Improved documentation for firewall commands.
      • Changed the headings for list and get subcommands. The new headings use dashes instead of spaces and are in lower-case. This eliminates the need to use quotes with the --sort_by flag and makes the display of the headings more user-friendly.

    Google Compute Engine Console

    • Added serial console output from a VM instance to the instance details page.
    • Added support for attaching persistent disks in read-only mode as well as read-write mode.
    • Added new example gcutil commands for adding instances, disks, networks, and firewalls.
    • Added support for adding and deleting networks.
    • Fixed assorted bugs.

    September 18th, 2012

    gcutil

    Release 1.2.0

    • New Features
      • Added support for gs:// URLs to the addimage command.
      • Implemented support for multiple flag cache files. gcutil now searches for a .gcutil.flags file starting in the current directory, followed by the parent directories, and the home directory until a file is found.
    • Bug Fixes
      • Added a check to commands dealing with metadata to warn the user of duplicate metadata keys instead of silently ignoring duplicates.
      • Fixed an issue where listoperations would not fetch multiple pages when encountering an operation that contains an error.
    • Other Changes
      • Changed the way gcutil is packaged.
      • Made some of the flag descriptions and error messages more informative.

    Images

    • New Linux VM images v20120912
      • Added more aggressive validation for ssh keys.
      • make package is now included by default.

    September 13th, 2012

    API

    • Added newline to the end of fstab for images created using the image bundling tool.
    • Added a warning when users try to create hostnames that are 33 characters or longer.
    • Improved error messaging when a user tries to use an IP address reserved for system purposes.

    Google Compute Engine Console

    • Added ability to add or remove networks using the Console.

    September 5th, 2012

    API

    • Faster asynchronous job completion.
    • Improved scalability for resource creation, updates, and monitoring.
    • Resource quotas enabled on a per-project basis, for images, firewalls, and networks.
    • Enable NAT on ICMP packets.

    June 28, 2012

    Google Compute Engine is available for limited preview!

    Page: libraries

    The Google Compute Engine API is built on HTTP and JSON, so any standard HTTP client can send requests to it and parse the responses.

    However, instead of creating HTTP requests and parsing responses manually, you may want to use the Google APIs client libraries. The client libraries provide better language integration, improved security, and support for making calls that require user authorization.

    .NET

    Get the latest Google Compute Engine API client library for .NET.

    Read the client library's developer's guide.

    Go

    Get the latest Google Compute Engine API client library for Go (alpha).

    Read the client library's developer's guide.

    GWT

    Get the latest Google Compute Engine API client library for Google Web Toolkit (alpha).

    Read the client library's developer's guide.

    Java

    Get the latest Google Compute Engine API client library for Java (rc).

    Read the client library's developer's guide.

    JavaScript

    Get the latest Google Compute Engine API client library for JavaScript (beta).

    Read the client library's developer's guide.

    Node.js

    Get the latest Google Compute Engine API client library for Node.js.

    Read the client library's developer's guide.

    Objective-C

    Get the latest Google Compute Engine API client library for Objective-C.

    Read the client library's developer's guide.

    PHP

    Get the latest Google Compute Engine API client library for PHP (beta).

    Read the client library's developer's guide.

    Python

    Get the latest Google Compute Engine API client library for Python.

    Read the client library's developer's guide.

    Ruby

    Get the latest Google Compute Engine API client library for Ruby (alpha).

    Read the client library's developer's guide.

    Page: java

    This page contains information about getting started with the Compute Engine API using the Google APIs Client Library for Java. In addition, you may be interested in the following documentation:

    Quickstart

    Quickstart creates a starter application to get you up and running with the Compute Engine API and the Google APIs Client Library for Java faster. After you select a platform and click Configure Project, it helps you set up a project in the Google Developers Console. Finally, it generates a custom client application that you can download, run, and modify.

    Sample

    The compute-engine-cmdline-sample may help you get started using the client library.

    Add Library to Your Project

    Select your build environment (Maven or Gradle) from the following tabs, or download a zip file containing all of the jars you need:

    Download

    Download the Compute Engine API v1 Client Library for Java.

    See the compute/readme.html file for details on:

    • What the zip file contains.
    • Which dependent jars are needed for each application type (web, installed, or Android application).

    The libs folder contains all of the globally-applicable dependencies you might need across all application types.

    Page: python

    This page contains information about getting started with the Compute Engine API using the Google APIs Client Library for Python. In addition, you may be interested in the following documentation:

    Quickstart

    Quickstart creates a starter application to get you up and running with the Compute Engine API and the Google APIs Client Library for Python faster. After you select a platform and click Configure Project, it helps you set up a project in the Google Developers Console. Finally, it generates a custom client application that you can download, run, and modify.

    System requirements

    • Operating systems:
      • Linux
      • Mac OS X
      • Windows

      Note: This library is pure Python, so other operating systems with Python support may work as well.

    • The latest version of Python 2.
    • Python package installation manager: Setuptools or pip.

    Manual Installation

    To install the library and all of its dependencies, open a terminal and do one of the following:

    • Use the easy_install tool included in the setuptools package:
      $ easy_install --upgrade google-api-python-client
    • Use the pip tool:
      $ pip install --upgrade google-api-python-client

    Depending on your system, you may need to prepend those commands with sudo.

    App Engine

    Because Google App Engine requires that all of the source files for a library must be present in your App Engine project, there is a special installation procedure for App Engine. To install the library and all of its dependencies in an App Engine project, download the file named google-api-python-client-gae-N.M.zip from the list of downloads, where N.M is the version number of the latest release. Unzip that file into your project. For example:

    $ cd myproject
    $ unzip google-api-python-client-gae-1.1.zip

    Page: quickstart-tool

    The QuickStart tool automatically generates a skeletal sample application, complete with OAuth support and associated client credentials, that you can use with Google Compute Engine. In the widget below, select a programming language and a platform type, and follow the prompts to generate and download your sample app.

    Support is currently limited to the Python programming language. We plan to add additional languages in the future.

    Page: python-guide

    This document demonstrates how to use the Google Python Client Library for Google Compute Engine. It describes how to authorize requests and how to create, list, and stop instances. This exercise discusses how to use the google-api-python-client library to access Google Compute Engine from outside a VM instance. It does not discuss how to build and run applications within a VM instance.

    Contents

    Setup

    Before you can try the examples in this exercise, you need to download and install the google-api-python-client library. This contains the core Python library for accessing Google APIs and also contains the OAuth 2.0 client library. For information on how to install this library, see the installation instructions. You also need to have Python 2.5, 2.6, or 2.7 to run the Google Python Client Library.

    Getting Started

    The purpose of this exercise is to describe how to use OAuth 2.0 authorization, and how to perform basic instance management tasks using the google-api-python-client library. At the end of this exercise, you should be able to:

    • Perform OAuth 2.0 authorization using the oauth2client library
    • Create an instance using the google-api-python-client library
    • List instances using the google-api-python-client library
    • Stop an instance using the google-api-python-client library

    To skip the exercise and view the full code example, visit the google-cloud-platform-samples page.

    Authorizing Requests

    This sample uses OAuth 2.0 authorization. You will need to create a client ID and client secret, and use both with the oauth2client library. By default, the oauth2client library is included in the google-api-python-client library, which you should have downloaded in the Setup section.

    To start, all applications are managed by the Google Developers Console. If you already have a registered application, you can use the client ID and secret from that application. If you don't have a registered application or would like to register a new application, follow the application registration process. Make sure to select Native as the application type.

    Once on the application page, expand the OAuth 2.0 Client ID section and click Download JSON. Save the file as client_secrets.json. The file should look similar to the following:

    {
      "installed": {
        "client_id": "<your_client_id>",
        "client_secret":"<your_client_secret>",
        "redirect_uris": ["http://localhost", "urn:ietf:wg:oauth:2.0:oob"],
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://accounts.google.com/o/oauth2/token"
      }
    }

    Next, create a file called helloworld.py in the same directory as the client_secrets.json file and provide the following code:

    #!/usr/bin/env python
    
    import logging
    import sys
    import argparse
    import httplib2
    from oauth2client.client import flow_from_clientsecrets
    from oauth2client.file import Storage
    from oauth2client import tools
    from oauth2client.tools import run_flow
    
    CLIENT_SECRETS = 'client_secrets.json'
    OAUTH2_STORAGE = 'oauth2.dat'
    GCE_SCOPE = 'https://www.googleapis.com/auth/compute'
    
    def main(argv):
      logging.basicConfig(level=logging.INFO)
    
      parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter,
        parents=[tools.argparser])
    
      # Parse the command-line flags.
      flags = parser.parse_args(argv[1:])
    
      # Perform OAuth 2.0 authorization.
      flow = flow_from_clientsecrets(CLIENT_SECRETS, scope=GCE_SCOPE)
      storage = Storage(OAUTH2_STORAGE)
      credentials = storage.get()
    
      if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, flags)
      http = httplib2.Http()
      auth_http = credentials.authorize(http)
    
    if __name__ == '__main__':
      main(sys.argv)

    The above code uses the specified OAuth 2.0 scope (https://www.googleapis.com/auth/compute) and the client_secrets.json information to request a refresh token and an access token, which are then stored in the oauth2.dat file. Because the refresh token never expires, your application can reuse it to request new access tokens when necessary. This also eliminates further authorization events, unless the refresh token has been explicitly revoked.

    If you run helloworld.py now on the command line, it should automatically open a browser window for you to authorize access.

    Initializing the API

    Before you can make requests, you first need to initialize an instance of the Google Compute Engine service. Add the following lines to your helloworld.py (the new lines are the apiclient import, the API_VERSION, GCE_URL, and PROJECT_ID constants, and the code that builds the service):

    #!/usr/bin/env python
    
    import logging
    import sys
    import argparse
    import httplib2
    from oauth2client.client import flow_from_clientsecrets
    from oauth2client.file import Storage
    from oauth2client import tools
    from oauth2client.tools import run_flow
    
    from apiclient.discovery import build
    
    API_VERSION = 'v1'
    GCE_URL = 'https://www.googleapis.com/compute/%s/projects/' % (API_VERSION)
    PROJECT_ID = '<your_project_id>'
    CLIENT_SECRETS = 'client_secrets.json'
    OAUTH2_STORAGE = 'oauth2.dat'
    GCE_SCOPE = 'https://www.googleapis.com/auth/compute'
    
    def main(argv):
      logging.basicConfig(level=logging.INFO)
    
      parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter,
        parents=[tools.argparser])
    
      # Parse the command-line flags.
      flags = parser.parse_args(argv[1:])
    
      # Perform OAuth 2.0 authorization.
      flow = flow_from_clientsecrets(CLIENT_SECRETS, scope=GCE_SCOPE)
      storage = Storage(OAUTH2_STORAGE)
      credentials = storage.get()
    
      if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, flags)
      http = httplib2.Http()
      auth_http = credentials.authorize(http)
    
      # Build the service
      gce_service = build('compute', API_VERSION)
      project_url = GCE_URL + PROJECT_ID
    
    if __name__ == '__main__':
      main(sys.argv)

    Listing Instances

    Next, to list your instances, call the instances().list method, providing the project ID, the zone for which you want to list instances, and any optional filters:

    #!/usr/bin/env python
    
    import logging
    import sys
    import argparse
    import httplib2
    from oauth2client.client import flow_from_clientsecrets
    from oauth2client.file import Storage
    from oauth2client import tools
    from oauth2client.tools import run_flow
    
    from apiclient.discovery import build
    
    DEFAULT_ZONE = 'us-central1-a'
    API_VERSION = 'v1'
    GCE_URL = 'https://www.googleapis.com/compute/%s/projects/' % (API_VERSION)
    PROJECT_ID = '<your_project_id>'
    CLIENT_SECRETS = 'client_secrets.json'
    OAUTH2_STORAGE = 'oauth2.dat'
    GCE_SCOPE = 'https://www.googleapis.com/auth/compute'
    
    def main(argv):
      logging.basicConfig(level=logging.INFO)
    
      parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter,
        parents=[tools.argparser])
    
      # Parse the command-line flags.
      flags = parser.parse_args(argv[1:])
    
      # Perform OAuth 2.0 authorization.
      flow = flow_from_clientsecrets(CLIENT_SECRETS, scope=GCE_SCOPE)
      storage = Storage(OAUTH2_STORAGE)
      credentials = storage.get()
    
      if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, flags)
      http = httplib2.Http()
      auth_http = credentials.authorize(http)
    
      # Build the service
      gce_service = build('compute', API_VERSION)
      project_url = '%s%s' % (GCE_URL, PROJECT_ID)
    
      # List instances
      request = gce_service.instances().list(project=PROJECT_ID, filter=None, zone=DEFAULT_ZONE)
      response = request.execute(http=auth_http)
      if response and 'items' in response:
        instances = response['items']
        for instance in instances:
          print instance['name']
      else:
        print 'No instances to list.'
    
    if __name__ == '__main__':
      main(sys.argv)

    Run helloworld.py on the command line and you should see a list of instances for your specified project:

    user@mymachine:~/gce_demo$ python helloworld.py
    instance1
    instance2
    hello-world

    Adding an Instance

    Adding an instance is a two-step process. All instances must boot from a root persistent disk. If you have an existing root persistent disk, you can use it for this part of the exercise. For the purposes of this guide, we're also going to demonstrate how to create a root persistent disk.

    Creating a root persistent disk

    A root persistent disk contains all of the necessary files required for starting your instance. When you create a root persistent disk, you specify the name and the OS image that should be applied to the disk. For this example, we are going to create a root persistent disk using the latest Debian 7 image.

    To create a root persistent disk, use the disks().insert() method, specifying the desired OS image and the name of the disk as the minimal requirements. In your file, add the following lines:

    #!/usr/bin/env python
    
    import logging
    import sys
    import argparse
    import httplib2
    from oauth2client.client import flow_from_clientsecrets
    from oauth2client.file import Storage
    from oauth2client import tools
    from oauth2client.tools import run_flow
    
    from apiclient.discovery import build
    
    # New root persistent disk properties
    DEFAULT_IMAGE = 'debian'
    DEFAULT_IMAGES = {
        'debian': 'debian-7-wheezy-v20131120',
        'centos': 'centos-6-v20131120'
    }
    DEFAULT_ROOT_PD_NAME = 'my-root-pd'
    
    DEFAULT_ZONE = 'us-central1-a'
    API_VERSION = 'v1'
    GCE_URL = 'https://www.googleapis.com/compute/%s/projects/' % (API_VERSION)
    PROJECT_ID = '<your_project_id>'
    CLIENT_SECRETS = 'client_secrets.json'
    OAUTH2_STORAGE = 'oauth2.dat'
    GCE_SCOPE = 'https://www.googleapis.com/auth/compute'
    
    def main(argv):
      logging.basicConfig(level=logging.INFO)
    
      parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter,
        parents=[tools.argparser])
    
      # Parse the command-line flags.
      flags = parser.parse_args(argv[1:])
    
      # Perform OAuth 2.0 authorization.
      flow = flow_from_clientsecrets(CLIENT_SECRETS, scope=GCE_SCOPE)
      storage = Storage(OAUTH2_STORAGE)
      credentials = storage.get()
    
      if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, flags)
      http = httplib2.Http()
      auth_http = credentials.authorize(http)
    
      # Build the service
      gce_service = build('compute', API_VERSION)
      project_url = '%s%s' % (GCE_URL, PROJECT_ID)
    
      # List instances
      request = gce_service.instances().list(project=PROJECT_ID, filter=None, zone=DEFAULT_ZONE)
      response = request.execute(http=auth_http)
      if response and 'items' in response:
        instances = response['items']
        for instance in instances:
          print instance['name']
      else:
        print 'No instances exist.'
    
      # Construct URLs
      image_url = '%s%s/global/images/%s' % (
             GCE_URL, 'debian-cloud', DEFAULT_IMAGES['debian'])
    
      # Construct the request body
      disk = {
        'name': DEFAULT_ROOT_PD_NAME
      }
    
      # Create the root pd
      request = gce_service.disks().insert(
           project=PROJECT_ID, body=disk, zone=DEFAULT_ZONE, sourceImage=image_url)
      response = request.execute(http=auth_http)
      response = _blocking_call(gce_service, auth_http, response)
    
      print response
    
    def _blocking_call(gce_service, auth_http, response):
      """Blocks until the operation status is done for the given operation."""
    
      status = response['status']
      while status != 'DONE' and response:
        operation_id = response['name']
    
        # Identify if this is a per-zone resource
        if 'zone' in response:
          zone_name = response['zone'].split('/')[-1]
          request = gce_service.zoneOperations().get(
              project=PROJECT_ID,
              operation=operation_id,
              zone=zone_name)
        else:
          request = gce_service.globalOperations().get(
               project=PROJECT_ID, operation=operation_id)
    
        response = request.execute(http=auth_http)
        if response:
          status = response['status']
      return response
    
    if __name__ == '__main__':
      main(sys.argv)
    
    Blocking until the operation is complete

    By default, when you send a request to Google Compute Engine API, you'll immediately receive a response describing the status of your operation as RUNNING. To check when the operation is finished, you need to periodically query the server for the operation status. To eliminate this extra step, we've included a helper method in the sample above that "blocks" and waits for the operation status to become DONE before it returns a response.

    Notice when you query per-zone operations, you use the zoneOperations.get() method, while querying global operations requires using the globalOperations.get() method. For more information, see zone resources.


    Run your helloworld.py file to create your new disk.

    Creating an Instance

    Now that you have a root persistent disk for your new instance, you can create the instance itself.

    To add an instance, use the instances().insert() method and provide an appropriate request body with the JSON properties described in the API reference documentation. At a minimum, your request must provide values for the following properties when you create a new instance:

    • Instance name
    • Root persistent disk
    • Machine type
    • Zone
    • Network Interfaces

    For this example, you are going to start an instance with the following properties:

    • Zone: us-central1-a
    • Machine type: n1-standard-1
    • Root persistent disk: my-root-pd
    • The default service account with the following scopes:
      • https://www.googleapis.com/auth/devstorage.full_control
      • https://www.googleapis.com/auth/compute

    Add the following lines to helloworld.py:

    #!/usr/bin/env python
    
    import logging
    import sys
    import argparse
    import httplib2
    from oauth2client.client import flow_from_clientsecrets
    from oauth2client.file import Storage
    from oauth2client import tools
    from oauth2client.tools import run_flow
    
    from apiclient.discovery import build
    
    # New instance properties
    DEFAULT_MACHINE_TYPE = 'n1-standard-1'
    DEFAULT_NETWORK = 'default'
    DEFAULT_SERVICE_EMAIL = 'default'
    DEFAULT_SCOPES = ['https://www.googleapis.com/auth/devstorage.full_control',
                      'https://www.googleapis.com/auth/compute']
    NEW_INSTANCE_NAME = 'my-new-instance'
    
    # New root persistent disk properties
    DEFAULT_IMAGE = 'debian'
    DEFAULT_IMAGES = {
        'debian': 'debian-7-wheezy-v20131120',
        'centos': 'centos-6-v20131120'
    }
    DEFAULT_ROOT_PD_NAME = 'my-root-pd'
    
    DEFAULT_ZONE = 'us-central1-a'
    API_VERSION = 'v1'
    GCE_URL = 'https://www.googleapis.com/compute/%s/projects/' % (API_VERSION)
    PROJECT_ID = '<your_project_id>'
    CLIENT_SECRETS = 'client_secrets.json'
    OAUTH2_STORAGE = 'oauth2.dat'
    GCE_SCOPE = 'https://www.googleapis.com/auth/compute'
    
    def main(argv):
      logging.basicConfig(level=logging.INFO)
    
      parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter,
        parents=[tools.argparser])
    
      # Parse the command-line flags.
      flags = parser.parse_args(argv[1:])
    
      # Perform OAuth 2.0 authorization.
      flow = flow_from_clientsecrets(CLIENT_SECRETS, scope=GCE_SCOPE)
      storage = Storage(OAUTH2_STORAGE)
      credentials = storage.get()
    
      if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, flags)
      http = httplib2.Http()
      auth_http = credentials.authorize(http)
    
      # Build the service
      gce_service = build('compute', API_VERSION)
      project_url = '%s%s' % (GCE_URL, PROJECT_ID)
    
      # List instances
      request = gce_service.instances().list(project=PROJECT_ID, filter=None, zone=DEFAULT_ZONE)
      response = request.execute(http=auth_http)
      if response and 'items' in response:
        instances = response['items']
        for instance in instances:
          print instance['name']
      else:
        print 'No instances exist.'
    
      # Construct URLs
      image_url = '%s%s/global/images/%s' % (
             GCE_URL, 'debian-cloud', DEFAULT_IMAGES['debian'])
      machine_type_url = '%s/zones/%s/machineTypes/%s' % (
            project_url, DEFAULT_ZONE, DEFAULT_MACHINE_TYPE)
      zone_url = '%s/zones/%s' % (project_url, DEFAULT_ZONE)
      network_url = '%s/global/networks/%s' % (project_url, DEFAULT_NETWORK)
      root_disk_url = '%s/zones/%s/disks/%s' % (
            project_url, DEFAULT_ZONE, DEFAULT_ROOT_PD_NAME)
    
      ''' Commented out so we do not create multiple root disks :)
      request = gce_service.disks().insert(
           project=PROJECT_ID, body=disk, zone=DEFAULT_ZONE, sourceImage=image_url)
      response = request.execute(http=auth_http)
      response = _blocking_call(gce_service, auth_http, response) '''
    
      # Construct the request body
      instance = {
        'name': NEW_INSTANCE_NAME,
        'machineType': machine_type_url,
        'disks': [{
            'source': root_disk_url,
            'boot': 'true',
            'type': 'PERSISTENT'
          }],
        'networkInterfaces': [{
          'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
           }],
          'network': network_url
        }],
        'serviceAccounts': [{
             'email': DEFAULT_SERVICE_EMAIL,
             'scopes': DEFAULT_SCOPES
        }]
      }
    
      # Create the instance
      request = gce_service.instances().insert(
           project=PROJECT_ID, body=instance, zone=DEFAULT_ZONE)
      response = request.execute(http=auth_http)
      response = _blocking_call(gce_service, auth_http, response)
    
      print response
    
    def _blocking_call(gce_service, auth_http, response):
      """Blocks until the operation status is done for the given operation."""
    
      status = response['status']
      while status != 'DONE' and response:
        operation_id = response['name']
    
        # Identify if this is a per-zone resource
        if 'zone' in response:
          zone_name = response['zone'].split('/')[-1]
          request = gce_service.zoneOperations().get(
              project=PROJECT_ID,
              operation=operation_id,
              zone=zone_name)
        else:
          request = gce_service.globalOperations().get(
               project=PROJECT_ID, operation=operation_id)
    
        response = request.execute(http=auth_http)
        if response:
          status = response['status']
      return response
    
    if __name__ == '__main__':
      main(sys.argv)
    

    Notice that you construct the JSON for your instance here:

    instance = {
        'name': NEW_INSTANCE_NAME,
        'machineType': machine_type_url,
        'disks': [{
            'source': root_disk_url,
            'boot': 'true',
            'type': 'PERSISTENT'
          }],
        'networkInterfaces': [{
          'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
           }],
          'network': network_url
        }],
        'serviceAccounts': [{
             'email': DEFAULT_SERVICE_EMAIL,
             'scopes': DEFAULT_SCOPES
        }]
      }

    If you would rather not hand-construct the JSON for your request, many Google Compute Engine tools can automatically generate the JSON for you. For example, the Google Developers Console, which allows you to configure and create resources for your Google Compute Engine project, also provides a handy REST Request feature that constructs the JSON for the request for you. For more information, see Using the Console to Generate REST Requests.

    Similarly, you can also use gcutil with the --dump_request_response flag, which dumps the contents of the request and response, including the JSON body, to see how JSON requests should be constructed.
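
    For example (a sketch only; my-instance is a placeholder instance name), the following prints the JSON request and response that gcutil sends and receives while creating an instance:

    gcutil addinstance my-instance --image=debian-7 --dump_request_response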

    Adding Instance Metadata

    When you create your instance, you may want to include instance metadata, such as a startup script URL, or sshKeys. To do so, include the metadata field with your request body:

    instance = {
        'name': NEW_INSTANCE_NAME,
        'machineType': machine_type_url,
        'disks': [{
            'source': root_disk_url,
            'boot': 'true',
            'type': 'PERSISTENT'
          }],
        'networkInterfaces': [{
          'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
           }],
          'network': network_url
        }],
        'serviceAccounts': [{
             'email': DEFAULT_SERVICE_EMAIL,
             'scopes': DEFAULT_SCOPES
        }],
        'metadata': {
             'items': [{
                'key': <key>,
                'value': <value>,
             .....
             }]
        }
      }

    For example, you can use the metadata field to specify a startup script with your instance. Create a new file named startup.sh and populate it with the following contents:

    #!/bin/bash
    apt-get -y install imagemagick
    IMAGE_URL=$(curl http://metadata/computeMetadata/v1/instance/attributes/url -H "X-Google-Metadata-Request: True")
    TEXT=$(curl http://metadata/computeMetadata/v1/instance/attributes/text -H "X-Google-Metadata-Request: True")
    CS_BUCKET=$(curl http://metadata/computeMetadata/v1/instance/attributes/cs-bucket -H "X-Google-Metadata-Request: True")
    mkdir image-output
    cd image-output
    wget $IMAGE_URL
    convert * -pointsize 30 -fill black -annotate +10+40 $TEXT output.png
    gsutil cp -a public-read output.png gs://$CS_BUCKET/output.png

    This startup script installs the ImageMagick application, downloads an image from the Internet, adds some text on top, and copies the result to Google Cloud Storage. If you do not have a Google Cloud Storage account, you can sign up for the service during the limited free trial, which ends December 31, 2012. After you have signed up, create your bucket using the Google Cloud Storage manager. To try the script out, make the following changes to your helloworld.py file:

      # Construct URLs
      machine_type_url = '%s/zones/%s/machineTypes/%s' % (
            project_url, DEFAULT_ZONE, DEFAULT_MACHINE_TYPE)
      zone_url = '%s/zones/%s' % (project_url, DEFAULT_ZONE)
      network_url = '%s/global/networks/%s' % (project_url, DEFAULT_NETWORK)
      root_disk_url = '%s/zones/%s/disks/%s' % (
            project_url, DEFAULT_ZONE, DEFAULT_ROOT_PD_NAME)
    
      my_image = '<url_of_image>' # Choose an image from the Internet and put its URL here
      cs_bucket = '<your_google_cloud_storage_bucket>' # Must already exist, e.g. 'samplebucket'
      startup_script = 'startup.sh'
    
      # Construct the request body
      instance = {
        'name': 'startup-script-demo',
        'machineType': machine_type_url,
        'disks': [{
            'source': root_disk_url,
            'boot': 'true',
            'type': 'PERSISTENT'
          }],
        'networkInterfaces': [{
          'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
           }],
          'network': network_url
        }],
         'serviceAccounts': [{
             'email': DEFAULT_SERVICE_EMAIL,
             'scopes': DEFAULT_SCOPES
        }],
         'metadata': {
             'items': [{
                 'key': 'startup-script',
                 'value': open(startup_script, 'r').read()
               }, {
                 'key': 'url',
                 'value': my_image
               }, {
                 'key': 'text',
                 'value': 'AWESOME'
               }, {
                 'key': 'cs-bucket',
                 'value': cs_bucket
              }]
        }
      }
    
      request = gce_service.instances().insert(
           project=PROJECT_ID, body=instance, zone=DEFAULT_ZONE)
      response = request.execute(http=auth_http)
      response = _blocking_call(gce_service, auth_http, response)
    
      print response
      print '\n'
      print 'Visit http://commondatastorage.googleapis.com/%s/output.png' % (
          cs_bucket)
      print 'It might take a minute for the output.png file to show up.'

    Try running helloworld.py, which creates a new instance named startup-script-demo, and viewing your image in the provided URL.

    Stopping an Instance

    To stop an instance, you need to call the instances().delete() method and provide the name, zone, and project ID of the instance to delete. Add the following lines to your helloworld.py to delete the instance:

      # Delete an instance. INSTANCE_TO_DELETE should be set to the name of an
      # existing instance in DEFAULT_ZONE (for example, NEW_INSTANCE_NAME).
      request = gce_service.instances().delete(
           project=PROJECT_ID, instance=INSTANCE_TO_DELETE, zone=DEFAULT_ZONE)
      response = request.execute(http=auth_http)
      response = _blocking_call(gce_service, auth_http, response)
    
      print response

    Next Steps

    Now that you've completed this exercise, you can:

    • Download and view the full code example. The full sample includes a simple wrapper for the instance management methods and is generally cleaner than the example here. Feel free to download it, change it, and run it to suit your needs.
    • Review the API reference to learn how to perform other tasks with the API.
    • Start creating your own applications!

    Page: javascript-guide

    This document demonstrates how to use the google-api-javascript-client library with Google Compute Engine. It describes how to authorize requests and how to create, list, and stop instances. This exercise discusses how to use the google-api-javascript-client library to access Google Compute Engine from outside a VM instance. It does not discuss how to build and run applications within a VM instance.

    Contents

    Setup

    There is no setup necessary for this sample. However, you should have some proficiency with JavaScript and access to Google Compute Engine. If not, please visit the signup page. This sample also assumes you have basic HTML knowledge and access to a web server.

    Getting Started

    This basic sample describes how to use OAuth 2.0 authorization, and how to perform instance management tasks using the google-api-javascript-client library. At the end of this exercise, you should be able to:

    • Authorize your application to make requests to the Google Compute Engine API
    • Insert instances
    • List instances
    • List other resources (images, machine types, zones, networks, firewalls, and operations)
    • Delete instances

    To skip this exercise and view the more advanced code sample, visit the GoogleCloudPlatform GitHub page.

    Loading the Client Library

    To use the JavaScript client library, you first need to load it. The JavaScript client library is located at:

    https://apis.google.com/js/client.js

    Create a basic HTML file that looks like the following:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
     </head>
      <body>
      </body>
    </html>

    Authorizing Requests

    This sample uses a simple API key and client ID authorization method. To start, all applications are managed by the Google Developers Console. If you already have a registered application, you can use the client ID and API key from that application. If you don't have a registered application or would like to register a new application, follow the application registration process. Make sure to select Web Application as the application type.

    Once on the application's page, expand the OAuth 2.0 Client ID section and make note of the Client ID. Then, expand the Browser Key section and make note of the API Key.

    To authorize your application, use the gapi.auth.authorize() method, providing your client ID, API key, and a callback function. The method also provides an immediate field, which, when set to true, means that authorization is performed behind the scenes and no prompt is presented to the user. Add the following code to your HTML page:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
         var projectId = 'YOUR_PROJECT_ID';
         var clientId = 'YOUR_CLIENT_ID';
         var apiKey = 'YOUR_API_KEY';
         var scopes = 'https://www.googleapis.com/auth/compute';
    
         /**
          * Authorize Google Compute Engine API.
          */
         function authorization() {
           gapi.client.setApiKey(apiKey);
           gapi.auth.authorize({
             client_id: clientId,
             scope: scopes,
             immediate: false
           }, function(authResult) {
                if (authResult && !authResult.error) {
                  alert("Auth was successful!");
                } else {
                  alert("Auth was not successful");
                }
              }
            );  return false;
         }
    
         /**
          * Driver for sample application.
          */
         $(window)
           .bind('load', function() {
             authorization();
         });
        </script>
      </head>
      <body>
      </body>
    </html>
    

    View your page in a browser. On load, you should be prompted to authorize the application; once successfully authorized, an alert window should appear letting you know that your application was authorized successfully.

    Initializing the Google Compute Engine API

    Next, you need to initialize the API by calling gapi.client.load(), which accepts the API name and API version as parameters. In your HTML page, add an initialization function and invoke it once your application has been successfully authorized:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
         var projectId = 'YOUR_PROJECT_ID';
         var clientId = 'YOUR_CLIENT_ID';
         var apiKey = 'YOUR_API_KEY';
         var scopes = 'https://www.googleapis.com/auth/compute';
         var API_VERSION = 'v1';
    
         /**
          * Load the Google Compute Engine API.
          */
         function initializeApi() {
           gapi.client.load('compute', API_VERSION);
         }
    
         /**
          * Authorize Google Compute Engine API.
          */
         function authorization() {
           gapi.client.setApiKey(apiKey);
           gapi.auth.authorize({
             client_id: clientId,
             scope: scopes,
             immediate: false
           }, function(authResult) {
                if (authResult && !authResult.error) {
                  initializeApi();
                } else {
                  alert("Auth was not successful");
                }
              }
           );  return false;
         }
    
         /**
          * Driver for sample application.
          */
         $(window)
           .bind('load', function() {
             authorization();
         });
        </script>
      </head>
      <body>
      </body>
    </html>

    That's it! You can now start making requests to the API.

    Listing Instances

    To demonstrate some basic operations, start by constructing requests to list specific resources. Add the following to your webpage:

      </head>
      <body>
        <header>
          <h1>Google Compute Engine JavaScript Client Library Application</h1>
        </header>
        <button onClick="listInstances()">List Instances</button>
      </body>
    </html>

    Now, let's define the listInstances() function. To list instances, use the gapi.client.compute.instances.list() method:

    /**
     * Google Compute Engine API request to retrieve the list of instances in
     * your Google Compute Engine project.
     */
    function listInstances() {
      var request = gapi.client.compute.instances.list({
        'project': DEFAULT_PROJECT,
        'zone': DEFAULT_ZONE
      });
      request.execute(function(resp) {
        // Code to handle response
      })
    }

    The project and zone fields are required for this method, but you can also add fields that limit the maximum result size, filter instances, and set other options. To see a list of all possible fields for this method, visit the API Explorer or review the API documentation.

    Add the new listInstances() function to your webpage as follows:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
         var projectId = 'YOUR_PROJECT_ID';
         var clientId = 'YOUR_CLIENT_ID';
         var apiKey = 'YOUR_API_KEY';
         var scopes = 'https://www.googleapis.com/auth/compute';
         var API_VERSION = 'v1';
         
         var DEFAULT_ZONE = 'ZONE_NAME' // For example, us-central1-a
         var DEFAULT_PROJECT=projectId;
         var GOOGLE_PROJECT = 'debian-cloud';
         var BASE_URL = 'https://www.googleapis.com/compute/' + API_VERSION + '/projects/'
         
    
         /**
          * Load the Google Compute Engine API.
          */
         function initializeApi() {
           gapi.client.load('compute', API_VERSION);
         }
    
         /**
          * Authorize Google Compute Engine API.
          */
         function authorization() {
           gapi.client.setApiKey(apiKey);
           gapi.auth.authorize({
             client_id: clientId,
             scope: scopes,
             immediate: false
           }, function(authResult) {
                if (authResult && !authResult.error) {
                  initializeApi();
                } else {
                  alert("Auth was not successful");
                }
              }
           );  return false;
         }
    
         /**
          * Google Compute Engine API request to retrieve the list of instances in
          * your Google Compute Engine project.
          */
         function listInstances() {
           var request = gapi.client.compute.instances.list({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE
           });
           executeRequest(request, 'listInstances');
         }
    
         /**
          * Executes your Google Compute Engine request object and, subsequently,
          * prints the response.
          * @param {string} request A Google Compute Engine request object issued
          *    from the Google Compute Engine JavaScript client library.
          * @param {string} apiRequestName The name of the example API request.
          */
         function executeRequest(request, apiRequestName) {
           request.execute(function (resp) {
             newWindow = window.open(apiRequestName, '', 'width=600, height=600, scrollbars=yes');
             newWindow.document.write('<h1>' + apiRequestName + '</h1> <br />'
               + '<pre>' + JSON.stringify(resp.result, null, ' ') + '</pre>');
           });
         }
    
         /**
          * Driver for sample application.
          */
         $(window)
           .bind('load', function() {
             authorization();
         });
        </script>
      </head>
      <body>
        <header>
          <h1>Google Compute Engine JavaScript Client Library Application</h1>
        </header>
        <button onClick="listInstances()">List Instances</button>
      </body>
    </html>

    Refresh your page and click the new List Instances button, which returns a list of instances that belong to the project you specified.

    Filtering List Results

    A particularly useful feature when listing resources is the filter field, which allows you to filter your results based on an expression. For example, you can filter terminated instances from running instances by providing the following filter expression:

    /**
     * Google Compute Engine API request to retrieve a filtered list of
     * instances in your Google Compute Engine project.
     */
    function listInstancesWithFilter() {
      var request = gapi.client.compute.instances.list({
        'project': DEFAULT_PROJECT,
        'zone': DEFAULT_ZONE,
        'filter': 'status ne TERMINATED'
      });
      var apiRequestName = 'listInstancesWithFilter';
      executeRequest(request, apiRequestName);
    }

    Listing Other Resources

    To list other resources, use the respective methods described below:

    Type Method
    List Zones gapi.client.compute.zones.list()
    List Machine Types gapi.client.compute.machineTypes.list()
    List Global Operations gapi.client.compute.globalOperations.list()
    List Per-zone Operations gapi.client.compute.zoneOperations.list()
    List Images gapi.client.compute.images.list()
    List Firewalls gapi.client.compute.firewalls.list()

    Listing Operations

    Listing operation resources is unique because there are two types of operations you can list: per-zone and global operations. A per-zone operation is an operation performed on a resource that lives in a zone. Specifically, per-zone operations include disks and instances. Global operations are operations performed on global resources, such as machine types and images. For more information on per-zone and global resources, see the overview.

    Listing Per-Zone Operations

    To view a list of per-zone operations, use the gapi.client.compute.zoneOperations.list() method:

    /**
     * Google Compute Engine API request to retrieve the list of operations
     * (inserts, deletes, etc.) for your Google Compute Engine project.
     */
    function listZoneOperations() {
      var request = gapi.client.compute.zoneOperations.list({
        'project': DEFAULT_PROJECT,
        'zone': DEFAULT_ZONE
      });
      executeRequest(request, 'listZoneOperations');
    }

    In your webpage, add the following lines:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
    
         [...snip...]
        
          /**
        * Google Compute Engine API request to retrieve the list of operations
           * (inserts, deletes, etc.) for your Google Compute Engine project.
           */
          function listZoneOperations() {
            var request = gapi.client.compute.zoneOperations.list({
              'project': DEFAULT_PROJECT,
              'zone': DEFAULT_ZONE
            });
            executeRequest(request, 'listZoneOperations');
          }
    
         /**
          * Driver for sample application.
          */
         $(window)
           .bind('load', function() {
             authorization();
         });
        </script>
      </head>
      <body>
        <header>
          <h1>Google Compute Engine JavaScript Client Library Application</h1>
        </header>
        <button onClick="listInstances()">List Instances</button>
        <button onClick="listZoneOperations()">List Zone Operations</button>
      </body>
    </html>

    Listing Global Operations

    To view a list of global operations, use the gapi.client.compute.globalOperations.list() method:

    /**
     * Google Compute Engine API request to retrieve the list of operations
     * (inserts, deletes, etc.) for your Google Compute Engine project.
     */
    function listGlobalOperations() {
      var request = gapi.client.compute.globalOperations.list({
        'project': DEFAULT_PROJECT
      });
      executeRequest(request, 'listGlobalOperations');
    }

    In your webpage, add the following lines:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
    
         [...snip...]
        
          /**
        * Google Compute Engine API request to retrieve the list of global
           * operations (inserts, deletes, etc.) for your Google Compute Engine
           * project.
           */
          function listGlobalOperations() {
            var request = gapi.client.compute.globalOperations.list({
              'project': DEFAULT_PROJECT
            });
            executeRequest(request, 'listGlobalOperations');
          }
    
         /**
          * Driver for sample application.
          */
         $(window)
           .bind('load', function() {
             authorization();
         });
        </script>
      </head>
      <body>
        <header>
          <h1>Google Compute Engine JavaScript Client Library Application</h1>
        </header>
        <button onClick="listInstances()">List Instances</button>
        <button onClick="listZoneOperations()">List Zone Operations</button>
        <button onClick="listGlobalOperations()">List Global Operations</button>
      </body>
    </html>

    Inserting an Instance

    To insert an instance, use the gapi.client.compute.instances.insert() method, specifying an image, zone, machine type, and network interface object with your request:

    /**
     * Google Compute Engine API request to insert an instance into your
     * Google Compute Engine project.
     */
    function insertInstance() {
      var resource = {
        'image': DEFAULT_IMAGE,
        'name': DEFAULT_NAME,
        'machineType': DEFAULT_MACHINE_TYPE,
        'networkInterfaces': [{
          'network': DEFAULT_NETWORK
        }]
      };
      var request = gapi.client.compute.instances.insert({
        'project': DEFAULT_PROJECT,
        'zone': DEFAULT_ZONE,
        'resource': resource
      });

      request.execute(function(resp) {
        // Code to handle response
      });
    }
    Each resource page describes available zones, images, and machine types that you can use. When indicating specific resources, provide the full URL for the resource; for example, to specify an image, you need to provide the full URL to the image resource, like so:

    https://www.googleapis.com/compute/v1beta16/projects/debian-cloud/global/images/<image-name>

    The URLs to other resources include:

    Resource URL
    Machine Types https://www.googleapis.com/compute/v1beta16/projects/<project-id>/global/machineTypes/<machine-name>
    Zones https://www.googleapis.com/compute/v1beta16/projects/<project-id>/zones/<zone-name>
    Network https://www.googleapis.com/compute/v1beta16/projects/<project-id>/global/networks/<network-name>

    In your webpage, add the following lines:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
    
         [...snip...]
         
         var DEFAULT_IMAGE = BASE_URL + GOOGLE_PROJECT +
           '/global/images/debian-7-wheezy-v20130507';
         var DEFAULT_MACHINE_TYPE = BASE_URL + DEFAULT_PROJECT +
           '/zones/' + DEFAULT_ZONE + '/machineTypes/n1-standard-1';
         var DEFAULT_NETWORK = BASE_URL + DEFAULT_PROJECT + '/global/networks/default';
         var DEFAULT_NAME = 'test-node';
    
         [...snip...]
    
         /**
          * Google Compute Engine API request to insert your resource as an instance
          * into your cluster.
          */
         function insertInstance() {
            var resource = {
             'image': DEFAULT_IMAGE,
             'name': DEFAULT_NAME,
             'machineType': DEFAULT_MACHINE_TYPE,
             'networkInterfaces': [{
               'network': DEFAULT_NETWORK
             }]
           };
           var request = gapi.client.compute.instances.insert({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE,
             'resource': resource
           });
           executeRequest(request, 'insertInstance');
         }
    
         /**
          * Driver for sample application.
          */
         $(window)
          .bind('load', function() {
            authorization();
         });
        </script>
      </head>
      <body>
        <header>
          <h1>Google Compute Engine JavaScript Client Library Application</h1>
        </header>
        <button onClick="listInstances()">List Instances</button>
        <button onClick="listZoneOperations()">List Zone Operations</button>
        <button onClick="listGlobalOperations()">List Global Operations</button>
        <button onClick="insertInstance()">Insert an Instance</button>
      </body>
    </html>

    Refresh your webpage in the browser and click the Insert Instance button to create your new instance. Click the List Instances button to see your new instance appear in your list of instances.

    Inserting an Instance with Metadata

    When creating an instance, you can also pass in custom metadata. You can define any type of custom metadata, but custom metadata is particularly useful for running startup scripts. To create an instance with custom metadata, add the metadata field like so:

    /**
     * Google Compute Engine API request to insert your instance with metadata
     * into your cluster.
     */
    function insertInstanceWithMetadata() {
      var resource = {
        'image': DEFAULT_IMAGE,
        'name': 'node-with-metadata',
        'machineType': DEFAULT_MACHINE_TYPE,
        'networkInterfaces': [{
          'network': DEFAULT_NETWORK
        }],
        'metadata': {
          'items': [{
            'value': 'apt-get install apache2',
            'key': 'startup-script'
          }]
        }
      };
      var request = gapi.client.compute.instances.insert({
        'project': DEFAULT_PROJECT,
        'zone': DEFAULT_ZONE,
        'resource': resource
      });
      executeRequest(request, 'insertInstanceWithMetadata');
    }

    The metadata for this instance defines a startup script that installs apache2.

    Getting an Instance

    To get information about a particular instance, use the gapi.client.compute.instances.get() method, passing in the project, zone, and instance fields:

    /**
     * Google Compute Engine API request to get your Google Compute Engine
     * instance.
     */
    function getInstance() {
      var request = gapi.client.compute.instances.get({
        'project': DEFAULT_PROJECT,
        'zone': DEFAULT_ZONE,
        'instance': DEFAULT_NAME
      });
      request.execute(function(resp) {
        // Code to handle response
      })
    }

    In your webpage, add the following lines:

    <!DOCTYPE html>
    
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
    
         [...snip...]
        
         /**
          * Google Compute Engine API request to get your resource
          */
         function getInstance() {
           var request = gapi.client.compute.instances.get({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE,
             'instance': DEFAULT_NAME
           });
           executeRequest(request, 'getInstance');
         }
    
        /**
         * Driver for sample application.
         */
         $(window)
           .bind('load', function() {
             authorization();
         });
        </script>
      </head>
      <body>
        <header>
          <h1>Google Compute Engine JavaScript Client Library Application</h1>
        </header>
        <button onClick="listInstances()">List Instances</button>
        <button onClick="listZoneOperations()">List Zone Operations</button>
        <button onClick="listGlobalOperations()">List Global Operations</button>
        <button onClick="insertInstance()">Insert an Instance</button>
        <button onClick="getInstance()">Get an Instance</button>
      </body>
    </html>

    Refresh your page and click on the Get an Instance button.

    Deleting an Instance

    To delete an instance, use the gapi.client.compute.instances.delete() method, providing the project ID, zone, and instance name:

    /**
     * Google Compute Engine API request to delete your Google Compute Engine
     * instance.
     */
    function deleteInstance() {
      var request = gapi.client.compute.instances.delete({
        'project': DEFAULT_PROJECT,
        'zone': DEFAULT_ZONE,
        'instance': DEFAULT_NAME
      });
      request.execute(function(resp) {
        // Code to handle response
      })
    }

    In your webpage, add the following lines:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
    
         [...snip...]
        
         /**
          * Google Compute Engine API request to delete your instance
          */
         function deleteInstance() {
           var request = gapi.client.compute.instances.delete({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE,
             'instance': DEFAULT_NAME
           });
           executeRequest(request, 'deleteInstance');
         }
    
         /**
          * Driver for sample application.
          */
         $(window)
           .bind('load', function() {
             authorization();
         });
        </script>
      </head>
      <body>
        <header>
          <h1>Google Compute Engine JavaScript Client Library Application</h1>
        </header>
        <button onClick="listInstances()">List Instances</button>
        <button onClick="listZoneOperations()">List Zone Operations</button>
        <button onClick="listGlobalOperations()">List Global Operations</button>
        <button onClick="insertInstance()">Insert an Instance</button>
        <button onClick="getInstance()">Get an Instance</button>
        <button onClick="deleteInstance()">Delete an Instance</button>
      </body>
    </html>

    Refresh your page and click on the Delete an Instance button.

    Full Sample

    Here is the full code sample for this exercise:

    <!DOCTYPE html>
    <html>
      <head>
       <meta charset='utf-8' />
       <link rel="stylesheet" href="style.css" />
       <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
       <script src="https://apis.google.com/js/client.js"></script>
       <script type="text/javascript">
         var projectId = 'YOUR_PROJECT_ID';
         var clientId = 'YOUR_CLIENT_ID';
         var apiKey = 'YOUR_API_KEY';
         var scopes = 'https://www.googleapis.com/auth/compute';
         var API_VERSION = 'v1beta16';
         var DEFAULT_PROJECT = projectId;
         var GOOGLE_PROJECT = 'debian-cloud';
         var DEFAULT_NAME = 'test-node';
         var DEFAULT_ZONE = 'ZONE_NAME' // For example, us-central1-a
         var BASE_URL = 'https://www.googleapis.com/compute/' + API_VERSION +
           '/projects/'
         var DEFAULT_IMAGE = BASE_URL + GOOGLE_PROJECT +
           '/global/images/debian-7-wheezy-v20130507';
         var DEFAULT_MACHINE_TYPE = BASE_URL + DEFAULT_PROJECT +
           '/zones/' + DEFAULT_ZONE + '/machineTypes/n1-standard-1';
         var DEFAULT_NETWORK = BASE_URL + DEFAULT_PROJECT + '/global/networks/default';
    
         /**
          * Authorize Google Compute Engine API.
          */
         function authorization() {
           gapi.client.setApiKey(apiKey);
           gapi.auth.authorize({
             client_id: clientId,
             scope: scopes,
             immediate: false
           }, function(authResult) {
                if (authResult && !authResult.error) {
                  initializeApi();
                } else {
                  alert("Auth was not successful");
                }
              }
           );  return false;
         }
    
         /**
          * Load the Google Compute Engine API.
          */
         function initializeApi() {
           gapi.client.load('compute', API_VERSION);
         }
    
         /**
          * Executes your Google Compute Engine request object and, subsequently,
          * prints the response.
          * @param {string} request A Google Compute Engine request object issued
          *    from the Google Compute Engine JavaScript client library.
          * @param {string} apiRequestName The name of the example API request.
          */
         function executeRequest(request, apiRequestName) {
           request.execute(function (resp) {
              var newWindow = window.open(apiRequestName, '', 'width=600, height=600, scrollbars=yes');
             newWindow.document.write('<h1>' + apiRequestName + '</h1> <br />'
               + '<pre>' + JSON.stringify(resp.result, null, ' ') + '</pre>');
           });
         }
    
         /**
          * Google Compute Engine API request to delete your instance
          */
         function deleteInstance() {
           var request = gapi.client.compute.instances.delete({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE,
             'instance': DEFAULT_NAME
           });
           executeRequest(request, 'deleteInstance');
         }
    
         /**
          * Google Compute Engine API request to get your instance
          */
         function getInstance() {
           var request = gapi.client.compute.instances.get({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE,
             'instance': DEFAULT_NAME
           });
           executeRequest(request, 'getInstance');
         }
    
         /**
          * Google Compute Engine API request to insert your instance with metadata
          */
         function insertInstanceWithMetadata() {
            var resource = {
             'image': DEFAULT_IMAGE,
             'name': 'node-with-metadata',
             'machineType': DEFAULT_MACHINE_TYPE,
             'networkInterfaces': [{
               'network': DEFAULT_NETWORK
             }],
             'metadata': {
               'items': [{
                 'value': 'apt-get install apache2',
                 'key': 'startup-script'
               }]
             }
           };
           var request = gapi.client.compute.instances.insert({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE,
             'resource': resource
           });
           executeRequest(request, 'insertInstanceWithMetadata');
         }
    
         /**
          * Google Compute Engine API request to insert your instance
          */
         function insertInstance() {
            var resource = {
             'image': DEFAULT_IMAGE,
             'name': DEFAULT_NAME,
             'machineType': DEFAULT_MACHINE_TYPE,
             'networkInterfaces': [{
               'network': DEFAULT_NETWORK
             }]
           };
           var request = gapi.client.compute.instances.insert({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE,
             'resource': resource
           });
           executeRequest(request, 'insertInstance');
         }
    
         /**
           * Google Compute Engine API request to retrieve the list of global
          * operations (inserts, deletes, etc.) for your Google Compute Engine
          * project.
          */
         function listGlobalOperations() {
           var request = gapi.client.compute.globalOperations.list({
             'project': DEFAULT_PROJECT
           });
           executeRequest(request, 'listGlobalOperations');
         }
    
         /**
           * Google Compute Engine API request to retrieve the list of operations
          * (inserts, deletes, etc.) for your Google Compute Engine project.
          */
         function listZoneOperations() {
           var request = gapi.client.compute.zoneOperations.list({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE
           });
           executeRequest(request, 'listZoneOperations');
         }
    
         /**
          * Google Compute Engine API request to retrieve a filtered list
          *  of instances in  your Google Compute Engine project.
          */
         function listInstancesWithFilter() {
           var request = gapi.client.compute.instances.list({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE,
             'filter': 'status ne TERMINATED'
           });
           var apiRequestName = 'listInstancesWithFilter';
           executeRequest(request, apiRequestName);
         }
    
         /**
          * Google Compute Engine API request to retrieve the list of instances in
          * your Google Compute Engine project.
          */
         function listInstances() {
           var request = gapi.client.compute.instances.list({
             'project': DEFAULT_PROJECT,
             'zone': DEFAULT_ZONE
           });
           executeRequest(request, 'listInstances');
         }
    
         /**
          * Driver for sample application.
          */
         $(window)
           .bind('load', function() {
             authorization();
         });
        </script>
      </head>
      <body>
       <header>
          <h1>Google Compute Engine JavaScript Client Library Application</h1>
        </header>
        <button onClick="listInstances()">List Instances</button>
        <button onClick="listInstancesWithFilter()">List Instances with Filter</button>
        <button onClick="listZoneOperations()">List Zone Operations</button>
        <button onClick="listGlobalOperations()">List Global Operations</button>
        <button onClick="insertInstance()">Insert an Instance</button>
        <button onClick="insertInstanceWithMetadata()">Insert Instance with Metadata</button>
        <button onClick="getInstance()">Get an Instance</button>
        <button onClick="deleteInstance()">Delete an Instance</button>
      </body>
    </html>
    

    Next Steps

    Now that you have completed this exercise, you can:

    • Download a more advanced sample and view other samples at the googlecloudplatform GitHub page. The advanced sample includes some more complex dispatching methods and is cleaner than our sample here.
    • View the similar Python Getting Started Guide which describes how to use the Python client library.
    • Review the API reference to learn how to perform other tasks with the API.
    • Start creating your own applications!

    Page: console

    Google Compute Engine offers the browser-based Google Developers Console tool that you can use to manage your Google Compute Engine resources. Use the Developers Console to list, create, and delete your instances and disks, and to list information about your networks, firewalls, and zones, such as the size of your disks, the firewalls attached to a network, or the zones available to you.

    Getting Started

    The Developers Console lets you manage your Google Compute Engine resources through an easy-to-use graphical user interface. Through the Console, you can create and manage instances, disks, networks, and other resources.

    To get started using the Console, review the instructions on accessing the Console, and start managing your resources! Most tasks in the Console are intuitive but this document provides instructions for some less common tasks.

    Accessing the Console

    To access the Console:

    1. Log in to the Console.
    2. Choose the project where you have enabled Google Compute Engine.
    3. Click on Google Compute Engine.

    Setting Up ssh Keys

    After you create new instances in the Console, you can access them using gcutil or a standard ssh client.

    When you connect using gcutil, gcutil automatically handles creating your ssh key and inserting it into the instance. If you want to use an ssh client without gcutil, you can insert your ssh key using the Console. This can be especially useful if you already have a public/private key pair and would like to use the same public/private key pair for your new instances.

    To insert your keys using the Console:

    1. Generate your keys using ssh-keygen (or PuTTYgen for Windows), if you haven't already.
    2. Copy your public key. If you just generated this key, it can probably be found in a file named id_rsa.pub.
    3. Log in to the Console.
    4. Add your key to the project metadata:
      1. Click on the Metadata page.
      2. Enter sshKeys in the blank key box.
      3. In the corresponding Value, enter a value for the ssh key in the following format:
        <username>:<public_key>

        This makes your public key automatically available to all of your instances in that project. To add multiple keys, list each key on a new line. See the example after these steps.

      4. Save your changes.

        Click Add metadata to save your changes. It may take several minutes before the key is inserted into the instance. Try ssh'ing into your instance. If it is successful, your key has been propagated to the instance.
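
    As a rough illustration of steps 1 and 2 above, the commands below sketch generating a key pair and printing the public key that you would paste into the sshKeys value; the key file name and username are placeholders, not required values:

      $ ssh-keygen -t rsa -f ~/.ssh/my_gce_key -C exampleuser
      $ cat ~/.ssh/my_gce_key.pub
      ssh-rsa AAAAB3NzaC1yc2EAAA...<truncated>... exampleuser

    The corresponding sshKeys value would then be the single line exampleuser:ssh-rsa AAAAB3NzaC1yc2EAAA...<truncated>... exampleuser.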

    Attaching a Persistent Disk

    You can attach a persistent disk at instance creation or attach it after you have started your instance.

    To attach your persistent disk at instance creation:

    1. Log in to the Console.
    2. Click on VM Instances.
    3. Click on New Instance.
    4. Under Location and Resources, select a persistent disk from the Persistent Disks drop-down field.

    To attach a persistent disk to a running instance:

    1. Log in to the Console.
    2. On the VM Instances page, select an instance.
    3. Under the Disks section, click on Attach.
    4. Select a disk to attach to the instance.
    5. Select whether to attach the disk in read-only or read-write mode.
    6. Click Attach Disk.

    Using a Root Persistent Disk

    You can start your instance using a root filesystem on a persistent disk instead of a root filesystem on a scratch disk.

    1. Log in to the Console.
    2. Click on VM Instances to navigate to the Instances page.
    3. Click on the New Instance button to create a new instance.
    4. Under the Location and resources section, locate the Boot source field.
    5. Select New persistent disk from image.
    6. Select an image in the Image field. Your root disk is created with the same name as your instance (<instance-name>) and is automatically attached to your new instance.

      Alternatively, you can also create a root disk and attach it separately:

      1. Click on Disks.
      2. Click on New Disk to create a new disk.
      3. For the Source image field, select an image to apply to the persistent disk.

        Note: You can only apply a source image or a source snapshot. It is not possible to select both options.

      4. Click Create.
      5. On the New Instance page, select Existing persistent disk as the boot source.
      6. Select a kernel for the disk.
    7. Click Create when you are ready to create your instance.

    Caution: Although you can also select a scratch disk as your boot source, this is not recommended because scratch disks only last the life of the instance and aren't persistent once the instance is terminated.

    Using Persistent Disk Snapshots

    You can use the Console to create snapshots of your persistent disks, and apply these snapshots to new disks in your project. Snapshots are useful for migrating disks across zones and can also act as a backup mechanism for your persistent disks.

    Caution: Before you take a snapshot, you should make sure your disk buffers are flushed. Review the snapshot documentation for more information.

    To create a snapshot:

    1. Log in to the Console.
    2. Click on the Snapshots page.
    3. Click on New Snapshot.
    4. Fill out the form and click Create.

    To use a snapshot to create a new disk:

    1. Log in to the Console.
    2. Click on the Disks page and then click New Disk.
    3. Choose a name for your new disk and the zone where this disk should live.
    4. Select your snapshot in the Source snapshot drop-down menu.
    5. Click on Create.

    Using the Console to Generate REST Requests

    When you create a new resource using the Console, Google Compute Engine also shows the REST request that is used to create this resource. This is a good way to view a sample REST request or to build your own REST request using a graphical interface. To see an example of this:

    1. Log in to the Console.
    2. On the Instances page, click New Instance.
    3. Click on Equivalent REST to view the REST details for creating a new instance.

    General Notes

    Here are some general notes to be aware of when using Google Compute Engine:

    • Certain ports are blocked by default

      These blocked ports cannot be unblocked without an exception from the Google Compute Engine team. You cannot unblock these ports by setting firewall rules.

    • There is a limit to the number of persistent disks you can attach to an instance

      Review those limits if you run into errors attaching your disks.

    Page: gcutil

    This document describes installation and usage of the gcutil tool. gcutil is a command-line tool that is used to manage your Google Compute Engine resources.

    Note: Looking for reference pages for gcutil commands? See the reference documentation instead!

    If you haven't already activated Google Compute Engine, you must follow the Sign Up steps.

    Contents

    1. System Requirements
    2. Installing gcutil
    3. Authenticating to Google Compute Engine
    4. Upgrading gcutil on Google Compute Engine instances (Deprecated)
    5. Next Steps

    System Requirements

    gcutil runs on UNIX-based operating systems such as Linux and Mac OS X. To use gcutil, you must have Python 2.6.x or 2.7.x installed on your computer. gcutil does not support Python 3.x. Python is installed by default on most Linux distributions and Mac OS X.
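
    For example, one quick way to confirm that a compatible Python is on your path is to check its version (the output shown here is only an example):

      $ python -V
      Python 2.7.3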

    You can also run gcutil on the Microsoft Windows family of operating systems by using Cygwin. Cygwin is not installed on Windows by default. The instructions below describe how to install Cygwin.

    Installing gcutil

    This section discusses how to install gcutil on your computer.

    gcutil is distributed as part of the Cloud SDK, which contains tools and libraries for managing resources on Google Cloud Platform.

    Installing on Linux or Mac OS X


    1. Download and install the Cloud SDK.

      You can download and install the Cloud SDK using the following command:

      $ curl https://dl.google.com/dl/cloudsdk/release/install_google_cloud_sdk.bash | bash

      Alternatively, if you don't want to use curl, you can always download and unzip the package manually:

      1. Download google-cloud-sdk.zip
      2. Unzip the file:
        $ unzip google-cloud-sdk.zip
      3. Run the installation script:
        $ ./google-cloud-sdk/install.sh

      Follow the prompts to complete the setup. When prompted if you would like to update your system path, select y.

    2. Restart your terminal to allow changes to your PATH to take effect.

      You can also run source ~/.<bash-profile-file> if you want to avoid restarting your terminal.

    3. Authenticate to the Google Cloud platform by running:
      $ gcloud auth login

    Installing on Windows with Cygwin


    1. Download and install Cygwin.

      Cygwin's website contains installation instructions. While installing Cygwin, be sure to select openssh, curl, and the latest 2.6.x or 2.7.x version of python from the package selection screen.

    2. Start Cygwin.

      By default, you can launch Cygwin by going to Start -> All Programs -> Cygwin -> Cygwin Terminal.

    3. Download the Cloud SDK and install it.

      You can download and install the Cloud SDK by issuing the following commands from Cygwin:

      $ curl https://dl.google.com/dl/cloudsdk/release/install_google_cloud_sdk.bash | bash

      Alternatively, if you don't want to use curl, you can always download and unzip the package manually:

      1. Download google-cloud-sdk.zip.
      2. Unzip the file by right-clicking on it and selecting Extract all.
      3. Run the installation script by clicking on the install.bat file.

      Follow the prompts to complete the setup. When prompted if you would like to update your system path, select y.

    4. Restart Cygwin (or cmd).
    5. Authenticate to the Google Cloud platform by running:
      $ gcloud auth login

    Authenticating to Google Compute Engine

    Google Compute Engine uses OAuth2 to authenticate and authorize access. Before you can use gcutil, you must first authorize the Cloud SDK to access your project on your behalf and acquire an auth token. You won't need to repeat these steps unless you delete your stored credentials file or remove Google Compute Engine access from your Google account.

    1. Run gcloud auth login to request a token. This command prints a URL and opens a browser window to request access to your project.
      $ gcloud auth login
      Your browser has been opened to visit:
      
      https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.co%2
      Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery+https
      %3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%
      Fauth%2Fdevstorage.full_control+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuser...
      
      Created new window in existing browser session.

      You can also provide the --no-launch-browser flag if your browser doesn't automatically load the URL. If you provide this flag, the tool will print out a verification code that you can copy and paste into a browser, instead of opening a new browser window.

      $ gcloud auth login --no-launch-browser
      Go to the following link in your browser:
      
      https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.co%2
      Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery+https
      %3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%
      Fauth%2Fdevstorage.full_control+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuser...
      
      Enter verification code:
    2. Grant access.

      In the browser window, review the application permissions and click Accept when you are ready. If you used the --no-launch-browser flag, copy and paste the printed code on the next page onto the command line. Otherwise, the code will automatically be sent to the command line without any additional action on your part.

    3. (Optional) Next, the tool will prompt you for a project ID to use as your default project. Enter the ID of the project you want to use for Google Compute Engine:
      You can list your projects and create new ones in the Google Cloud
      console at https://cloud.google.com/console. If you have a project
      ready, you can enter it now.
      Enter a cloud project id (or leave blank
      to not set): myproject

      If you do not want to select a default project at this time, you can leave the prompt blank. To set your project ID later, run the following command at any time:

      $ gcloud config set project <new-project-id>

      Similarly, to unset your project ID, run:

      $ gcloud config unset project

      You can also view your settings, including your project ID:

      $ gcloud config list
    4. Try a quick example, such as a gcutil listinstances command:
      $ gcutil listinstances

    Back to top

    Upgrading gcutil on Compute Engine instances

    Warning: The stand-alone version of gcutil is deprecated and we encourage users to transition to using the Cloud SDK.

    Currently, Compute Engine instances do not come with the Cloud SDK preinstalled; instead, they provide a stand-alone version of gcutil. We encourage users to transition to the Cloud SDK using the installation instructions above, but if you need to keep the stand-alone gcutil version, use the following instructions to keep the tool updated to the latest version.

    1. Download the latest gcutil-1.14.2.tar.gz file.
      wget https://dl.google.com/dl/cloudsdk/release/artifacts/gcutil-1.14.2.tar.gz
    2. Extract the files.

      Run the following command to extract the tar file. This unpacks a directory named gcutil-1.14.2 in /usr/local/share.

      sudo tar xzvpf gcutil-1.14.2.tar.gz -C /usr/local/share
    3. Create a symbolic link to the gcutil binary.
      sudo ln -sf /usr/local/share/gcutil-1.14.2/gcutil /usr/local/bin/gcutil
    4. Start using gcutil.

      If you modified your system path, restart your shell before continuing so the changes can take effect. To see a list of available gcutil commands, run:

      gcutil help

    Back to top

    Next Steps

    That's it, you can now start using gcutil! Here are some ideas to get you started:

    Page: tips

    This document describes some helpful usage tips for gcutil. Note that this is not a comprehensive list. For a comprehensive list of all available gcutil flags and commands, run:

    $ gcutil
    $ gcutil --help

    Listing Available gcutil Commands

    To list all available gcutil commands, run:

    gcutil

    Similarly, you can list all global gcutil flags by running:

    gcutil --help

    For help on a specific command and a list of the non-global flags for that command, run:

    gcutil help <command-name>

    Filtering List Results

    When you list resources, you may want to filter your list results based on a set of criteria so that you get a shorter list of more relevant results. To do so, use the --filter flag:

    --filter="<expression>"

    Your <expression> must contain the following:

    <field-name> <comparison-string> <literal-string>
    • <field-name>: The name of the field you want to compare. The field name must be valid for the type of resource being filtered. Only atomic field types are supported (string, number, boolean). Array and object fields are not currently supported.
    • <comparison-string>: The comparison string, either eq (equals) or ne (not equals).
    • <literal-string>: The literal string value to filter to. The literal value must be valid for the type of field (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field. For example, when filtering instances, name eq my-instance won't work, but name eq .*my-instance will work.

    For example:

    gcutil listinstances --filter="status ne RUNNING" --project=my-project 

    The above filter causes gcutil to return only results whose status field does not equal RUNNING. Here is a more complex example:

    gcutil listinstances --filter="name eq '.*/my-instance-[0-9]+'" --project=my-project

    This would list all instances whose name matches the given regular expression.

    Formatting List Results

    gcutil allows you to format your list results using one of several display options, including:

    • sparse
    • json
    • csv
    • table
    • names

    For example, if you want to list just the names of a certain resource, you can use the names option:

    gcutil listinstances --format=names --project=<project-id>
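
    Similarly, to print the full resource records as machine-readable output, a sketch using the json option from the list above:

    gcutil listinstances --format=json --project=<project-id>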

    Moving Instances Across Zones

    gcutil offers a helper function to move instances across zones using the gcutil moveinstances command. The command copies your instance configuration, takes snapshots of any persistent disks attached to the instance, deletes the existing instances, and then recreates the instances in your destination zone.

    Important Warnings

    This particular feature takes a snapshot of all your existing persistent disk data, shuts down your instance, recreates your persistent disks in the new zone, and starts new instances with your instance configurations. It is possible that this process could fail before it completes your move. For this reason, we encourage you to also back up your data before running this command, so that you have backup resources to recover from if necessary. In particular, you should create snapshots of any persistent disk data you want to keep and also create custom images of the instances you are moving.

    Here are other warnings to keep in mind:

    • This command deletes and recreates instances for you, but does not preserve any scratch disk data, ephemeral IP addresses, or contents of memory. Before executing this command, make sure you have backed up any scratch data you want to keep.
    • You should not modify your project while a move is in progress. In particular, any changes that affect quotas, introduce naming conflicts, or delete required resources, such as snapshots, will cause unpredictable behavior from gcutil.

    Prerequisites

    You must fulfill the following prerequisites before you use this feature:

    • The desired zone must have enough quota to handle the number of new instances and disks that are created.

      To see how much quota is available in a zone using gcutil, use the gcutil getzone <zone> command.

    • If you want to move instances that have persistent disks attached, you must move all instances that are using those persistent disks.

      For example, if instance-1 and instance-2 are both using persistent-disk-1, you must move both instances together to be able to successfully move the persistent disk. If you attempt to move one instance and not the other, gcutil won't perform the move.

    • The destination zone cannot contain instances with the same names as the instances you want to move.

      The moveinstances feature also copies over instance names to your desired zone. For example, if you move an instance named my-instance from us-central1-a to us-east1-a, gcutil automatically uses the my-instance name for your new instance. However, if your new zone already has an instance with that name, the command quits, and your instances are not moved. Make sure your desired destination zone does not already contain an instance with the same name as an instance you want to move.

    If the above preconditions are not met, gcutil won't perform the move at all and your instances won't be modified.

    When you run this command, almost all data that is accessible through the API, such as persistent disk data, is copied to your new instances. Server-generated data and data that is not accessible through the API, such as selfLinks, ephemeral IP addresses, scratch disk data, and any contents of memory, are not preserved. Generally, startup scripts and storing data on persistent disks are the ideal ways to reinstall software and store persistent data, as opposed to scratch disks, which are lost when the instance is terminated.

    To move an instance:

    gcutil --project=<project-id> moveinstances <name-regex-1> <name-regex-2> ... <name-regex-n> \
           --source_zone=<zone-name> \
           --destination_zone=<zone-name> \
           [--force] [--keep_snapshots]
    

    Important flags and parameters (an example invocation follows these descriptions):

    --project=<project-id>
    [Required] The project ID for this request.
    <name-regex-n>
    [Required] A series of regular expressions matched against instance names that should be moved to the new zone. If an instance name matches any of these regular expressions, it will be moved.
    --source_zone=<zone-name>
    [Required] The current zone where the instances live.
    --destination_zone=<zone-name>
    [Required] The new zone where you want your instances to be recreated.
    --force
    [Optional] Overrides the confirmation prompt.
    --keep_snapshots
    [Optional] By default, all snapshots generated during the move are deleted after a successful move. If you want to keep these snapshots, specify this flag.
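
    For example, a sketch of a concrete invocation using the flags described above (the project, instance name pattern, and zones are placeholders):

    gcutil --project=my-project moveinstances 'my-instance-[0-9]+' \
           --source_zone=us-central1-a \
           --destination_zone=us-central1-b \
           --keep_snapshots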

    Also, as part of the move process, gcutil creates snapshots of any attached persistent disks and recreates these disks in the new zone. Once the move is complete, the snapshots are automatically deleted. You may incur some charges for these snapshots while they exist. For information on how snapshots are priced, see the price sheet.

    Resuming a Move

    gcutil generates a log file every time you start the moveinstances command. If the move fails, you can use the log file to resume a move.

    gcutil --project=<project-id> resumemove <log-path> \
               [--force] [--keep_log_file] [--keep_snapshots]

    Important flags and parameters:

    --project=<project-id>
    [Required] The project ID for this request.
    --force
    [Optional] Override the confirmation prompt.
    --keep_log_file
    [Optional] By default, gcutil deletes the log file after the command completes successfully. If you want to keep the log file, include this flag.
    --keep_snapshots
    [Optional] After a successful move, all generated snapshots from the move are deleted. If you want to keep these snapshots, set this flag.
    <log-path>
    [Required] The path to the log file that gcutil should use. When running gcutil moveinstances, look for the following line:
    If the move fails, you can re-attempt it using:
     gcutil resumemove /usr/local/home/user/.gcutil.move.20130201231024

    where /path/to/your/homedir/.gcutil.move.<datetime> is the log path.

    Checking Which User You Are Authorized As

    Use the --just_check_auth flag:

    gcutil --project=<project-id> auth --just_check_auth
    INFO: Authorization succeeded for user <user>

    Printing a REST request and its corresponding JSON response

    To see the details of the REST request and its corresponding JSON response sent and received by gcutil, use the --dump_request_response flag:

    gcutil --dump_request_response <command> <command_flags>

    To see only the JSON response to your request, use the --print_json flag.

    gcutil <command> <command_flags> --print_json

    Saving and Reusing Flag Values

    When you run gcutil commands, there are certain flags that you must always specify. For example, every gcutil command requires a --project flag so that gcutil knows for which project it should execute the command. To make this more convenient, gcutil provides a --cache_flag_values flag which saves all your specified flags for that command into a .gcutil.flags file. To save and use gcutil flag values, run:

    gcutil <command> <command_flags> --cache_flag_values=True

    By default, gcutil creates a .gcutil.flags file in your home directory. If you would like to save your .gcutil.flags file elsewhere, you can specify a file path using the --cached_flags_file flag:

    gcutil <command> <command_flags> --cache_flag_values=True [--cached_flags_file=<some_file_path>]

    where <some_file_path> is your desired file path. You can specify the file path in a variety of ways:

    • ~/<filename>
    • ./<filename>
    • subdirectory/<filename>

    gcutil always looks for an existing .gcutil.flags file in the following manner:

    1. If the --cached_flags_file flag is specified with a file path, gcutil uses the file path provided.
    2. If the --cached_flags_file flag is not specified, or is specified with only a file name rather than a file path, gcutil looks for the file name in the current directory. If it doesn't find the file in the current directory, it looks for the file in the parent directories, and lastly, it looks for the file in the home directory if the file is not found in the parent directories. gcutil uses the first instance of the file it finds.

    This behavior allows you to create multiple .gcutil.flags files for easy access to more than one project. For example, if you use multiple projects with gcutil, you can save a customized cached flag file in each project's directory, and gcutil automatically selects the right cached flag file for you.
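
    A sketch of this per-directory setup, using only the flags described above (the directory names and project IDs are placeholders):

    cd ~/project-a
    gcutil listinstances --project=project-a-id --cache_flag_values=True --cached_flags_file=./.gcutil.flags
    cd ~/project-b
    gcutil listinstances --project=project-b-id --cache_flag_values=True --cached_flags_file=./.gcutil.flags

    Afterwards, gcutil commands run from either directory pick up that directory's .gcutil.flags file, so you no longer need to pass --project there.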

    Revoking a Refresh Token

    If your refresh tokens are compromised, you can revoke them so that they can no longer be used. To revoke a refresh token:

    1. Log into your Google account page.
    2. Click on Security and then click the Edit button next to Authorizing applications and sites.
    3. Click Revoke Access next to Google Cloud SDK.

    Resetting an Instance

    For information on how to reset an instance, review the Instances documentation.

    Performing Requests in Asynchronous Mode

    To perform a request in asynchronous mode, meaning gcutil returns immediately after posting a request, regardless of whether or not the request has completed, use the --nosynchronous_mode flag. This can be useful if you know a request will take a while to complete and you want to be able to perform other tasks in the meantime. By default, gcutil performs requests in synchronous mode.
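
    For example, assuming --nosynchronous_mode is the negated form of the boolean --synchronous_mode flag, a sketch of starting a long-running operation without waiting for it to finish (the instance and zone names are placeholders):

    gcutil --project=<project-id> addinstance my-async-instance --zone=<zone-name> --nosynchronous_mode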

    Using gcutil with Multiple Accounts

    You will need a different authorization token for each Google account with which you want to access Google Compute Engine. gcutil provides a --credentials_file flag for this purpose. When specifying a new --credentials_file that does not yet exist, gcutil prompts you through the authorization process again. Sign into the intended Google account in your browser, and authorize gcutil to access your account. For example:

    1. Sign into Google with user1@gmail.com on your browser
    2. Run gcutil auth --credentials_file=$HOME/.user1_credentials
    3. Follow the authorization steps
    4. Sign into Google with user2@gmail.com on your browser
    5. Run gcutil auth --credentials_file=$HOME/.user2_credentials
    6. Follow the authorization steps
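
    Once both credentials files exist, a sketch of switching between accounts is to pass the matching file to each command (the project IDs are placeholders for this illustration):

    gcutil listinstances --credentials_file=$HOME/.user1_credentials --project=<user1-project-id>
    gcutil listinstances --credentials_file=$HOME/.user2_credentials --project=<user2-project-id>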

    Isolating Your Scripts from Future Changes

    If you use applications or scripts that depend on a certain API version, image, or disk location, you may find that your scripts break from time to time as Google Compute Engine releases a new API version, releases a new default image, or changes the disk path. If you don't always have time to resolve these breaking changes as they happen, here are some tips on how to isolate your scripts and applications from changes until you have time to resolve the conflicts:

    • Always reference your scratch and persistent disks using their alias path

      Always reference your scratch and persistent disks using the disk alias path rather than linking to them using /dev/vd*. The /dev/vd* path can change at any time and is not a reliable path to use. Use the disk alias path, which always points to the correct location of the disk:

      • For scratch disks, use /dev/disk/by-id/google-ephemeral-disk-*
      • For persistent disks, use /dev/disk/by-id/google-*

      For more information, see Formatting and Attaching Additional Scratch Disk and Attaching a Persistent Disk.

    • Hard code an API version or image name in your script:
      • If your scripts or applications depend on a particular API version

        Consider hard coding the API version number using the gcutil --service_version=<api-version-number> flag. If you don’t specify an API version number, Google Compute Engine automatically uses the most current version. As versions change, there may be changes that break your script. To help with this, hard code the API version number so that your script continues to run until you have time to address any conflicts. Deprecated APIs are supported for 3 months from the deprecation date, providing a relatively large time frame to convert your scripts.

      • If your scripts or applications depend on a particular image

        Consider hard coding the image name into your script. The default image does change from time to time, and your script may not work or run correctly if it relies on certain capabilities of an older image that are not available in newer images. To prevent your script from breaking until you have time to resolve conflicts, you can hard code the image name and change it when you have updated your script to use the new images. In gcutil, you can specify an image using the --image=<image-name> flag; see the example after this list.
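
    For example, a sketch that pins both the API version and the image in a single command (the version and image name shown are illustrative and will age):

    gcutil --project=<project-id> --service_version=v1beta16 addinstance my-pinned-instance \
           --zone=<zone-name> \
           --image=projects/debian-cloud/global/images/debian-7-wheezy-v20130507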

    Back to top
