Google Compute Engine

Disks

A Persistent Disk resource provides disk space for your instances and contains the root filesystem that your instance boots from. You can create additional persistent disks to store data. A single persistent disk can be used across multiple instances, and an instance can attach multiple persistent disks. A persistent disk must exist before a virtual machine instance can use it.

A persistent disk's lifespan is independent of an instance. Disks are bound to a zone rather than to an instance. If your instance suffers a failure, or is moved out of the way of a maintenance event, all data on a persistent disk is preserved. Persistent disks can only be used by instances residing in the same zone. To use a non-root persistent disk for the first time, you must format and mount the disk. If you are attaching an already formatted persistent disk to a new instance, you only need to mount it; no formatting is required. If you are attaching a root persistent disk, you do not need to format or mount it, as Google Compute Engine does that for you.

Instances that use persistent disks can be live migrated out of the way of scheduled maintenance or impending failures of underlying infrastructure. For more information about scheduled maintenance and instance migration, see Scheduled Maintenance.

Persistent disks are per-zone resources.

Useful gcutil commands:

  • gcutil push to copy data from your local computer to an instance.
  • gcutil pull to copy data from an instance to your local computer.
  • gcutil listdisks to list persistent disks.
  • gcutil getdisk to get information about a specific persistent disk.
  • gcutil adddisk to add a new persistent disk to the project.
  • gcutil deletedisk to remove a persistent disk.


Disk Encryption

Google Compute Engine protects the confidentiality of persistent disks by encrypting them with AES-128-CBC; this encryption is applied before the data leaves the virtual machine monitor and reaches the disk. We also protect the integrity of persistent disks via an HMAC scheme.

Google Compute Engine generates and tightly controls access to a unique encryption key for each disk. Encryption is always enabled and is transparent to Google Compute Engine users.

Persistent Disk Performance

Persistent disk performance depends on the size of the volume. Larger volumes can achieve higher I/O levels than smaller volumes. There are no separate I/O charges as the cost of the I/O capability is included in the price of the space. Persistent disk performance is determined as follows:

  • Input/output operations per second (IOPS) performance caps grow linearly with the size of the persistent disk volume all the way to the maximum size of 10TB.
  • Throughput caps also grow linearly up to the maximum bandwidth for the virtual machine.

This model is a more granular version of what you would see with a RAID set: the more disks in a RAID set, the more I/O it can perform, and the finer a RAID set is carved up, the less I/O there is per partition. However, instead of growing volumes in increments of entire disks, persistent disk gives Compute Engine customers granularity at the GB level for their volumes.

The pricing and performance model provides three main benefits:

  • Operational simplicity

    In the previous persistent disk model (before Compute Engine's general availability announcement in December 2013), to increase I/O to a virtual machine, customers needed to create multiple small volumes and then stripe them together inside the virtual machine. This created unnecessary complexity at volume creation time and throughout the volume’s lifetime because it required complicated management of snapshots. Under the covers, persistent disk stripes data across a very large number of physical drives, making it redundant for users to also stripe data across separate disk volumes. In this current model, a single 1TB volume performs the same as 10 x 100GB volumes striped together.

  • Predictable pricing

    Volumes are priced only on a per GB basis. This price pays for both the volume’s space and all the I/O that the volume is capable of. Customers’ bills do not vary with usage of the volume.

  • Predictable performance

    This model allows more predictable performance than other possible models for HDD-based storage while still keeping the price very low.

Volume I/O caps distinguish between read and write I/O and between IOPS and bandwidth.

          Maximum sustained IOPS / TB            Maximum sustained throughput / TB   Maximum sustained throughput / VM
          (scales linearly up to 10 TB)
Read      300 IOPS                               120 MB/s                            180 MB/s
Write     1500 IOPS                              90 MB/s                             120 MB/s

Please note the following points when considering the information in the chart:

  • The caps in the chart are for the maximum sustained IO.

    To better serve the many cases where I/O spikes, Compute Engine allows virtual machines to save up I/O capability and burst over the numbers listed here. In this way, smaller volumes can be used for use cases where I/O is typically low but periodically bursts well above the average. For example, boot volumes tend to be small and infrequently accessed, but sometimes need to perform heavy I/O for tasks like booting and package installation.

  • Performance depends on I/O pattern and volume size.

    IOPS and throughput caps have per TB values. These numbers need to be multiplied by a volume’s size to determine the cap for that volume. There is a throughput cap per virtual machine that comes from the virtual machine itself, not the volume. Observed throughput caps for a volume will be the lower of the volume’s cap and the virtual machine cap.

  • Larger virtual machines tend to have higher performance levels than smaller virtual machines.

    Generally, the performance levels for virtual machines are as follows:

    • 4 and 8 core virtual machines will reach the top performance levels listed in the chart above.
    • 1 and 2 core virtual machines perform at lower levels than the 4 and 8 core virtual machines.
    • Shared core virtual machines tend to have low I/O capability and should be used only where high I/O is not required.

As a concrete example of how to use this chart, a 500 GB (0.5 TB) volume can sustain up to 150 small random reads per second, 750 small random writes per second, 60 MB/s of streaming reads, and 45 MB/s of streaming writes.
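To make the arithmetic explicit, the following sketch (not part of any Google tooling; it simply encodes the per-TB and per-VM values from the chart above) computes a volume's sustained caps:

# Per-TB caps taken from the chart above; per-VM throughput caps in MB/s.
READ_IOPS_PER_TB, WRITE_IOPS_PER_TB = 300, 1500
READ_MBPS_PER_TB, WRITE_MBPS_PER_TB = 120, 90
READ_MBPS_PER_VM, WRITE_MBPS_PER_VM = 180, 120

def volume_caps(size_gb):
    # The chart example treats 500 GB as 0.5 TB, so divide by 1000.
    size_tb = size_gb / 1000.0
    return {
        'read_iops': size_tb * READ_IOPS_PER_TB,
        'write_iops': size_tb * WRITE_IOPS_PER_TB,
        # Observed throughput is the lower of the volume cap and the VM cap.
        'read_mbps': min(size_tb * READ_MBPS_PER_TB, READ_MBPS_PER_VM),
        'write_mbps': min(size_tb * WRITE_MBPS_PER_TB, WRITE_MBPS_PER_VM),
    }

print volume_caps(500)  # 150 read IOPS, 750 write IOPS, 60 MB/s reads, 45 MB/s writes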

To determine what size of volume is required to have the same optimal performance as a typical 7200 RPM SATA drive, you must first identify the I/O pattern of the volume. Use the chart below to determine the best volume size for a specific I/O pattern.

I/O pattern              Size of volume to approximate a typical 7200 RPM SATA drive
Small random reads       250 GB
Small random writes      50 GB
Streaming large reads    1000 GB
Streaming large writes   1333 GB

Persistent disk volumes can perform much better than physical hard drives: you can match the speed of a single off-the-shelf drive with a volume that is much smaller than the drive you would otherwise have to buy.

Caution: Not all combinations of virtual machine and application can reach the performance numbers in the charts above. For example, the default Linux images provided by Compute Engine are tuned for random I/O and transactional storage and will easily reach the IOPS limits. However, these images are not well tuned for sequential I/O, so some common operations, such as large file copies, can be much slower than the performance caps suggest.

As a workaround, the easiest way to tune for sequential I/O and speed up operations like cp is to increase the size of the operating system readahead cache. To do that, add the following command to the /etc/rc.local file on your virtual machine instance:

/sbin/blockdev --setra 16384 <drive-name>

where <drive-name> is the drive name inside the virtual machine, such as /dev/sda.

Add this selectively, only to virtual machines that regularly perform sequential I/O operations, because it has adverse effects on transactional workloads. You can find more details about performance tuning in the Compute Engine Disks: Price, Performance, and Persistence technical article, but the short version is: do nothing for random I/O, and run the command above for sequential I/O.

Disk Interface

By default, Google Compute Engine uses a SCSI interface for attaching persistent disks. Images provided on or after 20121106 have virtio SCSI enabled by default. Images using Google-provided kernels older than 20121106 only support a virtio block interface. If you are currently using images that have a block interface, consider switching to a newer image that uses SCSI.

If you are using the latest Google images, they should already be set to use SCSI.

Creating a New Persistent Disk

Before setting up a persistent disk, keep in mind the following restrictions:

  • A persistent disk can only be used by one instance in a read-write capacity.

    While you can attach a persistent disk to multiple instances, the persistent disk can only be accessed in read-only mode when it is being used by more than one instance. If the persistent disk is attached to just one instance, it can be used by that instance with read-write capacity.

  • Generally, persistent disks are not mounted or formatted when they are first created and attached.

    Root persistent disks are mounted on instance startup, but for additional persistent disks that are not used for booting an instance, you must format and mount the disk explicitly the first time you use it. After the initial formatting, you do not need to format the disk again (unless you would like to). If your instance fails or if you reboot it, you need to remount the disk. You can remount persistent disks automatically by adding an entry to the /etc/fstab file.
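
    For example, a minimal sketch of an /etc/fstab entry might look like the following (the google-pd0 device alias, ext4 filesystem, and /mnt/pd0 mount point are illustrative assumptions; the nofail option lets the instance boot even if the disk is absent):

    /dev/disk/by-id/google-pd0   /mnt/pd0   ext4   defaults,nofail   0   2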

Every region has a quota of the total persistent disk space that you can request. Call gcutil getregion to see your quotas for that region:

$ gcutil --project=myproject getregion us-central1
+--------------------+-------------------------------------------------+
| name               | us-central1                                     |
| description        | Region for zones in us-central1-a               |
| creation-time      | 2013-09-03T12:05:02.515-07:00                   |
| status             | UP                                              |
| zones              | zones/us-central1-a,zones/us-central1-b         |
| deprecation        |                                                 |
| replacement        |                                                 |
| usage              |                                                 |
|   cpus             | 0.0/24.0                                        |
|   disks-total-gb   | 0.0/5120.0                                      |
|   static-addresses | 0.0/7.0                                         |
|   in-use-addresses | 0.0/23.0                                        |
+--------------------+-------------------------------------------------+

Persistent disk size guidelines

To decide how big to make your persistent disk, determine what performance you require and then use the persistent disk performance chart to choose the right persistent disk size for your needs.

If you aren't sure what your performance or I/O needs are, we recommend profiling your application and learning your I/O patterns so you can size your disks correctly and optimize the cost of your volumes. To get started, the following table provides some general recommendations based on your current setup:

If you are currently using...                                        With the following I/O pattern   Recommended persistent disk size                  Monthly cost
A single SATA hard drive in a physical machine                       Small transactional I/Os         250GB                                             $10
A single SATA hard drive in a physical machine                       Streaming I/Os                   1500GB                                            $60
A Compute Engine virtual machine mounting a persistent disk volume   Small transactional I/Os         500GB if mostly writes, 1000GB if mostly reads    $20 for 500GB, $40 for 1000GB
A Compute Engine virtual machine mounting a persistent disk volume   Streaming I/Os                   1500GB                                            $60

Create a disk

To create a new persistent disk in your project, call gcutil adddisk with the following syntax:

gcutil --project=<project-id> adddisk <disk-name> [--size_gb=<size> --zone=<zone-name> \
--source_snapshot=<snapshot-name> --source_image=<image-name>]

Important flags and parameters:

‑‑project=<project‑id>
[Required] The ID of the project where this persistent disk should live.
<disk‑name>
[Required] The name for the persistent disk, when managing it as a Google Compute Engine resource using gcutil. The name must start with a lowercase letter, followed by 1-62 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
‑‑size_gb=<size>
[Optional] When specified alone, creates an empty persistent disk of the specified size, in GB. This should be an integer value, up to the remaining disk size quota for the project. You can also specify this field alongside the ‑‑source_image or ‑‑source_snapshot parameters, which creates a disk from the provided image or snapshot at the size given by ‑‑size_gb. In that case, ‑‑size_gb must be equal to or larger than the size of the image (10GB) or the size of the snapshot.

Any root persistent disk created from a source image or source snapshot with an explicit ‑‑size_gb larger than the source needs to be repartitioned from within an instance before the additional space can be used. For empty disks, you can mount the entire disk to an instance without creating a partition. For more information about partitioning a persistent disk, see Repartitioning a Root Persistent Disk. For information about formatting and mounting non-root persistent disks, see Formatting Disks.

If you are creating a non-root persistent disk from a snapshot that is larger than the original snapshot size, you will need to follow the instructions to restore your snapshot to a larger size.

‑‑zone=<zone‑name>
[Optional] The zone where this persistent disk should reside. If you don't specify this flag, gcutil prompts you to select a zone from a list.
‑‑source_snapshot=<snapshot‑name>
[Optional] The persistent disk snapshot from which to create the persistent disk. You can also use this alongside ‑‑size_gb to explicitly set the size of the persistent disk. ‑‑size_gb must be larger than or equal to the size of a snapshot.
‑‑source_image=<image‑name>
[Optional] The image to apply to this persistent disk. This option creates a root persistent disk. You can also specify ‑‑size_gb to explicitly choose the size of the root persistent disk, or omit the flag to create a root persistent disk with enough space to store your root filesystem files. If you choose to specify ‑‑size_gb, the value must be greater than or equal to the size of an image, which is 10GB.

Note: Since v1, you can no longer specify an image with a Google-provided kernel for your sourceImage. For more information, see the v1 transition guide.
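
For example, the following command creates an empty 500GB persistent disk (the project and disk names here are placeholders):

gcutil --project=myproject adddisk mydatadisk --size_gb=500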

When you run the gcutil adddisk command above, Google Compute Engine gives you a list of zones to choose from, indicating where your persistent disk should live. If you plan to attach this persistent disk to an instance, the persistent disk must be in the same zone as the instance that uses it.

You can check on the status of the disk creation process by running gcutil getdisk <disk‑name>. Your disk can have one of the following statuses:

  • CREATING - This disk is in the process of being created.
  • FAILED - This disk was not created successfully.
  • READY - This disk was created successfully and is ready to be used.

Once the disk status is READY, you can use your new persistent disk by attaching it to your instance as described in Attaching a Persistent Disk to an Instance.

Attaching a Persistent Disk to an Instance

After you have created your persistent disk, you must attach it to your instance before you can use it. You can attach a persistent disk in two ways: during instance creation, or to an instance that is already running. Both methods are described below.

To use a persistent disk with an instance, the persistent disk must live in the same zone as your desired instance. For example, if you want to create an instance in zone us-central1-a and you want to attach a persistent disk to the instance, the persistent disk must also reside in us-central1-a.

Persistent Disk Size Limits

Before you attach a persistent disk to an instance, note that your persistent disks are subjected to certain size and quantity restrictions. Standard, high memory, and high CPU machine types can attach up to 16 persistent disks. Shared-core machine types can attach up to 4 persistent disks.

Additionally, machine types have a restriction on the total maximum amount of persistent disk space that can be mounted at a given time. If you reach the total maximum size for that instance, you won't be able to attach more persistent disks until you unmount some persistent disks from your instance. By default, you can mount up to 10TB of persistent disk space for standard, high memory, and high CPU machine types, or you can mount up to 3TB for shared-core machine types.

For example, if you are using an n1-standard-1 machine type, you can choose to attach up to 16 persistent disks whose combined size is equal to or less than 10TB or you can attach one 10TB disk. Once you've reached that 10TB limit, you cannot mount additional persistent disks until you unmount some space.

Note: The default limit of aggregate persistent disk space for a region is 5TB, which is significantly less than the 10TB per-instance limit mentioned above. If you want to attach 10TB or more of persistent disk space to your instances, you also need to request a quota increase for the total amount of aggregate disk space available to your project. For more information, see resource quotas.

To find out an instance's machine type, run gcutil getinstance <instance-name> ‑‑project=<project-id>.

Attaching a Disk During Instance Creation

To attach a persistent disk to an instance during instance creation, follow the steps below. Note that if you are attaching a root persistent disk that is larger than the original source (such as the image or snapshot), you need to repartition the persistent disk before you can use the extra space.

If you attach a data persistent disk that was originally created using a snapshot, and you created the data disk to be larger than the original size of the snapshot, you will need to resize the filesystem to the full size of the disk. For more information, see Restoring a Snapshot to a Larger Size.

  1. Create the persistent disk by calling gcutil adddisk <disk‑name> ‑‑project=<project‑id>
  2. Create the instance where you would like to attach the disk, and assign the disk using the ‑‑disk=<disk‑name> flag.

    Here is the abbreviated syntax to attach a persistent disk to an instance:

    gcutil --project=<project-id> addinstance --disk=<disk-name>[,deviceName=<alias-name>,mode=<mode>,boot] <instance-name> \
    [--auto_delete_boot_disk]

    Important Flags and Parameters:

    ‑‑disk=<disk‑name>
    [Required] The disk resource name used when you created the disk. If you don't specify <alias‑name>, the disk is exposed to the instance as a block device at /dev/disk/by‑id/google‑<disk‑name>. Learn more about the difference between disk names and alias names.
    deviceName=<alias‑name>
    [Optional] An optional alias name for the disk device. If specified, the disk will be exposed to the instance as a block device at /dev/disk/by‑id/google‑<alias‑name>. Learn more about the difference between disk names and alias names.
    mode=<mode>,boot
    [Optional] The mode in which to attach this persistent disk and, optionally, the boot keyword, which attaches the disk as the root persistent disk.

    Valid mode values are:

    • read_write or rw: Attach this disk in read-write mode. This is the default behavior. If you are attaching a root persistent disk with a Google-provided image, you must attach the disk in read-write mode. If a persistent disk is already attached to an instance in read-write mode and you try to attach this disk to another instance, this command will fail with the following error:
      error   | RESOURCE_IN_USE
      message | The disk resource '<disk‑name>' is already being used
      in read-write mode
    • read_only or ro: Attach this disk in read-only capacity. Specify this if you're attaching this disk to multiple instances.
    <instance‑name>
    [Required] The name of the instance you are creating and attaching the persistent disk to.
    ‑‑project=<project‑id>
    [Required] The name of the project where this instance should live.
    ‑‑[no]auto_delete_boot_disk
    [Optional] If this is a root persistent disk, determines if the root persistent disk should be deleted automatically when the instance is deleted. The default is false.
    Disk Names vs. Alias Names

    When you attach a persistent disk to an instance, you have the option of setting an alias name for the disk. Aliases are useful for giving functional names to persistent disks that otherwise have unrecognizable names. For example, if you have a persistent disk named pd20120326, you can attach it under an alias name such as "databaselogs," which says more about the actual disk than the original disk name. Note that this doesn't change the name of the persistent disk but simply exposes it to the instance under another name:

    gcutil --project=my-project addinstance --disk=pd20120326,deviceName=databaselogs mynewinstance 

    Setting an alias name is optional, so you can choose to simply use the disk name if you would like. If you do set an alias name, note that your persistent disk is exposed to the instance at:

    /dev/disk/by-id/google-<alias-name>

    If you do not specify an alias name, your disk is exposed at:

    /dev/disk/by-id/google-<disk-name>

    To attach multiple disks to an instance, you can specify multiple ‑‑disk flags. For instance:

    gcutil addinstance --disk=disk1 --disk=disk2 my-multi-disk-instance --project=my-project
    
  3. ssh into your instance

    You can do this using gcutil ‑‑project=<project‑id> ssh <instance‑name>.

  4. Create your disk mount point, if it doesn't already exist

    For example, if you want to mount your disk at /mnt/pd0, create that directory:

    me@my-instance:~$ sudo mkdir -p /mnt/pd0
  5. Determine the /dev/* location of your persistent disk by running:
    me@my-instance:~$ ls -l /dev/disk/by-id/google-*
    lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
    lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-pd0 -> ../../sdb #pd0 is mounted at /dev/sdb
  6. Format your persistent disk:
    me@my-instance:~$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/<disk-or-alias-name> <mount-point>

    Specify the device path of the disk, either the /dev/disk/by-id/google-<disk-or-alias-name> symlink or the device it resolves to, as shown in the previous step. In this example, the disk resolves to /dev/sdb:

    me@my-instance:~$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /mnt/pd0

    In this example, you mount the persistent disk /dev/sdb at /mnt/pd0, but you can choose to mount your persistent disk anywhere, e.g. /home. If you have multiple disks, you can specify a different mount point for each disk.

That's it! You have mounted your persistent disk and can start using it immediately. To demonstrate this process from start to finish, the following example attaches a previously created persistent disk named pd1 to an instance named diskinstance, formats it, and mounts it:

  1. Create an instance and attach the pd1 persistent disk.
    $ gcutil --project=my-project addinstance --disk=pd1 diskinstance  --auto_delete_boot_disk
    ...select a zone, image, and machine type for your instance...
    INFO: Waiting for insert of diskinstance. Sleeping for 3s.
    INFO: Waiting for insert of diskinstance. Sleeping for 3s.
    
    Table of resources:
    
    +--------------+---------------+----------------------------------------------------------------+---------+----------------+--------------+----------------------+-------------+------------------+----------------------+---------+----------------+
    |     name     |  machine-type |                              image                             | network |   network-ip   | external-ip  |       disks          |    zone     | tags-fingerprint | metadata-fingerprint | status  | status-message |
    +--------------+---------------+----------------------------------------------------------------+---------+----------------+--------------+----------------------+-------------+------------------+----------------------+---------+----------------+
    | diskinstance | n1-standard-1 | projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD  | default | 00.000.000.000 | 000.000.0.00 |   <zone>/disks/pd1   |    <zone>   | 42WmSpB8rSM=     | 42WmSpB8rSM=         | RUNNING |
    +--------------+---------------+----------------------------------------------------------------+---------+----------------+--------------+----------------------+-------------+------------------+----------------------+---------+----------------+
    
    Table of operations:
    
    +------------------------------------------------+--------+----------------+---------------------------------+----------------+-------+---------+
    |                      name                      | status | status-message |             target              | operation-type | error | warning |
    +------------------------------------------------+--------+----------------+---------------------------------+----------------+-------+---------+
    | operation-1358529117225-4d3933570d191-0fc3d1da | DONE   |                |  <zone>/instances/diskinstance  |     insert     |       |         |
    +------------------------------------------------+--------+----------------+---------------------------------+----------------+-------+---------+
    
  2. ssh into the instance.
    $ gcutil --project=my-project ssh diskinstance
    ...
    The programs included with this system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    This software comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    
    
  3. Change to root:
    user@diskinstance:~$ sudo -s
  4. Create a directory to mount the new persistent disk.
    root@diskinstance:~# mkdir /mnt/pd0
  5. Determine where pd1 is currently mounted by getting a list of available persistent disks on the instance.
    root@diskinstance:~$ ls -l /dev/disk/by-id/google-*
    lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
    lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-pd1 -> ../../sdb #pd1 is mounted at /dev/sdb
  6. Run the safe_format_and_mount tool.
    root@diskinstance:~# /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /mnt/pd0
    mke2fs 1.41.11 (14-Mar-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=8 blocks, Stripe width=0 blocks
    655360 inodes, 2621440 blocks
    131072 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2684354560
    80 block groups
    ...
  7. Give all users write access to the drive.
    root@diskinstance:~# chmod a+w /mnt/pd0
  8. Create a new file called hello.txt on the new mounted persistent disk.
    root:~# echo 'Hello, World!' > /mnt/pd0/hello.txt
  9. Print out the contents of the file to demonstrate that the new file is accessible and lives on the persistent disk.
    root@diskinstance:~# cat /mnt/pd0/hello.txt
    Hello, World!

Attaching a Disk to a Running Instance

You can attach an existing persistent disk to a running instance using the attachdisk command in gcutil or attachDisk in the API. Persistent disks can be attached to multiple instances at the same time in read-only mode (with the exception of root persistent disks, which should only be attached to one instance at a time). If you've already attached a disk to an instance in read-write mode, that disk cannot be attached to any other instance. You also cannot attach the same disk to the same instance multiple times, even in read-only mode.

Note: If you attach a data persistent disk that was originally created using a snapshot, and you created the disk to be larger than the original size of the snapshot, you will need to resize the filesystem to the full size of the disk. For more information, see Restoring a Snapshot to a Larger Size.

To attach a persistent disk to an existing instance in gcutil:

gcutil --project=<project-id> attachdisk --zone=<zone> --disk=<disk-name>,[deviceName=<alias-name>,mode=<mode>] <instance-name>

Important Flags and Parameters:

‑‑project=<project‑id>
[Required] The project ID for this request.
‑‑zone=<zone>
[Optional] The zone where the instance and disk reside. Although this is optional, it is preferable that you specify the zone with your request so that gcutil won't have to use additional API calls to find out the zone of your instance and disk.
‑‑disk=<disk‑name>
[Required] The disk resource name used when you created the disk. If you don't specify <alias‑name>, the disk is exposed to the instance as a block device at /dev/disk/by‑id/google‑<disk‑name>. Learn more about the difference between disk names and alias names.

You can only attach one persistent disk per attachdisk call. If you specify multiple ‑‑disk flags, only the disk named by the last ‑‑disk argument is attached to the instance.

deviceName=<alias‑name>
[Optional] An optional alias name for the disk device. If specified, the disk will be exposed to the instance as a block device at /dev/disk/by‑id/google‑<alias‑name>. If not specified, the disk will be exposed to the instance using the disk‑name. Learn more about the difference between disk names and alias names.
mode=<mode>
[Optional] The mode for which you want to attach this persistent disk. Valid mode values are:
  • read_write or rw: Attach this disk in read-write capacity. This is the default behavior. If a persistent disk is already attached to an instance in read-write mode and you try to attach this disk to another instance, this command will fail with the following error:
    error   | RESOURCE_IN_USE
    message | The disk resource '<disk‑name>' is already being used
    in read-write mode
  • read_only or ro: Attach this disk in read-only capacity. Specify this if you're attaching this disk to multiple instances.
<instance‑name>
[Required] The name of the instance to attach this disk to.

To attach a persistent disk to a running instance through the API, perform a POST request to the following URI:

https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>/attachDisk

Your request body must contain the following:

bodyContent = {
    "type": "persistent",
    "mode": "<mode>",
    "source": "https://www.googleapis.com/compute/v1/projects/<project-id>/zone/<zone>/disks/<disk>"
  }
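
In the client libraries, the same call can be made through instances().attachDisk. The following is a minimal sketch in the style of the listing examples later in this document (the gce_service and auth_http objects, PROJECT_ID, and the placeholder values are assumptions):

def attachDataDisk(auth_http, gce_service):
  # Attach an existing persistent disk to a running instance.
  bodyContent = {
      "type": "persistent",
      "mode": "<mode>",
      "source": "https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks/<disk>"
  }
  request = gce_service.instances().attachDisk(project=PROJECT_ID,
      zone='<zone>', instance='<instance-name>', body=bodyContent)
  response = request.execute(auth_http)

  print response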

For more information, see the attachDisk reference documentation.

Detaching a Persistent Disk

You can detach a persistent disk from a running instance by using the gcutil detachdisk command.

To detach a disk using gcutil:

gcutil --project=<project-id> detachdisk --zone=<zone> --device_name=<disk-name> <instance-name>

Important Flags and Parameters:

‑‑project=<project‑id>
[Required] The project ID that this instance belongs to.
‑‑zone=<zone>
[Optional] The zone where the instance and disk reside. Although this is optional, it is preferable that you specify the zone with your request so that gcutil won't have to use additional API calls to find out the zone of your instance and disk.
‑‑device_name=<disk‑name>
[Required] The name of the disk to detach. If you attached the disk using an alias name, specify the alias name instead of the disk name.
<instance‑name>
[Required] The instance from which you want to detach the disk.

To detach a disk in the API, perform an empty POST request to the following URL:

https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>/detachDisk?deviceName=<disk-name>
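
In the client libraries, this corresponds to an instances().detachDisk request; here is a minimal sketch in the style of the listing examples later in this document (the gce_service and auth_http objects, PROJECT_ID, and the placeholder values are assumptions):

def detachDataDisk(auth_http, gce_service):
  # Detach the disk that was attached under the given device name.
  request = gce_service.instances().detachDisk(project=PROJECT_ID,
      zone='<zone>', instance='<instance-name>', deviceName='<disk-name>')
  response = request.execute(auth_http)

  print response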

For more information, see the detachDisk reference documentation.

Root Persistent Disk

Each instance has an associated root persistent disk where the filesystem for the instance is stored. It is possible to create and attach a root persistent disk to an instance during instance creation or by creating the disk separately and attaching it to a new instance. All persistent disk features and limitations apply to root persistent disks.

Create a Root Persistent Disk During Instance Creation

When you start an instance without specifying a ‑‑disk flag, gcutil automatically creates a root persistent disk for you using the image that you provided in your request. The new root persistent disk is named after the instance by default. For example, if you create an instance using the following command:

user@local~:$ gcutil --project=my-project addinstance awesomeinstance --image=debian-7 --auto_delete_boot_disk

gcutil automatically creates a persistent boot disk named awesomeinstance using the latest Debian 7 image, and boots your instance off the new persistent disk. The ‑‑auto_delete_boot_disk flag also indicates that Compute Engine should automatically delete the root persistent disk when the instance is deleted. You can also change the auto-delete state later on.

You can also create multiple instances and root persistent disks by providing more than one instance name:

gcutil addinstance --project=<project-id> <instance-name-1> ... <instance-name-n> --auto_delete_boot_disk

Create a Stand-alone Root Persistent Disk

You can create a stand-alone root persistent disk outside of instance creation and attach it to an instance afterwards. In gcutil, this is done with the standard gcutil adddisk command. You can create a root persistent disk from an image or a snapshot, using the ‑‑source_image or ‑‑source_snapshot flags.

gcutil adddisk --project=<project-id> <disk-name> [--source_image=<image-name> \
--source_snapshot=<snapshot-name> --size_gb=<size>]

Important Flags and Parameters:

‑‑size_gb=<size>
[Optional] The size of the disk, in GB. This should be an integer value, up to the remaining disk size quota for the project. Note that any root persistent disk created from an image or a snapshot with a size larger than the source needs to be repartitioned from within an instance before the additional space can be used. For non-root persistent disks, you can mount the entire disk to an instance without creating a partition.

For more information, see documentation about how to repartition a root persistent disk. For information about formatting and mounting non-root persistent disks, see Formatting Disks.

<disk‑name>
[Required] The name for the persistent disk, when managing it as a Google Compute Engine resource using gcutil. The name must start with a lowercase letter, followed by 1-62 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
‑‑project=<project‑id>
[Required] The ID of the project where this persistent disk should live.
‑‑source_image=<image‑name>
[Optional] The image to apply to this persistent disk. This option creates a persistent disk that can be used as a root persistent disk. You can also specify ‑‑size_gb to explicitly choose the size of the root persistent disk, or omit the flag to create a root persistent disk with enough space to store your root filesystem files. If you choose to specify ‑‑size_gb, the value must be greater than or equal to the size of the image, which is 10GB.
‑‑source_snapshot=<snapshot‑name>
[Optional] The snapshot to apply to this disk. Snapshots can be used to create root persistent disks in addition to regular data disks. You can also specify ‑‑size_gb to explicitly choose the size of the root persistent disk, or omit the flag to create a root persistent disk that is the same size as the snapshot. If you choose to specify ‑‑size_gb, the value must be greater than or equal to the size of the snapshot.

In the API, create a new persistent disk with the sourceImage query parameter in the following URI:

https://www.googleapis.com/compute/v1/projects/<project>/zones/<zone>/disks?sourceImage=<source-image>
sourceImage=<source‑image>
[Required] The URL-encoded, fully-qualified URI of the source image to apply to this persistent disk.
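
In the client libraries, the equivalent is a disks().insert request with the sourceImage parameter; here is a minimal sketch in the style of the listing examples later in this document (the gce_service and auth_http objects, PROJECT_ID, and the placeholder values are assumptions):

def addRootDisk(auth_http, gce_service):
  # Create a root persistent disk from a source image in the given zone.
  request = gce_service.disks().insert(project=PROJECT_ID,
      zone='<zone>',
      sourceImage='<source-image>',
      body={'name': '<disk-name>'})
  response = request.execute(auth_http)

  print response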

Using an Existing Root Persistent Disk

To start an instance with an existing root persistent disk in gcutil, provide the boot parameter when you attach the disk. When you create a root persistent disk using a Google-provided image, you must attach it to your instance in read-write mode. If you try to attach it in read-only mode, your instance may be created successfully, but it won't boot up correctly.

In the API, insert an instance with a populated boot field:

{
  ...
  "disks": [{
    "deviceName": "<disk-name>",
    "source": "<disk-uri>",
    "boot": true,
    ...
  }]
}

When you are using the API to specify a root persistent disk:

  • You can only specify the boot field on one disk. You may attach many persistent disks but only one can be the root persistent disk.
  • You must attach the root persistent disk as the first disk for that instance.
  • When the source field is specified, you cannot specify the initializeParams field, as they conflict with each other. Providing a source indicates that the root persistent disk exists already, whereas specifying initializeParams indicates that Compute Engine should create the root persistent disk.

Repartitioning a Root Persistent Disk

By default, when you create a root persistent disk with a source image or a source snapshot, your disk is automatically partitioned with enough space for the root filesystem. It is possible to create a root persistent disk with more disk space using the sizeGb field but the additional persistent disk space won't be recognized until you repartition your persistent disk. Follow these instructions to repartition a root persistent disk with additional disk space, using fdisk and resize2fs:

  1. If you haven't already, create your root persistent disk:
    user@local:~$ gcutil --project=<project-id> adddisk <disk-name> --source_image=<image> --size_gb=<size-larger-than-10gb>
  2. Start an instance using the root persistent disk.
    user@local:~$ gcutil --project=<project-id> addinstance <instance-name> --disk=<disk-name>,boot
  3. Check the size of your disk.

    Although you have specified a size larger than 10GB for your persistent disk, notice that only the 10GB of root disk space appears:

    user@mytestinstance:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           10G  641M  8.9G   7% /
    /dev/root        10G  641M  8.9G   7% /
    none            1.9G     0  1.9G   0% /dev
    tmpfs           377M  116K  377M   1% /run
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           753M     0  753M   0% /run/shm
  4. Run fdisk.
    user@mytestinstance:~$ sudo fdisk /dev/sda
    Note: If you are running a CentOS image, you need to turn off DOS compatibility and change the display units to sectors to complete this step. CentOS starts fdisk in a deprecated DOS-compatible mode, which can cause alignment problems on persistent disks. Enter the following commands at the prompt to turn off DOS-compatibility mode:
    Command (m for help): c
    DOS Compatibility flag is not set
    
    Command (m for help): u
    Changing display/entry units to sectors

    When prompted, enter p to print the current state of /dev/sda, which displays the actual size of your root persistent disk. For example, this root persistent disk has ~50GB of space:

    The device presents a logical sector size that is smaller than
    the physical sector size. Aligning to a physical sector (or optimal
    I/O) size boundary is recommended, or performance may be impacted.
    
    Command (m for help): p
    
    Disk /dev/sda: 53.7 GB, 53687091200 bytes
    4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x000d975a
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1            2048    20971519    10484736   83  Linux

    Make note of this device ID number for future steps. In this example, the device ID is 83.

  5. Next, enter d at the prompt to delete the existing partition on /dev/sda so that you can recreate it at the larger size. This won't delete any files on the system.
    Command (m for help): d
    Selected partition 1

    Enter p at the prompt to review and confirm that the original partition has been deleted (notice the empty lines after Device Boot where the partition used to be):

    Command (m for help): p
    
    Disk /dev/sda: 53.7 GB, 53687091200 bytes
    4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x000d975a
    
       Device Boot      Start         End      Blocks   Id  System
    
    
  6. Next, type n at the prompt to create a new partition. Select the default values for partition type, number, and the first sector when prompted:
    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1): 1
    First sector (2048-104857599, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599):
    Using default value 104857599

    Confirm that your partition was created:

    Command (m for help): p
    
    Disk /dev/sda: 53.7 GB, 53687091200 bytes
    4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x000d975a
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1            2048   104857599    52427776   83  Linux
  7. Check that your device ID is the same ID number that you made note of in step 4. For this example, the device ID matches the original ID of 83.
  8. Commit your changes by entering w at the prompt:
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    
    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table. The new table will be used at
    the next reboot or after you run partprobe(8) or kpartx(8)
    Syncing disks.
  9. Reboot your instance. This will close your current SSH connection. Wait a couple of minutes before opening another ssh connection.
    user@mytestinstance:~$ sudo reboot
  10. SSH back into your instance.
    user@local:~$ gcutil --project=<project-id> ssh <instance-name>
  11. Resize your filesystem to the full size of the partition:
    user@mytestinstance:~$ sudo resize2fs /dev/sda1
    resize2fs 1.42.5 (29-Jul-2012)
    Filesystem at /dev/sda1 is mounted on /; on-line resizing required
    old_desc_blocks = 1, new_desc_blocks = 4
    The filesystem on /dev/sda1 is now 13106944 blocks long.
  12. Verify that your filesystem is now the correct size.
    user@mytestinstance:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           50G  1.3G   47G   3% /
    /dev/root        50G  1.3G   47G   3% /
    none            1.9G     0  1.9G   0% /dev
    tmpfs           377M  116K  377M   1% /run
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           753M     0  753M   0% /run/shm
    

Persistent Disk Snapshots

Google Compute Engine offers the ability to take snapshots of your persistent disk and create new persistent disks from that snapshot. This can be useful for backing up data, recreating a persistent disk that might have been lost, or copying a persistent disk. Google Compute Engine provides differential snapshots, which allow for better performance and lower storage charges for users. Differential snapshots work in the following manner:

  1. The first successful snapshot of a persistent disk is a full snapshot that contains all the data on the persistent disk.
  2. The second snapshot only contains any new data or modified data since the first snapshot. Data that hasn't changed since snapshot 1 isn't included. Instead, snapshot 2 contains references to snapshot 1 for any unchanged data.
  3. Snapshot 3 contains any new or changed data since snapshot 2 but won't contain any unchanged data from snapshot 1 or 2. Instead, snapshot 3 contains references to blocks in snapshot 1 and snapshot 2 for any unchanged data.

This repeats for all subsequent snapshots of the persistent disk.

Note: Snapshots are always created based on the last successful snapshot taken. For example, if you start the creation process for snapshot A, and also start a creation process for snapshot B before snapshot A is completed, snapshot B won't be based on snapshot A, and instead will be a full snapshot without any references to A. If you create snapshot A and snapshot B, but snapshot B fails to create successfully or is corrupted, when you attempt to create snapshot C, snapshot C will be based on snapshot A rather than snapshot B, since A was the last successful snapshot taken.

The diagram below attempts to illustrate this process:

Diagram describing how to create a snapshot

Snapshots are a global resource. Because they are geo-replicated, they will survive maintenance windows. It is not possible to share a snapshot across projects. You can see a list of snapshots available to a project by running:

gcutil listsnapshots --project=<project-id>

To list information about a particular snapshot:

gcutil getsnapshot <snapshot-name> --project=<project-id>

Creating a Snapshot

Before you create a persistent disk snapshot, you should ensure that you are taking a snapshot that is consistent with the desired state of your persistent disk. If you take a snapshot of your persistent disk in an "unclean" state, it may force a disk check and possibly lead to data loss. To help with this, Google Compute Engine encourages you to make sure that your disk buffers are flushed before you take your snapshot. For example, if your operating system is writing data to the persistent disk, it is possible that your disk buffers are not yet cleared. Follow these instructions to clear your disk buffers:

Linux
  • Unmount the filesystem

    This is the safest, most reliable way to ensure your disk buffers are cleared. To do this:

    1. ssh into your instance.
    2. Run sudo umount <disk‑location>.
    3. Create your snapshot.
    4. Remount your persistent disk.
  • Alternatively, you can also sync your filesystem

    If unmounting your persistent disk is not an option, such as in scenarios where an application might be writing data to the disk, you can sync your filesystem to flush the disk buffers. To do this:

    1. ssh into your instance.
    2. Stop your applications from writing to your persistent disk.
    3. Run sudo sync.
    4. Create your snapshot.
Windows

Caution: Taking a snapshot of a persistent disk attached to a Windows instance requires that the instance be terminated. Make sure you have saved all your data before you continue with this process. If you will be using the snapshot to start multiple virtual machines, Compute Engine recommends that you sysprep the disk.

  1. Log onto your Windows instance.
  2. Make a copy of the following file:
    C:\Program Files\Google Compute Engine\sysprep\unattended.xml
  3. Edit the copied file to provide a password. Look for the following fields: AdministratorPassword and AutoLogon. Provide a generic password for both fields. This password is only used temporarily during the setup process and will be changed by the instance setup script to the password provided by the metadata server. This is necessary so that you can log into future virtual machine instances that use this image.
  4. Run the following command in a cmd window:
    gcesysprep -unattend unattended-copy.xml

Once you run the gcesysprep command, your Windows instance will terminate. Afterwards, you can take a snapshot of the root persistent disk.

Create your snapshot using the gcutil addsnapshot command:

gcutil --project=<project-id> addsnapshot <snapshot-name> --source_disk=<source-disk>

Important Flags and Parameters:

<snapshot‑name>
[Required] The name for this new snapshot.
‑‑zone=<zone>
[Optional] The zone of the persistent disk.
‑‑source_disk=<source‑disk>
[Required] The persistent disk from which to create the snapshot.
‑‑project=<project‑id>
[Required] The project ID of the source disk. The snapshot is created within the same project. For example, if the source disk belongs to a project named myfirstproject, the snapshot will also belong to myfirstproject.
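
For example, with placeholder disk and snapshot names:

gcutil --project=myproject addsnapshot my-data-snapshot --source_disk=my-data-disk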

gcutil waits until the operation returns a status of READY or FAILED, or reaches the maximum timeout, and then returns the last known details of the snapshot in JSON format.

Caution: If you create a snapshot from a persistent disk and the snapshot creation fails, you won't be able to delete the original persistent disk until you clean up the failed snapshot. This prevents accidental removal of source data when the snapshot did not successfully copy your persistent disk information.

Creating a New Persistent Disk from a Snapshot

After creating a persistent disk snapshot, you can apply data from that snapshot to new persistent disks. It is only possible to apply data from a snapshot when you first create a persistent disk. You cannot apply a snapshot to an existing persistent disk, or apply a snapshot to a persistent disk that belongs to a different project than the snapshot.

To apply data from a persistent disk snapshot, run the gcutil adddisk command with the ‑‑source_snapshot flag:

gcutil adddisk <disk-name> --project=<project-id> --source_snapshot=<snapshot-name> [--size_gb=<size>]

Important Flags and Parameters:

<disk‑name>
[Required] The name for the persistent disk. The name must start with a lowercase letter, followed by 1-62 lowercase letters, numbers, or hyphens, and cannot end with a hyphen.
‑‑project=<project‑id>
[Required] The ID of the project where this persistent disk should live.
‑‑source_snapshot=<snapshot‑name>
[Required] The persistent disk snapshot whose data should be applied to this disk.
‑‑size_gb=<size>
[Optional] The size of the persistent disk. This must be equal to or larger than the size of the snapshot. If you create a non-root persistent disk that is larger than the original size of the snapshot, you need to follow the instructions to restore a snapshot to a larger size in order to use the additional space.

If not specified, the persistent disk size will be the same size as the original disk from which the snapshot was created.
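
For example, the following command (with placeholder names) recreates a disk from a snapshot at the original disk's size:

gcutil adddisk my-restored-disk --project=myproject --source_snapshot=my-data-snapshot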

Restoring a Snapshot to a Larger Size

You can restore a non-root persistent disk snapshot to a larger size than the original snapshot but you must run some extra commands from within the instance for the additional space to be recognized by the instance. For example, if your original snapshot is 500GB, you can choose to restore it to a persistent disk that is 600GB or more. However, the extra 100GB won't be recognized by the instance until you mount and resize the filesystem.

The instructions that follow discuss how to mount and resize your persistent disk using resize2fs as an example. Depending on your operating system and filesystem type, you may need to use a different filesystem resizing tool. Please refer to your operating system documentation for more information.

Note: This only works for non-root persistent disk snapshots. If you are restoring a root persistent disk snapshot that is larger than the original snapshot size, you must follow the instructions to Repartition the root persistent disk for the extra space to be recognized by the instance.

  1. Create a new persistent disk from your non-root snapshot that is larger than the snapshot size.

    Provide the ‑‑size_gb flag to specify a larger persistent disk size. For example:

    me@local~:$ gcutil --project=exampleproject adddisk newdiskname --source_snapshot=my-data-disk-snapshot --size_gb=600
  2. Attach your persistent disk to an instance.
    me@local~:$ gcutil --project=exampleproject attachdisk my-instance-name --disk=newdiskname
  3. ssh into your instance.
    me@local~:$ gcutil --project=exampleproject ssh my-instance-name
  4. Determine the /dev/* location of your persistent disk by running:
    me@my-instance-name:~$ ls -l /dev/disk/by-id/google-*
    lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
    lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-newdiskname -> ../../sdb # newdiskname is located at /dev/sdb
  5. Mount your new persistent disk.

    Create a new mount point. For example, you can create a mount point called /mnt/pd1.

    user@my-instance-name:~$ sudo mkdir /mnt/pd1

    Mount your persistent disk:

    me@my-instance-name:~$ sudo mount /dev/sdb /mnt/pd1
  6. Resize your persistent disk using resize2fs.
    me@my-instance-name:~$ sudo resize2fs /dev/sdb
  7. Check that your persistent disk reflects the new size.
    me@my-instance-name:~$ df -h
    Filesystem                                              Size  Used Avail Use% Mounted on
    rootfs                                                  296G  671M  280G   1% /
    udev                                                     10M     0   10M   0% /dev
    tmpfs                                                   3.0G  112K  3.0G   1% /run
    /dev/disk/by-uuid/36fd30d4-ea87-419f-a6a4-a1a3cf290ff1  296G  671M  280G   1% /
    tmpfs                                                   5.0M     0  5.0M   0% /run/lock
    /dev/sdb                                                593G  198M  467G   1% /mnt/pd1 # The persistent disk is now ~600GB

Deleting a Snapshot

Google Compute Engine provides differential snapshots so that each snapshot only contains data that has changed since the previous snapshot. For unchanged data, snapshots use references to the data in previous snapshots. When you delete a snapshot, Google Compute Engine goes through the following procedures:

  1. The snapshot is immediately marked as DELETED in the system.
  2. If the snapshot has no dependent snapshots, it is deleted outright.
  3. If the snapshot has dependent snapshots:
    1. Any data that is required for restoring other snapshots will be moved into the next snapshot. The size of the next snapshot will increase.
    2. Any data that is not required for restoring other snapshots will be deleted. This lowers the total size of all your snapshots.
    3. The next snapshot will no longer reference the snapshot marked for deletion but will instead reference the existing snapshot before it.

The diagram below attempts to illustrate this process:

Diagram describing the process for deleting a snapshot

To delete a snapshot, run:

gcutil --project=<project-id> deletesnapshot <snapshot-name>

Important Flags and Parameters:

‑‑project=<project‑id>
[Required] The project ID for this request.
<snapshot‑name>
[Required] The name of the snapshot to delete.

Attaching Multiple Persistent Disks to One Instance

To attach more than one disk to an instance, run addinstance with multiple ‑‑disk flags, one for each disk to attach.

Attaching a Persistent Disk to Multiple Instances

It is possible to attach a persistent disk to more than one instance. However, if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode. It is not possible to attach the persistent disk to multiple instances in read-write mode.

If you attach a persistent disk in read-write mode and then try to attach the disk to subsequent instances, Google Compute Engine returns an error similar to the following:

error   | RESOURCE_IN_USE
message | The disk resource '<disk‑name>' is already being used
in read-write mode

To attach a persistent disk to an instance in read-only mode, review instructions for attaching a persistent disk and set the <mode> to read_only.
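
For example, the following commands (with placeholder project, zone, disk, and instance names) attach the same disk to two instances in read-only mode:

gcutil --project=myproject attachdisk --zone=us-central1-a --disk=mydatadisk,mode=read_only instance-1
gcutil --project=myproject attachdisk --zone=us-central1-a --disk=mydatadisk,mode=read_only instance-2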

Getting Persistent Disk Information

To see a list of persistent disks in the project:

gcutil --project=<project-id> listdisks

Important Flags and Parameters:

‑‑project=<project‑id>
[Required] The project ID for this request.

To see detailed information about a specific persistent disk:

gcutil --project=<project-id> getdisk <disk-name>

Important Flags and Parameters:

‑‑project=<project‑id>
[Required] The project ID for this request.
<disk‑name>
[Required] The disk for which you want to get more information.

By default, gcutil provides an aggregate listing of all your resources across all available zones. If you want a list of resources from just a single zone, provide the ‑‑zone flag in your request.

gcutil --project=<project-id> listdisks --zone=<zone>

Important Flags and Parameters:

‑‑project=<project‑id>
[Required] The project ID for this request.
‑‑zone=<zone>
[Required] The zone from which you want to list disks.

In the API, aggregate listings and per-zone listings use two different methods. To request an aggregate list, make a GET request to the resource's aggregatedList URI:

https://www.googleapis.com/compute/v1/projects/<project-id>/aggregated/disks

In the client libraries, make a request to the disks().aggregatedList function:

def listAllDisks(auth_http, gce_service):
  # Aggregated list: returns disks from every zone in the project.
  request = gce_service.disks().aggregatedList(project=PROJECT_ID)
  response = request.execute(http=auth_http)

  print response

To request a list of disks within a single zone, make a GET request to the following URI:

https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks

In the API client libraries, make a disks().list request:

def listDisks(auth_http, gce_service):
  # Per-zone list: returns only disks in the specified zone.
  request = gce_service.disks().list(project=PROJECT_ID,
    zone='<zone>')
  response = request.execute(http=auth_http)

  print response

Migrating a Persistent Disk to a Different Instance in the Same Zone

To migrate a persistent disk from one instance to another, detach the disk from the first instance and reattach it to the other instance (either a running instance or a new one). If you only need to migrate the data, you can instead take a snapshot of the persistent disk and apply it to a new disk.

Persistent disks retain all their information indefinitely until they are deleted, even if they are not attached to a running instance.
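
As a minimal sketch, assuming gcutil's detachdisk and attachdisk commands with the flag forms shown here (the instance and device names are placeholders, and the exact flag syntax is an assumption):

gcutil --project=<project-id> detachdisk my-old-instance --zone=<zone> --device_name=<device-name>
gcutil --project=<project-id> attachdisk my-new-instance --zone=<zone> --disk=<disk-name>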

Migrating a Persistent Disk to a Different Zone

You cannot attach a persistent disk to an instance in another zone. If you want to migrate your persistent disk data to another zone, you can use persistent disk snapshots. To do so:

  1. Create a snapshot of the persistent disk you would like to migrate.
  2. Apply the snapshot to a new persistent disk in your desired zone, as sketched below.
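
The following is a rough sketch of those two steps, assuming the addsnapshot command's --source_disk flag and the adddisk command's --source_snapshot flag (the snapshot and disk names are placeholders, and the flag names are assumptions):

gcutil --project=<project-id> addsnapshot my-migration-snapshot --source_disk=<disk-name> --zone=<source-zone>
gcutil --project=<project-id> adddisk my-new-disk --source_snapshot=my-migration-snapshot --zone=<destination-zone>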

Deleting a Persistent Disk

When you delete a persistent disk, all its data is destroyed and you will not be able to recover it.

You cannot delete a persistent disk that is attached to an instance. To check whether a disk is attached, run gcutil listinstances, which lists the persistent disks in use by each instance.

To delete a disk:

gcutil --project=<project-id> deletedisk <disk-name>

Important Flags and Parameters:

‑‑project=<project‑id>
[Required] The project ID of the disk.
<disk‑name>
[Required] The name of the disk to delete.

Setting the Auto-delete State of a Persistent Disk

Read-write persistent disks can be automatically deleted when the associated virtual machine instance is deleted. This behavior is controlled by the autoDelete property on the virtual machine instance for a given attached persistent disk, and it can be updated at any time. You can also prevent a persistent disk from being automatically deleted by setting its autoDelete value to false.

Note: You can only set the auto-delete state of a persistent disk that is attached in read-write mode.

To set the auto delete state of a persistent disk in gcutil, use the gcutil setinstancediskautodelete command:

gcutil --project=<project-id> setinstancediskautodelete <instance-name> --device_name=<device-name> --zone=<zone> --[no]auto_delete

Important Flags and Parameters:

‑‑project=<project-id>
[Required] The project ID for this request.
<instance-name>
[Required] The name of the instance for which you want to update the auto delete status of the persistent disk.
‑‑device_name=<device-name>
[Required] The device name of the persistent disk. This is the device name specified at instance creation time, if applicable, and may not be the same as the disk name. If you aren't sure, use the persistent disk name.
‑‑zone=<zone>
[Required] The zone for this request.
‑‑[no]auto_delete
[Required] The auto-delete state to set.
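
For example, to keep a hypothetical data disk after its instance is deleted (the project, instance, device, and zone names below are placeholders):

gcutil --project=my-project setinstancediskautodelete my-instance --device_name=my-data-disk --zone=us-central1-a --noauto_delete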

In the API, make a POST request to the following URI:

https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance-name>/setDiskAutoDelete?deviceName=<device-name>&autoDelete=true

Using the client library, use the instances().setDiskAutoDelete method:

def setAutoDelete(gce_service, auth_http):
  # Mark the attached disk to be deleted together with the instance.
  request = gce_service.instances().setDiskAutoDelete(
      project=PROJECT_ID, zone=ZONE, instance=INSTANCE,
      deviceName=DEVICE_NAME, autoDelete=True)
  response = request.execute(http=auth_http)

  print response

Formatting Disks

Before you can use non-root persistent disks in Google Compute Engine, you need to format and mount them. We provide the safe_format_and_mount tool in our images to assist in this process. The safe_format_and_mount tool can be found at the following location on your virtual machine instance:

/usr/share/google/safe_format_and_mount

The tool performs the following actions:

  • Formats the disk (only if it is not already formatted)
  • Mounts the disk

This can be helpful if you need to use a non-root persistent disk from a startup script, because the tool prevents your script from accidentally reformatting your disks and erasing your data.

safe_format_and_mount works much like the standard mount tool:

$ sudo mkdir <mount-point>
$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" <disk-name> <mount-point>

You can alternatively format and mount disks using standard tools such as mkfs and mount.
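
For example, a sketch using standard Linux tools (the device and mount point below are placeholders); note that mkfs.ext4 -F destroys any data already on the disk:

me@my-instance-name:~$ sudo mkfs.ext4 -F /dev/sdb
me@my-instance-name:~$ sudo mkdir /mnt/pd1
me@my-instance-name:~$ sudo mount /dev/sdb /mnt/pd1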

Caution: If you format disks from a startup script, remember that the startup script runs on every boot (whether after a planned reboot or an unexpected failure), so you risk data loss if you do not take precautions to prevent your disks from being reformatted on boot. You should also back up all important data and set up data recovery systems as a precaution.
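
A minimal startup-script sketch that avoids this problem by using safe_format_and_mount (the disk and mount point names are placeholders); the tool formats the disk only on first use, so later boots simply mount the existing filesystem:

#!/bin/bash
# Create the mount point if it does not already exist, then format (first boot only) and mount.
mkdir -p /mnt/pd0
/usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/disk/by-id/google-mypd /mnt/pd0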

Checking an Instance's Available Disk Space

If you're not sure how much disk space you have, you can check the disk space of an instance's mounted disks using the following command:

me@my-instance:~$ sudo df -h

To match a persistent disk name to its device name, run:

me@my-instance:~$ ls -l /dev/disk/by-id/google-*
...
lrwxrwxrwx 1 root root 3 MM  dd 07:44 /dev/disk/by-id/google-mypd -> ../../sdb  # google-mypd corresponds to /dev/sdb
lrwxrwxrwx 1 root root 3 MM  dd 07:44 /dev/disk/by-id/google-pd0 -> ../../sdc

me@my-instance:~$ sudo df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.4G  839M  8.1G  10% /
....
/dev/sdb              734G  197M  696G   1% /mnt/pd0 # sdb has 696GB of available space left
