This page contains Google Cloud glossary terms.
accelerator chip
A category of specialized hardware components designed to perform key computations needed for deep learning algorithms.
Accelerator chips (or just accelerators, for short) can significantly increase the speed and efficiency of training and inference tasks compared to a general-purpose CPU. They are ideal for training neural networks and similar computationally intensive tasks.
Examples of accelerator chips include:
- Google's Tensor Processing Units (TPUs) with dedicated hardware for deep learning.
- NVIDIA's GPUs, which, though initially designed for graphics processing, enable parallel processing that can significantly increase processing speed.
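To make the speed-up concrete, here is a minimal CPU-only sketch: the same matrix multiply computed one element at a time in pure Python versus dispatched to NumPy's optimized parallel kernels. Accelerator chips push this same idea much further in dedicated hardware. NumPy and the specific sizes here are illustrative assumptions, not part of the glossary.

```python
import time
import numpy as np

n = 150
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_sequential(a, b):
    """Naive matrix multiply: one multiply-add at a time, no parallelism."""
    n = a.shape[0]
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i][k] * b[k][j]
            out[i][j] = s
    return np.array(out)

t0 = time.perf_counter()
slow = matmul_sequential(a, b)
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b  # dispatched to optimized, parallel linear algebra kernels
t_vec = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f}s, parallel kernels: {t_vec:.5f}s")
```

The two results are numerically identical; only how the arithmetic is scheduled differs, which is exactly the dimension along which accelerators win.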
Cloud TPU
A specialized hardware accelerator designed to speed up machine learning workloads on Google Cloud Platform.
Tensor Processing Unit (TPU)
An application-specific integrated circuit (ASIC) that optimizes the performance of machine learning workloads. These ASICs are deployed as multiple TPU chips on a TPU device.
TPU
Abbreviation for Tensor Processing Unit.
TPU chip
A programmable linear algebra accelerator with on-chip high bandwidth memory that is optimized for machine learning workloads. Multiple TPU chips are deployed on a TPU device.
TPU device
A printed circuit board (PCB) with multiple TPU chips, high bandwidth network interfaces, and system cooling hardware.
TPU master
The central coordination process running on a host machine that sends and receives data, results, programs, performance information, and system health data to and from the TPU workers. The TPU master also manages the setup and shutdown of TPU devices.
TPU node
A TPU resource on Google Cloud Platform with a specific TPU type. The TPU node connects to your VPC Network from a peer VPC network. TPU nodes are a resource defined in the Cloud TPU API.
TPU Pod
A specific configuration of TPU devices in a Google data center. All of the devices in a TPU Pod are connected to one another over a dedicated high-speed network. A TPU Pod is the largest configuration of TPU devices available for a specific TPU version.
TPU resource
A TPU entity on Google Cloud Platform that you create, manage, or consume. For example, TPU nodes and TPU types are TPU resources.
TPU slice
A TPU slice is a fractional portion of the TPU devices in a TPU Pod. All of the devices in a TPU slice are connected to one another over a dedicated high-speed network.
TPU type
A configuration of one or more TPU devices with a specific TPU hardware version. You select a TPU type when you create a TPU node on Google Cloud Platform. For example, a v2-8 TPU type is a single TPU v2 device with 8 cores. A v3-2048 TPU type has 256 networked TPU v3 devices and a total of 2048 cores. TPU types are a resource defined in the Cloud TPU API.
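The arithmetic behind a TPU type name can be sketched in a few lines of Python. The `parse_tpu_type` helper below is hypothetical, not part of any Google API; it assumes the naming convention above, where the suffix is the total core count, and the figure of 8 cores per v2/v3 device implied by the v2-8 example.

```python
CORES_PER_DEVICE = 8  # assumption: TPU v2 and v3 devices each carry 8 cores

def parse_tpu_type(tpu_type: str) -> dict:
    """Hypothetical helper: split a TPU type string like 'v3-2048'.

    The prefix is the TPU hardware version and the suffix is the
    total core count; the device count follows from cores per device.
    """
    version, cores = tpu_type.split("-")
    total_cores = int(cores)
    return {
        "version": version,
        "total_cores": total_cores,
        "devices": total_cores // CORES_PER_DEVICE,
    }

print(parse_tpu_type("v2-8"))     # a single TPU v2 device
print(parse_tpu_type("v3-2048"))  # 256 networked TPU v3 devices
```

For the v3-2048 example this recovers the figures in the definition: 2048 cores divided by 8 cores per device gives 256 devices.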
TPU worker
A process that runs on a host machine and executes machine learning programs on TPU devices.