This page contains TensorFlow glossary terms.
Cloud TPU
A specialized hardware accelerator designed to speed up machine learning workloads on Google Cloud Platform.
custom Estimator
An Estimator that you write yourself. Contrast with premade Estimators.
Dataset API (tf.data)
A high-level TensorFlow API for reading data and transforming it into a form that a machine learning algorithm requires. A tf.data.Dataset object represents a sequence of elements, in which each element contains one or more Tensors. A tf.data.Iterator object provides access to the elements of a Dataset.
For details about the Dataset API, see Importing Data in the TensorFlow Programmer's Guide.
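A minimal sketch (TF 1.x graph mode) of the pattern the entry describes: build a Dataset, transform it, and pull elements through an Iterator. The data here is illustrative.

```python
import tensorflow as tf

# Build a Dataset from in-memory data, then transform it.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])
dataset = dataset.map(lambda x: x * 2).batch(2)

# A tf.data.Iterator provides access to the Dataset's elements.
iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    print(sess.run(next_batch))  # [2 4]
    print(sess.run(next_batch))  # [6 8]
```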
device
A category of hardware that can run a TensorFlow session, including CPUs, GPUs, and TPUs.
eager execution
A TensorFlow programming environment in which operations run immediately. By contrast, operations called in graph execution don't run until they are explicitly evaluated. Eager execution is an imperative interface, much like the code in most programming languages. Eager execution programs are generally far easier to debug than graph execution programs.
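A minimal sketch of the difference, assuming TensorFlow 1.x, where eager execution must be enabled at program startup:

```python
import tensorflow as tf

tf.enable_eager_execution()  # must run before any other TensorFlow call

# Operations execute immediately and return concrete values,
# with no graph construction or Session required.
x = tf.constant([[1.0, 2.0]])
print(x * 3)  # tf.Tensor([[3. 6.]], shape=(1, 2), dtype=float32)
```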
Estimator
An instance of the tf.Estimator class, which encapsulates logic that builds a TensorFlow graph and runs a TensorFlow session. You may create your own custom Estimators or instantiate premade Estimators created by others.
Feature column (tf.feature_column)
A function that specifies how a model should interpret a particular feature. A list that collects the output returned by calls to such functions is a required parameter to all Estimator constructors.
tf.feature_column functions enable models to easily experiment with different representations of input features. For details, see the Feature Columns chapter in the TensorFlow Programmer's Guide.
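A minimal sketch, with hypothetical feature names, of building a list of feature columns and passing it to an Estimator constructor:

```python
import tensorflow as tf

# "price" and "color" are illustrative feature names.
price = tf.feature_column.numeric_column("price")
color = tf.feature_column.categorical_column_with_vocabulary_list(
    "color", vocabulary_list=["red", "green", "blue"])

# The list of feature columns is a required constructor parameter.
estimator = tf.estimator.LinearClassifier(feature_columns=[price, color])
```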
feature spec
Describes the information required to extract feature data from the tf.Example protocol buffer. Because the tf.Example protocol buffer is just a container for data, you must specify the following:
- the data to extract (that is, the keys for the features)
- the data type (for example, float or int)
- the length (fixed or variable)
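A minimal sketch of a feature spec, using hypothetical feature names, passed to tf.parse_example to extract features from serialized tf.Example protos:

```python
import tensorflow as tf

# Keys, dtypes, and lengths are illustrative.
feature_spec = {
    "price": tf.FixedLenFeature(shape=[], dtype=tf.float32),  # fixed length
    "tags": tf.VarLenFeature(dtype=tf.string),                # variable length
}

# `serialized` stands in for a batch of serialized tf.Example protos.
serialized = tf.placeholder(tf.string, shape=[None])
parsed_features = tf.parse_example(serialized, feature_spec)
```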
graph
In TensorFlow, a computation specification. Nodes in the graph represent operations. Edges are directed and represent passing the result of an operation (a Tensor) as an operand to another operation. Use TensorBoard to visualize a graph.
graph execution
A TensorFlow programming environment in which the program first constructs a graph and then executes all or part of that graph. Graph execution is the default execution mode in TensorFlow 1.x.
Contrast with eager execution.
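A minimal sketch of graph execution in TensorFlow 1.x: construction adds nodes to a graph, and nothing is computed until the graph runs in a session:

```python
import tensorflow as tf

# Construction phase: these calls only add nodes to the default graph.
a = tf.constant(2.0)
b = tf.constant(3.0)
total = a + b  # still a symbolic Tensor, not the value 5.0

# Execution phase: evaluating the node produces a concrete value.
with tf.Session() as sess:
    print(sess.run(total))  # 5.0
```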
input function
In TensorFlow, a function that returns input data to the training, evaluation, or prediction method of an Estimator. For example, the training input function returns a batch of features and labels from the training set.
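A minimal sketch of a training input function built on the Dataset API; the feature name and data are illustrative:

```python
import tensorflow as tf

def train_input_fn():
    # "x" is a hypothetical feature name.
    features = {"x": [[1.0], [2.0], [3.0], [4.0]]}
    labels = [0, 0, 1, 1]
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(4).repeat().batch(2)

# The Estimator calls the function itself:
# estimator.train(input_fn=train_input_fn, steps=100)
```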
Layers API (tf.layers)
A TensorFlow API for constructing a deep neural network as a composition of layers. The Layers API enables you to build different types of layers, such as:
- tf.layers.Dense for a fully-connected layer.
- tf.layers.Conv2D for a convolutional layer.
The Layers API follows the Keras layers API conventions. That is, aside from a different prefix, all functions in the Layers API have the same names and signatures as their counterparts in the Keras layers API.
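A minimal sketch composing tf.layers into a small network; the input shape and layer sizes are illustrative:

```python
import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])

# A convolutional layer followed by a fully-connected layer.
conv = tf.layers.Conv2D(filters=32, kernel_size=3,
                        activation=tf.nn.relu)(inputs)
flat = tf.layers.Flatten()(conv)
logits = tf.layers.Dense(units=10)(flat)
```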
metric
A number that you care about. May or may not be directly optimized in a machine-learning system. A metric that your system tries to optimize is called an objective.
model function
The function within an Estimator that implements machine learning training, evaluation, and inference. For example, the training portion of a model function might handle tasks such as defining the topology of a deep neural network and identifying its optimizer function. When using premade Estimators, someone has already written the model function for you. When using custom Estimators, you must write the model function yourself.
For details about writing a model function, see the Creating Custom Estimators chapter in the TensorFlow Programmer's Guide.
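A minimal sketch of a custom model function; the topology, loss, and optimizer choices are illustrative assumptions, and the feature key "x" matches the hypothetical input function shown earlier:

```python
import tensorflow as tf

def model_fn(features, labels, mode):
    # Define the topology: one hidden layer plus a logits layer.
    net = tf.layers.dense(features["x"], units=16, activation=tf.nn.relu)
    logits = tf.layers.dense(net, units=2)

    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {"class": tf.argmax(logits, axis=1)}
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.train.AdamOptimizer().minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    return tf.estimator.EstimatorSpec(mode, loss=loss)  # EVAL mode

estimator = tf.estimator.Estimator(model_fn=model_fn)
```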
node (TensorFlow graph)
An operation in a TensorFlow graph.
operation (op)
A node in the TensorFlow graph. In TensorFlow, any procedure that creates, manipulates, or destroys a Tensor is an operation. For example, a matrix multiply is an operation that takes two Tensors as input and generates one Tensor as output.
Parameter Server (PS)
A job that keeps track of a model's parameters in a distributed setting.
See the TensorFlow Architecture chapter in the TensorFlow Programmer's Guide for details.
premade Estimator
An Estimator that someone has already built. TensorFlow provides several premade Estimators, including DNNClassifier, DNNRegressor, and LinearClassifier. To learn more about premade Estimators, see the Premade Estimators chapter in the TensorFlow Programmer's Guide.
Contrast with custom Estimators.
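A minimal sketch instantiating a premade Estimator; the feature column and training call are illustrative:

```python
import tensorflow as tf

# No model function to write: LinearClassifier supplies it.
age = tf.feature_column.numeric_column("age")  # hypothetical feature
classifier = tf.estimator.LinearClassifier(feature_columns=[age])
# classifier.train(input_fn=train_input_fn, steps=1000)
```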
queue
A TensorFlow Operation that implements a queue data structure. Typically used in I/O.
rank (Tensor)
The number of dimensions in a Tensor. For instance, a scalar has rank 0, a vector has rank 1, and a matrix has rank 2.
Not to be confused with rank (ordinality).
root directory
The directory you specify for hosting subdirectories of the TensorFlow checkpoint and events files of multiple models.
SavedModel
The recommended format for saving and recovering TensorFlow models. SavedModel is a language-neutral, recoverable serialization format, which enables higher-level systems and tools to produce, consume, and transform TensorFlow models.
See the Saving and Restoring chapter in the TensorFlow Programmer's Guide for complete details.
Saver
A TensorFlow object responsible for saving model checkpoints.
session (tf.Session)
An object that encapsulates the state of the TensorFlow runtime and runs all or part of a graph. When using the low-level TensorFlow APIs, you instantiate and manage one or more tf.Session objects directly. When using the Estimators API, Estimators instantiate session objects for you.
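A minimal sketch of managing a tf.Session directly with the low-level APIs; the session holds the runtime state, such as variable values:

```python
import tensorflow as tf

counter = tf.Variable(0.0)
increment = tf.assign_add(counter, 1.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # state lives in the session
    sess.run(increment)
    print(sess.run(counter))  # 1.0
```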
summary
In TensorFlow, a value or set of values calculated at a particular step, usually used for tracking model metrics during training.
Tensor
The primary data structure in TensorFlow programs. Tensors are N-dimensional (where N could be very large) data structures, most commonly scalars, vectors, or matrices. The elements of a Tensor can hold integer, floating-point, or string values.
TensorBoard
The dashboard that displays the summaries saved during the execution of one or more TensorFlow programs.
TensorFlow
A large-scale, distributed, machine learning platform. The term also refers to the base API layer in the TensorFlow stack, which supports general computation on dataflow graphs.
Although TensorFlow is primarily used for machine learning, you may also use TensorFlow for non-ML tasks that require numerical computation using dataflow graphs.
TensorFlow Serving
A platform to deploy trained models in production.
Tensor Processing Unit (TPU)
An application-specific integrated circuit (ASIC) that optimizes the performance of machine learning workloads; multiple TPU chips are deployed on a TPU device.
Tensor rank
See rank (Tensor).
Tensor shape
The number of elements a Tensor contains in various dimensions. For example, a [5, 10] Tensor has a shape of 5 in one dimension and 10 in another.
Tensor size
The total number of scalars a Tensor contains. For example, a [5, 10] Tensor has a size of 50.
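A minimal sketch illustrating rank, shape, and size together, run eagerly for brevity (tf.enable_eager_execution is a TF 1.x call):

```python
import tensorflow as tf

tf.enable_eager_execution()

t = tf.zeros([5, 10])      # a rank-2 Tensor (a matrix)
print(tf.rank(t).numpy())  # 2
print(t.shape)             # (5, 10)
print(tf.size(t).numpy())  # 50
```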
tf.Example
A standard protocol buffer for describing input data for machine learning model training or inference.
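A minimal sketch building and serializing a tf.Example proto; the feature names and values are illustrative:

```python
import tensorflow as tf

example = tf.train.Example(features=tf.train.Features(feature={
    "price": tf.train.Feature(
        float_list=tf.train.FloatList(value=[9.99])),
    "color": tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[b"red"])),
}))
serialized = example.SerializeToString()  # ready to write to a TFRecord
```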
TPU
Abbreviation for Tensor Processing Unit.
TPU chip
A programmable linear algebra accelerator with on-chip high bandwidth memory that is optimized for machine learning workloads. Multiple TPU chips are deployed on a TPU device.
TPU device
A printed circuit board (PCB) with multiple TPU chips, high bandwidth network interfaces, and system cooling hardware.
TPU master
The central coordination process running on a host machine that sends data, results, programs, and performance and system health information to the TPU workers and receives it back from them. The TPU master also manages the setup and shutdown of TPU devices.
TPU Pod
A specific configuration of TPU devices in a Google data center. All of the devices in a TPU Pod are connected to one another over a dedicated high-speed network. A TPU Pod is the largest configuration of TPU devices available for a specific TPU version.
TPU type
A configuration of one or more TPU devices with a specific TPU hardware version. You select a TPU type when you create a TPU node on Google Cloud Platform. For example, a v2-8 TPU type is a single TPU v2 device with 8 cores. A v3-2048 TPU type has 256 networked TPU v3 devices and a total of 2048 cores. TPU types are a resource defined in the Cloud TPU API.
TPU worker
A process that runs on a host machine and executes machine learning programs on TPU devices.