The Tango project will be deprecated on March 1st, 2018.
Google is continuing AR development with ARCore, a new platform designed for building augmented reality apps for a broad range of devices without the requirement for specialized hardware.

In addition to working with our OEM partners on new devices, Google is also working closely with Asus, as one of our early Tango partners, to ensure that ZenFone AR will work with ARCore. We'll be back with more information soon on how existing and new ZenFone AR users will get access to ARCore apps.

Motion Tracking

How it works

As described on the Tango Concepts page, Motion Tracking allows a device to understand its motion as it moves through an area. This page describes Tango's implementation of Motion Tracking and suggests several ways to use it in your applications.


The Tango APIs provide the position and orientation of the user's device in full six degrees of freedom; this combination of position and orientation is referred to as the device's pose. The APIs support two ways to get pose data: callbacks that deliver the most recent pose updates, and functions that return a pose estimate for a specific time. Pose data has two main parts: a translation expressed as a vector in meters, and a rotation expressed as a quaternion. Poses are always specified within a reference frame pair: when requesting a pose, you must specify a target frame with respect to a base frame of reference.
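To make the translation-plus-quaternion representation concrete, here is a minimal sketch of a pose and how it maps a point from the target frame into the base frame. The `Pose` class and function names are illustrative, not the Tango API.

```python
def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q: q * v * q_conjugate."""
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *v)),
                          (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

class Pose:
    """A 6DoF pose: where the target frame sits in the base frame."""
    def __init__(self, translation, orientation):
        self.translation = translation   # (tx, ty, tz) in meters
        self.orientation = orientation   # unit quaternion (w, x, y, z)

    def transform_point(self, p):
        """Map a point from the target frame into the base frame."""
        r = rotate(self.orientation, p)
        t = self.translation
        return (r[0] + t[0], r[1] + t[1], r[2] + t[2])

# Identity rotation: the point is only translated.
pose = Pose((1.0, 2.0, 0.0), (1.0, 0.0, 0.0, 0.0))
print(pose.transform_point((0.5, 0.0, 0.0)))  # (1.5, 2.0, 0.0)
```

A real application would obtain the translation and quaternion from a pose callback rather than constructing them by hand.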

Usability tips

For applications that translate a user's movement in the real world to a virtual one, you will need to be aware of physical space requirements. If your virtual castle is 100 meters long, the user will have to move the same distance in the real world to get from one end of the castle to the other.

Also, it is important that the user understands how the virtual 3D world will be oriented when they start your application so they aren’t blocked from moving around by walls, doors, or furniture. You may want to offer options for visualizing the expected area needed to play or allow users to rescale the virtual space.
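One way to offer the rescaling option mentioned above is a world-scale factor applied to the tracked translation. This is a hypothetical sketch; the function name and scale value are not from the Tango SDK.

```python
def real_to_virtual(device_translation_m, world_scale):
    """Scale a real-world translation (meters) into virtual-world units."""
    return tuple(c * world_scale for c in device_translation_m)

# With a 10x world scale, walking 10 m crosses a 100 m virtual castle.
step = real_to_virtual((10.0, 0.0, 0.0), world_scale=10.0)
print(step)  # (100.0, 0.0, 0.0)
```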

Common use cases

Improved rotation sensing: Any application using the Android Game Rotation Vector APIs gains enhanced precision by switching to the Tango API. In addition to the gyroscope and accelerometers, Tango uses the wide-angle motion tracking camera (sometimes referred to as the "fisheye" lens) to add visual information, which helps to estimate rotation and linear acceleration more accurately.

Tracking movement: Tango allows you to track a device's movement in the real world. The sample app below shows a top-down view of the device's position and movement on a 2D grid, with the motion tracking camera's field of view drawn as a viewing frustum.
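A top-down view like the one in the sample app boils down to discarding the vertical axis and snapping positions to grid cells. This illustrative sketch (not code from the sample app) assumes y is the up axis:

```python
def to_grid_cell(position_m, cell_size_m=0.5):
    """Map an (x, y, z) device position to a top-down (col, row) grid cell.
    Assumes y is the 'up' axis, so it is discarded."""
    x, _, z = position_m
    return (int(x // cell_size_m), int(z // cell_size_m))

# A short walk, sampled as 3D positions in meters.
path = [(0.0, 1.4, 0.0), (0.6, 1.4, 0.2), (1.3, 1.5, 0.9)]
print([to_grid_cell(p) for p in path])  # [(0, 0), (1, 0), (2, 1)]
```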

Virtual camera: When you combine rotation and position tracking, you can use the device as a virtual camera in a 3D rendered environment such as a game. Tango provides an SDK for the Unity 3D game engine and supports OpenGL and other 3D engines through C or Java.
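The virtual camera pattern is simply copying the latest device pose onto the rendered camera every frame. This is a hypothetical skeleton; `Camera` and the pose dictionary stand in for engine and SDK types.

```python
class Camera:
    """Stand-in for a 3D engine's camera object."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.orientation = (1.0, 0.0, 0.0, 0.0)  # quaternion (w, x, y, z)

def update_camera(camera, pose):
    """Per-frame update: drive the rendered camera from the device pose."""
    camera.position = pose["translation"]
    camera.orientation = pose["orientation"]

cam = Camera()
update_camera(cam, {"translation": (0.2, 1.5, -0.3),
                    "orientation": (1.0, 0.0, 0.0, 0.0)})
print(cam.position)  # (0.2, 1.5, -0.3)
```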

To see some examples of virtual reality in action using Tango, see our demos page.


Motion Tracking is great by itself if you need to know a device's orientation and relative position, but it does have limitations:

  • It does not give the device the ability to understand the actual area around it.

  • It does not "remember" previous sessions. Every time you start a new Motion Tracking session, the tracking starts over and reports its position relative to its most recent starting position.

  • Over long distances and periods of time the accumulation of small errors can cause measurements to "drift," leading to larger errors in absolute position.
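The drift limitation above can be illustrated with a toy simulation (not Tango data): each measured step carries a tiny error, and those errors accumulate over a long trajectory.

```python
import random

random.seed(0)  # make the toy run repeatable

true_x = est_x = 0.0
for _ in range(1000):
    step = 0.1                        # device moves 0.1 m per update
    noise = random.gauss(0, 0.001)    # ~1 mm of error per measured step
    true_x += step
    est_x += step + noise

print(round(true_x, 2))               # 100.0 m actually traveled
print(abs(est_x - true_x))            # accumulated drift, grows with path length
```

The per-step error is tiny, but the absolute position error has no bound: it keeps growing with distance and time, which is why relocalization against a saved description of the area (Area Learning) is needed for long sessions.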

For certain types of apps, you will need the ability to save descriptions of a space to reference later—for example, a retail store app where customers can call up a saved area and then shop for products within it. You will also need to correct the "drift" problem. Both of these issues are addressed by the next core technology, Area Learning.

More about tracking rotation and acceleration

Tango implements Motion Tracking using visual-inertial odometry, or VIO, to estimate where a device is relative to where it started.

Standard visual odometry uses camera images to determine a change in position by looking at the relative position of different features in those images. For example, if you took a photo of a building from far away and then took another photo from closer up, it would be possible to calculate the distance the camera moved based on the change in size and position of the building in the photos.
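The building example can be worked through with a simple pinhole camera model: an object's apparent size in pixels is inversely proportional to its distance. The numbers below are illustrative, not from Tango.

```python
def distance_m(focal_px, real_height_m, image_height_px):
    """Pinhole model: distance to an object from its apparent image size."""
    return focal_px * real_height_m / image_height_px

f = 1000.0       # focal length in pixels (assumed)
building = 30.0  # real building height in meters (assumed)

d1 = distance_m(f, building, 300.0)  # building spans 300 px in photo 1
d2 = distance_m(f, building, 600.0)  # spans 600 px in photo 2 (closer)
print(d1 - d2)  # 50.0 -> the camera moved about 50 m toward the building
```

Real visual odometry tracks many small image features rather than one object, but the geometric principle is the same.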

Visual-inertial odometry supplements visual odometry with inertial motion sensors capable of tracking a device's rotation and acceleration. This allows a Tango device to estimate both its orientation and movement within a 3D space with even greater accuracy. Unlike GPS, Motion Tracking using VIO works indoors.
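A heavily simplified sketch of the fusion idea behind VIO is a complementary filter: trust the fast but drifting inertial estimate in the short term, and pull it toward the slower, drift-free visual estimate over time. The blend coefficient here is illustrative; production systems use far more sophisticated filters.

```python
def fuse(inertial_angle, visual_angle, alpha=0.98):
    """Complementary filter: blend inertial and visual orientation estimates."""
    return alpha * inertial_angle + (1.0 - alpha) * visual_angle

# Inertial integration has drifted to 10.5 deg; vision measures 10.0 deg.
print(round(fuse(10.5, 10.0), 3))  # 10.49
```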


To learn about making 3D representations of the actual geometry surrounding the user, go to the Depth Perception page.

  • Tango does not provide a global position or use GPS to find its location. It tracks its relative position within an area using its built-in sensors. If you want to estimate the geographic location of the user, use the Android Location API.

  • Tango and Android are not hard real-time systems. This is largely because the Android Linux Kernel cannot provide strong guarantees on the execution time of the software running on the device. As a result, Project Tango is considered a soft real-time system. All data from Tango includes timestamps from a single clock with single-microsecond accuracy, relative to when the device booted, to allow you to reconcile readings from multiple sensors or data types.

  • The process of learning an area while keeping track of a user's current position within it is known as Simultaneous Localization and Mapping, or SLAM.
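The shared-clock timestamps mentioned above are what let you reconcile streams that arrive at different rates; for example, you can interpolate between two pose samples to estimate where the device was when a camera frame was captured. This is a conceptual sketch; the field layout is illustrative.

```python
def interpolate_x(t, sample_a, sample_b):
    """Linearly interpolate an x position between two timestamped samples."""
    ta, xa = sample_a
    tb, xb = sample_b
    w = (t - ta) / (tb - ta)
    return xa + w * (xb - xa)

# Pose samples at t=0.100 s and t=0.133 s; a camera frame at t=0.1165 s.
x = interpolate_x(0.1165, (0.100, 1.00), (0.133, 1.10))
print(round(x, 3))  # 1.05
```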
