The Tango project will be deprecated on March 1st, 2018.
Google is continuing AR development with ARCore, a new platform designed for building augmented reality apps for a broad range of devices without the requirement for specialized hardware.

Area Learning

How it works

With Motion Tracking alone, the device tracks its movement and orientation through 3D space and tells you where it is and which way it’s facing, but it retains no memory of what it sees. Area Learning gives the device the ability to see and remember the key visual features of a physical space—the edges, corners, and other unique features—so it can recognize that area again later. To do this, it stores a mathematical description of the visual features it has identified inside a searchable index on the device. This allows the device to quickly match what it currently sees against what it has seen before without any cloud services.
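Conceptually, this searchable index behaves like a nearest-neighbor lookup over feature descriptors. The sketch below is a toy illustration only—the descriptors, names, and distance threshold are invented and are far simpler than Tango's actual representation:

```python
import math

# Toy "index": each learned spot stores one feature descriptor.
# (A real system stores many high-dimensional visual descriptors.)
learned_features = {
    "doorway":  (0.9, 0.1, 0.3),
    "bookcase": (0.2, 0.8, 0.5),
    "window":   (0.4, 0.4, 0.9),
}

def match_view(descriptor, index, threshold=0.3):
    """Return the learned spot whose stored descriptor is nearest to the
    current view's descriptor, but only if it is close enough to count
    as a match."""
    best_name, best_dist = None, float("inf")
    for name, stored in index.items():
        dist = math.dist(descriptor, stored)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(match_view((0.88, 0.12, 0.28), learned_features))  # a recognized spot
print(match_view((0.0, 0.0, 0.0), learned_features))     # an unfamiliar view
```

Because the lookup runs against an on-device index, recognition works without a network connection, which is the property the paragraph above describes.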

When a Tango device has learned an area, there are two key things it can do to improve upon the information provided by Motion Tracking alone:

  1. Improve the accuracy of the trajectory by performing "drift corrections."

  2. Orient and position itself within a previously learned area by performing "localization."

Improving the trajectory

As mentioned on the Motion Tracking Overview page, motion estimates become less accurate over time. The device corrects for some errors by orienting itself to gravity, but errors in other aspects of its pose cannot be detected through Motion Tracking alone.

With Area Learning turned on, the Tango device remembers the visual features of the area it has visited and uses them to correct errors in its understanding of its position, orientation, and movement. This memory allows the system to perform drift corrections (also called loop closures). When the device sees a place it knows it has seen earlier in your session, it realizes it has traveled in a loop and adjusts its path to be more consistent with its previous observations. These corrections can be used to adjust the device's position and trajectory within your application.

The illustration below shows an example of drift correction. As you begin walking through an area, there are actually two different trajectories occurring simultaneously—the path you are walking (the "real trajectory") and the path the device estimates that you are walking (the "estimated trajectory"). The green line is the real trajectory that the device is traveling; the red line shows how, over time, the estimated trajectory has drifted away from the real trajectory. When the device returns to the origin and realizes it has seen the origin before, it corrects the drift errors and adjusts the estimated trajectory to better match the real trajectory.

Without drift correction, a game or application using a virtual 3D space aligned with the real world may encounter inaccuracies in Motion Tracking after extended use. For example, if a door in a game world corresponds with a door frame in the real world, drift errors can cause the game door to appear in the middle of the real-world wall instead of in the door frame.
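The drift correction described above can be sketched in miniature. This is not Tango's algorithm—a real SLAM system solves an optimization over the whole pose graph—but it shows the core idea: when the device recognizes a previously seen spot, the accumulated drift at the end of the path is spread back along the loop:

```python
def correct_drift(estimated_path, loop_start_index, observed_drift):
    """Distribute a loop-closure correction linearly along the loop.

    estimated_path:   list of (x, y) positions the device estimated
    loop_start_index: index of the recognized spot (start of the loop)
    observed_drift:   estimated end position minus the true position
    """
    n = len(estimated_path) - 1 - loop_start_index
    corrected = list(estimated_path[:loop_start_index + 1])
    for i, (x, y) in enumerate(estimated_path[loop_start_index + 1:], start=1):
        frac = i / n  # later poses accumulated more drift, so correct more
        corrected.append((x - observed_drift[0] * frac,
                          y - observed_drift[1] * frac))
    return corrected

# The device started at the origin, walked a loop, and its estimate ends
# at (0.4, 0.2)—but it recognizes the origin, so the drift is (0.4, 0.2):
path = [(0, 0), (1, 0), (1, 1), (0.4, 0.2)]
print(correct_drift(path, 0, (0.4, 0.2)))
```

After the correction, the final estimated pose coincides with the recognized origin, and intermediate poses are nudged proportionally, just as the trajectory in the illustration snaps back toward the real path.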

Area descriptions and localization

After you have walked through an area with Area Learning turned on, you can save what the device has seen in an Area Description File (ADF). Learning an area and loading it as an ADF has a number of advantages; for example, you can use it to intentionally align the device's coordinate frame with a pre-existing coordinate frame so that content in a game or app always appears in the same physical location.

There are two ways to create an ADF. You can use any application that can save area descriptions, including the Tango Area Learning sample projects (see more information about sample projects for C, Java, or Unity). Alternatively, you can use the Tango APIs to handle the learning, saving, and loading entirely within your own application.

If you want to create a consistent experience within the same mapped space, such as having virtual objects appear in the same location as the last time the user visited an area, you must perform localization. This is a two-step process:

  1. Load a previously saved ADF.

  2. Move the device into the area that was saved in the ADF.

When the device "sees" that it is in the area covered by the ADF, it instantly knows where it is relative to the origin in the file (that is, the point where original learning started in the saved area)—this is localization. Without localizing to an area description, a device's starting point is lost every time you end the session.
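The two-step localization process above can be illustrated with a toy model. All landmark names and coordinates here are invented for illustration; the point is that once the device recognizes something stored in the ADF, its own pose relative to the ADF origin follows immediately:

```python
# Toy "ADF": landmarks with positions in the ADF's coordinate frame,
# whose origin is where the original learning session started.
adf_landmarks = {
    "doorway":  (2.0, 0.0),
    "bookcase": (5.0, 3.0),
}

def localize(seen_landmark, offset_from_device, adf):
    """If the device recognizes a landmark from the ADF, its own position
    in the ADF frame is the landmark's stored position minus the offset
    it measures to that landmark. Returns None while not yet localized."""
    if seen_landmark not in adf:
        return None  # keep moving through the area until something matches
    lx, ly = adf[seen_landmark]
    ox, oy = offset_from_device
    return (lx - ox, ly - oy)

# The device sees the doorway 1.5 m ahead and 0.5 m to one side:
print(localize("doorway", (1.5, 0.5), adf_landmarks))  # → (0.5, -0.5)
```

This mirrors the text: before a match, the device has no pose relative to the ADF origin; after a single recognition, it does.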

Usability tips

  • Tango devices depend on the visual diversity of an area to localize. In an area with many identical rooms, or in a completely empty room with blank walls, the device will have difficulty localizing.

  • An environment can look quite different from different angles and positions, and can change over time (furniture can be moved around, lighting will be different depending on the time of day). Localization is more likely to succeed if the conditions at the time you localize are similar to the conditions that existed when the ADF was created.

  • Because environments can and do change, you might create multiple ADFs for a single physical location under different conditions. This gives your users the option to select a file that most closely matches their current conditions. You could also append multiple sessions onto the same ADF to capture visual descriptions of the environment from every position and angle and under every variation of lighting or environmental change.

Our UX Best Practices page has additional tips on creating ADFs and using Area Learning.

Common use cases

Multi-player experiences: Two or more users in the same physical location share an ADF through a cloud service and then localize to the same coordinate frame. This allows multiple people to interact in the same physical space where all of their relative positions are known. The Tango APIs do not natively support data sharing in the cloud, but you can implement this through Google Cloud Storage and the Google Play Games API.

Location-aware shopping or other activities: A retail store manager makes an ADF of their store and then makes the ADF publicly available. Customers load the ADF, localize, and then use the device to navigate directly to products they are interested in.

Area Learning and using area descriptions are powerful features, and we’re excited to see how developers use them to offer new user experiences.

Using learning mode and loaded ADFs

The behavior of some aspects of the Tango APIs will vary depending on your settings for learning mode or whether you loaded an ADF.

In the table below, the two left columns specify whether you have learning mode on and whether you have loaded a previously stored ADF. Whether you can save an ADF depends on those two settings. For example, if you don't have learning mode on, you cannot save an ADF. If you have learning mode on and have loaded an ADF, you can only save again after you have localized against the loaded ADF.

Also, if you aren't in learning mode and don't have an ADF loaded, you cannot get pose data using the TANGO_COORDINATE_FRAME_AREA_DESCRIPTION frame of reference. If you have an ADF loaded, you can get pose data from that frame of reference after the device localizes to the loaded ADF.

| Is learning mode on? | Is there an ADF loaded? | Pose data: start of service to device | Pose data: area description to device | Pose data: area description to start of service | Can you save an ADF? |
| --- | --- | --- | --- | --- | --- |
| False | False | Available at start | Not available | Not available | Cannot save an area description. |
| True | False | Available at start | Available at start* | Available at start* | Current area description is saved with a new UUID. |
| False | True | Available at start | Available after localized | Available after localized | Cannot save an area description. |
| True | True | Available at start | Available after localized | Available after localized | Can save only after localizing against the loaded ADF; saving creates a new file with a new UUID. |

*If tracking is lost, these frame of reference pairs become unavailable. After the service resets, the session behaves as if learning mode were on and an ADF were loaded, where the "loaded" area description is everything learned up to the loss of tracking. To continue using the area description frame of reference, you must localize against what was learned before tracking was lost; likewise, you must localize before you can save an ADF that includes it.
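The availability and save rules in the table (setting aside the tracking-loss footnote) reduce to a pair of boolean helpers. This is a summary sketch of the table's logic, not Tango API code:

```python
def area_frame_pose_available(learning_mode, adf_loaded, localized):
    """Is pose data available for the area-description frame pairs?
    Per the table: with an ADF loaded, only after localizing; with
    learning mode alone, from the start; with neither, never."""
    if adf_loaded:
        return localized
    return learning_mode

def can_save_adf(learning_mode, adf_loaded, localized):
    """Saving always requires learning mode; if an ADF is also loaded,
    it additionally requires having localized against that ADF."""
    if not learning_mode:
        return False
    return localized if adf_loaded else True

print(can_save_adf(True, True, False))  # → False: must localize first
```

Walking each of the table's four rows through these two functions reproduces the table's entries.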
