Tango Camera Intrinsics and Extrinsics


When you calibrate your device, the calibration software takes certain measurements relating to how the device's cameras are structured and how they "see" the world. The measurements are called intrinsic parameters. This information is essential for certain use cases. For example, if you create an Augmented Reality app, you combine rendered virtual content with a video overlay. You need to ensure that the field of view (FOV) of the device's camera aligns with the FOV of the virtual camera in the rendering tool you're using. (FOV is explained below.) You can query the Tango API to obtain the values you need to make this alignment.

Another important calculation for Augmented Reality apps ensures that your virtual objects wind up exactly where you intend them to be in the FOV. The component in a Tango device that determines where the device is (position) and which way it's facing (rotation) is the Inertial Measurement Unit (IMU). However, the camera is not in the exact same place on the device as the IMU, so it "sees" the world from a slightly different location. This difference, though small, might be enough to make the virtual objects in your app look a bit misplaced. You must compensate for this by choosing an appropriate coordinate frame pair. The distances between components on the device are considered extrinsic parameters and will be discussed further down the page.

Field of view

The FOV is how much of a scene in front of a Tango device the device can see, from left to right (horizontal FOV) and from top to bottom (vertical FOV). FOV is measured in degrees.

Focal length

Focal length is an intrinsic parameter. The focal length helps to determine the size of the FOV. On most traditional cameras, you can adjust the focal length by zooming in or out. On a Tango device, the lenses are fixed, so this isn't a factor. The important things for you to know about focal length are:

  1. When you request intrinsic parameters from the Tango API for a particular camera, two of the parameters you receive are the x and y focal length values for that camera.
  2. You must use these values in the formula that determines the FOV for your rendering tool.

Retrieve intrinsics from the Tango API

Let's take a look at retrieving intrinsic parameters from the C API. The steps are:

  1. Create a TangoCameraIntrinsics struct.
  2. Call the TangoService_getCameraIntrinsics function. Pass it the ID for the camera and the address of the struct.

For example, to get the color camera intrinsics:

TangoCameraIntrinsics ccIntrinsics;
TangoService_getCameraIntrinsics(TANGO_CAMERA_COLOR, &ccIntrinsics);

The function populates the struct with intrinsic information.

The steps in Java and Unity are similar.

In Java, you call a single method, passing it the ID for the camera. The method returns a TangoCameraIntrinsics object that contains intrinsic information.

In Unity:

  1. Create a TangoCameraIntrinsics struct object.
  2. Call the GetIntrinsics method of the Tango.VideoOverlayProvider class. Pass it the ID for the camera and the struct object.

The method populates the struct with intrinsic information.

Calculate camera field of view (FOV)

All the parameters you need to calculate FOV can be retrieved from the API.

Parameter   Description
width       The width of the image on the image sensor, in pixels.
height      The height of the image on the image sensor, in pixels.
fx          Focal length along the x axis, in pixels.
fy          Focal length along the y axis, in pixels.

Note that in most systems, fx = fy.

Example: Let's say you get the color camera intrinsics as mentioned above:

TangoCameraIntrinsics ccIntrinsics;
TangoService_getCameraIntrinsics(TANGO_CAMERA_COLOR, &ccIntrinsics);

When you examine the ccIntrinsics struct, you find the width, height, and focal length values for the color camera.

The equations for the horizontal and vertical fields of view are:

Horizontal FOV = 2 * atan(0.5 * width / fx)

Vertical FOV = 2 * atan(0.5 * height / fy)

For example, with a height of 720.0 pixels and an fy of 1042.0 pixels:

Vertical FOV = 2 * atan(0.5 * 720.0 / 1042.0) = 2 * 19.0549 deg = 38.1098 deg

If your rendering engine supports only one FOV value, refer to its documentation to determine which FOV to use. If you need the diagonal field of view, the equation is:

Diagonal FOV = 2 * atan(sqrt((width / (2 * fx))^2 + (height / (2 * fy))^2))

For more information about calculating the field of view, see the Wikipedia article on Angle of View.

Tango's lens distortion models

The lenses in cameras are not perfect and add some amount of distortion. For most use cases the effect is small enough to ignore; however, when a Tango device is calibrated, it examines and stores distortion information and these values are available in the TangoCameraIntrinsics struct (C and Unity) or object (Java) if you need them.

Tango uses two lens distortion models:

If you are using the motion tracking camera (sometimes called the "fisheye" lens), then the "FOV" distortion model is used and the TangoCameraIntrinsics struct/object will contain the calibration type TANGO_CALIBRATION_EQUIDISTANT.

If you are using the color camera, then the polynomial distortion model is used and the TangoCameraIntrinsics struct/object will contain the calibration type TANGO_CALIBRATION_POLYNOMIAL_3_PARAMETERS.

Retrieve extrinsics from the Tango API

As mentioned earlier, for Augmented Reality apps you must know the pose (position and rotation) of the camera you are using with respect to the IMU. This is considered an extrinsic parameter. Because the positions of components on a device don't change, you only need to retrieve it once.

// Color camera frame with respect to the IMU frame
TangoPoseData cToIMUPose;
TangoCoordinateFramePair cToIMUPair;
cToIMUPair.base = TANGO_COORDINATE_FRAME_IMU;
cToIMUPair.target = TANGO_COORDINATE_FRAME_CAMERA_COLOR;
TangoService_getPoseAtTime(0.0, cToIMUPair, &cToIMUPose);
// ToVector and ToQuaternion stand in for your math library's
// constructors; orientation is stored as x, y, z, w.
cToIMU_position = ToVector(cToIMUPose.translation[0],
                           cToIMUPose.translation[1],
                           cToIMUPose.translation[2]);
cToIMU_rotation = ToQuaternion(cToIMUPose.orientation[3],
                               cToIMUPose.orientation[0],
                               cToIMUPose.orientation[1],
                               cToIMUPose.orientation[2]);

You can do this calculation for any two components on the device by choosing the appropriate coordinate frame pair.