The Raw Depth API provides depth data for a camera image that has higher accuracy than full Depth API data, but does not always cover every pixel. Raw depth images, along with their matching confidence images, can also be further processed, allowing apps to use only the depth data that has sufficient accuracy for their individual use case.
Raw Depth is available on all devices that support the Depth API. The Raw Depth API, like the full Depth API, does not require a supported hardware depth sensor, such as a time-of-flight (ToF) sensor. However, both the Raw Depth API and the full Depth API make use of any supported hardware sensors that a device may have.
Raw Depth API vs full Depth API
The Raw Depth API provides depth estimates with higher accuracy, but raw depth images may not include depth estimates for all pixels in the camera image. In contrast, the full Depth API provides estimated depth for every pixel, but per-pixel depth data may be less accurate due to smoothing and interpolation of depth estimates. The format and size of depth images are the same across both APIs. Only the content differs.
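Because the format is shared, the same pixel-decoding logic works for both APIs. ARCore depth images store each sample as a 16-bit unsigned value in millimeters; the helper name below is hypothetical, for illustration only:

```csharp
// ARCore depth images (raw and full) encode distance as 16-bit
// unsigned values in millimeters. This hypothetical helper converts
// one depth sample to meters.
static float DepthSampleToMeters(ushort depthMillimeters)
{
    return depthMillimeters / 1000f;
}
```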
The following table illustrates the differences between the Raw Depth API and the full Depth API using an image of a chair and a table in a kitchen.
| Raw Depth API | Full Depth API |
| --- | --- |
In confidence images returned by the Raw Depth API, lighter pixels have higher confidence values, with white pixels representing full confidence and black pixels representing no confidence. In general, regions in the camera image that have more texture, such as a tree, will have higher raw depth confidence than regions that don’t, such as a blank wall. Surfaces with no texture usually yield a confidence of zero.
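An app can use those confidence values to keep only depth samples that meet its accuracy needs. The sketch below assumes 8-bit confidence pixels (0 = no confidence, 255 = full confidence); the helper name and the 0.5 threshold are illustrative choices, not API constants:

```csharp
// Normalize an 8-bit raw depth confidence pixel to [0, 1] and keep
// only samples above an app-chosen threshold (0.5 here is arbitrary).
static bool IsConfidentEnough(byte confidence, float threshold = 0.5f)
{
    float normalized = confidence / 255f;
    return normalized >= threshold;
}
```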
If the target device has a supported hardware depth sensor, confidence in areas of the image close enough to the camera will likely be higher, even on textureless surfaces.
The compute cost of the Raw Depth API is about half that of the full Depth API.
With the Raw Depth API, you can obtain depth images that provide a more detailed representation of the geometry of the objects in the scene. Raw depth data can be useful when creating AR experiences where increased depth accuracy and detail are needed for geometry-understanding tasks. Some use cases include:
- 3D reconstruction
- Shape detection
In a new ARCore session, check whether the user's device supports the Depth API. Not all ARCore-compatible devices support the Depth API due to processing power constraints. To save resources, depth is disabled by default in ARCore. Enable depth mode to have your app use the Depth API.
```csharp
var occlusionManager = // Typically acquired from the Camera game object.

// Check whether the user's device supports the Depth API.
if (occlusionManager.descriptor?.environmentDepthImageSupported == Supported.Supported)
{
    // If depth mode is available on the user's device, perform
    // the steps you want here.
}
```
Acquire the latest raw depth image
Call AROcclusionManager.TryAcquireEnvironmentDepthCpuImage() with AROcclusionManager.environmentDepthTemporalSmoothingRequested set to false to acquire the latest raw depth image on the CPU.
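Putting those two calls together, a minimal sketch (assuming an AROcclusionManager reference is already available, as in the device-support check above) might look like:

```csharp
// Request raw (temporally unsmoothed) depth.
occlusionManager.environmentDepthTemporalSmoothingRequested = false;

// Attempt to get the latest environment depth image.
if (occlusionManager.TryAcquireEnvironmentDepthCpuImage(out XRCpuImage image))
{
    using (image)
    {
        // Use the raw depth image here. XRCpuImage wraps native
        // memory and must be disposed when no longer needed.
    }
}
```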
Acquire the latest raw depth confidence image
With AROcclusionManager.environmentDepthTemporalSmoothingRequested set to false, call AROcclusionManager.TryAcquireEnvironmentDepthConfidenceCpuImage() to acquire the confidence image on the CPU.
```csharp
// Attempt to get the latest environment depth confidence image.
if (occlusionManager && occlusionManager.TryAcquireEnvironmentDepthConfidenceCpuImage(out XRCpuImage image))
{
    // Use and then dispose the confidence image.
    using (image) { /* Use the confidence image here. */ }
}
else
{
    m_RawEnvironmentDepthConfidenceImage.enabled = false;
}
```