Captures the state and changes to the AR system from a call to Session.update().
Public Methods
| Return type | Method |
|---|---|
| Image | acquireCameraImage(): Attempts to acquire an image from the camera that corresponds to the current frame. |
| Image | acquireDepthImage(): Attempts to acquire a depth image that corresponds to the current frame. |
| PointCloud | acquirePointCloud(): Acquires the current set of estimated 3D points attached to real-world geometry. |
| long | getAndroidCameraTimestamp(): Returns the Android Camera timestamp of the image. |
| Pose | getAndroidSensorPose(): Returns the pose of the Android Sensor Coordinate System in the world coordinate space for this frame. |
| Camera | getCamera(): Returns the Camera object for the session. |
| int | getCameraTextureName(): Returns the OpenGL ES camera texture name (id) associated with this frame. |
| ImageMetadata | getImageMetadata(): Returns the camera metadata for the current camera image, if available. |
| LightEstimate | getLightEstimate(): Returns the current ambient light estimate, if light estimation was enabled. |
| long | getTimestamp(): Returns the timestamp in nanoseconds when this image was captured. |
| Collection<Anchor> | getUpdatedAnchors(): Returns the anchors that were changed by the Session.update() that returned this Frame. |
| <T extends Trackable> Collection<T> | getUpdatedTrackables(Class<T> filterType): Returns the trackables of a particular type that were changed by the Session.update() that returned this Frame. |
| boolean | hasDisplayGeometryChanged(): Checks if the display rotation or viewport geometry changed since the previous Frame. |
| List<HitResult> | hitTest(MotionEvent motionEvent): Similar to hitTest(float, float), but takes values from an Android MotionEvent. |
| List<HitResult> | hitTest(float[] origin3, int originOffset, float[] direction3, int directionOffset): Similar to hitTest(float, float), but takes an arbitrary ray in world space coordinates instead of a screen-space point. |
| List<HitResult> | hitTest(float xPx, float yPx): Performs a ray cast from the user's device in the direction of the given location in the camera view. |
| List<HitResult> | hitTestInstantPlacement(float xPx, float yPx, float approximateDistanceMeters): Performs a ray cast that can return a result before ARCore establishes full tracking. |
| void | transformCoordinates2d(Coordinates2d inputCoordinates, float[] inputVertices2d, Coordinates2d outputCoordinates, float[] outputVertices2d): Transforms a list of 2D coordinates from one 2D coordinate system to another. |
| void | transformCoordinates2d(Coordinates2d inputCoordinates, FloatBuffer inputVertices2d, Coordinates2d outputCoordinates, FloatBuffer outputVertices2d): Transforms a list of 2D coordinates from one 2D coordinate system to another. |
| void | transformDisplayUvCoords(FloatBuffer uvCoords, FloatBuffer outUvCoords): Deprecated. Replaced by transformCoordinates2d(Coordinates2d.VIEW_NORMALIZED, .., Coordinates2d.TEXTURE_NORMALIZED, ..). |
Inherited Methods
Public Methods
public Image acquireCameraImage ()
Attempts to acquire an image from the camera that corresponds to the current frame. Depending
on device performance, can throw NotYetAvailableException
for several frames after session start, and for a few frames at a time while the session is
running.
Returns
- an Android image object that contains the image data from the camera. The returned image object format is AIMAGE_FORMAT_YUV_420_888.
Throws

| Exception | Condition |
|---|---|
| NullPointerException | if session or frame is null. |
| DeadlineExceededException | if the input frame is not the current frame. |
| ResourceExhaustedException | if the caller app has exceeded the maximum number of images that it can hold without releasing. |
| NotYetAvailableException | if the image with the timestamp of the input frame did not become available within a bounded amount of time, or if the camera failed to produce the image. |
public Image acquireDepthImage ()
Attempts to acquire a depth image that corresponds to the current frame.
The depth image has a single 16-bit plane at index 0. Each pixel contains the distance in millimeters to the camera plane. Currently, only the low-order 13 bits are used; the 3 highest-order bits are always set to 000. The image plane is stored in big-endian format.

The actual resolution of the depth image depends on the device and its display aspect ratio, with sizes typically around 160x120 pixels and higher resolutions up to 640x480 on some devices. These sizes may change in the future.
The output depth image can express depth values from 0 millimeters to 8191 millimeters. Optimal depth accuracy is achieved between 50 millimeters and 5000 millimeters from the camera. Error increases quadratically as distance from the camera increases.
Depth is estimated using data from previous frames and the current frame. As the user moves their device through the environment, 3D depth data is collected and cached, improving the quality of subsequent depth images and reducing the error introduced by camera distance.
If an up-to-date depth image isn't ready for the current frame, the most recent depth image available from an earlier frame will be returned instead. This is expected to occur only on compute-constrained devices. An up-to-date depth image should typically become available again within a few frames. Compare the Image.getTimestamp() depth image timestamp to the getTimestamp() frame timestamp to determine which camera frame the depth image corresponds to.

The image must be released via Image.close() once it is no longer needed.
Returns
- The depth image corresponding to the frame.
Throws

| Exception | Condition |
|---|---|
| NotYetAvailableException | if the number of observed frames is not yet sufficient for depth estimation, or depth estimation was not possible due to poor lighting, camera occlusion, or insufficient motion. |
| NotTrackingException | if the Session is not in the TrackingState.TRACKING state, which is required to acquire depth images. |
| IllegalStateException | if a supported depth mode was not enabled in the Session configuration. |
| ResourceExhaustedException | if the caller app has exceeded the maximum number of depth images that it can hold without releasing. |
| DeadlineExceededException | if the method is called on a frame that is not the current frame. |
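As described above, each depth pixel is a 16-bit big-endian value whose low 13 bits carry the distance in millimeters. A minimal sketch of decoding one pixel from the raw plane bytes (plain Java, no ARCore dependency; the helper name is illustrative):

```java
class DepthPixel {
    // Decode one 16-bit big-endian depth sample: high byte first,
    // then mask to the low 13 bits that carry millimeters (the top
    // 3 bits are documented as always 000).
    static int decodeMillimeters(byte hi, byte lo) {
        int raw = ((hi & 0xFF) << 8) | (lo & 0xFF);
        return raw & 0x1FFF;
    }

    public static void main(String[] args) {
        // 0x0064 big-endian = 100 mm
        System.out.println(decodeMillimeters((byte) 0x00, (byte) 0x64));
        // Maximum expressible depth: 0x1FFF = 8191 mm
        System.out.println(decodeMillimeters((byte) 0x1F, (byte) 0xFF));
    }
}
```

In practice the two bytes would come from the plane-0 buffer of the Image returned by acquireDepthImage().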
public PointCloud acquirePointCloud ()
Acquires the current set of estimated 3D points attached to real-world geometry. PointCloud.release() must be called after the application is done using the PointCloud object.

Note: This information is for visualization and debugging purposes only. Its characteristics and format are subject to change in subsequent versions of the API.

Throws

| Exception | Condition |
|---|---|
| ResourceExhaustedException | if too many point clouds are acquired without being released. |
public Pose getAndroidSensorPose ()
Returns the pose of the Android Sensor Coordinate System in the world coordinate space for this frame. The orientation follows the device's "native" orientation (it is not affected by display rotation) with all axes corresponding to those of the Android sensor coordinates.
See also:
- Camera.getPose() for the pose of the physical camera.
- Camera.getDisplayOrientedPose() for the pose of the virtual camera.

The returned pose is only valid while Camera.getTrackingState() returns TrackingState.TRACKING, and otherwise should not be used.
public Camera getCamera ()
Returns the Camera
object for the session. Note that this Camera instance is long-lived
so the same instance is returned regardless of the frame object this method was called on.
public int getCameraTextureName ()
Returns the OpenGL ES camera texture name (id) associated with this frame. This is guaranteed to be one of the texture names previously set via Session.setCameraTextureNames(int[]) or Session.setCameraTextureName(int). Texture names (ids) are returned in a round-robin fashion in sequential frames.
Returns
- the OpenGL ES texture name (id).
public ImageMetadata getImageMetadata ()
Returns the camera metadata for the current camera image, if available. Throws NotYetAvailableException when metadata is not yet available because the sensor data is not yet available.

If the AR session was created for shared camera access, this method will throw IllegalStateException. To retrieve image metadata in shared camera mode, use SharedCamera.setCaptureCallback(CameraCaptureSession.CaptureCallback, Handler), then use getAndroidCameraTimestamp() to correlate the frame to metadata retrieved from CameraCaptureSession.CaptureCallback.

Throws

| Exception | Condition |
|---|---|
| NotYetAvailableException | when metadata is not available because the sensors are not ready. |
public LightEstimate getLightEstimate ()
Returns the current ambient light estimate, if light estimation was enabled.
If lighting estimation is not enabled in the session configuration, the returned LightEstimate will always return LightEstimate.State.NOT_VALID from LightEstimate.getState().
public long getTimestamp ()
Returns the timestamp in nanoseconds when this image was captured. This can be used to detect
dropped frames or measure the camera frame rate. The time base of this value is specifically
not defined, but it is likely similar to System.nanoTime()
.
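Because consecutive frame timestamps share the same (unspecified) time base, their difference gives the inter-frame interval in nanoseconds. A small sketch of estimating the camera frame rate from two timestamps (plain Java; the values are illustrative):

```java
class FrameRate {
    // Instantaneous frames-per-second from two consecutive
    // Frame.getTimestamp() values, which are in nanoseconds.
    static double fps(long previousNanos, long currentNanos) {
        return 1e9 / (currentNanos - previousNanos);
    }

    public static void main(String[] args) {
        long t0 = 1_000_000_000L;
        long t1 = t0 + 33_333_333L;  // ~33.3 ms later
        System.out.printf("~%.1f fps%n", fps(t0, t1));  // ~30.0 fps
    }
}
```

In a renderer, the previous timestamp would be cached across onDrawFrame calls and compared to each new frame's getTimestamp().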
public Collection<Anchor> getUpdatedAnchors ()
Returns the anchors that were changed by the Session.update()
that returned this Frame.
public <T extends Trackable> Collection<T> getUpdatedTrackables (Class<T> filterType)

Returns the trackables of a particular type that were changed by the Session.update() that returned this Frame. filterType may be Plane.class or Point.class, or Trackable.class to retrieve all changed trackables.

Parameters

| Parameter | Description |
|---|---|
| filterType | the type of trackables to return, e.g. Plane.class, Point.class, or Trackable.class. |
public boolean hasDisplayGeometryChanged ()
Checks if the display rotation or viewport geometry changed since the previous Frame. The application should re-query Camera.getProjectionMatrix(float[], int, float, float) and transformCoordinates2d(Coordinates2d, float[], Coordinates2d, float[]) whenever this returns true.
public List<HitResult> hitTest (MotionEvent motionEvent)
Similar to hitTest(float, float), but takes values from an Android MotionEvent. It is assumed that the MotionEvent is received from the same view that was used as the size for Session.setDisplayGeometry(int, int, int).

Note: this method does not consider the action of the MotionEvent. The caller must check for the appropriate action, if needed, before calling this method.

Note: When using Session.Feature.FRONT_CAMERA, the returned hit result list will always be empty, as the camera is not TrackingState.TRACKING. Hit testing against tracked faces is not currently supported.

Parameters

| Parameter | Description |
|---|---|
| motionEvent | an event containing the x,y coordinates to hit test. |
public List<HitResult> hitTest (float[] origin3, int originOffset, float[] direction3, int directionOffset)
Similar to hitTest(float, float), but takes an arbitrary ray in world space coordinates instead of a screen-space point.

Note: When using Session.Feature.FRONT_CAMERA, the returned hit result list will always be empty, as the camera is not TrackingState.TRACKING. Hit testing against tracked faces is not currently supported.

Parameters

| Parameter | Description |
|---|---|
| origin3 | an array of 3 floats containing the ray origin in world space coordinates. |
| originOffset | the offset into the origin3 array. |
| direction3 | an array of 3 floats containing the ray direction in world space coordinates. Does not have to be normalized. |
| directionOffset | the offset into the direction3 array. |

Returns

- an ordered list of intersections with scene geometry, nearest hit first.
public List<HitResult> hitTest (float xPx, float yPx)
Performs a ray cast from the user's device in the direction of the given location in the camera view. Intersections with detected scene geometry are returned, sorted by distance from the device; the nearest intersection is returned first.
Note: Significant geometric leeway is given when returning hit results. For example, a plane hit may be generated if the ray came close to, but did not actually hit within, the plane extents or plane bounds (Plane.isPoseInExtents(Pose) and Plane.isPoseInPolygon(Pose) can be used to determine these cases). A point (point cloud) hit is generated when a point is roughly within one finger-width of the provided screen coordinates.

Note: When using Session.Feature.FRONT_CAMERA, the returned hit result list will always be empty, as the camera is not TrackingState.TRACKING. Hit testing against tracked faces is not currently supported.

Parameters

| Parameter | Description |
|---|---|
| xPx | x coordinate in pixels. |
| yPx | y coordinate in pixels. |

Returns

- an ordered list of intersections with scene geometry, nearest hit first.
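A common pattern is to hit-test the center of the view, e.g. to place an object where the device is pointing. A sketch of computing the pixel coordinates to pass in (plain Java; the view dimensions and the commented hit-test call site are assumptions):

```java
class ScreenCenter {
    // Pixel coordinates of the view center, suitable for passing
    // to Frame.hitTest(float xPx, float yPx).
    static float[] centerPx(int viewWidthPx, int viewHeightPx) {
        return new float[] { viewWidthPx / 2f, viewHeightPx / 2f };
    }

    public static void main(String[] args) {
        float[] c = centerPx(1080, 1920);
        System.out.println(c[0] + ", " + c[1]);  // 540.0, 960.0
        // Then (hypothetical call site, requires an ARCore Frame):
        // List<HitResult> hits = frame.hitTest(c[0], c[1]);
        // The nearest hit, if any, is hits.get(0).
    }
}
```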
public List<HitResult> hitTestInstantPlacement (float xPx, float yPx, float approximateDistanceMeters)
Performs a ray cast that can return a result before ARCore establishes full tracking.
The pose and apparent scale of attached objects depend on the InstantPlacementPoint tracking method and the provided approximateDistanceMeters. A discussion of the different tracking methods and the effects of apparent object scale is given in InstantPlacementPoint.

This function will succeed only if Config.InstantPlacementMode is Config.InstantPlacementMode.LOCAL_Y_UP in the ARCore session configuration, the ARCore session tracking state is TrackingState.TRACKING, and there are sufficient feature points to track the point in screen space.

Parameters

| Parameter | Description |
|---|---|
| xPx | x screen coordinate in pixels. |
| yPx | y screen coordinate in pixels. |
| approximateDistanceMeters | the distance at which to create an InstantPlacementPoint. This is only used while the tracking method for the returned point is InstantPlacementPoint.TrackingMethod.SCREENSPACE_WITH_APPROXIMATE_DISTANCE. |

Returns

- if successful, a list containing a single HitResult; otherwise an empty list. The HitResult will have a trackable of type InstantPlacementPoint.
public void transformCoordinates2d (Coordinates2d inputCoordinates, float[] inputVertices2d, Coordinates2d outputCoordinates, float[] outputVertices2d)
Transforms a list of 2D coordinates from one 2D coordinate system to another 2D coordinate system.
Same as transformCoordinates2d(Coordinates2d, FloatBuffer, Coordinates2d, FloatBuffer), but taking float arrays.

Parameters

| Parameter | Description |
|---|---|
| inputCoordinates | The coordinate system used by inputVertices2d. |
| inputVertices2d | Input 2D vertices to transform. |
| outputCoordinates | The coordinate system to convert to. |
| outputVertices2d | Array to put the transformed 2D vertices into. |
public void transformCoordinates2d (Coordinates2d inputCoordinates, FloatBuffer inputVertices2d, Coordinates2d outputCoordinates, FloatBuffer outputVertices2d)
Transforms a list of 2D coordinates from one 2D coordinate system to another 2D coordinate system.
For Android view coordinates (VIEW, VIEW_NORMALIZED), the view information is taken from the most recent call to Session#setDisplayGeometry(width,height,rotation).
Must be called on the most recently obtained Frame object. If this function is called on an older frame, a log message will be printed and outputVertices2d will remain unchanged.
Some examples of useful conversions:
- To transform from [0,1] range to screen-quad coordinates for rendering: VIEW_NORMALIZED -> TEXTURE_NORMALIZED
- To transform from [-1,1] range to screen-quad coordinates for rendering: OPENGL_DEVICE_NORMALIZED -> TEXTURE_NORMALIZED
- To transform a point found by a computer vision algorithm in a CPU image into a point on the screen that can be used to place an Android View (e.g. Button) at that location: IMAGE_PIXELS -> VIEW
- To transform a point found by a computer vision algorithm in a CPU image into a point to be rendered using GL in clip-space ([-1,1] range): IMAGE_PIXELS -> OPENGL_DEVICE_NORMALIZED
Read-only array-backed buffers are not supported by inputVertices2d for performance reasons.
If inputCoordinates is same as outputCoordinates, the input vertices will be copied to the output vertices unmodified.
Parameters

| Parameter | Description |
|---|---|
| inputCoordinates | The coordinate system used by inputVertices2d. |
| inputVertices2d | Input 2D vertices to transform. |
| outputCoordinates | The coordinate system to convert to. |
| outputVertices2d | Buffer to put the transformed 2D vertices into. |

Throws

| Exception | Condition |
|---|---|
| IllegalArgumentException | if the buffer sizes don't match, or the input/output buffers have odd size. |
| ReadOnlyBufferException | if the buffer is a read-only array-backed buffer. |
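The [0,1] and [-1,1] ranges mentioned in the examples above differ only by a linear remapping. A conceptual sketch of that remapping in plain Java (illustration only: it ignores display rotation and axis-direction conventions, which is exactly why the real transformCoordinates2d should be used in production):

```java
class NormalizedRange {
    // Remap a coordinate pair from the [0,1] range (as in
    // VIEW_NORMALIZED) to the [-1,1] range (as in
    // OPENGL_DEVICE_NORMALIZED), ignoring rotation and axis flips.
    static float[] zeroOneToMinusOneOne(float u, float v) {
        return new float[] { 2f * u - 1f, 2f * v - 1f };
    }

    public static void main(String[] args) {
        float[] center = zeroOneToMinusOneOne(0.5f, 0.5f);
        System.out.println(center[0] + ", " + center[1]);  // 0.0, 0.0
    }
}
```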
public void transformDisplayUvCoords (FloatBuffer uvCoords, FloatBuffer outUvCoords)
This method is deprecated. Replaced by frame.transformCoordinates2d(Coordinates2d.VIEW_NORMALIZED, .., Coordinates2d.TEXTURE_NORMALIZED, ..).
Transform the given texture coordinates to correctly show the background image. This will
account for the display rotation, and any additional required adjustment. For performance, this
function should be called only if hasDisplayGeometryChanged()
returns true.
Usage Notes / Bugs:
- Both input and output buffers must be direct and native byte order.
- Position and limit of buffers is ignored.
- Capacity of both buffers must be identical.
- Capacity of both buffers must be a multiple of 2.
Note: both buffer positions will remain unmodified after this call.
Parameters

| Parameter | Description |
|---|---|
| uvCoords | The uv coordinates to transform. |
| outUvCoords | The buffer to hold the transformed uv coordinates. Must have enough remaining elements to fit the input uvCoords. |
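The buffer requirements above (direct, native byte order, even capacity) also apply to the transformCoordinates2d replacement. A sketch of allocating a compliant buffer (plain Java; the helper name is illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

class UvBuffers {
    // Allocate a direct, native-order FloatBuffer holding `pairs`
    // (u, v) coordinate pairs, i.e. 2 * pairs floats.
    static FloatBuffer allocateUv(int pairs) {
        return ByteBuffer.allocateDirect(pairs * 2 * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
    }

    public static void main(String[] args) {
        FloatBuffer quadUv = allocateUv(4);  // 4 corners of a screen quad
        System.out.println(quadUv.isDirect() + " " + quadUv.capacity());
        // prints: true 8
    }
}
```

Buffers built this way satisfy the direct/native-order constraint; heap buffers from FloatBuffer.allocate or FloatBuffer.wrap do not.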