About see-through mode
See-through mode is a feature of the Google VR SDKs for Android and for
Unity. It lets you add augmented reality (AR) experiences to your apps.
See-through mode supports scenes that can be characterized as
predominantly virtual scenes or predominantly augmented scenes:
Predominantly virtual scenes. These are scenes where the background
environment is rendered by the app, potentially with some small holes left
to show the real world. Examples of this would be a virtual room with
windows showing the real world, or an existing VR app with a newly added
portal that sees through into the real world.
Predominantly augmented scenes. These are scenes where the background
environment consists predominantly of the unaltered see-through image, and
the app renders virtual objects intended to appear alongside real objects.
Examples of this would be a virtual monitor designed to float above your
real desk, or a furniture arranging app that lets you place virtual chairs
in your real home.
Spatial and temporal offsets with see-through mode
In predominantly virtual scenes, the goal is to render the image based on the
position of the user's eyes at the time the frame is composited onto the screen.
When an app calls gvr_get_head_space_from_start_space_transform
to get the head's position, it passes a timestamp that estimates when the
frame rendered with that head pose will be composited. It then uses
gvr_get_eye_from_head_matrix
to get the eye's position from the headset's position.
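In code, this pattern looks roughly like the following minimal C sketch
against the GVR NDK API. The gvr context pointer and the 50 ms prediction
offset are illustrative assumptions rather than values from this page:

```c
#include "vr/gvr/capi/include/gvr.h"

// Predict a short time ahead to approximate when the frame will be
// composited (50 ms is an illustrative value, not one from this page).
static const int64_t kPredictionTimeNanos = 50000000;  // 50 ms

// Returns the view matrix for one eye (GVR_LEFT_EYE or GVR_RIGHT_EYE),
// assuming `gvr` is a valid gvr_context* owned by the app.
gvr_mat4f GetEyeFromStartMatrix(gvr_context* gvr, int32_t eye) {
  // Estimate when the frame rendered with this pose will be composited.
  gvr_clock_time_point target_time = gvr_get_time_point_now();
  target_time.monotonic_system_time_nanos += kPredictionTimeNanos;

  // Head pose predicted for the compositing time.
  gvr_mat4f head_from_start =
      gvr_get_head_space_from_start_space_transform(gvr, target_time);

  // Fixed offset from the head to this eye.
  gvr_mat4f eye_from_head = gvr_get_eye_from_head_matrix(gvr, eye);

  // Compose the two transforms: eye_from_start = eye_from_head * head_from_start.
  gvr_mat4f eye_from_start;
  for (int i = 0; i < 4; ++i) {
    for (int j = 0; j < 4; ++j) {
      float sum = 0.f;
      for (int k = 0; k < 4; ++k) {
        sum += eye_from_head.m[i][k] * head_from_start.m[k][j];
      }
      eye_from_start.m[i][j] = sum;
    }
  }
  return eye_from_start;
}
```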
Ideally, see-through mode would behave the same, but there are practical
considerations. Each see-through mode image is "rendered" at the position of the
physical tracking camera rather than at the position of the eyes. This means
that each see-through mode image has some spatial offset.
Also, each see-through mode image arrives on screen with some latency. This
means that it is rendered not at the head position from when it is composited,
but at the head position from some time before that. The image's rotation is
reprojected to correct for this, but any translational differences cannot be
reprojected. This means that each see-through mode image has some temporal
offset.
These two offsets create noticeable visual artifacts. When a user turns their
head, the real world as viewed through see-through mode moves too much,
because the viewpoint moves more than it normally would. Real objects appear
closer than they actually are.
How do we correct for this?
In predominantly virtual scenes, no adjustments are needed. Your app should
continue to render virtual objects using existing best practices.
Although see-through mode images covering small areas of the field of view will
not behave quite right, this will not cause serious problems, because users
should be anchoring themselves primarily on the virtual environment.
In predominantly augmented scenes, however, a user's eyes will anchor primarily
on the see-through mode image and adjust to the spatial offset and temporal
offset. Even though the error is in the see-through mode image, the virtual
objects will appear to swim from the perspective of the user. It's a better
experience to have the virtual and real objects align in a slightly incorrect
position than for only the virtual objects to be rendered at their correct
physical positions. This means that the spatial offset and temporal offset need
to be added to virtual objects in predominantly augmented scenes.
When gvr_beta_see_through_config_set_scene_type
sets the scene type to GVR_BETA_SEE_THROUGH_SCENE_TYPE_AUGMENTED_SCENE,
gvr_get_head_space_from_start_space_transform
automatically uses an earlier timestamp to align with the see-through mode
images, and gvr_get_eye_from_head_matrix returns a transformation to the
position of the camera rather than the position of the eyes.
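As a rough sketch, an app might opt a predominantly augmented scene into this
behavior as shown below. This assumes the configuration helpers declared in
gvr_beta.h; the create, apply, and destroy calls and the RAW_IMAGE camera mode
are taken from the beta header and may differ across SDK releases:

```c
#include "vr/gvr/capi/include/gvr_beta.h"

// Enables see-through for a predominantly augmented scene, assuming `gvr`
// is a valid gvr_context* owned by the app.
void EnableAugmentedSeeThrough(gvr_context* gvr) {
  gvr_beta_see_through_config* config =
      gvr_beta_see_through_config_create(gvr);

  // Show the camera feed as the background (enum name per the beta header).
  gvr_beta_see_through_config_set_camera_mode(
      config, GVR_BETA_SEE_THROUGH_CAMERA_MODE_RAW_IMAGE);

  // Declare a predominantly augmented scene so that head and eye transforms
  // are adjusted to match the see-through images.
  gvr_beta_see_through_config_set_scene_type(
      config, GVR_BETA_SEE_THROUGH_SCENE_TYPE_AUGMENTED_SCENE);

  gvr_beta_set_see_through_config(gvr, config);
  gvr_beta_see_through_config_destroy(&config);
}
```

With this configuration in place, rendering code can keep using the same
pose-query pattern shown earlier; the returned transforms already include the
spatial and temporal offsets.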