Google VR NDK Rendering API

GVR applications render frames to framebuffers provided by the SDK. Each frame contains one or more framebuffers. The number of framebuffers and their properties, such as size and multisampling, are configured when initializing the swap chain. The application renders the 3D scene twice (once for each eye, unless multiview is enabled) using a standard perspective projection. When rendering is done, the application submits the frame to the SDK for composition and lens distortion.
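
A minimal sketch of this per-frame flow (assuming gvr_api, swap_chain, viewport_list, and head_matrix are set up as described in the rest of this guide):

gvr::Frame frame = swap_chain.AcquireFrame();
frame.BindBuffer(0);  // Bind the framebuffer at index 0.
// ... issue GL draw calls for the left and right eyes here ...
frame.Unbind();
// Hand the frame to the SDK for lens distortion and composition.
frame.Submit(viewport_list, head_matrix);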

The BufferViewportList specifies how the contents of the framebuffers drawn by the application should be composited during the lens distortion correction step. Each BufferViewport describes how a region of a framebuffer should be shown in one eye. This allows applications to render both eyes to a single buffer or each eye to a separate buffer. The viewport FOV tells the SDK where the viewport contents should be placed in the user's field of view. The combination of buffer content and the pair of viewports (one for each eye) that reference it is called a layer.

The default viewport list contains two viewports, one for each eye, both referencing the framebuffer at index zero. The left eye is in the left half of the texture and the right eye in the right half. The field of view of each viewport corresponds to the physical field of view of the current viewer.
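
As a sketch, the default list can be created and inspected like this (the UV values in the comment illustrate the default layout just described):

gvr::BufferViewportList viewport_list =
    gvr_api->CreateEmptyBufferViewportList();
viewport_list.SetToRecommendedBufferViewports();

gvr::BufferViewport viewport = gvr_api->CreateBufferViewport();
viewport_list.GetBufferViewport(GVR_LEFT_EYE, &viewport);
// For the default layout, viewport.GetSourceUv() covers the left half of
// the buffer at index 0, i.e. {0.0f, 0.5f, 0.0f, 1.0f}.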

Using multiple buffer viewports

Multiple buffer viewports let the app mark scene elements that are head-locked (and therefore should not undergo reprojection), that update at a different frame rate, or that are overlaid onto the scene. In this example, we will render a head-locked reticle that does not judder when the user turns their head.

When you create the SwapChain, create a BufferSpec for each buffer; note that several viewports can share one buffer.

const int kSceneBufferIndex = 0;
const int kReticleBufferIndex = 1;
std::vector<gvr::BufferSpec> specs;

// Create a BufferSpec for the scene.
specs.push_back(gvr_api->CreateBufferSpec());
specs[0].SetColorFormat(GVR_COLOR_FORMAT_RGBA_8888);
specs[0].SetDepthStencilFormat(GVR_DEPTH_STENCIL_FORMAT_DEPTH_16);
specs[0].SetSize(render_size_);
specs[0].SetSamples(2);

// Create a BufferSpec for the reticle.
specs.push_back(gvr_api->CreateBufferSpec());
specs[1].SetSize(reticle_render_size_);
specs[1].SetColorFormat(GVR_COLOR_FORMAT_RGBA_8888);
specs[1].SetDepthStencilFormat(GVR_DEPTH_STENCIL_FORMAT_NONE);
specs[1].SetSamples(1);
swap_chain = gvr_api->CreateSwapChain(specs);

For each buffer, create a BufferViewport for each eye. Set each BufferViewport's source buffer index to the corresponding index in the list of buffer specifications passed when creating the swap chain. Layers are drawn in list order: viewports at lower indices are drawn before viewports at higher indices.

// Create the BufferViewports for each layer.
// The recommended BufferViewportList has two viewports, one for each eye.
// We add extra viewports for the reticle layer.
viewport_list.SetToRecommendedBufferViewports();
gvr::BufferViewport reticle_viewport = gvr_api->CreateBufferViewport();
reticle_viewport.SetSourceBufferIndex(kReticleBufferIndex);
reticle_viewport.SetSourceFov({2.5f, 2.5f, 2.5f, 2.5f});
// This will disable adjusting the viewport's position to compensate for the
// rotation that happens between when the frame rendering starts and when the
// frame is finally drawn to the screen. The reticle is head-locked, so its
// position is always correct. If we did not disable reprojection, it would
// jump around as the user moved their head.
reticle_viewport.SetReprojection(GVR_REPROJECTION_NONE);

reticle_viewport.SetSourceUv({0.f, 0.5f, 0.f, 1.f});
reticle_viewport.SetTargetEye(GVR_LEFT_EYE);
viewport_list.SetBufferViewport(2, reticle_viewport);
reticle_viewport.SetSourceUv({0.5f, 1.f, 0.f, 1.f});
reticle_viewport.SetTargetEye(GVR_RIGHT_EYE);
viewport_list.SetBufferViewport(3, reticle_viewport);

When the app acquires a frame from the swap chain, it should bind each of the frame's buffers in turn, render the corresponding layer into it, and then submit the frame for distortion rendering.

gvr::Frame frame = swap_chain.AcquireFrame();

// Draw the scene.
frame.BindBuffer(kSceneBufferIndex);
viewport_list.GetBufferViewport(GVR_LEFT_EYE, &scratch_viewport);
DrawEye(scratch_viewport.GetSourceUv(), left_eye_matrix);
viewport_list.GetBufferViewport(GVR_RIGHT_EYE, &scratch_viewport);
DrawEye(scratch_viewport.GetSourceUv(), right_eye_matrix);

// Draw the reticle. Its position will not depend on the head pose, so we
// need only the eye-from-head matrices.
frame.BindBuffer(kReticleBufferIndex);
viewport_list.GetBufferViewport(GVR_LEFT_EYE + 2, &scratch_viewport);
DrawReticle(scratch_viewport.GetSourceUv(),
            gvr_api->GetEyeFromHeadMatrix(GVR_LEFT_EYE));
viewport_list.GetBufferViewport(GVR_RIGHT_EYE + 2, &scratch_viewport);
DrawReticle(scratch_viewport.GetSourceUv(),
            gvr_api->GetEyeFromHeadMatrix(GVR_RIGHT_EYE));

frame.Unbind();

// Submit the frame for distortion rendering.
frame.Submit(viewport_list, head_matrix);

Using video viewports

Video viewports allow the app to feed video directly to the asynchronous reprojection system. The video is rendered simultaneously with lens distortion, allowing smooth, 60 FPS video playback regardless of the application's frame rate. Additionally, the application's OpenGL context does not need to be marked as protected (and in fact should not be) in order to play DRM-protected video. This functionality is only available from the Android Java API.

To use this feature, the app performs additional initialization when the Activity is created and receives a Surface from the SDK into which it should output media content. On each frame, the app modifies the viewport list and specifies a transformation matrix that determines the position of the video viewport in eye space.

If the device does not support asynchronous reprojection, the app must draw the video directly into its own framebuffer, and playback of DRM-protected content is not possible.
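
On the NDK side, the fallback decision can be sketched as follows, assuming async reprojection was requested during setup and that the C++ wrapper's GetAsyncReprojectionEnabled() is available:

if (!gvr_api->GetAsyncReprojectionEnabled()) {
  // Async reprojection is unavailable: decode video into a regular GL
  // texture and draw it into the scene buffer like any other quad.
  // DRM-protected playback is not possible on this path.
}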

Enable the video Surface when you configure the GvrLayout object. You must supply a listener with a Handler to receive callbacks when the Surface is available and valid to use. Perform the setup in your Activity.onCreate() method as follows:

// Create an ExternalSurfaceListener to receive callbacks about the async
// reprojection Surface.
GvrLayout.ExternalSurfaceListener videoSurfaceListener =
    new GvrLayout.ExternalSurfaceListener() {
      @Override
      public void onSurfaceAvailable(Surface surface) {
        // Called when the video surface is available and valid.
        videoPlayer.setSurface(surface);
      }

      @Override
      public void onFrameAvailable() {
        // Called whenever there is a new frame available. Handle any
        // render-specific needs when frames are produced.
      }
    };

// Enable the async reprojection video surface.
// Pass isProtectedContext as true if the video is DRM-protected and
// requires a protected surface.
// Note: this must be set before enabling async reprojection.
gvrLayout.enableAsyncReprojectionVideoSurface(
    videoSurfaceListener, new Handler(Looper.getMainLooper()), isProtectedContext);

// Enable async reprojection for low-latency rendering on supporting
// devices. Note that this must be set prior to calling initialize_gl()
// on the native gvr_context.
gvrLayout.setAsyncReprojectionEnabled(true);

// Set up your swap chain as shown in the previous sections.

To render the video in the scene, the app must modify the viewport list to supply an additional pair of viewports (one for each eye). These viewports display the video content and can be positioned in the world by calling their setTransform() method (see below). To associate a viewport with the video, the app should call setExternalSurfaceId() on the video BufferViewports, passing the value retrieved from GvrLayout.getAsyncReprojectionVideoSurfaceId(). If the external surface ID is set to anything other than GvrApi.EXTERNAL_SURFACE_ID_NONE and the buffer index is set to GvrApi.BUFFER_INDEX_EXTERNAL_SURFACE, the viewport contents are taken from the external surface. If the external surface is not valid, the viewport is not rendered.

For correct occlusion between the video and background elements, the video viewport should come before the color viewport in the list, so that the scene is rendered on top of the video. Before rendering the scene buffer, the application should render a quad at the same position as the video viewport with color writes disabled. This pre-populates the depth buffer with values that make items behind the video invisible.

The matrix needed by BufferViewport.setTransform() transforms a quad that fills the OpenGL clip box (that is, a quad with vertices at (-1, -1, 0), (-1, 1, 0), (1, -1, 0) and (1, 1, 0)) to its correct position in eye space. It is typically computed by multiplying together the eye-from-head matrix obtained from GvrApi.getEyeFromHeadMatrix(), the head-from-world matrix obtained from GvrApi.getHeadSpaceFromStartSpaceRotation(), and the model matrix that places the quad in the correct position in the world. Setting this matrix discards any values previously set with BufferViewport.setSourceFov().

// Class variables.
private BufferViewportList viewportList;
private BufferViewportList recommendedViewportList;
private BufferViewport scratchViewport;
// The source UV coords of the video viewport should be the entire video frame.
private RectF videoUv = new RectF(/*left*/ 0.f, /*top*/ 1.f, /*right*/ 1.f, /*bottom*/ 0.f);

// Some scratch matrices.
private float[] headFromWorld = new float[16];
private float[][] eyeFromWorld = new float[2][16];
private float[][] eyeFromVideo = new float[2][16];
private float[] worldFromVideo = new float[16];

void configureViewports(float[] headFromWorld) {
  // Create the native viewport objects once; reuse them on later frames.
  if (viewportList == null) {
    viewportList = gvrApi.createBufferViewportList();
    recommendedViewportList = gvrApi.createBufferViewportList();
    scratchViewport = gvrApi.createBufferViewport();
  }

  // The screen has 16:9 aspect ratio and is 4 meters from the viewer.
  Matrix.setIdentityM(worldFromVideo, 0);
  Matrix.scaleM(worldFromVideo, 0, 1.6f, 0.9f, 1.0f);
  Matrix.translateM(worldFromVideo, 0, 0.0f, 0.0f, -4.0f);

  float[] eyeFromHead = new float[16];
  for (int eye = 0; eye < 2; ++eye) {
    gvrApi.getEyeFromHeadMatrix(eye, eyeFromHead);
    Matrix.multiplyMM(eyeFromWorld[eye], 0, eyeFromHead, 0, headFromWorld, 0);
    Matrix.multiplyMM(eyeFromVideo[eye], 0, eyeFromWorld[eye], 0, worldFromVideo, 0);
  }

  gvrApi.getRecommendedBufferViewports(recommendedViewportList);
  // Set up the video viewports.
  for (int eye = 0; eye < 2; eye++) {
    recommendedViewportList.get(eye, scratchViewport);
    scratchViewport.setSourceBufferIndex(GvrApi.BUFFER_INDEX_EXTERNAL_SURFACE);
    scratchViewport.setExternalSurfaceId(gvrLayout.getAsyncReprojectionVideoSurfaceId());
    scratchViewport.setTransform(eyeFromVideo[eye]);
    scratchViewport.setSourceUv(videoUv);
    viewportList.set(eye, scratchViewport);
  }
  // Set up the color viewports. These come after the video ones, so that controls
  // or items occluding the video can be displayed.
  for (int eye = 0; eye < 2; eye++) {
    recommendedViewportList.get(eye, scratchViewport);
    viewportList.set(2 + eye, scratchViewport);
  }
}

// headFromWorld is the pose obtained from getHeadSpaceFromStartSpaceRotation().
void drawFrame(float[] headFromWorld) {
  // ... do any preparations ...

  // Because the video viewport transforms are dependent on the head pose,
  // the viewport configuration needs to be updated every frame.
  configureViewports(headFromWorld);
  Frame frame = swapChain.acquireFrame();

  // Draw the app color contents to the buffer. BUFFER_INDEX_COLOR is the
  // app-defined index of the scene buffer in the swap chain.
  frame.bindBuffer(BUFFER_INDEX_COLOR);
  for (int eye = 0; eye < 2; ++eye) {
    viewportList.get(2 + eye, scratchViewport);
    // Pre-populate the depth buffer with the video quad for correct occlusion.
    GLES20.glColorMask(false, false, false, false);
    DrawVideoQuad(scratchViewport, /* params to draw this eye */);
    GLES20.glColorMask(true, true, true, true);
    // Draw the environment in which the video is displayed.
    DrawScene(scratchViewport, /* params to draw this eye */);
  }
  frame.unbind();
  frame.submit(viewportList, headFromWorld);
}

The example above can easily be modified to show stereoscopic video. If the top half of the video contains the left eye image and the bottom half contains the right eye image, it is sufficient to add a call to BufferViewport.setSourceUv() when configuring the video viewports.

// This should normally be a class variable.
RectF uv = new RectF(
    /*left*/ 0.f, /*top*/ eye == BufferViewport.EyeType.LEFT ? 1.f : 0.5f,
    /*right*/ 1.f, /*bottom*/ eye == BufferViewport.EyeType.LEFT ? 0.5f : 0.f);
scratchViewport.setSourceUv(uv);

Pausing and resuming

The video surface is not guaranteed to be valid after gvrLayout.onPause() is called. After resuming, the app should wait for the next onSurfaceAvailable() callback before using the async reprojection video surface again.

Optimize performance with multiview

Daydream-ready devices support a set of OpenGL ES extensions that allow VR applications to render their 3D scenes only once per frame, rather than once per eye. When using multiview rendering, the OpenGL driver renders the views for both eyes after accepting a single sequence of draw calls.

To use multiview, applications should make the changes described in the following sections.

1. Query for multiview support.

Applications should use IsFeatureSupported to determine if the device supports multiview. We do not recommend checking for the OpenGL extension strings directly.

bool has_multiview = gvr_api->IsFeatureSupported(GVR_FEATURE_MULTIVIEW);

2. Create multi-layer buffer specs.

When creating the swap chain, use SetMultiviewLayers to set up two layers in the desired buffer specs. The width of these buffers should be half the non-multiview width.

specs[0].SetMultiviewLayers(2);
specs[0].SetSize({render_size_.width / 2, render_size_.height});

Not all buffer specs in the swap chain need to have the same number of layers.
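
For instance, continuing the reticle example from earlier, only the scene buffer needs to become a multiview buffer (a sketch):

specs[kSceneBufferIndex].SetMultiviewLayers(2);
specs[kSceneBufferIndex].SetSize({render_size_.width / 2, render_size_.height});
// specs[kReticleBufferIndex] is left unchanged: the reticle buffer keeps a
// single layer and is still rendered once per eye.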

3. Set the source layer in each buffer viewport.

When creating the buffer viewports, use SetSourceUv to specify that the entire region be sampled, and use SetSourceLayer to specify the multiview layer for each eye.

for (int eye = 0; eye < 2; ++eye) {
  viewport[eye]->SetSourceUv({ 0, 1, 0, 1 });
  viewport[eye]->SetSourceLayer(eye);
}

4. Change shaders and uniforms.

  • The line #extension GL_OVR_multiview2 : enable should be added near the top of the shader string, after the #version directive.

  • All eye-dependent uniforms should be declared as two-item arrays. Note that this implies changes in the C/C++ code, not just the shader.

  • All eye-dependent uniforms should be indexed using gl_ViewID_OVR.

  • Before declaring any vertex inputs, set a default layout qualifier: layout(num_views=2) in;. A complete vertex shader illustrating these changes is sketched after this list.
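
A minimal multiview vertex shader following these rules might look like the sketch below; the attribute and uniform names are illustrative, not part of the GVR API:

const char* kMultiviewVertexShader = R"glsl(
    #version 300 es
    #extension GL_OVR_multiview2 : enable
    layout(num_views=2) in;

    in vec4 a_Position;
    // Eye-dependent uniforms become two-item arrays, one entry per view.
    uniform mat4 u_ModelViewProjection[2];

    void main() {
      // gl_ViewID_OVR selects the per-eye uniform for the current view.
      gl_Position = u_ModelViewProjection[gl_ViewID_OVR] * a_Position;
    }
)glsl";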

5. Render only once.

The application should be modified so that it makes a single set of draw calls for both eyes.
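
With the buffers, viewports, and shaders above in place, the per-frame rendering collapses to a single pass. A sketch, where mvp_location, mvp_both_eyes (32 floats holding both eyes' matrices, column-major), and DrawWorld() are hypothetical app-side names:

frame.BindBuffer(kSceneBufferIndex);
// Upload both eyes' matrices at once; the shader indexes the uniform
// array with gl_ViewID_OVR.
glUniformMatrix4fv(mvp_location, /*count=*/2, GL_FALSE, mvp_both_eyes);
DrawWorld();  // A single set of draw calls renders both views.
frame.Unbind();
frame.Submit(viewport_list, head_matrix);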

For an example of an application that supports both multiview and non-multiview code paths, see the NDK TreasureHunt demo.