Google VR NDK Overview

This guide, along with the topical guides for rendering, using the controller, and adding audio, explains how to develop an Android VR app using the GVR NDK. Before starting, work through the Getting Started guide to verify that you can build and deploy the NDK code.

Overview of GvrApi and GvrLayout

Place the provided NDK header files (gvr.h, gvr_types.h, gvr_controller.h, gvr_audio.h, etc.) in your C++ include path. You should also make sure that the SDK's .aar files (Android libraries) and .so files (shared libraries) are set up to be linked to your app. If you are using Android Studio or Gradle, use the HelloVR sample app as an example of how to set up your build system.

The GvrApi object is your main entry point into the API. It is accessed through a GvrLayout -- a view that wraps and supports a concrete VR presentation view (usually a SurfaceView). The GvrApi object is responsible for providing rendering parameters and performing distortion, while you are responsible for rendering your scene. The Java GvrApi object wraps a native gvr_context object; the client can either use the Java bindings or interact directly with the wrapped gvr_context object in native code.

To render, the client should:

  1. Create a swap chain, backed by rendering buffers, that allows frame acquisition and submission.

  2. Acquire a frame from the swap chain, using GvrApi to get the necessary frame rendering information, including:

    • The current head pose.
    • The matrices that transform the head pose into the position of each eye.
    • Where in the render target buffer each eye image should be rendered.
  3. Render the image for each eye into the render buffer(s) provided by each frame.

  4. Submit the frame to GvrApi to distort and render to the screen.
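As an illustration of the third point in step 2, each viewport's source UV rectangle (values in [0, 1]) must be mapped onto the render target's pixel dimensions to decide where each eye image lands. The sketch below uses stand-in Rectf/Recti structs (the real types live in gvr_types.h) and a hypothetical UvToPixels() helper, not an actual GVR API:

```cpp
#include <cmath>

// Stand-ins for the GVR rectangle types; illustrative only.
struct Rectf { float left, right, bottom, top; };  // UV coords in [0, 1].
struct Recti { int left, right, bottom, top; };    // Pixel coords.

// Map a viewport's source UV rect onto a render target of known size.
Recti UvToPixels(const Rectf& uv, int target_width, int target_height) {
  return Recti{
      static_cast<int>(std::lround(uv.left * target_width)),
      static_cast<int>(std::lround(uv.right * target_width)),
      static_cast<int>(std::lround(uv.bottom * target_height)),
      static_cast<int>(std::lround(uv.top * target_height))};
}
```

For example, a left-eye viewport covering the left half of a 2560x1440 target, `UvToPixels(Rectf{0.0f, 0.5f, 0.0f, 1.0f}, 2560, 1440)`, yields the pixel rect {0, 1280, 0, 1440}.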

Initializing the GVR API

The code in the HelloVR sample app is a good example of how to interact with the Google VR NDK.

Use of the native GVR API for stereo rendering and head tracking requires a minimal amount of Android Java glue for initialization and setup. Install and configure the GvrLayout in your Activity.onCreate() method. This provides access to the native API, and can be done as follows:

// Create and configure the GvrLayout.
// surfaceView is the app's VR presentation view (e.g. a GLSurfaceView).
GvrLayout gvrLayout = new GvrLayout(this);
gvrLayout.setPresentationView(surfaceView);

// Enable async reprojection for low-latency rendering on supporting
// devices. Note that this must be set prior to calling initialize_gl()
// on the native gvr_context.
if (gvrLayout.setAsyncReprojectionEnabled(true)) {
  AndroidCompat.setSustainedPerformanceMode(this, true);
}

// Install the GvrLayout into the View hierarchy.
setContentView(gvrLayout);

// Set up the necessary VR settings.
AndroidCompat.setVrModeEnabled(this, true);

// Plumb the native gvr_context to the native rendering code.
// (nativeOnCreate is the app's own JNI method; the name is illustrative.)
nativeOnCreate(gvrLayout.getGvrApi().getNativeGvrContext());

Using the native GVR API

In native code, the GvrApi C++ wrapper simplifies interaction with the C API. It can be used to initialize any persistent state used when interacting with the native gvr_context:

// The Java GvrLayout owns the gvr_context.
gvr_context* context = reinterpret_cast<gvr_context*>(jgvr_context);
std::unique_ptr<gvr::GvrApi> gvr_api = gvr::GvrApi::WrapNonOwned(context);

// Initialize GL-related state. This must be called on the thread in which
// the app will submit OpenGL rendering commands.
gvr_api->InitializeGl();

// Create a swap chain with one framebuffer per frame, and a set
// of buffer viewports to be updated each frame.
gvr::SwapChain swap_chain = gvr_api->CreateSwapChain(
    std::vector<gvr::BufferSpec>(1, gvr_api->CreateBufferSpec()));
gvr::BufferViewportList buffer_viewports =
    gvr_api->CreateEmptyBufferViewportList();
gvr::BufferViewport scratch_viewport = gvr_api->CreateBufferViewport();

// The GvrApi, SwapChain, BufferViewportList and scratch BufferViewport
// should typically be stored in some kind of persistent native data
// structure for future use.

Each frame, the app should use the GvrApi interface to obtain the head pose, acquire a frame, render to the frame's buffer(s), and submit the frame for distortion rendering:

// Update the viewports to the latest defaults. This will update the field
// of view if the user changed the viewer from the settings dialog.
buffer_viewports.SetToRecommendedBufferViewports();

// Acquire a frame from the swap chain.
gvr::Frame frame = swap_chain.AcquireFrame();

// Obtain the latest, predicted head pose. Predicting slightly ahead of
// "now" compensates for rendering latency; 50 ms is an illustrative value.
gvr::ClockTimePoint target_time = gvr::GvrApi::GetTimePointNow();
target_time.monotonic_system_time_nanos += 50000000;  // 50 ms.
gvr::Mat4f head_matrix =
    gvr_api->GetHeadSpaceFromStartSpaceRotation(target_time);
gvr::Mat4f left_eye_matrix = MatrixMultiply(
    gvr_api->GetEyeFromHeadMatrix(GVR_LEFT_EYE), head_matrix);
gvr::Mat4f right_eye_matrix = MatrixMultiply(
    gvr_api->GetEyeFromHeadMatrix(GVR_RIGHT_EYE), head_matrix);

// Render the scene for each eye into the frame's buffer.
frame.BindBuffer(0);
buffer_viewports.GetBufferViewport(GVR_LEFT_EYE, &scratch_viewport);
DrawEye(scratch_viewport.GetSourceUv(), left_eye_matrix);
buffer_viewports.GetBufferViewport(GVR_RIGHT_EYE, &scratch_viewport);
DrawEye(scratch_viewport.GetSourceUv(), right_eye_matrix);
frame.Unbind();

// Submit the frame for distortion rendering.
frame.Submit(buffer_viewports, head_matrix);
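Note that MatrixMultiply() is not provided by the GVR NDK; the sample apps define their own helper. A minimal sketch for row-major 4x4 matrices shaped like gvr::Mat4f (replaced here by a stand-in struct so the snippet is self-contained):

```cpp
// Stand-in for gvr::Mat4f, which stores a row-major 4x4 matrix.
struct Mat4f { float m[4][4]; };

// Standard row-major matrix product: result = a * b. With this ordering,
// eye_matrix = eye_from_head * head_matrix composes the two transforms.
Mat4f MatrixMultiply(const Mat4f& a, const Mat4f& b) {
  Mat4f result = {};
  for (int i = 0; i < 4; ++i) {
    for (int j = 0; j < 4; ++j) {
      for (int k = 0; k < 4; ++k) {
        result.m[i][j] += a.m[i][k] * b.m[k][j];
      }
    }
  }
  return result;
}
```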

Activity Interaction

On Android, certain Activity-level events must be handled or forwarded to ensure a correct and consistent VR experience.

Lifecycle Events

The Activity.onPause(), Activity.onResume() and Activity.onDestroy() events should be forwarded at the Java level to GvrLayout:

protected void onPause() {
  super.onPause();
  // Pause head/controller tracking using the native APIs.
  gvrLayout.onPause();
}

protected void onResume() {
  super.onResume();
  // Resume head/controller tracking using the native APIs.
  gvrLayout.onResume();
}

protected void onDestroy() {
  super.onDestroy();
  // Dispose of any resources or native objects.
  // Note that this will also dispose of the native gvr_context instance.
  gvrLayout.shutdown();
}

Key Events

Use the activity's dispatchKeyEvent() method to intercept certain hardware key presses. If the phone's volume up or volume down hardware buttons are pressed while the phone is in a VR viewer, a non-stereo overlay appears on screen. To prevent this overlay from interfering with the user's VR experience, handle such volume events in the Activity.dispatchKeyEvent() method:

@Override
public boolean dispatchKeyEvent(KeyEvent event) {
  // Avoid accidental volume key presses while the phone is in the VR headset.
  if (event.getKeyCode() == KeyEvent.KEYCODE_VOLUME_UP
      || event.getKeyCode() == KeyEvent.KEYCODE_VOLUME_DOWN) {
    return true;
  }
  return super.dispatchKeyEvent(event);
}

Activity Transitions

For GVR applications, transitions between activities are handled in a special way to ensure that head tracking is maintained throughout the transition. In general, VR activities should not directly send intents to begin subsequent activities, and instead should use the SDK methods provided in DaydreamApi to initiate transitions.

For example, to return to the VR homescreen, call:

// DaydreamApi.create() returns null if the device does not support Daydream.
DaydreamApi daydreamApi = DaydreamApi.create(context);
daydreamApi.launchVrHomescreen();
daydreamApi.close();

To launch another VR application, call:

// pendingIntent wraps the intent for the target VR activity.
daydreamApi.launchInVr(pendingIntent);

or use one of the alternative methods for formatting and sending intents.

These methods display a smooth transition while preserving head tracking state across the activity change. They should always be used for any activity transition from a VR activity. You can also use these methods from a 2D UI to initiate a transition to a VR activity and display an appropriate setup flow if necessary.