Surface Capabilities

Your apps can appear on a variety of surfaces, such as mobile devices that support both audio and display experiences, or Google Home devices that support audio-only experiences.

To help you design and build conversations that work well on all surfaces, use surface capabilities to control and scope your conversations appropriately.

App surface capabilities

App surface capabilities let you control whether or not users can invoke your app, based on the surface they are using. If users try to invoke your app on an unsupported surface, they receive an error message telling them their device is unsupported.

You define your app's surface support in your Actions on Google developer project.

Your actions can appear on a variety of surfaces that the Assistant supports, such as phones (Android and iOS) and Google Home.

Runtime surface capabilities

You can tailor the user experience with runtime surface capabilities in three main ways:

  • Response branching - Present different responses to users but have the same structure and flow for your conversation across different surfaces. For example, a weather app might show a card with an image on a phone and play an audio file on Google Home, but the conversational flow is the same across surfaces.

  • Conversation branching - Present users with a completely different conversation on each surface. For example, if you are building a food-ordering app, you might provide a re-ordering flow on Google Home but a full cart-assembly flow on mobile phones. To do conversation branching, scope intent triggering in Dialogflow to certain surface capabilities with Dialogflow contexts; the Dialogflow intents are not triggered unless a specific surface capability is satisfied.

  • Multi-surface conversations - Present users with a conversation on one surface that transitions to another surface mid-conversation. For example, if a user invokes your app with images on an audio-only surface like Google Home, you can build your app to look for another surface with visual capabilities and move the conversation there if possible.

Response branching

Every time your fulfillment receives a request from the Google Assistant, you can query the capabilities of the surface that the user is conversing with (for example, Google Home or an Android phone):

Node.js

The client library provides the hasSurfaceCapability function to check capabilities after an intent is triggered.

// Check whether the current surface can display visuals or play audio.
const hasScreen =
    app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT);
const hasAudio =
    app.hasSurfaceCapability(app.SurfaceCapabilities.AUDIO_OUTPUT);
JSON

To do response branching, check the surface.capabilities field that you receive in the request and present the appropriate response.

"surface": {
    "capabilities": [
        {
            "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
            "name": "actions.capability.SCREEN_OUTPUT"
        }
    ]
}
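
Putting the two together, here's a minimal response-branching sketch using the same client library; the forecast copy, card text, and image URL are placeholder values for a hypothetical weather app:

const hasScreen =
    app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT);
if (hasScreen) {
  // Surfaces with a screen get a rich response that includes a card.
  app.ask(app.buildRichResponse()
      .addSimpleResponse('Here is today\'s forecast.')
      .addBasicCard(app.buildBasicCard('Sunny, with a high of 75.')
          .setImage('https://example.com/sunny.png', 'Sunny skies')));
} else {
  // Audio-only surfaces get the same conversational turn as speech.
  app.ask('Today will be sunny, with a high of 75 degrees.');
}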

Conversation branching

You can set up Dialogflow intents to only trigger on certain capabilities with pre-defined Dialogflow contexts. Every time an intent gets matched, Dialogflow automatically generates contexts from the set of surface capabilities the device has available. You can specify one or more of these contexts as "input contexts" for your intents. This allows you to gate intent triggering based on modality.

For instance, if you only want an intent to trigger on devices with screen output, you can set an input context on the intent to be actions_capability_screen_output. The following contexts are available:

  • actions_capability_audio_output - The device has a speaker
  • actions_capability_screen_output - The device has an output display screen

For example, an intent that specifies actions_capability_screen_output as an input context will only trigger on surfaces with screens.
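
As a sketch, the gated intents can then map to separate handlers in your webhook. The intent names below are hypothetical; each would list the corresponding actions_capability_* context as an input context in Dialogflow:

const actionMap = new Map();
// Hypothetical intent gated on actions_capability_screen_output in
// Dialogflow, so it only triggers on surfaces with screens.
actionMap.set('order.assemble_cart', (app) => {
  app.ask('Let\'s build your order. What would you like first?');
});
// Hypothetical intent for the quick, audio-first re-ordering flow.
actionMap.set('order.reorder', (app) => {
  app.ask('Want me to repeat your last order?');
});
app.handleRequest(actionMap);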

Multi-surface conversations

At any point during your app's flow, you can check if the user has any other surfaces with a specific capability. If another surface with the requested capability is available, you can then transfer the current conversation over to that new surface.

The flow for a surface transfer works as follows:

  1. Check whether the user has an available surface

    In the webhook handler, you can query whether the user has a surface available with a specific capability. Note that this surface must be tied to the same Google account as the source surface.

    Node.js

    The hasAvailableSurfaceCapabilities function will return true if the user has a surface with all of the requested capabilities.

    const screenAvailable = app.hasAvailableSurfaceCapabilities(app.SurfaceCapabilities.SCREEN_OUTPUT);
        
    JSON

    Check the availableSurfaces field for the needed capabilities.

    {
      "surface": {
        "capabilities": [
          { "name": "actions.capability.AUDIO_OUTPUT" }  // current surface is eyes-free
        ]
      },
      "availableSurfaces": [{  // user has a surface with a screen, e.g. a phone
        "capabilities": [
          { "name": "actions.capability.SCREEN_OUTPUT" },
          { "name": "actions.capability.AUDIO_OUTPUT" }
        ]
      }],
      ...
    }
        
  2. Request to transfer the user to the new surface

    If there is an available surface with the needed capabilities, your app needs to ask the user whether they want to transfer the conversation.

    Node.js

    If hasAvailableSurfaceCapabilities returned true, call the askForNewSurface function, passing in the reason for the transfer (as context), the title of the notification to appear on the new surface, and the requested capabilities.

    // context explains the transfer to the user; notif is the title of the
    // notification shown on the new surface.
    const context = 'Sure, I have some sample images for you.';
    const notif = 'Sample Images';
    if (screenAvailable) {
      app.askForNewSurface(context, notif, [app.SurfaceCapabilities.SCREEN_OUTPUT]);
    } else {
      app.tell('Sorry, you need a screen to see pictures.');
    }
        
    JSON

    To invoke the NEW_SURFACE intent, your webhook response should look like the following:

    {
      "conversationToken": "null",
      "expectUserResponse": true,
      "expectedInputs": [
        {
          "inputPrompt": {
            "richInitialPrompt": {
              "items": [
                {
                  "simpleResponse": {
                    "textToSpeech": "PLACEHOLDER_FOR_NEW_SURFACE"
                  }
                }
              ]
            }
          },
          "possibleIntents": [
            {
              "intent": "actions.intent.NEW_SURFACE",
                "inputValueData": {
                  "@type": "type.googleapis.com/google.actions.v2.NewSurfaceValueSpec",
                  "context": "Sure, I have some sample images for you.",
                  "notificationTitle": "Sample Images",
                  "capabilities": [
                    "actions.capability.SCREEN_OUTPUT"
                  ]
                }
            }
          ]
        }
      ]
    }
        
  3. Handle the user's response

    Based on the user's response to your request, your app will either facilitate the handoff or return control of the conversation to the original surface. Either way, the next request to your endpoint will contain the actions.intent.NEW_SURFACE intent, so you should build an intent that triggers on that event with a corresponding handler in your webhook. In the handler code, you should check whether or not the transfer was successful.

    Node.js

    Use the isNewSurface method to determine if the user accepted the request to move and is currently on the new surface.

    const actionMap = new Map();
    actionMap.set('new_surface_intent', function (app) {
      if (app.isNewSurface()) {
        // showPicture and pictureType are app-specific and defined elsewhere.
        showPicture(app, pictureType);
      } else {
        app.tell('Ok, I understand. You don\'t want to see pictures. Bye');
      }
    });
    app.handleRequest(actionMap);
        
    JSON

    Check the status field of the NEW_SURFACE argument to determine whether the surface transfer was successful.

    {
      "user": {
        "userId": "1234",
        "locale": "en-US"
      },
      "conversation": {
        "conversationId": "1234",
        "type": "ACTIVE",
        "conversationToken": ""
      },
      "inputs": [
        {
          "intent": "actions.intent.NEW_SURFACE",
          "rawInputs": [
            {
              "inputType": "VOICE",
              "query": "[request notification]"
            }
          ],
          "arguments": [
            {
              "name": "NEW_SURFACE",
              "extension": {
                "@type": "type.googleapis.com/google.actions.v2.NewSurfaceValue",
                "status": "OK"
              }
            }
          ]
        }
      ],
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          }
        ]
      }
    }
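
Tying the three steps together, here's a hypothetical end-to-end sketch, assuming the v1 actions-on-google Node.js client library with Dialogflow fulfillment served through Express; the show_picture action name is a placeholder that must match an intent in your Dialogflow agent:

const express = require('express');
const bodyParser = require('body-parser');
const { DialogflowApp } = require('actions-on-google');

const server = express();
server.use(bodyParser.json());

server.post('/webhook', (request, response) => {
  const app = new DialogflowApp({ request, response });

  // Steps 1 and 2: look for a surface with a screen and ask to transfer.
  const requestTransfer = (app) => {
    if (app.hasAvailableSurfaceCapabilities(app.SurfaceCapabilities.SCREEN_OUTPUT)) {
      app.askForNewSurface('Sure, I have some sample images for you.',
          'Sample Images', [app.SurfaceCapabilities.SCREEN_OUTPUT]);
    } else {
      app.tell('Sorry, you need a screen to see pictures.');
    }
  };

  // Step 3: handle the user's answer, possibly on the new surface.
  const handleTransfer = (app) => {
    if (app.isNewSurface()) {
      app.ask('Here are your sample images.');  // placeholder visual response
    } else {
      app.tell('Ok, I understand. You don\'t want to see pictures. Bye');
    }
  };

  const actionMap = new Map();
  actionMap.set('show_picture', requestTransfer);
  actionMap.set('new_surface_intent', handleTransfer);
  app.handleRequest(actionMap);
});

server.listen(8080);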