Surface Capabilities

Your apps can appear on a variety of surfaces, such as mobile devices that support both audio and display experiences, or a Google Home device that supports audio-only experiences.

To design and build conversations that work well on all surfaces, use surface capabilities to control and scope your conversations appropriately.

App surface capabilities

App surface capabilities let you control whether or not users can invoke your app, based on the surface they are using. If users try to invoke your app on an unsupported surface, they receive an error message telling them their device is unsupported.

You define your app's surface support in your Actions on Google developer project.

Your actions can appear on a variety of surfaces that the Assistant supports, such as phones (Android and iOS) and Google Home.

Runtime surface capabilities

You can tailor the user experience with runtime surface capabilities in two main ways:

  • Response branching - Present different responses to users but have the same structure and flow for your conversation across different surfaces. For example, a weather action might show a card with an image on a phone and play an audio file on Google Home, but the conversational flow is the same across surfaces.

  • Conversation branching - Present users with a completely different conversation on each surface. For example, if you are building a food-ordering action, you might provide a re-ordering flow on Google Home but a full cart-assembly flow on mobile phones. To do conversation branching, scope intent triggering in API.AI to certain surface capabilities with API.AI contexts. The API.AI intents are not triggered unless a specific surface capability is satisfied.

Response branching

Every time your fulfillment receives a request from the Google Assistant, you can query the capabilities of the surface (for example, Google Home or an Android phone) that sent the request:

Node.js

The client library provides the hasSurfaceCapability function to check capabilities after an intent is triggered.

// Check which capabilities the surface that sent the request supports.
let hasScreen =
    app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT);
let hasAudio =
    app.hasSurfaceCapability(app.SurfaceCapabilities.AUDIO_OUTPUT);
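
For instance, a minimal sketch of response branching in a full webhook might look like the following. The weather.current action name, the response strings, and the image URL are invented for illustration; only hasSurfaceCapability and the other client library calls come from the actual API.

// A sketch of response branching, assuming an API.AI webhook built with
// the actions-on-google client library. The action name, strings, and
// image URL are illustrative assumptions.
const App = require('actions-on-google').ApiAiApp;

exports.weatherWebhook = (request, response) => {
  const app = new App({request: request, response: response});

  function weatherIntent (app) {
    if (app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT)) {
      // Surfaces with a screen (for example, a phone) can show a card.
      app.ask(app.buildRichResponse()
          .addSimpleResponse('It is sunny and 75 degrees.')
          .addBasicCard(app.buildBasicCard('Sunny, 75°F')
              .setImage('https://example.com/sunny.png', 'Sunny weather')));
    } else {
      // Audio-only surfaces (for example, Google Home) get speech only.
      app.ask('It is sunny and 75 degrees.');
    }
  }

  const actionMap = new Map();
  actionMap.set('weather.current', weatherIntent);
  app.handleRequest(actionMap);
};

Note that the conversational flow is identical on both surfaces; only the presentation of the response changes.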
JSON

To do response branching, check the surface.capabilities field that you receive in the request and present the appropriate response.

"surface": {
    "capabilities": [
        {
            "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
            "name": "actions.capability.SCREEN_OUTPUT"
        }
    ]
}
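
If you parse the request yourself instead of using the client library, the same check amounts to scanning that array. A minimal sketch, assuming requestBody holds the parsed webhook JSON and that the surface object shown above arrives under originalRequest.data (as in an API.AI webhook request):

// A sketch of a raw capability check; requestBody is assumed to be the
// parsed webhook request JSON (for example, req.body in Express).
const surface = requestBody.originalRequest.data.surface;
const capabilities = surface.capabilities.map(c => c.name);

const hasScreen = capabilities.includes('actions.capability.SCREEN_OUTPUT');
const hasAudio = capabilities.includes('actions.capability.AUDIO_OUTPUT');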

Conversation branching

You can set up API.AI intents to trigger only on certain capabilities by using pre-defined API.AI contexts. Every time an intent is matched, API.AI automatically generates contexts from the set of surface capabilities available on the device. You can specify one or more of these contexts as "input contexts" for your intents. This lets you gate intent triggering based on modality.

For instance, if you only want an intent to trigger on devices with screen output, you can set an input context on the intent to be actions_capability_screen_output. The following contexts are available:

  • actions_capability_audio_output - The device has a speaker
  • actions_capability_screen_output - The device has an output display screen

Here's an example of an intent that will only trigger on surfaces with screens:
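
A rough sketch of such an intent in API.AI's exported JSON form follows. The intent name and user phrase are invented, only the relevant fields are shown, and the exact export schema may differ; the key point is the actions_capability_screen_output entry in the input contexts.

{
  "name": "show_map",
  "contexts": [
    "actions_capability_screen_output"
  ],
  "userSays": [
    {
      "data": [
        { "text": "show me a map" }
      ]
    }
  ]
}

With this input context in place, the show_map intent can only be matched when the request comes from a surface that reports actions.capability.SCREEN_OUTPUT.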