Interactive Canvas is a framework built on Google Assistant that allows developers to add visual, immersive experiences to Conversational Actions. This visual experience is an interactive web app that Assistant sends as a response to the user in conversation. Unlike rich responses that exist in-line in an Assistant conversation, the Interactive Canvas web app renders as a full-screen web view.
Use Interactive Canvas if you want to do any of the following in your Action:
- Create full-screen visuals
- Create custom animations and transitions
- Do data visualization
- Create custom layouts and GUIs
Interactive Canvas is currently available on the following devices:
- Smart displays
- Android mobile devices
How it works
An Action that uses Interactive Canvas consists of two main components:
- Conversational Action: An Action that uses a conversational interface to fulfill user requests. You can use either Actions Builder or the Actions SDK to build your conversation.
- Web app: An interactive web app with visual elements that Assistant renders as a full-screen web view on the user's device.

Users interacting with an Interactive Canvas Action have a back-and-forth conversation with Google Assistant to fulfill their goal. However, with Interactive Canvas, the bulk of this conversation occurs within the context of your web app. To connect your Conversational Action to your web app, you must include the Interactive Canvas API in your web app code.
In addition to including the Interactive Canvas library, you must return the Canvas response type in your conversation to open your web app on the user's device. You can also use a Canvas response to update your web app based on the user's input.
Canvas: A response that contains the URL of the web app and data to pass to it. Actions Builder can automatically populate the Canvas response with the matched intent and current scene data to update the web app. Alternatively, you can send a Canvas response from a webhook using the Node.js fulfillment library. For more information, see Canvas prompts.
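As a rough sketch of what such a response carries, the following builds a plain object with the two fields named above. The URL, the `CHANGE_COLOR` command, and the payload shape are all hypothetical; in real fulfillment code you would construct the response with the Canvas class from the Node.js fulfillment library rather than by hand:

```javascript
// Hypothetical sketch of the payload carried by a Canvas response:
// a web app URL plus data for the web app's onUpdate callback.
function buildCanvasResponse(color) {
  return {
    // Hypothetical URL of a deployed Interactive Canvas web app.
    url: 'https://example.com/canvas/index.html',
    // Hypothetical data payload; the command name and shape are
    // illustrative, not a fixed schema.
    data: [{ command: 'CHANGE_COLOR', color: color }],
  };
}
```

Sending this for the user input "blue" would hand `{ command: 'CHANGE_COLOR', color: 'blue' }` to the web app.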
To illustrate how Interactive Canvas works, imagine a hypothetical Action called Cool Colors that changes the device screen color to a color the user specifies. After the user invokes the Action, the following flow happens:
- The user says "Turn the screen blue" to the Assistant device.
- The Actions on Google platform routes the user's request to your conversational logic to match an intent.
- The platform matches the intent with the Action's scene, which triggers an event and sends a Canvas response to the device. The device loads the web app from the URL provided in the response (if it has not yet been loaded).
- When the web app loads, it registers callbacks with the Interactive Canvas API.
- If the Canvas response contains a data field, the object value of the data field is passed into the registered onUpdate callback of the web app. In this example, the conversational logic sends a Canvas response with a data field that includes a variable set to "blue".
- Upon receiving the data value, the onUpdate callback can execute custom logic for your web app and make the defined changes. In this example, the onUpdate callback reads the color from data and turns the screen blue.
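The web-app side of this flow can be sketched as follows. The `interactiveCanvas` global is provided by the Canvas surface once the Interactive Canvas library is loaded; the payload shape (an array containing an object with a `color` field) is a hypothetical example matching the sketch above, not a fixed schema:

```javascript
// Hypothetical update handler for the Cool Colors example: pulls a
// color out of the Canvas data payload and applies it to the page.
// Kept as a plain function so the logic is independent of the browser.
function applyUpdate(data) {
  // Updates may arrive as an array of objects; take the first entry.
  const state = Array.isArray(data) ? data[0] : data;
  if (!state || !state.color) {
    return null;
  }
  if (typeof document !== 'undefined') {
    // This is the step that "turns the screen blue" in the example.
    document.body.style.backgroundColor = state.color;
  }
  return state.color;
}

// Register the callback with the Interactive Canvas API when the
// interactiveCanvas global is available (i.e., inside the web view).
if (typeof interactiveCanvas !== 'undefined') {
  interactiveCanvas.ready({
    onUpdate(data) {
      applyUpdate(data);
    },
  });
}
```

Registering the callbacks inside `interactiveCanvas.ready` corresponds to the "registers callbacks" step in the flow above.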
To learn how to build a web app for Interactive Canvas, see Web apps.
To see the code for a complete Interactive Canvas Action, see the sample on GitHub.