Learn the basics

Figure 1. Jungle Dream, an interactive word game built using Interactive Canvas

Interactive Canvas is a framework built on the Google Assistant that allows developers to add a visual, immersive experience to conversational Actions. This visual layer is an interactive web app that is sent as a response to the user in conversation. Unlike traditional rich responses that exist in-line in the Assistant conversation, the Interactive Canvas web app renders as a full-screen web view.

You should use Interactive Canvas if you want to do any of the following in your Action:

  • Create full-screen visuals
  • Create custom animations and transitions
  • Do data visualization
  • Create custom layouts and GUIs
  • Implement video playback (videos are not yet fully supported, but may still play in Interactive Canvas)

How it works

Interactive Canvas connects your conversational Action to an interactive web app so that your users can interact with your visual user interface through voice or touch. There are four components to an Action that uses Interactive Canvas:

  • Custom Conversational Action: An Action that uses a conversational interface to fulfill user requests. Actions that use Interactive Canvas operate in the same fundamental way as any conversational Action, but use immersive web views (HtmlResponse) to render responses instead of rich cards or simple text and voice responses.
  • Web app: A front-end web app with customized visuals that your Action sends as a response to users during a conversation. You build the web app with web standards like HTML, JavaScript, and CSS. interactiveCanvas lets your web app communicate with your conversational Action.
  • interactiveCanvas: A JavaScript API that you include in the web app to enable communication between the web app and your conversational Action.
  • HtmlResponse: A response that contains the URL of the web app and the data to pass to it (see the fulfillment sketch after this list).
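
For example, the fulfillment for an Action might return an HtmlResponse like the following. This is a minimal sketch, assuming a Node.js fulfillment built with the actions-on-google client library and deployed as a Cloud Function; the intent name, web app URL, and data payload are hypothetical:

    // Minimal Dialogflow fulfillment sketch that sends an HtmlResponse.
    const functions = require('firebase-functions');
    const { dialogflow, HtmlResponse } = require('actions-on-google');

    const app = dialogflow();

    // 'welcome' is a hypothetical intent name.
    app.intent('welcome', (conv) => {
      conv.ask('Welcome! What would you like to do?');
      // Send the web app URL plus a data payload to the device.
      conv.ask(new HtmlResponse({
        url: 'https://your-web-app.example.com/index.html', // hypothetical URL
        data: { scene: 'WELCOME' },                         // hypothetical payload
      }));
    });

    exports.fulfillment = functions.https.onRequest(app);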

To illustrate how Interactive Canvas works, imagine a hypothetical Interactive Canvas Action called Cool Colors that changes the device screen color to whatever color the user says. After the user invokes the Action, the flow looks like the following:

  1. The user says "Turn the screen blue" to the Assistant device (a Smart Display in this case).
  2. The Actions on Google platform routes the user's request to Dialogflow to match an intent.
  3. The fulfillment for the matched intent is run and an HtmlResponse is sent to the Smart Display. The device uses the URL to load the web app if it has not yet been loaded.
  4. When the web app loads, it registers callbacks with the interactiveCanvas API. The HtmlResponse data object is then passed to the web app's registered onUpdate callback. In our example, the fulfillment sends an HtmlResponse with data that includes a field whose value is blue (see the web app sketch after these steps).
  5. The custom logic for your web app reads the data value of the HtmlResponse and makes the defined changes. In our example, this turns the screen blue.
  6. interactiveCanvas sends the callback update to the Smart Display.
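
On the web app side, steps 4 and 5 might look like the following sketch. It assumes the interactiveCanvas library script is already loaded in the page and that the fulfillment's HtmlResponse carries a payload such as { color: 'blue' }; the color field name is an assumption for illustration:

    // Step 4: register callbacks with the interactiveCanvas API on load.
    const callbacks = {
      onUpdate(data) {
        // The HtmlResponse data payload arrives here, e.g. { color: 'blue' }.
        if (data && data.color) {
          // Step 5: the web app's custom logic turns the screen blue.
          document.body.style.backgroundColor = data.color;
        }
      },
    };
    // Signal that the web app is ready to receive updates.
    interactiveCanvas.ready(callbacks);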
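
Because Interactive Canvas supports touch as well as voice, the web app can also send input back into the conversation. As a sketch, tapping a hypothetical on-screen swatch could issue the same request through interactiveCanvas.sendTextQuery, which forwards a text query to your conversational Action as if the user had said it:

    // 'blue-swatch' is a hypothetical element in the Cool Colors web app.
    document.getElementById('blue-swatch').addEventListener('click', () => {
      interactiveCanvas.sendTextQuery('Turn the screen blue')
        .then((state) => {
          // Resolves with a string indicating whether the query was accepted.
          console.log('sendTextQuery state:', state);
        });
    });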