Build your web app

The UI for an Action using Interactive Canvas is built as a web app. You can use existing web technologies (HTML, CSS, and JavaScript) to design and develop your UI. Before you begin designing your UI, consider the design principles outlined in the Design guidelines section.

The HTML and JavaScript for your web app do the following:

  • Declare canvas event callbacks.
  • Initialize the Interactive Canvas JavaScript library.
  • Provide custom logic for updating your web app based on the state.
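
These pieces fit together as in the following minimal sketch; the state field name color is illustrative, since your Action defines its own state shape:

// hypothetical minimal web app logic
const callbacks = {
  onUpdate(state) {
    // custom logic: react to state sent by your fulfillment
    if ('color' in state) {
      document.body.style.backgroundColor = state.color;
    }
  },
};
// initialize the Interactive Canvas library and register the callbacks
assistantCanvas.ready(callbacks);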

While you can use whatever method you want to build your UI, lightweight rendering libraries such as PixiJS (used in the samples on this page) work well for this purpose.

Restrictions

Take the following restrictions into consideration as you develop your web app:

  • No cookies
  • No local storage
  • No geolocation
  • No camera usage
  • No popups
  • Origin is set to null for AJAX
  • Stay under the 200 MB memory limit
  • A header displaying the name of your Action takes up the upper portion of the screen
  • No styles can be applied to videos
  • Only one media element may be used at a time
  • No HLS video
  • Assets must accept requests from null origins (see the server sketch after this list)
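
The last restriction means the host serving your static assets must send permissive CORS headers, because requests from the canvas arrive with a null origin. Below is a minimal sketch assuming a Node.js server with Express; your hosting setup (for example, Firebase Hosting) may configure this differently:

// server.js - allow the canvas (null origin) to fetch assets
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // a wildcard also satisfies requests whose Origin header is null
  res.set('Access-Control-Allow-Origin', '*');
  next();
});

app.use(express.static('public'));
app.listen(8080);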

Architecture

We strongly recommend using a single-page application architecture. This allows for optimal performance and supports a continuous conversational UX; a full page navigation would interrupt the experience.
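
For example, instead of navigating between pages, a single-page app can swap views in place as the conversation progresses. A sketch, assuming the fulfillment sends a hypothetical scene field in its state:

// switch visible views without a page load
const scenes = {
  menu: document.getElementById('menu'),
  game: document.getElementById('game'),
};
assistantCanvas.ready({
  onUpdate(state) {
    if ('scene' in state) {
      for (const [name, el] of Object.entries(scenes)) {
        el.style.display = name === state.scene ? 'block' : 'none';
      }
    }
  },
});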

HTML

The HTML file defines how your UI looks. It also loads the JavaScript for your web app, which facilitates communication between your Action and the canvas.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <title>Immersive Canvas Sample</title>
    <!-- Disable favicon requests -->
    <link rel="shortcut icon" type="image/x-icon" href="data:image/x-icon;,">
    <!-- Load Assistant Canvas CSS and JavaScript -->
    <link rel="stylesheet" href="https://www.gstatic.com/assistant/immersivecanvas/css/styles.css">
    <script src="https://www.gstatic.com/assistant/immersivecanvas/js/immersive_canvas_api.js"></script>
    <!-- Load PixiJS for graphics rendering -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.8.7/pixi.min.js"></script>
    <!-- Load Stats.js for fps monitoring -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/stats.js/r16/Stats.min.js"></script>
    <!-- Load custom CSS -->
    <link rel="stylesheet" href="css/main.css">
  </head>
  <body>
    <div id="view" class="view">
      <div class="debug">
        <div class="stats"></div>
        <div class="logs"></div>
      </div>
    </div>
    <!-- Load custom JavaScript after elements are on page -->
    <script src="js/main.js"></script>
    <script src="js/log.js"></script>
  </body>
</html>

Communicate between your fulfillment and web app

Now that you have built your web app and fulfillment, you need to define the communication between them. You can enable this communication with the assistantCanvas JavaScript library that you include in your web app.
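
On the fulfillment side, you attach state to a response to push it to the web app. Here is a minimal sketch using the ImmersiveResponse class that appears later on this page; the intent name and tint value are illustrative:

// fulfillment sketch: push state to the web app
const { dialogflow, ImmersiveResponse } = require('actions-on-google');
const app = dialogflow();

app.intent('change color', (conv) => {
  conv.ask('Ok, here comes red.');
  conv.ask(new ImmersiveResponse({
    state: {
      tint: 0xFF0000, // handled by the web app's onUpdate callback
    },
  }));
});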

Web app custom logic

This file contains the code to define callbacks and invoke methods through assistantCanvas. Callbacks provide a way for you to respond to information or requests from the conversational Action, while the methods provide a way to send information or requests to the intent fulfillment.

Add assistantCanvas.ready(callbacks); to your web app's JavaScript (main.js in this sample) to initialize and register callbacks.

// main.js
const view = document.getElementById('view');

// initialize rendering and set correct sizing
const renderer = PIXI.autoDetectRenderer({
  antialias: true,
  width: view.clientWidth,
  height: view.clientHeight,
});
view.appendChild(renderer.view);

// center stage and normalize scaling for all resolutions
const stage = new PIXI.Container();
stage.position.set(view.clientWidth / 2, view.clientHeight / 2);
stage.scale.set(Math.max(renderer.width, renderer.height) / 1024);

// load a sprite from a svg file
const sprite = PIXI.Sprite.from('triangle.svg');
sprite.anchor.set(0.5);
sprite.tint = 0x00FF00; // green
stage.addChild(sprite);

let spin = true;
// register assistant canvas callbacks
const callbacks = {
  onUpdate(state) {
    console.log('onUpdate', JSON.stringify(state));
    if ('tint' in state) {
      sprite.tint = state.tint;
    }
    if ('spin' in state) {
      spin = state.spin;
    }
  },
};
assistantCanvas.ready(callbacks);

// toggle spin on tap of the triangle
sprite.interactive = true;
sprite.buttonMode = true;
sprite.on('pointerdown', () => {
  spin = !spin;
});

// code to run once per frame
let last = performance.now();
const frame = () => {
  // calculate time differences for smooth animations
  const now = performance.now();
  const delta = now - last;

  // rotate the triangle only if spin is true
  if (spin) {
    sprite.rotation += delta / 1000;
  }

  last = now;

  renderer.render(stage);
  requestAnimationFrame(frame);
};
frame();

User says versus user does

Per the Interactive Canvas design guidelines, you should develop your Action with "voice-first" in mind. That said, some Smart Displays support touch interactions. Supporting touch is similar to creating a conversational Action, but instead of a spoken response from the user, your client-side JavaScript looks for touch interactions and uses them to change elements in the web app.

You can see an example of this in our sample, which uses the PixiJS library:

...
const sprite = PIXI.Sprite.from('triangle.svg');
...
sprite.interactive = true; // Enables interaction events
sprite.buttonMode = true; // Changes `cursor` property to `pointer` for PointerEvent
sprite.on('pointerdown', () => {
  spin = !spin;
});
...

In this case, a tap toggles the value of the spin variable directly in the web app. The fulfillment can drive the same state through the conversation: when the user asks to pause, the matched intent sends spin: false through the assistantCanvas API, and the web app's onUpdate callback applies it.

...
app.intent('pause', (conv) => {
  conv.ask(`Ok, I paused spinning. What else?`);
  conv.ask(new ImmersiveResponse({
    state: {
      spin: false,
    },
  }));
});
...
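
If you want a touch interaction to drive the conversation itself rather than only mutating local state, the canvas API can send a query on the user's behalf. The sketch below assumes your library version exposes a sendTextQuery method, as the production Interactive Canvas API does:

// alternative: let a tap trigger the same 'pause' intent as speech
sprite.on('pointerdown', () => {
  // assumption: sendTextQuery is available in your canvas API version
  assistantCanvas.sendTextQuery('pause the spinning');
});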