Design guidelines (Dialogflow)
There are two main aspects of designing for Interactive Canvas:
Designing the conversation
Designing the user interface (UI)
Your users can interact with your Action that uses Interactive Canvas by
speaking to the Assistant or by touching the UI. You should make sure that your
spoken conversation and UI complement each other and make it easy and exciting
for users to progress through your Action. In this section, we'll go over how
to design both the conversation and UI for the best user experience.
Is Interactive Canvas right for my Action?
Before you begin designing, think about whether your Action will work well
with Interactive Canvas. You should consider using Interactive Canvas if:
Your Action would benefit from full-screen, visually rich experiences.
Interactive Canvas is ideal for full-screen experiences that benefit from
rich visuals, like immersive voice-driven games.
Your Action has an intuitive conversational flow. The critical path
through your Action should be navigable through voice alone. An Action that
requires spatial precision, like a drawing app, is difficult to design an
intuitive conversational flow around.
Existing components and customization are not enough
(for example, if you want to go beyond existing Assistant
visual components
and customization). Interactive Canvas is great for showcasing your unique
visual brand attributes, dynamic elements, and animations. Additionally,
Interactive Canvas can be used to update a single visual interface as the
user progresses through the conversation.
Requirements
Although Interactive Canvas provides a familiar web development environment,
there are some requirements to take into account before designing your Action.
Header
There is a header at the top of every canvas web app with the name of your
brand. This reserved area has a height of 56 dp for mobile, 96 dp for Home Hub,
and 120 dp for Smart Display. Make sure to follow this header requirement:
Ensure that no important information or interactive elements are hidden
behind the header. The getHeaderHeightPx() method returns the height of the
header so that you can position content below it (see the sketch after Figure 1).
Figure 1. Examples of how the header appears in various Actions.
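As an illustration, here is a minimal sketch (in TypeScript) of how a canvas web app might query the header height on load and pad its layout so nothing is hidden behind the reserved area. The "stage" element ID is an assumption about your page structure, not part of the API.

```typescript
// Minimal sketch: query the header height and keep content below the reserved area.
// `interactiveCanvas` is the global exposed by the Interactive Canvas API script.
declare const interactiveCanvas: {
  getHeaderHeightPx(): Promise<number>;
};

window.addEventListener('load', () => {
  interactiveCanvas.getHeaderHeightPx().then((headerHeightPx) => {
    const stage = document.getElementById('stage'); // illustrative container element
    if (stage) {
      // Offset the app's main container so interactive elements stay visible.
      stage.style.paddingTop = `${headerHeightPx}px`;
    }
  });
});
```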
Constraints
Consider these constraints before designing your Action with Interactive Canvas:
No local storage of data. We prevent the Action from storing cookies
and from accessing the
Web Storage API.
Given these restrictions, we recommend that your Action manage state in
the webhook and use AppRequest's userStorage field to save user data (see the
sketch after this list).
No pop-ups or modals. We prevent the Action from showing any pop-up
windows or modals. We also strongly discourage using other standard
navigational UI elements that you usually see in web apps, like keyboards
and pagination.
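For example, the following is a hedged sketch of a Dialogflow webhook using the actions-on-google Node.js client library, where the userStorage field is surfaced as conv.user.storage. The intent name, the web app URL, and the shape of the state object are illustrative assumptions.

```typescript
// Sketch: keep state in the webhook and persist user data in userStorage
// instead of cookies or the Web Storage API.
import { dialogflow, HtmlResponse } from 'actions-on-google';

const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
  // userStorage persists small amounts of data across conversations.
  const storage = conv.user.storage as { highScore?: number };
  storage.highScore = storage.highScore || 0;

  conv.ask('Welcome back! Ready to play?');
  conv.ask(new HtmlResponse({
    url: 'https://example-project.web.app',                 // your canvas web app (placeholder)
    data: { scene: 'MENU', highScore: storage.highScore },  // state pushed to the canvas
  }));
});

export { app };
```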
Design your conversation
You first need to design your Action’s conversation. Interactive Canvas
experiences are still voice-forward, so it’s important that your conversation
effectively guides the user through your Action. You can think of an Action
that uses Interactive Canvas as a conversation with helpful visuals. For more
information on designing conversations, see the Conversation Design site’s
Getting Started
page.
Guidelines
For the best user experience, you should:
Follow the conversation design process and best practices outlined on the Conversation design site.
This means that, among other things, you need to:
Make sure that your Action experience works well for conversation
Create a brand persona
Handle errors within your conversation
Try out a voice-only experience before figuring out what it would look
like with a screen
Try to provide the same capabilities through touch and voice.
If possible, make sure that everything you can do by touching the screen
you can also do with your voice.
Make sure the critical path through your Action is feasible via voice.
Your users should be able to navigate through the main paths of your Action
using only voice.
Ensure the user can use your Action without audio. On mobile devices,
the user may not have the audio on. For this reason, consider adding
transcripts to your Action to guide the user.
Take cognitive load into consideration. Avoid overly lengthy voice
responses to reduce the cognitive toll on the user.
Design your UI
Once you've designed your conversation, you can design your UI to complement it.
While designing, consider how the natural back and forth of dialogue can drive
the visual interface you present to the user. If you're designing for
Smart Displays, see specific considerations in
Design for Smart Displays.
Guidelines
For the best user experience, you should:
Create responsive designs. Make sure your designs work in both
landscape and portrait modes and hold up from small phones to larger screens.
Your users should be able to easily read the UI for each type of surface.
Take cognitive load into consideration. To avoid overwhelming your
users, keep the information and content you present on the screen organized,
clean, and concise.
Adapt voice output for the screen. Be creative with how you use visuals
to complement the audio; don't just display what is being said out loud. When
a screen is available, your voice output can be more concise than when one
is not.
Avoid placing any critical information or components towards the bottom
of the screen. On mobile, the user transcript appears above the mic plate
and can grow to a few lines. Although this transcript is transient, avoid
putting important content towards the bottom of the screen. Buttons similar
to suggestion chips are fine at the bottom of the screen, as voice input is
an alternative to using them.
Handle errors within your conversation visually. Errors can occur when
the user doesn't respond, when you don't understand them, or when you can't
fulfill what they said. Figure out where these error prompts go on
your UI. This could be wherever you put your display prompts
(e.g., in the title) or it could be something different
(e.g., a special area of content that appears as needed); one approach is
sketched after this list. Refer to the Conversation design site
for more tips on error handling.
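As one possible approach (a sketch, not a prescribed pattern), your webhook could include an optional error prompt in the data it sends to the canvas, and your web app could render it in a dedicated area from the onUpdate callback. The errorPrompt field and the "error-area" element are illustrative assumptions, not part of the Interactive Canvas API.

```typescript
// Sketch: render error prompts (e.g., no-input or no-match reprompts) in a
// dedicated on-screen area. `interactiveCanvas` is the global from the
// Interactive Canvas API script.
declare const interactiveCanvas: {
  ready(callbacks: { onUpdate(data: Record<string, unknown>): void }): void;
};

interactiveCanvas.ready({
  onUpdate(data) {
    const errorArea = document.getElementById('error-area'); // illustrative element
    if (!errorArea) return;
    const errorPrompt = data['errorPrompt'];                  // illustrative field
    if (typeof errorPrompt === 'string' && errorPrompt.length > 0) {
      errorArea.textContent = errorPrompt; // show the reprompt on screen
      errorArea.hidden = false;
    } else {
      errorArea.hidden = true;             // hide the area when there's no error
    }
  },
});
```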
Design for Smart Displays
While the above guidelines still apply, you should keep other design
considerations in mind when designing for Smart Displays. It’s tempting to
treat Smart Displays like tablets when designing for them. However, Smart
Displays are a completely different and new category of device for two reasons:
Smart Displays are voice-enabled and the Google Assistant is the
operating system
Smart Displays are stationary and, unlike mobile devices, are often placed
in the kitchen or bedroom when used at home
Because of these characteristics, users are sometimes not physically near the
device and interact with Smart Displays using only their voice. Users may also
be multitasking while using Smart Displays. It’s important to keep these usages
in mind when designing for Smart Displays.
Guidelines
For the best user experience with Smart Displays, you should:
Design with voice-first in mind. Designing your Interactive Canvas
Action to be voice-forward is even more important for Smart Displays. Unlike
with a mobile device, your user may be standing across the room and only
communicating with their Smart Display through voice. For this reason, you
can't always rely on the user touching the device to proceed through your
Action, and need to ensure your users can proceed in your Action using
voice.
Design with a specific viewing distance in mind. Design content on the
Smart Display so it can be viewed from a distance. Depending on the size of
the room, the typical viewing distance for Smart Displays ranges from 3 to
10 feet.
Use a minimum font size of 32 pt for primary text, like titles. Use a
minimum of 24 pt for secondary text, like descriptions or paragraphs of text.
Focus on one touchpoint at a time. Display one type of primary
information or task at a time to reduce cognitive workload and keep the
content legible from a distance. For example, when users ask,
“What’s my day like?” the Assistant responds with weather, calendar,
commute, and news content. Each type of content takes up the full screen
and is presented sequentially rather than all showing up at once on the
screen.
Resources
For more information about designing an Action that uses Interactive Canvas,
check out the following resources:
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-09-18 UTC."],[[["\u003cp\u003eInteractive Canvas is best suited for Actions that require visually rich, full-screen experiences with an intuitive conversational flow, going beyond standard Assistant visual components.\u003c/p\u003e\n"],["\u003cp\u003eWhen designing for Interactive Canvas, prioritize voice-first interactions, ensuring critical paths are navigable through voice and complementing the conversation with clear, responsive UI elements.\u003c/p\u003e\n"],["\u003cp\u003eDesign considerations include accommodating header space, avoiding local storage and pop-ups, handling errors gracefully, and adapting voice output for screen context.\u003c/p\u003e\n"],["\u003cp\u003eFor Smart Displays, prioritize voice-forward design due to their stationary nature and potential distance from the user, using large font sizes and focusing on one touchpoint at a time.\u003c/p\u003e\n"]]],["Interactive Canvas design involves two core aspects: conversation and UI design. Actions should work well with voice alone, making it crucial to ensure the critical path is voice-navigable. UI designs should complement the conversation, be responsive across various screen sizes, and consider cognitive load. Key constraints include no local data storage or pop-ups. For Smart Displays, voice-first design, distant viewing, and focusing on one task at a time are essential. Designers should avoid placing content behind the app's header.\n"],null,["# Design guidelines (Dialogflow)\n\nThere are two main aspects of designing for Interactive Canvas:\n\n- Designing the conversation\n- Designing the user interface (UI)\n\nYour users can interact with your Action that uses Interactive Canvas through\nspeaking to the Assistant or touching the UI. You should make sure that your\nspoken conversation and UI complement each other and make it easy and exciting\nfor users to progress through your Action. In this section, we'll go over how\nto design both the conversation and UI for the best user experience.\n\nIs Interactive Canvas right for my Action?\n------------------------------------------\n\nBefore you begin designing, think about whether your Action will work well\nwith Interactive Canvas. You should consider using Interactive Canvas if:\n\n- **Your Action would benefit from full-screen, visually rich experiences.** Interactive Canvas is ideal for full-screen experiences that benefit from rich visuals, like immersive voice-driven games.\n- **Your Action has an intuitive conversational flow.** The critical path through your Action should be navigable through voice alone. An Action that requires spatial precision, like a drawing app, could provide a difficult experience around which to design an intuitive conversational flow.\n- **Existing components and customization are not enough** (for example, if you want to go beyond existing Assistant [visual components](https://designguidelines.withgoogle.com/conversation/visual-components/overview.html) and customization). Interactive Canvas is great for showcasing your unique visual brand attributes, dynamic elements, and animations. 
Additionally, Interactive Canvas may be used to provide updates to a single visual interface as the user progresses through conversation.\n\nRequirements\n------------\n\nAlthough Interactive Canvas is a familiar web development environment, there\nare some requirements to take into account before designing your Action.\n\n### Header\n\nThere is a header at the top of every canvas web app with the name of your\nbrand. This reserved area has a height of 56 dp for mobile, 96 dp for Home Hub,\nand 120 dp for Smart Display. Make sure to follow this header requirement:\n\n- **Ensure that no important information or interactive elements are hidden\n behind the header.** The [`getHeaderHeightPx()`](/assistant/df-asdk/interactivecanvas/reference/interactivecanvas#getheaderheightpx) method determines the height of the header.\n\n**Figure 1.** Examples of how the header appears in various Actions.\n\n### Constraints\n\nConsider these constraints before designing your Action with Interactive Canvas:\n\n- **No local storage of data.** We prevent the Action from storing cookies and from accessing the [Web Storage API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Storage_API). Given these restrictions, we recommend that your Action manages state in the webhook and uses AppRequest's `userStorage` field to [save user data](/assistant/conversational/save-data#saving_data_across_conversations).\n- **No pop-ups or modals.** We prevent the Action from showing any pop-up windows or modals. We also strongly discourage using other standard navigational UI elements that you usually see in web apps, like keyboards and pagination.\n\nDesign your conversation\n------------------------\n\nYou first need to design your Action's conversation. Interactive Canvas\nexperiences are still voice-forward, so it's important that your conversation\neffectively guides the user through your Action. You can think of an Action\nthat uses Interactive Canvas as a conversation with helpful visuals. For more\ninformation on designing conversations, see the Conversation Design site's\n[Getting Started](https://designguidelines.withgoogle.com/conversation/conversation-design-process/how-do-i-get-started.html)\npage.\n\n### Guidelines\n\nFor the best user experience, you should:\n\n- **Follow the conversation design process and best practices outlined on the**\n [Conversation design site](https://designguidelines.withgoogle.com/conversation/).\n This means that, among other things, you need to:\n\n - Make sure that your Action experience works well for conversation\n - Create a brand persona\n - Handle errors within your conversation\n - Try out a voice-only experience before figuring out what it would look with a screen\n- **Try to provide the same capabilities through touch and voice.**\n If possible, make sure that everything you can do by touching the screen\n you can also do with your voice.\n\n- **Make sure the critical path through your Action is feasible via voice.**\n Your users should be able to navigate through the main paths of your Action\n using only voice.\n\n- **Ensure the user can use your Action without audio.** On mobile devices,\n the user may not have the audio on. 
For this reason, consider adding\n transcripts to your Action to guide the user.\n\n- **Take cognitive load into consideration.** Avoid overly lengthy voice\n responses to reduce the cognitive toll on the user.\n\nDesign your UI\n--------------\n\nOnce you've designed your conversation, you can design your UI to complement it.\nWhile designing, consider how the natural back and forth of dialogue can drive\nthe visual interface you present to the user. If you're designing for\nSmart Displays, see specific considerations in\n[Design for Smart Displays](#design_for_smart_displays).\n\n### Guidelines\n\nFor the best user experience, you should:\n\n- **Create responsive designs.** Make sure your designs work for either landscape and portrait mode and hold up from small phones to larger screens. Your users should be able to easily read the UI for each type of surface.\n- **Take cognitive load into consideration.** To avoid overwhelming your users, keep the information and content you present on the screen organized, clean, and concise.\n- **Adapt voice output for the screen.** Be creative with how you use visuals to complement the audio--- don't just write what is being said out loud. When a screen is available, we may be more concise with our voice output than when one is not.\n- **Avoid placing any critical information or components towards the bottom\n of the screen.** On mobile, the user transcript appears above the mic plate and can grow to a few lines. Although this transcript is transient, avoid writing important content towards the bottom of the screen. Buttons similar to suggestion chips are fine at the bottom of the screen, as user input is an alternative to using suggestion chips.\n- **Handle errors within your conversation visually.** Errors could occur when the user doesn't respond, if you don't understand them, or don't provide fulfillment for what they said. Figure out where these error prompts go on your UI. This could be wherever you put your display prompts (e.g., in the title) or it could be something different (e.g., a special area of content that appears as needed). Refer to the [Conversation design site](https://designguidelines.withgoogle.com/conversation/) for more tips on error handling.\n\nDesign for Smart Displays\n-------------------------\n\nWhile the above guidelines still apply, you should keep other design\nconsiderations in mind when designing for Smart Displays. It's tempting to\ntreat Smart Displays like tablets when designing for them. However, Smart\nDisplays are a completely different and new category of device for two reasons:\n\n- Smart Displays are voice-enabled and the Google Assistant is the operating system\n- Smart Displays are stationary and, unlike mobile devices, are often placed in the kitchen or bedroom when used at home\n\nBecause of these characteristics, users are sometimes not physically near the\ndevice and interact with Smart Displays using only their voice. Users may also\nbe multitasking while using Smart Displays. It's important to keep these usages\nin mind when designing for Smart Displays.\n\n### Guidelines\n\nFor the best user experience with Smart Displays, you should:\n\n- **Design with voice-first in mind.** Designing your Interactive Canvas Action to be voice-forward is even more important for Smart Displays. Unlike with a mobile device, your user may be standing across the room and only communicating with their Smart Display through voice. 
For this reason, you can't always rely on the user touching the device to proceed through your Action, and need to ensure your users can proceed in your Action using voice.\n- **Design with a specific viewing distance in mind.** Design content on the Smart Display so it can be viewed from a distance. Depending on the size of the room, the typical viewing distance for smart displays ranges from 3 to 10 feet.\n - Use a minimum font size of 32 pt for primary text, like titles. Use a minimum 24 pt for secondary text, like descriptions or paragraphs of text.\n- **Focus on one touchpoint at a time.** Display one type of primary information or task at a time to reduce cognitive workload and keep the content legible from a distance. For example, when users ask, \"What's my day like?\" the Assistant responds with weather, calendar, commute, and news content. Each type of content takes up the full screen and is presented sequentially rather than all showing up at once on the screen.\n\nResources\n---------\n\nFor more information about designing an Action that uses Interactive Canvas,\ncheck out the following resources:\n\n- [Conversation design site](https://designguidelines.withgoogle.com/conversation/)\n- [Multimodal design](https://designguidelines.withgoogle.com/conversation/conversation-design-process/scale-your-design.html) guidelines\n- Download our [Sketch template](/static/assistant/downloads/AoG_Canvas_Design_Templates-20190503T231437Z-001.zip) to help you design your UI."]]