Conversational components make up the content of the spoken prompts, display prompts, and chips.
Conversational components (prompts and chips) should be designed for every dialog turn.
Spoken prompts: the content your Action speaks to the user, via TTS or pre-recorded audio.
Display prompts: the content your Action writes to the user, via printed text on the screen.
Chips: suggestions for how the user can continue or pivot the conversation.
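In a webhook, all three components travel together in a single response. Here is a minimal sketch in Python, assuming the Actions on Google v2 rich-response JSON shape (`simpleResponse` for the spoken and display prompts, `suggestions` for the chips); the `build_turn` helper and the example wording are illustrative:

```python
def build_turn(speech, display, chips):
    """Sketch: assemble one dialog turn's spoken prompt, display prompt,
    and chips into an Actions-on-Google-style rich response dict."""
    return {
        "expectUserResponse": True,
        "richResponse": {
            "items": [{
                "simpleResponse": {
                    "textToSpeech": speech,   # spoken prompt (TTS)
                    "displayText": display,   # display prompt (screen)
                },
            }],
            # chips are rendered as tappable suggestions
            "suggestions": [{"title": c} for c in chips],
        },
    }

turn = build_turn(
    speech="Welcome back! Would you like to hear about sessions or office hours?",
    display="Sessions or office hours?",
    chips=["Sessions", "Office hours"],
)
```

Designing all three per turn, as recommended above, keeps the response usable whether the user is listening, reading, or tapping.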
Visual components include cards, carousels, and other visual assets.
Perfect for scanning and comparing options, visual components are useful if you're presenting detailed information—but they aren't required for every dialog turn.
Use basic cards to display an image and text to users.
Browsing carousels are optimized for allowing users to select one of many items, when those items are content from the web.
Carousels are optimized for allowing users to select one of many items, when those items are most easily differentiated by an image.
Lists are optimized for allowing users to select one of many items, when those items are most easily differentiated by their title.
Media responses are used to play and control the playback of audio content like music or other media.
Tables are used to display static data to users in an easily scannable format.
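As a rough sketch of how one such visual component is represented, a basic card can be expressed as an extra item in the same rich response that carries the prompts. The field names below follow the Actions on Google v2 JSON format; the helper name and example values are illustrative:

```python
def basic_card_item(title, text, image_url, alt):
    """Sketch of a basic card (image plus text) as a rich-response item,
    using the Actions on Google v2 field names."""
    return {
        "basicCard": {
            "title": title,
            "formattedText": text,
            "image": {
                "url": image_url,
                "accessibilityText": alt,  # alt text for accessibility
            },
        }
    }

card = basic_card_item(
    title="Keynote recap",
    text="Highlights from the opening keynote.",
    image_url="https://example.com/keynote.png",
    alt="Keynote stage",
)
```

Because visual components aren't required for every turn, a card like this is only appended when there is genuinely detailed content to show.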
Group devices by the components used for the response
Here are a couple of examples from the Google I/O '18 Action.
Most of the time, you can simply re-use the same spoken prompt on devices like smart displays, since the need to convey the core of the conversation remains the same.
At this point in the conversation, there isn’t any content that would be appropriate in a visual component like a card or carousel, so none is included.
Be sure to add chips. At a minimum, these should include any options offered in the prompts so the user can quickly tap them to respond.
Since there isn’t any content that would be appropriate in a visual component, there’s no content that can be moved out of the spoken prompt. Therefore, it’s okay to re-use the original.
The display prompt should be a condensed version of the spoken prompt, optimized for scannability. Move any response options to the chips, but be sure to always include the question.
Re-use the same chips you just created.
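The adaptation steps above can be sketched as a small helper: keep the question as the condensed display prompt and move the response options into chips. Everything here (the function name and the sentence-splitting heuristic) is illustrative, not part of any Actions API:

```python
def adapt_for_display(spoken, options):
    """Sketch: condense a spoken prompt for screens by keeping only the
    final question, and surface the response options as chips."""
    question = spoken.split(". ")[-1]  # naive: keep the last sentence
    return {"displayPrompt": question, "chips": list(options)}

adapted = adapt_for_display(
    "Welcome back. Do you want sessions or office hours?",
    ["Sessions", "Office hours"],
)
```

A real Action would hand-write the condensed prompt per turn rather than derive it, but the split mirrors the rule stated above: always include the question, move the options to chips.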
Start with the original spoken prompt from the example sample dialog.
Note that the spoken list is limited to 6 of the 17 total topics to reduce cognitive load. The topics are randomized so that no single topic is consistently favored.
Once again, it’s okay to re-use the same spoken prompt, since we can’t assume the user is looking at the screen.
Including a visual list of all the topics helps the user to browse and select. Note that the visual list of all 17 items (paginated) is shown in alphabetical order, which makes it easiest for users to find the topic they want.
Because the list already enumerates the topics that can be chosen, there is no need to include them as chips. Instead, include other options like “None of those” to offer the user a way out.
Here, we can assume that the user has equal access to the audio and the screen. Since the visual modality is better suited to lists, leverage this strength by directing the user to the screen to pick a topic. This allows us to shorten the spoken prompt to a simple list overview and question.
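One way to sketch this pattern: sample a few topics for the spoken prompt, and alphabetize plus paginate the full set for the visual list. The topic names, helper names, and page size below are illustrative, not the Action's actual data:

```python
import random

# 17 illustrative topic names standing in for the Action's real topics
TOPICS = ["Android", "Assistant", "Chrome", "Cloud", "Design", "Firebase",
          "Flutter", "Identity", "IoT", "Machine learning", "Maps",
          "Payments", "Search", "TensorFlow", "VR", "Wear", "Web"]

def spoken_topic_prompt(topics, k=6, rng=None):
    """Speak only a random subset of k topics to reduce cognitive load,
    without favoring any one topic."""
    rng = rng or random
    sample = rng.sample(topics, k)
    return ("Some topics are " + ", ".join(sample[:-1]) +
            ", and " + sample[-1] + ". Which would you like?")

def visual_topic_list(topics, page_size=10):
    """Show every topic, alphabetized and paginated for easy browsing."""
    ordered = sorted(topics)
    return [ordered[i:i + page_size]
            for i in range(0, len(ordered), page_size)]

prompt = spoken_topic_prompt(TOPICS)
pages = visual_topic_list(TOPICS)
```

The spoken prompt stays short (an overview plus a question), while the screen carries the complete, scannable list, which is the division of labor described above.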
Only the question needs to be maintained in the display prompt.
Re-use the same chip you just created.