Simple responses take the form of a chat bubble visually and use text-to-speech
(TTS) or Speech Synthesis Markup Language (SSML) for sound. By using short
simple responses in conversation, you can keep users engaged with a clear visual
and audio interface that can be paired with other conversational elements.
Chat bubble content in a simple response must be a phonetic subset or a complete
transcript of the TTS/SSML output. This helps users map out what your Action
says and increases comprehension in various conditions.
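For example, the chat bubble can carry a shortened display version of the spoken output. The values below are illustrative, not taken from the official samples:

```json
{
  "firstSimple": {
    "speech": "Here are the top three results I found for your search.",
    "text": "Top 3 results:"
  }
}
```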
In a prompt, text you provide in the `first_simple` and `last_simple` objects
uses the properties of a simple response. Google Assistant sends all simple
responses in a prompt, then sends the final rich response in the prompt queue.
Note: Some surfaces (like smart displays) display only one piece of content at a
time instead of using the chat bubble format. As a result, only the rich
response appears when you provide both a simple and a rich response.
Properties
The simple response type has the following properties:

| Property | Type | Requirement | Description |
|----------|--------|-------------|-------------|
| `speech` | string | Optional | Represents the words to be spoken to the user in SSML or text-to-speech. If the `override` field in the containing prompt is `"true"`, the speech defined in this field replaces the previous simple prompt's speech. |
| `text` | string | Optional | Text to display in the chat bubble. Strings longer than 640 characters are truncated at the first word break (or whitespace) before 640 characters. We recommend using fewer than 300 characters to prevent content from extending past the screen, especially when paired with a card or other visual element. If not provided, Assistant renders a display version of the `speech` field instead. If the `override` field in the containing prompt is `"false"`, the text defined in this field is appended to the previous simple prompt's text. |
Sample code
YAML

```yaml
candidates:
  - first_simple:
      variants:
        - speech: This is the first simple response.
          text: This is the 1st simple response.
    last_simple:
      variants:
        - speech: This is the last simple response.
          text: This is the last simple response.
```
JSON

```json
{
  "candidates": [
    {
      "first_simple": {
        "variants": [
          {
            "speech": "This is the first simple response.",
            "text": "This is the 1st simple response."
          }
        ]
      },
      "last_simple": {
        "variants": [
          {
            "speech": "This is the last simple response.",
            "text": "This is the last simple response."
          }
        ]
      }
    }
  ]
}
```
Node.js

```javascript
app.handle('Simple', conv => {
  conv.add(new Simple({
    speech: 'This is the first simple response.',
    text: 'This is the 1st simple response.'
  }));
  conv.add(new Simple({
    speech: 'This is the last simple response.',
    text: 'This is the last simple response.'
  }));
});
```
JSON

```json
{
  "responseJson": {
    "session": {
      "id": "session_id",
      "params": {}
    },
    "prompt": {
      "override": false,
      "firstSimple": {
        "speech": "This is the first simple response.",
        "text": "This is the 1st simple response."
      },
      "lastSimple": {
        "speech": "This is the last simple response.",
        "text": "This is the last simple response."
      }
    }
  }
}
```
SSML and sounds
Use SSML and sounds in your responses to give them more polish and enhance the
user experience. See the SSML documentation for more information.
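For example, the `speech` field of a simple response can carry an SSML document instead of plain text. The fragment below is a sketch using common SSML tags; consult the SSML documentation for the full set of supported elements:

```xml
<speak>
  Here is a short pause <break time="500ms"/> followed by a number read
  as a cardinal: <say-as interpret-as="cardinal">12345</say-as>.
</speak>
```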
Sound library
We provide a variety of free, short sounds in our sound library. These
sounds are hosted for you, so all you need to do is include them in your SSML.
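A hosted sound is referenced with the SSML `<audio>` element. The URL below is illustrative; substitute the URL of the clip you choose from the sound library:

```xml
<speak>
  <audio src="https://actions.google.com/sounds/v1/cartoon/cartoon_boing.ogg">
    a boing sound
  </audio>
</speak>
```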
Last updated 2024-09-18 UTC.