Intents represent a task Assistant needs your Action to carry out, such as processing user input or handling a system event. You use intents to help build your invocation and conversation models. When these events occur, the Assistant runtime matches them to the corresponding intent and sends the intent to your Action to process. There are two main types of intents, described in the following list:
User intents let you extend Assistant's ability to understand requests that are specific to your brand and services. You define custom training phrases within an intent, which generate the intent's language model. That language model augments Assistant's NLU, expanding what it can understand.
System intents have training data or other non-conversational input signals defined by Assistant. This means you don't need to define training phrases for these intents. Assistant matches these intents in a standard way, during well-known system events such as main invocation or when users don't provide any input.
When you build Actions, you create user intents that contain training phrases, which extend Assistant's ability to understand your users. Assistant uses your training phrases to augment its NLU when it delegates user requests to your Actions.
When this occurs, Assistant brokers the communication between the user and your Actions, mapping user input to an intent that has a matching language model. Assistant then notifies your Actions of the matched intent, so you can process it within a scene.
When building user intents, you specify the following elements:
A Global intent designation defines whether or not the Assistant runtime can match the specified user intent at invocation time as well as during a conversation. By default, Assistant can match user intents only during a conversation. Only intents that you mark as global are eligible for deep link invocation.
Training phrases are examples of what a user might say to match the intent. The Assistant NLU (natural language understanding) engine naturally expands these training phrases to include other, similar phrases. Providing a large set of high-quality examples increases the intent's quality and matching accuracy.
Parameters are typed data that you want to extract from user input. To create a parameter, you annotate training phrases with types to notify the NLU engine that you want portions of user input to be extracted. You can use system types or create your own custom types for parameters.
When the NLU engine detects a parameter match in user input, it extracts the value as a typed parameter, so you can carry out logic with it in a scene. If an intent parameter has the same name as a scene slot, the Assistant runtime automatically fills the scene slot with the value from the intent parameter. See the slot value mapping documentation for more information.
Intent parameters also support "partial" matches. For example, if you specify a DateTime type and the user provides only a date, the NLU still extracts the partial value as a parameter.
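As a concrete sketch, a custom intent definition with annotated training phrases might look like the following Actions SDK YAML file. The file path, intent name, and parameter name here are invented for illustration, and the exact schema can vary by SDK version:

```yaml
# custom/intents/book_table.yaml -- illustrative path and names
parameters:
  - name: reservationTime
    type:
      name: actions.type.DateTime  # system type; supports partial matches (date-only, time-only)
trainingPhrases:
  - Book a table for ($reservationTime 'tomorrow at 7 PM' auto=true)
  - Reserve a spot ($reservationTime 'next Friday' auto=true)
```

If a scene defines a slot that is also named reservationTime, the runtime fills that slot automatically from this intent parameter, per the slot value mapping behavior described above.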
You may want to use your own NLU to handle all user input for an Action. For example, you may want your Action to respond to all no-match scenarios during a conversation. To ensure you capture all user input, create an intent with the Free form text type. However, you should avoid using custom intents to globally override Assistant's default no-match behavior, as it may negatively impact the ability of users to move between Actions.
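A sketch of such a catch-all intent, assuming the Free form text type is exposed as actions.type.FreeText (the identifier may differ in your SDK version; the intent and parameter names are invented):

```yaml
# custom/intents/catch_all.yaml -- illustrative names
parameters:
  - name: rawInput
    type:
      name: actions.type.FreeText  # assumed identifier for the Free form text type
trainingPhrases:
  - ($rawInput 'anything the user might say' auto=true)
```

Per the caution above, scope an intent like this to the scenes that need it rather than marking it global.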
When you create a custom intent in the Actions console, Actions Builder suggests system intents that may fill the same role as your user intent. For more information about system intents, refer to the following section.
Assistant matches system intents based on standard system events. These events might have a system-defined language model like users saying "pause" to pause the media player, or might not have a language model such as users not providing any input at all. Because these intents are provided by Assistant, you don't have to worry about how they're matched, but only how to handle the intents when they are matched.
System intents also replace the need to create user intents for frequently required actions, such as NO. System intents are trained for all locales, enabling you to more easily implement a consistent experience for your users across multiple languages.
System intents can also be set as global intents.
System intents are versioned. You can use a specific version of a system intent for as long as that version is supported by Assistant. If an Action uses an unsupported version of a system intent, that system intent is automatically updated to a supported version.
List of intents
Every Actions project must contain this default main invocation, which is tied to your display name. Users say phrases like "Ok Google, talk to <display name>" to invoke the Action.
These intents are matched when the user says something that can't be matched to an intent in your Action. You can set individual reprompts and an exit message in the final intent.
These intents are matched when there's no input from the user after 8 seconds. You can set individual reprompts for each intent and an exit message in the final intent.
This intent is matched when the user wants to exit your Actions during a conversation, such as a user saying, "I want to quit."
This intent is matched and sent to your Action when a user completes media playback or skips to the next piece of media.
This intent is matched and sent to your Action when a user pauses media playback in a media response.
This intent is matched and sent to your Action when a user stops or exits media playback from a media response.
This intent is matched and sent to your Action when a media response's player fails to play.
This intent is matched when a user provides an affirmative response to your Action.
This intent is matched when a user provides a negative response to your Action.
This intent is matched when a user asks the Action to repeat the last response. Requests to repeat are automatically handled by Assistant if the system intent is not enabled in the agent. Enabling this system intent allows you to modify how repeat requests are handled, as well as responses.
This intent is matched when a user asks to play a game. This intent lets you opt into an implicit invocation (invocation without using your display name) provided by Actions on Google.
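To illustrate how a scene consumes matched system intents, here is a sketched scene definition that wires media-status events to handlers. The scene name, webhook handler name, and destination scene are invented, and the intentEvents schema may differ by SDK version:

```yaml
# custom/scenes/MediaPlayer.yaml -- illustrative scene and handler names
intentEvents:
  - intent: actions.intent.MEDIA_STATUS_PAUSED
    handler:
      webhookHandler: onMediaPaused      # invented webhook handler name
  - intent: actions.intent.MEDIA_STATUS_FINISHED
    transitionToScene: SuggestNextMedia  # invented scene name
```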
Add support for other languages
Certain system intents, such as NO_MATCH, are supported in English only. To add support for other languages, you must create user intents that match those system intents. Your new intents must be handled in your code the same way as the system intents you've implemented.
For example, assume you are developing a new Action and have implemented the YES system intent. The YES system intent is only supported in English, but you also want your app to support interactions in German and Japanese. To support the additional languages, you create an intent that includes training phrases for German and Japanese, and then implement the same handling you used for the YES system intent.
Learn more about creating user intents.
Learn more about localizing your user intents.
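For instance, a German-locale intent mirroring the YES system intent might be laid out as follows. The per-locale directory convention and file name are assumptions; consult the localization documentation for your SDK version:

```yaml
# custom/intents/de/confirm_yes.yaml -- illustrative layout and name
trainingPhrases:
  - ja
  - ja bitte
  - genau
```

A Japanese counterpart would live alongside it (for example, under a ja/ directory) with phrases such as はい, and both intents would route to the same handling you use for the YES system intent.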
When a user's response doesn't match one of your intents, Assistant attempts to handle the input. This behavior lets users switch Actions in the middle of a conversation. For example, a user asks, "What films are playing this week?" and then changes context mid-conversation: "What is the weather tomorrow?" In this example, because "What is the weather tomorrow?" isn't a valid response to the conversation triggered by the initial prompt, Assistant automatically attempts to handle the match and move the user into an appropriate conversation.
If Assistant can't find an appropriate Action that matches the user's input, the user continues within the context of your Action.
Since Assistant may interrupt your Action to respond to a valid no-match scenario, do not use the NO_MATCH system intent to fulfill user queries. You should only use the NO_MATCH intent to reprompt the user.