Intents

Intents represent a task that Assistant needs your Action to carry out, such as processing user input or handling a system event. You use intents to help build your invocation and conversation models. When one of these events occurs, the Assistant runtime matches it to the corresponding intent and sends the intent to your Action to process. There are two main types of intents, described in the following list:

  • Custom intents let you extend Assistant's ability to understand user requests that are specific to your brand and services. You define training phrases within a custom intent, which generate that intent's language model. The language model augments the Assistant NLU, increasing its ability to understand a wider range of user requests.

  • System intents have training data or other non-conversational input signals defined by Assistant. This means you don't need to define training phrases for these intents. Assistant matches these intents in a standard way, during well-known system events such as main invocation or when users don't provide any input.

Figure 1. A common intent matching scenario. A user says something that matches a global intent. The corresponding scene activates, and eventually consumes more user input. Another intent is matched, which transitions to and activates another scene.

Custom intents

When you build Actions, you create custom intents that contain training phrases, which extend Assistant's ability to understand requests specific to your Actions. Assistant uses your training phrases to augment its NLU when it delegates user requests to your Actions.

When this occurs, Assistant brokers the communication between the user and your Actions, mapping user input to an intent that has a matching language model. Assistant then notifies your Actions of the matched intent, so you can process it within a scene.
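For example, a webhook built with the Node.js @assistant/conversation library exposes the matched intent and the active scene on the conversation object. A minimal sketch, assuming an intent handler named orderStatus that you wire up inside a scene (the handler name is hypothetical):

```ts
import { conversation } from '@assistant/conversation';

const app = conversation();

// "orderStatus" is a hypothetical handler name that you would associate
// with a matched custom intent inside a scene.
app.handle('orderStatus', (conv) => {
  // The matched intent and the active scene arrive on the request.
  console.log(`Matched ${conv.intent.name} in scene ${conv.scene.name}`);
  conv.add('Let me check on your order.');
});

export { app };
```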

When building custom intents, you specify the following elements:

  • A Global intent designation defines whether the Assistant runtime can match the specified custom intent at invocation time as well as during a conversation. By default, Assistant can match custom intents only during a conversation. Only intents that you mark as global are eligible for deep link invocation.

  • Training phrases are examples of what a user might say to match the intent. The Assistant NLU (natural language understanding) engine naturally expands these training phrases to include other, similar phrases. Providing a large set of high-quality examples increases the intent's quality and matching accuracy.

  • Parameters are typed data that you want to extract from user input. To create a parameter, you annotate training phrases with types to notify the NLU engine that you want portions of user input to be extracted. You can use system types or create your own custom types for parameters.

When the NLU engine detects a parameter match in user input, it extracts the value as a typed parameter, so you can carry out logic with it in a scene. If an intent parameter has the same name as a scene slot, the Assistant runtime automatically fills the scene slot with the value from the intent parameter. See the slot value mapping documentation for more information.
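Continuing the sketch above, a matched parameter arrives on conv.intent.params with both the raw user text and the typed value. The handler and parameter names here (applyColor, color) are hypothetical:

```ts
// "color" is a hypothetical parameter created by annotating training
// phrases with a type; "applyColor" is a hypothetical handler name.
app.handle('applyColor', (conv) => {
  const param = conv.intent.params?.color;
  if (param) {
    // "original" holds the raw user text; "resolved" holds the typed value.
    conv.add(`You said "${param.original}", which resolved to ${param.resolved}.`);
  }
});
```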

Intent parameters also support "partial" matches. For example, if you specify a type of DateTime and the user only provides a date, the NLU still extracts the partial value as a parameter.
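A sketch of handling such a partial match, assuming the resolved DateTime value is an object that carries only the fields the user actually provided (the handler and parameter names are hypothetical):

```ts
app.handle('scheduleVisit', (conv) => {
  const when = conv.intent.params?.visitTime?.resolved;
  // Assumption: a date-only utterance yields a resolved object with no
  // time-of-day fields, so "hours" is undefined here.
  if (when && when.hours === undefined) {
    conv.add('Got the date. What time works for you?');
  }
});
```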

System intents

Assistant matches system intents based on standard system events. These events might have a system-defined language model, like users saying "pause" to pause the media player, or might have no language model at all, such as when users don't provide any input. Because Assistant provides these intents, you don't have to define how they're matched, only how to handle them when they are matched (see the sketch after the following list).

The following system intents are supported:

  • actions.intent.MAIN: Every Actions project must contain this default main invocation, which is tied to your display name. Users say phrases like "Ok Google, talk to <display name>" to invoke the Action.
  • actions.intent.NO_MATCH_1, actions.intent.NO_MATCH_2, actions.intent.NO_MATCH_FINAL: These intents are matched when the user says something that can't be matched to an intent in your Action. You can set individual reprompts for each intent and an exit message in the final intent.
  • actions.intent.NO_INPUT_1, actions.intent.NO_INPUT_2, actions.intent.NO_INPUT_FINAL: These intents are matched when there's no input from the user after 8 seconds. You can set individual reprompts for each intent and an exit message in the final intent.
  • actions.intent.CANCEL: This intent is matched when the user wants to exit your Actions during a conversation, such as a user saying, "I want to quit".
  • actions.intent.MEDIA_STATUS_FINISHED: This intent is matched and sent to your Action when a user completes media playback or skips to the next piece of media.
  • actions.intent.MEDIA_STATUS_PAUSED: This intent is matched and sent to your Action when a user pauses media playback in a media response.
  • actions.intent.MEDIA_STATUS_STOPPED: This intent is matched and sent to your Action when a user stops or exits media playback from a media response.
  • actions.intent.MEDIA_STATUS_FAILED: This intent is matched and sent to your Action when a media response's player fails to play.
  • Implicit invocations for verticals: These intents let you opt into an implicit invocation (invocation without using your display name) provided by Actions on Google for a specific class of activity. Currently, only actions.intent.PLAY_GAME is supported.
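You handle matched system intents the same way as custom intents. A minimal sketch of webhook handlers for two of the system intents above, assuming handler names (mediaFinished, noInputFinal) that you attach to the corresponding intents inside a scene:

```ts
// Hypothetical handler attached to actions.intent.MEDIA_STATUS_FINISHED.
app.handle('mediaFinished', (conv) => {
  conv.add('That was the last track. Want to hear the album again?');
});

// Hypothetical handler attached to actions.intent.NO_INPUT_FINAL.
app.handle('noInputFinal', (conv) => {
  // Exit gracefully after repeated silence.
  conv.add('Looks like now is a bad time. Talk to you later!');
});
```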