Build conversation models

A conversation model defines what users can say to your Actions and how your Actions respond to users. The main building blocks of your conversation model are intents, types, scenes, and prompts. After one of your Actions is invoked, Google Assistant hands the user off to that Action, and the Action begins a conversation with the user, based on your conversation model, which consists of:

  • Valid user requests - To define what users can say to your Actions, you create a collection of intents that augment the Assistant NLU, so it can understand requests that are specific to your Actions. Each intent defines training phrases that describe what users can say to match that intent. The Assistant NLU expands these training phrases to include similar phrases, and the aggregation of those phrases results in the intent's language model.

  • Action logic and responses - Scenes process intents, carry out required logic, and generate prompts to return to the user.

Figure 1. A conversation model consists of intents, types, scenes, and prompts that define your user experience. Intents that are eligible for invocation are also valid for matching in your conversations.

Define valid user requests

To define what users can say to your Actions, you use a combination of intents and types. User intents and types let you augment the Assistant NLU with your own language models. System intents and types let you take advantage of built-in language models and event detection like users wanting to quit your Action or Assistant detecting no input at all.

Create user intents

User intents let you define your own training phrases that define what users might say to your Actions. The Assistant NLU uses these phrases to train itself to understand what your users say. When users say something that matches a user intent's language model, Assistant matches the intent and notifies your Action, so you can carry out logic and respond back to users.

  1. Create a file called sdk/custom/intents/<intent_name>.yaml, replacing <intent_name> with the name of the intent. The content of the file should be:

    {}
    
  2. Create a file called sdk/custom/intents/en/<intent_name>.yaml, replacing <intent_name> with the name of the intent and the locale en with another locale if you're adding training phrases for a language other than English.

  3. Add training phrases using the trainingPhrases field. Add as many as you can so the NLU learns to handle variations of the same request. The content of the file should look like the following; the phrases shown are example ways to express confirmation.

    trainingPhrases:
    - Yes
    - Another
    - Yeah
    - Ok
    ...
    

Create system intents

System intents let you take advantage of pre-defined language models for common events, like users wanting to quit your Action or user input timing out. To create system intents:

For each system intent you want to create, add to your project's sdk/custom/global folder a file called <intent_name>.yaml, and define how to handle the intent.

Intent name and triggering condition:

  • actions.intent.CANCEL - User wants to cancel the interaction.
  • actions.intent.NO_INPUT_1 - User didn't provide input for the first time.
  • actions.intent.NO_INPUT_2 - User didn't provide input for the second consecutive time.
  • actions.intent.NO_INPUT_FINAL - User didn't provide input for the third consecutive time, which ends the Action.
  • actions.intent.NO_MATCH_1 - User input didn't match any of the Action's intents for the first time.
  • actions.intent.NO_MATCH_2 - User input didn't match any of the Action's intents for the second consecutive time.
  • actions.intent.NO_MATCH_FINAL - User input didn't match any of the Action's intents for the third consecutive time, which ends the Action.

As an example, the content of sdk/custom/global/actions.intent.NO_MATCH_FINAL.yaml could look like:

handler:
  staticPrompt:
    candidates:
    - promptResponse:
        firstSimple:
          variants:
          - speech: Sorry, I still didn't catch that. The Action will now quit.
            text: Sorry, I still didn't catch that. The Action will now quit.
transitionToScene: actions.scene.END_CONVERSATION
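
As a sketch, a first no-input handler (for example, in sdk/custom/global/actions.intent.NO_INPUT_1.yaml) could follow the same schema but reprompt instead of quitting; leaving out a transition here is an assumption that keeps the conversation open so the user can try again:

handler:
  staticPrompt:
    candidates:
    - promptResponse:
        firstSimple:
          variants:
          - speech: Sorry, I didn't hear anything. Could you say that again?
            text: Sorry, I didn't hear anything. Could you say that again?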

Create custom types

Custom types let you create your own type specification to train the NLU to understand a set of values that should map to a single key.

To create a custom type:

  1. Create a file called sdk/custom/types/<type_name>.yaml, replacing <type_name> with the name of the type.
  2. To create a type that matches a set of words and their synonyms, and that defines two values the type can resolve to (based on which words and synonyms matched), make the content look like the following:

    synonym:
      entities:
        value1: {}
        value2: {}
      matchType: EXACT_MATCH
    
  3. To define the words and synonyms that should match each value, create a file called sdk/custom/types/en/<type_name>.yaml, replacing <type_name> with the name of the type and the locale en with another locale if you're adding synonyms for a language other than English.

  4. Add synonyms for each value. The content should look like:

    synonym:
      entities:
        value1:
          synonyms:
          - value one
          - first value
          - the first
        value2:
          synonyms:
          - value two
          - second value
          - the second
    
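Once defined, a custom type can be referenced by name wherever a type is expected, such as from an intent parameter. A sketch (the parameter name chosen_value and the intent file are illustrative; the parameters schema matches the one used later in this guide):

    # sdk/custom/intents/<intent_name>.yaml
    parameters:
    - name: chosen_value
      type:
        name: type_name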

Build Action logic and responses

The Assistant NLU matches user requests to intents, so that your Action can process them in scenes. Scenes are powerful logic executors that let you process events during a conversation.

Create a scene

The following sections describe how to create scenes and define functionality for each scene's lifecycle stage.

To create a scene:

  1. Create a file called sdk/custom/scenes/<scene_name>.yaml, replacing <scene_name> with the name of the scene.

Define one-time setup

When a scene first becomes active, you can carry out one-time tasks in the On enter stage. The On enter stage executes only once, and it's the only stage that doesn't run inside a scene's execution loop.

  1. Open <scene_name>.yaml and use the onEnter field to add a message that is sent to the user when the scene loads. The content of the file should look like this:

    onEnter:
      staticPrompt:
        candidates:
        - promptResponse:
            firstSimple:
              variants:
              - speech: This message is sent to the user when the scene loads
    
  2. To transition to the newly created scene when your Action is invoked, add a transition to the scene to sdk/custom/global/actions.intent.MAIN.yaml:

    transitionToScene: scene_name
    ...
    

Check conditions

Conditions let you check slot filling, session storage, user storage, and home storage parameters to control scene execution flow.

As an example, you can use conditions to check that a slot is filled before triggering a webhook event and/or sending prompts to the user.

  1. Open sdk/custom/scenes/<scene_name>.yaml, replacing <scene_name> with the name of the scene you want to add the slot filling check to.
  2. Use the conditionalEvents field to add a conditional event that sends a prompt to the user after the slot is filled (scene.slots.status == "FINAL"). The content of the file should look like:
    conditionalEvents:
    - condition: scene.slots.status == "FINAL"
      handler:
        staticPrompt:
          candidates:
          - promptResponse:
              firstSimple:
                variants:
                - speech: Thanks for the info!
    
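Conditions aren't limited to slot status; per the storage options listed above, a condition can also check a session parameter. A sketch (the parameter name order and the != null comparison are assumptions based on the condition syntax):

    conditionalEvents:
    - condition: session.params.order != null
      handler:
        staticPrompt:
          candidates:
          - promptResponse:
              firstSimple:
                variants:
                - speech: Looks like you already have an order in progress.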

Define slot filling

Slots let you extract typed parameters from user input.

As an example, to take an order, you need the user to specify which option they choose from the set your service supports.

You can create a type for the order where you define the supported set of options, and then add a slot for the order type in the scene that handles the interaction to place an order.

  1. Create the file /sdk/custom/types/order.yaml to define the order type. The content of the file should look like:

    synonym:
      entities:
        boba: {}
        matcha: {}
      matchType: EXACT_MATCH
    
  2. Create the file /sdk/custom/types/en/order.yaml to create English synonyms for your options. If you need to change the synonyms for a different language, adjust the locale portion of the file path accordingly (for example, replacing en with es for Spanish). The content of the file should look like:

    synonym:
      entities:
        boba:
          synonyms:
          - Boba Special
          - boba special
          - Boba
          - boba
        matcha:
          synonyms:
          - Matcha
          - Matcha Tea
          - matcha
          - matcha tea
    

For certain slot types, like those related to transactions or user engagement, you can additionally configure the slot. Slot configurations can change the conversational experience for users based on the properties you provide.

To configure a slot, provide properties in a JSON object in your fulfillment, referenced as a session parameter. You can find the available properties for each slot type in the Actions Builder JSON reference. For example, the actions.type.DeliveryAddressValue slot type corresponds to the reference content for the DeliveryAddressValue slot.

Slot value mapping

In many cases, a previous intent match can include parameters that partially or entirely fill a corresponding scene's slot values. In these cases, any slot whose name matches an intent parameter name is filled with that parameter's value during the scene's slot filling.

For example, if a user matches an intent to order a beverage by saying "I want to order a large vanilla coffee", the slots for size, flavor, and beverage type are considered filled in the corresponding scene if that scene defines slots with those names.
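
As a sketch of that name matching, an intent parameter and a scene slot that share the name size fill each other automatically (the file names and the size type are illustrative):

    # sdk/custom/intents/order_beverage.yaml
    parameters:
    - name: size
      type:
        name: size

    # sdk/custom/scenes/Order.yaml
    slots:
    - name: size
      required: true
      type:
        name: size

When the order_beverage intent matches with its size parameter set, the scene's size slot is considered filled and its prompt is skipped.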

Add a slot prompt

Your Action needs to prompt the user for information about the type of drink they want to order. This is also an opportunity to include suggestion chips for valid options.

Following on the order example, add a slot for the order to the scene that handles the interaction to place an order, defining the prompt that will be used to ask the user for the missing information.

  1. Open sdk/custom/scenes/<scene_name>.yaml, replacing <scene_name> with the name of the scene that handles the interaction to place an order.
  2. Use the slots field to add a slot for the order type, and prompt and suggestions for the slot filling interaction. Also add a condition to define what should happen when the slot is filled. The content of the file should look like:

    conditionalEvents:
    - condition: scene.slots.status == "FINAL"
      handler:
        webhookHandler: Place_Order
    ...
    slots:
    - commitBehavior:
        writeSessionParam: order
      name: order
      promptSettings:
        initialPrompt:
          staticPrompt:
            candidates:
            - promptResponse:
                firstSimple:
                  variants:
                  - speech: What would you like to order?
                suggestions:
                - title: "Boba Special"
                - title: "Matcha Tea"
      required: true
      type:
        name: order
    ...
    

Process input

During this stage, you can have the Assistant NLU match user input to intents. You can scope intent matching to a specific scene by adding the desired intents to the scene. This lets you control conversation flow by telling Assistant to match specific intents when specific scenes are active.

Following on the order example, after you initially greet the user you want them to be able to complete the task of ordering something.

  1. Create a file called /sdk/custom/intents/order.yaml, and add a parameter of the order type you defined earlier. The content of the file should look like:

    parameters:
    - name: order
      type:
        name: order
    
  2. Create a file called /sdk/custom/intents/en/order.yaml to provide English training phrases for intent recognition. If you need to change the training phrases for a different language, adjust the locale portion of the file path accordingly (for example, replacing en with es for Spanish). The content of the file should look like:

    trainingPhrases:
    - One ($order 'matcha tea' auto=true) please
    - I'd like a ($order 'matcha tea' auto=true)
    - I'd like to order a ($order 'matcha tea' auto=true)
    - I'd like to order
    
  3. Open /sdk/custom/scenes/<scene_name>.yaml, replacing <scene_name> with the name of the scene that handles the interaction to place an order, and add an intent event that transitions to a different scene that handles order confirmation. The content of the file should look like:

    ...
    intentEvents:
    - intent: order
      transitionToScene: Confirm_Order
    ...
    
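The Confirm_Order scene referenced above isn't defined in this guide; a minimal sketch could read the order value back using the session parameter written by the slot (this assumes the slot's writeSessionParam: order from the earlier example, and that prompts can reference it as $session.params.order):

    # sdk/custom/scenes/Confirm_Order.yaml
    onEnter:
      staticPrompt:
        candidates:
        - promptResponse:
            firstSimple:
              variants:
              - speech: You'd like the $session.params.order. Is that right?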

Transition to other scenes

To transition to another scene:

  1. Open sdk/custom/global/actions.intent.MAIN.yaml if you want to add a transition to the Action's main intent, or sdk/custom/scenes/<scene_name>.yaml, replacing <scene_name> with the name of the scene you want to add the transition to.
  2. Define the transition using the transitionToScene field. As an example, sdk/custom/global/actions.intent.MAIN.yaml should look like:

    transitionToScene: TargetScene
    ...