Helpers

Helpers tell the Assistant to momentarily take over the conversation to obtain common data such as a user's full name, a date and time, or a delivery address. When you request a helper, the Assistant presents a standard, consistent UI to users to obtain this information, so you don't have to design your own.

Usage overview

The general process for using a helper is described below for both Dialogflow and the Actions SDK, each followed by a brief code sketch. See the specific helper sections for more information about each helper.

Dialogflow

Node.js
  1. Call conv.ask() with the appropriate helper object. When you call a helper function, the client library sends a response to the Assistant that contains the corresponding helper intent. Based on the intent, the Assistant knows to carry out the dialog for the corresponding helper.
  2. Declare a Dialogflow intent that specifies an event that corresponds to one of the helper intents. See the helper intents section for a list of supported events. This intent doesn't need to have any User says phrases, because it's always triggered when the event is fired (when the Assistant is done carrying out the helper).
  3. When the Assistant returns the result of the helper in the subsequent request to your fulfillment, the corresponding Dialogflow intent is triggered, and you handle the intent normally.
JSON
  1. Specify the helper's intent in the systemIntent object of the google payload in your webhook response. When the Assistant receives the response, it knows that it should carry out the dialog for the helper. See helper intents for information on which intents you can request to be fulfilled.
  2. Declare a Dialogflow intent that specifies an event that corresponds to one of the helper intents. See the helper intents section for a list of supported events. This intent doesn't need to have any User says phrases, because it's always triggered when the event is fired.
  3. When the Assistant returns the result of the helper in the subsequent request to your fulfillment, parse the request for the data you need.
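
To make these steps concrete, here is a minimal sketch of the Dialogflow flow using the Node.js client library. The intent names, the response strings, and the Dialogflow intent ask_for_confirmation_followup (assumed to be configured in the Dialogflow console with the event actions_intent_CONFIRMATION) are illustrative; the Confirmation helper itself is described later on this page.

const {dialogflow, Confirmation} = require('actions-on-google');

const app = dialogflow();

// Step 1: request the helper from a normal intent handler.
app.intent('ask_for_confirmation', (conv) => {
  conv.ask(new Confirmation('Can you confirm?'));
});

// Step 3: this Dialogflow intent fires on the actions_intent_CONFIRMATION
// event once the helper completes; the result arrives as the third argument.
app.intent('ask_for_confirmation_followup', (conv, params, confirmationGranted) => {
  conv.ask(confirmationGranted ? 'Wonderful' : 'Maybe next time');
});

// Export `app` as your webhook handler (for example, as an HTTPS Cloud Function).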

Actions SDK

Node.js
  1. Call conv.ask() with the appropriate helper object. A helper asks the Assistant to fulfill one of the intents described in helper intents. When you call a helper, the client library sends a response to the Assistant that contains the corresponding helper intent. Based on the intent, the Assistant knows to carry out the dialog for the corresponding helper.
  2. When the Assistant returns the result of the helper in the subsequent request to your fulfillment, you receive the corresponding intent in the request. This lets you detect that a helper has returned a result. Use the corresponding getter function for the helper to obtain the data you need.
JSON
  1. Specify the helper's intent in the possibleIntents object when responding to the Assistant. When the Assistant receives the response, it knows that it should carry out the dialog for the helper. See helper intents for information on what intents you can request to be fulfilled.
  2. When the Assistant returns the result of the helper in the subsequent request to your fulfillment, parse the request for the data you need.
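
Likewise, a minimal sketch of the Actions SDK flow using the Node.js client library; the conversation wiring and response strings are illustrative.

const {actionssdk, Confirmation} = require('actions-on-google');

const app = actionssdk();

app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Hi! Say anything and I will ask you to confirm it.');
});

// Step 1: request the helper; the library sends a response containing
// the actions.intent.CONFIRMATION helper intent.
app.intent('actions.intent.TEXT', (conv, input) => {
  conv.ask(new Confirmation('Can you confirm?'));
});

// Step 2: the Assistant sends the helper intent back with the result,
// which the library passes as the third handler argument.
app.intent('actions.intent.CONFIRMATION', (conv, input, confirmationGranted) => {
  conv.ask(confirmationGranted ? 'Wonderful' : 'Maybe next time');
});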

Helper intents

The following helper intents can be requested for the Assistant to fulfill. If you're using Dialogflow, you also need to create a Dialogflow intent that specifies the corresponding event for the helper intent.

  • actions.intent.PERMISSION (event: actions_intent_PERMISSION): Obtain the user's full name, coarse device location, precise device location, or all three.
  • actions.intent.OPTION (event: actions_intent_OPTION): Receive the item selected from a list or carousel UI or, if the user doesn't select an item, speech or text input that matches a key in the list or carousel.
  • actions.intent.DATETIME (event: actions_intent_DATETIME): Obtain a date and time input from the user.
  • actions.intent.SIGN_IN (event: actions_intent_SIGN_IN): Request an account linking flow to link the user's account.
  • actions.intent.PLACE (event: actions_intent_PLACE): Obtain an address or saved location from the user.
  • actions.intent.DELIVERY_ADDRESS (event: actions_intent_DELIVERY_ADDRESS): Obtain a delivery address from the user.
  • actions.intent.CONFIRMATION (event: actions_intent_CONFIRMATION): Obtain a confirmation from the user (for example, an answer to a yes or no question).

The following sections describe the available helpers and the associated intent that you must request to use the helper.

User information

You can obtain the following user information by requesting fulfillment of the actions.intent.PERMISSION intent:

  • Display name
  • Given name
  • Family name
  • Coarse device location (zip code and city)
  • Precise device location (coordinates and street address)

Calling the helper

The following code example shows how you can call the helper using the client library. The JSON snippets show the corresponding webhook response.

Node.js
app.intent('ask_for_permissions_detailed', (conv) => {
  // Choose one or more supported permissions to request:
  // NAME, DEVICE_PRECISE_LOCATION, DEVICE_COARSE_LOCATION
  const options = {
    context: 'To address you by name and know your location',
    // Ask for more than one permission. User can authorize all or none.
    permissions: ['NAME', 'DEVICE_PRECISE_LOCATION'],
  };
  conv.ask(new Permission(options));
});
Dialogflow JSON
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "PLACEHOLDER"
            }
          }
        ]
      },
      "userStorage": "{\"data\":{}}",
      "systemIntent": {
        "intent": "actions.intent.PERMISSION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
          "optContext": "To address you by name and know your location",
          "permissions": [
            "NAME",
            "DEVICE_PRECISE_LOCATION"
          ]
        }
      }
    }
  },
  "outputContexts": [
    {
      "name": "/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{}"
      }
    }
  ]
}
Actions SDK JSON
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "PLACEHOLDER"
              }
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.PERMISSION",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
            "optContext": "To address you by name and know your location",
            "permissions": [
              "NAME",
              "DEVICE_PRECISE_LOCATION"
            ]
          }
        }
      ]
    }
  ],
  "conversationToken": "{\"data\":{}}",
  "userStorage": "{\"data\":{}}"
}

Getting the results of the helper

The following code example shows how to access the result of the helper using the client library. The JSON snippets show the request that your fulfillment receives, which contains the helper's result.

Node.js
app.intent('ask_for_permission_confirmation', (conv, params, permissionGranted) => {
  const {name} = conv.user;
  if (permissionGranted && name) {
    conv.ask(`I'll send the driver your way now, ${name.display}.`);
  } else {
    // The user declined or no name is available; keep the conversation going.
    conv.ask(`Okay, no problem.`);
  }
});
Dialogflow JSON
{
  "responseId": "",
  "queryResult": {
    "queryText": "",
    "action": "",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentText": "",
    "fulfillmentMessages": [],
    "outputContexts": [],
    "intent": {
      "name": "ask_for_permission_confirmation",
      "displayName": "ask_for_permission_confirmation"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": ""
  },
  "originalDetectIntentRequest": {
    "source": "google",
    "version": "2",
    "payload": {
      "isInSandbox": true,
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          },
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
          },
          {
            "name": "actions.capability.WEB_BROWSER"
          }
        ]
      },
      "inputs": [
        {
          "rawInputs": [],
          "intent": "",
          "arguments": [
            {
              "name": "PERMISSION",
              "rawText": "yes",
              "textValue": "true"
            }
          ]
        }
      ],
      "user": {},
      "conversation": {},
      "availableSurfaces": [
        {
          "capabilities": [
            {
              "name": "actions.capability.SCREEN_OUTPUT"
            },
            {
              "name": "actions.capability.AUDIO_OUTPUT"
            },
            {
              "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
            },
            {
              "name": "actions.capability.WEB_BROWSER"
            }
          ]
        }
      ]
    }
  },
  "session": ""
}
Actions SDK JSON
{
  "user": {},
  "device": {},
  "surface": {
    "capabilities": [
      {
        "name": "actions.capability.SCREEN_OUTPUT"
      },
      {
        "name": "actions.capability.AUDIO_OUTPUT"
      },
      {
        "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
      },
      {
        "name": "actions.capability.WEB_BROWSER"
      }
    ]
  },
  "conversation": {},
  "inputs": [
    {
      "rawInputs": [],
      "intent": "ask_for_permission_confirmation",
      "arguments": [
        {
          "name": "PERMISSION",
          "rawText": "yes",
          "textValue": "true"
        }
      ]
    }
  ],
  "availableSurfaces": [
    {
      "capabilities": [
        {
          "name": "actions.capability.SCREEN_OUTPUT"
        },
        {
          "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
          "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
        },
        {
          "name": "actions.capability.WEB_BROWSER"
        }
      ]
    }
  ]
}

The snippet above shows how to check if the user granted you the information and then access the data.

Once you obtain the user's information, we recommend that you persist this information, so you don't have to ask again. You can use UserStorage to store user information across conversations. Check out our Name Psychic sample to see UserStorage in action.
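
For example, a brief sketch of extending the result handler above to persist the granted name with userStorage; the storage key and response strings are illustrative.

app.intent('ask_for_permission_confirmation', (conv, params, permissionGranted) => {
  if (permissionGranted && conv.user.name) {
    // Persist the display name so later conversations can skip the helper.
    conv.user.storage.name = conv.user.name.display;
  }
  const name = conv.user.storage.name;
  conv.ask(name ? `Good to see you, ${name}.` : `Okay, no problem.`);
});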

List and carousel selection

You can display a list or carousel UI and obtain the user's selected item by requesting fulfillment of the actions.intent.OPTION intent.

Calling the helper

The following code example shows how you can call the helper using the client library. The JSON snippets show the corresponding webhook response.

Node.js
app.intent('ask_with_list', (conv) => {
  conv.ask('This is a simple response for a list.');
  conv.ask(new Suggestions([
    'Basic Card',
    'Browse Carousel',
    'Carousel',
    'List',
    'Media',
    'Suggestions',
  ]));
  // Create a list
  conv.ask(new List({
    title: 'Things to learn about',
    items: {
      // Add the first item to the list
      'MATH_AND_PRIME': {
        synonyms: [
          'math',
          'math and prime',
          'prime numbers',
          'prime',
        ],
        title: 'Title of the First List Item',
        description: '42 is an abundant number',
        image: new Image({
          url: 'https://example.com/math_and_prime.jpg',
          alt: 'Math & prime numbers',
        }),
      },
      // Add the second item to the list
      'EGYPT': {
        synonyms: [
          'religion',
          'egypt',
          'ancient egyptian',
        ],
        title: 'Ancient Egyptian religion',
        description: '42 gods ruled on the fate of the dead in the afterworld',
        image: new Image({
          url: 'http://example.com/egypt',
          alt: 'Egypt',
        }),
      },
      // Add the last item to the list
      'RECIPES': {
        synonyms: [
          'recipes',
          'recipe',
          '42 recipes',
        ],
        title: '42 recipes in 42 ingredients',
        description: 'A beautifully simple recipe',
        image: new Image({
          url: 'http://example.com/recipe',
          alt: 'Recipe',
        }),
      },
    },
  }));
});
Dialogflow JSON
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "This is a simple response for a list."
            }
          }
        ],
        "suggestions": [
          {
            "title": "Basic Card"
          },
          {
            "title": "Browse Carousel"
          },
          {
            "title": "Carousel"
          },
          {
            "title": "List"
          },
          {
            "title": "Media"
          },
          {
            "title": "Suggestions"
          }
        ]
      },
      "userStorage": "{\"data\":{}}",
      "systemIntent": {
        "intent": "actions.intent.OPTION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
          "listSelect": {
            "title": "Things to learn about",
            "items": [
              {
                "optionInfo": {
                  "key": "MATH_AND_PRIME",
                  "synonyms": [
                    "math",
                    "math and prime",
                    "prime numbers",
                    "prime"
                  ]
                },
                "description": "42 is an abundant number",
                "image": {
                  "url": "https://example.com/math_and_prime.jpg",
                  "accessibilityText": "Math & prime numbers"
                },
                "title": "Title of the First List Item"
              },
              {
                "optionInfo": {
                  "key": "EGYPT",
                  "synonyms": [
                    "religion",
                    "egypt",
                    "ancient egyptian"
                  ]
                },
                "description": "42 gods ruled on the fate of the dead in the afterworld",
                "image": {
                  "url": "http://example.com/egypt",
                  "accessibilityText": "Egypt"
                },
                "title": "Ancient Egyptian religion"
              },
              {
                "optionInfo": {
                  "key": "RECIPES",
                  "synonyms": [
                    "recipes",
                    "recipe",
                    "42 recipes"
                  ]
                },
                "description": "A beautifully simple recipe",
                "image": {
                  "url": "http://example.com/recipe",
                  "accessibilityText": "Recipe"
                },
                "title": "42 recipes in 42 ingredients"
              }
            ]
          }
        }
      }
    }
  },
  "outputContexts": [
    {
      "name": "/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{}"
      }
    }
  ]
}
Actions SDK JSON
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "This is a simple response for a list."
              }
            }
          ],
          "suggestions": [
            {
              "title": "Basic Card"
            },
            {
              "title": "Browse Carousel"
            },
            {
              "title": "Carousel"
            },
            {
              "title": "List"
            },
            {
              "title": "Media"
            },
            {
              "title": "Suggestions"
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.OPTION",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
            "listSelect": {
              "title": "Things to learn about",
              "items": [
                {
                  "optionInfo": {
                    "key": "MATH_AND_PRIME",
                    "synonyms": [
                      "math",
                      "math and prime",
                      "prime numbers",
                      "prime"
                    ]
                  },
                  "description": "42 is an abundant number",
                  "image": {
                    "url": "https://example.com/math_and_prime.jpg",
                    "accessibilityText": "Math & prime numbers"
                  },
                  "title": "Title of the First List Item"
                },
                {
                  "optionInfo": {
                    "key": "EGYPT",
                    "synonyms": [
                      "religion",
                      "egypt",
                      "ancient egyptian"
                    ]
                  },
                  "description": "42 gods ruled on the fate of the dead in the afterworld",
                  "image": {
                    "url": "http://example.com/egypt",
                    "accessibilityText": "Egypt"
                  },
                  "title": "Ancient Egyptian religion"
                },
                {
                  "optionInfo": {
                    "key": "RECIPES",
                    "synonyms": [
                      "recipes",
                      "recipe",
                      "42 recipes"
                    ]
                  },
                  "description": "A beautifully simple recipe",
                  "image": {
                    "url": "http://example.com/recipe",
                    "accessibilityText": "Recipe"
                  },
                  "title": "42 recipes in 42 ingredients"
                }
              ]
            }
          }
        }
      ]
    }
  ],
  "conversationToken": "{\"data\":{}}",
  "userStorage": "{\"data\":{}}"
}

Getting the results of the helper

The following code example shows how to access the result of the helper using the client library. The JSON snippets show the request that your fulfillment receives, which contains the helper's result.

When users select an item, the selected item value is passed to you as an argument. The snippets below show how to check which item the user selected.

Node.js
app.intent('ask_with_list_confirmation', (conv, params, option) => {
  // Get the user's selection
  // Compare the user's selections to each of the item's keys
  if (!option) {
    conv.ask('You did not select any item from the list or carousel');
  } else if (option === 'MATH_AND_PRIME') {
    conv.ask('42 is an abundant number because the sum of its...');
  } else if (option === 'EGYPT') {
    conv.ask('42 gods who ruled on the fate of the dead in the ');
  } else if (option === 'RECIPES') {
    conv.ask(`Here's a beautiful simple recipe that's full `);
  } else {
    conv.ask('You selected an unknown item from the list, or carousel');
  }
});
Dialogflow JSON
{
  "responseId": "",
  "queryResult": {
    "queryText": "",
    "action": "",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentText": "",
    "fulfillmentMessages": [],
    "outputContexts": [],
    "intent": {
      "name": "ask_with_list_confirmation",
      "displayName": "ask_with_list_confirmation"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": ""
  },
  "originalDetectIntentRequest": {
    "source": "google",
    "version": "2",
    "payload": {
      "isInSandbox": true,
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          },
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
          },
          {
            "name": "actions.capability.WEB_BROWSER"
          }
        ]
      },
      "inputs": [
        {
          "rawInputs": [],
          "intent": "",
          "arguments": [
            {
              "name": "OPTION",
              "textValue": "MATH_AND_PRIME"
            }
          ]
        }
      ],
      "user": {},
      "conversation": {},
      "availableSurfaces": [
        {
          "capabilities": [
            {
              "name": "actions.capability.SCREEN_OUTPUT"
            },
            {
              "name": "actions.capability.AUDIO_OUTPUT"
            },
            {
              "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
            },
            {
              "name": "actions.capability.WEB_BROWSER"
            }
          ]
        }
      ]
    }
  },
  "session": ""
}
Actions SDK JSON
{
  "user": {},
  "device": {},
  "surface": {
    "capabilities": [
      {
        "name": "actions.capability.SCREEN_OUTPUT"
      },
      {
        "name": "actions.capability.AUDIO_OUTPUT"
      },
      {
        "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
      },
      {
        "name": "actions.capability.WEB_BROWSER"
      }
    ]
  },
  "conversation": {},
  "inputs": [
    {
      "rawInputs": [],
      "intent": "ask_with_list_confirmation",
      "arguments": [
        {
          "name": "OPTION",
          "textValue": "MATH_AND_PRIME"
        }
      ]
    }
  ],
  "availableSurfaces": [
    {
      "capabilities": [
        {
          "name": "actions.capability.SCREEN_OUTPUT"
        },
        {
          "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
          "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
        },
        {
          "name": "actions.capability.WEB_BROWSER"
        }
      ]
    }
  ]
}

Date and Time

You can obtain a date and time from users by requesting fulfillment of the actions.intent.DATETIME intent.

Calling the helper

The following code example shows how you can call the helper using the client library. The JSON snippets show the corresponding webhook response.

You can specify custom prompts when asking the user for a date and time by passing an options object when creating the DateTime helper.

Node.js
app.intent('ask_for_datetime_detail', (conv) => {
  const options = {
    prompts: {
      initial: 'When would you like to schedule the appointment?',
      date: 'What day was that?',
      time: 'What time?',
    },
  };
  conv.ask(new DateTime(options));
});
Dialogflow JSON
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "PLACEHOLDER"
            }
          }
        ]
      },
      "userStorage": "{\"data\":{}}",
      "systemIntent": {
        "intent": "actions.intent.DATETIME",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.DateTimeValueSpec",
          "dialogSpec": {
            "requestDatetimeText": "When would you like to schedule the appoinment?",
            "requestDateText": "What day was that?",
            "requestTimeText": "What time?"
          }
        }
      }
    }
  },
  "outputContexts": [
    {
      "name": "/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{}"
      }
    }
  ]
}
Actions SDK JSON
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "PLACEHOLDER"
              }
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.DATETIME",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.DateTimeValueSpec",
            "dialogSpec": {
              "requestDatetimeText": "When would you like to schedule the appoinment?",
              "requestDateText": "What day was that?",
              "requestTimeText": "What time?"
            }
          }
        }
      ]
    }
  ],
  "conversationToken": "{\"data\":{}}",
  "userStorage": "{\"data\":{}}"
}

Getting the results of the helper

The following code example shows how to access the result of the helper using the client library. The JSON snippets show the request that your fulfillment receives, which contains the helper's result.

The snippet below shows how to check whether the user provided a date and time and how to access the result.

Node.js
app.intent('ask_for_datetime_confirmation', (conv, params, datetime) => {
  // The collected date and time (if any) arrive as the third argument.
  if (datetime) {
    conv.ask('Alright, date set.');
  } else {
    conv.ask(`I'm having a hard time finding an appointment.`);
  }
});
Dialogflow JSON
{
  "responseId": "",
  "queryResult": {
    "queryText": "",
    "action": "",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentText": "",
    "fulfillmentMessages": [],
    "outputContexts": [],
    "intent": {
      "name": "ask_for_datetime_confirmation",
      "displayName": "ask_for_datetime_confirmation"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": ""
  },
  "originalDetectIntentRequest": {
    "source": "google",
    "version": "2",
    "payload": {
      "isInSandbox": true,
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          },
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
          },
          {
            "name": "actions.capability.WEB_BROWSER"
          }
        ]
      },
      "inputs": [
        {
          "rawInputs": [],
          "intent": "",
          "arguments": [
            {
              "name": "PERMISSION",
              "rawText": "yes",
              "textValue": "true"
            }
          ]
        }
      ],
      "user": {},
      "conversation": {},
      "availableSurfaces": [
        {
          "capabilities": [
            {
              "name": "actions.capability.SCREEN_OUTPUT"
            },
            {
              "name": "actions.capability.AUDIO_OUTPUT"
            },
            {
              "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
            },
            {
              "name": "actions.capability.WEB_BROWSER"
            }
          ]
        }
      ]
    }
  },
  "session": ""
}
Actions SDK JSON
{
  "user": {},
  "device": {},
  "surface": {
    "capabilities": [
      {
        "name": "actions.capability.SCREEN_OUTPUT"
      },
      {
        "name": "actions.capability.AUDIO_OUTPUT"
      },
      {
        "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
      },
      {
        "name": "actions.capability.WEB_BROWSER"
      }
    ]
  },
  "conversation": {},
  "inputs": [
    {
      "rawInputs": [],
      "intent": "ask_for_datetime_confirmation",
      "arguments": [
        {
          "name": "DATETIME",
          "datetimeValue": {
            "date": {
              "year": 2018,
              "month": 5,
              "day": 12
            },
            "time": {
              "hours": 10
            }
          }
        }
      ]
    }
  ],
  "availableSurfaces": [
    {
      "capabilities": [
        {
          "name": "actions.capability.SCREEN_OUTPUT"
        },
        {
          "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
          "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
        },
        {
          "name": "actions.capability.WEB_BROWSER"
        }
      ]
    }
  ]
}
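
If you need the individual components of the result, the returned value contains nested date and time objects. The sketch below assumes the field names of the DateTime argument value (year, month, day, hours, minutes):

app.intent('ask_for_datetime_confirmation', (conv, params, datetime) => {
  if (!datetime) {
    return conv.ask(`I'm having a hard time finding an appointment.`);
  }
  // Assumed shape: {date: {year, month, day}, time: {hours, minutes}}.
  const {month, day} = datetime.date;
  const {hours, minutes} = datetime.time;
  conv.ask(`Great, see you on ${month}/${day} at ${hours}:${minutes || '00'}.`);
});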

Account Sign-in

You can have users sign in to accounts associated with your service by requesting fulfillment of the actions.intent.SIGN_IN intent. Users cannot sign in over voice through OAuth.

Calling the helper

The following code example shows how you can call the helper using the client library. The JSON snippets show the corresponding webhook response.

Node.js
app.intent('ask_for_sign_in_detail', (conv) => {
  conv.ask(new SignIn());
});
Dialogflow JSON
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "PLACEHOLDER"
            }
          }
        ]
      },
      "userStorage": "{\"data\":{}}",
      "systemIntent": {
        "intent": "actions.intent.SIGN_IN",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.SignInValueSpec"
        }
      }
    }
  },
  "outputContexts": [
    {
      "name": "/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{}"
      }
    }
  ]
}
Actions SDK JSON
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "PLACEHOLDER"
              }
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.SIGN_IN",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.SignInValueSpec"
          }
        }
      ]
    }
  ],
  "conversationToken": "{\"data\":{}}",
  "userStorage": "{\"data\":{}}"
}

Getting the results of the helper

The following code example shows how to access the result of the helper using the client library. The JSON snippets show the request that your fulfillment receives, which contains the helper's result.

The snippet below shows how to check whether the user successfully signed in and how to access the result.

Node.js
app.intent('ask_for_sign_in_confirmation', (conv, params, signin) => {
  if (signin.status !== 'OK') {
    return conv.ask('You need to sign in before using the app.');
  }
  // const access = conv.user.access.token;
  // possibly do something with access token
  return conv.ask('Great! Thanks for signing in.');
});
Dialogflow JSON
{
  "responseId": "",
  "queryResult": {
    "queryText": "",
    "action": "",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentText": "",
    "fulfillmentMessages": [],
    "outputContexts": [],
    "intent": {
      "name": "ask_for_sign_in_confirmation",
      "displayName": "ask_for_sign_in_confirmation"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": ""
  },
  "originalDetectIntentRequest": {
    "source": "google",
    "version": "2",
    "payload": {
      "isInSandbox": true,
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          },
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
          },
          {
            "name": "actions.capability.WEB_BROWSER"
          }
        ]
      },
      "inputs": [
        {
          "rawInputs": [],
          "intent": "",
          "arguments": [
            {
              "name": "SIGN_IN",
              "extension": {
                "@type": "type.googleapis.com/google.actions.v2.SignInValue",
                "status": "OK"
              }
            }
          ]
        }
      ],
      "user": {},
      "conversation": {},
      "availableSurfaces": [
        {
          "capabilities": [
            {
              "name": "actions.capability.SCREEN_OUTPUT"
            },
            {
              "name": "actions.capability.AUDIO_OUTPUT"
            },
            {
              "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
            },
            {
              "name": "actions.capability.WEB_BROWSER"
            }
          ]
        }
      ]
    }
  },
  "session": ""
}
Actions SDK JSON
{
  "user": {},
  "device": {},
  "surface": {
    "capabilities": [
      {
        "name": "actions.capability.SCREEN_OUTPUT"
      },
      {
        "name": "actions.capability.AUDIO_OUTPUT"
      },
      {
        "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
      },
      {
        "name": "actions.capability.WEB_BROWSER"
      }
    ]
  },
  "conversation": {},
  "inputs": [
    {
      "rawInputs": [],
      "intent": "ask_for_sign_in_confirmation",
      "arguments": [
        {
          "name": "SIGN_IN",
          "extension": {
            "@type": "type.googleapis.com/google.actions.v2.SignInValue",
            "status": "OK"
          }
        }
      ]
    }
  ],
  "availableSurfaces": [
    {
      "capabilities": [
        {
          "name": "actions.capability.SCREEN_OUTPUT"
        },
        {
          "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
          "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
        },
        {
          "name": "actions.capability.WEB_BROWSER"
        }
      ]
    }
  ]
}
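
After a successful sign-in, the access token for the linked account is available as conv.user.access.token, which you would typically use to call your own service. The sketch below assumes a hypothetical https://example.com/profile endpoint and the node-fetch HTTP client:

const fetch = require('node-fetch');

app.intent('ask_for_sign_in_confirmation', async (conv, params, signin) => {
  if (signin.status !== 'OK') {
    return conv.ask('You need to sign in before using the app.');
  }
  // Hypothetical call to your own backend with the linked account's token.
  const response = await fetch('https://example.com/profile', {
    headers: {Authorization: `Bearer ${conv.user.access.token}`},
  });
  const profile = await response.json(); // assumed response shape
  conv.ask(`Great! Thanks for signing in, ${profile.givenName}.`);
});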

Place and Location

You can obtain a location from users by requesting fulfillment of the actions.intent.PLACE intent. This helper is used to prompt the user for addresses and other locations, including any home/work/contact locations that they've saved with Google.

Saved locations will only return the address, not the associated mapping (e.g. "123 Main St" as opposed to "HOME = 123 Main St").

Calling the helper

The following code example shows how you can call the helper using the client library. The JSON snippets show the corresponding webhook response.

Node.js
app.intent('ask_for_place_detail', (conv) => {
  const options = {
    context: 'To find a place to pick you up',
    prompt: 'Where would you like to be picked up?',
  };
  conv.ask(new Place(options));
});
Dialogflow JSON
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "PLACEHOLDER"
            }
          }
        ]
      },
      "userStorage": "{\"data\":{}}",
      "systemIntent": {
        "intent": "actions.intent.SIGN_IN",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.SignInValueSpec"
        }
      }
    }
  },
  "outputContexts": [
    {
      "name": "/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{}"
      }
    }
  ]
}
Actions SDK JSON
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "PLACEHOLDER"
              }
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.PLACE",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.PlaceValueSpec",
            "dialogSpec": {
              "extension": {
                "@type": "type.googleapis.com/google.actions.v2.PlaceValueSpec.PlaceDialogSpec",
                "permissionContext": "To find a place to pick you up",
                "requestPrompt": "Where would you like to be picked up?"
              }
            }
          }
        }
      ]
    }
  ],
  "conversationToken": "{\"data\":{}}",
  "userStorage": "{\"data\":{}}"
}

Getting the results of the helper

The following code example shows how to access the result of the helper using the client library. The JSON snippets show the request that your fulfillment receives, which contains the helper's result.

The snippet below shows how to check whether the user provided a location and how to access the result.

Node.js
app.intent('ask_for_place_confirmation', (conv, params, place, status) => {
  if (!place) {
    return conv.ask(`Sorry, I couldn't get a location from you.`);
  }
  // The place also carries formattedAddress and coordinates fields.
  const {name} = place;
  conv.ask(`Alright! I'll send the car to ${name}.`);
});
Dialogflow JSON
{
  "responseId": "",
  "queryResult": {
    "queryText": "",
    "action": "",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentText": "",
    "fulfillmentMessages": [],
    "outputContexts": [],
    "intent": {
      "name": "ask_for_place_confirmation",
      "displayName": "ask_for_place_confirmation"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": ""
  },
  "originalDetectIntentRequest": {
    "source": "google",
    "version": "2",
    "payload": {
      "isInSandbox": true,
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          },
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
          },
          {
            "name": "actions.capability.WEB_BROWSER"
          }
        ]
      },
      "inputs": [
        {
          "rawInputs": [],
          "intent": "",
          "arguments": [
            {
              "name": "PLACE",
              "placeValue": {
                "coordinates": {
                  "latitude": 37.3911801,
                  "longitude": -122.0810139
                },
                "name": "Cascal",
                "formattedAddress": "Cascal, 400 Castro Street, Mountain View, CA 94041, United States"
              }
            }
          ]
        }
      ],
      "user": {},
      "conversation": {},
      "availableSurfaces": [
        {
          "capabilities": [
            {
              "name": "actions.capability.SCREEN_OUTPUT"
            },
            {
              "name": "actions.capability.AUDIO_OUTPUT"
            },
            {
              "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
            },
            {
              "name": "actions.capability.WEB_BROWSER"
            }
          ]
        }
      ]
    }
  },
  "session": ""
}
Actions SDK JSON
{
  "user": {},
  "device": {},
  "surface": {
    "capabilities": [
      {
        "name": "actions.capability.SCREEN_OUTPUT"
      },
      {
        "name": "actions.capability.AUDIO_OUTPUT"
      },
      {
        "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
      },
      {
        "name": "actions.capability.WEB_BROWSER"
      }
    ]
  },
  "conversation": {},
  "inputs": [
    {
      "rawInputs": [],
      "intent": "ask_for_place_confirmation",
      "arguments": [
        {
          "name": "PLACE",
          "placeValue": {
            "coordinates": {
              "latitude": 37.3911801,
              "longitude": -122.0810139
            },
            "name": "Cascal",
            "formattedAddress": "Cascal, 400 Castro Street, Mountain View, CA 94041, United States"
          }
        }
      ]
    }
  ],
  "availableSurfaces": [
    {
      "capabilities": [
        {
          "name": "actions.capability.SCREEN_OUTPUT"
        },
        {
          "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
          "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
        },
        {
          "name": "actions.capability.WEB_BROWSER"
        }
      ]
    }
  ]
}
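
Besides the name, the returned place also carries formattedAddress and coordinates, which you can hand off to your own systems. The sketch below is illustrative; dispatchCar is a hypothetical function in your backend:

app.intent('ask_for_place_confirmation', (conv, params, place, status) => {
  if (!place) {
    return conv.ask(`Sorry, I couldn't get a location from you.`);
  }
  const {name, formattedAddress, coordinates} = place;
  // dispatchCar(coordinates.latitude, coordinates.longitude); // hypothetical
  conv.ask(`Alright! I'll send the car to ${name} at ${formattedAddress}.`);
});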

Confirmation

You can ask the user for a generic confirmation (a yes or no question) and get the resulting answer by requesting fulfillment of the actions.intent.CONFIRMATION intent. The grammar for "yes" and "no" naturally expands to phrases like "yeah" and "nope", making it usable in many situations.

Calling the helper

The following code example shows how you can call the helper using the client library. The JSON snippets show the corresponding webhook response.

You can specify a custom prompt when asking the user for a confirmation.

Node.js
app.intent('ask_for_confirmation_detail', (conv) => {
  conv.ask(new Confirmation('Can you confirm?'));
});
Dialogflow JSON
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "PLACEHOLDER"
            }
          }
        ]
      },
      "userStorage": "{\"data\":{}}",
      "systemIntent": {
        "intent": "actions.intent.CONFIRMATION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.ConfirmationValueSpec",
          "dialogSpec": {
            "requestConfirmationText": "Can you confirm?"
          }
        }
      }
    }
  },
  "outputContexts": [
    {
      "name": "/contexts/_actions_on_google",
      "lifespanCount": 99,
      "parameters": {
        "data": "{}"
      }
    }
  ]
}
Actions SDK JSON
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "PLACEHOLDER"
              }
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.CONFIRMATION",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.ConfirmationValueSpec",
            "dialogSpec": {
              "requestConfirmationText": "Can you confirm?"
            }
          }
        }
      ]
    }
  ],
  "conversationToken": "{\"data\":{}}",
  "userStorage": "{\"data\":{}}"
}

Getting the results of the helper

The following code example shows how to access the result of the helper using the client library. The JSON snippets show the request that your fulfillment receives, which contains the helper's result.

After the user responds to the helper, you receive a request to your fulfillment and can check whether the user confirmed or not.

Node.js
app.intent('ask_for_confirmation_confirmation', (conv, params, confirmationGranted) => {
  return conv.ask(confirmationGranted ? 'Wonderful' : 'Maybe next time');
});
Dialogflow JSON
{
  "responseId": "",
  "queryResult": {
    "queryText": "",
    "action": "",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentText": "",
    "fulfillmentMessages": [],
    "outputContexts": [],
    "intent": {
      "name": "ask_for_confirmation_confirmation",
      "displayName": "ask_for_confirmation_confirmation"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": ""
  },
  "originalDetectIntentRequest": {
    "source": "google",
    "version": "2",
    "payload": {
      "isInSandbox": true,
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          },
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
          },
          {
            "name": "actions.capability.WEB_BROWSER"
          }
        ]
      },
      "inputs": [
        {
          "rawInputs": [],
          "intent": "",
          "arguments": [
            {
              "name": "CONFIRMATION",
              "boolValue": true
            }
          ]
        }
      ],
      "user": {},
      "conversation": {},
      "availableSurfaces": [
        {
          "capabilities": [
            {
              "name": "actions.capability.SCREEN_OUTPUT"
            },
            {
              "name": "actions.capability.AUDIO_OUTPUT"
            },
            {
              "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
            },
            {
              "name": "actions.capability.WEB_BROWSER"
            }
          ]
        }
      ]
    }
  },
  "session": ""
}
Actions SDK JSON
{
  "user": {},
  "device": {},
  "surface": {
    "capabilities": [
      {
        "name": "actions.capability.SCREEN_OUTPUT"
      },
      {
        "name": "actions.capability.AUDIO_OUTPUT"
      },
      {
        "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
      },
      {
        "name": "actions.capability.WEB_BROWSER"
      }
    ]
  },
  "conversation": {},
  "inputs": [
    {
      "rawInputs": [],
      "intent": "ask_for_confirmation_confirmation",
      "arguments": [
        {
          "name": "CONFIRMATION",
          "boolValue": true
        }
      ]
    }
  ],
  "availableSurfaces": [
    {
      "capabilities": [
        {
          "name": "actions.capability.SCREEN_OUTPUT"
        },
        {
          "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
          "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
        },
        {
          "name": "actions.capability.WEB_BROWSER"
        }
      ]
    }
  ]
}