Agents and Actions

In API.AI, agents are containers for a group of actions that you build with intents. An action can have a single intent or many, depending on how simple or complex it is.

See the API.AI documentation for information on how to create agents.

Major components of an intent

The following descriptions show how the major components of an intent work together to capture user input, process it, and return a response to the user.

"User says" phrases

These are phrases that users say to trigger the intent. You specify several example phrases and API.AI uses machine learning to detect similar phrases. For example, if you specify:

"about some recipes for cake"

API.AI might also automatically detect:

"find me a few cake recipes" and "show me some recipes for cake"

When "User says" phrases are detected, API.AI parses the phrases and triggers your business logic. Sometimes phrases contain specific parameters that you want to parse. For example, for the phrase:

"get me recipes for homemade cannoli by Julia Child"

your action might be interested in parsing the following parameters to find corresponding recipes:

"homemade cannoli" and "Julia Child"

You can specify parameters to parse from user input, which API.AI will then provide to your fulfillment logic (a webhook in API.AI) so that it can process them.
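As a minimal sketch of what that fulfillment logic might look like: API.AI's v1 webhook format posts JSON with the parsed parameters under `result.parameters`. The parameter names `dish` and `chef` below are illustrative assumptions, not names defined by API.AI.

```python
# Sketch of extracting parsed parameters from an API.AI webhook request.
# Assumes the v1 webhook payload shape, where parsed parameters arrive
# under result.parameters; the parameter names are illustrative.

def extract_parameters(request_json):
    """Return the parameters API.AI parsed from the user's phrase."""
    result = request_json.get("result", {})
    return result.get("parameters", {})

# Example payload for "get me recipes for homemade cannoli by Julia Child"
request_json = {
    "result": {
        "resolvedQuery": "get me recipes for homemade cannoli by Julia Child",
        "parameters": {"dish": "homemade cannoli", "chef": "Julia Child"},
    }
}

params = extract_parameters(request_json)
print(params["dish"], "|", params["chef"])
```

Your webhook would then use these values to look up recipes before building a response.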


Entities

Entities define how to extract data from "User says" phrases into standard data types. For example, you can tell API.AI to bias its logic towards extracting a parameter from a particular user input phrase, such as:

"find me homemade cannoli recipes that I can cook in less than 30 minutes".

In this example, you would declare "30 minutes" as a `@sys.duration` entity. Any phrase that users say for this particular intent that contains a time duration will be parsed appropriately. See the API.AI entities documentation for more information.
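To sketch what your fulfillment might do with such a value: assume the `@sys.duration` entity resolves to an object with an amount and a unit, such as `{"amount": 30, "unit": "min"}` (the exact unit strings here are assumptions for illustration).

```python
# Sketch of handling a parsed @sys.duration parameter. Assumes the
# entity resolves to an object like {"amount": 30, "unit": "min"};
# the unit strings below are illustrative, not an exhaustive list.

UNIT_SECONDS = {"s": 1, "min": 60, "h": 3600, "day": 86400}

def duration_to_seconds(duration):
    """Convert a parsed duration object into seconds for comparison."""
    return duration["amount"] * UNIT_SECONDS[duration["unit"]]

# "... in less than 30 minutes" -> {"amount": 30, "unit": "min"}
cook_time = {"amount": 30, "unit": "min"}
print(duration_to_seconds(cook_time))  # 1800
```

Normalizing durations to a single unit makes it easy to filter recipes by cooking time.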


Responses

API.AI lets you define a response after an action is processed. You can define a response directly in API.AI for very simple intents that don't use a webhook, but the most common model is to return a response with a webhook after some processing of parameter data. For example:

"get me recipes for homemade cannoli by Julia Child"

can have two parameters, `homemade cannoli` and `Julia Child`, that a webhook uses to look up a recipe and return a response, such as:

"Here's a homemade cannoli recipe by Julia Child. Do you want to hear it now?"
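A sketch of a webhook building that response, assuming the API.AI v1 response shape with `speech` (spoken aloud) and `displayText` (shown on screen) fields; the recipe lookup itself is stubbed out:

```python
# Sketch of a webhook building a response after looking up a recipe.
# Assumes the API.AI v1 response shape ("speech" is spoken aloud,
# "displayText" is shown on screen); the actual lookup is stubbed.

def build_response(dish, chef):
    """Return a response payload offering the recipe that was found."""
    text = "Here's a {} recipe by {}. Do you want to hear it now?".format(dish, chef)
    return {"speech": text, "displayText": text}

response = build_response("homemade cannoli", "Julia Child")
print(response["speech"])
```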


Contexts

An important concept in API.AI is the notion of contexts: strings that represent the current context of a user's request. Contexts are helpful for differentiating phrases that may be vague or have different meanings depending on the user's preferences, geographic location, or the current state of the conversation. For example, a user input of "yes" might serve as a confirmation to quit the conversation action or to confirm some input. Contexts allow you to differentiate the actions you take with this same "yes" input across different intents. You can also use contexts to store parameter values and maintain state across different intents.
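To illustrate the "yes" example, here is a sketch of dispatching on the active context. The context names `confirm-quit` and `confirm-recipe` are hypothetical; in API.AI, the active contexts arrive with the webhook request.

```python
# Sketch of using contexts to disambiguate the same "yes" input.
# Context names ("confirm-quit", "confirm-recipe") are illustrative;
# in API.AI, active contexts arrive with the webhook request.

def handle_yes(active_contexts):
    """Pick a reply for a "yes" input based on which context is active."""
    if "confirm-quit" in active_contexts:
        return "Goodbye!"
    if "confirm-recipe" in active_contexts:
        return "Great, here's the recipe."
    return "Sorry, yes to what?"

print(handle_yes(["confirm-recipe"]))
```

The same "yes" thus routes to different logic depending on conversation state, without needing separate trigger phrases.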

Actions on Google Integration Settings

Agents that you build for Actions on Google have a special API.AI settings screen that allows you to set agent-wide settings such as the default welcome intent, start intents, and OAuth linking.

See the API.AI documentation.

Configuration Limits

The following limits apply when creating agents in API.AI.

Item                                   Maximum
Start intents                          10
Custom entities                        10
Entries per entity                     100
Synonyms per entry                     5
Intents                                10
Expressions per intent                 10
UTF characters per user expression     128
Characters in intent name              255