Dialogflow API . projects . agent . sessions

Instance Methods

contexts()

Returns the contexts Resource.

entityTypes()

Returns the entityTypes Resource.

deleteContexts(parent=None, x__xgafv=None)

Deletes all active contexts in the specified session.

detectIntent(session=None, body=None, x__xgafv=None)

Processes a natural language query and returns structured, actionable data as a result.

Method Details

deleteContexts(parent=None, x__xgafv=None)
Deletes all active contexts in the specified session.

Args:
  parent: string, Required. The name of the session to delete all contexts from. Format:
`projects/<Project ID>/agent/sessions/<Session ID>` or `projects/<Project
ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session
ID>`.
If `Environment ID` is not specified, we assume default 'draft' environment.
If `User ID` is not specified, we assume default '-' user. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
      # empty messages in your APIs. A typical example is to use it as the request
      # or the response type of an API method. For instance:
      #
      #     service Foo {
      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      #     }
      #
      # The JSON representation for `Empty` is an empty JSON object `{}`.
  }
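For illustration, a call might look like the following Python sketch. It assumes a `service` client already built elsewhere with `googleapiclient.discovery.build('dialogflow', 'v2', credentials=...)`; the project and session IDs are hypothetical placeholders:

```python
# Hypothetical project/session IDs; `service` is assumed to have been
# created with googleapiclient.discovery.build('dialogflow', 'v2', ...).
project_id = "my-project"
session_id = "my-session"

# Build the required `parent` path (default 'draft' environment, '-' user).
parent = "projects/{}/agent/sessions/{}".format(project_id, session_id)

def delete_all_contexts(service, parent):
    """Deletes all active contexts; the response is an empty object {}."""
    return service.projects().agent().sessions().deleteContexts(
        parent=parent).execute()
```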
detectIntent(session=None, body=None, x__xgafv=None)
Processes a natural language query and returns structured, actionable data
as a result. This method is not idempotent, because it may cause contexts
and session entity types to be updated, which in turn might affect
results of future queries.

Args:
  session: string, Required. The name of the session this query is sent to. Format:
`projects/<Project ID>/agent/sessions/<Session ID>`, or
`projects/<Project ID>/agent/environments/<Environment ID>/users/<User
ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume
default 'draft' environment. If `User ID` is not specified, we assume the
default '-' user. It is up to the API caller to choose an appropriate
`Session ID` and `User ID`. They can be a random number or some type of
user and session identifiers (preferably hashed). The length of the
`Session ID` and `User ID` must not exceed 36 characters. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request to detect user's intent.
    "outputAudioConfigMask": "A String", # Mask for output_audio_config indicating which settings in this
        # request-level config should override speech synthesizer settings defined at
        # agent-level.
        # 
        # If unspecified or empty, output_audio_config replaces the agent-level
        # config in its entirety.
    "outputAudioConfig": { # Instructs the speech synthesizer on how to generate the output audio content. # Instructs the speech synthesizer how to generate the output
        # audio. If this field is not set and agent-level speech synthesizer is not
        # configured, no output audio is generated.
        # If this audio config is supplied in a request, it overrides all existing
        # text-to-speech settings applied to the agent.
      "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. If not
          # provided, then the synthesizer will use the default sample rate based on
          # the audio encoding. If this is different from the voice's natural sample
          # rate, then the synthesizer will honor this request by converting to the
          # desired sample rate (which might result in worse audio quality).
      "audioEncoding": "A String", # Required. Audio encoding of the synthesized audio content.
      "synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Configuration of how speech should be synthesized.
        "effectsProfileId": [ # Optional. An identifier which selects 'audio effects' profiles that are
            # applied on (post synthesized) text to speech. Effects are applied on top of
            # each other in the order they are given.
          "A String",
        ],
        "voice": { # Description of which voice to use for speech synthesis. # Optional. The desired voice of the synthesized audio.
          "ssmlGender": "A String", # Optional. The preferred gender of the voice. If not set, the service will
              # choose a voice based on the other parameters such as language_code and
              # name. Note that this is only a preference, not a requirement. If a
              # voice of the appropriate gender is not available, the synthesizer should
              # substitute a voice with a different gender rather than failing the request.
          "name": "A String", # Optional. The name of the voice. If not set, the service will choose a
              # voice based on the other parameters such as language_code and
              # ssml_gender.
        },
        "speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
            # native speed supported by the specific voice. 2.0 is twice as fast, and
            # 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any
            # other values < 0.25 or > 4.0 will return an error.
        "volumeGainDb": 3.14, # Optional. Volume gain (in dB) of the normal native volume supported by the
            # specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of
            # 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB)
            # will play at approximately half the amplitude of the normal native signal
            # amplitude. A value of +6.0 (dB) will play at approximately twice the
            # amplitude of the normal native signal amplitude. We strongly recommend not
            # to exceed +10 (dB) as there's usually no effective increase in loudness for
            # any value greater than that.
        "pitch": 3.14, # Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20
            # semitones from the original pitch. -20 means decrease 20 semitones from the
            # original pitch.
      },
    },
    "queryInput": { # Represents the query input. # Required. The input specification. It can be set to:
        #
        # 1.  an audio config which instructs the speech recognizer how to
        #     process the speech audio,
        #
        # 2.  a conversational query in the form of text, or
        #
        # 3.  an event that specifies which intent to trigger.
      "text": { # Represents the natural language text to be processed. # The natural language text to be processed.
        "text": "A String", # Required. The UTF-8 encoded natural language text to be processed.
            # Text length must not exceed 256 characters.
        "languageCode": "A String", # Required. The language of this conversational query. See [Language
            # Support](https://cloud.google.com/dialogflow/docs/reference/language)
            # for a list of the currently supported language codes. Note that queries in
            # the same session do not necessarily need to specify the same language.
      },
      "event": { # Events allow for matching intents by event name instead of the natural # The event to be processed.
          # language input. For instance, input `<event: { name: "welcome_event",
          # parameters: { name: "Sam" } }>` can trigger a personalized welcome response.
          # The parameter `name` may be used by the agent in the response:
          # `"Hello #welcome_event.name! What can I do for you today?"`.
        "languageCode": "A String", # Required. The language of this query. See [Language
            # Support](https://cloud.google.com/dialogflow/docs/reference/language)
            # for a list of the currently supported language codes. Note that queries in
            # the same session do not necessarily need to specify the same language.
        "name": "A String", # Required. The unique identifier of the event.
        "parameters": { # The collection of parameters associated with the event.
            #
            # Depending on your protocol or client library language, this is a
            # map, associative array, symbol table, dictionary, or JSON object
            # composed of a collection of (MapKey, MapValue) pairs:
            #
            # -   MapKey type: string
            # -   MapKey value: parameter name
            # -   MapValue type:
            #     -   If parameter's entity type is a composite entity: map
            #     -   Else: string or number, depending on parameter value type
            # -   MapValue value:
            #     -   If parameter's entity type is a composite entity:
            #         map from composite entity property names to property values
            #     -   Else: parameter value
          "a_key": "", # Properties of the object.
        },
      },
      "audioConfig": { # Instructs the speech recognizer how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
        "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do
            # translations. See [Language
            # Support](https://cloud.google.com/dialogflow/docs/reference/language)
            # for a list of the currently supported language codes. Note that queries in
            # the same session do not necessarily need to specify the same language.
        "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
        "phraseHints": [ # A list of strings containing words and phrases that the speech
            # recognizer should recognize with higher likelihood.
            #
            # See [the Cloud Speech
            # documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)
            # for more details.
            #
            # This field is deprecated. Please use [speech_contexts]() instead. If you
            # specify both [phrase_hints]() and [speech_contexts](), Dialogflow will
            # treat the [phrase_hints]() as a single additional [SpeechContext]().
          "A String",
        ],
        "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in
            # StreamingRecognitionResult with information about the recognized speech
            # words, e.g. start and end time offsets. If false or unspecified, Speech
            # doesn't return any word-level information.
        "sampleRateHertz": 42, # Required. Sample rate (in Hertz) of the audio content sent in the query.
            # Refer to
            # [Cloud Speech API
            # documentation](https://cloud.google.com/speech-to-text/docs/basics) for
            # more details.
        "modelVariant": "A String", # Which variant of the Speech model to use.
        "model": "A String", # Which Speech model to select for the given request. Select the
            # model best suited to your domain to get best results. If a model is not
            # explicitly specified, then we auto-select a model based on the parameters
            # in the InputAudioConfig.
            # If enhanced speech model is enabled for the agent and an enhanced
            # version of the specified model for the language does not exist, then the
            # speech is recognized using the standard version of the specified model.
            # Refer to
            # [Cloud Speech API
            # documentation](https://cloud.google.com/speech-to-text/docs/basics#select-model)
            # for more details.
        "speechContexts": [ # Context information to assist speech recognition.
            #
            # See [the Cloud Speech
            # documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)
            # for more details.
          { # Hints for the speech recognizer to help with recognition in a specific
              # conversation state.
            "phrases": [ # Optional. A list of strings containing words and phrases that the speech
                # recognizer should recognize with higher likelihood.
                #
                # This list can be used to:
                # * improve accuracy for words and phrases you expect the user to say,
                #   e.g. typical commands for your Dialogflow agent
                # * add additional words to the speech recognizer vocabulary
                # * ...
                #
                # See the [Cloud Speech
                # documentation](https://cloud.google.com/speech-to-text/quotas) for usage
                # limits.
              "A String",
            ],
            "boost": 3.14, # Optional. Boost for this context compared to other contexts:
                #
                # * If the boost is positive, Dialogflow will increase the probability that
                #   the phrases in this context are recognized over similar sounding phrases.
                # * If the boost is unspecified or non-positive, Dialogflow will not apply
                #   any boost.
                #
                # Dialogflow recommends that you use boosts in the range (0, 20] and that you
                # find a value that fits your use case with binary search.
          },
        ],
        "singleUtterance": True or False, # If `false` (default), recognition does not cease until the
            # client closes the stream.
            # If `true`, the recognizer will detect a single spoken utterance in input
            # audio. Recognition ceases when it detects the audio's voice has
            # stopped or paused. In this case, once a detected intent is received, the
            # client should close the stream and start a new request with a new stream as
            # needed.
            # Note: This setting is relevant only for streaming methods.
            # Note: When specified, InputAudioConfig.single_utterance takes precedence
            # over StreamingDetectIntentRequest.single_utterance.
      },
    },
    "inputAudio": "A String", # The natural language speech audio to be processed. This field
        # should be populated if and only if `query_input` is set to an input audio config.
        # A single request can contain up to 1 minute of speech audio data.
    "queryParams": { # Represents the parameters of the conversational query. # The parameters of this query.
      "geoLocation": { # An object representing a latitude/longitude pair. This is expressed as a pair # The geo location of this conversational query.
          # of doubles representing degrees latitude and degrees longitude. Unless
          # specified otherwise, this must conform to the
          # [WGS84 standard](http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf).
          # Values must be within normalized ranges.
        "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
        "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
      },
      "contexts": [ # The collection of contexts to be activated before this query is
          # executed.
        { # Represents a context.
          "name": "A String", # Required. The unique identifier of the context. Format:
              # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
              # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
              # ID>/sessions/<Session ID>/contexts/<Context ID>`.
              #
              # The `Context ID` is always converted to lowercase, may only contain
              # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
              #
              # If `Environment ID` is not specified, we assume default 'draft'
              # environment. If `User ID` is not specified, we assume default '-' user.
              #
              # The following context names are reserved for internal use by Dialogflow.
              # You should not use these contexts or create contexts with these names:
              #
              # * `__system_counters__`
              # * `*_id_dialog_context`
              # * `*_dialog_params_size`
          "parameters": { # Optional. The collection of parameters associated with this context.
              #
              # Depending on your protocol or client library language, this is a
              # map, associative array, symbol table, dictionary, or JSON object
              # composed of a collection of (MapKey, MapValue) pairs:
              #
              # -   MapKey type: string
              # -   MapKey value: parameter name
              # -   MapValue type:
              #     -   If parameter's entity type is a composite entity: map
              #     -   Else: string or number, depending on parameter value type
              # -   MapValue value:
              #     -   If parameter's entity type is a composite entity:
              #         map from composite entity property names to property values
              #     -   Else: parameter value
            "a_key": "", # Properties of the object.
          },
          "lifespanCount": 42, # Optional. The number of conversational query requests after which the
              # context expires. The default is `0`. If set to `0`, the context expires
              # immediately. Contexts expire automatically after 20 minutes if there
              # are no matching queries.
        },
      ],
      "sentimentAnalysisRequestConfig": { # Configures the types of sentiment analysis to perform. # Configures the type of sentiment analysis to perform. If not
          # provided, sentiment analysis is not performed.
        "analyzeQueryTextSentiment": True or False, # Instructs the service to perform sentiment analysis on
            # `query_text`. If not provided, sentiment analysis is not performed on
            # `query_text`.
      },
      "resetContexts": True or False, # Specifies whether to delete all contexts in the current session
          # before the new ones are activated.
      "timeZone": "A String", # The time zone of this conversational query from the
          # [time zone database](https://www.iana.org/time-zones), e.g.,
          # America/New_York, Europe/Paris. If not provided, the time zone specified in
          # agent settings is used.
      "payload": { # This field can be used to pass custom data to your webhook.
          # Arbitrary JSON objects are supported.
          # If supplied, the value is used to populate the
          # `WebhookRequest.original_detect_intent_request.payload`
          # field sent to your webhook.
        "a_key": "", # Properties of the object.
      },
      "sessionEntityTypes": [ # Additional session entity types to replace or extend developer
          # entity types with. The entity synonyms apply to all languages and persist
          # for the session of this query.
        { # Represents a session entity type.
            #
            # Extends or replaces a custom entity type at the user session level (we
            # refer to the entity types defined at the agent level as "custom entity
            # types").
            #
            # Note: session entity types apply to all queries, regardless of the language.
          "entities": [ # Required. The collection of entities associated with this session entity
              # type.
            { # An **entity entry** for an associated entity type.
              "synonyms": [ # Required. A collection of value synonyms. For example, if the entity type
                  # is *vegetable*, and `value` is *scallions*, a synonym could be *green
                  # onions*.
                  #
                  # For `KIND_LIST` entity types:
                  #
                  # *   This collection must contain exactly one synonym equal to `value`.
                "A String",
              ],
              "value": "A String", # Required. The primary value associated with this entity entry.
                  # For example, if the entity type is *vegetable*, the value could be
                  # *scallions*.
                  #
                  # For `KIND_MAP` entity types:
                  #
                  # *   A reference value to be used in place of synonyms.
                  #
                  # For `KIND_LIST` entity types:
                  #
                  # *   A string that can contain references to other entity types (with or
                  #     without aliases).
            },
          ],
          "name": "A String", # Required. The unique identifier of this session entity type. Format:
              # `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type
              # Display Name>`, or `projects/<Project ID>/agent/environments/<Environment
              # ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display
              # Name>`.
              # If `Environment ID` is not specified, we assume default 'draft'
              # environment. If `User ID` is not specified, we assume default '-' user.
              #
              # `<Entity Type Display Name>` must be the display name of an existing entity
              # type in the same agent that will be overridden or supplemented.
          "entityOverrideMode": "A String", # Required. Indicates whether the additional data should override or
              # supplement the custom entity type definition.
        },
      ],
    },
  }
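In practice, only one of the `queryInput` branches (`text`, `event`, or `audioConfig`) is populated per request. A minimal body for a plain text query, with a placeholder utterance and language code, might look like:

```python
# Minimal detect-intent request body for a text query; values are placeholders.
body = {
    "queryInput": {
        "text": {
            "text": "book a table for two",  # must not exceed 256 characters
            "languageCode": "en-US",
        },
    },
}
```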

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format
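Putting the pieces together, an event-triggered call might be sketched as follows. The session path and event name are hypothetical, and `service` is assumed to be a client built with `googleapiclient.discovery.build('dialogflow', 'v2', credentials=...)`:

```python
# Hypothetical session path and event; adjust to your own project.
session = "projects/my-project/agent/sessions/my-session"

body = {
    "queryInput": {
        "event": {
            "name": "welcome_event",        # intent matched by event name
            "parameters": {"name": "Sam"},  # usable in the agent's response
            "languageCode": "en-US",
        },
    },
}

def detect_intent(service, session, body):
    """Sends the query; returns the DetectIntentResponse as a dict."""
    return service.projects().agent().sessions().detectIntent(
        session=session, body=body).execute()
```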

Returns:
  An object of the form:

    { # The message returned from the DetectIntent method.
    "outputAudio": "A String", # The audio data bytes encoded as specified in the request.
        # Note: The output audio is generated based on the values of default platform
        # text responses found in the `query_result.fulfillment_messages` field. If
        # multiple default text responses exist, they will be concatenated when
        # generating audio. If no default platform text responses exist, the
        # generated audio content will be empty.
    "webhookStatus": { # The `Status` type defines a logical error model that is suitable for # Specifies the status of the webhook request.
        # different programming environments, including REST APIs and RPC APIs. It is
        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
        # three pieces of data: error code, error message, and error details.
        #
        # You can find out more about this error model and how to work with it in the
        # [API Design Guide](https://cloud.google.com/apis/design/errors).
      "message": "A String", # A developer-facing error message, which should be in English. Any
          # user-facing error message should be localized and sent in the
          # google.rpc.Status.details field, or localized by the client.
      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
      "details": [ # A list of messages that carry the error details.  There is a common set of
          # message types for APIs to use.
        {
          "a_key": "", # Properties of the object. Contains field @type with type URL.
        },
      ],
    },
    "outputAudioConfig": { # Instructs the speech synthesizer on how to generate the output audio content. # The config used by the speech synthesizer to generate the output audio.
        # If this audio config is supplied in a request, it overrides all existing
        # text-to-speech settings applied to the agent.
      "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. If not
          # provided, then the synthesizer will use the default sample rate based on
          # the audio encoding. If this is different from the voice's natural sample
          # rate, then the synthesizer will honor this request by converting to the
          # desired sample rate (which might result in worse audio quality).
      "audioEncoding": "A String", # Required. Audio encoding of the synthesized audio content.
      "synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Configuration of how speech should be synthesized.
        "effectsProfileId": [ # Optional. An identifier which selects 'audio effects' profiles that are
            # applied on (post synthesized) text to speech. Effects are applied on top of
            # each other in the order they are given.
          "A String",
        ],
        "voice": { # Description of which voice to use for speech synthesis. # Optional. The desired voice of the synthesized audio.
          "ssmlGender": "A String", # Optional. The preferred gender of the voice. If not set, the service will
              # choose a voice based on the other parameters such as language_code and
              # name. Note that this is only a preference, not a requirement. If a
              # voice of the appropriate gender is not available, the synthesizer should
              # substitute a voice with a different gender rather than failing the request.
          "name": "A String", # Optional. The name of the voice. If not set, the service will choose a
              # voice based on the other parameters such as language_code and
              # ssml_gender.
        },
        "speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
            # native speed supported by the specific voice. 2.0 is twice as fast, and
            # 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any
            # other values < 0.25 or > 4.0 will return an error.
        "volumeGainDb": 3.14, # Optional. Volume gain (in dB) of the normal native volume supported by the
            # specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of
            # 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB)
            # will play at approximately half the amplitude of the normal native signal
            # amplitude. A value of +6.0 (dB) will play at approximately twice the
            # amplitude of the normal native signal amplitude. We strongly recommend not
            # to exceed +10 (dB) as there's usually no effective increase in loudness for
            # any value greater than that.
        "pitch": 3.14, # Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20
            # semitones from the original pitch. -20 means decrease 20 semitones from the
            # original pitch.
      },
    },
    "queryResult": { # Represents the result of a conversational query or event processing. # The selected results of the conversational query or event processing.
        # See `alternative_query_results` for additional potential results.
      "languageCode": "A String", # The language that was triggered during intent detection.
          # See [Language
          # Support](https://cloud.google.com/dialogflow/docs/reference/language)
          # for a list of the currently supported language codes.
      "fulfillmentText": "A String", # The text to be pronounced to the user or shown on the screen.
          # Note: This is a legacy field, `fulfillment_messages` should be preferred.
      "allRequiredParamsPresent": True or False, # This field is set to:
          #
          # - `false` if the matched intent has required parameters and not all of
          #    the required parameter values have been collected.
          # - `true` if all required parameter values have been collected, or if the
          #    matched intent doesn't contain any required parameters.
      "parameters": { # The collection of extracted parameters.
          #
          # Depending on your protocol or client library language, this is a
          # map, associative array, symbol table, dictionary, or JSON object
          # composed of a collection of (MapKey, MapValue) pairs:
          #
          # -   MapKey type: string
          # -   MapKey value: parameter name
          # -   MapValue type:
          #     -   If parameter's entity type is a composite entity: map
          #     -   Else: string or number, depending on parameter value type
          # -   MapValue value:
          #     -   If parameter's entity type is a composite entity:
          #         map from composite entity property names to property values
          #     -   Else: parameter value
        "a_key": "", # Properties of the object.
      },
      "fulfillmentMessages": [ # The collection of rich messages to present to the user.
        { # A rich response message.
            # Corresponds to the intent `Response` field in the Dialogflow console.
            # For more information, see
            # [Rich response
            # messages](https://cloud.google.com/dialogflow/docs/intents-rich-messages).
          "simpleResponses": { # The collection of simple response candidates. # The voice and text-only responses for Actions on Google.
              # This message in `QueryResult.fulfillment_messages` and
              # `WebhookResponse.fulfillment_messages` should contain only one
              # `SimpleResponse`.
            "simpleResponses": [ # Required. The list of simple responses.
              { # The simple response message containing speech or text.
                "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
                    # response to the user in the SSML format. Mutually exclusive with
                    # text_to_speech.
                "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
                    # speech output. Mutually exclusive with ssml.
                "displayText": "A String", # Optional. The text to display.
              },
            ],
          },
          "quickReplies": { # The quick replies response message. # The quick replies response.
            "quickReplies": [ # Optional. The collection of quick replies.
              "A String",
            ],
            "title": "A String", # Optional. The title of the collection of quick replies.
          },
          "platform": "A String", # Optional. The platform that this message is intended for.
          "text": { # The text response message. # The text response.
            "text": [ # Optional. The collection of the agent's responses.
              "A String",
            ],
          },
          "image": { # The image response message. # The image response.
            "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                # e.g., screen readers.
            "imageUri": "A String", # Optional. The public URI to an image file.
          },
          "mediaContent": { # The media content card for Actions on Google. # The media content card for Actions on Google.
            "mediaObjects": [ # Required. List of media objects.
              { # Response media object for media content card.
                "contentUrl": "A String", # Required. URL where the media is stored.
                "description": "A String", # Optional. Description of media card.
                "name": "A String", # Required. Name of media card.
                "largeImage": { # The image response message. # Optional. Image to display above media content.
                  "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                      # e.g., screen readers.
                  "imageUri": "A String", # Optional. The public URI to an image file.
                },
                "icon": { # The image response message. # Optional. Icon to display above media content.
                  "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                      # e.g., screen readers.
                  "imageUri": "A String", # Optional. The public URI to an image file.
                },
              },
            ],
            "mediaType": "A String", # Optional. What type of media is the content (ie "audio").
          },
          "suggestions": { # The collection of suggestions. # The suggestion chips for Actions on Google.
            "suggestions": [ # Required. The list of suggested replies.
              { # The suggestion chip message that the user can tap to quickly post a reply
                  # to the conversation.
                "title": "A String", # Required. The text shown the in the suggestion chip.
              },
            ],
          },
          "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # The link out suggestion chip for Actions on Google.
              # or website associated with this agent.
            "uri": "A String", # Required. The URI of the app or site to open when the user taps the
                # suggestion chip.
            "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
          },
          "browseCarouselCard": { # Browse Carousel Card for Actions on Google. # Browse carousel card for Actions on Google.
              # https://developers.google.com/actions/assistant/responses#browsing_carousel
            "items": [ # Required. List of items in the Browse Carousel Card. Minimum of two
                # items, maximum of ten.
              { # Browsing carousel tile.
                "image": { # The image response message. # Optional. Hero image for the carousel item.
                  "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                      # e.g., screen readers.
                  "imageUri": "A String", # Optional. The public URI to an image file.
                },
                "footer": "A String", # Optional. Text that appears at the bottom of the Browse Carousel
                    # Card. Maximum of one line of text.
                "description": "A String", # Optional. Description of the carousel item. Maximum of four lines of
                    # text.
                "openUriAction": { # Actions on Google action to open a given url. # Required. Action to present to the user.
                  "url": "A String", # Required. URL
                  "urlTypeHint": "A String", # Optional. Specifies the type of viewer that is used when opening
                      # the URL. Defaults to opening via web browser.
                },
                "title": "A String", # Required. Title of the carousel item. Maximum of two lines of text.
              },
            ],
            "imageDisplayOptions": "A String", # Optional. Settings for displaying the image. Applies to every image in
                # items.
          },
          "basicCard": { # The basic card message. Useful for displaying information. # The basic card response for Actions on Google.
            "buttons": [ # Optional. The collection of card buttons.
              { # The button object that appears at the bottom of a card.
                "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
                  "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
                },
                "title": "A String", # Required. The title of the button.
              },
            ],
            "subtitle": "A String", # Optional. The subtitle of the card.
            "image": { # The image response message. # Optional. The image for the card.
              "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                  # e.g., screen readers.
              "imageUri": "A String", # Optional. The public URI to an image file.
            },
            "formattedText": "A String", # Required, unless image is present. The body text of the card.
            "title": "A String", # Optional. The title of the card.
          },
          "tableCard": { # Table card for Actions on Google. # Table card for Actions on Google.
            "rows": [ # Optional. Rows in this table of data.
              { # Row of TableCard.
                "cells": [ # Optional. List of cells that make up this row.
                  { # Cell of TableCardRow.
                    "text": "A String", # Required. Text in this cell.
                  },
                ],
                "dividerAfter": True or False, # Optional. Whether to add a visual divider after this row.
              },
            ],
            "subtitle": "A String", # Optional. Subtitle to the title.
            "title": "A String", # Required. Title of the card.
            "image": { # The image response message. # Optional. Image which should be displayed on the card.
              "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                  # e.g., screen readers.
              "imageUri": "A String", # Optional. The public URI to an image file.
            },
            "columnProperties": [ # Optional. Display properties for the columns in this table.
              { # Column properties for TableCard.
                "header": "A String", # Required. Column heading.
                "horizontalAlignment": "A String", # Optional. Defines text alignment for all cells in this column.
              },
            ],
            "buttons": [ # Optional. List of buttons for the card.
              { # The button object that appears at the bottom of a card.
                "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
                  "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
                },
                "title": "A String", # Required. The title of the button.
              },
            ],
          },
          "carouselSelect": { # The card for presenting a carousel of options to select from. # The carousel card response for Actions on Google.
            "items": [ # Required. Carousel items.
              { # An item in the carousel.
                "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
                    # dialog.
                  "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
                      # item in dialog.
                    "A String",
                  ],
                  "key": "A String", # Required. A unique key that will be sent back to the agent if this
                      # response is given.
                },
                "image": { # The image response message. # Optional. The image to display.
                  "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                      # e.g., screen readers.
                  "imageUri": "A String", # Optional. The public URI to an image file.
                },
                "description": "A String", # Optional. The body text of the card.
                "title": "A String", # Required. Title of the carousel item.
              },
            ],
          },
          "listSelect": { # The card for presenting a list of options to select from. # The list card response for Actions on Google.
            "items": [ # Required. List items.
              { # An item in the list.
                "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
                    # dialog.
                  "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
                      # item in dialog.
                    "A String",
                  ],
                  "key": "A String", # Required. A unique key that will be sent back to the agent if this
                      # response is given.
                },
                "image": { # The image response message. # Optional. The image to display.
                  "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                      # e.g., screen readers.
                  "imageUri": "A String", # Optional. The public URI to an image file.
                },
                "description": "A String", # Optional. The main text describing the item.
                "title": "A String", # Required. The title of the list item.
              },
            ],
            "subtitle": "A String", # Optional. Subtitle of the list.
            "title": "A String", # Optional. The overall title of the list.
          },
          "payload": { # A custom platform-specific response.
            "a_key": "", # Properties of the object.
          },
          "card": { # The card response message. # The card response.
            "buttons": [ # Optional. The collection of card buttons.
              { # Contains information about a button.
                "text": "A String", # Optional. The text to show on the button.
                "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
                    # open.
              },
            ],
            "title": "A String", # Optional. The title of the card.
            "subtitle": "A String", # Optional. The subtitle of the card.
            "imageUri": "A String", # Optional. The public URI to an image file for the card.
          },
        },
      ],
      "speechRecognitionConfidence": 3.14, # The Speech recognition confidence between 0.0 and 1.0. A higher number
          # indicates an estimated greater likelihood that the recognized words are
          # correct. The default of 0.0 is a sentinel value indicating that confidence
          # was not set.
          #
          # This field is not guaranteed to be accurate or set. In particular this
          # field isn't set for StreamingDetectIntent since the streaming endpoint has
          # separate confidence estimates per portion of the audio in
          # StreamingRecognitionResult.
      "intentDetectionConfidence": 3.14, # The intent detection confidence. Values range from 0.0
          # (completely uncertain) to 1.0 (completely certain).
          # This value is for informational purpose only and is only used to
          # help match the best intent within the classification threshold.
          # This value may change for the same end-user expression at any time due to a
          # model retraining or change in implementation.
          # If there are multiple `knowledge_answers` messages, this value is set to
          # the greatest `knowledgeAnswers.match_confidence` value in the list.
      "action": "A String", # The action name from the matched intent.
      "intent": { # Represents an intent. # The intent that matched the conversational query. Some, not
          # all fields are filled in this message, including but not limited to:
          # `name`, `display_name`, `end_interaction` and `is_fallback`.
          # Intents convert a number of user expressions or patterns into an action. An
          # action is an extraction of a user command or sentence semantics.
        "isFallback": True or False, # Optional. Indicates whether this is a fallback intent.
        "mlDisabled": True or False, # Optional. Indicates whether Machine Learning is disabled for the intent.
            # Note: If the `ml_disabled` setting is true, then this intent is not
            # taken into account during inference in `ML ONLY` match mode. Also,
            # auto-markup in the UI is turned off.
        "displayName": "A String", # Required. The name of this intent.
        "name": "A String", # Optional. The unique identifier of this intent.
            # Required for Intents.UpdateIntent and Intents.BatchUpdateIntents
            # methods.
            # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
        "parameters": [ # Optional. The collection of parameters associated with the intent.
          { # Represents intent parameters.
            "mandatory": True or False, # Optional. Indicates whether the parameter is required. That is,
                # whether the intent cannot be completed without collecting the parameter
                # value.
            "name": "A String", # The unique identifier of this parameter.
            "defaultValue": "A String", # Optional. The default value to use when the `value` yields an empty
                # result.
                # Default values can be extracted from contexts by using the following
                # syntax: `#context_name.parameter_name`.
            "entityTypeDisplayName": "A String", # Optional. The name of the entity type, prefixed with `@`, that
                # describes values of the parameter. If the parameter is
                # required, this must be provided.
            "value": "A String", # Optional. The definition of the parameter value. It can be:
                # - a constant string,
                # - a parameter value defined as `$parameter_name`,
                # - an original parameter value defined as `$parameter_name.original`,
                # - a parameter value from some context defined as
                #   `#context_name.parameter_name`.
            "prompts": [ # Optional. The collection of prompts that the agent can present to the
                # user in order to collect a value for the parameter.
              "A String",
            ],
            "isList": True or False, # Optional. Indicates whether the parameter represents a list of values.
            "displayName": "A String", # Required. The name of the parameter.
          },
        ],
        "parentFollowupIntentName": "A String", # Read-only after creation. The unique identifier of the parent intent in the
            # chain of followup intents. You can set this field when creating an intent,
            # for example with CreateIntent or
            # BatchUpdateIntents, in order to make this
            # intent a followup intent.
            #
            # It identifies the parent followup intent.
            # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
        "followupIntentInfo": [ # Read-only. Information about all followup intents that have this intent as
            # a direct or indirect parent. We populate this field only in the output.
          { # Represents a single followup intent in the chain.
            "followupIntentName": "A String", # The unique identifier of the followup intent.
                # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
            "parentFollowupIntentName": "A String", # The unique identifier of the followup intent's parent.
                # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
          },
        ],
        "webhookState": "A String", # Optional. Indicates whether webhooks are enabled for the intent.
        "trainingPhrases": [ # Optional. The collection of examples that the agent is
            # trained on.
          { # Represents an example that the agent is trained on.
            "parts": [ # Required. The ordered list of training phrase parts.
                # The parts are concatenated in order to form the training phrase.
                #
                # Note: The API does not automatically annotate training phrases like the
                # Dialogflow Console does.
                #
                # Note: Do not forget to include whitespace at part boundaries,
                # so the training phrase is well formatted when the parts are concatenated.
                #
                # If the training phrase does not need to be annotated with parameters,
                # you just need a single part with only the Part.text field set.
                #
                # If you want to annotate the training phrase, you must create multiple
                # parts, where the fields of each part are populated in one of two ways:
                #
                # -   `Part.text` is set to a part of the phrase that has no parameters.
                # -   `Part.text` is set to a part of the phrase that you want to annotate,
                #     and the `entity_type`, `alias`, and `user_defined` fields are all
                #     set.
              { # Represents a part of a training phrase.
                "text": "A String", # Required. The text for this part.
                "entityType": "A String", # Optional. The entity type name prefixed with `@`.
                    # This field is required for annotated parts of the training phrase.
                "userDefined": True or False, # Optional. Indicates whether the text was manually annotated.
                    # This field is set to true when the Dialogflow Console is used to
                    # manually annotate the part. When creating an annotated part with the
                    # API, you must set this to true.
                "alias": "A String", # Optional. The parameter name for the value extracted from the
                    # annotated part of the example.
                    # This field is required for annotated parts of the training phrase.
              },
            ],
            "type": "A String", # Required. The type of the training phrase.
            "name": "A String", # Output only. The unique identifier of this training phrase.
            "timesAddedCount": 42, # Optional. Indicates how many times this example was added to
                # the intent. Each time a developer adds an existing sample by editing an
                # intent or training, this counter is increased.
          },
        ],
        "messages": [ # Optional. The collection of rich messages corresponding to the
            # `Response` field in the Dialogflow console.
          { # A rich response message.
              # Corresponds to the intent `Response` field in the Dialogflow console.
              # For more information, see
              # [Rich response
              # messages](https://cloud.google.com/dialogflow/docs/intents-rich-messages).
            "simpleResponses": { # The collection of simple response candidates. # The voice and text-only responses for Actions on Google.
                # This message in `QueryResult.fulfillment_messages` and
                # `WebhookResponse.fulfillment_messages` should contain only one
                # `SimpleResponse`.
              "simpleResponses": [ # Required. The list of simple responses.
                { # The simple response message containing speech or text.
                  "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
                      # response to the user in the SSML format. Mutually exclusive with
                      # text_to_speech.
                  "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
                      # speech output. Mutually exclusive with ssml.
                  "displayText": "A String", # Optional. The text to display.
                },
              ],
            },
            "quickReplies": { # The quick replies response message. # The quick replies response.
              "quickReplies": [ # Optional. The collection of quick replies.
                "A String",
              ],
              "title": "A String", # Optional. The title of the collection of quick replies.
            },
            "platform": "A String", # Optional. The platform that this message is intended for.
            "text": { # The text response message. # The text response.
              "text": [ # Optional. The collection of the agent's responses.
                "A String",
              ],
            },
            "image": { # The image response message. # The image response.
              "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                  # e.g., screen readers.
              "imageUri": "A String", # Optional. The public URI to an image file.
            },
            "mediaContent": { # The media content card for Actions on Google. # The media content card for Actions on Google.
              "mediaObjects": [ # Required. List of media objects.
                { # Response media object for media content card.
                  "contentUrl": "A String", # Required. Url where the media is stored.
                  "description": "A String", # Optional. Description of media card.
                  "name": "A String", # Required. Name of media card.
                  "largeImage": { # The image response message. # Optional. Image to display above media content.
                    "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                        # e.g., screen readers.
                    "imageUri": "A String", # Optional. The public URI to an image file.
                  },
                  "icon": { # The image response message. # Optional. Icon to display above media content.
                    "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                        # e.g., screen readers.
                    "imageUri": "A String", # Optional. The public URI to an image file.
                  },
                },
              ],
              "mediaType": "A String", # Optional. What type of media is the content (ie "audio").
            },
            "suggestions": { # The collection of suggestions. # The suggestion chips for Actions on Google.
              "suggestions": [ # Required. The list of suggested replies.
                { # The suggestion chip message that the user can tap to quickly post a reply
                    # to the conversation.
                  "title": "A String", # Required. The text shown the in the suggestion chip.
                },
              ],
            },
            "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # The link out suggestion chip for Actions on Google.
                # or website associated with this agent.
              "uri": "A String", # Required. The URI of the app or site to open when the user taps the
                  # suggestion chip.
              "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
            },
            "browseCarouselCard": { # Browse Carousel Card for Actions on Google. # Browse carousel card for Actions on Google.
                # https://developers.google.com/actions/assistant/responses#browsing_carousel
              "items": [ # Required. List of items in the Browse Carousel Card. Minimum of two
                  # items, maximum of ten.
                { # Browsing carousel tile.
                  "image": { # The image response message. # Optional. Hero image for the carousel item.
                    "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                        # e.g., screen readers.
                    "imageUri": "A String", # Optional. The public URI to an image file.
                  },
                  "footer": "A String", # Optional. Text that appears at the bottom of the Browse Carousel
                      # Card. Maximum of one line of text.
                  "description": "A String", # Optional. Description of the carousel item. Maximum of four lines of
                      # text.
                  "openUriAction": { # Actions on Google action to open a given url. # Required. Action to present to the user.
                    "url": "A String", # Required. URL
                    "urlTypeHint": "A String", # Optional. Specifies the type of viewer that is used when opening
                        # the URL. Defaults to opening via web browser.
                  },
                  "title": "A String", # Required. Title of the carousel item. Maximum of two lines of text.
                },
              ],
              "imageDisplayOptions": "A String", # Optional. Settings for displaying the image. Applies to every image in
                  # items.
            },
            "basicCard": { # The basic card message. Useful for displaying information. # The basic card response for Actions on Google.
              "buttons": [ # Optional. The collection of card buttons.
                { # The button object that appears at the bottom of a card.
                  "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
                    "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
                  },
                  "title": "A String", # Required. The title of the button.
                },
              ],
              "subtitle": "A String", # Optional. The subtitle of the card.
              "image": { # The image response message. # Optional. The image for the card.
                "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                    # e.g., screen readers.
                "imageUri": "A String", # Optional. The public URI to an image file.
              },
              "formattedText": "A String", # Required, unless image is present. The body text of the card.
              "title": "A String", # Optional. The title of the card.
            },
            "tableCard": { # Table card for Actions on Google. # Table card for Actions on Google.
              "rows": [ # Optional. Rows in this table of data.
                { # Row of TableCard.
                  "cells": [ # Optional. List of cells that make up this row.
                    { # Cell of TableCardRow.
                      "text": "A String", # Required. Text in this cell.
                    },
                  ],
                  "dividerAfter": True or False, # Optional. Whether to add a visual divider after this row.
                },
              ],
              "subtitle": "A String", # Optional. Subtitle to the title.
              "title": "A String", # Required. Title of the card.
              "image": { # The image response message. # Optional. Image which should be displayed on the card.
                "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                    # e.g., screen readers.
                "imageUri": "A String", # Optional. The public URI to an image file.
              },
              "columnProperties": [ # Optional. Display properties for the columns in this table.
                { # Column properties for TableCard.
                  "header": "A String", # Required. Column heading.
                  "horizontalAlignment": "A String", # Optional. Defines text alignment for all cells in this column.
                },
              ],
              "buttons": [ # Optional. List of buttons for the card.
                { # The button object that appears at the bottom of a card.
                  "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
                    "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
                  },
                  "title": "A String", # Required. The title of the button.
                },
              ],
            },
            "carouselSelect": { # The card for presenting a carousel of options to select from. # The carousel card response for Actions on Google.
              "items": [ # Required. Carousel items.
                { # An item in the carousel.
                  "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
                      # dialog.
                    "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
                        # item in dialog.
                      "A String",
                    ],
                    "key": "A String", # Required. A unique key that will be sent back to the agent if this
                        # response is given.
                  },
                  "image": { # The image response message. # Optional. The image to display.
                    "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                        # e.g., screen readers.
                    "imageUri": "A String", # Optional. The public URI to an image file.
                  },
                  "description": "A String", # Optional. The body text of the card.
                  "title": "A String", # Required. Title of the carousel item.
                },
              ],
            },
            "listSelect": { # The card for presenting a list of options to select from. # The list card response for Actions on Google.
              "items": [ # Required. List items.
                { # An item in the list.
                  "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
                      # dialog.
                    "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
                        # item in dialog.
                      "A String",
                    ],
                    "key": "A String", # Required. A unique key that will be sent back to the agent if this
                        # response is given.
                  },
                  "image": { # The image response message. # Optional. The image to display.
                    "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
                        # e.g., screen readers.
                    "imageUri": "A String", # Optional. The public URI to an image file.
                  },
                  "description": "A String", # Optional. The main text describing the item.
                  "title": "A String", # Required. The title of the list item.
                },
              ],
              "subtitle": "A String", # Optional. Subtitle of the list.
              "title": "A String", # Optional. The overall title of the list.
            },
            "payload": { # A custom platform-specific response.
              "a_key": "", # Properties of the object.
            },
            "card": { # The card response message. # The card response.
              "buttons": [ # Optional. The collection of card buttons.
                { # Contains information about a button.
                  "text": "A String", # Optional. The text to show on the button.
                  "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
                      # open.
                },
              ],
              "title": "A String", # Optional. The title of the card.
              "subtitle": "A String", # Optional. The subtitle of the card.
              "imageUri": "A String", # Optional. The public URI to an image file for the card.
            },
          },
        ],
        "defaultResponsePlatforms": [ # Optional. The list of platforms for which the first responses will be
            # copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).
          "A String",
        ],
        "priority": 42, # Optional. The priority of this intent. Higher numbers represent higher
            # priorities.
            #
            # - If the supplied value is unspecified or 0, the service
            #   translates the value to 500,000, which corresponds to the
            #   `Normal` priority in the console.
            # - If the supplied value is negative, the intent is ignored
            #   in runtime detect intent requests.
        "rootFollowupIntentName": "A String", # Read-only. The unique identifier of the root intent in the chain of
            # followup intents. It identifies the correct followup intents chain for
            # this intent. We populate this field only in the output.
            #
            # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
        "resetContexts": True or False, # Optional. Indicates whether to delete all contexts in the current
            # session when this intent is matched.
        "inputContextNames": [ # Optional. The list of context names required for this intent to be
            # triggered.
            # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
          "A String",
        ],
        "action": "A String", # Optional. The name of the action associated with the intent.
            # Note: The action name must not contain whitespaces.
        "outputContexts": [ # Optional. The collection of contexts that are activated when the intent
            # is matched. Context messages in this collection should not set the
            # parameters field. Setting the `lifespan_count` to 0 will reset the context
            # when the intent is matched.
            # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
          { # Represents a context.
            "name": "A String", # Required. The unique identifier of the context. Format:
                # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
                # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
                # ID>/sessions/<Session ID>/contexts/<Context ID>`.
                #
                # The `Context ID` is always converted to lowercase, may only contain
                # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
                #
                # If `Environment ID` is not specified, we assume default 'draft'
                # environment. If `User ID` is not specified, we assume default '-' user.
                #
                # The following context names are reserved for internal use by Dialogflow.
                # You should not use these contexts or create contexts with these names:
                #
                # * `__system_counters__`
                # * `*_id_dialog_context`
                # * `*_dialog_params_size`
            "parameters": { # Optional. The collection of parameters associated with this context.
                #
                # Depending on your protocol or client library language, this is a
                # map, associative array, symbol table, dictionary, or JSON object
                # composed of a collection of (MapKey, MapValue) pairs:
                #
                # -   MapKey type: string
                # -   MapKey value: parameter name
                # -   MapValue type:
                #     -   If parameter's entity type is a composite entity: map
                #     -   Else: string or number, depending on parameter value type
                # -   MapValue value:
                #     -   If parameter's entity type is a composite entity:
                #         map from composite entity property names to property values
                #     -   Else: parameter value
              "a_key": "", # Properties of the object.
            },
            "lifespanCount": 42, # Optional. The number of conversational query requests after which the
                # context expires. The default is `0`. If set to `0`, the context expires
                # immediately. Contexts expire automatically after 20 minutes if there
                # are no matching queries.
          },
        ],
        "events": [ # Optional. The collection of event names that trigger the intent.
            # If the collection of input contexts is not empty, all of the contexts must
            # be present in the active user session for an event to trigger this intent.
            # Event names are limited to 150 characters.
          "A String",
        ],
      },
      "sentimentAnalysisResult": { # The sentiment analysis result, which depends on the
          # `sentiment_analysis_request_config` specified in the request.
        "queryTextSentiment": { # The sentiment analysis result for `query_text`: the sentiment, such as a
            # positive/negative feeling or association, for that unit of analysis.
          "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive
              # sentiment).
          "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents the absolute
              # magnitude of sentiment, regardless of score (positive or negative).
        },
      },
      "diagnosticInfo": { # Free-form diagnostic information for the associated detect intent request.
          # The fields of this data can change without notice, so you should not write
          # code that depends on its structure.
          # The data may contain:
          #
          # - webhook call latency
          # - webhook errors
        "a_key": "", # Properties of the object.
      },
      "queryText": "A String", # The original conversational query text:
          #
          # - If natural language text was provided as input, `query_text` contains
          #   a copy of the input.
          # - If natural language speech audio was provided as input, `query_text`
          #   contains the speech recognition result. If the speech recognizer
          #   produced multiple alternatives, a particular one is picked.
          # - If automatic spell correction is enabled, `query_text` will contain the
          #   corrected user input.
      "outputContexts": [ # The collection of output contexts. If applicable,
          # `output_contexts.parameters` contains entries with name
          # `<parameter name>.original` containing the original parameter values
          # before the query.
        { # Represents a context.
          "name": "A String", # Required. The unique identifier of the context. Format:
              # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
              # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
              # ID>/sessions/<Session ID>/contexts/<Context ID>`.
              #
              # The `Context ID` is always converted to lowercase, may only contain
              # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
              #
              # If `Environment ID` is not specified, we assume default 'draft'
              # environment. If `User ID` is not specified, we assume default '-' user.
              #
              # The following context names are reserved for internal use by Dialogflow.
              # You should not use these contexts or create contexts with these names:
              #
              # * `__system_counters__`
              # * `*_id_dialog_context`
              # * `*_dialog_params_size`
          "parameters": { # Optional. The collection of parameters associated with this context.
              #
              # Depending on your protocol or client library language, this is a
              # map, associative array, symbol table, dictionary, or JSON object
              # composed of a collection of (MapKey, MapValue) pairs:
              #
              # -   MapKey type: string
              # -   MapKey value: parameter name
              # -   MapValue type:
              #     -   If parameter's entity type is a composite entity: map
              #     -   Else: string or number, depending on parameter value type
              # -   MapValue value:
              #     -   If parameter's entity type is a composite entity:
              #         map from composite entity property names to property values
              #     -   Else: parameter value
            "a_key": "", # Properties of the object.
          },
          "lifespanCount": 42, # Optional. The number of conversational query requests after which the
              # context expires. The default is `0`. If set to `0`, the context expires
              # immediately. Contexts expire automatically after 20 minutes if there
              # are no matching queries.
        },
      ],
      "webhookSource": "A String", # If the query was fulfilled by a webhook call, this field is set to the
          # value of the `source` field returned in the webhook response.
      "webhookPayload": { # If the query was fulfilled by a webhook call, this field is set to the
          # value of the `payload` field returned in the webhook response.
        "a_key": "", # Properties of the object.
      },
    },
    "responseId": "A String", # The unique identifier of the response. It can be used to
        # locate a response in the training example set or for reporting issues.
  }
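The `priority` rules described above (unspecified or `0` is treated as 500,000, i.e. `Normal` in the console; negative values cause the intent to be ignored in runtime detect intent requests) can be sketched as a small helper. The function name and constant are ours for illustration, not part of the Dialogflow API:

```python
# Sketch of the documented intent priority normalization.
NORMAL_PRIORITY = 500000

def effective_priority(priority=None):
    """Map a supplied intent priority to its runtime effect.

    Returns the priority the service uses, or None when the intent
    would be ignored in runtime detect intent requests.
    """
    if priority is None or priority == 0:
        # The service translates unspecified/0 to Normal priority.
        return NORMAL_PRIORITY
    if priority < 0:
        # Negative priority: intent ignored at detect-intent time.
        return None
    return priority

print(effective_priority(0))   # treated as Normal (500000)
print(effective_priority(-1))  # ignored
```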
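The `outputContexts` portion of the response can be consumed as a plain Python dict once `detectIntent(...).execute()` returns. The sketch below uses a hand-written sample response that follows the schema above; the project, session, and context names are invented for illustration, and the helper function is ours, not part of the client library:

```python
# Sketch: filtering output contexts from a detectIntent query result.
def active_contexts(query_result):
    """Return names of output contexts whose lifespan has not expired."""
    return [
        ctx["name"]
        for ctx in query_result.get("outputContexts", [])
        if ctx.get("lifespanCount", 0) > 0
    ]

# Hand-written sample following the documented response schema.
sample_query_result = {
    "queryText": "book a table for two",
    "outputContexts": [
        {
            "name": "projects/my-project/agent/sessions/123/contexts/booking",
            "lifespanCount": 5,
            # Per the docs, `<parameter name>.original` entries hold the
            # original parameter values before the query.
            "parameters": {"guests": 2, "guests.original": "two"},
        },
        {
            "name": "projects/my-project/agent/sessions/123/contexts/done",
            "lifespanCount": 0,  # expired: dropped by the helper
        },
    ],
}

print(active_contexts(sample_query_result))
```

Note that a `lifespanCount` of `0` means the context expires immediately, which is why the helper drops it.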