Method: aisafety.classifyContent

Analyzes a piece of content against the provided set of policies.

HTTP request

POST https://checks.googleapis.com/v1alpha/aisafety:classifyContent

The URL uses gRPC Transcoding syntax.

Request body

The request body contains data with the following structure:

JSON representation
{
  "input": {
    object (InputContent)
  },
  "context": {
    object (Context)
  },
  "policies": [
    {
      object (PolicyConfig)
    }
  ],
  "classifierVersion": enum (ClassifierVersion)
}
Fields
input

object (InputContent)

Required. Content to be classified.

context

object (Context)

Optional. Context about the input that will be used to help with the classification.

policies[]

object (PolicyConfig)

Required. List of policies to classify against.

classifierVersion

enum (ClassifierVersion)

Optional. Version of the classifier to use. If not specified, the latest version will be used.
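
A minimal request sketch in Python (using the requests library) is shown below. It assumes you already hold an OAuth 2.0 access token with the scope listed under Authorization scopes; all field values are illustrative placeholders, not required settings.

import requests

# Placeholder token; see Authorization scopes and the google-auth sketch below.
ACCESS_TOKEN = "ya29.example-token"

body = {
    "input": {
        "textInput": {
            "content": "Model response text to classify.",
            "languageCode": "en",
        }
    },
    "context": {"prompt": "User prompt that produced the response."},
    "policies": [
        {"policyType": "DANGEROUS_CONTENT"},
        {"policyType": "HARASSMENT", "threshold": 0.7},
    ],
    "classifierVersion": "STABLE",
}

resp = requests.post(
    "https://checks.googleapis.com/v1alpha/aisafety:classifyContent",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
resp.raise_for_status()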

Response body

Response message for the aisafety.classifyContent RPC.

If successful, the response body contains data with the following structure:

JSON representation
{
  "policyResults": [
    {
      object (PolicyResult)
    }
  ]
}
Fields
policyResults[]

object (PolicyResult)

Results of the classification for each policy.
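
A short sketch of handling this response in Python, continuing from the request sketch above (resp is the requests.Response returned there); the verdict strings follow the ViolationResult enum defined later on this page.

for result in resp.json().get("policyResults", []):
    policy = result.get("policyType")
    score = result.get("score")
    verdict = result.get("violationResult")

    if verdict == "VIOLATIVE":
        print(f"{policy}: violative (score={score})")
    elif verdict == "NON_VIOLATIVE":
        print(f"{policy}: non-violative (score={score})")
    else:
        # VIOLATION_RESULT_UNSPECIFIED or CLASSIFICATION_ERROR
        print(f"{policy}: no verdict ({verdict})")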

Authorization scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/checks

For more information, see the OAuth 2.0 Overview.
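
One way to obtain an access token with this scope is Application Default Credentials via the google-auth Python library; the sketch below is one option under that assumption, not the only supported flow, and could supply the ACCESS_TOKEN placeholder used in the request sketch above.

import google.auth
from google.auth.transport.requests import Request

# Assumes Application Default Credentials are configured,
# e.g. via `gcloud auth application-default login`.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/checks"]
)
credentials.refresh(Request())
ACCESS_TOKEN = credentials.token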

InputContent

Content to be classified.

JSON representation
{

  // Union field input can be only one of the following:
  "textInput": {
    object (TextInput)
  }
  // End of list of possible types for union field input.
}
Fields
Union field input. Content to be classified. input can be only one of the following:
textInput

object (TextInput)

Content in text format.

TextInput

Text input to be classified.

JSON representation
{
  "languageCode": string,

  // Union field source can be only one of the following:
  "content": string
  // End of list of possible types for union field source.
}
Fields
languageCode

string

Optional. Language of the text in ISO 639-1 format. If the language is invalid or not specified, the system will try to detect it.

Union field source. Source of the text to be classified. source can be only one of the following:
content

string

Actual piece of text to be classified.

Context

Context about the input that will be used to help with the classification.

JSON representation
{
  "prompt": string
}
Fields
prompt

string

Optional. Prompt that generated the model response.

PolicyConfig

Configuration for a single policy to classify against.

JSON representation
{
  "policyType": enum (PolicyType),
  "threshold": number
}
Fields
policyType

enum (PolicyType)

Required. Type of the policy.

threshold

number

Optional. Score threshold to use when deciding if the content is violative or non-violative. If not specified, the default 0.5 threshold for the policy will be used.
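
For example, thresholds can be tuned per policy. In the illustrative Python snippet below, HATE_SPEECH keeps the default 0.5 threshold while HARASSMENT is only reported as violative at scores of 0.8 or above.

policies = [
    {"policyType": "HATE_SPEECH"},                   # default threshold (0.5)
    {"policyType": "HARASSMENT", "threshold": 0.8},  # violative only if score >= 0.8
]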

PolicyType

The unique identifier for a safety policy.

Enums
POLICY_TYPE_UNSPECIFIED Default.
DANGEROUS_CONTENT The model facilitates, promotes, or enables access to harmful goods, services, and activities.
PII_SOLICITING_RECITING The model reveals an individual’s personal information and data.
HARASSMENT The model generates content that is malicious, intimidating, bullying, or abusive towards another individual.
SEXUALLY_EXPLICIT The model generates content that is sexually explicit in nature.
HATE_SPEECH The model promotes violence, hatred, or discrimination on the basis of race, religion, etc.
MEDICAL_INFO The model facilitates harm by providing health advice or guidance.
VIOLENCE_AND_GORE The model generates content that contains gratuitous, realistic descriptions of violence or gore.
OBSCENITY_AND_PROFANITY

ClassifierVersion

Version of the classifier to use.

Enums
CLASSIFIER_VERSION_UNSPECIFIED Unspecified version.
STABLE Stable version.
LATEST Latest version.

PolicyResult

Result for one policy against the corresponding input.

JSON representation
{
  "policyType": enum (PolicyType),
  "score": number,
  "violationResult": enum (ViolationResult)
}
Fields
policyType

enum (PolicyType)

Type of the policy.

score

number

Final score for the results of this policy.

violationResult

enum (ViolationResult)

Result of the classification for the policy.

ViolationResult

Result of the classification for the policy.

Enums
VIOLATION_RESULT_UNSPECIFIED Unspecified result.
VIOLATIVE The final score is greater than or equal to the input score threshold.
NON_VIOLATIVE The final score is less than the input score threshold.
CLASSIFICATION_ERROR There was an error and the violation result could not be determined.