Analyze a piece of content with the provided set of policies.
HTTP request
POST https://checks.googleapis.com/v1alpha/aisafety:classifyContent
The URL uses gRPC Transcoding syntax.
Request body
The request body contains data with the following structure:
JSON representation:

    {
      "input": {
        object (InputContent)
      },
      "context": {
        object (Context)
      },
      "policies": [
        {
          object (PolicyConfig)
        }
      ],
      "classifierVersion": enum (ClassifierVersion)
    }

Fields:

| Field | Description |
|---|---|
| `input` | `object (InputContent)`. Required. Content to be classified. |
| `context` | `object (Context)`. Optional. Context about the input that will be used to aid classification. |
| `policies[]` | `object (PolicyConfig)`. Required. List of policies to classify against. |
| `classifierVersion` | `enum (ClassifierVersion)`. Optional. Version of the classifier to use. If not specified, the latest version is used. |
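As a sketch, the request body above can be assembled as a plain JSON object. The field names follow the JSON representation; the text, policy list, thresholds, and classifier version below are illustrative choices, not defaults:

```python
import json

# Request body for aisafety:classifyContent, following the documented
# structure. Values are illustrative.
request_body = {
    "input": {
        "textInput": {
            "content": "Is this message harmful?",  # text to classify
            "languageCode": "en",                   # optional ISO 639-1 code
        }
    },
    "context": {
        # Optional: the prompt that generated the model response.
        "prompt": "User prompt that produced the model response",
    },
    "policies": [
        {"policyType": "DANGEROUS_CONTENT"},             # default 0.5 threshold
        {"policyType": "HARASSMENT", "threshold": 0.7},  # custom threshold
    ],
    "classifierVersion": "STABLE",
}

print(json.dumps(request_body, indent=2))
```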
Response body
Response proto for the aisafety.classifyContent RPC.
If successful, the response body contains data with the following structure:

JSON representation:

    {
      "policyResults": [
        {
          object (PolicyResult)
        }
      ]
    }

Fields:

| Field | Description |
|---|---|
| `policyResults[]` | `object (PolicyResult)`. Results of the classification for each policy. |
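A successful response can then be consumed by iterating over `policyResults`. The sample response dict below is illustrative, shaped like the structure above:

```python
# Illustrative response shaped like the documented structure.
sample_response = {
    "policyResults": [
        {"policyType": "HARASSMENT", "score": 0.82,
         "violationResult": "VIOLATIVE"},
        {"policyType": "HATE_SPEECH", "score": 0.07,
         "violationResult": "NON_VIOLATIVE"},
    ]
}

# Collect the policy types the classifier flagged as violative.
flagged = [
    r["policyType"]
    for r in sample_response["policyResults"]
    if r.get("violationResult") == "VIOLATIVE"
]
print(flagged)
```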
Authorization scopes
Requires the following OAuth scope:
https://www.googleapis.com/auth/checks
For more information, see the OAuth 2.0 Overview.
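Putting the endpoint, body, and scope together, a minimal client sketch using only the standard library might look like the following. The access token is a placeholder; obtaining a real token with the `https://www.googleapis.com/auth/checks` scope (e.g. via a Google auth library) is left to the caller:

```python
import json
import urllib.request

API_URL = "https://checks.googleapis.com/v1alpha/aisafety:classifyContent"

def build_classify_request(body, access_token):
    """Build the POST request; the caller supplies an OAuth 2.0 access
    token carrying the https://www.googleapis.com/auth/checks scope."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# "ya29.example-token" is a placeholder, not a real credential.
req = build_classify_request(
    {"input": {"textInput": {"content": "hello"}},
     "policies": [{"policyType": "HATE_SPEECH"}]},
    "ya29.example-token",
)

# Sending is left to the caller, e.g.:
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)
```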
InputContent
Content to be classified.
JSON representation:

    {
      // Union field input can be only one of the following:
      "textInput": {
        object (TextInput)
      }
      // End of list of possible types for union field input.
    }

Fields:

| Field | Description |
|---|---|
| Union field `input`. Content to be classified. `input` can be only one of the following: | |
| `textInput` | `object (TextInput)`. Content in text format. |
TextInput
Text input to be classified.
JSON representation:

    {
      "languageCode": string,
      // Union field source can be only one of the following:
      "content": string
      // End of list of possible types for union field source.
    }

Fields:

| Field | Description |
|---|---|
| `languageCode` | `string`. Optional. Language of the text in ISO 639-1 format. If the language is invalid or not specified, the system will try to detect it. |
| Union field `source`. Source of the text to be classified. `source` can be only one of the following: | |
| `content` | `string`. Actual piece of text to be classified. |
Context
Context about the input that will be used to help on the classification.
JSON representation:

    { "prompt": string }

Fields:

| Field | Description |
|---|---|
| `prompt` | `string`. Optional. Prompt that generated the model response. |
PolicyConfig
List of policies to classify against.
JSON representation:

    {
      "policyType": enum (PolicyType),
      "threshold": number
    }

Fields:

| Field | Description |
|---|---|
| `policyType` | `enum (PolicyType)`. Required. Type of the policy. |
| `threshold` | `number`. Optional. Score threshold to use when deciding whether the content is violative or non-violative. If not specified, the policy's default threshold of 0.5 is used. |
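Since a PolicyConfig is just a policy type plus an optional threshold, a small client-side helper (hypothetical, not part of the API) can make the documented 0.5 default explicit by simply omitting the field:

```python
def policy_config(policy_type, threshold=None):
    """Build one PolicyConfig dict; omitting threshold lets the
    service apply the policy's default of 0.5."""
    config = {"policyType": policy_type}
    if threshold is not None:
        if not 0.0 <= threshold <= 1.0:
            raise ValueError("threshold must be in [0, 1]")
        config["threshold"] = threshold
    return config

policies = [
    policy_config("PII_SOLICITING_RECITING"),          # service default
    policy_config("SEXUALLY_EXPLICIT", threshold=0.3), # stricter cutoff
]
```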
PolicyType
The unique identifier for a safety policy.
| Enum | Description |
|---|---|
| `POLICY_TYPE_UNSPECIFIED` | Default. |
| `DANGEROUS_CONTENT` | The model facilitates, promotes, or enables access to harmful goods, services, and activities. |
| `PII_SOLICITING_RECITING` | The model reveals an individual’s personal information and data. |
| `HARASSMENT` | The model generates content that is malicious, intimidating, bullying, or abusive towards another individual. |
| `SEXUALLY_EXPLICIT` | The model generates content that is sexually explicit in nature. |
| `HATE_SPEECH` | The model promotes violence, hatred, or discrimination on the basis of race, religion, etc. |
| `MEDICAL_INFO` | The model facilitates harm by providing health advice or guidance. |
| `VIOLENCE_AND_GORE` | The model generates content that contains gratuitous, realistic descriptions of violence or gore. |
| `OBSCENITY_AND_PROFANITY` | |
ClassifierVersion
Version of the classifier to use.
| Enum | Description |
|---|---|
| `CLASSIFIER_VERSION_UNSPECIFIED` | Unspecified version. |
| `STABLE` | Stable version. |
| `LATEST` | Latest version. |
PolicyResult
Result for one policy against the corresponding input.
JSON representation:

    {
      "policyType": enum (PolicyType),
      "score": number,
      "violationResult": enum (ViolationResult)
    }

Fields:

| Field | Description |
|---|---|
| `policyType` | `enum (PolicyType)`. Type of the policy. |
| `score` | `number`. Final score for the results of this policy. |
| `violationResult` | `enum (ViolationResult)`. Result of the classification for the policy. |
ViolationResult
Result of the classification for the policy.
| Enum | Description |
|---|---|
| `VIOLATION_RESULT_UNSPECIFIED` | Unspecified result. |
| `VIOLATIVE` | The final score is greater than or equal to the input score threshold. |
| `NON_VIOLATIVE` | The final score is less than the input score threshold. |
| `CLASSIFICATION_ERROR` | There was an error and the violation result could not be determined. |
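The VIOLATIVE / NON_VIOLATIVE split above is a straight comparison of the final score against the threshold. The helper below is a client-side restatement of that rule for illustration, not the service's implementation:

```python
def violation_result(score, threshold=0.5):
    """Mirror the documented rule: VIOLATIVE when the final score is
    greater than or equal to the threshold, NON_VIOLATIVE otherwise."""
    if score is None:
        return "CLASSIFICATION_ERROR"  # score could not be determined
    return "VIOLATIVE" if score >= threshold else "NON_VIOLATIVE"
```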