vision v1p1beta1

Namespace: Google.Apis.Vision.v1p1beta1 (nested namespace: Data)
| CAnnotateFileResponse | Response to a single file annotation request. A file may contain one or more images, which individually have their own responses. |
| CAnnotateImageResponse | Response to an image annotation request. |
| CAsyncAnnotateFileResponse | The response for a single offline file annotation request. |
| CAsyncBatchAnnotateFilesResponse | Response to an async batch file annotation request. |
| CAsyncBatchAnnotateImagesResponse | Response to an async batch image annotation request. |
| CBatchAnnotateFilesResponse | A list of file annotation responses. |
| CBatchOperationMetadata | Metadata for the batch operations, such as the current state. |
| CBlock | Logical element on the page. |
| CBoundingPoly | A bounding polygon for the detected image annotation. |
| CColor | Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness; for example, the fields of this representation can be trivially provided to the constructor of "java.awt.Color" in Java; it can also be trivially provided to UIColor's "+colorWithRed:green:blue:alpha" method in iOS; and, with just a little work, it can be easily formatted into a CSS "rgba()" string in JavaScript as well. |
| CColorInfo | Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image. |
| CCropHint | Single crop hint that is used to generate a new crop when serving an image. |
| CCropHintsAnnotation | Set of crop hints that are used to generate new crops when serving images. |
| CDetectedBreak | Detected start or end of a structural component. |
| CDetectedLanguage | Detected language for a structural component. |
| CDominantColorsAnnotation | Set of dominant colors and their corresponding scores. |
| CEntityAnnotation | Set of detected entity features. |
| CFaceAnnotation | A face annotation object contains the results of face detection. |
| CGcsDestination | The Google Cloud Storage location where the output will be written to. |
| CGcsSource | The Google Cloud Storage location where the input will be read from. |
| CGoogleCloudVisionV1p1beta1AnnotateFileRequest | A request to annotate a single file, e.g. a PDF, TIFF or GIF file. |
| CGoogleCloudVisionV1p1beta1AnnotateFileResponse | Response to a single file annotation request. A file may contain one or more images, which individually have their own responses. |
| CGoogleCloudVisionV1p1beta1AnnotateImageRequest | Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features, and with context information. |
| CGoogleCloudVisionV1p1beta1AnnotateImageResponse | Response to an image annotation request. |
| CGoogleCloudVisionV1p1beta1AsyncAnnotateFileRequest | An offline file annotation request. |
| CGoogleCloudVisionV1p1beta1AsyncAnnotateFileResponse | The response for a single offline file annotation request. |
| CGoogleCloudVisionV1p1beta1AsyncBatchAnnotateFilesRequest | Multiple async file annotation requests are batched into a single service call. |
| CGoogleCloudVisionV1p1beta1AsyncBatchAnnotateFilesResponse | Response to an async batch file annotation request. |
| CGoogleCloudVisionV1p1beta1AsyncBatchAnnotateImagesRequest | Request for async image annotation for a list of images. |
| CGoogleCloudVisionV1p1beta1BatchAnnotateFilesRequest | A list of requests to annotate files using the BatchAnnotateFiles API. |
| CGoogleCloudVisionV1p1beta1BatchAnnotateFilesResponse | A list of file annotation responses. |
| CGoogleCloudVisionV1p1beta1BatchAnnotateImagesRequest | Multiple image annotation requests are batched into a single service call. |
| CGoogleCloudVisionV1p1beta1BatchAnnotateImagesResponse | Response to a batch image annotation request. |
| CGoogleCloudVisionV1p1beta1Block | Logical element on the page. |
| CGoogleCloudVisionV1p1beta1BoundingPoly | A bounding polygon for the detected image annotation. |
| CGoogleCloudVisionV1p1beta1ColorInfo | Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image. |
| CGoogleCloudVisionV1p1beta1CropHint | Single crop hint that is used to generate a new crop when serving an image. |
| CGoogleCloudVisionV1p1beta1CropHintsAnnotation | Set of crop hints that are used to generate new crops when serving images. |
| CGoogleCloudVisionV1p1beta1CropHintsParams | Parameters for crop hints annotation request. |
| CGoogleCloudVisionV1p1beta1DominantColorsAnnotation | Set of dominant colors and their corresponding scores. |
| CGoogleCloudVisionV1p1beta1EntityAnnotation | Set of detected entity features. |
| CGoogleCloudVisionV1p1beta1FaceAnnotation | A face annotation object contains the results of face detection. |
| CGoogleCloudVisionV1p1beta1FaceAnnotationLandmark | A face-specific landmark (for example, a face feature). |
| CGoogleCloudVisionV1p1beta1Feature | The type of Google Cloud Vision API detection to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list. |
| CGoogleCloudVisionV1p1beta1GcsDestination | The Google Cloud Storage location where the output will be written to. |
| CGoogleCloudVisionV1p1beta1GcsSource | The Google Cloud Storage location where the input will be read from. |
| CGoogleCloudVisionV1p1beta1Image | Client image to perform Google Cloud Vision API tasks over. |
| CGoogleCloudVisionV1p1beta1ImageAnnotationContext | If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image. |
| CGoogleCloudVisionV1p1beta1ImageContext | Image context and/or feature-specific parameters. |
| CGoogleCloudVisionV1p1beta1ImageProperties | Stores image properties, such as dominant colors. |
| CGoogleCloudVisionV1p1beta1ImageSource | External image source (Google Cloud Storage or web URL image location). |
| CGoogleCloudVisionV1p1beta1InputConfig | The desired input location and metadata. |
| CGoogleCloudVisionV1p1beta1LatLongRect | Rectangle determined by min and max LatLng pairs. |
| CGoogleCloudVisionV1p1beta1LocalizedObjectAnnotation | Set of detected objects with bounding boxes. |
| CGoogleCloudVisionV1p1beta1LocationInfo | Detected entity location information. |
| CGoogleCloudVisionV1p1beta1NormalizedVertex | A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1. |
| CGoogleCloudVisionV1p1beta1OperationMetadata | Contains metadata for the BatchAnnotateImages operation. |
| CGoogleCloudVisionV1p1beta1OutputConfig | The desired output location and metadata. |
| CGoogleCloudVisionV1p1beta1Page | Detected page from OCR. |
| CGoogleCloudVisionV1p1beta1Paragraph | Structural unit of text representing a number of words in certain order. |
| CGoogleCloudVisionV1p1beta1Position | A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image. |
| CGoogleCloudVisionV1p1beta1Product | A Product contains ReferenceImages. |
| CGoogleCloudVisionV1p1beta1ProductKeyValue | A product label represented as a key-value pair. |
| CGoogleCloudVisionV1p1beta1ProductSearchParams | Parameters for a product search request. |
| CGoogleCloudVisionV1p1beta1ProductSearchResults | Results for a product search request. |
| CGoogleCloudVisionV1p1beta1ProductSearchResultsGroupedResult | Information about the products similar to a single product in a query image. |
| CGoogleCloudVisionV1p1beta1ProductSearchResultsObjectAnnotation | Prediction for what the object in the bounding box is. |
| CGoogleCloudVisionV1p1beta1ProductSearchResultsResult | Information about a product. |
| CGoogleCloudVisionV1p1beta1Property | A Property consists of a user-supplied name/value pair. |
| CGoogleCloudVisionV1p1beta1SafeSearchAnnotation | Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence). |
| CGoogleCloudVisionV1p1beta1Symbol | A single symbol representation. |
| CGoogleCloudVisionV1p1beta1TextAnnotation | TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail. |
| CGoogleCloudVisionV1p1beta1TextAnnotationDetectedBreak | Detected start or end of a structural component. |
| CGoogleCloudVisionV1p1beta1TextAnnotationDetectedLanguage | Detected language for a structural component. |
| CGoogleCloudVisionV1p1beta1TextAnnotationTextProperty | Additional information detected on the structural component. |
| CGoogleCloudVisionV1p1beta1Vertex | A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image. |
| CGoogleCloudVisionV1p1beta1WebDetection | Relevant information for the image from the Internet. |
| CGoogleCloudVisionV1p1beta1WebDetectionParams | Parameters for web detection request. |
| CGoogleCloudVisionV1p1beta1WebDetectionWebEntity | Entity deduced from similar images on the Internet. |
| CGoogleCloudVisionV1p1beta1WebDetectionWebImage | Metadata for online images. |
| CGoogleCloudVisionV1p1beta1WebDetectionWebLabel | Label to provide extra metadata for the web detection. |
| CGoogleCloudVisionV1p1beta1WebDetectionWebPage | Metadata for web pages. |
| CGoogleCloudVisionV1p1beta1Word | A word representation. |
| CGoogleCloudVisionV1p2beta1AnnotateFileResponse | Response to a single file annotation request. A file may contain one or more images, which individually have their own responses. |
| CGoogleCloudVisionV1p2beta1AnnotateImageResponse | Response to an image annotation request. |
| CGoogleCloudVisionV1p2beta1AsyncAnnotateFileResponse | The response for a single offline file annotation request. |
| CGoogleCloudVisionV1p2beta1AsyncBatchAnnotateFilesResponse | Response to an async batch file annotation request. |
| CGoogleCloudVisionV1p2beta1Block | Logical element on the page. |
| CGoogleCloudVisionV1p2beta1BoundingPoly | A bounding polygon for the detected image annotation. |
| CGoogleCloudVisionV1p2beta1ColorInfo | Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image. |
| CGoogleCloudVisionV1p2beta1CropHint | Single crop hint that is used to generate a new crop when serving an image. |
| CGoogleCloudVisionV1p2beta1CropHintsAnnotation | Set of crop hints that are used to generate new crops when serving images. |
| CGoogleCloudVisionV1p2beta1DominantColorsAnnotation | Set of dominant colors and their corresponding scores. |
| CGoogleCloudVisionV1p2beta1EntityAnnotation | Set of detected entity features. |
| CGoogleCloudVisionV1p2beta1FaceAnnotation | A face annotation object contains the results of face detection. |
| CGoogleCloudVisionV1p2beta1FaceAnnotationLandmark | A face-specific landmark (for example, a face feature). |
| CGoogleCloudVisionV1p2beta1GcsDestination | The Google Cloud Storage location where the output will be written to. |
| CGoogleCloudVisionV1p2beta1GcsSource | The Google Cloud Storage location where the input will be read from. |
| CGoogleCloudVisionV1p2beta1ImageAnnotationContext | If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image. |
| CGoogleCloudVisionV1p2beta1ImageProperties | Stores image properties, such as dominant colors. |
| CGoogleCloudVisionV1p2beta1InputConfig | The desired input location and metadata. |
| CGoogleCloudVisionV1p2beta1LocalizedObjectAnnotation | Set of detected objects with bounding boxes. |
| CGoogleCloudVisionV1p2beta1LocationInfo | Detected entity location information. |
| CGoogleCloudVisionV1p2beta1NormalizedVertex | A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1. |
| CGoogleCloudVisionV1p2beta1OperationMetadata | Contains metadata for the BatchAnnotateImages operation. |
| CGoogleCloudVisionV1p2beta1OutputConfig | The desired output location and metadata. |
| CGoogleCloudVisionV1p2beta1Page | Detected page from OCR. |
| CGoogleCloudVisionV1p2beta1Paragraph | Structural unit of text representing a number of words in certain order. |
| CGoogleCloudVisionV1p2beta1Position | A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image. |
| CGoogleCloudVisionV1p2beta1Product | A Product contains ReferenceImages. |
| CGoogleCloudVisionV1p2beta1ProductKeyValue | A product label represented as a key-value pair. |
| CGoogleCloudVisionV1p2beta1ProductSearchResults | Results for a product search request. |
| CGoogleCloudVisionV1p2beta1ProductSearchResultsGroupedResult | Information about the products similar to a single product in a query image. |
| CGoogleCloudVisionV1p2beta1ProductSearchResultsObjectAnnotation | Prediction for what the object in the bounding box is. |
| CGoogleCloudVisionV1p2beta1ProductSearchResultsResult | Information about a product. |
| CGoogleCloudVisionV1p2beta1Property | A Property consists of a user-supplied name/value pair. |
| CGoogleCloudVisionV1p2beta1SafeSearchAnnotation | Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence). |
| CGoogleCloudVisionV1p2beta1Symbol | A single symbol representation. |
| CGoogleCloudVisionV1p2beta1TextAnnotation | TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail. |
| CGoogleCloudVisionV1p2beta1TextAnnotationDetectedBreak | Detected start or end of a structural component. |
| CGoogleCloudVisionV1p2beta1TextAnnotationDetectedLanguage | Detected language for a structural component. |
| CGoogleCloudVisionV1p2beta1TextAnnotationTextProperty | Additional information detected on the structural component. |
| CGoogleCloudVisionV1p2beta1Vertex | A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image. |
| CGoogleCloudVisionV1p2beta1WebDetection | Relevant information for the image from the Internet. |
| CGoogleCloudVisionV1p2beta1WebDetectionWebEntity | Entity deduced from similar images on the Internet. |
| CGoogleCloudVisionV1p2beta1WebDetectionWebImage | Metadata for online images. |
| CGoogleCloudVisionV1p2beta1WebDetectionWebLabel | Label to provide extra metadata for the web detection. |
| CGoogleCloudVisionV1p2beta1WebDetectionWebPage | Metadata for web pages. |
| CGoogleCloudVisionV1p2beta1Word | A word representation. |
| CGoogleCloudVisionV1p3beta1AnnotateFileResponse | Response to a single file annotation request. A file may contain one or more images, which individually have their own responses. |
| CGoogleCloudVisionV1p3beta1AnnotateImageResponse | Response to an image annotation request. |
| CGoogleCloudVisionV1p3beta1AsyncAnnotateFileResponse | The response for a single offline file annotation request. |
| CGoogleCloudVisionV1p3beta1AsyncBatchAnnotateFilesResponse | Response to an async batch file annotation request. |
| CGoogleCloudVisionV1p3beta1BatchOperationMetadata | Metadata for the batch operations, such as the current state. |
| CGoogleCloudVisionV1p3beta1Block | Logical element on the page. |
| CGoogleCloudVisionV1p3beta1BoundingPoly | A bounding polygon for the detected image annotation. |
| CGoogleCloudVisionV1p3beta1ColorInfo | Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image. |
| CGoogleCloudVisionV1p3beta1CropHint | Single crop hint that is used to generate a new crop when serving an image. |
| CGoogleCloudVisionV1p3beta1CropHintsAnnotation | Set of crop hints that are used to generate new crops when serving images. |
| CGoogleCloudVisionV1p3beta1DominantColorsAnnotation | Set of dominant colors and their corresponding scores. |
| CGoogleCloudVisionV1p3beta1EntityAnnotation | Set of detected entity features. |
| CGoogleCloudVisionV1p3beta1FaceAnnotation | A face annotation object contains the results of face detection. |
| CGoogleCloudVisionV1p3beta1FaceAnnotationLandmark | A face-specific landmark (for example, a face feature). |
| CGoogleCloudVisionV1p3beta1GcsDestination | The Google Cloud Storage location where the output will be written to. |
| CGoogleCloudVisionV1p3beta1GcsSource | The Google Cloud Storage location where the input will be read from. |
| CGoogleCloudVisionV1p3beta1ImageAnnotationContext | If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image. |
| CGoogleCloudVisionV1p3beta1ImageProperties | Stores image properties, such as dominant colors. |
| CGoogleCloudVisionV1p3beta1ImportProductSetsResponse | Response message for the ImportProductSets method. |
| CGoogleCloudVisionV1p3beta1InputConfig | The desired input location and metadata. |
| CGoogleCloudVisionV1p3beta1LocalizedObjectAnnotation | Set of detected objects with bounding boxes. |
| CGoogleCloudVisionV1p3beta1LocationInfo | Detected entity location information. |
| CGoogleCloudVisionV1p3beta1NormalizedVertex | A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1. |
| CGoogleCloudVisionV1p3beta1OperationMetadata | Contains metadata for the BatchAnnotateImages operation. |
| CGoogleCloudVisionV1p3beta1OutputConfig | The desired output location and metadata. |
| CGoogleCloudVisionV1p3beta1Page | Detected page from OCR. |
| CGoogleCloudVisionV1p3beta1Paragraph | Structural unit of text representing a number of words in certain order. |
| CGoogleCloudVisionV1p3beta1Position | A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image. |
| CGoogleCloudVisionV1p3beta1Product | A Product contains ReferenceImages. |
| CGoogleCloudVisionV1p3beta1ProductKeyValue | A product label represented as a key-value pair. |
| CGoogleCloudVisionV1p3beta1ProductSearchResults | Results for a product search request. |
| CGoogleCloudVisionV1p3beta1ProductSearchResultsGroupedResult | Information about the products similar to a single product in a query image. |
| CGoogleCloudVisionV1p3beta1ProductSearchResultsObjectAnnotation | Prediction for what the object in the bounding box is. |
| CGoogleCloudVisionV1p3beta1ProductSearchResultsResult | Information about a product. |
| CGoogleCloudVisionV1p3beta1Property | A Property consists of a user-supplied name/value pair. |
| CGoogleCloudVisionV1p3beta1ReferenceImage | A ReferenceImage represents a product image and its associated metadata, such as bounding boxes. |
| CGoogleCloudVisionV1p3beta1SafeSearchAnnotation | Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence). |
| CGoogleCloudVisionV1p3beta1Symbol | A single symbol representation. |
| CGoogleCloudVisionV1p3beta1TextAnnotation | TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail. |
| CGoogleCloudVisionV1p3beta1TextAnnotationDetectedBreak | Detected start or end of a structural component. |
| CGoogleCloudVisionV1p3beta1TextAnnotationDetectedLanguage | Detected language for a structural component. |
| CGoogleCloudVisionV1p3beta1TextAnnotationTextProperty | Additional information detected on the structural component. |
| CGoogleCloudVisionV1p3beta1Vertex | A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image. |
| CGoogleCloudVisionV1p3beta1WebDetection | Relevant information for the image from the Internet. |
| CGoogleCloudVisionV1p3beta1WebDetectionWebEntity | Entity deduced from similar images on the Internet. |
| CGoogleCloudVisionV1p3beta1WebDetectionWebImage | Metadata for online images. |
| CGoogleCloudVisionV1p3beta1WebDetectionWebLabel | Label to provide extra metadata for the web detection. |
| CGoogleCloudVisionV1p3beta1WebDetectionWebPage | Metadata for web pages. |
| CGoogleCloudVisionV1p3beta1Word | A word representation. |
| CGoogleCloudVisionV1p4beta1AnnotateFileResponse | Response to a single file annotation request. A file may contain one or more images, which individually have their own responses. |
| CGoogleCloudVisionV1p4beta1AnnotateImageResponse | Response to an image annotation request. |
| CGoogleCloudVisionV1p4beta1AsyncAnnotateFileResponse | The response for a single offline file annotation request. |
| CGoogleCloudVisionV1p4beta1AsyncBatchAnnotateFilesResponse | Response to an async batch file annotation request. |
| CGoogleCloudVisionV1p4beta1AsyncBatchAnnotateImagesResponse | Response to an async batch image annotation request. |
| CGoogleCloudVisionV1p4beta1BatchAnnotateFilesResponse | A list of file annotation responses. |
| CGoogleCloudVisionV1p4beta1BatchOperationMetadata | Metadata for the batch operations, such as the current state. |
| CGoogleCloudVisionV1p4beta1Block | Logical element on the page. |
| CGoogleCloudVisionV1p4beta1BoundingPoly | A bounding polygon for the detected image annotation. |
| CGoogleCloudVisionV1p4beta1Celebrity | A Celebrity is a group of Faces with an identity. |
| CGoogleCloudVisionV1p4beta1ColorInfo | Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image. |
| CGoogleCloudVisionV1p4beta1CropHint | Single crop hint that is used to generate a new crop when serving an image. |
| CGoogleCloudVisionV1p4beta1CropHintsAnnotation | Set of crop hints that are used to generate new crops when serving images. |
| CGoogleCloudVisionV1p4beta1DominantColorsAnnotation | Set of dominant colors and their corresponding scores. |
| CGoogleCloudVisionV1p4beta1EntityAnnotation | Set of detected entity features. |
| CGoogleCloudVisionV1p4beta1FaceAnnotation | A face annotation object contains the results of face detection. |
| CGoogleCloudVisionV1p4beta1FaceAnnotationLandmark | A face-specific landmark (for example, a face feature). |
| CGoogleCloudVisionV1p4beta1FaceRecognitionResult | Information about a face's identity. |
| CGoogleCloudVisionV1p4beta1GcsDestination | The Google Cloud Storage location where the output will be written to. |
| CGoogleCloudVisionV1p4beta1GcsSource | The Google Cloud Storage location where the input will be read from. |
| CGoogleCloudVisionV1p4beta1ImageAnnotationContext | If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image. |
| CGoogleCloudVisionV1p4beta1ImageProperties | Stores image properties, such as dominant colors. |
| CGoogleCloudVisionV1p4beta1ImportProductSetsResponse | Response message for the ImportProductSets method. |
| CGoogleCloudVisionV1p4beta1InputConfig | The desired input location and metadata. |
| CGoogleCloudVisionV1p4beta1LocalizedObjectAnnotation | Set of detected objects with bounding boxes. |
| CGoogleCloudVisionV1p4beta1LocationInfo | Detected entity location information. |
| CGoogleCloudVisionV1p4beta1NormalizedVertex | A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1. |
| CGoogleCloudVisionV1p4beta1OperationMetadata | Contains metadata for the BatchAnnotateImages operation. |
| CGoogleCloudVisionV1p4beta1OutputConfig | The desired output location and metadata. |
| CGoogleCloudVisionV1p4beta1Page | Detected page from OCR. |
| CGoogleCloudVisionV1p4beta1Paragraph | Structural unit of text representing a number of words in certain order. |
| CGoogleCloudVisionV1p4beta1Position | A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image. |
| CGoogleCloudVisionV1p4beta1Product | A Product contains ReferenceImages. |
| CGoogleCloudVisionV1p4beta1ProductKeyValue | A product label represented as a key-value pair. |
| CGoogleCloudVisionV1p4beta1ProductSearchResults | Results for a product search request. |
| CGoogleCloudVisionV1p4beta1ProductSearchResultsGroupedResult | Information about the products similar to a single product in a query image. |
| CGoogleCloudVisionV1p4beta1ProductSearchResultsObjectAnnotation | Prediction for what the object in the bounding box is. |
| CGoogleCloudVisionV1p4beta1ProductSearchResultsResult | Information about a product. |
| CGoogleCloudVisionV1p4beta1Property | A Property consists of a user-supplied name/value pair. |
| CGoogleCloudVisionV1p4beta1ReferenceImage | A ReferenceImage represents a product image and its associated metadata, such as bounding boxes. |
| CGoogleCloudVisionV1p4beta1SafeSearchAnnotation | Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence). |
| CGoogleCloudVisionV1p4beta1Symbol | A single symbol representation. |
| CGoogleCloudVisionV1p4beta1TextAnnotation | TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail. |
| CGoogleCloudVisionV1p4beta1TextAnnotationDetectedBreak | Detected start or end of a structural component. |
| CGoogleCloudVisionV1p4beta1TextAnnotationDetectedLanguage | Detected language for a structural component. |
| CGoogleCloudVisionV1p4beta1TextAnnotationTextProperty | Additional information detected on the structural component. |
| CGoogleCloudVisionV1p4beta1Vertex | A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image. |
| CGoogleCloudVisionV1p4beta1WebDetection | Relevant information for the image from the Internet. |
| CGoogleCloudVisionV1p4beta1WebDetectionWebEntity | Entity deduced from similar images on the Internet. |
| CGoogleCloudVisionV1p4beta1WebDetectionWebImage | Metadata for online images. |
| CGoogleCloudVisionV1p4beta1WebDetectionWebLabel | Label to provide extra metadata for the web detection. |
| CGoogleCloudVisionV1p4beta1WebDetectionWebPage | Metadata for web pages. |
| CGoogleCloudVisionV1p4beta1Word | A word representation. |
| CGroupedResult | Information about the products similar to a single product in a query image. |
| CImageAnnotationContext | If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image. |
| CImageProperties | Stores image properties, such as dominant colors. |
| CImportProductSetsResponse | Response message for the ImportProductSets method. |
| CInputConfig | The desired input location and metadata. |
| CKeyValue | A product label represented as a key-value pair. |
| CLandmark | A face-specific landmark (for example, a face feature). |
| CLatLng | An object representing a latitude/longitude pair. This is expressed as a pair of doubles representing degrees latitude and degrees longitude. Unless specified otherwise, this must conform to the WGS84 standard. Values must be within normalized ranges. |
| CLocalizedObjectAnnotation | Set of detected objects with bounding boxes. |
| CLocationInfo | Detected entity location information. |
| CNormalizedVertex | A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1. |
| CObjectAnnotation | Prediction for what the object in the bounding box is. |
| COperation | This resource represents a long-running operation that is the result of a network API call. |
| COperationMetadata | Contains metadata for the BatchAnnotateImages operation. |
| COutputConfig | The desired output location and metadata. |
| CPage | Detected page from OCR. |
| CParagraph | Structural unit of text representing a number of words in certain order. |
| CPosition | A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image. |
| CProduct | A Product contains ReferenceImages. |
| CProductSearchResults | Results for a product search request. |
| CProperty | A Property consists of a user-supplied name/value pair. |
| CReferenceImage | A ReferenceImage represents a product image and its associated metadata, such as bounding boxes. |
| CResult | Information about a product. |
| CSafeSearchAnnotation | Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence). |
| CStatus | The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. |
| CSymbol | A single symbol representation. |
| CTextAnnotation | TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail. |
| CTextProperty | Additional information detected on the structural component. |
| CVertex | A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image. |
| CWebDetection | Relevant information for the image from the Internet. |
| CWebEntity | Entity deduced from similar images on the Internet. |
| CWebImage | Metadata for online images. |
| CWebLabel | Label to provide extra metadata for the web detection. |
| CWebPage | Metadata for web pages. |
| CWord | A word representation. |
| ►CFilesResource | The "files" collection of methods. |
| CAnnotateRequest | Service that performs image detection and annotation for a batch of files. Currently, only "application/pdf", "image/tiff" and "image/gif" are supported. |
| CAsyncBatchAnnotateRequest | Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results). |
| ►CImagesResource | The "images" collection of methods. |
| CAnnotateRequest | Run image detection and annotation for a batch of images. |
| CAsyncBatchAnnotateRequest | Run asynchronous image detection and annotation for a list of images. |
| ►CProjectsResource | The "projects" collection of methods. |
| ►CFilesResource | The "files" collection of methods. |
| CAnnotateRequest | Service that performs image detection and annotation for a batch of files. Currently, only "application/pdf", "image/tiff" and "image/gif" are supported. |
| CAsyncBatchAnnotateRequest | Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results). |
| ►CImagesResource | The "images" collection of methods. |
| CAnnotateRequest | Run image detection and annotation for a batch of images. |
| CAsyncBatchAnnotateRequest | Run asynchronous image detection and annotation for a list of images. |
| ►CLocationsResource | The "locations" collection of methods. |
| ►CFilesResource | The "files" collection of methods. |
| CAnnotateRequest | Service that performs image detection and annotation for a batch of files. Currently, only "application/pdf", "image/tiff" and "image/gif" are supported. |
| CAsyncBatchAnnotateRequest | Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page. Progress and results can be retrieved through the google.longrunning.Operations interface. Operation.metadata contains OperationMetadata (metadata). Operation.response contains AsyncBatchAnnotateFilesResponse (results). |
| ►CImagesResource | The "images" collection of methods. |
| CAnnotateRequest | Run image detection and annotation for a batch of images. |
| CAsyncBatchAnnotateRequest | Run asynchronous image detection and annotation for a list of images. |
| CVisionBaseServiceRequest | A base abstract class for Vision requests. |
| CVisionService | The Vision Service. |
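The sketch below shows how the classes indexed above fit together in a typical call: VisionService is constructed once, ImagesResource.AnnotateRequest sends a GoogleCloudVisionV1p1beta1BatchAnnotateImagesRequest, and the per-image results come back as GoogleCloudVisionV1p1beta1AnnotateImageResponse objects (with any per-image error reported as a Status). This is a minimal illustration, not part of the generated reference: it assumes the Google.Apis.Vision.v1p1beta1 NuGet package is installed, application-default credentials are available, and the application name and "photo.jpg" path are placeholders.

```csharp
// Minimal sketch: label detection for one image via the v1p1beta1 batch endpoint.
// Assumptions (not from the reference above): ADC credentials are configured,
// "vision-v1p1beta1-sample" and "photo.jpg" are placeholder values.
using System;
using System.IO;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Vision.v1p1beta1;
using Google.Apis.Vision.v1p1beta1.Data;

class VisionQuickstart
{
    static void Main()
    {
        // Application-default credentials, scoped for the Vision API.
        var credential = GoogleCredential.GetApplicationDefault()
            .CreateScoped(VisionService.Scope.CloudPlatform);

        // VisionService (see CVisionService above) is the entry point for all requests.
        var service = new VisionService(new BaseClientService.Initializer
        {
            HttpClientInitializer = credential,
            ApplicationName = "vision-v1p1beta1-sample",
        });

        // One AnnotateImageRequest: inline image bytes plus the requested feature.
        var request = new GoogleCloudVisionV1p1beta1AnnotateImageRequest
        {
            Image = new GoogleCloudVisionV1p1beta1Image
            {
                // Content carries base64-encoded image bytes.
                Content = Convert.ToBase64String(File.ReadAllBytes("photo.jpg")),
            },
            Features = new[]
            {
                new GoogleCloudVisionV1p1beta1Feature { Type = "LABEL_DETECTION", MaxResults = 5 },
            },
        };

        // Batch the single request and execute it synchronously.
        var batch = new GoogleCloudVisionV1p1beta1BatchAnnotateImagesRequest
        {
            Requests = new[] { request },
        };
        GoogleCloudVisionV1p1beta1BatchAnnotateImagesResponse response =
            service.Images.Annotate(batch).Execute();

        // Each entry in Responses corresponds to one entry in Requests.
        foreach (var result in response.Responses)
        {
            if (result.Error != null)
            {
                // Per-image failures are reported as a Status message.
                Console.WriteLine($"Error: {result.Error.Message}");
                continue;
            }
            foreach (var label in result.LabelAnnotations)
            {
                Console.WriteLine($"{label.Description}: {label.Score}");
            }
        }
    }
}
```

For files such as PDFs, the analogous flow goes through FilesResource.AsyncBatchAnnotateRequest, which returns a long-running Operation whose response, per the entries above, eventually carries an AsyncBatchAnnotateFilesResponse.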