Google Prediction API

Hostedmodels: predict

Requires authorization

Submit input and request an output against a hosted model.

Request

HTTP request

POST https://www.googleapis.com/prediction/v1.5/hostedmodels/hostedModelName/predict

Parameters

Parameter name Value Description
Required parameters
hostedModelName string The name of a hosted model.

Authorization

This request requires authorization with at least one of the following scopes.

Scope
https://www.googleapis.com/auth/prediction

Request body

In the request body, supply data with the following structure:

{
  "input": {
    "csvInstance": [
      (value)
    ]
  }
}
Property name Value Description Notes
input object Input to the model for a prediction
input.csvInstance[] list A list of input features; each feature may be a string or a double.
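
A minimal sketch of building this request in Python. The model name and feature value are illustrative, and obtaining an OAuth access token for the scope above is assumed to happen separately:

```python
import json

API_ROOT = "https://www.googleapis.com/prediction/v1.5"

def build_predict_request(hosted_model_name, csv_instance):
    """Build the URL and JSON body for a hostedmodels.predict call.

    csv_instance is the ordered list of input features (strings or
    doubles) the hosted model expects.
    """
    url = f"{API_ROOT}/hostedmodels/{hosted_model_name}/predict"
    body = json.dumps({"input": {"csvInstance": csv_instance}})
    return url, body

# Hypothetical model name, for illustration only.
url, body = build_predict_request("sample.languageid", ["mucho bueno"])
```

The returned url and body would then be sent as a POST with a Content-Type of application/json and an Authorization header carrying the access token.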

Response

If successful, this method returns a response body with the following structure:

{
  "kind": "prediction#output",
  "id": string,
  "selfLink": string,
  "outputLabel": string,
  "outputMulti": [
    {
      "label": string,
      "score": double
    }
  ],
  "outputValue": double
}
Property name Value Description Notes
kind string The kind of resource this is; always "prediction#output".
id string The unique name for the predictive model.
outputLabel string [Categorical models only] The most likely class label.
outputMulti[] list [Categorical models only] A list of class labels with their estimated probabilities.
outputMulti[].label string The class label.
outputMulti[].score double A score for this class label. A few notes on the scores:

  • Score values range from 0.0–1.0, with 1.0 being the highest. All values should add up to 1.0. 
    Note: if you used an earlier version of the API that used a different range, you must retrain your data model in order for scores to be scaled to 0.0–1.0.
  • Consider having a cutoff value above which the categorization is useful and below which you might ignore it. We can't advise a hard cutoff value; instead, try running a few queries for borderline items, and use that as an approximate cutoff value for your categories.
  • These values are not probabilities; that is, they are not the confidence that a rating is correct. They are a measure of how closely a category seems to conform to the query item.
  • It is hard to say absolutely what is a significant difference in scores. For example, is 0.42 "significantly" better than 0.33? Is 0.25 "twice as good" as 0.125? Instead, assume that the highest value is the best fit, and have a cutoff value below which you won't use the data. You'll have to experiment with the system to determine what is a meaningful cutoff value for your data.
outputValue double [Regression models only] The estimated regression value.
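
The cutoff advice above can be applied with a short helper when reading a categorical response. A sketch in Python; the sample response and the cutoff value of 0.5 are illustrative, and a real cutoff should be calibrated empirically as described in the notes:

```python
def best_label(prediction, cutoff=0.5):
    """Pick the top class label from a prediction#output response.

    Returns None when the best score falls below the cutoff, i.e.
    when no category conforms closely enough to the query item.
    """
    best = max(prediction["outputMulti"], key=lambda o: o["score"])
    return best["label"] if best["score"] >= cutoff else None

# Illustrative response body for a two-class categorical model.
sample = {
    "kind": "prediction#output",
    "outputLabel": "spam",
    "outputMulti": [
        {"label": "spam", "score": 0.85},
        {"label": "ham", "score": 0.15},
    ],
}
```

For example, best_label(sample) returns "spam", while raising the cutoff to 0.9 makes the same response return None.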
