Reporting Best Practices

Build new reports in the UI first

Reports are subject to a number of restrictions and requirements pertaining to report types, filters, dimensions, and metrics. These limitations are enforced by the API, which returns an HTTP 400 error when a request violates them. To avoid such errors when building reports, we recommend that you first build new reports in the Display & Video 360 UI.

Once you have built your desired report, use the "Try this API" feature in the reference documentation to call the getquery method and retrieve the DoubleClick Bid Manager API JSON representation of the corresponding query resource. Use this JSON representation as a template for building similar reports in the future.
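
For illustration, here is a minimal Python sketch that fetches a saved query's JSON representation with the google-api-python-client. The query ID, the credentials object, and the API version string are placeholder assumptions; substitute values for your own project and the API version you use.

    # Minimal sketch, not a complete tool: fetch a saved query and print its
    # JSON representation so it can serve as a template for similar reports.
    import json

    from googleapiclient.discovery import build

    QUERY_ID = 123456  # Placeholder: ID of the query built in the Display & Video 360 UI.

    # `credentials` is assumed to be an authorized Google credentials object.
    # Use the API version that exposes the getquery method.
    service = build('doubleclickbidmanager', 'v1.1', credentials=credentials)

    query = service.queries().getquery(queryId=QUERY_ID).execute()
    print(json.dumps(query, indent=2))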

Save and re-use reports

We recommend that you create and save reports for queries you run regularly, rather than inserting and deleting the same report repeatedly, which wastes resources. Using relative date ranges such as PREVIOUS_DAY or LAST_7_DAYS makes saved reports reusable from run to run.
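
As a sketch of what re-use looks like in code, the snippet below re-runs an existing saved query instead of inserting a new one each time. It assumes `service` is the authorized client built in the previous example and that the saved query uses a relative date range such as LAST_7_DAYS; the query ID is a placeholder.

    # Re-run a saved, reusable query. Because the stored definition uses a
    # relative date range, each run resolves to the correct dates on its own,
    # and no new query needs to be inserted or deleted.
    QUERY_ID = 123456  # Placeholder: ID of the saved query.

    service.queries().runquery(queryId=QUERY_ID, body={}).execute()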

Schedule reports

Ad hoc or one-off reports can waste resources because they are run individually and may execute against an incomplete dataset. Scheduled reports make the best use of reporting resources because they run in bulk and are guaranteed not to execute until the previous day's data has finished processing. See the available scheduling fields for details.
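
The following sketch shows how schedule-related fields might be set when creating a query, expressed as a Python dict. The field and enum names follow the query resource's schedule object but can differ between API versions, so treat them as illustrative rather than definitive; `service` is the authorized client from the earlier example.

    # Illustrative query body with scheduling fields; field names may vary by
    # API version.
    scheduled_query_body = {
        'metadata': {
            'title': 'Daily advertiser performance',  # placeholder title
            'dataRange': 'PREVIOUS_DAY',              # relative range keeps the report reusable
            'format': 'CSV',
        },
        'params': {
            'type': 'TYPE_GENERAL',
            'groupBys': ['FILTER_ADVERTISER'],
            'metrics': ['METRIC_IMPRESSIONS'],
        },
        'schedule': {
            'frequency': 'DAILY',        # run automatically once per day
            'endTimeMs': 1767225600000,  # placeholder: stop scheduling at this epoch time
        },
    }

    # Create the query once; the schedule then runs it in bulk alongside other
    # scheduled reports once the previous day's data is ready.
    service.queries().createquery(body=scheduled_query_body).execute()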

Use exponential backoff when polling for report status

It's not possible to predict how long a report will take to run. The run time can range from seconds to hours depending on many factors, including the date range and the amount of data to be processed. There's also no correlation between a report's run time and the number of rows it returns. To determine whether the latest report has finished running, you therefore need to regularly check whether the Google Cloud Storage path to the latest report in the query resource has been updated. This process is known as "polling".

While polling is necessary, an inefficient implementation can quickly exhaust your quota when a report runs for a long time. We therefore recommend that you use exponential backoff to limit retries and conserve quota.

Exponential backoff

Exponential backoff is a standard error-handling strategy for network applications in which the client periodically retries a request, waiting an increasing amount of time between attempts. Used properly, exponential backoff increases the efficiency of bandwidth usage, reduces the number of requests required to get a successful response, and maximizes the throughput of requests in concurrent environments.

The flow for implementing simple exponential backoff is as follows:

  1. Make a getquery request to the API.
  2. Retrieve the query object. If the metadata.googleCloudStoragePathForLatestReport field has not been updated, the report is still running, so retry the request.
  3. Wait 5 seconds + a random number of milliseconds, then retry the request.
  4. Retrieve the query object. If the metadata.googleCloudStoragePathForLatestReport field still has not been updated, retry the request.
  5. Wait 10 seconds + a random number of milliseconds, then retry the request.
  6. Retrieve the query object. If the metadata.googleCloudStoragePathForLatestReport field still has not been updated, retry the request.
  7. Wait 20 seconds + a random number of milliseconds, then retry the request.
  8. Retrieve the query object. If the metadata.googleCloudStoragePathForLatestReport field still has not been updated, retry the request.
  9. Wait 40 seconds + a random number of milliseconds, then retry the request.
  10. Retrieve the query object. If the metadata.googleCloudStoragePathForLatestReport field still has not been updated, retry the request.
  11. Wait 80 seconds + a random number of milliseconds, then retry the request.
  12. Continue this pattern, doubling the wait each time, until the query object is updated or a maximum elapsed time is reached.
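
A minimal Python sketch of this polling loop follows, assuming `service` is an authorized DoubleClick Bid Manager API client and the query has just been run; the maximum elapsed time, jitter, and query ID are illustrative choices.

    # Poll getquery with exponential backoff until the latest-report path changes.
    import random
    import time

    MAX_ELAPSED_SECONDS = 60 * 60  # Illustrative cap: give up after one hour.

    def wait_for_report(service, query_id, previous_path=None):
        """Returns the new report path, or raises if the report never finishes."""
        wait_seconds = 5
        start = time.monotonic()
        while time.monotonic() - start < MAX_ELAPSED_SECONDS:
            query = service.queries().getquery(queryId=query_id).execute()
            path = query.get('metadata', {}).get('googleCloudStoragePathForLatestReport')
            # A new, non-empty path means the latest report has finished running.
            if path and path != previous_path:
                return path
            # Back off: 5s, 10s, 20s, 40s, 80s, ... plus random millisecond jitter.
            time.sleep(wait_seconds + random.random())
            wait_seconds *= 2
        raise TimeoutError('Report did not finish within the maximum elapsed time.')

    report_path = wait_for_report(service, query_id=123456)  # placeholder query ID
    print('Latest report available at:', report_path)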

Limit the number of concurrent running reports

If you run more than 10 reports concurrently, any additional run requests are throttled. Throttling causes reports to take longer to run and, in extreme cases, can lead to timeouts or errors. We recommend that you limit yourself to 5 concurrent reports to stay comfortably below the limit.
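
One way to enforce such a cap is to run reports through a worker pool of at most 5, as in the sketch below, which reuses the wait_for_report helper from the previous section; the query IDs are placeholders. Note that the HTTP transport underlying google-api-python-client is not thread-safe, so a real implementation may want to build a separate client per thread.

    # Cap concurrent report runs by running and waiting inside a bounded pool.
    from concurrent.futures import ThreadPoolExecutor

    MAX_CONCURRENT_REPORTS = 5  # Stay comfortably below the limit of 10.
    query_ids = [111111, 222222, 333333]  # Placeholder saved-query IDs.

    def run_and_wait(query_id):
        # Record the current latest-report path so a stale path is not mistaken
        # for the new report, then start the run and poll until it completes.
        before = service.queries().getquery(queryId=query_id).execute()
        previous_path = before.get('metadata', {}).get('googleCloudStoragePathForLatestReport')
        service.queries().runquery(queryId=query_id, body={}).execute()
        return wait_for_report(service, query_id, previous_path)

    # Because each worker waits for its report to finish before taking the next
    # query, no more than MAX_CONCURRENT_REPORTS reports run at the same time.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_REPORTS) as executor:
        report_paths = list(executor.map(run_and_wait, query_ids))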