Pete Frisella, Google Analytics Developer Advocate – June 2013
This document describes how to run server-side experiments using Google Analytics.
Server-side implementations offer more flexibility to do things such as:
- Run experiments for websites with dynamic content.
- Test non-UI changes that still have an effect on your objective. For example, a database query result set that is returned to a user.
- Integrate Google Analytics experiments with your service (e.g. Content Management provider).
- Manage experiments using your own optimization platform.
This guide provides implementation considerations and flows for server-side experiments.
The main steps to running experiments server-side are:
- Define the experiment that you want to run.
- Configure the experiment and objectives in Google Analytics.
- Handle users and experiments.
- Publish changes and run the experiment.
Define the Experiment
The first step in any experiment is to define the original page, the variations to test, the objective of the experiment, and any other relevant parameters.
Defining variations depends on what you want to test: it could be a single element on a page, an entire page, the text size of an offer on a kiosk screen, or the result set of a database query. Goals and objectives will also vary depending on what you're testing, and may involve minimizing or maximizing a goal or a predefined metric such as time on site or pageviews.
The important thing is that you need to know the variations you'd like to test and have an objective to optimize in order to create and configure an experiment.
Configure the Experiment and Objectives in Google Analytics
Once you’ve defined the experiment and variations you’d like to test, configure the experiment using the Google Analytics web interface or Management API.
The steps to configure the experiment using the web interface are:
- Sign in to the Google Analytics web interface and select the view (profile) in which you want to create the experiment.
- Click the Reporting tab.
- Click Behavior > Experiments.
- Click Create experiment.
- Choose an experiment objective:
Select or create a goal as the experiment objective.
For details on using goals see Set up and edit Goals (Help Center). Once you've chosen an experiment objective, click Next Step to continue.
- Configure your experiment:
A name and URL are required for each variation. However, if you intend to eliminate redirects by making server-side changes, you can use any value for the variation URL since it won't be applicable in this case.
Once you've configured the experiment, click Next Step to continue.
- Setting up your experiment code:
Click Next Step to continue.
- Review and start:
Click Start Experiment, or start the experiment later, after you've completed the implementation of the variations. If you receive a validation error, click Yes to ignore it and continue.
For additional details and instructions on configuring an experiment see Run an Experiment (Help Center).
Experiments can be created, updated, and deleted programmatically using the Management API. This is useful if you want to fully automate experiments. If you create and manage all experiments using the Management API, you should set the servingFramework parameter of the experiment to API. However, if you intend to disable the multi-armed bandit optimization used by Google Analytics for experiments, set it to EXTERNAL. See the Feature Reference and Experiments Developer Guide for additional details.
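As a sketch, an experiment could be created through the Management API along the following lines. The IDs are placeholders and the helper only builds the request body from the fields of the Experiments resource; the commented-out insert call assumes an authorized service object (service construction is omitted):

```python
def build_experiment_body(name, original_url, variation_urls,
                          objective_metric='ga:bounces',
                          traffic_coverage=1.0):
    """Builds an Experiments resource body for a Management API insert call.

    Google Analytics represents variations as a 0-based list whose first
    element is the original, so the original is listed first here.
    """
    variations = [{'name': 'Original', 'url': original_url}]
    variations += [{'name': 'Variation %d' % (i + 1), 'url': url}
                   for i, url in enumerate(variation_urls)]
    return {
        'name': name,
        'status': 'READY_TO_RUN',
        'objectiveMetric': objective_metric,
        'trafficCoverage': traffic_coverage,
        # API: Google Analytics runs the multi-armed bandit and you serve
        # variations server-side. Use EXTERNAL to manage optimization
        # yourself.
        'servingFramework': 'API',
        'variations': variations,
    }

# The insert call itself (IDs are placeholders):
# ANALYTICS_SERVICE.management().experiments().insert(
#     accountId='1234', webPropertyId='UA-1234-1', profileId='56789',
#     body=build_experiment_body(...)).execute()
```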
Once you’ve configured an experiment and have an Experiment ID, you will need to handle choosing and showing variations to users when they are exposed to an experiment.
Handle Users and Experiments
As users interact with a property that has a running experiment, you need to determine if the user is new or returning to the experiment, which variation to show them, and then send experiment data to Google Analytics. The steps required to accomplish this are:
- Periodically refresh and store data for running experiments.
- Determine the experiment status for a user and what to show them.
- Send experiment data to Google Analytics and show the variation to the user.
Store and Retrieve Experiment Data
Storing and retrieving experiments data is applicable if you are relying on the Google Analytics statistical engine and want to make a server-side decision about an experiment and which variation to show a user (i.e. servingFramework is set to API). Making a decision requires up-to-date information about all the experiments running for your property. This can be accomplished by periodically querying the Management API for the latest experiments information and storing it on your server. Google Analytics evaluates the performance of each variation and updates its optimization decisions twice daily, so it is recommended that you refresh your experiments info multiple times per day to retrieve the latest variation weights and status for your running experiments.
The Management API list method can be used to query Google Analytics for a list of experiments and the get method can be used to retrieve experiment details for an individual experiment. This information can be saved on your server and cached for quick access.
When a user makes a request on your property you will need to determine at that time whether there is an experiment running. For this reason you should store experiment details in a manner that will make it easy to lookup and retrieve any relevant experiment info. For example, for a website you may want to use a content ID or the URL as an index mapped to experiment IDs.
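As a sketch, such an index could be built from the Management API list response by keying running experiments on the URL of their original page. The plain dictionary stands in for whatever cache or datastore you use:

```python
def build_experiment_index(experiments):
    """Maps the original page URL of each running experiment to the
    experiment details, for quick lookup at request time.

    Args:
        experiments: The parsed response of a Management API list call.
    """
    index = {}
    for experiment in experiments.get('items', []):
        if experiment.get('status') != 'RUNNING':
            continue
        variations = experiment.get('variations', [])
        if variations:
            # By convention, the first variation in the list is the original.
            index[variations[0].get('url')] = experiment
    return index


def lookup_experiment(index, url):
    """Returns the running experiment for a URL, or None if there is none."""
    return index.get(url)
```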
The following Python code shows a simple handler that refreshes experiments data periodically, run as a scheduled task with cron on AppEngine. This is not a comprehensive example and is for illustrative purposes only.
class RefreshExperimentsHandler(BaseHandler):
  """Handles periodic refresh for a scheduled task with cron for AppEngine."""

  def get(self):
    experiments = get_experiments()
    update_experiments_db(experiments)


def get_experiments():
  """Queries the Management API for all active experiments."""
  experiments = None
  try:
    experiments = ANALYTICS_SERVICE.management().experiments().list(
        accountId='1234',
        webPropertyId='UA-1234-1',
        profileId='56789').execute()
  except TypeError, error:
    # Handle errors in constructing a query.
    logging.error('There was an error constructing the query : %s' % error)
  except HttpError, error:
    # Handle API errors.
    logging.error('API error : %s : %s' % (error.resp.status,
                                           error._get_reason()))
  return experiments


def update_experiments_db(experiments):
  """Updates the datastore with the provided experiment data.

  Args:
    experiments: A list of experiments.
  """
  if experiments:
    for experiment in experiments.get('items', []):
      experiment_key = db.Key.from_path('Experiment', experiment.get('id'))
      experiment_db = db.get(experiment_key)

      # Update experiment values.
      experiment_db.status = experiment.get('status')
      experiment_db.start_time = experiment.get('startTime')
      # ... Continue to update all properties.

      # Update memcache with the experiment values.
      memcache.set(experiment.get('id'), experiment)

      # Update variations.
      for index, variation in enumerate(experiment.get('variations', [])):
        variation_db = experiment_db.variations.get_by_id(index)
        variation_db.status = variation.get('status')
        variation_db.weight = variation.get('weight')
        variation_db.won = variation.get('won')
        # ... Continue updating variation properties.
        variation_db.put()
      experiment_db.put()
And an example cron entry:

cron:
- description: refresh experiments info
  url: /refresh_experiments
  schedule: every 12 hours
Store Experiment Information for Users
When a user interacts with your property and is exposed to an experiment for the first time you need to make various checks and decisions as to whether they should be included in an experiment and whether to show them a variation or the original. Once these choices are made they should remain the same for the user on subsequent exposures to the same experiment. For this reason it is necessary to anonymously store experiment details for a user in a location that is secure but accessible to you whenever a user interacts with your property.
For websites, the recommended approach is to write the values to a cookie. For implementations with no client-side storage mechanism, this information will need to be saved server-side using some sort of lookup table that maps anonymous but stable user IDs to experiment details for a user.
There are two experiment values that need to be stored to handle experiments consistently for returning users. For each experiment the user is exposed to, save the following details:
- Experiment ID—The ID of the experiment the user has been exposed to.
- Variation ID—The index of the variation chosen for the user. Google Analytics represents variations as a list, using a 0-based index, where the first element in the list is the "Original". The list of variations for an experiment is always in the same order and cannot be modified once an experiment is running. Therefore, the index of a variation can be used as the Variation ID. For users that are not included in the experiment, it is recommended that you use a value of -1 as the Variation ID.
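For a website, these two values can be serialized into a single cookie value, one entry per experiment. The format below is an arbitrary choice and the helper names are illustrative:

```python
def encode_experiment_cookie(chosen):
    """Serializes {experiment_id: variation_index} as 'id.index!id.index'."""
    return '!'.join('%s.%d' % (exp_id, index)
                    for exp_id, index in sorted(chosen.items()))


def decode_experiment_cookie(value):
    """Parses a cookie value produced by encode_experiment_cookie."""
    chosen = {}
    for entry in value.split('!'):
        if not entry:
            continue
        # rpartition tolerates '.' inside the experiment ID itself.
        exp_id, _, index = entry.rpartition('.')
        chosen[exp_id] = int(index)
    return chosen
```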
Once you have set up a way to periodically refresh and store experiments info, the next step is to consider how to make decisions and show variations for users that are exposed to experiments.
Choose a Variation
When a user interacts with your property there are multiple checks that need to be made to handle a user's exposure to a running experiment. The rest of this section provides questions to help explain the proper steps to take depending on the user and experiment configuration.
For the current user interaction (e.g. pageview), is there an experiment running?
Refer to the experiment information stored on your server to determine if there is an active experiment running for the particular user interaction. In other words, you need to figure out if the user is interacting with the 'Original' of an experiment. For example, a user visits a page on a website that has been configured as the original page for an experiment.
- Yes: continue to the next check.
- No: There is no experiment running, so show the user whatever they requested and skip the rest of the checks. It is not necessary to send experiment data for this user to Google Analytics.
The following is a sample function for an AppEngine application that uses a Page entity to model pages and an Experiment entity to store details about experiments.
def is_experiment_running(page_id):
  """Checks if an experiment is currently running for a page.

  Args:
    page_id: The ID of the page to check for a running experiment.

  Returns:
    A Boolean indicating whether an experiment is running or not.
  """
  try:
    page = db_models.Page.get_by_key_name(page_id)
  except db.BadKeyError:
    return False

  if page:
    experiment_id = page.experiment_id
    try:
      experiment = db_models.Experiment.get_by_key_name(experiment_id)
    except db.BadKeyError:
      return False
    if experiment:
      return experiment.status == 'RUNNING'
  return False
Has the user been previously exposed to this experiment?
To determine whether a user has been exposed, you must have previously saved this information for the user in an accessible location. For websites the recommended approach is to write a value to a cookie; for implementations with no client-side storage mechanism, this information will need to be saved server-side using some sort of lookup table that maps anonymous user IDs to experiment IDs and the variation selected for the user. See Store Experiment Information for Users for details.
Attempt to retrieve any stored experiment info for the user to determine if the user has been previously exposed to this experiment.
- Yes: is the variation that was previously selected for this user still active?
  - Yes: you have the variation information for the user, so skip the rest of the checks.
  - No: the variation is no longer active. Show the original and skip the rest of the checks. It is not necessary to send experiment data for this user to Google Analytics.
- No: continue to the next check.
Should the user be included in the experiment?
You can determine whether the user should be included in the experiment by using the trafficCoverage value of the experiment. Choose a random number in the range 0.0 to 1.0. If the random number is less than or equal to the trafficCoverage for the experiment, then include the user in the experiment.
- Yes: continue to the next check.
- No: store experiment information for this user to indicate they should not be included in this experiment (see Store Experiment Information for Users ). Show them the original and skip the rest of the checks. It is not necessary to send experiment data for this user to Google Analytics.
The following is a sample function to determine whether a user should be included in an experiment.
import random


def should_user_participate(traffic_coverage):
  """A utility to decide whether a new user should be included in an
  experiment.

  Args:
    traffic_coverage: The fraction of traffic that should participate in
        the experiment.

  Returns:
    A Boolean indicating whether the user should be included in the
    experiment or not.
  """
  random_spin = random.random()
  return random_spin <= traffic_coverage
Choose a variation for the user
If a user has never been exposed to an experiment and they have been selected to be included in it, then you will need to choose a variation to show them. The first step is to retrieve all of the variations for the experiment they have been exposed to. The second step is to randomly choose a variation to show based on the weight of each variation.
The following is a sample function that will choose a variation based on a list of variations and their weights.
import random


def choose_variation(variations):
  """Chooses a variation based on weights and status.

  Args:
    variations: A collection of variations to choose from.

  Returns:
    An Integer representing the index of the chosen variation.
  """
  random_spin = random.random()
  cumulative_weights = 0
  for index, variation in enumerate(variations):
    if variation.get('status') == 'ACTIVE':
      cumulative_weights += variation.get('weight')
      if random_spin < cumulative_weights:
        return index
  return 0
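The checks above can be combined into a single per-request decision. The following sketch uses plain dictionaries in place of the datastore and cookie handling, and an injectable random source so the logic is testable; the names are illustrative:

```python
import random


def decide_variation(experiment, user_record, rng=random.random):
    """Returns the index of the variation to show (0 is the original) and
    records the choice in user_record so it stays sticky for the user.

    Args:
        experiment: A dict shaped like the Experiments resource.
        user_record: A dict mapping experiment ID to the stored Variation ID,
            where -1 marks a user excluded from the experiment.
        rng: A zero-argument function returning a float in [0.0, 1.0).
    """
    if experiment.get('status') != 'RUNNING':
        return 0  # No running experiment: always show the original.
    exp_id = experiment['id']
    variations = experiment['variations']
    if exp_id in user_record:
        # Returning user: reuse the stored choice if it is still active.
        index = user_record[exp_id]
        if index >= 0 and variations[index].get('status') == 'ACTIVE':
            return index
        return 0  # Excluded user or retired variation: show the original.
    # New user: traffic-coverage spin, then a weighted random choice.
    if rng() > experiment.get('trafficCoverage', 1.0):
        user_record[exp_id] = -1
        return 0
    spin, cumulative, chosen = rng(), 0.0, 0
    for index, variation in enumerate(variations):
        if variation.get('status') != 'ACTIVE':
            continue
        cumulative += variation.get('weight', 0)
        if spin < cumulative:
            chosen = index
            break
    user_record[exp_id] = chosen
    return chosen
```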
Send Experiment Data and Show a Variation
When a user is exposed to an experiment, and if they are selected to be included in the experiment, and the variation to show them is active, then you need to send the Experiment ID and Variation ID to Google Analytics. See Store Experiment Information for Users for details on the experiment values.
- Experiment ID — the ID of the experiment the user has been exposed to.
- Variation ID — the variation shown to the user. An integer value representing the variation in Google Analytics for the experiment. For details see Determining experiment status for a user and Choosing a variation.
If you are managing experiments and have set the servingFramework field of an experiment to EXTERNAL, as described in Handling Users and Experiments, then it is likely you have an internal ID for experiment variations. In this case, to send experiment data to Google Analytics, you will need to map your internal variation ID to the variation number that Google Analytics has assigned to the matching variation.
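If you are sending hits server-side with the Measurement Protocol, the two values travel on each hit as the xid and xvar parameters. A minimal sketch of building such a pageview payload (client-ID management and the HTTP POST itself are omitted):

```python
from urllib.parse import urlencode


def build_experiment_hit(tracking_id, client_id, page_path,
                         experiment_id, variation_id):
    """Builds a Measurement Protocol pageview payload carrying experiment
    data. xid and xvar are the Measurement Protocol fields for the
    Experiment ID and the chosen Variation ID.
    """
    return urlencode({
        'v': '1',                  # Protocol version.
        'tid': tracking_id,        # e.g. UA-1234-1
        'cid': client_id,          # Anonymous client ID.
        't': 'pageview',
        'dp': page_path,
        'xid': experiment_id,
        'xvar': str(variation_id),
    })

# POST the payload to the Google Analytics collection endpoint
# (www.google-analytics.com/collect).
```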
For example, if a website has an experiment running on a single page, experiment data needs to be sent to Google Analytics when a user is exposed to that page (assuming the user is included in the experiment). For pageviews on other parts of the site where no experiments are running, it is not necessary to send experiment data to Google Analytics. If the user returns to the page where the experiment is running, experiment data needs to be sent again, and on every subsequent visit to the page until the experiment has ended.
Publish and Run the Experiment
Once you’ve configured the experiment and have implemented the server-side logic and any changes to the original page, the next step is to ensure that the experiment is running and make the changes live.
After the experiment has ended you can make changes to the original page and remove any experiment-related logic from the page.
Tips and Considerations
- Predefined metrics such as time on site, pageviews, revenue, etc. can be used for Experiment objectives instead of Goals.
- Google Analytics uses a multi-armed bandit approach to managing online experiments which automatically calculates weights and sets the status of each variation in an experiment. The weights can then be used to randomly choose a variation to show a user that is exposed to an experiment for the first time. Google Analytics automatically updates these weights twice daily by evaluating the performance of each variation. To learn more about the statistical engine that Google Analytics uses to manage experiments, see Multi-armed bandit experiments.