ROI priors and calibration

ROI priors offer an intuitive way to incorporate domain knowledge, such as past experiment results, into your model to help guide the model training process.

When ROI experiment results are used to set channel-specific ROI priors, Meridian refers to this as calibration. Experiment results aren't required to use ROI priors; ROI priors are the recommended approach regardless of what data is available to inform them.

ROI priors ensure that the effective coefficient prior is on a scale that is appropriate relative to the spend for each channel. It can be tempting to think that coefficient priors make better non-informative priors, but this isn't the case. If you use the same non-informative coefficient prior on all channels, you are effectively placing very different ROI priors on these channels that could differ by orders of magnitude.
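To see why, consider a quick back-of-the-envelope sketch. The spend and media-unit figures below are made up, and adstock and saturation are ignored purely for illustration; the point is only that one shared coefficient prior translates into very different implied ROI scales when channels differ in spend and cost per media unit.

```python
import numpy as np

# Hypothetical figures for two channels (illustration only, not real data).
spend = np.array([1_000_000.0, 50_000.0])   # total spend per channel ($)
media_units = np.array([2e8, 1e5])          # e.g., impressions vs. clicks

# One shared "non-informative" coefficient prior mean applied to both channels,
# ignoring adstock and saturation for simplicity.
beta_prior_mean = 0.001                     # incremental KPI per media unit

# Implied ROI prior mean for each channel: incremental outcome / spend.
implied_roi = beta_prior_mean * media_units / spend
print(implied_roi)   # [0.2, 0.002] -- two orders of magnitude apart
```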

Here are some important considerations when setting ROI priors:

  • There is no specific formula to translate an experiment result into a prior. One option is to align the experiment's point estimate and standard error with the prior mean and standard deviation (see an example in Set custom priors using past experiments, and the calibration sketch after this list). However, prior knowledge in the Bayesian sense is more broadly defined and doesn't need to come from a formulaic calculation. Other domain knowledge can be combined with experiment results to set the priors subjectively.

  • Meridian's default ROI prior distribution is Log-normal. This distribution was chosen as the default because it has two parameters, which gives control over both the mean and standard deviation. However, any distribution with any number of parameters can be used in place of Log-normal. Generally, it's not recommended to allow negative ROI values because this can inflate the posterior variance and lead to overfitting.

  • The ROI measured by an experiment never aligns perfectly with the ROI measured by MMM. (In statistical terms, the experiment and MMM have different estimands.) Experiment results are tied to the specific conditions of the experiment, such as the time window, geographic regions, and campaign settings. Experiment results can provide highly relevant information about the MMM ROI, but translating experiment results to an MMM prior involves an additional layer of uncertainty beyond the experiment's standard error alone.

  • When setting prior distributions, and prior standard deviations in particular:

    • Consider that some degree of regularization is typically necessary to achieve a suitable bias-variance tradeoff. Although some modelers might be inclined to use flat, noninformative priors for channels with no prior experiments, this can lead to overfitting and poor results (low bias but high variance).

    • Finding an appropriate degree of regularization can be an iterative process that involves checking out-of-sample model fit at various regularization strengths (see the regularization sweep sketch after this list). Bayesian purists might argue against this because the posterior distribution doesn't have a clear interpretation unless the prior distribution precisely reflects prior knowledge. Although this is true, such purity is not always practical for MMM. Furthermore, it is infeasible to obtain domain knowledge and set a true prior on every single parameter in the model, and Bayesian inference should be interpreted accordingly.
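As one concrete way to carry out the moment-matching option mentioned in the first item above, the sketch below sets a LogNormal ROI prior whose mean and standard deviation match a made-up experiment point estimate and standard error, then passes it to Meridian. The `PriorDistribution` and `ModelSpec` usage follows the pattern shown in Meridian's documentation, but treat the exact argument names as something to verify against the current API.

```python
import numpy as np
import tensorflow_probability as tfp

from meridian.model import prior_distribution
from meridian.model import spec

# Hypothetical experiment result for one channel (illustration only).
# In practice you might widen experiment_se to reflect the extra uncertainty
# in translating the experiment's estimand to the MMM's estimand.
experiment_roi = 2.5   # point estimate of ROI
experiment_se = 0.8    # standard error of that estimate

# Moment-match a LogNormal so its mean and standard deviation equal the
# experiment's point estimate and standard error.
sigma = np.sqrt(np.log(1.0 + (experiment_se / experiment_roi) ** 2))
mu = np.log(experiment_roi) - 0.5 * sigma**2

# With multiple channels, pass arrays of loc/scale (one entry per channel)
# so each channel gets its own prior.
roi_prior = tfp.distributions.LogNormal(loc=mu, scale=sigma, name='roi_m')

# Pass the custom ROI prior into the model specification.
model_spec = spec.ModelSpec(
    prior=prior_distribution.PriorDistribution(roi_m=roi_prior)
)
```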
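The iterative regularization check described in the last item can be organized as a simple sweep over candidate prior standard deviations. In this sketch, `evaluate_holdout_fit` is a hypothetical placeholder for whatever out-of-sample evaluation you run (for example, refitting the model with a holdout set and computing holdout fit); only the loop structure is meant literally.

```python
import tensorflow_probability as tfp

from meridian.model import prior_distribution
from meridian.model import spec


def evaluate_holdout_fit(model_spec) -> float:
    """Hypothetical helper: fit the model with `model_spec` and return an
    out-of-sample fit metric (e.g., holdout R-squared). The implementation
    depends on your data pipeline and evaluation setup."""
    raise NotImplementedError


# Candidate prior standard deviations, from tight (strong regularization)
# to wide (weak regularization). Values are illustrative.
candidate_sigmas = [0.3, 0.5, 0.7, 0.9]
results = {}

for sigma in candidate_sigmas:
    roi_prior = tfp.distributions.LogNormal(loc=0.2, scale=sigma, name='roi_m')
    model_spec = spec.ModelSpec(
        prior=prior_distribution.PriorDistribution(roi_m=roi_prior)
    )
    results[sigma] = evaluate_holdout_fit(model_spec)

best_sigma = max(results, key=results.get)
print(f'Best out-of-sample fit at prior sigma = {best_sigma}')
```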

For more information, see: