Introduction to the Meridian Demo

Welcome to the Meridian end-to-end demo. This simplified demo showcases the fundamental functionalities and basic usage of the library, including working examples of the major modeling steps:

  1. Install and Environment Configuration
  2. Load the data
  3. Configure the model
  4. Run post-modeling quality checks
  5. Run model diagnostics
  6. Generate model results & two-page output
  7. Run budget optimization & two-page output
  8. Save the model object
  9. Interactive Scenario Planning

Note that this notebook skips all of the exploratory data analysis and preprocessing steps. It assumes that you have completed these tasks before reaching this point in the demo.

This notebook utilizes sample data. As a result, the numbers and results obtained might not accurately reflect what you encounter when working with a real dataset.

Step 0: Install and Environment Configuration

1. Make sure you are using one of the available GPU Colab runtimes, which are required to run Meridian. You can change your notebook's runtime in Runtime > Change runtime type in the menu. All users can use the T4 GPU runtime, which is sufficient to run this demo Colab free of charge. Users who have purchased one of Colab's paid plans have access to premium GPUs (such as the NVIDIA V100, A100, or L4).

2. Install the latest version of Meridian, and verify that a GPU is available.

# Install meridian: from PyPI @ latest release
!pip install --upgrade google-meridian[colab,and-cuda,schema]

# Install meridian: from PyPI @ specific version
# !pip install google-meridian[colab,and-cuda,schema]==1.3.1

# Install meridian: from GitHub @HEAD
# !pip install --upgrade "google-meridian[colab,and-cuda,schema] @ git+https://github.com/google/meridian.git@main"
Collecting google-meridian[and-cuda,colab,schema]
  Downloading google_meridian-1.4.0-py3-none-any.whl.metadata (9.8 kB)
  ...
Installing collected packages: semver, python-calamine, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, natsort, nvidia-cusparse-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, nvidia-cusolver-cu12, mmm-proto-schema, google-meridian
  ...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torch 2.9.0+cu126 requires nvidia-cublas-cu12==12.6.4.1; platform_system == "Linux", but you have nvidia-cublas-cu12 12.5.3.2 which is incompatible.
torch 2.9.0+cu126 requires nvidia-cuda-cupti-cu12==12.6.80; platform_system == "Linux", but you have nvidia-cuda-cupti-cu12 12.5.82 which is incompatible.
torch 2.9.0+cu126 requires nvidia-cuda-nvrtc-cu12==12.6.77; platform_system == "Linux", but you have nvidia-cuda-nvrtc-cu12 12.5.82 which is incompatible.
torch 2.9.0+cu126 requires nvidia-cuda-runtime-cu12==12.6.77; platform_system == "Linux", but you have nvidia-cuda-runtime-cu12 12.5.82 which is incompatible.
torch 2.9.0+cu126 requires nvidia-cudnn-cu12==9.10.2.21; platform_system == "Linux", but you have nvidia-cudnn-cu12 9.3.0.75 which is incompatible.
torch 2.9.0+cu126 requires nvidia-cufft-cu12==11.3.0.4; platform_system == "Linux", but you have nvidia-cufft-cu12 11.2.3.61 which is incompatible.
torch 2.9.0+cu126 requires nvidia-curand-cu12==10.3.7.77; platform_system == "Linux", but you have nvidia-curand-cu12 10.3.6.82 which is incompatible.
torch 2.9.0+cu126 requires nvidia-cusolver-cu12==11.7.1.2; platform_system == "Linux", but you have nvidia-cusolver-cu12 11.6.3.83 which is incompatible.
torch 2.9.0+cu126 requires nvidia-cusparse-cu12==12.5.4.2; platform_system == "Linux", but you have nvidia-cusparse-cu12 12.5.1.3 which is incompatible.
torch 2.9.0+cu126 requires nvidia-nccl-cu12==2.27.5; platform_system == "Linux", but you have nvidia-nccl-cu12 2.23.4 which is incompatible.
torch 2.9.0+cu126 requires nvidia-nvjitlink-cu12==12.6.85; platform_system == "Linux", but you have nvidia-nvjitlink-cu12 12.5.82 which is incompatible.
Successfully installed google-meridian-1.4.0 mmm-proto-schema-1.1.1 natsort-7.1.1 nvidia-cublas-cu12-12.5.3.2 nvidia-cuda-cupti-cu12-12.5.82 nvidia-cuda-nvrtc-cu12-12.5.82 nvidia-cuda-runtime-cu12-12.5.82 nvidia-cudnn-cu12-9.3.0.75 nvidia-cufft-cu12-11.2.3.61 nvidia-curand-cu12-10.3.6.82 nvidia-cusolver-cu12-11.6.3.83 nvidia-cusparse-cu12-12.5.1.3 nvidia-nccl-cu12-2.23.4 nvidia-nvjitlink-cu12-12.5.82 python-calamine-0.6.1 semver-3.0.4
import IPython
from meridian import constants
from meridian.analysis import analyzer
from meridian.analysis import optimizer
from meridian.analysis import summarizer
from meridian.analysis import visualizer
from meridian.analysis.review import reviewer
from meridian.data import data_frame_input_data_builder
from meridian.model import model
from meridian.model import prior_distribution
from meridian.model import spec
from schema.serde import meridian_serde
import numpy as np
import pandas as pd
# check if GPU is available
from psutil import virtual_memory
import tensorflow as tf
import tensorflow_probability as tfp

ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
print(
    'Num GPUs Available: ',
    len(tf.config.experimental.list_physical_devices('GPU')),
)
print(
    'Num CPUs Available: ',
    len(tf.config.experimental.list_physical_devices('CPU')),
)
Your runtime has 54.8 gigabytes of available RAM

Num GPUs Available:  1
Num CPUs Available:  1

3. Mount storage. Use meridian_root to refer to the mounted root. The mounted root is used to save the trained model, stage the two-page outputs, and generate the scenario planning dashboard.

For Colab Free/Pro users, this demo uses the MyDrive folder in Google Drive as the external storage. For Colab Enterprise users, it uses Cloud Storage FUSE to mount a GCS bucket.
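
A minimal sketch for the Colab Free/Pro case, assuming Google Drive is mounted with google.colab and the MyDrive folder serves as the Meridian root:

# Minimal sketch (assumption): mount Google Drive and use MyDrive as the root
# folder for the saved model, two-page reports, and the scenario planning dashboard.
from google.colab import drive

drive.mount('/content/drive')
meridian_root = '/content/drive/MyDrive/'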

Mounted at /content/drive

Step 1: Load the data

Load the simulated dataset in CSV format as follows.

1. Read the data into a Pandas DataFrame.

df = pd.read_csv(
    # Optionally, use `f"{meridian_root}/<path_to_csv>"` to load data from the mounted storage.
    "https://raw.githubusercontent.com/google/meridian/refs/heads/main/meridian/data/simulated_data/csv/geo_all_channels.csv"
)

2. Create a DataFrameInputDataBuilder instance.

builder = data_frame_input_data_builder.DataFrameInputDataBuilder(
    kpi_type='non_revenue',
    default_kpi_column='conversions',
    default_revenue_per_kpi_column='revenue_per_conversion',
)

3. Offer the components to the builder. Note that the components may be offered all at once or piecewise.

builder = (
    builder.with_kpi(df)
    .with_revenue_per_kpi(df)
    .with_population(df)
    .with_controls(
        df, control_cols=["sentiment_score_control", "competitor_sales_control"]
    )
)

channels = ["Channel0", "Channel1", "Channel2", "Channel3", "Channel4"]
builder = builder.with_media(
    df,
    media_cols=[f"{channel}_impression" for channel in channels],
    media_spend_cols=[f"{channel}_spend" for channel in channels],
    media_channels=channels,
)
4. If your data includes organic media or non-media treatments, you can add them using the with_organic_media and with_non_media_treatments methods. For the definition of each variable, see Collect and organize your data.
builder = builder.with_non_media_treatments(
    df, non_media_treatment_cols=['Promo']
).with_organic_media(
    df,
    organic_media_cols=['Organic_channel0_impression'],
    organic_media_channels=['Organic_channel0'],
)
5. Finally, build the InputData.
data = builder.build()

Note that the simulated data here does not contain reach and frequency. We recommend including reach and frequency data whenever they are available. For information about the advantages of utilizing reach and frequency, see Bayesian Hierarchical Media Mix Model Incorporating Reach and Frequency Data. For a code snippet that loads reach and frequency data, see Load geo-level data with reach and frequency.

The documentation provides guidance for instances where reach and frequency data is accessible for specific channels. Additionally, for information about how to load other data types and formats, including data with reach and frequency, see Supported data types and formats.
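
As an illustration, offering reach and frequency data to the builder (before calling build(), like the other components) might look like the following sketch. The column and channel names below are hypothetical; see the guides linked above for the exact usage.

# Hypothetical sketch: offer reach and frequency data for two additional channels.
# Column and channel names are placeholders; adjust them to match your dataset.
rf_channels = ["Channel5", "Channel6"]
builder = builder.with_reach(
    df,
    reach_cols=[f"{channel}_reach" for channel in rf_channels],
    frequency_cols=[f"{channel}_frequency" for channel in rf_channels],
    rf_spend_cols=[f"{channel}_spend" for channel in rf_channels],
    rf_channels=rf_channels,
)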

Step 2: Configure the model

Meridian uses a Bayesian framework and Markov Chain Monte Carlo (MCMC) algorithms to sample from the posterior distribution.

1. Initialize the Meridian class by passing the loaded data and the customized model specification. One advantage of Meridian lies in its capacity to calibrate the model directly through ROI priors, as described in Media Mix Model Calibration With Bayesian Priors. In this particular example, the ROI priors for all media channels are identical, with each being represented as LogNormal(0.2, 0.9).

roi_mu = 0.2  # Mu for ROI prior for each media channel.
roi_sigma = 0.9  # Sigma for ROI prior for each media channel.
prior = prior_distribution.PriorDistribution(
    roi_m=tfp.distributions.LogNormal(roi_mu, roi_sigma, name=constants.ROI_M)
)
model_spec = spec.ModelSpec(prior=prior, enable_aks=True)

mmm = model.Meridian(input_data=data, model_spec=model_spec)

2. Use the sample_prior() and sample_posterior() methods to obtain samples from the prior and posterior distributions of model parameters. If you are using the T4 GPU runtime, this step may take about 10 minutes for the provided dataset.

%%time
mmm.sample_prior(500)
mmm.sample_posterior(
    n_chains=10, n_adapt=2000, n_burnin=500, n_keep=1000, seed=0
)
/usr/local/lib/python3.12/dist-packages/meridian/model/knots.py:505: RuntimeWarning: overflow encountered in cast
  np.float32(math.comb(ncol, design_mat.shape[1]))
CPU times: user 16min 14s, sys: 21.2 s, total: 16min 36s
Wall time: 15min 17s

For more information about configuring the parameters and using a customized model specification, such as setting different ROI priors for each media channel, see Configure the model.
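
For instance, channel-specific ROI priors can be set by passing one value per media channel to the LogNormal prior. The values in the following sketch are illustrative only; see Configure the model for guidance on choosing them.

# Illustrative sketch: one (mu, sigma) pair per media channel, in channel order.
roi_mu = [0.2, 0.3, 0.4, 0.3, 0.2]
roi_sigma = [0.7, 0.9, 0.6, 0.8, 0.9]
prior = prior_distribution.PriorDistribution(
    roi_m=tfp.distributions.LogNormal(roi_mu, roi_sigma, name=constants.ROI_M)
)
model_spec = spec.ModelSpec(prior=prior)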

Step 3: Run post-modeling quality checks

These post-modeling quality checks are designed to diagnose common issues related to model convergence, specification, and plausibility. Run the following command to generate the results for all necessary diagnostics:

reviewer.ModelReviewer(mmm).run()
========================================
Model Quality Checks
========================================
Overall Status: PASS
Summary: Passed: No major quality issues were identified.

Check Results:
----------------------------------------
Convergence Check:
  Status: PASS
  Recommendation: The model has likely converged, as all parameters have R-hat values < 1.2.
----------------------------------------
Baseline Check:
  Status: PASS
  Recommendation: The posterior probability that the baseline is negative is 0.00. We recommend visually inspecting the baseline time series in the Model Fit charts to confirm this.
----------------------------------------
BayesianPPP Check:
  Status: PASS
  Recommendation: The Bayesian posterior predictive p-value is 0.99. The observed total outcome is consistent with the model's posterior predictive distribution.
----------------------------------------
GoodnessOfFit Check:
  Status: PASS
  Recommendation: R-squared = 0.7738, MAPE = 0.2557, and wMAPE = 0.1998. These goodness-of-fit metrics are intended for guidance and relative comparison.
----------------------------------------
PriorPosteriorShift Check:
  Status: PASS
  Recommendation: The model has successfully learned from the data. This is a positive sign that your data was informative.

Step 4: Run model diagnostics

To further assess convergence and model fit, you can use the methods from the visualizer module.

1. Assess convergence. Run the following code to generate R-hat statistics. R-hat values close to 1.0 indicate convergence. R-hat < 1.2 indicates approximate convergence and is a reasonable threshold for many problems. A sketch for computing a numeric R-hat summary appears at the end of this step.

model_diagnostics = visualizer.ModelDiagnostics(mmm)
model_diagnostics.plot_rhat_boxplot()

2. Assess the model's fit by comparing the expected sales against the actual sales.

model_fit = visualizer.ModelFit(mmm)
model_fit.plot_model_fit()

For more information and additional model diagnostics checks, see Modeling diagnostics.
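
In addition to the boxplot, you can compute a numeric R-hat summary directly with arviz. This is a sketch that assumes the fitted model exposes its posterior draws as an arviz InferenceData object via mmm.inference_data.

import arviz as az

# Sketch: per-parameter R-hat values computed from the posterior draws.
rhat = az.rhat(mmm.inference_data)
# Largest R-hat per parameter; values below 1.2 suggest approximate convergence.
print(rhat.max())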

Step 5: Generate model results & two-page output

To export the two-page HTML summary output, initialize the Summarizer class with the model object. Then pass in the filename, filepath, start date, and end date to output_model_results_summary to run the summary for that time duration and save it to the specified file.

mmm_summarizer = summarizer.Summarizer(mmm)
filepath = meridian_root
start_date = '2021-01-25'
end_date = '2024-01-15'
mmm_summarizer.output_model_results_summary(
    'summary_output.html', filepath, start_date, end_date
)
/usr/local/lib/python3.12/dist-packages/numpy/lib/_function_base_impl.py:4779: RuntimeWarning: invalid value encountered in subtract
  diff_b_a = subtract(b, a)
/usr/local/lib/python3.12/dist-packages/meridian/analysis/analyzer.py:3238: UserWarning: Effectiveness is not reported because it does not have a clear interpretation by time period.
  warnings.warn(

Here is a preview of the two-page output based on the simulated data:

IPython.display.HTML(filename=f'{meridian_root}/summary_output.html')

For a customized two-page report, model results summary table, and individual visualizations, see Model results report and plot media visualizations.

Step 6: Run budget optimization & generate an optimization report

You can choose which scenario to run for the budget allocation. In the default scenario, the optimizer finds the optimal allocation across channels for a fixed budget to maximize the return on investment (ROI).

Alternatively, if you would like to have a sharable interactive dashboard, check out Meridian Scenario Planner.

1. Instantiate the BudgetOptimizer class and run the optimize() method without any customization to run the library's default fixed budget scenario, which maximizes ROI.

%%time
budget_optimizer = optimizer.BudgetOptimizer(mmm)
optimization_results = budget_optimizer.optimize()
CPU times: user 28.5 s, sys: 1.4 s, total: 29.9 s
Wall time: 36.3 s

2. Export the 2-page HTML optimization report, which contains optimized spend allocations and ROI.

filepath = meridian_root
optimization_results.output_optimization_summary(
    'optimization_output.html', filepath
)
IPython.display.HTML(filename=f'{meridian_root}/optimization_output.html')

For information about customized optimization scenarios, such as flexible budget scenarios, see Budget optimization scenarios. For more information about optimization results summary and individual visualizations, see optimization results output and optimization visualizations.
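
As a sketch of one such flexible budget scenario, assuming the fixed_budget and target_roi arguments described in Budget optimization scenarios (the target value below is illustrative):

# Sketch: flexible budget scenario that searches for the spend level achieving a target ROI.
flexible_results = budget_optimizer.optimize(
    fixed_budget=False,  # allow total spend to vary
    target_roi=1.5,      # illustrative ROI target
)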

Optimization can also be performed on hypothetical data representing a future scenario. The new data takes the same structure as the input data and encodes an anticipated flighting pattern, cost per media unit, and revenue per KPI.

3. Load the simulated dataset in CSV format into a Pandas DataFrame.

df = pd.read_csv(
    "https://raw.githubusercontent.com/google/meridian/refs/heads/main/meridian/data/simulated_data/csv/hypothetical_geo_all_channels.csv"
)

4. The new data is read from a CSV file and converted into a set of multi-dimensional arrays. The arrays are used to construct a DataTensors instance, which is passed to optimize() as the new_data argument.

Constructing a DataTensors instance requires that all arrays have "time" and "geo" dimensions. Alternatively, the BudgetOptimizer.create_optimization_tensors method can be used to construct a DataTensors instance. This helper method can simplify the process, particularly when you do not need "time" and "geo" dimensions for all inputs. For example, it can be convenient if you want to assume a constant "revenue per kpi" or "cost per media unit" for all geos and time periods.

n_geos = mmm.n_geos
n_media_channels = mmm.n_media_channels
n_non_media_channels = mmm.n_non_media_channels
n_organic_media_channels = mmm.n_organic_media_channels

# The number of time periods and time range do not need to match the input data.
df[constants.TIME] = pd.to_datetime(df[constants.TIME], errors='coerce')
unique_times = sorted(df[constants.TIME].unique())
n_times = len(unique_times)

geos = mmm.input_data.geo.values
media_channels = mmm.input_data.media_channel.values
media_cols = [f"{channel}_impression" for channel in media_channels]
media_spend_cols = [f"{channel}_spend" for channel in media_channels]
non_media_treatment_cols = ['Promo']
organic_media_cols = ['Organic_channel0_impression']
organic_media_channels = ['Organic_channel0']
revenue_per_kpi_col = 'revenue_per_conversion'
times_str = [time.strftime(constants.DATE_FORMAT) for time in unique_times]

media_np = np.zeros((n_geos, n_times, n_media_channels))
media_spend_np = np.zeros((n_geos, n_times, n_media_channels))
non_media_treatment_np = np.zeros((n_geos, n_times, n_non_media_channels))
organic_media_np = np.zeros((n_geos, n_times, n_organic_media_channels))
revenue_per_kpi_np = np.zeros((n_geos, n_times))

df_grouped = df.set_index([constants.GEO, constants.TIME])
for geo_idx, geo in enumerate(geos):
  for time_idx, time in enumerate(unique_times):
    row = df_grouped.loc[(geo, time)]
    media_np[geo_idx, time_idx, :] = row[media_cols].values
    media_spend_np[geo_idx, time_idx, :] = row[media_spend_cols].values
    non_media_treatment_np[geo_idx, time_idx, :] = row[non_media_treatment_cols].values
    organic_media_np[geo_idx, time_idx, :] = row[organic_media_cols].values
    revenue_per_kpi_np[geo_idx, time_idx] = row[revenue_per_kpi_col].item()

data_tensors = analyzer.DataTensors(
    media=tf.convert_to_tensor(media_np, dtype=tf.float32),
    media_spend=tf.convert_to_tensor(media_spend_np, dtype=tf.float32),
    non_media_treatments=tf.convert_to_tensor(non_media_treatment_np, dtype=tf.float32),
    organic_media=tf.convert_to_tensor(organic_media_np, dtype=tf.float32),
    revenue_per_kpi=tf.convert_to_tensor(revenue_per_kpi_np, dtype=tf.float32),
    time=tf.convert_to_tensor(times_str, dtype=tf.string),
)
# Default values for `budget` and `pct_of_spend` are derived from the `new_data`,
# but these values can be overridden without modifying the `new_data` itself.
hypothetical_optimization_results = budget_optimizer.optimize(
    new_data=data_tensors,
    budget=50_000_000,
    pct_of_spend=[.2, .1, .2, .2, .3]
)
/usr/local/lib/python3.12/dist-packages/meridian/analysis/analyzer.py:343: UserWarning: A `organic_media` value was passed in the `new_data` argument. This is not supported and will be ignored.
  warnings.warn(
/usr/local/lib/python3.12/dist-packages/meridian/analysis/analyzer.py:343: UserWarning: A `non_media_treatments` value was passed in the `new_data` argument. This is not supported and will be ignored.
  warnings.warn(

5. Export the 2-page HTML optimization report.

filepath = meridian_root
hypothetical_optimization_results.output_optimization_summary(
    'hypothetical_optimization_output.html', filepath
)
/usr/local/lib/python3.12/dist-packages/meridian/analysis/analyzer.py:343: UserWarning: A `organic_media` value was passed in the `new_data` argument. This is not supported and will be ignored.
  warnings.warn(
/usr/local/lib/python3.12/dist-packages/meridian/analysis/analyzer.py:343: UserWarning: A `non_media_treatments` value was passed in the `new_data` argument. This is not supported and will be ignored.
  warnings.warn(
IPython.display.HTML(filename=f'{meridian_root}/hypothetical_optimization_output.html')

Step 7: Save the model object

We recommend that you save the model object for future use. This helps you avoid repeated model runs and saves time and computational resources. After the model object is saved, you can load it at a later stage to continue the analysis or visualizations without having to re-run the model.

Run the following code to save the model object:

file_path = f'{meridian_root}/saved_mmm.binpb'
meridian_serde.save_meridian(mmm, file_path)
print(f'model is saved at {file_path}')
model is saved at /content/drive/MyDrive//saved_mmm.binpb

Run the following code to load the saved model:

mmm = meridian_serde.load_meridian(file_path)
/usr/local/lib/python3.12/dist-packages/meridian/model/knots.py:505: RuntimeWarning: overflow encountered in cast
  np.float32(math.comb(ncol, design_mat.shape[1]))

Step 8: Interactive Scenario Planning

Meridian Scenario Planner is a tool that allows advertisers to create a shareable, interactive dashboard from Colab; you can reuse the model saved in this Colab for dashboard generation.