This page provides a technical walkthrough for running WeatherNext 2. The primary interface for this workflow is a Colab Enterprise notebook, which allows you to configure, submit, and manage on-demand forecast jobs.
The WeatherNext 2 workflow
Generating a forecast involves providing the model with an initial state and configuration parameters. The model then runs the inference and saves the output to a Google Cloud Storage bucket you specify.
Step 1. Request access to WeatherNext 2
WeatherNext 2 is available through an Early Access Program (EAP), which means access is granted to a limited number of customers.
See the Access Vertex page for details on how to express interest.
Step 2. Start with the Colab notebook
First, open the main "Getting Started" notebook. It contains all the code needed to authenticate, configure, and run your first forecast without any local setup.
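The notebook's setup cells initialize the Vertex AI SDK against your project and output bucket. A minimal sketch of that kind of setup cell is shown below; the project ID, region, and bucket name are placeholders you replace with your own values, and the exact cells in the notebook may differ.

from google.cloud import aiplatform

# Placeholder values: substitute your own project, region, and output bucket.
aiplatform.init(
    project="your-project-id",
    location="us-central1",
    staging_bucket="gs://your-forecast-output-bucket",
)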
Step 3. Configure the forecast job
Within the notebook, you can define the parameters for your forecast. This allows you to customize the job to meet your specific needs.
| Parameter | Description | Example Value / Flag |
|---|---|---|
| forecast_init_time | The starting time for the forecast in ISO 8601 format (e.g., 2025-09-21T00:00:00Z). Models are available for dates from 2024 onwards. | "2025-10-09" |
| num_samples | The number of ensemble members to generate for a probabilistic forecast. The default is 64. It must be a multiple of the number of GPUs in your selected machine type. | 32 |
| horizon_hrs | The desired length of the forecast in hours (e.g., 240 for a 10-day forecast). | 10 |
| enable_hourly_prediction | A flag to enable the generation of 1-hour temporal resolution data. | --enable-hourly-prediction |
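As an illustration, these parameters are ultimately passed to the inference container as arguments. The sketch below shows roughly how such an argument list could look; apart from --enable-hourly-prediction, the exact flag spellings are assumptions and may differ from the names the notebook actually uses.

# Hypothetical argument list built from the table above; only
# --enable-hourly-prediction is a flag confirmed by this page.
forecast_args = [
    "--forecast_init_time=2025-09-21T00:00:00Z",  # ISO 8601 start time
    "--num_samples=32",             # must be a multiple of the GPU count
    "--horizon_hrs=240",            # 240 hours = a 10-day forecast
    "--enable-hourly-prediction",   # optional: adds 1-hourly output
]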
Step 4. Submit the inference job
Executing the relevant cell in the notebook does not run the model directly. Instead, it submits a Vertex Job Request.
This action provisions a dedicated Vertex AI Worker (a GPU instance) in a secure, single-tenant Google project. This worker runs the inference container with the configuration you specified. This single-tenant architecture ensures your job has dedicated resources.
# The notebook formats and submits the Vertex Job Request for you
job = aiplatform.CustomJob(
    display_name="weathernext-forecast-job",
    worker_pool_specs=worker_pool_specs,  # machine type, GPU count, container image, and args
    # ... additional configuration
)
job.run()
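By default, job.run() blocks the notebook cell until the job finishes. If you prefer to submit the job and keep working, the sketch below uses the SDK's standard asynchronous options; these attributes are regular Vertex AI SDK members rather than anything WeatherNext-specific, and the notebook's own cells may handle this differently.

# Submit without blocking the notebook cell, then check on the job.
job.run(sync=False)
job.wait_for_resource_creation()   # wait until the job resource exists
print(job.resource_name)           # fully qualified Vertex AI job name
print(job.state)                   # e.g. JobState.JOB_STATE_RUNNING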
Step 5. Retrieve and utilize forecast outputs
Once the Vertex Job is complete, the forecast outputs are automatically saved to the Google Cloud Storage bucket you specified during configuration.
Output format
The model generates output as one or two .zarr files:
- A file containing the standard 6-hourly forecast data.
- An additional file containing 1-hourly data (if the --enable-hourly-prediction flag was used).
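Zarr stores can be opened directly from Cloud Storage with xarray (with the gcsfs package installed). The snippet below is a minimal sketch; the bucket path is a placeholder for the path you configured, and the actual variable and dimension names are documented on the model specifications and data schema page.

import xarray as xr

# Placeholder path: use the bucket and prefix you configured for the job.
ds = xr.open_zarr("gs://your-forecast-output-bucket/forecast.zarr")

print(ds)  # inspect dimensions, coordinates, and available variables
# Example: select one ensemble member and lead time for a single variable.
# The variable and dimension names here are illustrative only.
# sample = ds["2m_temperature"].isel(sample=0, time=4)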
Key output variables
The WeatherNext 2.0 (FGN) model provides a wide range of forecast variables. See the model specifications and data schema page for more information.