meridian.model.posterior_sampler.PosteriorMCMCSampler
A callable that samples from posterior distributions using MCMC.
meridian.model.posterior_sampler.PosteriorMCMCSampler(
meridian: 'model.Meridian'
)
Methods
__call__
__call__(
n_chains: (Sequence[int] | int),
n_adapt: int,
n_burnin: int,
n_keep: int,
current_state: (Mapping[str, backend.Tensor] | None) = None,
init_step_size: (int | None) = None,
dual_averaging_kwargs: (Mapping[str, int] | None) = None,
max_tree_depth: int = 10,
max_energy_diff: float = 500.0,
unrolled_leapfrog_steps: int = 1,
parallel_iterations: int = 10,
seed: (Sequence[int] | int | None) = None,
**pins
) -> None
Runs Markov Chain Monte Carlo (MCMC) sampling of posterior distributions.
For more information about the arguments, see tfp.experimental.mcmc.windowed_adaptive_nuts.
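The call pattern can be sketched as follows. This is an illustrative stand-in, not the real class: PosteriorMCMCSampler requires a fitted meridian.model.Meridian instance, so the stand-in below (FakePosteriorSampler, a hypothetical name) only mimics the n_chains handling documented under Args.

```python
# Illustrative stand-in for PosteriorMCMCSampler's call pattern. The real
# sampler wraps a meridian.model.Meridian instance and runs
# windowed_adaptive_nuts once per element of n_chains; here we only count
# the calls and chains that such a splitting implies.
from typing import Sequence, Union


class FakePosteriorSampler:
    def __call__(
        self,
        n_chains: Union[Sequence[int], int],
        n_adapt: int,
        n_burnin: int,
        n_keep: int,
    ) -> dict:
        # A plain int means a single windowed_adaptive_nuts call; a sequence
        # means one call per element, each with that many chains.
        per_call = [n_chains] if isinstance(n_chains, int) else list(n_chains)
        return {
            "sampling_calls": len(per_call),
            "total_chains": sum(per_call),
            "kept_draws": sum(per_call) * n_keep,
        }


sampler = FakePosteriorSampler()
# Splitting 8 chains into two calls of 4 can reduce peak memory usage.
summary = sampler(n_chains=[4, 4], n_adapt=500, n_burnin=500, n_keep=1000)
print(summary)  # {'sampling_calls': 2, 'total_chains': 8, 'kept_draws': 8000}
```

With a real model, the same keyword arguments would be passed to the sampler returned by PosteriorMCMCSampler(meridian_model).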
Args:

n_chains: Number of MCMC chains. Given a sequence of integers,
  windowed_adaptive_nuts is called once per element, with the n_chains
  argument of each call equal to the respective element. Using a list of
  integers, you can split the chains of a single windowed_adaptive_nuts
  call into multiple calls with fewer chains per call, which can reduce
  memory usage. This might require more adaptation steps for convergence,
  because the step-size optimization occurs across fewer chains per
  sampling call.
n_adapt: Number of adaptation draws per chain.
n_burnin: Number of burn-in draws per chain. Burn-in draws occur after
  adaptation draws and before the kept draws.
n_keep: Integer number of draws per chain to keep for inference.
current_state: Optional structure of tensors at which to initialize
  sampling. Use the same shape and structure as
  model.experimental_pin(**pins).sample(n_chains).
init_step_size: Optional integer determining where to initialize the step
  size for the leapfrog integrator. The structure must broadcast with
  current_state. For example, if the initial state is
  {'a': tf.zeros(n_chains), 'b': tf.zeros([n_chains, n_features])},
  then any of 1., {'a': 1., 'b': 1.}, or
  {'a': tf.ones(n_chains), 'b': tf.ones([n_chains, n_features])}
  will work. Defaults to the dimension of the log density to the ¼ power.
dual_averaging_kwargs: Optional dict of keyword arguments to pass to
  tfp.mcmc.DualAveragingStepSizeAdaptation. By default, a
  target_accept_prob of 0.85 is set, acceptance probabilities across
  chains are reduced using a harmonic mean, and the class defaults are
  used otherwise.
max_tree_depth: Maximum depth of the tree implicitly built by NUTS. The
  maximum number of leapfrog steps is bounded by 2**max_tree_depth, i.e.,
  the number of nodes in a binary tree max_tree_depth nodes deep. The
  default setting of 10 takes up to 1024 leapfrog steps.
max_energy_diff: Scalar threshold of energy differences at each leapfrog;
  divergent samples are defined as leapfrog steps that exceed this
  threshold. Defaults to 500.0.
unrolled_leapfrog_steps: Number of leapfrog steps to unroll per tree
  expansion step. Applies a direct linear multiplier to the maximum
  trajectory length implied by max_tree_depth. Defaults to 1.
parallel_iterations: Number of iterations allowed to run in parallel.
  Must be a positive integer. For more information, see tf.while_loop.
seed: An int32[2] Tensor, or a Python list or tuple of 2 ints, which is
  treated as a stateless seed; or a Python int or None, which is
  converted into a stateless seed. See tfp.random.sanitize_seed.
**pins: Used to condition the provided joint distribution; passed
  directly to joint_dist.experimental_pin(**pins).
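The interaction between max_tree_depth and unrolled_leapfrog_steps described above amounts to a simple leapfrog budget. The helper below (max_leapfrog_steps, a hypothetical name) is a small sketch of the documented bound, not Meridian code:

```python
# Upper bound on leapfrog steps implied by the documented NUTS parameters:
# 2**max_tree_depth, scaled linearly by unrolled_leapfrog_steps.
def max_leapfrog_steps(max_tree_depth: int = 10,
                       unrolled_leapfrog_steps: int = 1) -> int:
    return (2 ** max_tree_depth) * unrolled_leapfrog_steps


print(max_leapfrog_steps())        # 1024 with the documented defaults
print(max_leapfrog_steps(12, 2))   # 8192: deeper trees and unrolling raise the bound
```

Lowering max_tree_depth shrinks this budget exponentially, which can speed up sampling at the cost of shorter trajectories.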
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-09 UTC.