Runs one step of the No U-Turn Sampler.
The No U-Turn Sampler (NUTS) is an adaptive variant of the Hamiltonian Monte
Carlo (HMC) method for MCMC. NUTS adapts the distance traveled in response to
the curvature of the target density. Conceptually, one proposal consists of
reversibly evolving a trajectory through the sample space, continuing until
that trajectory turns back on itself (hence the name, 'No U-Turn'). This class
implements one random NUTS step from a given `current_state`.
Mathematical details and derivations can be found in
[Hoffman, Gelman (2011)] and [Betancourt (2018)].
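The "no U-turn" termination idea can be sketched in plain Python: the trajectory stops growing once the momentum at either end has a negative projection onto the end-to-end displacement. This is a stdlib-only sketch with a hypothetical helper name, not part of this API:

```python
def has_u_turn(theta_minus, theta_plus, r_minus, r_plus):
    """Check whether a trajectory with endpoints theta_minus/theta_plus
    and endpoint momenta r_minus/r_plus has doubled back on itself."""
    # Displacement from the backward end to the forward end.
    delta = [p - m for p, m in zip(theta_plus, theta_minus)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # U-turn: momentum at either end points back toward the other end.
    return dot(delta, r_minus) < 0 or dot(delta, r_plus) < 0

# Momenta aligned with the displacement: keep extending the trajectory.
print(has_u_turn([0.0], [2.0], [1.0], [1.0]))   # False
# Forward-end momentum points back toward the start: stop.
print(has_u_turn([0.0], [2.0], [1.0], [-1.0]))  # True
```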
The `one_step` function can update multiple chains in parallel. It assumes
that a prefix of leftmost dimensions of `current_state` index independent
chain states (and are therefore updated independently). The output of
`target_log_prob_fn(*current_state)` should sum log-probabilities across all
event dimensions. Slices along the rightmost dimensions may have different
target distributions; for example, `current_state[0, ...]` could have a
different target distribution from `current_state[1, ...]`. These
semantics are governed by `target_log_prob_fn(*current_state)`. (The number of
independent chains is `tf.size(target_log_prob_fn(*current_state))`.)
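The batch semantics above can be illustrated with a stdlib-only sketch, using nested Python lists in place of `Tensor`s (illustrative only; the real `target_log_prob_fn` operates on `Tensor`s):

```python
# current_state has conceptual shape [num_chains, event_size]: the
# leftmost dimension indexes independent chains, and the target
# log-prob sums over the event dimension, yielding one value per chain.
def target_log_prob_fn(state):
    # Standard-normal log-density up to an additive constant.
    return [-0.5 * sum(x * x for x in chain) for chain in state]

current_state = [[0.0, 0.0], [1.0, 2.0], [3.0, 0.0]]  # 3 chains, 2-d state
log_probs = target_log_prob_fn(current_state)
print(len(log_probs))  # 3 -- one log-probability per independent chain
```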
[Hoffman, Gelman (2011)]: Matthew D. Hoffman, Andrew Gelman. The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. 2011. https://arxiv.org/pdf/1111.4246.pdf
[Betancourt (2018)]: Michael Betancourt. A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434, 2018. https://arxiv.org/abs/1701.02434
__init__( target_log_prob_fn, step_size, max_tree_depth=10, max_energy_diff=1000.0, unrolled_leapfrog_steps=1, parallel_iterations=10, seed=None, name=None )
Initializes this transition kernel.
target_log_prob_fn: Python callable which takes an argument like
`current_state` (or `*current_state` if it's a list) and returns its (possibly unnormalized) log-density under the target distribution.
step_size: `Tensor` or Python `list` of `Tensor`s representing the step size for the leapfrog integrator. Must broadcast with the shape of
`current_state`. Larger step sizes lead to faster progress, but too-large step sizes make rejection exponentially more likely. When possible, it's often helpful to match per-variable step sizes to the standard deviations of the target distribution in each variable.
max_tree_depth: Maximum depth of the tree implicitly built by NUTS. The maximum number of leapfrog steps is bounded by
`2**max_tree_depth`, i.e., the number of nodes in a binary tree
`max_tree_depth` nodes deep. The default setting of 10 takes up to 1024 leapfrog steps.
max_energy_diff: Scalar threshold of energy differences at each leapfrog; divergent samples are defined as leapfrog steps that exceed this threshold. Defaults to 1000.
unrolled_leapfrog_steps: The number of leapfrogs to unroll per tree expansion step. Applies a direct linear multiplier to the maximum trajectory length implied by `max_tree_depth`. Defaults to 1.
parallel_iterations: The number of iterations allowed to run in parallel. It must be a positive integer. See
`tf.while_loop` for more details. Note that if you set the seed to have deterministic output, you should also set `parallel_iterations` to 1.
seed: Python integer to seed the random number generator.
name: Python `str` name prefixed to Ops created by this function. Default value: `None`.
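The interaction between `max_tree_depth` and `unrolled_leapfrog_steps` is simple arithmetic; a small sketch with a hypothetical helper (not part of this API):

```python
# Upper bound on leapfrog steps per NUTS proposal: the implicit binary
# tree has at most 2**max_tree_depth nodes, and each tree expansion
# step is multiplied by unrolled_leapfrog_steps.
def max_leapfrog_steps(max_tree_depth, unrolled_leapfrog_steps=1):
    return (2 ** max_tree_depth) * unrolled_leapfrog_steps

print(max_leapfrog_steps(10))     # 1024, matching the default setting
print(max_leapfrog_steps(10, 2))  # 2048 with two unrolled leapfrogs
```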
is_calibrated: Returns `True` if the Markov chain converges to the specified distribution.
`TransitionKernel`s which are "uncalibrated" are often calibrated by
composing them with the `tfp.mcmc.MetropolisHastings` `TransitionKernel`.
bootstrap_results(init_state): Creates initial `previous_kernel_results` using a supplied `current_state`.
loop_tree_doubling( step_size, momentum_state_memory, current_step_meta_info, iter_, initial_step_state, initial_step_metastate )
Main loop for tree doubling.
one_step( current_state, previous_kernel_results )
Takes one step of the TransitionKernel.
Must be overridden by subclasses.
Args:
current_state: `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s).
previous_kernel_results: A (possibly nested) `tuple`, `namedtuple` or `list` of
`Tensor`s representing internal calculations made within the previous call to this function (or as returned by `bootstrap_results`).
Returns:
next_state: `Tensor` or Python `list` of `Tensor`s representing the next state(s) of the Markov chain(s).
kernel_results: A (possibly nested) `tuple`, `namedtuple` or `list` of
`Tensor`s representing internal calculations made within this function.
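The calling convention above can be sketched with a hypothetical toy kernel. This is not the NUTS implementation; the deterministic "decay toward zero" update is purely illustrative of how `bootstrap_results` seeds the loop and `one_step` threads `(state, kernel_results)` pairs:

```python
class ToyKernel:
    """Minimal stand-in following the TransitionKernel calling convention."""

    def bootstrap_results(self, init_state):
        # Initial previous_kernel_results for the supplied state.
        return {"num_steps": 0}

    def one_step(self, current_state, previous_kernel_results):
        # Trivial deterministic "transition": halve every component.
        next_state = [0.5 * x for x in current_state]
        kernel_results = {"num_steps": previous_kernel_results["num_steps"] + 1}
        return next_state, kernel_results

kernel = ToyKernel()
state = [4.0, -2.0]
results = kernel.bootstrap_results(state)
for _ in range(3):
    state, results = kernel.one_step(state, results)
print(state, results["num_steps"])  # [0.5, -0.25] 3
```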