tfp.mcmc.UncalibratedRandomWalk

Class UncalibratedRandomWalk

Generate proposal for the Random Walk Metropolis algorithm.

Inherits From: TransitionKernel

Defined in python/mcmc/random_walk_metropolis.py.

For more details on UncalibratedRandomWalk, see RandomWalkMetropolis.

__init__

__init__(
    target_log_prob_fn,
    new_state_fn=None,
    seed=None,
    name=None
)

Initializes this transition kernel.

Args:

  • target_log_prob_fn: Python callable which takes an argument like current_state (or *current_state if it's a list) and returns its (possibly unnormalized) log-density under the target distribution.
  • new_state_fn: Python callable which takes a list of state parts and a seed; returns a same-type list of Tensors, each being a perturbation of the input state parts. The perturbation distribution is assumed to be a symmetric distribution centered at the input state part. Default value: None, which is mapped to tfp.mcmc.random_walk_normal_fn().
  • seed: Python integer to seed the random number generator.
  • name: Python str name prefixed to Ops created by this function. Default value: None (i.e., 'rwm_kernel').

Returns:

  • next_state: Tensor or Python list of Tensors representing the state(s) of the Markov chain(s) at each result step. Has same shape as current_state.
  • kernel_results: collections.namedtuple of internal calculations used to advance the chain.

Raises:

  • ValueError: if there isn't one scale or a list of scales with the same length as current_state.
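
The following is a minimal construction sketch, not taken from the original documentation; the standard-normal target, the proposal scale of 0.5, and the seed are arbitrary illustrative choices.

    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions

    # Arbitrary example target: a standard normal density.
    target = tfd.Normal(loc=0., scale=1.)

    # Symmetric normal perturbations with scale 0.5 (illustrative value).
    kernel = tfp.mcmc.UncalibratedRandomWalk(
        target_log_prob_fn=target.log_prob,
        new_state_fn=tfp.mcmc.random_walk_normal_fn(scale=0.5),
        seed=42)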

Properties

is_calibrated

Returns True if the Markov chain converges to the specified distribution.

TransitionKernels which are "uncalibrated" are often calibrated by composing them with the tfp.mcmc.MetropolisHastings TransitionKernel.
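As a sketch (reusing the target defined in the construction example above), composing this kernel with MetropolisHastings yields a calibrated kernel that can be passed to tfp.mcmc.sample_chain; the chain lengths and starting state below are arbitrary.

    # Wrap the uncalibrated proposal kernel so that each proposal is
    # accepted or rejected with the Metropolis-Hastings correction.
    calibrated_kernel = tfp.mcmc.MetropolisHastings(
        inner_kernel=tfp.mcmc.UncalibratedRandomWalk(
            target_log_prob_fn=target.log_prob,
            seed=42))

    samples, kernel_results = tfp.mcmc.sample_chain(
        num_results=500,
        num_burnin_steps=100,
        current_state=tf.zeros([]),
        kernel=calibrated_kernel)

This composition behaves like tfp.mcmc.RandomWalkMetropolis used directly.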

name

new_state_fn

parameters

Return dict of __init__ arguments and their values.

seed

target_log_prob_fn

Methods

bootstrap_results

bootstrap_results(init_state)

Creates initial previous_kernel_results using a supplied state.
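
For example (a sketch reusing the kernel constructed above; the initial state is arbitrary):

    # Build the initial kernel results from a chosen starting state.
    init_state = tf.zeros([])
    previous_kernel_results = kernel.bootstrap_results(init_state)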

one_step

one_step(
    current_state,
    previous_kernel_results
)

Runs one iteration of Random Walk Metropolis with normal proposal.

Args:

  • current_state: Tensor or Python list of Tensors representing the current state(s) of the Markov chain(s). The first r dimensions index independent chains, where r = tf.rank(target_log_prob_fn(*current_state)).
  • previous_kernel_results: collections.namedtuple containing Tensors representing values from previous calls to this function (or from the bootstrap_results function).

Returns:

  • next_state: Tensor or Python list of Tensors representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as current_state.
  • kernel_results: collections.namedtuple of internal calculations used to advance the chain.

Raises:

  • ValueError: if there isn't one scale or a list of scales with the same length as current_state.
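
The sketch below (reusing the kernel from the examples above) steps the chain by hand. Because this kernel is uncalibrated, each call returns the raw proposal; acceptance or rejection is left to a wrapping tfp.mcmc.MetropolisHastings kernel, and in practice tfp.mcmc.sample_chain handles this bookkeeping.

    # Hand-rolled loop: bootstrap results once, then repeatedly apply one_step.
    state = tf.zeros([])
    previous_kernel_results = kernel.bootstrap_results(state)
    states = []
    for _ in range(10):
      state, previous_kernel_results = kernel.one_step(
          state, previous_kernel_results)
      states.append(state)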