

Generate proposal for the Random Walk Metropolis algorithm.
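To illustrate what a Random Walk Metropolis step computes, here is a minimal pure-Python sketch (the function and variable names are illustrative, not TFP's implementation): a symmetric normal proposal followed by the Metropolis acceptance test.

```python
import math
import random

def rwm_step(state, log_prob, scale=1.0, rng=random):
    # Symmetric normal proposal centered at the current state.
    proposal = state + rng.gauss(0.0, scale)
    # Because the proposal is symmetric, the Metropolis-Hastings
    # correction reduces to the ratio of target densities.
    log_accept_ratio = log_prob(proposal) - log_prob(state)
    if math.log(rng.random()) < log_accept_ratio:
        return proposal, True   # accepted
    return state, False         # rejected: stay at the current state

# Usage: sample from a standard normal target.
rng = random.Random(42)
log_prob = lambda x: -0.5 * x * x
state = 0.0
samples = []
for _ in range(1000):
    state, _ = rwm_step(state, log_prob, scale=1.0, rng=rng)
    samples.append(state)
```

`UncalibratedRandomWalk` implements only the proposal part of this loop; the acceptance test is supplied by composing it with `tfp.mcmc.MetropolisHastings`.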

Inherits From: TransitionKernel

For more details on UncalibratedRandomWalk, see RandomWalkMetropolis.

Args:
target_log_prob_fn: Python callable which takes an argument like `current_state` (or `*current_state` if it's a list) and returns its (possibly unnormalized) log-density under the target distribution.
new_state_fn: Python callable which takes a list of state parts and a seed, and returns a same-type list of `Tensor`s, each a perturbation of the corresponding input state part. The perturbation distribution is assumed to be symmetric and centered at the input state part. Default value: `None`, which is mapped to `tfp.mcmc.random_walk_normal_fn()`.
name: Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., 'rwm_kernel').

Raises:
ValueError: if there isn't exactly one `scale` or a list with the same length as `current_state`.

is_calibrated: Returns `True` if the Markov chain converges to the specified distribution.

TransitionKernels which are "uncalibrated" are often calibrated by composing them with the tfp.mcmc.MetropolisHastings TransitionKernel.



parameters: Returns a dict of `__init__` arguments and their values.



bootstrap_results

Creates initial previous_kernel_results using a supplied state.


copy

Non-destructively creates a deep copy of the kernel.

**override_parameter_kwargs: Python string/value dictionary of initialization arguments to override with new values.

new_kernel: `TransitionKernel` object of the same type as `self`, initialized with the union of `self.parameters` and `override_parameter_kwargs`, with any shared keys overridden by the value in `override_parameter_kwargs`, i.e., `dict(self.parameters, **override_parameter_kwargs)`.
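The override semantics are plain dict merging; a minimal illustration with hypothetical parameter values:

```python
# copy() merges the stored parameters with the overrides; shared keys
# take the override's value, exactly dict(self.parameters, **overrides).
parameters = {'name': 'rwm_kernel', 'new_state_fn': None}
overrides = {'name': 'my_rwm'}
merged = dict(parameters, **overrides)
```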


one_step

Runs one iteration of Random Walk Metropolis with normal proposal.

Args:
current_state: `Tensor` or Python list of `Tensor`s representing the current state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(target_log_prob_fn(*current_state))`.
previous_kernel_results: `collections.namedtuple` containing `Tensor`s representing values from previous calls to this function (or from the `bootstrap_results` function).
seed: Optional; a seed for reproducible sampling.

Returns:
next_state: `Tensor` or Python list of `Tensor`s representing the state(s) of the Markov chain(s) after taking exactly one step. Has the same type and shape as `current_state`.
kernel_results: `collections.namedtuple` of internal calculations used to advance the chain.

Raises:
ValueError: if there isn't exactly one `scale` or a list with the same length as `current_state`.