Class UncalibratedRandomWalk
Generate proposal for the Random Walk Metropolis algorithm.
Inherits From: TransitionKernel
For more details on UncalibratedRandomWalk, see RandomWalkMetropolis.
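Because this kernel only generates proposals and applies no accept/reject step on its own, it is typically wrapped in tfp.mcmc.MetropolisHastings. A minimal sketch, closely following the RandomWalkMetropolis documentation example and assuming the usual import aliases:

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

dtype = np.float32
target = tfd.Normal(loc=dtype(0), scale=dtype(1))

# Wrap the uncalibrated proposal kernel in MetropolisHastings so that the
# chain actually converges to `target`.
kernel = tfp.mcmc.MetropolisHastings(
    tfp.mcmc.UncalibratedRandomWalk(
        target_log_prob_fn=target.log_prob,
        seed=42))

samples, kernel_results = tfp.mcmc.sample_chain(
    num_results=1000,
    current_state=dtype(1),
    kernel=kernel,
    num_burnin_steps=500,
    parallel_iterations=1)  # keep results deterministic given the seed
```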
__init__
__init__(
target_log_prob_fn,
new_state_fn=None,
seed=None,
name=None
)
Initializes this transition kernel.
Args:

target_log_prob_fn: Python callable which takes an argument like current_state (or *current_state if it's a list) and returns its (possibly unnormalized) log-density under the target distribution.
new_state_fn: Python callable which takes a list of state parts and a seed; returns a same-type list of Tensors, each being a perturbation of the input state parts. The perturbation distribution is assumed to be a symmetric distribution centered at the input state part. Default value: None which is mapped to tfp.mcmc.random_walk_normal_fn().
seed: Python integer to seed the random number generator.
name: Python str name prefixed to Ops created by this function. Default value: None (i.e., 'rwm_kernel').
Returns:

next_state: Tensor or Python list of Tensors representing the state(s) of the Markov chain(s) at each result step. Has same shape as current_state.
kernel_results: collections.namedtuple of internal calculations used to advance the chain.
Raises:

ValueError: if there isn't one scale or a list with same length as current_state.
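As an illustration of the new_state_fn argument, the default proposal scale can be changed by passing in tfp.mcmc.random_walk_normal_fn with a custom scale. A minimal sketch; the target and scale value below are illustrative, not part of this page:

```python
import tensorflow_probability as tfp

# Hypothetical target: any callable returning a (possibly unnormalized)
# log-density works here.
def target_log_prob_fn(x):
  return -0.5 * x**2

kernel = tfp.mcmc.UncalibratedRandomWalk(
    target_log_prob_fn=target_log_prob_fn,
    # Symmetric Gaussian perturbations with a smaller step size than the
    # default scale of 1.
    new_state_fn=tfp.mcmc.random_walk_normal_fn(scale=0.5),
    seed=17)
```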
Properties
is_calibrated

Returns True if the Markov chain converges to the specified distribution. TransitionKernels which are "uncalibrated" are often calibrated by composing them with the tfp.mcmc.MetropolisHastings TransitionKernel.
name
new_state_fn
parameters
Return dict of __init__ arguments and their values.
seed
target_log_prob_fn
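As a quick sketch (continuing the kernel constructed above), these properties simply echo back the constructor arguments, and is_calibrated is False for this kernel since its proposals alone do not target the stationary distribution:

```python
kernel.is_calibrated    # ==> False; needs e.g. MetropolisHastings on top.
kernel.parameters       # ==> dict of the __init__ arguments and values.
kernel.new_state_fn     # ==> the random_walk_normal_fn callable passed in.
```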
Methods
bootstrap_results
bootstrap_results(init_state)
Creates initial previous_kernel_results using a supplied state.
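A minimal sketch of bootstrapping kernel results from an initial state, reusing the kernel built above (the exact fields of the returned namedtuple are version dependent; the cached target log-probability is one of them):

```python
import tensorflow as tf

init_state = tf.constant(1.0)
pkr = kernel.bootstrap_results(init_state)
# `pkr` is a namedtuple of internal quantities (e.g. the target log-prob
# evaluated at `init_state`) ready to be passed to one_step.
```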
one_step
one_step(
current_state,
previous_kernel_results
)
Runs one iteration of Random Walk Metropolis with normal proposal.
Args:

current_state: Tensor or Python list of Tensors representing the current state(s) of the Markov chain(s). The first r dimensions index independent chains, r = tf.rank(target_log_prob_fn(*current_state)).
previous_kernel_results: collections.namedtuple containing Tensors representing values from previous calls to this function (or from the bootstrap_results function).
Returns:

next_state: Tensor or Python list of Tensors representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as current_state.
kernel_results: collections.namedtuple of internal calculations used to advance the chain.
Raises:

ValueError: if there isn't one scale or a list with same length as current_state.
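Putting bootstrap_results and one_step together, a manual stepping loop might look like the following sketch (assumes eager execution; the target here is illustrative). Note that, because this kernel is uncalibrated, each returned state is simply a symmetric random-walk proposal with no accept/reject correction:

```python
import tensorflow as tf
import tensorflow_probability as tfp

kernel = tfp.mcmc.UncalibratedRandomWalk(
    target_log_prob_fn=lambda x: -0.5 * x**2,
    seed=123)

state = tf.constant(0.0)
pkr = kernel.bootstrap_results(state)

proposals = []
for _ in range(5):
  # Each call returns a perturbed state plus updated kernel results.
  state, pkr = kernel.one_step(state, pkr)
  proposals.append(state)
```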