
tfp.mcmc.MetropolisHastings

Class MetropolisHastings

Runs one step of the Metropolis-Hastings algorithm.

Inherits From: TransitionKernel

The Metropolis-Hastings algorithm is a Markov chain Monte Carlo (MCMC) technique which uses a proposal distribution to eventually sample from a target distribution.

Note: inner_kernel.one_step must return kernel_results as a collections.namedtuple which must:

• have a target_log_prob field,
• optionally have a log_acceptance_correction field, and
• have only fields which are Tensor-valued.

The Metropolis-Hastings log acceptance-probability is computed as:

log_accept_ratio = (current_kernel_results.target_log_prob
                    - previous_kernel_results.target_log_prob
                    + current_kernel_results.log_acceptance_correction)

If current_kernel_results.log_acceptance_correction does not exist, it is presumed to be 0 (i.e., the proposal distribution is presumed symmetric).
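The accept/reject decision implied by log_accept_ratio can be sketched in plain Python (a hedged illustration of the rule above, not TFP's implementation; mh_accept is a hypothetical helper name):

```python
import math
import random

def mh_accept(current_tlp, proposed_tlp, log_correction=0.0, u=None):
    """Decide acceptance from the current and proposed target log-probs.

    log_correction defaults to 0, matching the symmetric-proposal case
    described above. u is the uniform draw (random if not supplied).
    """
    if u is None:
        u = random.random()
    log_accept_ratio = proposed_tlp - current_tlp + log_correction
    # Accept with probability min(1, exp(log_accept_ratio)).
    return math.log(u) < log_accept_ratio

# A proposal that improves the target log-prob is always accepted
# (log_accept_ratio >= 0, while log(u) < 0)...
print(mh_accept(current_tlp=-10.0, proposed_tlp=-1.0))         # True
# ...while a much worse one is rejected for this particular draw.
print(mh_accept(current_tlp=-1.0, proposed_tlp=-10.0, u=0.5))  # False
```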

The most common use-case for log_acceptance_correction is in the Metropolis-Hastings algorithm, i.e.,

accept_prob(x' | x) = p(x') / p(x) (g(x|x') / g(x'|x))

where:
p  represents the target distribution,
g  represents the proposal (conditional) distribution,
x' is the proposed state, and
x  is the current state.

The log of the parenthetical term is the log_acceptance_correction.

The log_acceptance_correction need not correspond to a ratio of proposal distributions, e.g., log_acceptance_correction has a different interpretation in Hamiltonian Monte Carlo.
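For the proposal-ratio case, the correction is log g(x | x') - log g(x' | x). A hedged sketch for a Gaussian random-walk proposal whose width depends on the current state (scale_fn is a hypothetical state-dependent scale, used only to make the proposal asymmetric):

```python
import math

def normal_logpdf(x, loc, scale):
    # Log-density of a univariate Normal(loc, scale).
    return -0.5 * ((x - loc) / scale) ** 2 - math.log(scale * math.sqrt(2 * math.pi))

def log_acceptance_correction(x, x_new, scale_fn):
    # log g(x' | x): density of proposing x_new starting from x.
    forward = normal_logpdf(x_new, loc=x, scale=scale_fn(x))
    # log g(x | x'): density of the reverse move.
    backward = normal_logpdf(x, loc=x_new, scale=scale_fn(x_new))
    return backward - forward

# Constant scale => symmetric proposal => zero correction.
print(log_acceptance_correction(0.3, 1.7, lambda x: 1.0))  # 0.0
# State-dependent scale => asymmetric proposal => nonzero correction.
print(log_acceptance_correction(0.0, 2.0, lambda x: 1.0 + abs(x)))
```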

Examples

import tensorflow_probability as tfp
hmc = tfp.mcmc.MetropolisHastings(
    tfp.mcmc.UncalibratedHamiltonianMonteCarlo(
        target_log_prob_fn=lambda x: -x - x**2,
        step_size=0.1,
        num_leapfrog_steps=3))
# ==> functionally equivalent to:
# hmc = tfp.mcmc.HamiltonianMonteCarlo(
#     target_log_prob_fn=lambda x: -x - x**2,
#     step_size=0.1,
#     num_leapfrog_steps=3)
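To see the algorithm end to end outside of TFP, here is a hedged pure-Python sketch of random-walk Metropolis-Hastings targeting the same unnormalized log-density used above (-x - x**2 is a Normal with mean -0.5 and variance 0.5); sample_chain here is a hypothetical stand-alone function, not tfp.mcmc.sample_chain:

```python
import math
import random

def target_log_prob(x):
    # Same unnormalized target as the example above: Normal(-0.5, sqrt(0.5)).
    return -x - x**2

def sample_chain(num_results, init_state, step_size=1.0, seed=42):
    rng = random.Random(seed)
    state = init_state
    tlp = target_log_prob(state)
    samples = []
    for _ in range(num_results):
        proposed = state + rng.gauss(0.0, step_size)  # symmetric random walk
        proposed_tlp = target_log_prob(proposed)
        # Symmetric proposal, so log_acceptance_correction is 0.
        if math.log(rng.random()) < proposed_tlp - tlp:
            state, tlp = proposed, proposed_tlp
        samples.append(state)
    return samples

samples = sample_chain(20000, init_state=0.0)
print(sum(samples) / len(samples))  # should be close to the true mean, -0.5
```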

__init__

__init__(
    inner_kernel,
    seed=None,
    name=None
)

Instantiates this object.

Args:

• inner_kernel: TransitionKernel-like object which has collections.namedtuple kernel_results and which contains a target_log_prob member and optionally a log_acceptance_correction member.
• seed: Python integer to seed the random number generator.
• name: Python str name prefixed to Ops created by this function. Default value: None (i.e., "mh_kernel").

Returns:

• metropolis_hastings_kernel: Instance of TransitionKernel which wraps the input transition kernel with the Metropolis-Hastings algorithm.

Properties

is_calibrated

Returns True if the Markov chain converges to the specified distribution.

TransitionKernels which are "uncalibrated" are often calibrated by composing them with the tfp.mcmc.MetropolisHastings TransitionKernel.

parameters

Return dict of __init__ arguments and their values.

Methods

bootstrap_results

bootstrap_results(init_state)

Returns an object with the same type as returned by one_step.

Args:

• init_state: Tensor or Python list of Tensors representing the initial state(s) of the Markov chain(s).

Returns:

• kernel_results: A (possibly nested) tuple, namedtuple or list of Tensors representing internal calculations made within this function.

Raises:

• ValueError: if the inner_kernel results do not contain the member "target_log_prob".

one_step

one_step(
    current_state,
    previous_kernel_results
)

Takes one step of the TransitionKernel.

Args:

• current_state: Tensor or Python list of Tensors representing the current state(s) of the Markov chain(s).
• previous_kernel_results: A (possibly nested) tuple, namedtuple or list of Tensors representing internal calculations made within the previous call to this function (or as returned by bootstrap_results).

Returns:

• next_state: Tensor or Python list of Tensors representing the next state(s) of the Markov chain(s).
• kernel_results: A (possibly nested) tuple, namedtuple or list of Tensors representing internal calculations made within this function.

Raises:

• ValueError: if the inner_kernel results do not contain the member "target_log_prob".
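The bootstrap_results/one_step contract above can be illustrated with plain-Python stand-ins (hypothetical toy classes, not TFP's implementation): an uncalibrated inner kernel that always moves, wrapped by a Metropolis-Hastings layer that applies the accept/reject rule:

```python
import collections
import math
import random

# Hypothetical kernel-results structures; TFP's real ones carry more fields.
InnerResults = collections.namedtuple('InnerResults', ['target_log_prob'])
MHResults = collections.namedtuple('MHResults', ['accepted_results', 'is_accepted'])

class ToyRandomWalk:
    """Uncalibrated inner kernel: proposes a move, never rejects."""
    def __init__(self, target_log_prob_fn, step_size, rng):
        self.tlp_fn, self.step_size, self.rng = target_log_prob_fn, step_size, rng
    def bootstrap_results(self, init_state):
        return InnerResults(target_log_prob=self.tlp_fn(init_state))
    def one_step(self, current_state, previous_kernel_results):
        next_state = current_state + self.rng.gauss(0.0, self.step_size)
        return next_state, InnerResults(target_log_prob=self.tlp_fn(next_state))

class ToyMetropolisHastings:
    """Wraps an inner kernel, accepting or rejecting its proposals."""
    def __init__(self, inner_kernel, rng):
        self.inner, self.rng = inner_kernel, rng
    def bootstrap_results(self, init_state):
        return MHResults(self.inner.bootstrap_results(init_state), True)
    def one_step(self, current_state, previous_kernel_results):
        proposed, proposed_results = self.inner.one_step(
            current_state, previous_kernel_results.accepted_results)
        # No log_acceptance_correction field => presumed 0 (symmetric proposal).
        log_accept_ratio = (proposed_results.target_log_prob
                            - previous_kernel_results.accepted_results.target_log_prob)
        if math.log(self.rng.random()) < log_accept_ratio:
            return proposed, MHResults(proposed_results, True)
        return current_state, MHResults(previous_kernel_results.accepted_results, False)

rng = random.Random(0)
kernel = ToyMetropolisHastings(ToyRandomWalk(lambda x: -x - x**2, 1.0, rng), rng)
state, results = 0.0, kernel.bootstrap_results(0.0)
states = []
for _ in range(5000):
    state, results = kernel.one_step(state, results)
    states.append(state)
print(sum(states) / len(states))  # close to the target's true mean of -0.5
```

Rejections return the previous state and the previously accepted results unchanged, which is what makes the composed chain converge to the target distribution even though the inner kernel alone is uncalibrated.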