# tfp.experimental.substrates.jax.mcmc.MetropolisHastings

Runs one step of the Metropolis-Hastings algorithm.

Inherits From: `TransitionKernel`

The Metropolis-Hastings algorithm is a Markov chain Monte Carlo (MCMC) technique which uses a proposal distribution to eventually sample from a target distribution.

The `inner_kernel`'s `one_step` must return `kernel_results` as a `collections.namedtuple` which must:

• have a `target_log_prob` field,
• optionally have a `log_acceptance_correction` field, and,
• have only fields which are `Tensor`-valued.

The Metropolis-Hastings log acceptance-probability is computed as:

```
log_accept_ratio = (current_kernel_results.target_log_prob
                    - previous_kernel_results.target_log_prob
                    + current_kernel_results.log_acceptance_correction)
```

If `current_kernel_results.log_acceptance_correction` does not exist, it is presumed `0.` (i.e., that the proposal distribution is symmetric).
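As a concrete illustration, the acceptance computation above can be sketched in pure Python (the helper name below is hypothetical, not part of the TFP API); a missing correction is presumed `0.`:

```python
import math

def log_accept_ratio(current_target_log_prob, previous_target_log_prob,
                     log_acceptance_correction=None):
    # Hypothetical helper mirroring the formula above; a missing
    # correction is presumed 0. (i.e., a symmetric proposal).
    if log_acceptance_correction is None:
        log_acceptance_correction = 0.
    return (current_target_log_prob
            - previous_target_log_prob
            + log_acceptance_correction)

# Symmetric proposal: correction omitted.
r = log_accept_ratio(-1.5, -2.0)      # r == 0.5
accept_prob = min(1.0, math.exp(r))   # accept with probability 1.0
```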

The most common use-case for `log_acceptance_correction` is in the Metropolis-Hastings algorithm, i.e.,

```
accept_prob(x' | x) = p(x') / p(x) * (g(x | x') / g(x' | x))

where,
  p  represents the target distribution,
  g  represents the proposal (conditional) distribution,
  x' is the proposed state, and,
  x  is the current state.
```

The log of the parenthetical term is the `log_acceptance_correction`.

The `log_acceptance_correction` may not necessarily correspond to the ratio of proposal distributions, e.g, `log_acceptance_correction` has a different interpretation in Hamiltonian Monte Carlo.
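For the proposal-ratio case specifically, here is a pure-Python sketch (the function names and the proposal are illustrative, not TFP API) computing `log[g(x | x') / g(x' | x)]` for a Gaussian random-walk proposal with a fixed drift, which is asymmetric whenever the drift is nonzero:

```python
import math

def normal_logpdf(x, mu, sigma):
    # Log density of Normal(mu, sigma) at x.
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma * math.sqrt(2.0 * math.pi)))

def log_acceptance_correction(x, x_prime, drift, sigma=1.0):
    # log[g(x | x') / g(x' | x)] for the illustrative proposal
    # x' ~ Normal(x + drift, sigma).
    return (normal_logpdf(x, x_prime + drift, sigma)
            - normal_logpdf(x_prime, x + drift, sigma))

# With zero drift the proposal is symmetric and the correction vanishes.
c = log_acceptance_correction(0.3, 1.1, drift=0.0)    # c == 0.0
# With nonzero drift the correction is generally nonzero.
c2 = log_acceptance_correction(0.0, 1.0, drift=0.5)   # c2 == -1.0
```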

#### Examples

```python
import tensorflow_probability as tfp; tfp = tfp.experimental.substrates.jax
hmc = tfp.mcmc.MetropolisHastings(
    tfp.mcmc.UncalibratedHamiltonianMonteCarlo(
        target_log_prob_fn=lambda x: -x - x**2,
        step_size=0.1,
        num_leapfrog_steps=3))
# ==> functionally equivalent to:
# hmc = tfp.mcmc.HamiltonianMonteCarlo(
#     target_log_prob_fn=lambda x: -x - x**2,
#     step_size=0.1,
#     num_leapfrog_steps=3)
```

Args

`inner_kernel` `TransitionKernel`-like object which has a `collections.namedtuple` `kernel_results` and which contains a `target_log_prob` member and optionally a `log_acceptance_correction` member.
`seed` Python integer to seed the random number generator. Deprecated: pass a seed to `tfp.mcmc.sample_chain` instead.
`name` Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., "mh_kernel").

Attributes

`inner_kernel`
`is_calibrated` Returns `True` if the Markov chain converges to the specified distribution. `TransitionKernel`s which are "uncalibrated" are often calibrated by composing them with the `tfp.mcmc.MetropolisHastings` `TransitionKernel`.
`name`
`parameters` Returns a `dict` of `__init__` arguments and their values.
`seed`

## Methods

### `bootstrap_results`


Returns an object with the same type as returned by `one_step`.

Args
`init_state` `Tensor` or Python `list` of `Tensor`s representing the initial state(s) of the Markov chain(s).

Returns
`kernel_results` A (possibly nested) `tuple`, `namedtuple` or `list` of `Tensor`s representing internal calculations made within this function.

Raises
`ValueError` if the `inner_kernel` results do not contain the member "target_log_prob".

### `copy`


Non-destructively creates a deep copy of the kernel.

Args
`**override_parameter_kwargs` Python String/value `dictionary` of initialization arguments to override with new values.

Returns
`new_kernel` `TransitionKernel` object of the same type as `self`, initialized with the union of `self.parameters` and `override_parameter_kwargs`, with any shared keys overridden by the value of `override_parameter_kwargs`, i.e., `dict(self.parameters, **override_parameter_kwargs)`.
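The merge semantics are exactly those of the `dict` expression above; for example (parameter names here are illustrative):

```python
# Shared keys take the override value; other parameters are preserved.
params = {'step_size': 0.1, 'num_leapfrog_steps': 3}
overrides = {'step_size': 0.2}
merged = dict(params, **overrides)
# merged == {'step_size': 0.2, 'num_leapfrog_steps': 3}
```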

### `one_step`


Takes one step of the TransitionKernel.

Args
`current_state` `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s).
`previous_kernel_results` A (possibly nested) `tuple`, `namedtuple` or `list` of `Tensor`s representing internal calculations made within the previous call to this function (or as returned by `bootstrap_results`).
`seed` Optional, a seed for reproducible sampling.

Returns
`next_state` `Tensor` or Python `list` of `Tensor`s representing the next state(s) of the Markov chain(s).
`kernel_results` A (possibly nested) `tuple`, `namedtuple` or `list` of `Tensor`s representing internal calculations made within this function.

Raises
`ValueError` if the `inner_kernel` results do not contain the member "target_log_prob".
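To make the accept/reject mechanics concrete, here is a minimal pure-Python sketch of one Metropolis-Hastings step (not the TFP `one_step` signature; all names are illustrative):

```python
import math
import random

def mh_one_step(current_state, current_log_prob, proposal_fn,
                target_log_prob_fn, log_acceptance_correction_fn=None,
                rng=random):
    # Illustrative single MH update; a missing correction is presumed 0.
    proposed = proposal_fn(current_state, rng)
    proposed_log_prob = target_log_prob_fn(proposed)
    correction = (0.0 if log_acceptance_correction_fn is None
                  else log_acceptance_correction_fn(current_state, proposed))
    log_accept_ratio = proposed_log_prob - current_log_prob + correction
    if math.log(rng.random()) < log_accept_ratio:
        return proposed, proposed_log_prob    # accepted
    return current_state, current_log_prob    # rejected

# Sample from an unnormalized Normal(0, 1) target with a symmetric
# random-walk proposal (no correction needed).
rng = random.Random(42)
target = lambda x: -0.5 * x * x
state, lp = 0.0, target(0.0)
for _ in range(1000):
    state, lp = mh_one_step(state, lp,
                            lambda x, r: x + r.gauss(0.0, 1.0),
                            target, rng=rng)
```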