tfp.experimental.mcmc.EllipticalSliceSampler

Runs one step of the elliptical slice sampler.

Inherits From: TransitionKernel

Elliptical Slice Sampling is a Markov Chain Monte Carlo (MCMC) algorithm based on [Murray, 2010][1].

Given log_likelihood_fn and normal_sampler_fn, the goal of Elliptical Slice Sampling is to sample from:

p(f) = N(f; 0, Sigma) * L(f) / Z

where:

  • L(f) = exp(log_likelihood_fn(f)) is the likelihood function.
  • Sigma is a covariance matrix.
  • Samples from normal_sampler_fn are distributed as N(f; 0, Sigma).
  • Z is a normalizing constant.

In other words, we sample from a posterior distribution that is proportional to a multivariate Gaussian prior multiplied by some likelihood function.
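
For reference, a single elliptical slice update draws an auxiliary variable nu from the prior, picks a slice height under the current likelihood, and then shrinks an angular bracket on the ellipse through f and nu until it finds a point above that height. The following is a minimal NumPy sketch of that update ([Murray, 2010][1], Fig. 2), not the library's implementation; sample_prior, log_lik, and rng are hypothetical stand-ins for normal_sampler_fn, log_likelihood_fn, and a seeded generator.

  import numpy as np

  def elliptical_slice_step(f, sample_prior, log_lik, rng):
    # Hypothetical stand-alone sketch of one update; see Murray (2010), Fig. 2.
    nu = sample_prior()                          # nu ~ N(0, Sigma), same shape as f.
    log_y = log_lik(f) + np.log(rng.uniform())   # Slice height under current state.
    theta = rng.uniform(0., 2. * np.pi)          # Initial angle on the ellipse.
    lo, hi = theta - 2. * np.pi, theta           # Bracket to shrink on rejection.
    while True:
      f_prime = f * np.cos(theta) + nu * np.sin(theta)
      if log_lik(f_prime) > log_y:               # Point lies on the slice: accept.
        return f_prime
      if theta < 0.:                             # Otherwise shrink the bracket
        lo = theta                               # towards theta = 0 and retry.
      else:
        hi = theta
      theta = rng.uniform(lo, hi)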

The one_step function can update multiple chains in parallel. It assumes that all leftmost dimensions of current_state index independent chain states (and are therefore updated independently). The output of log_likelihood_fn(*current_state) should sum log-probabilities across all event dimensions. Slices along the rightmost dimensions may have different target distributions; for example, current_state[0, :] could have a different target distribution from current_state[1, :]. These semantics are governed both by log_likelihood_fn(*current_state) and normal_sampler_fn.
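
As an illustration of these batch semantics, here is a hedged sketch of callables for 8 independent chains over a 3-dimensional state; the shapes, data, and scales are illustrative assumptions, not part of the API.

  import tensorflow as tf
  import tensorflow_probability as tfp

  tfd = tfp.distributions

  num_chains, event_size = 8, 3
  data = tf.random.normal([event_size])

  # One prior draw per chain: shape [num_chains, event_size].
  normal_sampler_fn = lambda seed: tfd.Normal(loc=0., scale=1.).sample(
      [num_chains, event_size], seed=seed)

  # Sum log-probs over the rightmost (event) axis only, leaving one
  # log-likelihood per chain: shape [num_chains].
  log_likelihood_fn = lambda state: tf.reduce_sum(
      tfd.Normal(loc=state, scale=1.).log_prob(data), axis=-1)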

Note that the sampler only supports states where all components have a common dtype.

Examples:

Simple chain with warm-up.

In this example we have the following model.

  p(loc | loc0, scale0) ~ N(loc0, scale0)
  p(x | loc, sigma) ~ N(loc, sigma)

What we would like to do is sample from p(loc | x, loc0, scale0). In other words, given some data, we would like to infer the posterior distribution of the mean that generated it.

We can use elliptical slice sampling here.

  import tensorflow as tf
  import tensorflow_probability as tfp
  import numpy as np

  tfd = tfp.distributions

  dtype = np.float64

  # loc0 = 0, scale0 = 1
  normal_sampler_fn = lambda seed: tfd.Normal(
      loc=dtype(0), scale=dtype(1)).sample(seed=seed)

  # We saw the following data.
  data_points = np.random.randn(20)

  # scale = 2.
  log_likelihood_fn = lambda state: tf.reduce_sum(
      tfd.Normal(state, dtype(2.)).log_prob(data_points))

  kernel = tfp.experimental.mcmc.EllipticalSliceSampler(
      normal_sampler_fn=normal_sampler_fn,
      log_likelihood_fn=log_likelihood_fn)

  samples = tfp.mcmc.sample_chain(
      num_results=int(3e5),
      current_state=dtype(1),
      kernel=kernel,
      num_burnin_steps=1000,
      trace_fn=None,
      seed=1234,
      parallel_iterations=1)  # For determinism.

  sample_mean = tf.reduce_mean(samples, axis=0)
  sample_std = tf.sqrt(
      tf.reduce_mean(tf.math.squared_difference(samples, sample_mean),
                     axis=0))

  print("Sample mean: ", sample_mean.numpy())
  print("Sample std: ", sample_std.numpy())

References

[1]: Ian Murray, Ryan P. Adams and David J.C. MacKay. Elliptical slice sampling. In _Artificial Intelligence and Statistics (AISTATS)_, 2010. https://proceedings.mlr.press/v9/murray10a/murray10a.pdf

Args
normal_sampler_fn Python callable that takes in a seed and returns a sample from a multivariate normal distribution. Note that the shape of the samples must agree with log_likelihood_fn.
log_likelihood_fn Python callable which takes an argument like current_state (or *current_state if it is a list) and returns its (possibly unnormalized) log-likelihood.
name Python str name prefixed to Ops created by this function. Default value: None (i.e., 'slice_sampler_kernel').

Attributes
experimental_shard_axis_names The shard axis names for members of the state.
is_calibrated Returns True if the Markov chain converges to the specified distribution.

TransitionKernels which are "uncalibrated" are often calibrated by composing them with the tfp.mcmc.MetropolisHastings TransitionKernel.

log_likelihood_fn

name

normal_sampler_fn

parameters Returns dict of __init__ arguments and their values.

Methods

bootstrap_results

View source

Returns an object with the same type as returned by one_step(...)[1].

Args
init_state Tensor or Python list of Tensors representing the initial state(s) of the Markov chain(s).

Returns
kernel_results A (possibly nested) tuple, namedtuple or list of Tensors representing internal calculations made within this function.

copy

View source

Non-destructively creates a deep copy of the kernel.

Args
**override_parameter_kwargs Python String/value dictionary of initialization arguments to override with new values.

Returns
new_kernel TransitionKernel object of same type as self, initialized with the union of self.parameters and override_parameter_kwargs, with any shared keys overridden by the value of override_parameter_kwargs, i.e., dict(self.parameters, **override_parameter_kwargs).
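
For example, here is a hypothetical sketch of swapping in a tempered likelihood while keeping the prior sampler; kernel and log_likelihood_fn are assumed to come from the warm-up example above, and the tempering factor is an illustrative choice.

  # Assumes `kernel` and `log_likelihood_fn` from the warm-up example.
  tempered_log_likelihood_fn = lambda state: 0.5 * log_likelihood_fn(state)
  new_kernel = kernel.copy(log_likelihood_fn=tempered_log_likelihood_fn)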

experimental_with_shard_axes

View source

Returns a copy of the kernel with the provided shard axis names.

Args
shard_axis_names a structure of strings indicating the shard axis names for each component of this kernel's state.

Returns
A copy of the current kernel with the shard axis information.

one_step

View source

Runs one iteration of the Elliptical Slice Sampler.

Args
current_state Tensor or Python list of Tensors representing the current state(s) of the Markov chain(s). The first r dimensions index independent chains, where r = tf.rank(log_likelihood_fn(*normal_sampler_fn())).
previous_kernel_results collections.namedtuple containing Tensors representing values from previous calls to this function (or from the bootstrap_results function).
seed PRNG seed; see tfp.random.sanitize_seed for details.

Returns
next_state Tensor or Python list of Tensors representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as current_state.
kernel_results collections.namedtuple of internal calculations used to advance the chain.

Raises
TypeError if log_likelihood.dtype is not floating point.
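
As a hedged sketch of driving the kernel by hand with bootstrap_results and one_step (reusing kernel and dtype from the warm-up example above; the step count and seed handling are illustrative):

  # Assumes `kernel` and `dtype` from the warm-up example above.
  seed = tfp.random.sanitize_seed(42)
  state = tf.constant(dtype(1.))
  pkr = kernel.bootstrap_results(state)
  for _ in range(5):
    seed, step_seed = tfp.random.split_seed(seed)
    state, pkr = kernel.one_step(state, pkr, seed=step_seed)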