
Runs one step of the elliptic slice sampler.

Inherits From: `TransitionKernel`

```
tfp.experimental.mcmc.EllipticalSliceSampler(
    normal_sampler_fn, log_likelihood_fn, seed=None, name=None
)
```

Elliptical Slice Sampling is a Markov Chain Monte Carlo (MCMC) algorithm based on [Murray, 2010][1].

Given `log_likelihood_fn` and `normal_sampler_fn`, the goal of Elliptical Slice Sampling is to sample from:

```
p(f) = N(f; 0, Sigma) L(f) / Z
```

where:

- `L = log_likelihood_fn`,
- `Sigma` is a covariance matrix,
- samples from `normal_sampler_fn` are distributed as `N(f; 0, Sigma)`, and
- `Z` is a normalizing constant.

In other words, the sampler draws from a posterior distribution that is proportional to a multivariate Gaussian prior multiplied by some likelihood function.

The `one_step` function can update multiple chains in parallel. It assumes that all leftmost dimensions of `current_state` index independent chain states (and are therefore updated independently). The output of `log_likelihood_fn(*current_state)` should sum log-probabilities across all event dimensions. Slices along the rightmost dimensions may have different target distributions; for example, `current_state[0, :]` could have a different target distribution from `current_state[1, :]`. These semantics are governed both by `log_likelihood_fn(*current_state)` and `normal_sampler_fn`.

Note that the sampler only supports states where all components have a common dtype.
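For example, here is a minimal sketch (not from the original docs) of running several independent chains in parallel. The names `observations`, `num_chains`, and `dim` are illustrative assumptions; the key point is that `normal_sampler_fn` draws one prior sample per chain and `log_likelihood_fn` returns one log-likelihood per chain:

```
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

num_chains, dim = 8, 3

# One prior draw per chain: samples have shape [num_chains, dim].
normal_sampler_fn = lambda seed: tfd.MultivariateNormalDiag(
    loc=tf.zeros([num_chains, dim])).sample(seed=seed)

# One log-likelihood per chain: summing over the event (rightmost)
# dimension yields shape [num_chains].
observations = tf.random.normal([dim])
log_likelihood_fn = lambda state: tf.reduce_sum(
    tfd.Normal(state, 1.).log_prob(observations), axis=-1)

kernel = tfp.experimental.mcmc.EllipticalSliceSampler(
    normal_sampler_fn=normal_sampler_fn,
    log_likelihood_fn=log_likelihood_fn,
    seed=42)

# `samples` has shape [100, num_chains, dim]; the leftmost state
# dimension indexes chains, which are updated independently.
samples = tfp.mcmc.sample_chain(
    num_results=100,
    current_state=tf.zeros([num_chains, dim]),
    kernel=kernel,
    num_burnin_steps=100,
    trace_fn=None)
```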

### Examples:

#### Simple chain with warm-up.

In this example we have the following model.

```
p(loc | loc0, scale0) ~ N(loc0, scale0)
p(x | loc, sigma) ~ N(loc, sigma)
```

What we would like to do is sample from `p(loc | x, loc0, scale0)`. In other words, given some data, we would like to infer the posterior distribution of the mean that generated the data.

We can use elliptical slice sampling here.
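In terms of the target density above, the Gaussian factor `N(f; 0, Sigma)` is the prior on `loc` and the likelihood factors over the observed points. With the values used in the code below (`loc0 = 0`, `scale0 = 1`, `scale = 2`):

```
p(loc | x) ∝ N(loc; 0, 1) * prod_i N(x_i; loc, 2)
```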

```
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

dtype = np.float64

# Prior: loc0 = 0, scale0 = 1.
normal_sampler_fn = lambda seed: tfd.Normal(
    loc=dtype(0), scale=dtype(1)).sample(seed=seed)

# We saw the following data.
data_points = np.random.randn(20)

# Likelihood: scale = 2.
log_likelihood_fn = lambda state: tf.reduce_sum(
    tfd.Normal(state, dtype(2.)).log_prob(data_points))

kernel = tfp.experimental.mcmc.EllipticalSliceSampler(
    normal_sampler_fn=normal_sampler_fn,
    log_likelihood_fn=log_likelihood_fn,
    seed=1234)

samples, _ = tfp.mcmc.sample_chain(
    num_results=int(3e5),
    current_state=dtype(1),
    kernel=kernel,
    num_burnin_steps=1000,
    parallel_iterations=1)  # For determinism.

sample_mean = tf.reduce_mean(samples, axis=0)
sample_std = tf.sqrt(
    tf.reduce_mean(
        tf.math.squared_difference(samples, sample_mean),
        axis=0))

print("Sample mean: ", sample_mean)
print("Sample std: ", sample_std)
```
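As a sanity check (an addition, not part of the original example), this normal-normal model is conjugate, so the posterior is available in closed form and the MCMC estimates can be compared against it:

```
# Closed-form posterior for prior N(0, 1) and likelihood N(loc, 2)
# over n = 20 data points: precisions add, so
#   posterior_var  = 1 / (1/scale0**2 + n/scale**2)
#   posterior_mean = posterior_var * sum(data_points) / scale**2
posterior_var = 1. / (1. / 1.**2 + len(data_points) / 2.**2)
posterior_mean = posterior_var * np.sum(data_points) / 2.**2
print("Analytic mean: ", posterior_mean)
print("Analytic std: ", np.sqrt(posterior_var))
```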

### References

[1]: Ian Murray, Ryan P. Adams, and David J.C. MacKay. Elliptical slice sampling. In _Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2010. http://proceedings.mlr.press/v9/murray10a/murray10a.pdf

#### Args:

- `normal_sampler_fn`: Python callable that takes in a seed and returns a sample from a multivariate normal distribution. Note that the shape of the samples must agree with `log_likelihood_fn`.
- `log_likelihood_fn`: Python callable which takes an argument like `current_state` (or `*current_state` if it is a list) and returns its (possibly unnormalized) log-likelihood.
- `seed`: Python integer to seed the random number generator.
- `name`: Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., 'slice_sampler_kernel').

#### Attributes:

- `is_calibrated`: Returns `True` if Markov chain converges to specified distribution. `TransitionKernel`s which are "uncalibrated" are often calibrated by composing them with the `tfp.mcmc.MetropolisHastings` `TransitionKernel`.
- `log_likelihood_fn`
- `name`
- `normal_sampler_fn`
- `parameters`: Returns `dict` of `__init__` arguments and their values.
- `seed`

## Methods

### `bootstrap_results`

```
bootstrap_results(
    init_state
)
```

Returns an object with the same type as returned by `one_step(...)[1]`.

#### Args:

- `init_state`: `Tensor` or Python `list` of `Tensor`s representing the initial state(s) of the Markov chain(s).

#### Returns:

- `kernel_results`: A (possibly nested) `tuple`, `namedtuple` or `list` of `Tensor`s representing internal calculations made within this function.

### `one_step`

```
one_step(
    current_state, previous_kernel_results
)
```

Runs one iteration of the Elliptical Slice Sampler.

#### Args:

- `current_state`: `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(log_likelihood_fn(*normal_sampler_fn()))`.
- `previous_kernel_results`: `collections.namedtuple` containing `Tensor`s representing values from previous calls to this function (or from the `bootstrap_results` function).

#### Returns:

- `next_state`: `Tensor` or Python `list` of `Tensor`s representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as `current_state`.
- `kernel_results`: `collections.namedtuple` of internal calculations used to advance the chain.

#### Raises:

- `TypeError`: if `not log_likelihood.dtype.is_floating`.
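For illustration, here is a minimal sketch (assuming the `kernel` from the warm-up example above) of driving the kernel by hand rather than through `tfp.mcmc.sample_chain`:

```
# Initialize kernel results from the initial state, then take a few
# steps manually; `one_step` returns the next state and updated results.
state = tf.constant(1., dtype=tf.float64)
kernel_results = kernel.bootstrap_results(state)
for _ in range(5):
  state, kernel_results = kernel.one_step(state, kernel_results)
```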