# tfp.experimental.substrates.jax.distributions.MultivariateStudentTLinearOperator

View source on GitHub

Inherits From: `Distribution`

The [Multivariate Student's t-distribution](https://en.wikipedia.org/wiki/Multivariate_t-distribution) on `R^k`.

```python
tfp.experimental.substrates.jax.distributions.MultivariateStudentTLinearOperator(
    df, loc, scale, validate_args=False, allow_nan_stats=True,
    name='MultivariateStudentTLinearOperator'
)
```

#### Mathematical Details

The probability density function (pdf) is,

```none
pdf(x; df, loc, Sigma) = (1 + ||y||**2 / df)**(-0.5 (df + k)) / Z
where,
y = inv(Sigma) (x - loc)
Z = abs(det(Sigma)) sqrt(df pi)**k Gamma(0.5 df) / Gamma(0.5 (df + k))
```

where:

  • `df` is a positive scalar,
  • `loc` is a vector in `R^k`,
  • `Sigma` is a positive definite `shape` matrix in `R^{k x k}`, parameterized as `scale @ scale.T` in this class,
  • `Z` denotes the normalization constant, and
  • `||y||**2` denotes the squared Euclidean norm of `y`.
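
A quick numerical sanity check (a hedged sketch, not part of the original docs; it evaluates the density in terms of the `scale` factor, with `Sigma = scale @ scale.T`, and assumes NumPy and SciPy are available). The helper `mvt_log_pdf` is purely illustrative and should agree with `mvt.prob` from the example below:

```python
import numpy as np
from scipy.special import gammaln  # log Gamma(x), for numerical stability

def mvt_log_pdf(x, df, loc, scale):
  """log pdf(x; df, loc, Sigma) with Sigma = scale @ scale.T."""
  x, loc, scale = map(np.asarray, (x, loc, scale))
  k = loc.shape[-1]
  # y = inv(scale) (x - loc); solving avoids forming the inverse explicitly.
  y = np.linalg.solve(scale, x - loc)
  log_z = (np.log(abs(np.linalg.det(scale)))     # abs(det(scale))
           + 0.5 * k * np.log(df * np.pi)        # sqrt(df pi)**k
           + gammaln(0.5 * df) - gammaln(0.5 * (df + k)))
  return -0.5 * (df + k) * np.log1p(np.sum(y**2) / df) - log_z

df, loc = 3., np.array([1., 2., 3.])
scale = np.array([[0.6, 0.0, 0.0],
                  [0.2, 0.5, 0.0],
                  [0.1, -0.3, 0.4]])
np.exp(mvt_log_pdf([-1., 0., 1.], df, loc, scale))
# Expected to match mvt.prob([-1., 0, 1]) in the example below.
```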

The Multivariate Student's t-distribution is a member of the location-scale family, i.e., it can be constructed as,

```none
X ~ MultivariateT(loc=0, scale=1)   # Identity scale, zero shift.
Y = scale @ X + loc
```

#### Examples

```python
tfd = tfp.distributions

# Initialize a single 3-variate Student's t.
df = 3.
loc = [1., 2, 3]
scale = [[ 0.6,  0. ,  0. ],
         [ 0.2,  0.5,  0. ],
         [ 0.1, -0.3,  0.4]]
sigma = tf.matmul(scale, scale, adjoint_b=True)
# ==> [[ 0.36,  0.12,  0.06],
#      [ 0.12,  0.29, -0.13],
#      [ 0.06, -0.13,  0.26]]

mvt = tfd.MultivariateStudentTLinearOperator(
    df=df,
    loc=loc,
    scale=tf.linalg.LinearOperatorLowerTriangular(scale))

# Covariance is closely related to the sigma matrix (for df=3, it is 3x the
# sigma matrix).
mvt.covariance().eval()
# ==> [[ 1.08,  0.36,  0.18],
#      [ 0.36,  0.87, -0.39],
#      [ 0.18, -0.39,  0.78]]

# Compute the pdf of an `R^3` observation; return a scalar.
mvt.prob([-1., 0, 1]).eval()  # shape: []
```

#### Args:


* <b>`df`</b>: A positive floating-point `Tensor`. Has shape `[B1, ..., Bb]` where `b
  >= 0`.
* <b>`loc`</b>: Floating-point `Tensor`. Has shape `[B1, ..., Bb, k]` where `k` is
  the event size.
* <b>`scale`</b>: Instance of `LinearOperator` with a floating `dtype` and shape
  `[B1, ..., Bb, k, k]`.
* <b>`validate_args`</b>: Python `bool`, default `False`. Whether to validate input
  with asserts. If `validate_args` is `False`, and the inputs are invalid,
  correct behavior is not guaranteed.
* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. If `False`, raise an
  exception if a statistic (e.g. mean/variance/etc...) is undefined for
  any batch member. If `True`, batch members with valid parameters leading
  to undefined statistics will return NaN for this statistic.
* <b>`name`</b>: The name to give Ops created by the initializer.


#### Attributes:

* <b>`allow_nan_stats`</b>:   Python `bool` describing behavior when a stat is undefined.

  Stats return +/- infinity when it makes sense. E.g., the variance of a
  Cauchy distribution is infinity. However, sometimes the statistic is
  undefined, e.g., if a distribution's pdf does not achieve a maximum within
  the support of the distribution, the mode is undefined. If the mean is
  undefined, then by definition the variance is undefined. E.g. the mean for
  Student's T for df = 1 is undefined (no clear way to say it is either + or -
  infinity), so the variance = E[(X - mean)**2] is also undefined.
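
  For example (a hedged sketch, not from the original docs, assuming eager
  execution): with `df <= 1` the mean is undefined, so the default
  `allow_nan_stats=True` returns NaN, while `allow_nan_stats=False` is
  expected to raise instead.

  ```python
  heavy = tfd.MultivariateStudentTLinearOperator(
      df=1., loc=[0., 0.],
      scale=tf.linalg.LinearOperatorIdentity(num_rows=2))
  heavy.mean()     # ==> [nan, nan]

  strict = tfd.MultivariateStudentTLinearOperator(
      df=1., loc=[0., 0.],
      scale=tf.linalg.LinearOperatorIdentity(num_rows=2),
      allow_nan_stats=False)
  # strict.mean()  # raises, because the mean is undefined for df <= 1
  ```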

* <b>`batch_shape`</b>:   Shape of a single sample from a single event index as a `TensorShape`.

  May be partially defined or unknown.

  The batch dimensions are indexes into independent, non-identical
  parameterizations of this distribution.

* <b>`df`</b>:   The degrees of freedom of the distribution.

  This controls the heaviness of the tails of the distribution: the tails get
  heavier the smaller `df` is. As `df` goes to infinity, the distribution
  approaches the Multivariate Normal with the same `loc` and `scale`.
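
  For example (a hedged sketch, reusing `loc`, `scale`, and `sigma` from the
  example above and assuming eager execution), the ratio of the covariance to
  `sigma = scale @ scale.T` is `df / (df - 2)`, which tends to 1 as `df`
  grows:

  ```python
  op = tf.linalg.LinearOperatorLowerTriangular(scale)
  for df_ in [3., 10., 1000.]:
    t = tfd.MultivariateStudentTLinearOperator(df=df_, loc=loc, scale=op)
    # covariance() == df_ / (df_ - 2) * sigma, so the ratio tends to 1.
    print(float(t.covariance()[0, 0] / sigma[0, 0]))
  # ==> approximately 3.0, 1.25, 1.002
  ```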

* <b>`dtype`</b>:   The `DType` of `Tensor`s handled by this `Distribution`.
* <b>`event_shape`</b>:   Shape of a single sample from a single batch as a `TensorShape`.

  May be partially defined or unknown.

* <b>`loc`</b>:   The location parameter of the distribution.

  `loc` applies an elementwise shift to the distribution.

  ```none
  X ~ MultivariateT(loc=0, scale=1)   # Identity scale, zero shift.
  Y = scale @ X + loc
  ```

* <b>`name`</b>:   Name prepended to all ops created by this `Distribution`.
* <b>`parameters`</b>:   Dictionary of parameters used to instantiate this `Distribution`.
* <b>`reparameterization_type`</b>:   Describes how samples from the distribution are reparameterized.

  Currently this is one of the static instances
  `tfd.FULLY_REPARAMETERIZED` or `tfd.NOT_REPARAMETERIZED`.

* <b>`scale`</b>:   The scale parameter of the distribution.

  `scale` applies an affine scale to the distribution.

  ```none
  X ~ MultivariateT(loc=0, scale=1)   # Identity scale, zero shift.
  Y = scale @ X + loc
  ```

* <b>`trainable_variables`</b>
* <b>`validate_args`</b>:   Python `bool` indicating possibly expensive checks are enabled.
* <b>`variables`</b>


#### Raises:


* <b>`TypeError`</b>: if not `scale.dtype.is_floating`.
* <b>`ValueError`</b>: if not `scale.is_non_singular`.

## Methods

<h3 id="__getitem__"><code>__getitem__</code></h3>

<a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.9.0/tensorflow_probability/python/distributions/_jax/distribution.py#L624-L651">View source</a>

```python
__getitem__(
    slices
)
```

Slices the batch axes of this distribution, returning a new instance.

```python
b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.normal([5, 3, 2, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.normal([4, 1, 3, 1])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]
```

Args:

  • slices: slices from the [] operator

Returns:

  • dist: A new tfd.Distribution instance with sliced parameters.

__iter__

View source

__iter__()

batch_shape_tensor

View source

batch_shape_tensor(
    name='batch_shape_tensor'
)

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args:

  • name: name to give to the op

Returns:

  • batch_shape: Tensor.
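
A hedged sketch (reusing `loc` and `scale` from the example at the top of the page): broadcasting a vector of `df` values produces a batch of three distributions.

```python
batched = tfd.MultivariateStudentTLinearOperator(
    df=[3., 5., 7.],
    loc=loc,
    scale=tf.linalg.LinearOperatorLowerTriangular(scale))
batched.batch_shape_tensor()  # ==> [3]  (three batch members)
batched.event_shape_tensor()  # ==> [3]  (event size k = 3)
```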

cdf

View source

cdf(
    value, name='cdf', **kwargs
)

Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

copy

View source

copy(
    **override_parameters_kwargs
)

Creates a deep copy of the distribution.

Args:

  • **override_parameters_kwargs: String/value dictionary of initialization arguments to override with new values.

Returns:

  • distribution: A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
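
For example (a hedged sketch, reusing `mvt` from the example at the top of the page):

```python
mvt5 = mvt.copy(df=5.)
mvt5.parameters['df']   # ==> 5.0
mvt5.parameters['loc']  # unchanged, inherited from mvt.parameters
```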

covariance

View source

covariance(
    name='covariance', **kwargs
)

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Additional documentation from MultivariateStudentTLinearOperator:

The covariance for Multivariate Student's t equals

scale @ scale.T * df / (df - 2), when df > 2
infinity, when 1 < df <= 2
NaN, when df <= 1

If self.allow_nan_stats=False, then an exception will be raised rather than returning NaN.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • covariance: Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).
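
A hedged numerical check of the formula above, reusing `mvt`, `sigma`, and `df` from the example at the top of the page (NumPy assumed available, eager execution assumed):

```python
import numpy as np

# For df = 3, covariance == sigma * df / (df - 2) == 3 * sigma.
np.allclose(mvt.covariance(), np.asarray(sigma) * df / (df - 2.))  # ==> True
```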

cross_entropy

View source

cross_entropy(
    other, name='cross_entropy'
)

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • cross_entropy: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.
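
A hedged Monte Carlo sketch (not from the original docs): when no analytic cross entropy is registered for a pair of distributions, `H[P, Q] = E_p[-log q(X)]` can still be approximated by sampling from `P`. The distributions `p` and `q` below are illustrative and reuse `loc` and `scale` from the example at the top of the page.

```python
p = tfd.MultivariateStudentTLinearOperator(
    df=5., loc=loc, scale=tf.linalg.LinearOperatorLowerTriangular(scale))
q = tfd.MultivariateStudentTLinearOperator(
    df=7., loc=loc, scale=tf.linalg.LinearOperatorLowerTriangular(scale))
x = p.sample(10000, seed=42)
approx_cross_entropy = -tf.reduce_mean(q.log_prob(x))  # ~ H[P, Q]
```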

entropy

View source

entropy(
    name='entropy', **kwargs
)

Shannon entropy in nats.

event_shape_tensor

View source

event_shape_tensor(
    name='event_shape_tensor'
)

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:

  • name: name to give to the op

Returns:

  • event_shape: Tensor.

is_scalar_batch

View source

is_scalar_batch(
    name='is_scalar_batch'
)

Indicates that batch_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_batch: bool scalar Tensor.

is_scalar_event

View source

is_scalar_event(
    name='is_scalar_event'
)

Indicates that event_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_event: bool scalar Tensor.

kl_divergence

View source

kl_divergence(
    other, name='kl_divergence'
)

Computes the Kullback--Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • kl_divergence: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.
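
A hedged Monte Carlo sketch, reusing `p`, `q`, and `x` from the cross-entropy sketch above:

```python
# KL[p, q] = E_p[log p(X) - log q(X)], approximated with samples from p.
approx_kl = tf.reduce_mean(p.log_prob(x) - q.log_prob(x))
# Consistent with KL[p, q] = H[p, q] - H[p]:
approx_entropy_p = -tf.reduce_mean(p.log_prob(x))
# approx_kl is then roughly approx_cross_entropy - approx_entropy_p.
```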

log_cdf

View source

log_cdf(
    value, name='log_cdf', **kwargs
)

Log cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_prob

View source

log_prob(
    value, name='log_prob', **kwargs
)

Log probability density/mass function.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
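
A hedged shape illustration, reusing `mvt` (batch shape `[]`, event shape `[3]`) from the example at the top of the page:

```python
obs = mvt.sample(5, seed=17)  # shape: [5, 3]  (sample_shape + event_shape)
mvt.log_prob(obs)             # shape: [5]     (sample_shape + batch_shape)
```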

log_survival_function

View source

log_survival_function(
    value, name='log_survival_function', **kwargs
)

Log survival function.

Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

mean

View source

mean(
    name='mean', **kwargs
)

Mean.

Additional documentation from MultivariateStudentTLinearOperator:

The mean of Student's T equals loc if df > 1, otherwise it is NaN. If self.allow_nan_stats=False, then an exception will be raised rather than returning NaN.
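
For example (a hedged sketch, reusing `mvt` with `df = 3` from the example at the top of the page):

```python
mvt.mean()  # ==> [1., 2., 3.], i.e. loc, since df = 3 > 1
```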

mode

View source

mode(
    name='mode', **kwargs
)

Mode.

param_shapes

View source

@classmethod
param_shapes(
    cls, sample_shape, name='DistributionParamShapes'
)

Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args:

  • sample_shape: Tensor or python list/tuple. Desired shape of a call to sample().
  • name: name to prepend ops with.

Returns:

dict of parameter name to Tensor shapes.

param_static_shapes

View source

@classmethod
param_static_shapes(
    cls, sample_shape
)

param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args:

  • sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample().

Returns:

dict of parameter name to TensorShape.

Raises:

  • ValueError: if sample_shape is a TensorShape and is not fully defined.

prob

View source

prob(
    value, name='prob', **kwargs
)

Probability density/mass function.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

quantile

View source

quantile(
    value, name='quantile', **kwargs
)

Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • quantile: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

sample

View source

sample(
    sample_shape=(), seed=None, name='sample', **kwargs
)

Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.

Args:

  • sample_shape: 0D or 1D int32 Tensor. Shape of the generated samples.
  • seed: Python integer or tfp.util.SeedStream instance, for seeding PRNG.
  • name: name to give to the op.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • samples: a Tensor with prepended dimensions sample_shape.
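
A hedged shape illustration, reusing `mvt` from the example at the top of the page:

```python
draws = mvt.sample([4, 2], seed=123)
draws.shape  # ==> [4, 2, 3]  (sample_shape [4, 2] + batch_shape [] + event_shape [3])
```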

stddev

View source

stddev(
    name='stddev', **kwargs
)

Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Additional documentation from MultivariateStudentTLinearOperator:

The standard deviation for Student's T equals

sqrt(diag(scale @ scale.T) * df / (df - 2)), when df > 2
infinity, when 1 < df <= 2
NaN, when df <= 1

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • stddev: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().
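
A hedged numerical check of the formula above, reusing `mvt`, `sigma`, and `df` from the example at the top of the page (NumPy assumed available, eager execution assumed):

```python
import numpy as np

# stddev == sqrt(diag(sigma) * df / (df - 2)) for df = 3.
np.allclose(mvt.stddev(),
            np.sqrt(np.diagonal(np.asarray(sigma)) * df / (df - 2.)))  # ==> True
```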

survival_function

View source

survival_function(
    value, name='survival_function', **kwargs
)

Survival function.

Given random variable X, the survival function is defined:

survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

variance

View source

variance(
    name='variance', **kwargs
)

Variance.

Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Additional documentation from MultivariateStudentTLinearOperator:

The variance for Student's T equals

diag(scale @ scale.T) * df / (df - 2), when df > 2
infinity, when 1 < df <= 2
NaN, when df <= 1

If self.allow_nan_stats=False, then an exception will be raised rather than returning NaN.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • variance: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().
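
A hedged consistency check, reusing `mvt` from the example at the top of the page (NumPy assumed available, eager execution assumed): the variance equals the diagonal of the covariance and the squared standard deviation.

```python
import numpy as np

np.allclose(mvt.variance(), np.diagonal(np.asarray(mvt.covariance())))  # ==> True
np.allclose(mvt.variance(), np.asarray(mvt.stddev())**2)                # ==> True
```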