tfp.substrates.numpy.distributions.MultivariateNormalLinearOperator

The multivariate normal distribution on R^k.

Inherits From: TransformedDistribution, Distribution

The Multivariate Normal distribution is defined over R^k and parameterized by a (batch of) length-k loc vector (aka "mu") and a (batch of) k x k scale matrix; covariance = scale @ scale.T, where @ denotes matrix-multiplication.

Mathematical Details

The probability density function (pdf) is,

pdf(x; loc, scale) = exp(-0.5 ||y||**2) / Z,
y = inv(scale) @ (x - loc),
Z = (2 pi)**(0.5 k) |det(scale)|,

where:

  • loc is a vector in R^k,
  • scale is a linear operator in R^{k x k}, cov = scale @ scale.T,
  • Z denotes the normalization constant, and,
  • ||y||**2 denotes the squared Euclidean norm of y.

The MultivariateNormal distribution is a member of the location-scale family, i.e., it can be constructed as,

X ~ MultivariateNormal(loc=0, scale=1)   # Identity scale, zero shift.
Y = scale @ X + loc
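
As an informal check of this location-scale construction, the sketch below (plain NumPy, independent of this API; the `rng`, `loc`, and `scale` names are illustrative) draws standard-normal samples, applies the affine map, and recovers covariance = scale @ scale.T empirically:

import numpy as np

rng = np.random.default_rng(0)
loc = np.array([1., 2., 3.])
scale = np.array([[ 0.6,  0. ,  0. ],
                  [ 0.2,  0.5,  0. ],
                  [ 0.1, -0.3,  0.4]])    # lower-triangular scale

x = rng.standard_normal((200000, 3))      # X ~ N(0, I), one sample per row
y = x @ scale.T + loc                     # Y = scale @ X + loc, row-wise

np.cov(y, rowvar=False).round(2)
# ==> approximately scale @ scale.T:
#     [[ 0.36,  0.12,  0.06],
#      [ 0.12,  0.29, -0.13],
#      [ 0.06, -0.13,  0.26]]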

Examples

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Initialize a single 3-variate Gaussian.
mu = [1., 2, 3]
cov = [[ 0.36,  0.12,  0.06],
       [ 0.12,  0.29, -0.13],
       [ 0.06, -0.13,  0.26]]
scale = tf.linalg.cholesky(cov)
# ==> [[ 0.6,  0. ,  0. ],
#      [ 0.2,  0.5,  0. ],
#      [ 0.1, -0.3,  0.4]]

mvn = tfd.MultivariateNormalLinearOperator(
    loc=mu,
    scale=tf.linalg.LinearOperatorLowerTriangular(scale))

# Covariance agrees with cholesky(cov) parameterization.
mvn.covariance()
# ==> [[ 0.36,  0.12,  0.06],
#      [ 0.12,  0.29, -0.13],
#      [ 0.06, -0.13,  0.26]]

# Compute the pdf of an `R^3` observation; return a scalar.
mvn.prob([-1., 0, 1])  # shape: []

# Initialize a 2-batch of 3-variate Gaussians.
mu = [[1., 2, 3],
      [11, 22, 33]]              # shape: [2, 3]
scale_diag = [[1., 2, 3],
              [0.5, 1, 1.5]]     # shape: [2, 3]

mvn = tfd.MultivariateNormalLinearOperator(
    loc=mu,
    scale=tf.linalg.LinearOperatorDiag(scale_diag))

# Compute the pdf of two `R^3` observations; return a length-2 vector.
x = [[-0.9, 0, 0.1],
     [-10, 0, 9]]     # shape: [2, 3]
mvn.prob(x)    # shape: [2]
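
Continuing the batch example (a sketch that assumes the `mvn` and `x` defined just above), samples follow the sample_shape + batch_shape + event_shape convention and log_prob reduces over the event dimension:

samples = mvn.sample(5)    # shape: [5, 2, 3]
mvn.log_prob(samples)      # shape: [5, 2]
mvn.log_prob(x)            # shape: [2]; the log of `mvn.prob(x)` above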

Args
loc Floating-point Tensor. If this is set to None, loc is implicitly 0. When specified, may have shape [B1, ..., Bb, k] where b >= 0 and k is the event size.
scale Instance of LinearOperator with the same dtype as loc and shape [B1, ..., Bb, k, k].
validate_args Python bool, default False. Whether to validate input with asserts. If validate_args is False and the inputs are invalid, correct behavior is not guaranteed.
allow_nan_stats Python bool, default True. If False, raise an exception if a statistic (e.g., mean, mode, variance) is undefined for any batch member. If True, batch members with valid parameters leading to undefined statistics will return NaN for this statistic.
experimental_use_kahan_sum Python bool. When True, we use Kahan summation to aggregate independent underlying log_prob values. For best results, Kahan summation should also be applied when computing the log-determinant of the LinearOperator representing the scale matrix. Kahan summation improves on the precision of a naive float32 sum, which can be noticeable in particular for large dimensions in float32. See the CPU caveat on tfp.math.reduce_kahan_sum.
name The name to give Ops created by the initializer.

Raises
ValueError if scale is unspecified.
TypeError if not scale.dtype.is_floating.
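
A minimal sketch of the two behaviors above: with loc=None the mean is implicitly the zero vector, and omitting scale raises ValueError. (The identity operator and the `mvn_zero` name are only for illustration.)

mvn_zero = tfd.MultivariateNormalLinearOperator(
    scale=tf.linalg.LinearOperatorIdentity(num_rows=3))
mvn_zero.mean()   # ==> [0., 0., 0.]

try:
  tfd.MultivariateNormalLinearOperator(loc=[0., 0., 0.])
except ValueError:
  pass  # `scale` is required.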

Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined: e.g., if a distribution's pdf does not achieve a maximum within its support, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g., the mean of Student's t with df = 1 is undefined (there is no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.

batch_shape Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

bijector Function transforming x => y.
distribution Base distribution, p(x).
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape.

May be partially defined or unknown.

experimental_shard_axis_names The list or structure of lists of active shard axis names.
loc The loc Tensor in Y = scale @ X + loc.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

scale The scale LinearOperator in Y = scale @ X + loc.
trainable_variables

validate_args Python bool indicating possibly expensive checks are enabled.
variables
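
As a quick orientation (a sketch using the 2-batch, 3-variate `mvn` from the Examples section above), the attributes can be read off directly:

mvn.batch_shape              # ==> [2]
mvn.event_shape              # ==> [3]
mvn.dtype                    # ==> float32
mvn.loc                      # the location Tensor, shape [2, 3]
mvn.scale                    # the LinearOperatorDiag passed at construction
mvn.reparameterization_type  # ==> FULLY_REPARAMETERIZED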

Methods

batch_shape_tensor

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args
name name to give to the op

Returns
batch_shape Tensor.
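
For example (a sketch, again assuming the batch `mvn` from the Examples section):

mvn.batch_shape_tensor()   # ==> [2], the dynamic counterpart of `mvn.batch_shape`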

cdf

Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

copy

Creates a deep copy of the distribution.

Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.

Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
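
For example (a sketch; any constructor argument may be overridden, and the rest are reused from self.parameters):

mvn_checked = mvn.copy(validate_args=True)   # same loc/scale, assertions enabled
mvn_shifted = mvn.copy(loc=[[0., 0, 0],
                            [1., 1, 1]])     # same scale, new location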

covariance

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).
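
For this class the result agrees with scale @ scale.T, batch member by batch member (a sketch using the diagonal-scale example above):

mvn.covariance()   # shape: [2, 3, 3]
# With a diagonal scale, each batch member is diag(scale_diag**2);
# the first is diag([1., 4., 9.]) here.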

cross_entropy

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are