Marginal distribution of a Student's T process at finitely many points.
Inherits From: Distribution
oryx.distributions.StudentTProcess(
df, kernel, index_points=None, mean_fn=None, jitter=1e-06, validate_args=False,
allow_nan_stats=False, name='StudentTProcess'
)
A Student's T process (TP) is an indexed collection of random variables, any finite collection of which are jointly Multivariate Student's T. While this definition applies to finite index sets, it is typically implicit that the index set is infinite; in applications, it is often some finite-dimensional real or complex vector space. In such cases, the TP may be thought of as a distribution over (real- or complex-valued) functions defined over the index set.
Just as Student's T distributions are fully specified by their degrees of
freedom, location and scale, a Student's T process can be completely specified
by a degrees of freedom parameter, mean function and covariance function.
Let S denote the index set and K the space in which each indexed random
variable takes its values (again, often R or C). The mean function is then a
map m: S -> K, and the covariance function, or kernel, is a positive-definite
function k: (S x S) -> R. The properties of functions drawn from a TP are
entirely dictated (up to translation) by the form of the kernel function.
This Distribution represents the marginal joint distribution over function
values at a given finite collection of points [x[1], ..., x[N]] from the
index set S. By definition, this marginal distribution is just a
multivariate Student's T distribution, whose mean is given by the vector
[ m(x[1]), ..., m(x[N]) ] and whose covariance matrix is constructed from
pairwise applications of the kernel function to the given inputs:
| k(x[1], x[1])  k(x[1], x[2])  ...  k(x[1], x[N]) |
| k(x[2], x[1])  k(x[2], x[2])  ...  k(x[2], x[N]) |
|      ...            ...                 ...      |
| k(x[N], x[1])  k(x[N], x[2])  ...  k(x[N], x[N]) |
For this to be a valid covariance matrix, it must be symmetric and positive
definite; hence the requirement that k be a positive-definite function
(which, by definition, says that the above procedure will yield PD matrices).
Note also we use a parameterization as suggested in [1], which requires df
to be greater than 2. This allows the covariance of any finite-dimensional
marginal of the TP (a multivariate Student's T distribution) to just be the
PD matrix generated by the kernel.
Mathematical Details
The probability density function (pdf) is a multivariate Student's T whose parameters are derived from the TP's properties:
pdf(x; df, index_points, mean_fn, kernel) = MultivariateStudentT(df, loc, K)
K = (df - 2) / df * (kernel.matrix(index_points, index_points) +
    jitter * eye(N))
loc = mean_fn(index_points)
where:
* df is the degrees of freedom parameter for the TP.
* index_points are points in the index set over which the TP is defined.
* mean_fn is a callable mapping the index set to the TP's mean values.
* kernel is PositiveSemidefiniteKernel-like and represents the covariance function of the TP.
* jitter is added to the diagonal to ensure positive definiteness up to machine precision (otherwise Cholesky decomposition is prone to failure).
* eye(N) is an N-by-N identity matrix.
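To make the correspondence concrete, here is a minimal sketch (assuming the JAX-substrate imports used in the Examples below, hypothetical index point values, and the default zero mean function) that assembles the same marginal by hand with tfd.MultivariateStudentTLinearOperator and compares log densities; the two values should agree up to numerical precision:
import numpy as np
from tensorflow_probability.python.internal.backend.jax.compat import v2 as tf
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
tfd = tfp.distributions
psd_kernels = tfp.math.psd_kernels

df = 3.
jitter = 1e-6
# Five 1-d index points (hypothetical values, for illustration only).
index_points = np.linspace(-1., 1., 5)[..., np.newaxis].astype(np.float32)
kernel = psd_kernels.ExponentiatedQuadratic()
tp = tfd.StudentTProcess(df, kernel, index_points, jitter=jitter)

# The marginal's scale matrix, built directly from the formulas above.
K = (df - 2.) / df * (kernel.matrix(index_points, index_points) +
                      jitter * np.eye(5, dtype=np.float32))
mvt = tfd.MultivariateStudentTLinearOperator(
    df=df,
    loc=np.zeros([5], np.float32),  # mean_fn defaults to the zero function
    scale=tf.linalg.LinearOperatorLowerTriangular(tf.linalg.cholesky(K)))

x = np.float32([0.1, -0.2, 0.3, 0., 0.5])
print(tp.log_prob(x), mvt.log_prob(x))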
Examples
Draw joint samples from a TP prior
import numpy as np
from tensorflow_probability.python.internal.backend.jax.compat import v2 as tf
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
tf.enable_v2_behavior()
tfd = tfp.distributions
psd_kernels = tfp.math.psd_kernels
num_points = 100
# Index points should be a collection (100, here) of feature vectors. In this
# example, we're using 1d vectors, so we just need to reshape the output from
# np.linspace, to give a shape of (100, 1).
index_points = np.expand_dims(np.linspace(-1., 1., num_points), -1)
# Define a kernel with default parameters.
kernel = psd_kernels.ExponentiatedQuadratic()
tp = tfd.StudentTProcess(3., kernel, index_points)
samples = tp.sample(10)
# ==> 10 independently drawn, joint samples at `index_points`
noisy_tp = tfd.StudentTProcess(
    df=3.,
    kernel=kernel,
    index_points=index_points)
noisy_samples = noisy_tp.sample(10)
# ==> 10 independently drawn, noisy joint samples at `index_points`
Optimize kernel parameters via maximum marginal likelihood.
# Suppose we have some data from a known function. Note the index points in
# general have shape `[b1, ..., bB, f1, ..., fF]` (here we assume `F == 1`),
# so we need to explicitly consume the feature dimensions (just the last one
# here).
f = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2)
observed_index_points = np.expand_dims(np.random.uniform(-1., 1., 50), -1)
# Squeeze to take the shape from [50, 1] to [50].
observed_values = f(observed_index_points)
amplitude = tfp.util.TransformedVariable(
    1., tfp.bijectors.Softplus(), dtype=np.float64, name='amplitude')
length_scale = tfp.util.TransformedVariable(
    1., tfp.bijectors.Softplus(), dtype=np.float64, name='length_scale')
# Define a kernel with trainable parameters.
kernel = psd_kernels.ExponentiatedQuadratic(
    amplitude=amplitude,
    length_scale=length_scale)
tp = tfd.StudentTProcess(3., kernel, observed_index_points)
optimizer = tf.optimizers.Adam()
@tf.function
def optimize():
  with tf.GradientTape() as tape:
    loss = -tp.log_prob(observed_values)
  grads = tape.gradient(loss, tp.trainable_variables)
  optimizer.apply_gradients(zip(grads, tp.trainable_variables))
  return loss

for i in range(1000):
  nll = optimize()
  if i % 100 == 0:
    print("Step {}: NLL = {}".format(i, nll))
print("Final NLL = {}".format(nll))
References
[1]: Amar Shah, Andrew Gordon Wilson, and Zoubin Ghahramani. Student-t Processes as Alternatives to Gaussian Processes. In Artificial Intelligence and Statistics, 2014. https://www.cs.cmu.edu/~andrewgw/tprocess.pdf
Args  

df

Positive floating-point Tensor representing the degrees of freedom.
Must be greater than 2.

kernel

PositiveSemidefiniteKernel-like instance representing the
TP's covariance function.

index_points

float Tensor representing finite (batch of) vector(s) of
points in the index set over which the TP is defined. Shape has the form
[b1, ..., bB, e, f1, ..., fF] where F is the number of feature
dimensions and must equal kernel.feature_ndims and e is the number
(size) of index points in each batch. Ultimately this distribution
corresponds to an e-dimensional multivariate Student's T. The batch
shape must be broadcastable with kernel.batch_shape and any batch dims
yielded by mean_fn .

mean_fn

Python callable that acts on index_points to produce a (batch
of) vector(s) of mean values at index_points . Takes a Tensor of
shape [b1, ..., bB, f1, ..., fF] and returns a Tensor whose shape is
broadcastable with [b1, ..., bB] . Default value: None implies
constant zero function.

jitter

float scalar Tensor added to the diagonal of the covariance
matrix to ensure positive definiteness of the covariance matrix.
Default value: 1e-6.

validate_args

Python bool , default False . When True distribution
parameters are checked for validity despite possibly degrading runtime
performance. When False invalid inputs may silently render incorrect
outputs.
Default value: False .

allow_nan_stats

Python bool . When True ,
statistics (e.g., mean, mode, variance) use the value "NaN " to
indicate the result is undefined. When False , an exception is raised
if one or more of the statistic's batch members are undefined.
Default value: False .

name

Python str name prefixed to Ops created by this class.
Default value: "StudentTProcess".

Raises  

ValueError

if mean_fn is not None and is not callable.

Attributes  

allow_nan_stats

Python bool describing behavior when a stat is undefined.
Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined. 
batch_shape

Shape of a single sample from a single event index as a TensorShape .
May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. 
df


dtype

The DType of Tensor s handled by this Distribution .

event_shape

Shape of a single sample from a single batch as a TensorShape .
May be partially defined or unknown. 
index_points


jitter


kernel


mean_fn


name

Name prepended to all ops created by this Distribution .

parameters

Dictionary of parameters used to instantiate this Distribution .

reparameterization_type

Describes how samples from the distribution are reparameterized.
Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

trainable_variables


validate_args

Python bool indicating possibly expensive checks are enabled.

variables

Methods
batch_shape_tensor
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1D Tensor.
The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args  

name

name to give to the op 
Returns  

batch_shape

Tensor .

cdf
cdf(
value, name='cdf', **kwargs
)
Cumulative distribution function.
Given random variable X, the cumulative distribution function cdf is:
cdf(x) := P[X <= x]
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

cdf

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

copy
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Args  

**override_parameters_kwargs

String/value dictionary of initialization arguments to override with new values. 
Returns  

distribution

A new instance of type(self) initialized from the union
of self.parameters and override_parameters_kwargs, i.e.,
dict(self.parameters, **override_parameters_kwargs) .
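For instance, a hedged usage sketch reusing the tp instance from the Examples above: copy can derive a heavier-tailed process that shares every other parameter.
tp5 = tp.copy(df=5.)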

covariance
covariance(
name='covariance', **kwargs
)
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-k, vector-valued distribution, it is calculated as,
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes
expectation.
Alternatively, for non-vector, multivariate distributions (e.g.,
matrix-valued, Wishart), Covariance shall return a (batch of) matrices
under some vectorization of the events, i.e.,
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices,
0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function
mapping indices of this distribution's event dimensions to indices of a
length-k' vector.
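A generic illustration (hypothetical values, not specific to StudentTProcess), using a small diagonal multivariate normal whose covariance is just the squared scale on the diagonal:
mvn = tfd.MultivariateNormalDiag(loc=[0., 0., 0.], scale_diag=[1., 2., 3.])
mvn.covariance()  # ==> 3 x 3 diagonal matrix with [1., 4., 9.] on the diagonal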
Args  

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

covariance

Floating-point Tensor with shape [B1, ..., Bn, k', k']
where the first n dimensions are batch coordinates and
k' = reduce_prod(self.event_shape) .

cross_entropy
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy.
Denote this distribution (self) by P and the other distribution by Q.
Assuming P, Q are absolutely continuous with respect to one another and
permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is
defined as:
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
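A generic illustration with scalar Normals (hypothetical parameters): cross entropy decomposes as this distribution's entropy plus the KL divergence to the other, H[P, Q] = H[P] + KL[P || Q].
p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)
p.cross_entropy(q)  # == p.entropy() + p.kl_divergence(q)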
Args  

other

tfp.distributions.Distribution instance.

name

Python str prepended to names of ops created by this function.

Returns  

cross_entropy

self.dtype Tensor with shape [B1, ..., Bn]
representing n different calculations of (Shannon) cross entropy.

entropy
entropy(
name='entropy', **kwargs
)
Shannon entropy in nats.
event_shape_tensor
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1D int32 Tensor.
Args  

name

name to give to the op 
Returns  

event_shape

Tensor .

experimental_default_event_space_bijector
experimental_default_event_space_bijector(
*args, **kwargs
)
Bijector mapping the reals (R**n) to the event space of the distribution.
Distributions with continuous support may implement
_default_event_space_bijector
which returns a subclass of
tfp.bijectors.Bijector
that maps R**n to the distribution's event space.
For example, the default bijector for the Beta distribution is
tfp.bijectors.Sigmoid(), which maps the real line to [0, 1], the support of
the Beta distribution. The default bijector for the CholeskyLKJ distribution
is tfp.bijectors.CorrelationCholesky, which maps R^(k * (k-1) // 2) to the
submanifold of k x k lower triangular matrices with ones along the diagonal.
The purpose of experimental_default_event_space_bijector is to enable
gradient descent in an unconstrained space for Variational Inference and
Hamiltonian Monte Carlo methods. Some effort has been made to choose
bijectors such that the tails of the distribution in the unconstrained space
are between Gaussian and Exponential.
For distributions with discrete event space, or for which TFP currently
lacks a suitable bijector, this function returns None.
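A minimal sketch with the Beta distribution mentioned above (hypothetical concentrations), mapping an unconstrained real into Beta's (0, 1) support:
beta = tfd.Beta(concentration1=2., concentration0=3.)
bij = beta.experimental_default_event_space_bijector()
bij.forward(0.3)  # ==> a value in (0, 1)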
Args  

*args

Passed to implementation _default_event_space_bijector .

**kwargs

Passed to implementation _default_event_space_bijector .

Returns  

event_space_bijector

Bijector instance or None .

get_marginal_distribution
get_marginal_distribution(
index_points=None
)
Compute the marginal over function values at index_points.
Args  

index_points

float Tensor representing finite (batch of) vector(s) of
points in the index set over which the TP is defined. Shape has the form
[b1, ..., bB, e, f1, ..., fF] where F is the number of feature
dimensions and must equal kernel.feature_ndims and e is the number
(size) of index points in each batch. Ultimately this distribution
corresponds to an e-dimensional multivariate Student's T. The batch shape
must be broadcastable with kernel.batch_shape and any batch dims
yielded by mean_fn .

Returns  

marginal

a StudentT or MultivariateStudentT distribution,
according to whether index_points consists of one or many index
points, respectively.
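A hypothetical usage sketch, reusing tp and its 100 index_points from the Examples above:
marginal = tp.get_marginal_distribution()
marginal.event_shape  # ==> [100]; a 100-dimensional multivariate Student's T
# A single index point instead yields a scalar Student's T marginal:
single = tp.get_marginal_distribution(index_points=np.zeros([1, 1]))
single.event_shape    # ==> []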

is_scalar_batch
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args  

name

Python str prepended to names of ops created by this function.

Returns  

is_scalar_batch

bool scalar Tensor .

is_scalar_event
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args  

name

Python str prepended to names of ops created by this function.

Returns  

is_scalar_event

bool scalar Tensor .

kl_divergence
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback-Leibler divergence.
Denote this distribution (self) by p and the other distribution by q.
Assuming p, q are absolutely continuous with respect to reference measure r,
the KL divergence is defined as:
KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes
(Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
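A generic illustration with scalar Normals (hypothetical parameters; an analytic Normal-Normal KL is registered in TFP), also showing that the divergence is not symmetric:
p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)
p.kl_divergence(q)  # KL[p || q]
q.kl_divergence(p)  # KL[q || p]; generally a different value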
Args  

other

tfp.distributions.Distribution instance.

name

Python str prepended to names of ops created by this function.

Returns  

kl_divergence

self.dtype Tensor with shape [B1, ..., Bn]
representing n different calculations of the Kullback-Leibler
divergence.

log_cdf
log_cdf(
value, name='log_cdf', **kwargs
)
Log cumulative distribution function.
Given random variable X, the cumulative distribution function cdf is:
log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a
more accurate answer than simply taking the logarithm of the cdf when
x << 1.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

logcdf

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

log_prob
log_prob(
value, name='log_prob', **kwargs
)
Log probability density/mass function.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

log_prob

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

log_survival_function
log_survival_function(
value, name='log_survival_function', **kwargs
)
Log survival function.
Given random variable X, the survival function is defined:
log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log
survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

Tensor of shape sample_shape(x) + self.batch_shape with values of type
self.dtype .

mean
mean(
name='mean', **kwargs
)
Mean.
mode
mode(
name='mode', **kwargs
)
Mode.
param_shapes
@classmethod
param_shapes( sample_shape, name='DistributionParamShapes' )
Shapes of parameters given the desired shape of a call to sample().
This is a class method that describes what key/value arguments are required
to instantiate the given Distribution so that a particular shape is returned
for that instance's call to sample().
Subclasses should override class method _param_shapes.
Args  

sample_shape

Tensor or python list/tuple. Desired shape of a call to
sample() .

name

name to prepend ops with. 
Returns  

dict of parameter name to Tensor shapes.

param_static_shapes
@classmethod
param_static_shapes( sample_shape )
param_shapes with static (i.e. TensorShape) shapes.
This is a class method that describes what key/value arguments are required
to instantiate the given Distribution so that a particular shape is returned
for that instance's call to sample(). Assumes that the sample's shape is
known statically.
Subclasses should override class method _param_shapes to return
constant-valued tensors when constant values are fed.
Args  

sample_shape

TensorShape or python list/tuple. Desired shape of a call
to sample() .

Returns  

dict of parameter name to TensorShape .

Raises  

ValueError

if sample_shape is a TensorShape and is not fully defined.

parameter_properties
@classmethod
parameter_properties( dtype=tf.float32, num_classes=None )
Returns a dict mapping constructor arg names to property annotations.
This dict should include an entry for each of the distribution's
Tensor-valued constructor arguments.
Args  

dtype

Optional float dtype to assume for continuous-valued parameters.
Some constraining bijectors require advance knowledge of the dtype
because certain constants (e.g., tfb.Softplus.low ) must be
instantiated with the same dtype as the values to be transformed.

num_classes

Optional int Tensor number of classes to assume when
inferring the shape of parameters for categorical-like distributions.
Otherwise ignored.

Returns  

parameter_properties

A str -> tfp.python.internal.parameter_properties.ParameterProperties dict
mapping constructor argument names to ParameterProperties instances.

prob
prob(
value, name='prob', **kwargs
)
Probability density/mass function.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

prob

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

quantile
quantile(
value, name='quantile', **kwargs
)
Quantile function. Aka 'inverse cdf' or 'percent point function'.
Given random variable X and p in [0, 1], the quantile is:
quantile(p) := x such that P[X <= x] == p
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

quantile

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

sample
sample(
sample_shape=(), seed=None, name='sample', **kwargs
)
Generate samples of the specified shape.
Note that a call to sample() without arguments will generate a single sample.
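In this JAX substrate, sampling is stateless, so an explicit PRNG key is typically passed as seed; a hedged sketch reusing tp from the Examples above:
import jax
samples = tp.sample([2, 5], seed=jax.random.PRNGKey(0))
samples.shape  # ==> (2, 5, 100): sample_shape + batch_shape + event_shape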
Args  

sample_shape

0D or 1D int32 Tensor . Shape of the generated samples.

seed

Python integer or tfp.util.SeedStream instance, for seeding PRNG.

name

name to give to the op. 
**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

samples

a Tensor with prepended dimensions sample_shape .

stddev
stddev(
name='stddev', **kwargs
)
Standard deviation.
Standard deviation is defined as,
stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes
expectation, and stddev.shape = batch_shape + event_shape.
Args  

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

stddev

Floating-point Tensor with shape identical to
batch_shape + event_shape , i.e., the same shape as self.mean() .

survival_function
survival_function(
value, name='survival_function', **kwargs
)
Survival function.
Given random variable X, the survival function is defined:
survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

Tensor of shape sample_shape(x) + self.batch_shape with values of type
self.dtype .

variance
variance(
name='variance', **kwargs
)
Variance.
Variance is defined as,
Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes
expectation, and Var.shape = batch_shape + event_shape.
Args  

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

variance

Floating-point Tensor with shape identical to
batch_shape + event_shape , i.e., the same shape as self.mean() .

__getitem__
__getitem__(
slices
)
Slices the batch axes of this distribution, returning a new instance.
b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape # => [3, 1, 5, 2, 4]
x = tf.random.stateless_normal([5, 3, 2, 2], seed=[1, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.stateless_normal([4, 1, 3, 1], seed=[3, 4])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape # => [4, 5, 3]
mvn.event_shape # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape # => [4, 2, 3, 1]
mvn2.event_shape # => [2]
Args  

slices

slices from the [] operator 
Returns  

dist

A new tfd.Distribution instance with sliced parameters.

__iter__
__iter__()