
tfp.distributions.GaussianProcessRegressionModel

View source on GitHub

Class GaussianProcessRegressionModel

Posterior predictive distribution in a conjugate GP regression model.

Inherits From: GaussianProcess

This class represents the distribution over function values at a set of points in some index set, conditioned on noisy observations at some other set of points. More specifically, we assume a Gaussian process prior, f ~ GP(m, k) with IID normal noise on observations of function values. In this model posterior inference can be done analytically. This Distribution is parameterized by

  • the mean and covariance functions of the GP prior,
  • the set of (noisy) observations and index points to which they correspond,
  • the set of index points at which the resulting posterior predictive distribution over function values is defined,
  • the observation noise variance,
  • jitter, to compensate for numerical instability of Cholesky decomposition,

in addition to the usual params like validate_args and allow_nan_stats.

Mathematical Details

Gaussian process regression (GPR) assumes a Gaussian process (GP) prior and a normal likelihood as a generative model for data. Given GP mean function m, covariance kernel k, and observation noise variance v, we have

  f ~ GP(m, k)

                   iid
  (y[i] | f, x[i])  ~  Normal(f(x[i]), v),   i = 1, ... , N

where y[i] are the noisy observations of function values at points x[i].

In practice, f is an infinite object (e.g., a function over R^n) which can't be realized on a finite machine, but fortunately the marginal distribution over function values at a finite set of points is just a multivariate normal with mean and covariance given by the mean and covariance functions applied at our finite set of points (see [Rasmussen and Williams, 2006][1] for a more extensive discussion of these facts).

We spell out the generative model in detail below, but first, a digression on notation. In what follows we drop the indices on vectorial objects such as x[i], it being implied that we are generally considering finite collections of index points and corresponding function values and noisy observations thereof. Thus x should be considered to stand for a collection of index points (indeed, themselves often vectorial). Furthermore:

  • f(x) refers to the collection of function values at the index points in the collection x,
  • m(t) refers to the collection of values of the mean function at the index points in the collection t, and
  • k(x, t) refers to the matrix whose entries are values of the kernel function k at all pairs of index points from x and t.

With these conventions in place, we may write

  (f(x) | x) ~ MVN(m(x), k(x, x))

  (y | f(x), x) ~ Normal(f(x), v)

When we condition on observed data y at the points x, we can derive the posterior distribution over function values f(x) at those points. We can then compute the posterior predictive distribution over function values f(t) at a new set of points t, conditional on those observed data.

  (f(t) | t, x, y) ~ MVN(loc, cov)

  where

  loc = m(t) + k(t, x) @ inv(k(x, x) + v * I) @ (y - m(x))
  cov = k(t, t) - k(t, x) @ inv(k(x, x) + v * I) @ k(x, t)

where I is the identity matrix of appropriate dimension. Finally, the distribution over noisy observations at the new set of points t is obtained by adding IID noise from Normal(0., observation_noise_variance).
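
For concreteness, the following is a minimal NumPy sketch of these formulas, assuming a zero mean function and a squared-exponential kernel (both chosen purely for illustration, not defaults of this class):

import numpy as np

def rbf(a, b, length_scale=1.):
  # Squared-exponential kernel matrix between 1-D point collections a and b.
  return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length_scale**2)

x = np.linspace(-1., 1., 5)    # observation index points
y = np.sin(3. * x)             # observations (noise omitted for brevity)
t = np.linspace(-1., 1., 20)   # new index points
v = 0.1                        # observation noise variance

k_inv = np.linalg.inv(rbf(x, x) + v * np.eye(len(x)))
loc = rbf(t, x) @ k_inv @ y                        # m(t) = m(x) = 0 here
cov = rbf(t, t) - rbf(t, x) @ k_inv @ rbf(t, x).T  # k(x, t) = k(t, x).T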

Examples

Draw joint samples from the posterior predictive distribution in a GP regression model

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
psd_kernels = tfp.positive_semidefinite_kernels

# Generate noisy observations from a known function at some random points.
observation_noise_variance = .5
f = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2)
observation_index_points = np.random.uniform(-1., 1., 50)[..., np.newaxis]
observations = (f(observation_index_points) +
                np.random.normal(0., np.sqrt(observation_noise_variance), 50))

index_points = np.linspace(-1., 1., 100)[..., np.newaxis]

kernel = psd_kernels.MaternFiveHalves()

gprm = tfd.GaussianProcessRegressionModel(
    kernel=kernel,
    index_points=index_points,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=observation_noise_variance)

samples = gprm.sample(10)
# ==> 10 independently drawn, joint samples at `index_points`.

Above, we have used the kernel with default parameters, which are unlikely to be good. Instead, we can train the kernel hyperparameters on the data, as in the next example.

Optimize model parameters via maximum marginal likelihood

Here we learn the kernel parameters as well as the observation noise variance using gradient descent on the negative log marginal likelihood.

# Suppose we have some data from a known function. Note the index points in
# general have shape `[b1, ..., bB, f1, ..., fF]` (here we assume `F == 1`),
# so we need to explicitly consume the feature dimensions (just the last one
# here).
f = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2)

observation_index_points = np.random.uniform(-1., 1., 50)[..., np.newaxis]
observations = f(observation_index_points) + np.random.normal(0., .05, 50)

# Define a kernel with trainable parameters. Note we transform the trainable
# variables to apply a positivity constraint.
amplitude = tf.exp(tf.Variable(np.float64(0), name='amplitude'))
length_scale = tf.exp(tf.Variable(np.float64(0), name='length_scale'))
kernel = psd_kernels.ExponentiatedQuadratic(amplitude, length_scale)

observation_noise_variance = tf.exp(
    tf.Variable(np.float64(-5), name='observation_noise_variance'))

# We'll use an unconditioned GP to train the kernel parameters.
gp = tfd.GaussianProcess(
    kernel=kernel,
    index_points=observation_index_points,
    observation_noise_variance=observation_noise_variance)
neg_log_likelihood = -gp.log_prob(observations)

optimizer = tf.train.AdamOptimizer(learning_rate=.05, beta1=.5, beta2=.99)
optimize = optimizer.minimize(neg_log_likelihood)

# We can construct the posterior at a new set of `index_points` using the same
# kernel (with the same parameters, which we'll optimize below).
index_points = np.linspace(-1., 1., 100)[..., np.newaxis]
gprm = tfd.GaussianProcessRegressionModel(
    kernel=kernel,
    index_points=index_points,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=observation_noise_variance)

samples = gprm.sample(10)
# ==> 10 independently drawn, joint samples at `index_points`.

# Now execute the above ops in a Session, first training the model
# parameters, then drawing and plotting posterior samples.
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())

  for i in range(1000):
    _, neg_log_likelihood_ = sess.run([optimize, neg_log_likelihood])
    if i % 100 == 0:
      print("Step {}: NLL = {}".format(i, neg_log_likelihood_))

  print("Final NLL = {}".format(neg_log_likelihood_))
  samples_ = sess.run(samples)

  plt.scatter(np.squeeze(observation_index_points), observations)
  plt.plot(np.stack([index_points[:, 0]]*10).T, samples_.T, c='r', alpha=.2)

Marginalization of model hyperparameters

Here we use TensorFlow Probability's MCMC functionality to perform marginalization of the model hyperparameters: kernel params as well as observation noise variance.

f = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2)
observation_index_points = np.random.uniform(-1., 1., 25)[..., np.newaxis]
observations = np.random.normal(f(observation_index_points), .05)

def joint_log_prob(
    index_points, observations, amplitude, length_scale, noise_variance):

  # Hyperparameter Distributions.
  rv_amplitude = tfd.LogNormal(np.float64(0.), np.float64(1))
  rv_length_scale = tfd.LogNormal(np.float64(0.), np.float64(1))
  rv_noise_variance = tfd.LogNormal(np.float64(0.), np.float64(1))

  gp = tfd.GaussianProcess(
      kernel=psd_kernels.ExponentiatedQuadratic(amplitude, length_scale),
      index_points=index_points,
      observation_noise_variance=noise_variance)

  return (
      rv_amplitude.log_prob(amplitude) +
      rv_length_scale.log_prob(length_scale) +
      rv_noise_variance.log_prob(noise_variance) +
      gp.log_prob(observations)
  )

initial_chain_states = [
    1e-1 * tf.ones([], dtype=np.float64, name='init_amplitude'),
    1e-1 * tf.ones([], dtype=np.float64, name='init_length_scale'),
    1e-1 * tf.ones([], dtype=np.float64, name='init_obs_noise_variance')
]

# Since HMC operates over unconstrained space, we use Softplus bijectors to
# map the unconstrained chain states to the positive reals.
unconstraining_bijectors = [
    tfp.bijectors.Softplus(),
    tfp.bijectors.Softplus(),
    tfp.bijectors.Softplus(),
]

def unnormalized_log_posterior(amplitude, length_scale, noise_variance):
  return joint_log_prob(
      observation_index_points, observations, amplitude, length_scale,
      noise_variance)

num_results = 200
[
    amplitudes,
    length_scales,
    observation_noise_variances
], kernel_results = tfp.mcmc.sample_chain(
    num_results=num_results,
    num_burnin_steps=500,
    num_steps_between_results=3,
    current_state=initial_chain_states,
    kernel=tfp.mcmc.TransformedTransitionKernel(
        inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
            target_log_prob_fn=unnormalized_log_posterior,
            step_size=[np.float64(.15)],
            num_leapfrog_steps=3),
        bijector=unconstraining_bijectors))

# Now we can sample from the posterior predictive distribution at a new set
# of index points.
index_points = np.linspace(-1., 1., 200)[..., np.newaxis]
gprm = tfd.GaussianProcessRegressionModel(
    # Batch of `num_results` kernels parameterized by the MCMC samples.
    kernel=psd_kernels.ExponentiatedQuadratic(amplitudes, length_scales),
    index_points=index_points,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=observation_noise_variances)
samples = gprm.sample()

with tf.Session() as sess:
  kernel_results_, samples_ = sess.run([kernel_results, samples])

  print("Acceptance rate: {}".format(
      np.mean(kernel_results_.inner_results.is_accepted)))

  # Plot posterior samples and their mean, target function, and observations.
  plt.plot(np.stack([index_points[:, 0]]*num_results).T,
           samples_.T,
           c='r',
           alpha=.01)
  plt.plot(index_points[:, 0], np.mean(samples_, axis=0), c='k')
  plt.plot(index_points[:, 0], f(index_points))
  plt.scatter(observation_index_points[:, 0], observations)

References

[1]: Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.

__init__

__init__(
    kernel,
    index_points=None,
    observation_index_points=None,
    observations=None,
    observation_noise_variance=0.0,
    predictive_noise_variance=None,
    mean_fn=None,
    jitter=1e-06,
    validate_args=False,
    allow_nan_stats=False,
    name='GaussianProcessRegressionModel'
)

Construct a GaussianProcessRegressionModel instance.

Args:

  • kernel: PositiveSemidefiniteKernel-like instance representing the GP's covariance function.
  • index_points: float Tensor representing finite collection, or batch of collections, of points in the index set over which the GP is defined. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims and e is the number (size) of index points in each batch. Ultimately this distribution corresponds to an e-dimensional multivariate normal. The batch shape must be broadcastable with kernel.batch_shape and any batch dims yielded by mean_fn.
  • observation_index_points: float Tensor representing finite collection, or batch of collections, of points in the index set for which some data has been observed. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims, and e is the number (size) of index points in each batch. [b1, ..., bB, e] must be broadcastable with the shape of observations, and [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc). The default value is None, which corresponds to the empty set of observations, and simply results in the prior predictive model (a GP with noise of variance predictive_noise_variance).
  • observations: float Tensor representing collection, or batch of collections, of observations corresponding to observation_index_points. Shape has the form [b1, ..., bB, e], which must be broadcastable with the batch and example shapes of observation_index_points. The batch shape [b1, ..., bB] must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc.). The default value is None, which corresponds to the empty set of observations, and simply results in the prior predictive model (a GP with noise of variance predictive_noise_variance).
  • observation_noise_variance: float Tensor representing the variance of the noise in the Normal likelihood distribution of the model. May be batched, in which case the batch shape must be broadcastable with the shapes of all other batched parameters (kernel.batch_shape, index_points, etc.). Default value: 0.
  • predictive_noise_variance: float Tensor representing the variance in the posterior predictive model. If None, we simply re-use observation_noise_variance for the posterior predictive noise. If set explicitly, however, we use this value. This allows us, for example, to omit predictive noise variance (by setting this to zero) to obtain noiseless posterior predictions of function values, conditioned on noisy observations; see the sketch following this argument list. Default value: None.
  • mean_fn: Python callable that acts on index_points to produce a collection, or batch of collections, of mean values at index_points. Takes a Tensor of shape [b1, ..., bB, f1, ..., fF] and returns a Tensor whose shape is broadcastable with [b1, ..., bB]. Default value: None implies the constant zero function.
  • jitter: float scalar Tensor added to the diagonal of the covariance matrix to ensure positive definiteness of the covariance matrix. Default value: 1e-6.
  • validate_args: Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs. Default value: False.
  • allow_nan_stats: Python bool. When True, statistics (e.g., mean, mode, variance) use the value NaN to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined. Default value: False.
  • name: Python str name prefixed to Ops created by this class. Default value: 'GaussianProcessRegressionModel'.
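
As an illustrative sketch of predictive_noise_variance and mean_fn, re-using the data and names from the examples above (the constant mean and the values here are assumptions chosen for illustration, not defaults of the class):

# Noiseless posterior predictions with a constant (non-zero) prior mean.
constant_mean_fn = lambda x: 0.5 * tf.ones(tf.shape(x)[:-1], dtype=x.dtype)

gprm_noiseless = tfd.GaussianProcessRegressionModel(
    kernel=psd_kernels.MaternFiveHalves(),
    index_points=index_points,
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=.5,  # noise assumed present in observations...
    predictive_noise_variance=0.,   # ...but omitted from the predictions
    mean_fn=constant_mean_fn)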

Raises:

  • ValueError: if either
    • only one of observations and observation_index_points is given, or
    • mean_fn is not None and not callable.

Properties

allow_nan_stats

Python bool describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g., the mean of Student's t with df = 1 is undefined (there is no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
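
A quick illustration, using tfd.StudentT purely as an example of a distribution with an undefined mean:

# With allow_nan_stats=True, an undefined statistic is reported as NaN
# rather than raising an exception.
st = tfd.StudentT(df=1., loc=0., scale=1., allow_nan_stats=True)
st.mean()  # ==> nan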

Returns:

  • allow_nan_stats: Python bool.

batch_shape

Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Returns:

  • batch_shape: TensorShape, possibly unknown.

dtype

The DType of Tensors handled by this Distribution.

event_shape

Shape of a single sample from a single batch as a TensorShape.

May be partially defined or unknown.

Returns:

  • event_shape: TensorShape, possibly unknown.

index_points

jitter

kernel

mean_fn

name

Name prepended to all ops created by this Distribution.

name_scope

Returns a tf.name_scope instance for this class.

observation_index_points

observation_noise_variance

observations

parameters

Dictionary of parameters used to instantiate this Distribution.

predictive_noise_variance

reparameterization_type

Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

Returns:

An instance of ReparameterizationType.
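
For example, one might check (assuming gprm as constructed in the examples above; GP-based distributions sample via transformations of normals, so this is expected to hold):

gprm.reparameterization_type == tfd.FULLY_REPARAMETERIZED  # ==> True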

submodules

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
assert list(a.submodules) == [b, c]
assert list(b.submodules) == [c]
assert list(c.submodules) == []

Returns:

A sequence of all submodules.

trainable_variables

Sequence of variables owned by this module and its submodules.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

validate_args

Python bool indicating possibly expensive checks are enabled.

variables

Sequence of variables owned by this module and its submodules.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

Methods

__getitem__

View source

__getitem__(slices)

Slices the batch axes of this distribution, returning a new instance.

b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.normal([5, 3, 2, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.normal([4, 1, 3, 1])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]

Args:

  • slices: slices from the [] operator

Returns:

  • dist: A new tfd.Distribution instance with sliced parameters.

__iter__

View source

__iter__()

batch_shape_tensor

View source

batch_shape_tensor(name='batch_shape_tensor')

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args:

  • name: name to give to the op

Returns:

  • batch_shape: Tensor.

cdf

View source

cdf(
    value,
    name='cdf',
    **kwargs
)

Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

copy

View source

copy(**override_parameters_kwargs)

Creates a deep copy of the distribution.

Args:

  • **override_parameters_kwargs: String/value dictionary of initialization arguments to override with new values.

Returns:

  • distribution: A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
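
For example, a sketch of re-using a fitted model at a different set of index points (gprm as constructed in the examples above; all other parameters carry over unchanged):

# Same kernel, observations, and noise variance; only the prediction
# locations change.
new_index_points = np.linspace(-2., 2., 50)[..., np.newaxis]
gprm_wider = gprm.copy(index_points=new_index_points)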

covariance

View source

covariance(
    name='covariance',
    **kwargs
)

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • covariance: Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).
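
For instance, for the gprm above, defined over 100 index points (a sketch; the shape assumes the unbatched construction from the first example):

gprm.covariance()  # ==> Tensor of shape [100, 100]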

cross_entropy

View source

cross_entropy(
    other,
    name='cross_entropy'
)

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.

Other types with built-in registrations: MultivariateNormalDiag, MultivariateNormalDiagPlusLowRank, MultivariateNormalFullCovariance, MultivariateNormalLinearOperator, MultivariateNormalTriL, Normal, VariationalGaussianProcess.

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • cross_entropy: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.

entropy

View source

entropy(
    name='entropy',
    **kwargs
)

Shannon entropy in nats.

event_shape_tensor

View source

event_shape_tensor(name='event_shape_tensor')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:

  • name: name to give to the op

Returns:

  • event_shape: Tensor.

get_marginal_distribution

View source

get_marginal_distribution(index_points=None)

Compute the marginal of this GP over function values at index_points.

Args:

  • index_points: float Tensor representing finite (batch of) vector(s) of points in the index set over which the GP is defined. Shape has the form [b1, ..., bB, e, f1, ..., fF] where F is the number of feature dimensions and must equal kernel.feature_ndims and e is the number (size) of index points in each batch. Ultimately this distribution corresponds to an e-dimensional multivariate normal. The batch shape must be broadcastable with kernel.batch_shape and any batch dims yielded by mean_fn.

Returns:

  • marginal: a Normal or MultivariateNormalLinearOperator distribution, according to whether index_points consists of one or many index points, respectively.
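
A brief sketch, assuming the gprm and the 100-point index_points from the examples above:

marginal = gprm.get_marginal_distribution(index_points)
marginal.event_shape  # ==> [100], one dimension per index point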

is_scalar_batch

View source

is_scalar_batch(name='is_scalar_batch')

Indicates that batch_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_batch: bool scalar Tensor.

is_scalar_event

View source

is_scalar_event(name='is_scalar_event')

Indicates that event_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_event: bool scalar Tensor.

kl_divergence

View source

kl_divergence(
    other,
    name='kl_divergence'
)

Computes the Kullback--Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.

Other types with built-in registrations: MultivariateNormalDiag, MultivariateNormalDiagPlusLowRank, MultivariateNormalFullCovariance, MultivariateNormalLinearOperator, MultivariateNormalTriL, Normal, VariationalGaussianProcess.

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • kl_divergence: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.

log_cdf

View source

log_cdf(
    value,
    name='log_cdf',
    **kwargs
)

Log cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_prob

View source

log_prob(
    value,
    name='log_prob',
    **kwargs
)

Log probability density/mass function.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_survival_function

View source

log_survival_function(
    value,
    name='log_survival_function',
    **kwargs
)

Log survival function.

Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

mean

View source

mean(
    name='mean',
    **kwargs
)

Mean.

mode

View source

mode(
    name='mode',
    **kwargs
)

Mode.

param_shapes

View source

param_shapes(
    cls,
    sample_shape,
    name='DistributionParamShapes'
)

Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args:

  • sample_shape: Tensor or python list/tuple. Desired shape of a call to sample().
  • name: name to prepend ops with.

Returns:

dict of parameter name to Tensor shapes.

param_static_shapes

View source

param_static_shapes(
    cls,
    sample_shape
)

param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args:

  • sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample().

Returns:

dict of parameter name to TensorShape.

Raises:

  • ValueError: if sample_shape is a TensorShape and is not fully defined.

prob

View source

prob(
    value,
    name='prob',
    **kwargs
)

Probability density/mass function.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

quantile

View source

quantile(
    value,
    name='quantile',
    **kwargs
)

Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • quantile: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

sample

View source

sample(
    sample_shape=(),
    seed=None,
    name='sample',
    **kwargs
)

Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.

Args:

  • sample_shape: 0D or 1D int32 Tensor. Shape of the generated samples.
  • seed: Python integer seed for the RNG.
  • name: name to give to the op.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • samples: a Tensor with prepended dimensions sample_shape.

stddev

View source

stddev(
    name='stddev',
    **kwargs
)

Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • stddev: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

survival_function

View source

survival_function(
    value,
    name='survival_function',
    **kwargs
)

Survival function.

Given random variable X, the survival function is defined:

survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

variance

View source

variance(
    name='variance',
    **kwargs
)

Variance.

Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • variance: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

with_name_scope

with_name_scope(
    cls,
    method
)

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 64]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

mod = MyModule()
mod(tf.ones([8, 32]))
# ==> <tf.Tensor: ...>
mod.w
# ==> <tf.Variable ...'my_module/w:0'>

Args:

  • method: The method to wrap.

Returns:

The original method wrapped such that it enters the module's name scope.