
tfp.sts.AdditiveStateSpaceModel

View source on GitHub

Class AdditiveStateSpaceModel

A state space model representing a sum of component state space models.

Inherits From: LinearGaussianStateSpaceModel

A state space model (SSM) posits a set of latent (unobserved) variables that evolve over time with dynamics specified by a probabilistic transition model p(z[t+1] | z[t]). At each timestep, we observe a value sampled from an observation model conditioned on the current state, p(x[t] | z[t]). The special case where both the transition and observation models are Gaussians with mean specified as a linear function of the inputs is known as a linear Gaussian state space model and supports tractable exact probabilistic calculations; see tfp.distributions.LinearGaussianStateSpaceModel for details.
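
For concreteness, a minimal sketch of such a model using the tfd.LinearGaussianStateSpaceModel API (a scalar random walk; all parameter values here are illustrative, not defaults):

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

# A scalar random walk: the latent state persists with Gaussian drift,
# and is observed through an identity observation model plus noise.
random_walk_ssm = tfd.LinearGaussianStateSpaceModel(
    num_timesteps=10,
    transition_matrix=tf.linalg.LinearOperatorIdentity(num_rows=1),
    transition_noise=tfd.MultivariateNormalDiag(scale_diag=[0.5]),
    observation_matrix=tf.linalg.LinearOperatorIdentity(num_rows=1),
    observation_noise=tfd.MultivariateNormalDiag(scale_diag=[0.1]),
    initial_state_prior=tfd.MultivariateNormalDiag(scale_diag=[1.]))
x = random_walk_ssm.sample()  # shape [10, 1]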

The AdditiveStateSpaceModel represents a sum of component state space models. Each of the N components describes a random process generating a distribution on observed time series x1[t], x2[t], ..., xN[t]. The additive model represents the sum of these processes, y[t] = x1[t] + x2[t] + ... + xN[t] + eps[t], where eps[t] ~ N(0, observation_noise_scale) is an observation noise term.

Mathematical Details

The additive model concatenates the latent states of its component models. The generative process runs each component's dynamics in its own subspace of latent space, and then observes the sum of the observation models from the components.

Formally, the transition model is linear Gaussian:

p(z[t+1] | z[t]) ~ Normal(loc = transition_matrix.matmul(z[t]),
                          cov = transition_cov)

where each z[t] is a latent state vector concatenating the component state vectors, z[t] = [z1[t], z2[t], ..., zN[t]], so it has size latent_size = sum([c.latent_size for c in components]).

The transition matrix is the block-diagonal composition of transition matrices from the component processes:

transition_matrix =
  [[ c1.transition_matrix,  0.,                   ..., 0.                   ],
   [ 0.,                    c2.transition_matrix, ..., 0.                   ],
   [ ...                    ...                   ...                       ],
   [ 0.,                    0.,                   ..., cN.transition_matrix ]]

and the noise covariance is similarly the block-diagonal composition of component noise covariances:

transition_cov =
  [[ c1.transition_cov, 0.,                ..., 0.                ],
   [ 0.,                c2.transition_cov, ..., 0.                ],
   [ ...                ...                     ...               ],
   [ 0.,                0.,                ..., cN.transition_cov ]]
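
A sketch of this block-diagonal composition using TF's linear-operator tooling (the component matrices below are illustrative: a 2 x 2 local-linear-trend transition and a 3 x 3 identity standing in for a seasonal component):

import tensorflow as tf

c1_transition = tf.constant([[1., 1.],
                             [0., 1.]])  # level/slope dynamics
c2_transition = tf.eye(3)                # placeholder seasonal dynamics

# Block-diagonal composition over the concatenated latent space.
transition_operator = tf.linalg.LinearOperatorBlockDiag([
    tf.linalg.LinearOperatorFullMatrix(c1_transition),
    tf.linalg.LinearOperatorFullMatrix(c2_transition)])
print(transition_operator.to_dense().shape)  # => (5, 5)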

The observation model is also linear Gaussian,

p(y[t] | z[t]) ~ Normal(loc = observation_matrix.matmul(z[t]),
                        stddev = observation_noise_scale)

This implementation assumes scalar observations, so observation_matrix has shape [1, latent_size]. The additive observation matrix simply concatenates the observation matrices from each component:

observation_matrix =
  concat([c1.obs_matrix, c2.obs_matrix, ..., cN.obs_matrix], axis=-1)

The effect is that each component observation matrix acts on the dimensions of latent state corresponding to that component, and the overall expected observation is the sum of the expected observations from each component.
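
For instance, with illustrative row vectors (a trend component observing its level, and a three-dimensional component observing its first state dimension):

c1_obs = tf.constant([[1., 0.]])      # shape [1, 2]
c2_obs = tf.constant([[1., 0., 0.]])  # shape [1, 3]
observation_matrix = tf.concat([c1_obs, c2_obs], axis=-1)  # shape [1, 5]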

If observation_noise_scale is not explicitly specified, it is also computed by summing the noise variances of the component processes:

observation_noise_scale = sqrt(sum([
  c.observation_noise_scale**2 for c in components]))
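
For example, with illustrative component scales of 0.3 and 0.4:

import numpy as np
observation_noise_scale = np.sqrt(0.3**2 + 0.4**2)  # => 0.5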

Examples

To construct an additive state space model combining a local linear trend with a day-of-week seasonality component (note that the StructuralTimeSeries classes, e.g., Sum, provide a higher-level interface for this construction, which most users will likely prefer):

  num_timesteps = 30
  local_ssm = tfp.sts.LocalLinearTrendStateSpaceModel(
      num_timesteps=num_timesteps,
      level_scale=0.5,
      slope_scale=0.1,
      initial_state_prior=tfd.MultivariateNormalDiag(
          loc=[0., 0.], scale_diag=[1., 1.]))
  day_of_week_ssm = tfp.sts.SeasonalStateSpaceModel(
      num_timesteps=num_timesteps,
      num_seasons=7,
      initial_state_prior=tfd.MultivariateNormalDiag(
          loc=tf.zeros([7]), scale_diag=tf.ones([7])))
  additive_ssm = tfp.sts.AdditiveStateSpaceModel(
      component_ssms=[local_ssm, day_of_week_ssm],
      observation_noise_scale=0.1)

  y = additive_ssm.sample()
  print(y.shape)
  # => [30, 1]
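
The resulting distribution supports the full LinearGaussianStateSpaceModel API; for example (a sketch reusing additive_ssm from above):

  x = additive_ssm.sample()      # a series of shape [30, 1]
  lp = additive_ssm.log_prob(x)  # scalar log density under the model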

__init__

__init__(
    component_ssms,
    constant_offset=0.0,
    observation_noise_scale=None,
    initial_state_prior=None,
    initial_step=0,
    validate_args=False,
    allow_nan_stats=True,
    name=None
)

Build a state space model representing the sum of component models.

Args:

  • component_ssms: Python list containing one or more tfd.LinearGaussianStateSpaceModel instances. The components will in general implement different time-series models, with possibly different latent_size, but they must have the same dtype, event shape (num_timesteps and observation_size), and their batch shapes must broadcast to a compatible batch shape.
  • constant_offset: scalar float Tensor, or batch of scalars, specifying a constant value added to the sum of outputs from the component models. This allows the components to model the shifted series observed_time_series - constant_offset. Default value: 0.
  • observation_noise_scale: Optional scalar float Tensor indicating the standard deviation of the observation noise. May contain additional batch dimensions, which must broadcast with the batch shape of elements in component_ssms. If observation_noise_scale is specified for the AdditiveStateSpaceModel, the observation noise scales of component models are ignored. If None, the observation noise scale is derived by summing the noise variances of the component models, i.e., observation_noise_scale = sqrt(sum( [ssm.observation_noise_scale**2 for ssm in component_ssms])).
  • initial_state_prior: Optional instance of tfd.MultivariateNormal representing a prior distribution on the latent state at time initial_step. If None, defaults to the independent priors from component models, i.e., [component.initial_state_prior for component in component_ssms]. Default value: None.
  • initial_step: Optional scalar int Tensor specifying the starting timestep. Default value: 0.
  • validate_args: Python bool. Whether to validate input with asserts. If validate_args is False, and the inputs are invalid, correct behavior is not guaranteed. Default value: False.
  • allow_nan_stats: Python bool. If False, raise an exception if a statistic (e.g. mean/mode/etc...) is undefined for any batch member. If True, batch members with valid parameters leading to undefined statistics will return NaN for this statistic. Default value: True.
  • name: Python str name prefixed to ops created by this class. Default value: "AdditiveStateSpaceModel".

Raises:

  • ValueError: if components have different num_timesteps.

Properties

allow_nan_stats

Python bool describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.

Returns:

  • allow_nan_stats: Python bool.

batch_shape

Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Returns:

  • batch_shape: TensorShape, possibly unknown.

dtype

The DType of Tensors handled by this Distribution.

event_shape

Shape of a single sample from a single batch as a TensorShape.

May be partially defined or unknown.

Returns:

  • event_shape: TensorShape, possibly unknown.

latent_size

name

Name prepended to all ops created by this Distribution.

name_scope

Returns a tf.name_scope instance for this class.

observation_size

parameters

Dictionary of parameters used to instantiate this Distribution.

reparameterization_type

Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

Returns:

An instance of ReparameterizationType.

submodules

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
assert list(a.submodules) == [b, c]
assert list(b.submodules) == [c]
assert list(c.submodules) == []

Returns:

A sequence of all submodules.

trainable_variables

Sequence of variables owned by this module and its submodules.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

validate_args

Python bool indicating possibly expensive checks are enabled.

variables

Sequence of variables owned by this module and its submodules.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

Methods

__getitem__

View source

__getitem__(slices)

Slices the batch axes of this distribution, returning a new instance.

b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.normal([5, 3, 2, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.normal([4, 1, 3, 1])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]

Args:

  • slices: slices from the [] operator

Returns:

  • dist: A new tfd.Distribution instance with sliced parameters.

__iter__

View source

__iter__()

backward_smoothing_pass

View source

backward_smoothing_pass(
    filtered_means,
    filtered_covs,
    predicted_means,
    predicted_covs
)

Run the backward pass in Kalman smoother.

The backward pass implements the Rauch-Tung-Striebel smoother, as discussed in Section 18.3.2 of Kevin P. Murphy, 2012, Machine Learning: A Probabilistic Perspective, The MIT Press. The inputs are the values returned by the forward_filter method.

Args:

  • filtered_means: Means of the per-timestep filtered marginal distributions p(z[t] | x[:t]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • filtered_covs: Covariances of the per-timestep filtered marginal distributions p(z[t] | x[:t]), as a Tensor of shape batch_shape + [num_timesteps, latent_size, latent_size].
  • predicted_means: Means of the per-timestep predictive distributions over latent states, p(z[t+1] | x[:t]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • predicted_covs: Covariances of the per-timestep predictive distributions over latent states, p(z[t+1] | x[:t]), as a Tensor of shape batch_shape + [num_timesteps, latent_size, latent_size].

Returns:

  • posterior_means: Means of the smoothed marginal distributions p(z[t] | x[1:T]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size], which is of the same shape as filtered_means.
  • posterior_covs: Covariances of the smoothed marginal distributions p(z[t] | x[1:T]), as a Tensor of shape batch_shape + [num_timesteps, latent_size, latent_size], which is of the same shape as filtered_covs.
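
A hedged usage sketch, where model is an instance of this class and x is an observed series matching its event shape:

(_, filtered_means, filtered_covs,
 predicted_means, predicted_covs, _, _) = model.forward_filter(x)
posterior_means, posterior_covs = model.backward_smoothing_pass(
    filtered_means, filtered_covs, predicted_means, predicted_covs)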

batch_shape_tensor

View source

batch_shape_tensor(name='batch_shape_tensor')

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args:

  • name: name to give to the op

Returns:

  • batch_shape: Tensor.

cdf

View source

cdf(
    value,
    name='cdf',
    **kwargs
)

Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

copy

View source

copy(**override_parameters_kwargs)

Creates a deep copy of the distribution.

Args:

  • **override_parameters_kwargs: String/value dictionary of initialization arguments to override with new values.

Returns:

  • distribution: A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).

covariance

View source

covariance(
    name='covariance',
    **kwargs
)

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • covariance: Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).

cross_entropy

View source

cross_entropy(
    other,
    name='cross_entropy'
)

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • cross_entropy: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.

entropy

View source

entropy(
    name='entropy',
    **kwargs
)

Shannon entropy in nats.

event_shape_tensor

View source

event_shape_tensor(name='event_shape_tensor')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:

  • name: name to give to the op

Returns:

  • event_shape: Tensor.

forward_filter

View source

forward_filter(
    x,
    mask=None
)

Run a Kalman filter over a provided sequence of outputs.

Note that the returned values filtered_means, predicted_means, and observation_means depend on the observed time series x, while the corresponding covariances are independent of the observed series; i.e., they depend only on the model itself. This means that the mean values have shape concat([sample_shape(x), batch_shape, [num_timesteps, {latent/observation}_size]]), while the covariances have shape concat([batch_shape, [num_timesteps, {latent/observation}_size, {latent/observation}_size]]), which does not depend on the sample shape.

Args:

  • x: a float-type Tensor with rightmost dimensions [num_timesteps, observation_size] matching self.event_shape. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions are interpreted as a sample shape.
  • mask: optional bool-type Tensor with rightmost dimension [num_timesteps]; True values specify that the value of x at that timestep is masked, i.e., not conditioned on. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions must match or be broadcastable to the sample shape of x. Default value: None.

Returns:

  • log_likelihoods: Per-timestep log marginal likelihoods log p(x[t] | x[:t-1]) evaluated at the input x, as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps].
  • filtered_means: Means of the per-timestep filtered marginal distributions p(z[t] | x[:t]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • filtered_covs: Covariances of the per-timestep filtered marginal distributions p(z[t] | x[:t]), as a Tensor of shape sample_shape(mask) + batch_shape + [num_timesteps, latent_size, latent_size]. Note that the covariances depend only on the model and the mask, not on the data, so this may have fewer dimensions than filtered_means.
  • predicted_means: Means of the per-timestep predictive distributions over latent states, p(z[t+1] | x[:t]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • predicted_covs: Covariances of the per-timestep predictive distributions over latent states, p(z[t+1] | x[:t]), as a Tensor of shape sample_shape(mask) + batch_shape + [num_timesteps, latent_size, latent_size]. Note that the covariances depend only on the model and the mask, not on the data, so this may have fewer dimensions than predicted_means.
  • observation_means: Means of the per-timestep predictive distributions over observations, p(x[t] | x[:t-1]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, observation_size].
  • observation_covs: Covariances of the per-timestep predictive distributions over observations, p(x[t] | x[:t-1]), as a Tensor of shape sample_shape(mask) + batch_shape + [num_timesteps, observation_size, observation_size]. Note that the covariances depend only on the model and the mask, not on the data, so this may have fewer dimensions than observation_means.
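
For example, summing the per-timestep log likelihoods recovers the total log density (a sketch; model is an instance of this class and x a series matching its event shape):

(log_likelihoods, filtered_means, filtered_covs,
 predicted_means, predicted_covs,
 observation_means, observation_covs) = model.forward_filter(x)
total_log_prob = tf.reduce_sum(log_likelihoods, axis=-1)  # == model.log_prob(x)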

is_scalar_batch

View source

is_scalar_batch(name='is_scalar_batch')

Indicates that batch_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_batch: bool scalar Tensor.

is_scalar_event

View source

is_scalar_event(name='is_scalar_event')

Indicates that event_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_event: bool scalar Tensor.

kl_divergence

View source

kl_divergence(
    other,
    name='kl_divergence'
)

Computes the Kullback-Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • kl_divergence: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.

latents_to_observations

View source

latents_to_observations(
    latent_means,
    latent_covs
)

Push latent means and covariances forward through the observation model.

Args:

  • latent_means: float Tensor of shape [..., num_timesteps, latent_size]
  • latent_covs: float Tensor of shape [..., num_timesteps, latent_size, latent_size].

Returns:

  • observation_means: float Tensor of shape [..., num_timesteps, observation_size]
  • observation_covs: float Tensor of shape [..., num_timesteps, observation_size, observation_size]

log_cdf

View source

log_cdf(
    value,
    name='log_cdf',
    **kwargs
)

Log cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_prob

View source

log_prob(
    value,
    name='log_prob',
    **kwargs
)

Log probability density/mass function.

Additional documentation from LinearGaussianStateSpaceModel:

kwargs:
  • mask: optional bool-type Tensor with rightmost dimension [num_timesteps]; True values specify that the value of x at that timestep is masked, i.e., not conditioned on. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions must match or be broadcastable to the sample shape of x. Default value: None.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_survival_function

View source

log_survival_function(
    value,
    name='log_survival_function',
    **kwargs
)

Log survival function.

Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

mean

View source

mean(
    name='mean',
    **kwargs
)

Mean.

mode

View source

mode(
    name='mode',
    **kwargs
)

Mode.

param_shapes

View source

param_shapes(
    cls,
    sample_shape,
    name='DistributionParamShapes'
)

Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args:

  • sample_shape: Tensor or python list/tuple. Desired shape of a call to sample().
  • name: name to prepend ops with.

Returns:

dict of parameter name to Tensor shapes.

param_static_shapes

View source

param_static_shapes(
    cls,
    sample_shape
)

param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args:

  • sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample().

Returns:

dict of parameter name to TensorShape.

Raises:

  • ValueError: if sample_shape is a TensorShape and is not fully defined.

posterior_marginals

View source

posterior_marginals(
    x,
    mask=None
)

Run a Kalman smoother to return posterior mean and cov.

Note that the returned smoothed_means depend on the observed time series x, while the smoothed_covs are independent of the observed series; i.e., they depend only on the model itself. This means that the means have shape concat([sample_shape(x), batch_shape, [num_timesteps, latent_size]]), while the covariances have shape concat([batch_shape, [num_timesteps, latent_size, latent_size]]), which does not depend on the sample shape.

This function only performs smoothing. If the user also wants the intermediate values computed by the forward filtering pass, they can be obtained by calling forward_filter and backward_smoothing_pass directly:

(log_likelihoods,
 filtered_means, filtered_covs,
 predicted_means, predicted_covs,
 observation_means, observation_covs) = model.forward_filter(x)
smoothed_means, smoothed_covs = model.backward_smoothing_pass(
    filtered_means, filtered_covs,
    predicted_means, predicted_covs)

where x is an observation sequence.

Args:

  • x: a float-type Tensor with rightmost dimensions [num_timesteps, observation_size] matching self.event_shape. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions are interpreted as a sample shape.
  • mask: optional bool-type Tensor with rightmost dimension [num_timesteps]; True values specify that the value of x at that timestep is masked, i.e., not conditioned on. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions must match or be broadcastable to the sample shape of x. Default value: None.

Returns:

  • smoothed_means: Means of the per-timestep smoothed distributions over latent states, p(z[t] | x[:T]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • smoothed_covs: Covariances of the per-timestep smoothed distributions over latent states, p(z[t] | x[:T]), as a Tensor of shape sample_shape(mask) + batch_shape + [num_timesteps, latent_size, latent_size]. Note that the covariances depend only on the model and the mask, not on the data, so this may have fewer dimensions than smoothed_means.
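
Equivalently, smoothing in a single call (a sketch; model and x as in forward_filter):

smoothed_means, smoothed_covs = model.posterior_marginals(x)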

prob

View source

prob(
    value,
    name='prob',
    **kwargs
)

Probability density/mass function.

Additional documentation from LinearGaussianStateSpaceModel:

kwargs:
  • mask: optional bool-type Tensor with rightmost dimension [num_timesteps]; True values specify that the value of x at that timestep is masked, i.e., not conditioned on. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions must match or be broadcastable to the sample shape of x. Default value: None.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

quantile

View source

quantile(
    value,
    name='quantile',
    **kwargs
)

Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • quantile: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

sample

View source

sample(
    sample_shape=(),
    seed=None,
    name='sample',
    **kwargs
)

Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.

Args:

  • sample_shape: 0D or 1D int32 Tensor. Shape of the generated samples.
  • seed: Python integer seed for RNG
  • name: name to give to the op.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • samples: a Tensor with prepended dimensions sample_shape.

stddev

View source

stddev(
    name='stddev',
    **kwargs
)

Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • stddev: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

survival_function

View source

survival_function(
    value,
    name='survival_function',
    **kwargs
)

Survival function.

Given random variable X, the survival function is defined:

survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

variance

View source

variance(
    name='variance',
    **kwargs
)

Variance.

Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • variance: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

with_name_scope

with_name_scope(
    cls,
    method
)

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 64]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

mod = MyModule()
mod(tf.ones([8, 32]))
# ==> <tf.Tensor: ...>
mod.w
# ==> <tf.Variable ...'my_module/w:0'>

Args:

  • method: The method to wrap.

Returns:

The original method wrapped such that it enters the module's name scope.