tfp.sts.SeasonalStateSpaceModel

Class SeasonalStateSpaceModel

State space model for a seasonal effect.

Inherits From: LinearGaussianStateSpaceModel

A state space model (SSM) posits a set of latent (unobserved) variables that evolve over time with dynamics specified by a probabilistic transition model p(z[t+1] | z[t]). At each timestep, we observe a value sampled from an observation model conditioned on the current state, p(x[t] | z[t]). The special case in which both the transition and observation models are Gaussian, with means that are linear functions of their inputs, is known as a linear Gaussian state space model and supports tractable exact probabilistic calculations; see tfp.distributions.LinearGaussianStateSpaceModel for details.

A seasonal effect model is a special case of a linear Gaussian SSM. The latent states represent an unknown effect from each of several 'seasons'; these are generally not meteorological seasons, but represent regular recurring patterns such as hour-of-day or day-of-week effects. The effect of each season drifts from one occurrence to the next, following a Gaussian random walk:

effects[season, occurrence[i]] = (
  effects[season, occurrence[i-1]] + Normal(loc=0., scale=drift_scale))
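
For intuition, here is a minimal NumPy sketch of this drift process (the variable names are ours, for illustration only):

import numpy as np

rng = np.random.default_rng(0)
num_seasons, num_occurrences, drift_scale = 7, 10, 0.1

# effects[i, s] is the effect of season s at its i-th occurrence.
effects = np.zeros([num_occurrences, num_seasons])
for i in range(1, num_occurrences):
  # Each season's effect takes an independent Gaussian random-walk step
  # between consecutive occurrences of that season.
  effects[i] = effects[i - 1] + rng.normal(0., drift_scale, size=num_seasons)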

The latent state has dimension num_seasons, containing one effect for each seasonal component. The parameters drift_scale and observation_noise_scale are each (a batch of) scalars. The batch shape of this Distribution is the broadcast batch shape of these parameters and of the initial_state_prior.
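
For example, passing a batch of two drift scales yields a batch of two models (a sketch; the parameter values are illustrative):

batch_model = SeasonalStateSpaceModel(
  num_timesteps=30,
  num_seasons=7,
  drift_scale=[0.1, 0.2],  # batch of two drift scales
  initial_state_prior=tfd.MultivariateNormalDiag(
    scale_diag=tf.ones([7])))
# batch_model.batch_shape  ==> [2]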

Mathematical Details

The seasonal effect model implements a tfp.distributions.LinearGaussianStateSpaceModel with latent_size = num_seasons and observation_size = 1. The latent state is organized so that the current seasonal effect is always in the first (zeroth) dimension. The transition model rotates the latent state to shift to a new effect at the end of each season:

transition_matrix[t] = (permutation_matrix([1, 2, ..., num_seasons-1, 0])
                        if season_is_changing(t)
                        else eye(num_seasons))
transition_noise[t] ~ Normal(loc=0., scale_diag=(
                             [drift_scale, 0, ..., 0]
                             if season_is_changing(t)
                             else [0, 0, ..., 0]))

where season_is_changing(t) is True if t `mod` sum(num_steps_per_season) is in the set of final steps of each season, given by cumsum(num_steps_per_season) - 1. The observation model always picks out the effect for the current season, i.e., the first element of the latent state:

observation_matrix = [[1., 0., ..., 0.]]
observation_noise ~ Normal(loc=0, scale=observation_noise_scale)
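
As a concrete illustration, here is a minimal NumPy sketch of these matrices (the helper names are ours, not part of the API):

import numpy as np

num_seasons = 4
num_steps_per_season = np.array([3, 3, 2, 3])

# Cyclic permutation matrix implementing
# permutation_matrix([1, 2, ..., num_seasons-1, 0]).
perm = np.roll(np.eye(num_seasons), shift=-1, axis=0)

# Final step of each season, modulo the total cycle length.
changepoints = set(np.cumsum(num_steps_per_season) - 1)

def transition_matrix(t):
  if t % num_steps_per_season.sum() in changepoints:
    return perm
  return np.eye(num_seasons)

# The observation model reads off the current season's effect.
observation_matrix = np.eye(num_seasons)[:1]  # [[1., 0., 0., 0.]]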

Examples

A state-space model with day-of-week seasonality on hourly data:

day_of_week = SeasonalStateSpaceModel(
  num_timesteps=30,
  num_seasons=7,
  drift_scale=0.1,
  initial_state_prior=tfd.MultivariateNormalDiag(
    scale_diag=tf.ones([7], dtype=tf.float32)),
  num_steps_per_season=24)
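
The result is an ordinary distribution instance; for example, one could sample a series and score it (a sketch):

series = day_of_week.sample()      # shape [30, 1]
lp = day_of_week.log_prob(series)  # scalar log density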

A model with basic month-of-year seasonality on daily data, demonstrating seasons of varying length:

month_of_year = SeasonalStateSpaceModel(
  num_timesteps=2 * 365,  # 2 years
  num_seasons=12,
  drift_scale=0.1,
  initial_state_prior=tfd.MultivariateNormalDiag(
    scale_diag=tf.ones([12], dtype=tf.float32)),
  num_steps_per_season=[31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
  initial_step=22)

Note that we've used initial_step=22 to denote that the model begins on January 23 (steps are zero-indexed). This version works over time periods not involving a leap year. A general implementation of month-of-year seasonality would require additional logic:

num_days_per_month = np.array(
  [[31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
   [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],  # year with leap day
   [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
   [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]])

month_of_year = SeasonalStateSpaceModel(
  num_timesteps=8 * 365 + 2,  # 8 years, including two leap days
  num_seasons=12,
  drift_scale=0.1,
  initial_state_prior=tfd.MultivariateNormalDiag(
    scale_diag=tf.ones([12], dtype=tf.float32)),
  num_steps_per_season=num_days_per_month,
  initial_step=22)

__init__

__init__(
    num_timesteps,
    num_seasons,
    drift_scale,
    initial_state_prior,
    observation_noise_scale=0.0,
    num_steps_per_season=1,
    initial_step=0,
    validate_args=False,
    allow_nan_stats=True,
    name=None
)

Build a seasonal effect state space model.

Args:

  • num_timesteps: Scalar int Tensor number of timesteps to model with this distribution.
  • num_seasons: Scalar Python int number of seasons.
  • drift_scale: Scalar (any additional dimensions are treated as batch dimensions) float Tensor indicating the standard deviation of the change in effect between consecutive occurrences of a given season.
  • initial_state_prior: instance of tfd.MultivariateNormal representing the prior distribution on latent states; must have event shape [num_seasons].
  • observation_noise_scale: Scalar (any additional dimensions are treated as batch dimensions) float Tensor indicating the standard deviation of the observation noise. Default value: 0.
  • num_steps_per_season: Python int number of steps in each season. This may be either a scalar (shape []), in which case all seasons have the same length, or a NumPy array of shape [num_seasons], in which seasons have different lengths that remain constant across cycles, or a NumPy array of shape [num_cycles, num_seasons], in which season lengths also vary across cycles (e.g., a four-year cycle with a leap day). Default value: 1.
  • initial_step: Optional scalar int Tensor specifying the starting timestep. Default value: 0.
  • validate_args: Python bool. Whether to validate input with asserts. If validate_args is False and the inputs are invalid, correct behavior is not guaranteed. Default value: False.
  • allow_nan_stats: Python bool. If False, raise an exception if a statistic (e.g., mean, mode, variance) is undefined for any batch member. If True, batch members with undefined statistics return NaN for that statistic. Default value: True.
  • name: Python str name prefixed to ops created by this class. Default value: 'SeasonalStateSpaceModel'.

Properties

allow_nan_stats

Python bool describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined: for example, if a distribution's pdf does not achieve a maximum within its support, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g., the mean of a Student's t distribution with df = 1 is undefined (there is no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.

Returns:

  • allow_nan_stats: Python bool.

batch_shape

Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Returns:

  • batch_shape: TensorShape, possibly unknown.

drift_scale

Standard deviation of the drift in effects between seasonal cycles.

dtype

The DType of Tensors handled by this Distribution.

event_shape

Shape of a single sample from a single batch as a TensorShape.

May be partially defined or unknown.

Returns:

  • event_shape: TensorShape, possibly unknown.

latent_size

name

Name prepended to all ops created by this Distribution.

name_scope

Returns a tf.name_scope instance for this class.

num_seasons

Number of seasons.

num_steps_per_season

Number of steps in each season.

observation_noise_scale

Standard deviation of the observation noise.

observation_size

parameters

Dictionary of parameters used to instantiate this Distribution.

reparameterization_type

Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

Returns:

An instance of ReparameterizationType.

submodules

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
assert list(a.submodules) == [b, c]
assert list(b.submodules) == [c]
assert list(c.submodules) == []

Returns:

A sequence of all submodules.

trainable_variables

Sequence of variables owned by this module and its submodules.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

validate_args

Python bool indicating possibly expensive checks are enabled.

variables

Sequence of variables owned by this module and its submodules.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

Methods

__getitem__

__getitem__(slices)

Slices the batch axes of this distribution, returning a new instance.

b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.normal([5, 3, 2, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.normal([4, 1, 3, 1])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]

Args:

  • slices: slices from the [] operator

Returns:

  • dist: A new tfd.Distribution instance with sliced parameters.

__iter__

__iter__()

backward_smoothing_pass

backward_smoothing_pass(
    filtered_means,
    filtered_covs,
    predicted_means,
    predicted_covs
)

Run the backward pass of the Kalman smoother.

The backward smoothing uses the Rauch-Tung-Striebel smoother, as discussed in section 18.3.2 of Kevin P. Murphy, 2012, Machine Learning: A Probabilistic Perspective, The MIT Press. The inputs are the values returned by the forward_filter function.

Args:

  • filtered_means: Means of the per-timestep filtered marginal distributions p(z[t] | x[:t]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • filtered_covs: Covariances of the per-timestep filtered marginal distributions p(z[t] | x[:t]), as a Tensor of shape batch_shape + [num_timesteps, latent_size, latent_size].
  • predicted_means: Means of the per-timestep predictive distributions over latent states, p(z[t+1] | x[:t]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • predicted_covs: Covariances of the per-timestep predictive distributions over latent states, p(z[t+1] | x[:t]), as a Tensor of shape batch_shape + [num_timesteps, latent_size, latent_size].

Returns:

  • posterior_means: Means of the smoothed marginal distributions p(z[t] | x[1:T]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size], which is of the same shape as filtered_means.
  • posterior_covs: Covariances of the smoothed marginal distributions p(z[t] | x[1:T]), as a Tensor of shape batch_shape + [num_timesteps, latent_size, latent_size], which is of the same shape as filtered_covs.

batch_shape_tensor

batch_shape_tensor(name='batch_shape_tensor')

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args:

  • name: name to give to the op

Returns:

  • batch_shape: Tensor.

cdf

cdf(
    value,
    name='cdf',
    **kwargs
)

Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

copy

copy(**override_parameters_kwargs)

Creates a deep copy of the distribution.

Args:

  • **override_parameters_kwargs: String/value dictionary of initialization arguments to override with new values.

Returns:

  • distribution: A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
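
For instance, reusing the day_of_week model from the examples above, a copy that differs only in its drift scale might be built as follows (a sketch):

day_of_week_wider = day_of_week.copy(drift_scale=0.3)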

covariance

covariance(
    name='covariance',
    **kwargs
)

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • covariance: Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).

cross_entropy

cross_entropy(
    other,
    name='cross_entropy'
)

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • cross_entropy: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.

entropy

entropy(
    name='entropy',
    **kwargs
)

Shannon entropy in nats.

event_shape_tensor

event_shape_tensor(name='event_shape_tensor')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:

  • name: name to give to the op

Returns:

  • event_shape: Tensor.

forward_filter

forward_filter(
    x,
    mask=None
)

Run a Kalman filter over a provided sequence of outputs.

Note that the returned values filtered_means, predicted_means, and observation_means depend on the observed time series x, while the corresponding covariances are independent of the observed series; i.e., they depend only on the model itself (and on the mask, if one is provided). This means that the mean values have shape concat([sample_shape(x), batch_shape, [num_timesteps, {latent/observation}_size]]), while the covariances have shape concat([batch_shape, [num_timesteps, {latent/observation}_size, {latent/observation}_size]]), which does not depend on the sample shape.
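
For example, a sketch using the day_of_week model from the examples above (a scalar batch, so batch_shape is empty):

x = day_of_week.sample()  # shape [30, 1]
(log_likelihoods,
 filtered_means, filtered_covs,
 predicted_means, predicted_covs,
 observation_means, observation_covs) = day_of_week.forward_filter(x)
# log_likelihoods.shape  ==> [30]
# filtered_means.shape   ==> [30, 7]    (num_timesteps, latent_size)
# filtered_covs.shape    ==> [30, 7, 7]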

Args:

  • x: a float-type Tensor with rightmost dimensions [num_timesteps, observation_size] matching self.event_shape. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions are interpreted as a sample shape.
  • mask: optional bool-type Tensor with rightmost dimension [num_timesteps]; True values specify that the value of x at that timestep is masked, i.e., not conditioned on. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions must match or be broadcastable to the sample shape of x. Default value: None.

Returns:

  • log_likelihoods: Per-timestep log marginal likelihoods log p(x[t] | x[:t-1]) evaluated at the input x, as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps].
  • filtered_means: Means of the per-timestep filtered marginal distributions p(z[t] | x[:t]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • filtered_covs: Covariances of the per-timestep filtered marginal distributions p(z[t] | x[:t]), as a Tensor of shape sample_shape(mask) + batch_shape + [num_timesteps, latent_size, latent_size]. Note that the covariances depend only on the model and the mask, not on the data, so this may have fewer dimensions than filtered_means.
  • predicted_means: Means of the per-timestep predictive distributions over latent states, p(z[t+1] | x[:t]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • predicted_covs: Covariances of the per-timestep predictive distributions over latent states, p(z[t+1] | x[:t]), as a Tensor of shape sample_shape(mask) + batch_shape + [num_timesteps, latent_size, latent_size]. Note that the covariances depend only on the model and the mask, not on the data, so this may have fewer dimensions than predicted_means.
  • observation_means: Means of the per-timestep predictive distributions over observations, p(x[t] | x[:t-1]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, observation_size].
  • observation_covs: Covariances of the per-timestep predictive distributions over observations, p(x[t] | x[:t-1]), as a Tensor of shape sample_shape(mask) + batch_shape + [num_timesteps, observation_size, observation_size]. Note that the covariances depend only on the model and the mask, not on the data, so this may have fewer dimensions than observation_means.

is_scalar_batch

is_scalar_batch(name='is_scalar_batch')

Indicates that batch_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_batch: bool scalar Tensor.

is_scalar_event

is_scalar_event(name='is_scalar_event')

Indicates that event_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_event: bool scalar Tensor.

kl_divergence

kl_divergence(
    other,
    name='kl_divergence'
)

Computes the Kullback--Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • kl_divergence: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.

latents_to_observations

latents_to_observations(
    latent_means,
    latent_covs
)

Push latent means and covariances forward through the observation model.

Args:

  • latent_means: float Tensor of shape [..., num_timesteps, latent_size]
  • latent_covs: float Tensor of shape [..., num_timesteps, latent_size, latent_size].

Returns:

  • observation_means: float Tensor of shape [..., num_timesteps, observation_size]
  • observation_covs: float Tensor of shape [..., num_timesteps, observation_size, observation_size]

log_cdf

log_cdf(
    value,
    name='log_cdf',
    **kwargs
)

Log cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_prob

log_prob(
    value,
    name='log_prob',
    **kwargs
)

Log probability density/mass function.

Additional documentation from LinearGaussianStateSpaceModel:

kwargs:
  • mask: optional bool-type Tensor with rightmost dimension [num_timesteps]; True values specify that the value of x at that timestep is masked, i.e., not conditioned on. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions must match or be broadcastable to the sample shape of x. Default value: None.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_survival_function

log_survival_function(
    value,
    name='log_survival_function',
    **kwargs
)

Log survival function.

Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

mean

mean(
    name='mean',
    **kwargs
)

Mean.

mode

mode(
    name='mode',
    **kwargs
)

Mode.

param_shapes

param_shapes(
    cls,
    sample_shape,
    name='DistributionParamShapes'
)

Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args:

  • sample_shape: Tensor or python list/tuple. Desired shape of a call to sample().
  • name: name to prepend ops with.

Returns:

dict of parameter name to Tensor shapes.

param_static_shapes

param_static_shapes(
    cls,
    sample_shape
)

param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args:

  • sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample().

Returns:

dict of parameter name to TensorShape.

Raises:

  • ValueError: if sample_shape is a TensorShape and is not fully defined.

posterior_marginals

posterior_marginals(
    x,
    mask=None
)

Run a Kalman smoother to return posterior mean and cov.

Note that the returned values smoothed_means depend on the observed time series x, while the smoothed_covs are independent of the observed series; i.e., they depend only on the model itself (and on the mask, if one is provided). This means that the mean values have shape concat([sample_shape(x), batch_shape, [num_timesteps, {latent/observation}_size]]), while the covariances have shape concat([batch_shape, [num_timesteps, {latent/observation}_size, {latent/observation}_size]]), which does not depend on the sample shape.

This function only performs smoothing. If the intermediate values returned by the filtering pass forward_filter are also needed, they can be obtained as follows:

(log_likelihoods,
 filtered_means, filtered_covs,
 predicted_means, predicted_covs,
 observation_means, observation_covs) = model.forward_filter(x)
smoothed_means, smoothed_covs = model.backward_smoothing_pass(
    filtered_means, filtered_covs,
    predicted_means, predicted_covs)

where x is an observation sequence.
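
Equivalently, since this method runs both passes internally, the smoothed moments can be obtained in a single call (a sketch):

smoothed_means, smoothed_covs = model.posterior_marginals(x)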

Args:

  • x: a float-type Tensor with rightmost dimensions [num_timesteps, observation_size] matching self.event_shape. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions are interpreted as a sample shape.
  • mask: optional bool-type Tensor with rightmost dimension [num_timesteps]; True values specify that the value of x at that timestep is masked, i.e., not conditioned on. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions must match or be broadcastable to the sample shape of x. Default value: None.

Returns:

  • smoothed_means: Means of the per-timestep smoothed distributions over latent states, p(z[t] | x[:T]), as a Tensor of shape sample_shape(x) + batch_shape + [num_timesteps, latent_size].
  • smoothed_covs: Covariances of the per-timestep smoothed distributions over latent states, p(z[t] | x[:T]), as a Tensor of shape sample_shape(mask) + batch_shape + [num_timesteps, latent_size, latent_size]. Note that the covariances depend only on the model and the mask, not on the data, so this may have fewer dimensions than smoothed_means.

prob

prob(
    value,
    name='prob',
    **kwargs
)

Probability density/mass function.

Additional documentation from LinearGaussianStateSpaceModel:

kwargs:
  • mask: optional bool-type Tensor with rightmost dimension [num_timesteps]; True values specify that the value of x at that timestep is masked, i.e., not conditioned on. Additional dimensions must match or be broadcastable to self.batch_shape; any further dimensions must match or be broadcastable to the sample shape of x. Default value: None.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

quantile

quantile(
    value,
    name='quantile',
    **kwargs
)

Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • quantile: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

sample

sample(
    sample_shape=(),
    seed=None,
    name='sample',
    **kwargs
)

Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.
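
For example, with the day_of_week model from the examples above (a sketch):

samples = day_of_week.sample([5])  # shape [5, 30, 1] = sample_shape + event_shape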

Args:

  • sample_shape: 0D or 1D int32 Tensor. Shape of the generated samples.
  • seed: Python integer seed for the random number generator.
  • name: name to give to the op.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • samples: a Tensor with prepended dimensions sample_shape.

stddev

stddev(
    name='stddev',
    **kwargs
)

Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • stddev: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

survival_function

survival_function(
    value,
    name='survival_function',
    **kwargs
)

Survival function.

Given random variable X, the survival function is defined:

survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

variance

variance(
    name='variance',
    **kwargs
)

Variance.

Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • variance: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

with_name_scope

with_name_scope(
    cls,
    method
)

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 64]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

mod = MyModule()
mod(tf.ones([8, 32]))
# ==> <tf.Tensor: ...>
mod.w
# ==> <tf.Variable ...'my_module/w:0'>

Args:

  • method: The method to wrap.

Returns:

The original method wrapped such that it enters the module's name scope.