Hidden Markov model distribution.
Inherits From: Distribution
oryx.distributions.HiddenMarkovModel(
    initial_distribution, transition_distribution, observation_distribution,
    num_steps, validate_args=False, allow_nan_stats=True,
    time_varying_transition_distribution=False,
    time_varying_observation_distribution=False, name='HiddenMarkovModel'
)
The HiddenMarkovModel distribution implements a (batch of) discrete hidden
Markov models where the initial states, transition probabilities and
observed states are all given by user-provided distributions.
In this model, there is a sequence of integer-valued hidden states:
z[0], z[1], ..., z[num_steps - 1]
and a sequence of observed states:
x[0], ..., x[num_steps - 1].
The distribution of z[0] is given by initial_distribution.
The conditional probability of z[i + 1] given z[i] is described by
the batch of distributions in transition_distribution.
For a batch of hidden Markov models, the coordinates before the rightmost one
of the transition_distribution batch correspond to indices into the hidden
Markov model batch. The rightmost coordinate of the batch is used to select
which distribution z[i + 1] is drawn from. The distributions corresponding
to the probability of z[i + 1] conditional on z[i] == k are given by the
elements of the batch whose rightmost coordinate is k.
Similarly, the conditional distribution of x[i] given z[i] is given by
the batch of observation_distribution.
When the rightmost coordinate of observation_distribution is k, it
gives the conditional probabilities of x[i] given z[i] == k.
The probability distribution associated with the HiddenMarkovModel
distribution is the marginal distribution of x[0], ..., x[num_steps - 1].
Examples
tfd = tfp.distributions
# A simple weather model.
# Represent a cold day with 0 and a hot day with 1.
# Suppose the first day of a sequence has a 0.8 chance of being cold.
# We can model this using the categorical distribution:
initial_distribution = tfd.Categorical(probs=[0.8, 0.2])
# Suppose a cold day has a 30% chance of being followed by a hot day
# and a hot day has a 20% chance of being followed by a cold day.
# We can model this as:
transition_distribution = tfd.Categorical(probs=[[0.7, 0.3],
                                                 [0.2, 0.8]])
# Suppose additionally that on each day the temperature is
# normally distributed with mean and standard deviation 0 and 5 on
# a cold day and mean and standard deviation 15 and 10 on a hot day.
# We can model this with:
observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])
# We can combine these distributions into a single week long
# hidden Markov model with:
model = tfd.HiddenMarkovModel(
    initial_distribution=initial_distribution,
    transition_distribution=transition_distribution,
    observation_distribution=observation_distribution,
    num_steps=7)
# The expected temperatures for each day are given by:
model.mean() # shape [7], elements approach 9.0
# The log pdf of a week of temperature 0 is:
model.log_prob(tf.zeros(shape=[7]))
References
[1] https://en.wikipedia.org/wiki/Hidden_Markov_model
Args  

initial_distribution

A Categorical-like instance.
Determines probability of first hidden state in Markov chain.
The number of categories must match the number of categories of
transition_distribution as well as both the rightmost batch
dimension of transition_distribution and the rightmost batch
dimension of observation_distribution .

transition_distribution

A Categorical-like instance.
The rightmost batch dimension indexes the probability distribution
of each hidden state conditioned on the previous hidden state.

observation_distribution

A tfp.distributions.Distribution-like
instance. The rightmost batch dimension indexes the distribution
of each observation conditioned on the corresponding hidden state.

num_steps

The number of steps taken in the Markov chain; an integer-valued
tensor. The number of transitions is num_steps - 1.

validate_args

Python bool , default False . When True distribution
parameters are checked for validity despite possibly degrading runtime
performance. When False invalid inputs may silently render incorrect
outputs.
Default value: False .

allow_nan_stats

Python bool , default True . When True , statistics
(e.g., mean, mode, variance) use the value "NaN" to indicate the
result is undefined. When False , an exception is raised if one or
more of the statistic's batch members are undefined.
Default value: True .

time_varying_transition_distribution

Python bool , default False .
When True , the transition_distribution has an additional batch
dimension that indexes the distribution of each transition conditioned
on the corresponding timestep. This dimension size should always match
num_steps - 1 and is the second-to-last batch axis in the batch
dimensions (just to the left of the dimension for the number of states).
Because transitions only happen between steps, the number of transitions
is one less than num_steps. See the sketch after this argument list.

time_varying_observation_distribution

Python bool , default False .
When True , the observation_distribution has an additional batch
dimension that indexes the distribution of each observation conditioned
on the corresponding timestep. This dimension size should always match
num_steps and is the second-to-last batch axis in the batch dimensions
(just to the left of the dimension for the number of states).

name

Python str name prefixed to Ops created by this class.
Default value: "HiddenMarkovModel".

Raises  

ValueError

if num_steps is not at least 1.

ValueError

if initial_distribution does not have scalar event_shape .

ValueError

if transition_distribution does not have scalar
event_shape.

ValueError

if transition_distribution and observation_distribution
are fully defined but don't have matching rightmost dimension.

Attributes  

allow_nan_stats

Python bool describing behavior when a stat is undefined.
Stats return +/- infinity when it makes sense. E.g., the variance of a
Cauchy distribution is infinity. However, sometimes the statistic is
undefined, e.g., if a distribution's pdf does not achieve a maximum within
the support of the distribution, the mode is undefined. If the mean is
undefined, then by definition the variance is undefined. E.g. the mean for
Student's T for df = 1 is undefined (no clear way to say it is either + or -
infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape

Shape of a single sample from a single event index as a TensorShape .
May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype

The DType of Tensor s handled by this Distribution .

event_shape

Shape of a single sample from a single batch as a TensorShape .
May be partially defined or unknown. 
experimental_shard_axis_names

The list or structure of lists of active shard axis names. 
initial_distribution


name

Name prepended to all ops created by this Distribution .

num_states_static

The number of hidden states in the hidden Markov model. 
num_steps


observation_distribution


parameters

Dictionary of parameters used to instantiate this Distribution .

reparameterization_type

Describes how samples from the distribution are reparameterized.
Currently this is one of the static instances
tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

trainable_variables


transition_distribution


validate_args

Python bool indicating possibly expensive checks are enabled.

variables

Methods
batch_shape_tensor
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor.
The batch dimensions are indexes into independent, non-identical
parameterizations of this distribution.
Args  

name

name to give to the op 
Returns  

batch_shape

Tensor .

cdf
cdf(
value, name='cdf', **kwargs
)
Cumulative distribution function.
Given random variable X
, the cumulative distribution function cdf
is:
cdf(x) := P[X <= x]
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

cdf

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

copy
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Args  

**override_parameters_kwargs

String/value dictionary of initialization arguments to override with new values. 
Returns  

distribution

A new instance of type(self) initialized from the union
of self.parameters and override_parameters_kwargs, i.e.,
dict(self.parameters, **override_parameters_kwargs) .
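For instance, a hedged sketch (assuming the weather model built in the
constructor example above is in scope):
# Same parameterization, but unrolled for 10 steps instead of 7.
longer_model = model.copy(num_steps=10)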

covariance
covariance(
name='covariance', **kwargs
)
Covariance.
Covariance is (possibly) defined only for non-scalar-event distributions.
For example, for a length-k, vector-valued distribution, it is calculated as,
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E
denotes expectation.
Alternatively, for non-vector, multivariate distributions (e.g.,
matrix-valued, Wishart), Covariance shall return a (batch of) matrices
under some vectorization of the events, i.e.,
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices,
0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function
mapping indices of this distribution's event dimensions to indices of a
length-k' vector.
Args  

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

covariance

Floating-point Tensor with shape [B1, ..., Bn, k', k']
where the first n dimensions are batch coordinates and
k' = reduce_prod(self.event_shape) .

cross_entropy
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy.
Denote this distribution (self) by P and the other distribution by Q.
Assuming P, Q are absolutely continuous with respect to
one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon)
cross entropy is defined as:
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args  

other

tfp.distributions.Distribution instance.

name

Python str prepended to names of ops created by this function.

Returns  

cross_entropy

self.dtype Tensor with shape [B1, ..., Bn]
representing n different calculations of (Shannon) cross entropy.

entropy
entropy(
name='entropy', **kwargs
)
Shannon entropy in nats.
event_shape_tensor
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args  

name

name to give to the op 
Returns  

event_shape

Tensor .

experimental_default_event_space_bijector
experimental_default_event_space_bijector(
*args, **kwargs
)
Bijector mapping the reals (R**n) to the event space of the distribution.
Distributions with continuous support may implement
_default_event_space_bijector
which returns a subclass of
tfp.bijectors.Bijector
that maps R**n to the distribution's event space.
For example, the default bijector for the Beta distribution
is tfp.bijectors.Sigmoid(), which maps the real line to [0, 1], the
support of the Beta distribution. The default bijector for the
CholeskyLKJ distribution is tfp.bijectors.CorrelationCholesky, which
maps R^(k * (k - 1) // 2) to the submanifold of k x k lower triangular
matrices with ones along the diagonal.
The purpose of experimental_default_event_space_bijector
is
to enable gradient descent in an unconstrained space for Variational
Inference and Hamiltonian Monte Carlo methods. Some effort has been made to
choose bijectors such that the tails of the distribution in the
unconstrained space are between Gaussian and Exponential.
For distributions with discrete event space, or for which TFP currently
lacks a suitable bijector, this function returns None
.
Args  

*args

Passed to implementation _default_event_space_bijector .

**kwargs

Passed to implementation _default_event_space_bijector .

Returns  

event_space_bijector

Bijector instance or None .

experimental_sample_and_log_prob
experimental_sample_and_log_prob(
sample_shape=(), seed=None, name='sample_and_log_prob', **kwargs
)
Samples from this distribution and returns the log density of the sample.
The default implementation simply calls sample
and log_prob
:
def _sample_and_log_prob(self, sample_shape, seed, **kwargs):
  x = self.sample(sample_shape=sample_shape, seed=seed, **kwargs)
  return x, self.log_prob(x, **kwargs)
However, some subclasses may provide more efficient and/or numerically stable implementations.
Args  

sample_shape

integer Tensor desired shape of samples to draw.
Default value: () .

seed

PRNG seed; see tfp.random.sanitize_seed for details.
Default value: None .

name

name to give to the op.
Default value: 'sample_and_log_prob' .

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

samples

a Tensor , or structure of Tensor s, with prepended dimensions
sample_shape .

log_prob

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

is_scalar_batch
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == []
.
Args  

name

Python str prepended to names of ops created by this function.

Returns  

is_scalar_batch

bool scalar Tensor .

is_scalar_event
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == []
.
Args  

name

Python str prepended to names of ops created by this function.

Returns  

is_scalar_event

bool scalar Tensor .

kl_divergence
kl_divergence(
other, name='kl_divergence'
)
Computes the KullbackLeibler divergence.
Denote this distribution (self) by p and the other distribution by q.
Assuming p, q are absolutely continuous with respect to reference
measure r, the KL divergence is defined as:
KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .]
denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args  

other

tfp.distributions.Distribution instance.

name

Python str prepended to names of ops created by this function.

Returns  

kl_divergence

self.dtype Tensor with shape [B1, ..., Bn]
representing n different calculations of the KullbackLeibler
divergence.

log_cdf
log_cdf(
value, name='log_cdf', **kwargs
)
Log cumulative distribution function.
Given random variable X, the cumulative distribution function cdf is:
log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields
a more accurate answer than simply taking the logarithm of the cdf when
x << -1.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

logcdf

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

log_prob
log_prob(
value, name='log_prob', **kwargs
)
Log probability density/mass function.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

log_prob

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .
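For example, with the week-long weather model constructed above (this simply
repeats the quantity computed in the constructor example):
# Log-density of observing a temperature of 0 on all seven days.
model.log_prob(tf.zeros(shape=[7]))  # scalar log-probability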

log_survival_function
log_survival_function(
value, name='log_survival_function', **kwargs
)
Log survival function.
Given random variable X, the survival function is defined:
log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log
survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

Tensor of shape sample_shape(x) + self.batch_shape with values of type
self.dtype .

mean
mean(
name='mean', **kwargs
)
Mean.
mode
mode(
name='mode', **kwargs
)
Mode.
num_states_tensor
num_states_tensor()
The number of hidden states in the hidden Markov model.
param_shapes
@classmethod
param_shapes( sample_shape, name='DistributionParamShapes' )
Shapes of parameters given the desired shape of a call to sample()
.
This is a class method that describes what key/value arguments are required
to instantiate the given Distribution
so that a particular shape is
returned for that instance's call to sample()
.
Subclasses should override class method _param_shapes
.
Args  

sample_shape

Tensor or python list/tuple. Desired shape of a call to
sample() .

name

name to prepend ops with. 
Returns  

dict of parameter name to Tensor shapes.

param_static_shapes
@classmethod
param_static_shapes( sample_shape )
param_shapes with static (i.e. TensorShape
) shapes.
This is a class method that describes what key/value arguments are required
to instantiate the given Distribution
so that a particular shape is
returned for that instance's call to sample()
. Assumes that the sample's
shape is known statically.
Subclasses should override class method _param_shapes
to return
constant-valued tensors when constant values are fed.
Args  

sample_shape

TensorShape or python list/tuple. Desired shape of a call
to sample() .

Returns  

dict of parameter name to TensorShape .

Raises  

ValueError

if sample_shape is a TensorShape and is not fully defined.

parameter_properties
@classmethod
parameter_properties( dtype=tf.float32, num_classes=None )
Returns a dict mapping constructor arg names to property annotations.
This dict should include an entry for each of the distribution's
Tensor
valued constructor arguments.
Distribution subclasses are not required to implement
_parameter_properties
, so this method may raise NotImplementedError
.
Providing a _parameter_properties
implementation enables several advanced
features, including:
- Distribution batch slicing (sliced_distribution = distribution[i:j]).
- Automatic inference of _batch_shape and _batch_shape_tensor, which
  must otherwise be computed explicitly.
- Automatic instantiation of the distribution within TFP's internal
  property tests.
- Automatic construction of 'trainable' instances of the distribution using
  appropriate bijectors to avoid violating parameter constraints. This
  enables the distribution family to be used easily as a surrogate posterior
  in variational inference.
In the future, parameter property annotations may enable additional
functionality; for example, returning Distribution instances from
tf.vectorized_map
.
Args  

dtype

Optional float dtype to assume for continuous-valued parameters.
Some constraining bijectors require advance knowledge of the dtype
because certain constants (e.g., tfb.Softplus.low ) must be
instantiated with the same dtype as the values to be transformed.

num_classes

Optional int Tensor number of classes to assume when
inferring the shape of parameters for categorical-like distributions.
Otherwise ignored.

Returns  

parameter_properties

A str -> tfp.python.internal.parameter_properties.ParameterProperties
dict mapping constructor argument names to ParameterProperties
instances.

Raises  

NotImplementedError

if the distribution class does not implement
_parameter_properties .

posterior_marginals
posterior_marginals(
observations, mask=None, name='posterior_marginals'
)
Compute marginal posterior distribution for each state.
This function computes, for each time step, the marginal conditional probability that the hidden Markov model was in each possible state given the observations that were made at each time step.
So if the hidden states are z[0], ..., z[num_steps - 1] and
the observations are x[0], ..., x[num_steps - 1], then
this function computes P(z[i] | x[0], ..., x[num_steps - 1])
for all i from 0 to num_steps - 1.
This operation is sometimes called smoothing. It uses a form of the
forward-backward algorithm.
Args  

observations

A tensor representing a batch of observations made on the
hidden Markov model. The rightmost dimensions of this tensor correspond
to the dimensions of the observation distributions of the underlying
Markov chain, if the observations are non-scalar. The next dimension
from the right indexes the steps in a sequence of observations from a
single sample from the hidden Markov model. The size of this dimension
should match the num_steps parameter of the hidden Markov model
object. The other dimensions are the dimensions of the batch and these
are broadcast with the hidden Markov model's parameters.

mask

optional bool-type tensor with rightmost dimension matching
num_steps indicating which observations the result of this
function should be conditioned on. When the mask has value
True the corresponding observations aren't used.
If mask is None then all of the observations are used.
The mask dimensions left of the last are broadcast with the
HMM batch as well as with the observations.

name

Python str name prefixed to Ops created by this class.
Default value: "HiddenMarkovModel".

Returns  

posterior_marginal

A Categorical distribution object representing the
marginal probability of the hidden Markov model being in each state at
each step. The rightmost dimension of the Categorical distributions
batch will equal the num_steps parameter providing one marginal
distribution for each step. The other dimensions are the dimensions
corresponding to the batch of observations.

Raises  

ValueError

if rightmost dimension of observations does not
have size num_steps .
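A minimal sketch (reusing the hypothetical weather model from the examples
above and the rising-temperature observations from the posterior_mode
example below):
# Observed temperatures for one week.
temps = [-2., 0., 2., 4., 6., 8., 10.]
posterior = model.posterior_marginals(tf.constant(temps))
# posterior is a Categorical over the two hidden states with batch shape
# [7]; its per-day state probabilities have shape [7, 2].
posterior.probs_parameter()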

posterior_mode
posterior_mode(
observations, mask=None, name='posterior_mode'
)
Compute maximum likelihood sequence of hidden states.
When this function is provided with a sequence of observations
x[0], ..., x[num_steps - 1], it returns the sequence of hidden
states z[0], ..., z[num_steps - 1], drawn from the underlying
Markov chain, that is most likely to yield those observations.
It uses the Viterbi algorithm.
Args  

observations

A tensor representing a batch of observations made on the
hidden Markov model. The rightmost dimensions of this tensor correspond
to the dimensions of the observation distributions of the underlying
Markov chain, if the observations are non-scalar. The next dimension
from the right indexes the steps in a sequence of observations from a
single sample from the hidden Markov model. The size of this dimension
should match the num_steps parameter of the hidden Markov model
object. The other dimensions are the dimensions of the batch and these
are broadcast with the hidden Markov model's parameters.

mask

optional bool-type tensor with rightmost dimension matching
num_steps indicating which observations the result of this
function should be conditioned on. When the mask has value
True the corresponding observations aren't used.
If mask is None then all of the observations are used.
The mask dimensions left of the last are broadcast with the
HMM batch as well as with the observations.

name

Python str name prefixed to Ops created by this class.
Default value: "HiddenMarkovModel".

Returns  

posterior_mode

A Tensor representing the most likely sequence of hidden
states. The rightmost dimension of this tensor will equal the
num_steps parameter providing one hidden state for each step. The
other dimensions are those of the batch.

Raises  

ValueError

if the observations tensor does not consist of
sequences of num_steps observations.

Examples
tfd = tfp.distributions
# A simple weather model.
# Represent a cold day with 0 and a hot day with 1.
# Suppose the first day of a sequence has a 0.8 chance of being cold.
initial_distribution = tfd.Categorical(probs=[0.8, 0.2])
# Suppose a cold day has a 30% chance of being followed by a hot day
# and a hot day has a 20% chance of being followed by a cold day.
transition_distribution = tfd.Categorical(probs=[[0.7, 0.3],
                                                 [0.2, 0.8]])
# Suppose additionally that on each day the temperature is
# normally distributed with mean and standard deviation 0 and 5 on
# a cold day and mean and standard deviation 15 and 10 on a hot day.
observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])
# This gives the hidden Markov model:
model = tfd.HiddenMarkovModel(
    initial_distribution=initial_distribution,
    transition_distribution=transition_distribution,
    observation_distribution=observation_distribution,
    num_steps=7)
# Suppose we observe gradually rising temperatures over a week:
temps = [-2., 0., 2., 4., 6., 8., 10.]
# We can now compute the most probable sequence of hidden states:
model.posterior_mode(temps)
# The result is [0 0 0 0 0 1 1] telling us that the transition
# from "cold" to "hot" most likely happened between the
# 5th and 6th days.
prior_marginals
prior_marginals(
name='prior'
)
Compute prior marginal distribution for each state.
This function computes, for each time step, the
prior probability that the hidden Markov model is at a given state.
In other words this function computes:
P(z[i]) for all i from 0 to num_steps - 1.
Args  

name

Python str name prefixed to Ops created by this class.
Default value: "priors".

Returns  

prior

A Categorical distribution object representing the
prior probability of the hidden Markov model being in each state at
each step. The rightmost dimension of the Categorical distributions
batch will equal the num_steps parameter providing one prior
distribution for each step.
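For example, with the hypothetical weather model from the examples above:
priors = model.prior_marginals()
# Per-day prior state probabilities, shape [7, 2]. With the transition
# matrix used above they converge towards the stationary distribution
# [0.4, 0.6], which is why model.mean() approaches 9.0.
priors.probs_parameter()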

prob
prob(
value, name='prob', **kwargs
)
Probability density/mass function.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

prob

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

quantile
quantile(
value, name='quantile', **kwargs
)
Quantile function. Aka 'inverse cdf' or 'percent point function'.
Given random variable X
and p in [0, 1]
, the quantile
is:
quantile(p) := x such that P[X <= x] == p
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

quantile

a Tensor of shape sample_shape(x) + self.batch_shape with
values of type self.dtype .

sample
sample(
sample_shape=(), seed=None, name='sample', **kwargs
)
Generate samples of the specified shape.
Note that a call to sample()
without arguments will generate a single
sample.
Args  

sample_shape

0-D or 1-D int32 Tensor. Shape of the generated samples.

seed

PRNG seed; see tfp.random.sanitize_seed for details.

name

name to give to the op. 
**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

samples

a Tensor with prepended dimensions sample_shape .
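A minimal sketch in the same style as the TF examples above (in the JAX
substrate wrapped by oryx, a jax.random.PRNGKey would be passed as seed):
# Draw 5 independent week-long temperature sequences from the weather model.
samples = model.sample(5)  # shape [5, 7]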

stddev
stddev(
name='stddev', **kwargs
)
Standard deviation.
Standard deviation is defined as,
stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E
denotes expectation, and stddev.shape = batch_shape + event_shape.
Args  

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

stddev

Floating-point Tensor with shape identical to
batch_shape + event_shape , i.e., the same shape as self.mean() .

survival_function
survival_function(
value, name='survival_function', **kwargs
)
Survival function.
Given random variable X, the survival function is defined:
survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

Tensor of shape sample_shape(x) + self.batch_shape with values of type
self.dtype .

unnormalized_log_prob
unnormalized_log_prob(
value, name='unnormalized_log_prob', **kwargs
)
Potentially unnormalized log probability density/mass function.
This function is similar to log_prob
, but does not require that the
return value be normalized. (Normalization here refers to the total
integral of probability being one, as it should be by definition for any
probability distribution.) This is useful, for example, for distributions
where the normalization constant is difficult or expensive to compute. By
default, this simply calls log_prob
.
Args  

value

float or double Tensor .

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

unnormalized_log_prob

a Tensor of shape
sample_shape(x) + self.batch_shape with values of type self.dtype .

variance
variance(
name='variance', **kwargs
)
Variance.
Variance is defined as,
Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E
denotes expectation, and Var.shape = batch_shape + event_shape.
Args  

name

Python str prepended to names of ops created by this function.

**kwargs

Named arguments forwarded to subclass implementation. 
Returns  

variance

Floating-point Tensor with shape identical to
batch_shape + event_shape , i.e., the same shape as self.mean() .

__getitem__
__getitem__(
slices
)
Slices the batch axes of this distribution, returning a new instance.
b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., 2:, 1::2]
b2.batch_shape # => [3, 1, 5, 2, 4]
x = tf.random.stateless_normal([5, 3, 2, 2], seed=[1, 2])  # stateless RNG needs an explicit seed
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.stateless_normal([4, 1, 3, 1], seed=[3, 4])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape # => [4, 5, 3]
mvn.event_shape # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape # => [4, 2, 3, 1]
mvn2.event_shape # => [2]
Args  

slices

slices from the [] operator 
Returns  

dist

A new tfd.Distribution instance with sliced parameters.

__iter__
__iter__()