
tfp.experimental.substrates.numpy.distributions.HiddenMarkovModel

View source on GitHub

Class HiddenMarkovModel

Hidden Markov model distribution.

Inherits From: Distribution

The HiddenMarkovModel distribution implements a (batch of) hidden Markov models where the initial states, transition probabilities and observed states are all given by user-provided distributions. This model assumes that the transition matrices are fixed over time.

In this model, there is a sequence of integer-valued hidden states: z[0], z[1], ..., z[num_steps - 1] and a sequence of observed states: x[0], ..., x[num_steps - 1]. The distribution of z[0] is given by initial_distribution. The conditional probability of z[i + 1] given z[i] is described by the batch of distributions in transition_distribution.

For a batch of hidden Markov models, the coordinates before the rightmost one of the transition_distribution batch correspond to indices into the hidden Markov model batch. The rightmost batch coordinate selects the distribution from which z[i + 1] is drawn: the distribution of z[i + 1] conditional on z[i] == k is given by the element of the batch whose rightmost coordinate is k.

Similarly, the conditional distribution of x[i] given z[i] is given by the batch of observation_distribution: when the rightmost batch coordinate of observation_distribution is k, it gives the conditional probabilities of x[i] given z[i] == k. The probability distribution associated with the HiddenMarkovModel distribution is the marginal distribution of x[0], ..., x[num_steps - 1].
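As a concrete sketch of these shape conventions (the state count and probabilities below are illustrative only, not part of the API):

tfd = tfp.distributions

# A batch of 2 hidden Markov models, each with 3 hidden states.
# probs has shape [2, 3, 3]: the leftmost dimension indexes the model
# in the batch, and the rightmost batch coordinate k of the resulting
# Categorical selects the distribution of z[i + 1] given z[i] == k.
transition_distribution = tfd.Categorical(
    probs=[[[0.8, 0.1, 0.1],
            [0.1, 0.8, 0.1],
            [0.1, 0.1, 0.8]],
           [[0.6, 0.2, 0.2],
            [0.2, 0.6, 0.2],
            [0.2, 0.2, 0.6]]])

# The rightmost batch coordinate k of observation_distribution gives
# the distribution of x[i] given z[i] == k; it is broadcast across
# the model batch.
observation_distribution = tfd.Normal(loc=[0., 5., 10.],
                                      scale=[1., 1., 1.])

initial_distribution = tfd.Categorical(probs=[0.6, 0.3, 0.1])

model = tfd.HiddenMarkovModel(
    initial_distribution=initial_distribution,
    transition_distribution=transition_distribution,
    observation_distribution=observation_distribution,
    num_steps=5)

model.batch_shape  # => [2]  (one entry per model in the batch)
model.event_shape  # => [5]  (num_steps observations per sample)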

Examples

tfd = tfp.distributions

# A simple weather model.

# Represent a cold day with 0 and a hot day with 1.
# Suppose the first day of a sequence has a 0.8 chance of being cold.
# We can model this using the categorical distribution:

initial_distribution = tfd.Categorical(probs=[0.8, 0.2])

# Suppose a cold day has a 30% chance of being followed by a hot day
# and a hot day has a 20% chance of being followed by a cold day.
# We can model this as:

transition_distribution = tfd.Categorical(probs=[[0.7, 0.3],
                                                 [0.2, 0.8]])

# Suppose additionally that on each day the temperature is
# normally distributed with mean and standard deviation 0 and 5 on
# a cold day and mean and standard deviation 15 and 10 on a hot day.
# We can model this with:

observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])

# We can combine these distributions into a single week long
# hidden Markov model with:

model = tfd.HiddenMarkovModel(
    initial_distribution=initial_distribution,
    transition_distribution=transition_distribution,
    observation_distribution=observation_distribution,
    num_steps=7)

# The expected temperatures for each day are given by:

model.mean()  # shape [7], elements approach 9.0

# The log pdf of a week of temperature 0 is:

model.log_prob(tf.zeros(shape=[7]))
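
Continuing this example, a minimal sketch of sampling from the model (shapes follow the usual sample_shape + batch_shape + event_shape convention):

# Draw 5 independent week-long temperature sequences; since the
# event shape is [7], the result has shape [5, 7].
model.sample(5)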

References

[1] https://en.wikipedia.org/wiki/Hidden_Markov_model

__init__

__init__(
    initial_distribution,
    transition_distribution,
    observation_distribution,
    num_steps,
    validate_args=False,
    allow_nan_stats=True,
    name='HiddenMarkovModel'
)

Initialize hidden Markov model.

Args:

  • initial_distribution: A Categorical-like instance. Determines the probability of the first hidden state in the Markov chain. The number of categories must match the number of categories of transition_distribution as well as the rightmost batch dimensions of both transition_distribution and observation_distribution.
  • transition_distribution: A Categorical-like instance. The rightmost batch dimension indexes the probability distribution of each hidden state conditioned on the previous hidden state.
  • observation_distribution: A tfp.distributions.Distribution-like instance. The rightmost batch dimension indexes the distribution of each observation conditioned on the corresponding hidden state.
  • num_steps: A Python int. The number of steps taken in the Markov chain.
  • validate_args: Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs. Default value: False.
  • allow_nan_stats: Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined. Default value: True.
  • name: Python str name prefixed to Ops created by this class. Default value: "HiddenMarkovModel".

Raises:

  • ValueError: if num_steps is not at least 1.
  • ValueError: if initial_distribution does not have scalar event_shape.
  • ValueError: if transition_distribution does not have scalar event_shape.
  • ValueError: if transition_distribution and observation_distribution are fully defined but don't have matching rightmost dimension.

Properties

allow_nan_stats

Python bool describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.

Returns:

  • allow_nan_stats: Python bool.

batch_shape

Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Returns:

  • batch_shape: TensorShape, possibly unknown.

dtype

The DType of Tensors handled by this Distribution.

event_shape

Shape of a single sample from a single batch as a TensorShape.

May be partially defined or unknown.

Returns:

  • event_shape: TensorShape, possibly unknown.

initial_distribution

name

Name prepended to all ops created by this Distribution.

num_states

num_steps

observation_distribution

parameters

Dictionary of parameters used to instantiate this Distribution.

reparameterization_type

Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

Returns:

An instance of ReparameterizationType.

trainable_variables

transition_distribution

validate_args

Python bool indicating possibly expensive checks are enabled.

variables

Methods

__getitem__

View source

__getitem__(slices)

Slices the batch axes of this distribution, returning a new instance.

b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.normal([5, 3, 2, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.normal([4, 1, 3, 1])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]

Args:

  • slices: slices from the [] operator

Returns:

  • dist: A new tfd.Distribution instance with sliced parameters.

__iter__

View source

__iter__()

batch_shape_tensor

View source

batch_shape_tensor(name='batch_shape_tensor')

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args:

  • name: name to give to the op

Returns:

  • batch_shape: Tensor.

cdf

View source

cdf(
    value,
    name='cdf',
    **kwargs
)

Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
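
This method is inherited from Distribution and is easiest to sanity-check on a simple scalar distribution (a hidden Markov model itself may not implement an analytic cdf); a minimal sketch:

tfd = tfp.distributions

# By symmetry, half the mass of a standard Normal lies below zero.
tfd.Normal(loc=0., scale=1.).cdf(0.)  # => 0.5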

copy

View source

copy(**override_parameters_kwargs)

Creates a deep copy of the distribution.

Args:

  • **override_parameters_kwargs: String/value dictionary of initialization arguments to override with new values.

Returns:

  • distribution: A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
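
For example, copy can rebuild the week-long weather model from the class-level example with a longer chain while keeping the other parameters (a minimal sketch, assuming model is the instance constructed there):

# Same initial, transition and observation distributions, but a
# fortnight of steps instead of a week.
model2 = model.copy(num_steps=14)
model2.event_shape  # => [14]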

covariance

View source

covariance(
    name='covariance',
    **kwargs
)

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • covariance: Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).

cross_entropy

View source

cross_entropy(
    other,
    name='cross_entropy'
)

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.
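
A minimal sketch checking the identity H[P, Q] = H[P] + KL[P, Q] on a pair of Normals (chosen because an analytic KL divergence is registered between Normals):

tfd = tfp.distributions

p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=1.)

# Equals p.entropy() + p.kl_divergence(q).
p.cross_entropy(q)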

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • cross_entropy: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.

entropy

View source

entropy(
    name='entropy',
    **kwargs
)

Shannon entropy in nats.

event_shape_tensor

View source

event_shape_tensor(name='event_shape_tensor')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:

  • name: name to give to the op

Returns:

  • event_shape: Tensor.

is_scalar_batch

View source

is_scalar_batch(name='is_scalar_batch')

Indicates that batch_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_batch: bool scalar Tensor.

is_scalar_event

View source

is_scalar_event(name='is_scalar_event')

Indicates that event_shape == [].

Args:

  • name: Python str prepended to names of ops created by this function.

Returns:

  • is_scalar_event: bool scalar Tensor.

kl_divergence

View source

kl_divergence(
    other,
    name='kl_divergence'
)

Computes the Kullback--Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
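
For two Normals with equal scale, the divergence has the closed form KL[p, q] = (loc_p - loc_q)**2 / (2 * scale**2), which gives a quick check:

tfd = tfp.distributions

p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=1.)

# (0. - 1.)**2 / 2 = 0.5 nats.
p.kl_divergence(q)  # => 0.5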

Args:

  • other: tfp.distributions.Distribution instance.
  • name: Python str prepended to names of ops created by this function.

Returns:

  • kl_divergence: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.

log_cdf

View source

log_cdf(
    value,
    name='log_cdf',
    **kwargs
)

Log cumulative distribution function.

Given random variable X, the log cumulative distribution function is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_prob

View source

log_prob(
    value,
    name='log_prob',
    **kwargs
)

Log probability density/mass function.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_survival_function

View source

log_survival_function(
    value,
    name='log_survival_function',
    **kwargs
)

Log survival function.

Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

mean

View source

mean(
    name='mean',
    **kwargs
)

Mean.

mode

View source

mode(
    name='mode',
    **kwargs
)

Mode.

param_shapes

View source

param_shapes(
    cls,
    sample_shape,
    name='DistributionParamShapes'
)

Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args:

  • sample_shape: Tensor or python list/tuple. Desired shape of a call to sample().
  • name: name to prepend ops with.

Returns:

dict of parameter name to Tensor shapes.
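
A minimal sketch with Normal, whose parameters broadcast directly against the sample shape (so both parameter shapes equal the requested sample shape):

tfd = tfp.distributions

# Shapes of loc and scale needed so that sample() returns shape [100].
tfd.Normal.param_shapes([100])
# => {'loc': [100], 'scale': [100]} (as shape Tensors)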

param_static_shapes

View source

param_static_shapes(
    cls,
    sample_shape
)

param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args:

  • sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample().

Returns:

dict of parameter name to TensorShape.

Raises:

  • ValueError: if sample_shape is a TensorShape and is not fully defined.

posterior_marginals

View source

posterior_marginals(
    observations,
    mask=None,
    name=None
)

Compute marginal posterior distribution for each state.

This function computes, for each time step, the marginal conditional probability that the hidden Markov model was in each possible state given the observations that were made at each time step. So if the hidden states are z[0],...,z[num_steps - 1] and the observations are x[0], ..., x[num_steps - 1], then this function computes P(z[i] | x[0], ..., x[num_steps - 1]) for all i from 0 to num_steps - 1.

This operation is sometimes called smoothing. It uses a form of the forward-backward algorithm.

Args:

  • observations: A tensor representing a batch of observations made on the hidden Markov model. The rightmost dimension of this tensor gives the steps in a sequence of observations from a single sample from the hidden Markov model. The size of this dimension should match the num_steps parameter of the hidden Markov model object. The other dimensions are the dimensions of the batch and these are broadcast with the hidden Markov model's parameters.
  • mask: optional bool-type tensor with rightmost dimension matching num_steps indicating which observations the result of this function should be conditioned on. When the mask has value True the corresponding observations aren't used. If mask is None then all of the observations are used. The mask dimensions left of the last are broadcast with the hidden Markov model batch as well as with the observations.
  • name: Python str name prefixed to Ops created by this method. Default value: None.

Returns:

  • posterior_marginal: A Categorical distribution object representing the marginal probability of the hidden Markov model being in each state at each step. The rightmost dimension of the Categorical distributions batch will equal the num_steps parameter providing one marginal distribution for each step. The other dimensions are the dimensions corresponding to the batch of observations.

Raises:

  • ValueError: if rightmost dimension of observations does not have size num_steps.
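
Continuing the week-long weather model from the class-level example, a minimal sketch (assuming model and the rising temperatures used in the posterior_mode example below):

temps = [-2., 0., 2., 4., 6., 8., 10.]

# One marginal P(z[i] | x[0], ..., x[6]) per step: the result is a
# Categorical distribution whose batch shape is [7].
posterior = model.posterior_marginals(temps)
posterior.batch_shape  # => [7]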

posterior_mode

View source

posterior_mode(
    observations,
    mask=None,
    name=None
)

Compute maximum likelihood sequence of hidden states.

When this function is provided with a sequence of observations x[0], ..., x[num_steps - 1], it returns the sequence of hidden states z[0], ..., z[num_steps - 1], drawn from the underlying Markov chain, that is most likely to yield those observations.

It uses the Viterbi algorithm.

Args:

  • observations: A tensor representing a batch of observations made on the hidden Markov model. The rightmost dimensions of this tensor correspond to the dimensions of the observation distributions of the underlying Markov chain. The next dimension from the right indexes the steps in a sequence of observations from a single sample from the hidden Markov model. The size of this dimension should match the num_steps parameter of the hidden Markov model object. The other dimensions are the dimensions of the batch and these are broadcast with the hidden Markov model's parameters.
  • mask: optional bool-type tensor with rightmost dimension matching num_steps indicating which observations the result of this function should be conditioned on. When the mask has value True the corresponding observations aren't used. If mask is None then all of the observations are used. The mask dimensions left of the last are broadcast with the hidden Markov model batch as well as with the observations.
  • name: Python str name prefixed to Ops created by this method. Default value: None.

Returns:

  • posterior_mode: A Tensor representing the most likely sequence of hidden states. The rightmost dimension of this tensor will equal the num_steps parameter providing one hidden state for each step. The other dimensions are those of the batch.

Raises:

  • ValueError: if the observations tensor does not consist of sequences of num_steps observations.

Examples

tfd = tfp.distributions

# A simple weather model.

# Represent a cold day with 0 and a hot day with 1.
# Suppose the first day of a sequence has a 0.8 chance of being cold.

initial_distribution = tfd.Categorical(probs=[0.8, 0.2])

# Suppose a cold day has a 30% chance of being followed by a hot day
# and a hot day has a 20% chance of being followed by a cold day.

transition_distribution = tfd.Categorical(probs=[[0.7, 0.3],
                                                 [0.2, 0.8]])

# Suppose additionally that on each day the temperature is
# normally distributed with mean and standard deviation 0 and 5 on
# a cold day and mean and standard deviation 15 and 10 on a hot day.

observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])

# This gives the hidden Markov model:

model = tfd.HiddenMarkovModel(
    initial_distribution=initial_distribution,
    transition_distribution=transition_distribution,
    observation_distribution=observation_distribution,
    num_steps=7)

# Suppose we observe gradually rising temperatures over a week:
temps = [-2., 0., 2., 4., 6., 8., 10.]

# We can now compute the most probable sequence of hidden states:

model.posterior_mode(temps)

# The result is [0 0 0 0 0 1 1] telling us that the transition
# from "cold" to "hot" most likely happened between the
# 5th and 6th days.

prob

View source

prob(
    value,
    name='prob',
    **kwargs
)

Probability density/mass function.

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

quantile

View source

quantile(
    value,
    name='quantile',
    **kwargs
)

Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • quantile: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

sample

View source

sample(
    sample_shape=(),
    seed=None,
    name='sample',
    **kwargs
)

Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.

Args:

  • sample_shape: 0D or 1D int32 Tensor. Shape of the generated samples.
  • seed: Python integer seed for RNG
  • name: name to give to the op.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • samples: a Tensor with prepended dimensions sample_shape.

stddev

View source

stddev(
    name='stddev',
    **kwargs
)

Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • stddev: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

survival_function

View source

survival_function(
    value,
    name='survival_function',
    **kwargs
)

Survival function.

Given random variable X, the survival function is defined:

survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).

Args:

  • value: float or double Tensor.
  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
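
As with cdf above, a quick sanity check on a standard Normal (a hidden Markov model itself may not implement a survival function):

tfd = tfp.distributions

# The complement of cdf(0.) for a standard Normal.
tfd.Normal(loc=0., scale=1.).survival_function(0.)  # => 0.5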

variance

View source

variance(
    name='variance',
    **kwargs
)

Variance.

Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Args:

  • name: Python str prepended to names of ops created by this function.
  • **kwargs: Named arguments forwarded to subclass implementation.

Returns:

  • variance: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().
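
For the hidden Markov model the variance is per-step; continuing the weather example (a minimal sketch, assuming model from the class-level example):

# One variance per step, the same shape as model.mean().
model.variance()  # shape [7]
model.stddev()    # == tf.sqrt(model.variance()), shape [7]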