A Transformed Distribution.

Inherits From: `Distribution`

```python
tfp.distributions.TransformedDistribution(
    distribution, bijector, kwargs_split_fn=_default_kwargs_split_fn,
    validate_args=False, parameters=None, name=None
)
```
A `TransformedDistribution` models `p(y)` given a base distribution `p(x)` and a deterministic, invertible, differentiable transform, `Y = g(X)`. The transform is typically an instance of the `Bijector` class and the base distribution is typically an instance of the `Distribution` class.

A `Bijector` is expected to implement the following functions: `forward`, `inverse`, `inverse_log_det_jacobian`. The semantics of these functions are outlined in the `Bijector` documentation.

We now describe how a `TransformedDistribution` alters the input/outputs of a `Distribution` associated with a random variable (rv) `X`.
Write `cdf(Y=y)` for an absolutely continuous cumulative distribution function of random variable `Y`; write the probability density function `pdf(Y=y) := d^k / (dy_1,...,dy_k) cdf(Y=y)` for its derivative with respect to `Y` evaluated at `y`. Assume that `Y = g(X)` where `g` is a deterministic diffeomorphism, i.e., a non-random, continuous, differentiable, and invertible function. Write the inverse of `g` as `X = g^{-1}(Y)` and `(J o g)(x)` for the Jacobian of `g` evaluated at `x`.
A `TransformedDistribution` implements the following operations:

- `sample`
  - Mathematically: `Y = g(X)`
  - Programmatically: `bijector.forward(distribution.sample(...))`
- `log_prob`
  - Mathematically: `(log o pdf)(Y=y) = (log o pdf o g^{-1})(y) + (log o abs o det o J o g^{-1})(y)`
  - Programmatically: `distribution.log_prob(bijector.inverse(y)) + bijector.inverse_log_det_jacobian(y)`
- `log_cdf`
  - Mathematically: `(log o cdf)(Y=y) = (log o cdf o g^{-1})(y)`
  - Programmatically: `distribution.log_cdf(bijector.inverse(y))`

and similarly for: `cdf`, `prob`, `log_survival_function`, `survival_function` (see the verification sketch below).
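To make the change-of-variables formula concrete, here is a minimal sketch (assuming `tensorflow` and `tensorflow_probability` are imported as shown) that checks `log_prob` against the manual two-term computation for an `Exp` bijector:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

base = tfd.Normal(loc=0., scale=1.)
bij = tfb.Exp()
log_normal = tfd.TransformedDistribution(distribution=base, bijector=bij)

y = tf.constant([0.5, 1.0, 2.0])
# Change of variables: log p(y) = log p_X(g^{-1}(y)) + (log o abs o det o J o g^{-1})(y).
manual = (base.log_prob(bij.inverse(y))
          + bij.inverse_log_det_jacobian(y, event_ndims=0))
# log_normal.log_prob(y) and manual agree up to floating-point error.
```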
Kullback-Leibler divergence is also well defined for `TransformedDistribution` instances that have matching bijectors. Bijector matching is performed via the `Bijector.__eq__` method, e.g., `td1.bijector == td2.bijector`, as part of the KL calculation. If the underlying bijectors do not match, a `NotImplementedError` is raised when calling `kl_divergence`. This is the same behavior as calling `kl_divergence` when two distributions do not have a registered KL divergence.
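For example, a minimal sketch: two `TransformedDistribution`s sharing a bijector have a well-defined KL divergence, and since KL is invariant under an invertible transform it reduces to the KL between the base distributions:

```python
tfd = tfp.distributions
tfb = tfp.bijectors

exp = tfb.Exp()
td1 = tfd.TransformedDistribution(tfd.Normal(loc=0., scale=1.), exp)
td2 = tfd.TransformedDistribution(tfd.Normal(loc=1., scale=2.), exp)
td1.kl_divergence(td2)  # == KL between the base Normals, approx 0.443.
# With non-matching bijectors, kl_divergence raises NotImplementedError.
```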
A simple example constructing a Log-Normal distribution from a Normal distribution:

```python
tfd = tfp.distributions
tfb = tfp.bijectors
log_normal = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Exp(),
    name='LogNormalTransformedDistribution')
```
A `LogNormal` made from callables (note that `tfb.Inline` needs `forward_min_event_ndims`, and the log-det-Jacobian of `exp` is elementwise for this scalar base distribution):

```python
tfd = tfp.distributions
tfb = tfp.bijectors
log_normal = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Inline(
        forward_fn=tf.exp,
        inverse_fn=tf.math.log,
        inverse_log_det_jacobian_fn=lambda y: -tf.math.log(y),
        forward_min_event_ndims=0),
    name='LogNormalTransformedDistribution')
```
Another example constructing a Normal from a StandardNormal:

```python
tfd = tfp.distributions
tfb = tfp.bijectors
normal = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Shift(shift=-1.)(tfb.Scale(scale=2.)),
    name='NormalTransformedDistribution')
```
A `TransformedDistribution`'s `batch_shape` is the same as that of the base distribution, and its `event_shape` is the `forward_event_shape` of the bijector applied to the `event_shape` of the base distribution.

`tfd.Sample` or `tfd.Independent` may be used to add extra IID dimensions to the `event_shape` of the base distribution before the bijector operates on it.
The following example demonstrates how to construct a multivariate Normal as a `TransformedDistribution`, by adding a rank-1 IID dimension to the `event_shape` of a standard Normal and applying `tfb.ScaleMatvecTriL`.
```python
tfd = tfp.distributions
tfb = tfp.bijectors
# We will create two MVNs with batch_shape = event_shape = 2.
mean = [[-1., 0],  # batch:0
        [0., 1]]   # batch:1
chol_cov = [[[1., 0],
             [0, 1]],  # batch:0
            [[1, 0],
             [2, 2]]]  # batch:1
mvn1 = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=[0., 0], scale=1.),  # base_dist.batch_shape == [2]
        sample_shape=[2]),                  # base_dist.event_shape == [2]
    bijector=tfb.Shift(shift=mean)(tfb.ScaleMatvecTriL(scale_tril=chol_cov)))
mvn2 = tfd.MultivariateNormalTriL(loc=mean, scale_tril=chol_cov)
# mvn1.log_prob(x) == mvn2.log_prob(x)
```
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
<tr>
<td>
`distribution`
</td>
<td>
The base distribution instance to transform. Typically an
instance of `Distribution`.
</td>
</tr><tr>
<td>
`bijector`
</td>
<td>
The object responsible for calculating the transformation.
Typically an instance of `Bijector`.
</td>
</tr><tr>
<td>
`kwargs_split_fn`
</td>
<td>
Python `callable` which takes a kwargs `dict` and returns
a tuple of kwargs `dict`s for each of the `distribution` and `bijector`
parameters respectively.
Default value: `_default_kwargs_split_fn` (i.e.,
`lambda kwargs: (kwargs.get('distribution_kwargs', {}),
kwargs.get('bijector_kwargs', {}))`)
</td>
</tr><tr>
<td>
`validate_args`
</td>
<td>
Python `bool`, default `False`. When `True` distribution
parameters are checked for validity despite possibly degrading runtime
performance. When `False` invalid inputs may silently render incorrect
outputs.
</td>
</tr><tr>
<td>
`parameters`
</td>
<td>
Locals dict captured by subclass constructor, to be used for
copy/slice re-instantiation operations.
</td>
</tr><tr>
<td>
`name`
</td>
<td>
Python `str` name prefixed to Ops created by this class. Default:
`bijector.name + distribution.name`.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>
<tr>
<td>
`allow_nan_stats`
</td>
<td>
Python `bool` describing behavior when a stat is undefined.
Stats return +/- infinity when it makes sense. E.g., the variance of a
Cauchy distribution is infinity. However, sometimes the statistic is
undefined, e.g., if a distribution's pdf does not achieve a maximum within
the support of the distribution, the mode is undefined. If the mean is
undefined, then by definition the variance is undefined. E.g. the mean for
Student's T for df = 1 is undefined (no clear way to say it is either + or -
infinity), so the variance = E[(X - mean)**2] is also undefined.
</td>
</tr><tr>
<td>
`batch_shape`
</td>
<td>
Shape of a single sample from a single event index as a `TensorShape`.
May be partially defined or unknown.
The batch dimensions are indexes into independent, non-identical
parameterizations of this distribution.
</td>
</tr><tr>
<td>
`bijector`
</td>
<td>
Function transforming x => y.
</td>
</tr><tr>
<td>
`distribution`
</td>
<td>
Base distribution, p(x).
</td>
</tr><tr>
<td>
`dtype`
</td>
<td>
The `DType` of `Tensor`s handled by this `Distribution`.
</td>
</tr><tr>
<td>
`event_shape`
</td>
<td>
Shape of a single sample from a single batch as a `TensorShape`.
May be partially defined or unknown.
</td>
</tr><tr>
<td>
`name`
</td>
<td>
Name prepended to all ops created by this `Distribution`.
</td>
</tr><tr>
<td>
`name_scope`
</td>
<td>
Returns a `tf.name_scope` instance for this class.
</td>
</tr><tr>
<td>
`parameters`
</td>
<td>
Dictionary of parameters used to instantiate this `Distribution`.
</td>
</tr><tr>
<td>
`reparameterization_type`
</td>
<td>
Describes how samples from the distribution are reparameterized.
Currently this is one of the static instances
`tfd.FULLY_REPARAMETERIZED` or `tfd.NOT_REPARAMETERIZED`.
</td>
</tr><tr>
<td>
`submodules`
</td>
<td>
Sequence of all sub-modules.
Submodules are modules which are properties of this module, or found as
properties of modules which are properties of this module (and so on).
<pre class="devsite-click-to-copy prettyprint lang-py">
<code class="devsite-terminal" data-terminal-prefix=">>>">a = tf.Module()</code>
<code class="devsite-terminal" data-terminal-prefix=">>>">b = tf.Module()</code>
<code class="devsite-terminal" data-terminal-prefix=">>>">c = tf.Module()</code>
<code class="devsite-terminal" data-terminal-prefix=">>>">a.b = b</code>
<code class="devsite-terminal" data-terminal-prefix=">>>">b.c = c</code>
<code class="devsite-terminal" data-terminal-prefix=">>>">list(a.submodules) == [b, c]</code>
<code class="no-select nocode">True</code>
<code class="devsite-terminal" data-terminal-prefix=">>>">list(b.submodules) == [c]</code>
<code class="no-select nocode">True</code>
<code class="devsite-terminal" data-terminal-prefix=">>>">list(c.submodules) == []</code>
<code class="no-select nocode">True</code>
</pre>
</td>
</tr><tr>
<td>
`trainable_variables`
</td>
<td>
Sequence of trainable variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance
and submodules. For performance reasons you may wish to cache the result
of calling this method if you don't expect the return value to change.
</td>
</tr><tr>
<td>
`validate_args`
</td>
<td>
Python `bool` indicating possibly expensive checks are enabled.
</td>
</tr><tr>
<td>
`variables`
</td>
<td>
Sequence of variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance
and submodules. For performance reasons you may wish to cache the result
of calling this method if you don't expect the return value to change.
</td>
</tr>
</table>
## Methods
<h3 id="batch_shape_tensor"><code>batch_shape_tensor</code></h3>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.12.1/tensorflow_probability/python/distributions/distribution.py#L825-L863">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>batch_shape_tensor(
name='batch_shape_tensor'
)
</code></pre>
Shape of a single sample from a single event index as a 1-D `Tensor`.
The batch dimensions are indexes into independent, non-identical
parameterizations of this distribution.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
<tr>
<td>
`name`
</td>
<td>
name to give to the op
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
<tr>
<td>
`batch_shape`
</td>
<td>
`Tensor`.
</td>
</tr>
</table>
<h3 id="cdf"><code>cdf</code></h3>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.12.1/tensorflow_probability/python/distributions/distribution.py#L1112-L1130">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>cdf(
value, name='cdf', **kwargs
)
</code></pre>
Cumulative distribution function.
Given random variable `X`, the cumulative distribution function `cdf` is:
```none
cdf(x) := P[X <= x]
```

| Args | |
|---|---|
| `value` | float or double `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `cdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
<h3 id="copy"><code>copy</code></h3>

```python
copy(
    **override_parameters_kwargs
)
```

Creates a deep copy of the distribution.

| Args | |
|---|---|
| `**override_parameters_kwargs` | String/value dictionary of initialization arguments to override with new values. |

| Returns | |
|---|---|
| `distribution` | A new instance of `type(self)` initialized from the union of `self.parameters` and `override_parameters_kwargs`, i.e., `dict(self.parameters, **override_parameters_kwargs)`. |
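A short usage sketch (hypothetical parameter values):

```python
normal = tfd.Normal(loc=0., scale=1.)
wider = normal.copy(scale=2.)  # Same type and loc; only scale is overridden.
wider.parameters['scale']      # ==> 2.0
```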
<h3 id="covariance"><code>covariance</code></h3>

```python
covariance(
    name='covariance', **kwargs
)
```

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-`k`, vector-valued distribution, it is calculated as,

```none
Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
```

where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,

```none
Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
```

where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.

| Args | |
|---|---|
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `covariance` | Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. |
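As an illustration (a minimal sketch), for a vector-valued distribution with `event_shape == [2]`, `covariance()` returns the `2 x 2` matrix `scale_tril @ scale_tril^T`:

```python
mvn = tfd.MultivariateNormalTriL(
    loc=[0., 0.], scale_tril=[[1., 0.], [2., 2.]])
mvn.covariance()  # shape [2, 2] ==> [[1., 2.], [2., 8.]]
```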
<h3 id="cross_entropy"><code>cross_entropy</code></h3>

```python
cross_entropy(
    other, name='cross_entropy'
)
```

Computes the (Shannon) cross entropy.

Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as:

```none
H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
```

where `F` denotes the support of the random variable `X ~ P`.

`other` types with built-in registrations: `Chi`, `ExpInverseGamma`, `GeneralizedExtremeValue`, `Gumbel`, `JohnsonSU`, `Kumaraswamy`, `LogLogistic`, `LogNormal`, `LogitNormal`, `Moyal`, `MultivariateNormalDiag`, `MultivariateNormalDiagPlusLowRank`, `MultivariateNormalFullCovariance`, `MultivariateNormalLinearOperator`, `MultivariateNormalTriL`, `RelaxedOneHotCategorical`, `SinhArcsinh`, `TransformedDistribution`, `VectorExponentialDiag`, `Weibull`

| Args | |
|---|---|
| `other` | `tfp.distributions.Distribution` instance. |
| `name` | Python `str` prepended to names of ops created by this function. |

| Returns | |
|---|---|
| `cross_entropy` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. |
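A quick sketch of the identity `H[P, Q] = H[P] + KL[P, Q]` with two Normals:

```python
p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)
h_pq = p.cross_entropy(q)
# h_pq == p.entropy() + p.kl_divergence(q), up to floating-point error.
```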
<h3 id="entropy"><code>entropy</code></h3>

```python
entropy(
    name='entropy', **kwargs
)
```

Shannon entropy in nats.
<h3 id="event_shape_tensor"><code>event_shape_tensor</code></h3>

```python
event_shape_tensor(
    name='event_shape_tensor'
)
```

Shape of a single sample from a single batch as a 1-D int32 `Tensor`.

| Args | |
|---|---|
| `name` | name to give to the op |

| Returns | |
|---|---|
| `event_shape` | `Tensor`. |
<h3 id="experimental_default_event_space_bijector"><code>experimental_default_event_space_bijector</code></h3>

```python
experimental_default_event_space_bijector(
    *args, **kwargs
)
```

Bijector mapping the reals (R**n) to the event space of the distribution.

Distributions with continuous support may implement `_default_event_space_bijector` which returns a subclass of `tfp.bijectors.Bijector` that maps R**n to the distribution's event space. For example, the default bijector for the `Beta` distribution is `tfp.bijectors.Sigmoid()`, which maps the real line to `[0, 1]`, the support of the `Beta` distribution. The default bijector for the `CholeskyLKJ` distribution is `tfp.bijectors.CorrelationCholesky`, which maps R^(k * (k-1) // 2) to the submanifold of k x k lower triangular matrices with ones along the diagonal.

The purpose of `experimental_default_event_space_bijector` is to enable gradient descent in an unconstrained space for Variational Inference and Hamiltonian Monte Carlo methods. Some effort has been made to choose bijectors such that the tails of the distribution in the unconstrained space are between Gaussian and Exponential.

For distributions with discrete event space, or for which TFP currently lacks a suitable bijector, this function returns `None`.

| Args | |
|---|---|
| `*args` | Passed to implementation `_default_event_space_bijector`. |
| `**kwargs` | Passed to implementation `_default_event_space_bijector`. |

| Returns | |
|---|---|
| `event_space_bijector` | `Bijector` instance or `None`. |
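For example (a minimal sketch), the `Beta` case described above:

```python
beta = tfd.Beta(concentration1=2., concentration0=3.)
bij = beta.experimental_default_event_space_bijector()
x = bij.forward(10.)  # Any real input lands in (0, 1), the support of Beta.
```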
<h3 id="is_scalar_batch"><code>is_scalar_batch</code></h3>

```python
is_scalar_batch(
    name='is_scalar_batch'
)
```

Indicates that `batch_shape == []`.

| Args | |
|---|---|
| `name` | Python `str` prepended to names of ops created by this function. |

| Returns | |
|---|---|
| `is_scalar_batch` | `bool` scalar `Tensor`. |
<h3 id="is_scalar_event"><code>is_scalar_event</code></h3>

```python
is_scalar_event(
    name='is_scalar_event'
)
```

Indicates that `event_shape == []`.

| Args | |
|---|---|
| `name` | Python `str` prepended to names of ops created by this function. |

| Returns | |
|---|---|
| `is_scalar_event` | `bool` scalar `Tensor`. |
<h3 id="kl_divergence"><code>kl_divergence</code></h3>

```python
kl_divergence(
    other, name='kl_divergence'
)
```

Computes the Kullback--Leibler divergence.

Denote this distribution (`self`) by `p` and the `other` distribution by `q`. Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as:

```none
KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]
```

where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy.

`other` types with built-in registrations: `Chi`, `ExpInverseGamma`, `GeneralizedExtremeValue`, `Gumbel`, `JohnsonSU`, `Kumaraswamy`, `LogLogistic`, `LogNormal`, `LogitNormal`, `Moyal`, `MultivariateNormalDiag`, `MultivariateNormalDiagPlusLowRank`, `MultivariateNormalFullCovariance`, `MultivariateNormalLinearOperator`, `MultivariateNormalTriL`, `RelaxedOneHotCategorical`, `SinhArcsinh`, `TransformedDistribution`, `VectorExponentialDiag`, `Weibull`

| Args | |
|---|---|
| `other` | `tfp.distributions.Distribution` instance. |
| `name` | Python `str` prepended to names of ops created by this function. |

| Returns | |
|---|---|
| `kl_divergence` | `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. |
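A short sketch checking the decomposition `KL[p, q] = H[p, q] - H[p]`:

```python
p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)
kl = p.kl_divergence(q)  # approx 0.443 for these parameters.
# kl == p.cross_entropy(q) - p.entropy(), up to floating-point error.
```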
<h3 id="log_cdf"><code>log_cdf</code></h3>

```python
log_cdf(
    value, name='log_cdf', **kwargs
)
```

Log cumulative distribution function.

Given random variable `X`, the cumulative distribution function `cdf` is:

```none
log_cdf(x) := Log[ P[X <= x] ]
```

Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.

| Args | |
|---|---|
| `value` | float or double `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `logcdf` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
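A minimal sketch of the stability point above, deep in a standard Normal's left tail (float32):

```python
d = tfd.Normal(loc=0., scale=1.)
d.log_cdf(-20.)           # Finite: approx -203.9.
tf.math.log(d.cdf(-20.))  # cdf underflows to 0 in float32 ==> -inf.
```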
<h3 id="log_prob"><code>log_prob</code></h3>

```python
log_prob(
    value, name='log_prob', **kwargs
)
```

Log probability density/mass function.

| Args | |
|---|---|
| `value` | float or double `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `log_prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
<h3 id="log_survival_function"><code>log_survival_function</code></h3>

```python
log_survival_function(
    value, name='log_survival_function', **kwargs
)
```

Log survival function.

Given random variable `X`, the survival function is defined:

```none
log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]
```

Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.

| Args | |
|---|---|
| `value` | float or double `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| | `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
<h3 id="mean"><code>mean</code></h3>

```python
mean(
    name='mean', **kwargs
)
```

Mean.
<h3 id="mode"><code>mode</code></h3>

```python
mode(
    name='mode', **kwargs
)
```

Mode.
<h3 id="param_shapes"><code>param_shapes</code></h3>

```python
@classmethod
param_shapes(
    sample_shape, name='DistributionParamShapes'
)
```

Shapes of parameters given the desired shape of a call to `sample()`. (deprecated)

This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.

Subclasses should override class method `_param_shapes`.

| Args | |
|---|---|
| `sample_shape` | `Tensor` or python list/tuple. Desired shape of a call to `sample()`. |
| `name` | name to prepend ops with. |

| Returns | |
|---|---|
| | `dict` of parameter name to `Tensor` shapes. |
<h3 id="param_static_shapes"><code>param_static_shapes</code></h3>

```python
@classmethod
param_static_shapes(
    sample_shape
)
```

`param_shapes` with static (i.e. `TensorShape`) shapes. (deprecated)

This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.

Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.

| Args | |
|---|---|
| `sample_shape` | `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. |

| Returns | |
|---|---|
| | `dict` of parameter name to `TensorShape`. |

| Raises | |
|---|---|
| `ValueError` | if `sample_shape` is a `TensorShape` and is not fully defined. |
<h3 id="parameter_properties"><code>parameter_properties</code></h3>

```python
@classmethod
parameter_properties(
    dtype=tf.float32, num_classes=None
)
```

Returns a dict mapping constructor arg names to property annotations.

This dict should include an entry for each of the distribution's `Tensor`-valued constructor arguments.

| Args | |
|---|---|
| `dtype` | Optional float `dtype` to assume for continuous-valued parameters. Some constraining bijectors require advance knowledge of the dtype because certain constants (e.g., `tfb.Softplus.low`) must be instantiated with the same dtype as the values to be transformed. |
| `num_classes` | Optional `int` `Tensor` number of classes to assume when inferring the shape of parameters for categorical-like distributions. Otherwise ignored. |

| Returns | |
|---|---|
| `parameter_properties` | A `str -> tfp.python.internal.parameter_properties.ParameterProperties` dict mapping constructor argument names to `ParameterProperties` instances. |
<h3 id="prob"><code>prob</code></h3>

```python
prob(
    value, name='prob', **kwargs
)
```

Probability density/mass function.

| Args | |
|---|---|
| `value` | float or double `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `prob` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
<h3 id="quantile"><code>quantile</code></h3>

```python
quantile(
    value, name='quantile', **kwargs
)
```

Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable `X` and `p in [0, 1]`, the `quantile` is:

```none
quantile(p) := x such that P[X <= x] == p
```

| Args | |
|---|---|
| `value` | float or double `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `quantile` | a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
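A round-trip sketch, since `quantile` inverts `cdf`:

```python
d = tfd.Normal(loc=0., scale=1.)
x = d.quantile(0.975)  # approx 1.96
d.cdf(x)               # ==> approx 0.975
```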
<h3 id="sample"><code>sample</code></h3>

```python
sample(
    sample_shape=(), seed=None, name='sample', **kwargs
)
```

Generate samples of the specified shape.

Note that a call to `sample()` without arguments will generate a single sample.

| Args | |
|---|---|
| `sample_shape` | 0D or 1D `int32` `Tensor`. Shape of the generated samples. |
| `seed` | Python integer or `tfp.util.SeedStream` instance, for seeding PRNG. |
| `name` | name to give to the op. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `samples` | a `Tensor` with prepended dimensions `sample_shape`. |
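A shape sketch for a scalar distribution (`batch_shape == event_shape == []`):

```python
d = tfd.Normal(loc=0., scale=1.)
d.sample().shape                 # ==> []
d.sample(5).shape                # ==> [5]
d.sample([3, 2], seed=42).shape  # ==> [3, 2]
```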
<h3 id="stddev"><code>stddev</code></h3>

```python
stddev(
    name='stddev', **kwargs
)
```

Standard deviation.

Standard deviation is defined as,

```none
stddev = E[(X - E[X])**2]**0.5
```

where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.

| Args | |
|---|---|
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `stddev` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
<h3 id="survival_function"><code>survival_function</code></h3>

```python
survival_function(
    value, name='survival_function', **kwargs
)
```

Survival function.

Given random variable `X`, the survival function is defined:

```none
survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).
```

| Args | |
|---|---|
| `value` | float or double `Tensor`. |
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| | `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. |
<h3 id="variance"><code>variance</code></h3>

```python
variance(
    name='variance', **kwargs
)
```

Variance.

Variance is defined as,

```none
Var = E[(X - E[X])**2]
```

where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.

| Args | |
|---|---|
| `name` | Python `str` prepended to names of ops created by this function. |
| `**kwargs` | Named arguments forwarded to subclass implementation. |

| Returns | |
|---|---|
| `variance` | Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. |
<h3 id="with_name_scope"><code>with_name_scope</code></h3>

```python
@classmethod
with_name_scope(
    method
)
```

Decorator to automatically enter the module name scope.

```python
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
```

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

```python
mod = MyModule()
mod(tf.ones([1, 2]))
# ==> <tf.Tensor: shape=(1, 3), dtype=float32, numpy=...>
mod.w
# ==> <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=...>
```

| Args | |
|---|---|
| `method` | The method to wrap. |

| Returns | |
|---|---|
| | The original method wrapped such that it enters the module's name scope. |
<h3 id="__getitem__"><code>__getitem__</code></h3>

```python
__getitem__(
    slices
)
```

Slices the batch axes of this distribution, returning a new instance.

```python
b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.normal([5, 3, 2, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.normal([4, 1, 3, 1])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]
```

| Args | |
|---|---|
| `slices` | slices from the [] operator |

| Returns | |
|---|---|
| `dist` | A new `tfd.Distribution` instance with sliced parameters. |
<h3 id="__iter__"><code>__iter__</code></h3>

```python
__iter__()
```