Affine MaskedAutoregressiveFlow bijector.
Inherits From: Bijector
tfp.bijectors.MaskedAutoregressiveFlow(
    shift_and_log_scale_fn=None, bijector_fn=None, is_constant_jacobian=False,
    validate_args=False, unroll_loop=False, event_ndims=1, name=None
)
The affine autoregressive flow [(Papamakarios et al., 2017)][3] provides a relatively simple framework for user-specified (deep) architectures to learn a distribution over continuous events. Regarding terminology,

'Autoregressive models decompose the joint density as a product of conditionals, and model each conditional in turn. Normalizing flows transform a base density (e.g. a standard Gaussian) into the target density by an invertible transformation with tractable Jacobian.' [(Papamakarios et al., 2017)][3]
In other words, the 'autoregressive property' is equivalent to the decomposition `p(x) = prod{ p(x[perm[i]] | x[perm[0:i]]) : i=0, ..., d }`, where `perm` is some permutation of `{0, ..., d}`. In the simple case where the permutation is identity this reduces to `p(x) = prod{ p(x[i] | x[0:i]) : i=0, ..., d }`.
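For concreteness, here is a minimal sketch of the identity-permutation factorization in two dimensions, written with `tfd.JointDistributionSequential`; the particular conditional parameterization below is an illustrative assumption, not part of this API.

import tensorflow_probability as tfp
tfd = tfp.distributions

# p(x) = p(x[0]) * p(x[1] | x[0]), with an arbitrary illustrative conditional.
joint = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=1.),                   # p(x[0])
    lambda x0: tfd.Normal(loc=0.5 * x0, scale=1.),  # p(x[1] | x[0])
])
x = joint.sample()
log_p = joint.log_prob(x)  # = log p(x[0]) + log p(x[1] | x[0])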
In TensorFlow Probability, 'normalizing flows' are implemented as `tfp.bijectors.Bijector`s. The forward 'autoregression' is implemented using a `tf.while_loop` and a deep neural network (DNN) with masked weights such that the autoregressive property is automatically met in the inverse.
A `TransformedDistribution` using `MaskedAutoregressiveFlow(...)` uses the (expensive) forward-mode calculation to draw samples and the (cheap) reverse-mode calculation to compute log-probabilities. Conversely, a `TransformedDistribution` using `Invert(MaskedAutoregressiveFlow(...))` uses the (expensive) forward-mode calculation to compute log-probabilities and the (cheap) reverse-mode calculation to compute samples. See 'Example Use' [below] for more details.
Given a `shift_and_log_scale_fn`, the forward and inverse transformations are (a sequence of) affine transformations. A 'valid' `shift_and_log_scale_fn` must compute each `shift` (aka `loc` or 'mu' in [Germain et al. (2015)][1]) and `log(scale)` (aka 'alpha' in [Germain et al. (2015)][1]) such that each is broadcastable with the arguments to `forward` and `inverse`, i.e., such that the calculations in `forward` and `inverse` [below] are possible.
For convenience, `tfp.bijectors.AutoregressiveNetwork` is offered as a possible `shift_and_log_scale_fn` function. It implements the MADE architecture [(Germain et al., 2015)][1]. MADE is a feed-forward network that computes a `shift` and `log(scale)` using masked dense layers in a deep neural network. Weights are masked to ensure the autoregressive property. It is possible that this architecture is suboptimal for your task. To build alternative networks, either change the arguments to `tfp.bijectors.AutoregressiveNetwork` or use some other architecture, e.g., using `tf.keras.layers`.
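As a rough sketch of the first option (the argument values below are illustrative, not recommendations):

import tensorflow_probability as tfp
tfb = tfp.bijectors

# A wider MADE with an explicit event shape; all sizes are illustrative.
shift_and_log_scale_fn = tfb.AutoregressiveNetwork(
    params=2,               # one `shift` and one `log_scale` per dimension
    event_shape=[2],
    hidden_units=[64, 64],
    activation='relu')
maf_bijector = tfb.MaskedAutoregressiveFlow(
    shift_and_log_scale_fn=shift_and_log_scale_fn)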
Assuming `shift_and_log_scale_fn` has valid shape and autoregressive semantics, the forward transformation is

def forward(x):
  y = zeros_like(x)
  event_size = x.shape[-event_ndims:].num_elements()
  for _ in range(event_size):
    shift, log_scale = shift_and_log_scale_fn(y)
    y = x * tf.exp(log_scale) + shift
  return y
and the inverse transformation is

def inverse(y):
  shift, log_scale = shift_and_log_scale_fn(y)
  return (y - shift) / tf.exp(log_scale)
Notice that the `inverse` does not need a for-loop. This is because in the forward pass each calculation of `shift` and `log_scale` is based on the `y` calculated so far (not `x`). In the `inverse`, the `y` is fully known, thus the single evaluation of `shift` and `log_scale` is equivalent to the one used in `forward` after `event_size` passes, i.e., the one computed from the 'last' `y`. (Roughly speaking, this also proves the transform is bijective.)
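A quick numerical way to see this is to check that `inverse` undoes `forward`; the shapes and layer sizes below are arbitrary, illustrative choices:

import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

bij = tfb.MaskedAutoregressiveFlow(
    shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
        params=2, event_shape=[3], hidden_units=[16]))
x = tf.random.normal([5, 3])
y = bij.forward(x)       # iterates `event_size` times internally
x_rec = bij.inverse(y)   # a single pass through the network
# x_rec ~= x (up to floating-point error and Bijector caching)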
The `bijector_fn` argument allows specifying a more general coupling relation, such as the LSTM-inspired activation from [4], or Neural Spline Flow [5]. It must logically operate on each element of the input individually, and still obey the 'autoregressive property' described above. The forward transformation is
def forward(x):
  y = zeros_like(x)
  event_size = x.shape[-event_ndims:].num_elements()
  for _ in range(event_size):
    bijector = bijector_fn(y)
    y = bijector.forward(x)
  return y
and the inverse transformation is

def inverse(y):
  bijector = bijector_fn(y)
  return bijector.inverse(y)
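For example, an affine `bijector_fn` that reproduces the `shift_and_log_scale_fn` behavior could be sketched as follows; the network, names, and unstacking convention here are assumptions for illustration:

import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

made = tfb.AutoregressiveNetwork(params=2, event_shape=[3], hidden_units=[32])

def bijector_fn(y, **condition_kwargs):
  # `made(y)` has shape [..., 3, 2]; split into per-element shift/log_scale.
  shift, log_scale = tf.unstack(made(y), num=2, axis=-1)
  # Forward: x * exp(log_scale) + shift, applied elementwise.
  return tfb.Shift(shift)(tfb.Scale(log_scale=log_scale))

maf_bijector = tfb.MaskedAutoregressiveFlow(bijector_fn=bijector_fn)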
Examples
tfd = tfp.distributions
tfb = tfp.bijectors
dims = 2
# A common choice for a normalizing flow is to use a Gaussian for the base
# distribution. (However, any continuous distribution would work.) Here, we
# use `tfd.Sample` to create a joint Gaussian distribution with diagonal
# covariance for the base distribution (note that in the Gaussian case,
# `tfd.MultivariateNormalDiag` could also be used.)
maf = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=[512, 512])))
x = maf.sample() # Expensive; uses `tf.while_loop`, no Bijector caching.
maf.log_prob(x) # Almost free; uses Bijector caching.
# Cheap; no `tf.while_loop` despite no Bijector caching.
maf.log_prob(tf.zeros(dims))
# [Papamakarios et al. (2017)][3] also describe an Inverse Autoregressive
# Flow [(Kingma et al., 2016)][2]:
iaf = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.Invert(tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=[512, 512]))))
x = iaf.sample() # Cheap; no `tf.while_loop` despite no Bijector caching.
iaf.log_prob(x) # Almost free; uses Bijector caching.
# Expensive; uses `tf.while_loop`, no Bijector caching.
iaf.log_prob(tf.zeros(dims))
# In many (if not most) cases the default `shift_and_log_scale_fn` will be a
# poor choice. Here's an example of using a 'shift only' version and with a
# different number/depth of hidden layers.
made = tfb.AutoregressiveNetwork(params=1, hidden_units=[32])
maf_no_scale_hidden2 = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        tfd.Normal(loc=0., scale=1.), sample_shape=[dims]),
    bijector=tfb.MaskedAutoregressiveFlow(
        lambda y: (made(y)[..., 0], None),
        is_constant_jacobian=True))
maf_no_scale_hidden2._made = made  # Ensure `made` variables are tracked.
# NOTE: The last line ensures that maf_no_scale_hidden2.trainable_variables
# will include all variables from `made`.
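A minimal sketch of fitting the `maf` distribution above by maximum likelihood; the optimizer, learning rate, step count, and the stand-in `data` batch are all illustrative assumptions:

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
data = tf.random.normal([128, dims])  # stand-in for a real training batch

for _ in range(100):
  with tf.GradientTape() as tape:
    loss = -tf.reduce_mean(maf.log_prob(data))
  grads = tape.gradient(loss, maf.trainable_variables)
  optimizer.apply_gradients(zip(grads, maf.trainable_variables))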
Variable Tracking
A `tfb.MaskedAutoregressiveFlow` instance saves a reference to the values passed as `shift_and_log_scale_fn` and `bijector_fn` to its constructor. Thus, for most values passed as `shift_and_log_scale_fn` or `bijector_fn`, variables referenced by those values will be found and tracked by the `tfb.MaskedAutoregressiveFlow` instance. Please see the `tf.Module` documentation for further details.
However, if the value passed to `shift_and_log_scale_fn` or `bijector_fn` is a Python function, then `tfb.MaskedAutoregressiveFlow` cannot automatically track variables used inside `shift_and_log_scale_fn` or `bijector_fn`. To get `tfb.MaskedAutoregressiveFlow` to track such variables, either:

- Replace the Python function with a `tf.Module`, `tf.keras.Layer`, or other callable object through which `tf.Module` can find variables.
- Or, add a reference to the variables to the `tfb.MaskedAutoregressiveFlow` instance by setting an attribute -- for example:

  made1 = tfb.AutoregressiveNetwork(params=1, hidden_units=[10, 10])
  made2 = tfb.AutoregressiveNetwork(params=1, hidden_units=[10, 10])
  maf = tfb.MaskedAutoregressiveFlow(lambda y: (made1(y), made2(y) + 1.))
  maf._made_variables = made1.variables + made2.variables
References
[1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked Autoencoder for Distribution Estimation. In International Conference on Machine Learning, 2015. https://arxiv.org/abs/1502.03509
[2]: Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Neural Information Processing Systems, 2016. https://arxiv.org/abs/1606.04934
[3]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density Estimation. In Neural Information Processing Systems, 2017. https://arxiv.org/abs/1705.07057
[4]: Diederik P Kingma, Tim Salimans, Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Neural Information Processing Systems, 2016. https://arxiv.org/abs/1606.04934
[5]: Conor Durkan, Artur Bekasov, Iain Murray, George Papamakarios. Neural Spline Flows, 2019. http://arxiv.org/abs/1906.04032
Args | |
---|---|
`shift_and_log_scale_fn` | Python callable which computes `shift` and `log_scale` from the inverse domain (`y`). Calculation must respect the 'autoregressive property' (see class docstring). Suggested default: `tfb.AutoregressiveNetwork(params=2, hidden_units=...)`. Typically the function contains `tf.Variable`s. Returning `None` for either (or both) of `shift` and `log_scale` is equivalent to (but more efficient than) returning zero. If `shift_and_log_scale_fn` returns a single `Tensor`, the returned value will be unstacked to get the `shift` and `log_scale`: `tf.unstack(shift_and_log_scale_fn(y), num=2, axis=-1)`. |
`bijector_fn` | Python callable which returns a `tfb.Bijector` which transforms event tensor with the signature `(input, **condition_kwargs) -> bijector`. The bijector must operate on scalar events and must not alter the rank of its input. The `bijector_fn` will be called with `Tensor`s from the inverse domain (`y`). Calculation must respect the 'autoregressive property' (see class docstring). |
`is_constant_jacobian` | Python `bool`. Default: `False`. When `True` the implementation assumes `log_scale` does not depend on the forward domain (`x`) or inverse domain (`y`) values. (No validation is made; `is_constant_jacobian=False` is always safe but possibly computationally inefficient.) |
`validate_args` | Python `bool` indicating whether arguments should be checked for correctness. |
`unroll_loop` | Python `bool` indicating whether the `tf.while_loop` in `_forward` should be replaced with a static `for` loop. Requires that the final dimension of `x` be known at graph construction time. Defaults to `False`. |
`event_ndims` | Python `integer`, the intrinsic dimensionality of this bijector. 1 corresponds to a simple vector autoregressive bijector as implemented by `tfp.bijectors.AutoregressiveNetwork`; 2 might be useful for a 2D convolutional `shift_and_log_scale_fn`, and so on. |
`name` | Python `str`, name given to ops managed by this object. |
Raises | |
---|---|
`ValueError` | If both or none of `shift_and_log_scale_fn` and `bijector_fn` are specified. |
Attributes | |
---|---|
`dtype` | |
`forward_min_event_ndims` | Returns the minimal number of dimensions `bijector.forward` operates on. Multipart bijectors return structured `forward_min_event_ndims`. |
`graph_parents` | Returns this `Bijector`'s graph_parents as a Python list. |
`has_static_min_event_ndims` | Returns True if the bijector has statically-known `min_event_ndims`. |
`inverse_min_event_ndims` | Returns the minimal number of dimensions `bijector.inverse` operates on. Multipart bijectors return structured `inverse_min_event_ndims`. |
`is_constant_jacobian` | Returns true iff the Jacobian matrix is not a function of x. |
`name` | Returns the string name of this `Bijector`. |
`name_scope` | Returns a `tf.name_scope` instance for this class. |
`parameters` | Dictionary of parameters used to instantiate this `Bijector`. |
`submodules` | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). |
`trainable_variables` | Sequence of trainable variables owned by this module and its submodules. |
`validate_args` | Returns True if Tensor arguments will be validated. |
`variables` | Sequence of variables owned by this module and its submodules. |
Methods
forward

forward(
    x, name='forward', **kwargs
)

Returns the forward `Bijector` evaluation, i.e., Y = g(X).
Args | |
---|---|
`x` | `Tensor` (structure). The input to the 'forward' evaluation. |
`name` | The name to give this op. |
`**kwargs` | Named arguments forwarded to subclass implementation. |

Returns | |
---|---|
`Tensor` (structure). | |

Raises | |
---|---|
`TypeError` | if `self.dtype` is specified and `x.dtype` is not `self.dtype`. |
`NotImplementedError` | if `_forward` is not implemented. |
forward_dtype

forward_dtype(
    dtype=UNSPECIFIED, name='forward_dtype', **kwargs
)

Returns the dtype returned by `forward` for the provided input.
forward_event_ndims

forward_event_ndims(
    event_ndims, **kwargs
)

Returns the number of event dimensions produced by `forward`.
forward_event_shape

forward_event_shape(
    input_shape
)

Shape of a single sample from a single batch as a `TensorShape`. Same meaning as `forward_event_shape_tensor`. May be only partially defined.

Args | |
---|---|
`input_shape` | `TensorShape` (structure) indicating event-portion shape passed into `forward` function. |

Returns | |
---|---|
`forward_event_shape_tensor` | `TensorShape` (structure) indicating event-portion shape after applying `forward`. Possibly unknown. |
forward_event_shape_tensor

forward_event_shape_tensor(
    input_shape, name='forward_event_shape_tensor'
)

Shape of a single sample from a single batch as an `int32` 1D `Tensor`.

Args | |
---|---|
`input_shape` | `Tensor`, `int32` vector (structure) indicating event-portion shape passed into `forward` function. |
`name` | name to give to the op |

Returns | |
---|---|
`forward_event_shape_tensor` | `Tensor`, `int32` vector (structure) indicating event-portion shape after applying `forward`. |
forward_log_det_jacobian

forward_log_det_jacobian(
    x, event_ndims, name='forward_log_det_jacobian', **kwargs
)

Returns the forward_log_det_jacobian.
Args | |
---|---|
`x` | `Tensor` (structure). The input to the 'forward' Jacobian determinant evaluation. |
`event_ndims` | Number of dimensions in the probabilistic events being transformed. Must be greater than or equal to `self.forward_min_event_ndims`. The result is summed over the final dimensions to produce a scalar Jacobian determinant for each event, i.e. it has shape `rank(x) - event_ndims` dimensions. Multipart bijectors require structured event_ndims, such that `rank(y[i]) - rank(event_ndims[i])` is the same for all elements `i` of the structured input. Furthermore, the first `event_ndims[i]` of each `x[i].shape` must be the same for all `i` (broadcasting is not allowed). |
`name` | The name to give this op. |
`**kwargs` | Named arguments forwarded to subclass implementation. |

Returns | |
---|---|
`Tensor` (structure), if this bijector is injective. If not injective this is not implemented. | |

Raises | |
---|---|
`TypeError` | if `y`'s dtype is incompatible with the expected output dtype. |
`NotImplementedError` | if neither `_forward_log_det_jacobian` nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented, or this is a non-injective bijector. |
inverse

inverse(
    y, name='inverse', **kwargs
)

Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
Args | |
---|---|
`y` | `Tensor` (structure). The input to the 'inverse' evaluation. |
`name` | The name to give this op. |
`**kwargs` | Named arguments forwarded to subclass implementation. |

Returns | |
---|---|
`Tensor` (structure), if this bijector is injective. If not injective, returns the k-tuple containing the unique k points `(x1, ..., xk)` such that `g(xi) = y`. | |

Raises | |
---|---|
`TypeError` | if `y`'s structured dtype is incompatible with the expected output dtype. |
`NotImplementedError` | if `_inverse` is not implemented. |
inverse_dtype

inverse_dtype(
    dtype=UNSPECIFIED, name='inverse_dtype', **kwargs
)

Returns the dtype returned by `inverse` for the provided input.
inverse_event_ndims

inverse_event_ndims(
    event_ndims, **kwargs
)

Returns the number of event dimensions produced by `inverse`.
inverse_event_shape

inverse_event_shape(
    output_shape
)

Shape of a single sample from a single batch as a `TensorShape`. Same meaning as `inverse_event_shape_tensor`. May be only partially defined.

Args | |
---|---|
`output_shape` | `TensorShape` (structure) indicating event-portion shape passed into `inverse` function. |

Returns | |
---|---|
`inverse_event_shape_tensor` | `TensorShape` (structure) indicating event-portion shape after applying `inverse`. Possibly unknown. |
inverse_event_shape_tensor

inverse_event_shape_tensor(
    output_shape, name='inverse_event_shape_tensor'
)

Shape of a single sample from a single batch as an `int32` 1D `Tensor`.

Args | |
---|---|
`output_shape` | `Tensor`, `int32` vector (structure) indicating event-portion shape passed into `inverse` function. |
`name` | name to give to the op |

Returns | |
---|---|
`inverse_event_shape_tensor` | `Tensor`, `int32` vector (structure) indicating event-portion shape after applying `inverse`. |
inverse_log_det_jacobian

inverse_log_det_jacobian(
    y, event_ndims, name='inverse_log_det_jacobian', **kwargs
)

Returns the (log o det o Jacobian o inverse)(y).

Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X = g^{-1}(Y)`.)

Note that `forward_log_det_jacobian` is the negative of this function, evaluated at `g^{-1}(y)`.
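As a sketch of that relationship on a simple bijector (values are arbitrary):

import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

b = tfb.Exp()
x = tf.constant([0.5, 1.5])
fldj = b.forward_log_det_jacobian(x, event_ndims=0)
ildj = b.inverse_log_det_jacobian(b.forward(x), event_ndims=0)
# fldj == -ildj  (up to numerical precision)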
Args | |
---|---|
`y` | `Tensor` (structure). The input to the 'inverse' Jacobian determinant evaluation. |
`event_ndims` | Number of dimensions in the probabilistic events being transformed. Must be greater than or equal to `self.inverse_min_event_ndims`. The result is summed over the final dimensions to produce a scalar Jacobian determinant for each event, i.e. it has shape `rank(y) - event_ndims` dimensions. Multipart bijectors require structured event_ndims, such that `rank(y[i]) - rank(event_ndims[i])` is the same for all elements `i` of the structured input. Furthermore, the first `event_ndims[i]` of each `x[i].shape` must be the same for all `i` (broadcasting is not allowed). |
`name` | The name to give this op. |
`**kwargs` | Named arguments forwarded to subclass implementation. |

Returns | |
---|---|
`ildj` | `Tensor`, if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, `log(det(Dg_i^{-1}(y)))`, where `g_i` is the restriction of `g` to the `i`th partition `Di`. |

Raises | |
---|---|
`TypeError` | if `x`'s dtype is incompatible with the expected inverse-dtype. |
`NotImplementedError` | if `_inverse_log_det_jacobian` is not implemented. |
with_name_scope

@classmethod
with_name_scope(
    method
)

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names include the module name:

mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args | |
---|---|
`method` | The method to wrap. |

Returns | |
---|---|
The original method wrapped such that it enters the module's name scope. | |
__call__

__call__(
    value, name=None, **kwargs
)

Applies or composes the `Bijector`, depending on input type.

This is a convenience function which applies the `Bijector` instance in three different ways, depending on the input:

- If the input is a `tfd.Distribution` instance, return `tfd.TransformedDistribution(distribution=input, bijector=self)`.
- If the input is a `tfb.Bijector` instance, return `tfb.Chain([self, input])`.
- Otherwise, return `self.forward(input)`.
Args | |
---|---|
`value` | A `tfd.Distribution`, `tfb.Bijector`, or a (structure of) `Tensor`. |
`name` | Python `str` name given to ops created by this function. |
`**kwargs` | Additional keyword arguments passed into the created `tfd.TransformedDistribution`, `tfb.Bijector`, or `self.forward`. |

Returns | |
---|---|
`composition` | A `tfd.TransformedDistribution` if the input was a `tfd.Distribution`, a `tfb.Chain` if the input was a `tfb.Bijector`, or a (structure of) `Tensor` computed by `self.forward`. |
Examples
sigmoid = tfb.Reciprocal()(
    tfb.Shift(shift=1.)(
        tfb.Exp()(
            tfb.Scale(scale=-1.))))
# ==> `tfb.Chain([
#        tfb.Reciprocal(),
#        tfb.Shift(shift=1.),
#        tfb.Exp(),
#        tfb.Scale(scale=-1.),
#      ])`  # ie, `tfb.Sigmoid()`

log_normal = tfb.Exp()(tfd.Normal(0, 1))
# ==> `tfd.TransformedDistribution(tfd.Normal(0, 1), tfb.Exp())`

tfb.Exp()([-1., 0., 1.])
# ==> tf.exp([-1., 0., 1.])
__eq__
__eq__(
    other
)
Return self==value.