
# tfp.layers.AutoregressiveTransform

An autoregressive normalizing flow layer.

Inherits From: `DistributionLambda`

Following [Papamakarios et al. (2017)][1], given an autoregressive model p(x) with conditional distributions in the location-scale family, we can construct a normalizing flow for p(x).

Specifically, suppose `made` is a `tfb.AutoregressiveNetwork` -- a layer implementing a Masked Autoencoder for Distribution Estimation (MADE) -- that computes location and log-scale parameters `made(x)[i]` for each input `x[i]`. Then we can represent the autoregressive model `p(x)` as `x = f(u)`, where `u` is drawn from some base distribution and where `f` is an invertible and differentiable function (i.e., a `Bijector`) whose inverse `f^{-1}(x)` is defined by:

```python
def f_inverse(x):
  shift, log_scale = tf.unstack(made(x), 2, axis=-1)
  return (x - shift) * tf.math.exp(-log_scale)
```
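
The forward function `f` (used for sampling) has no single-pass closed form: `x[i]` depends on `x[:i]` through `made`. It can, however, be computed exactly by iterating the location-scale update once per event dimension, since each sweep fixes one more coordinate. A minimal sketch, assuming `event_size` is the size of the event dimension (2 in the example below):

```python
def f(u):
  x = tf.zeros_like(u)
  for _ in range(event_size):
    # made(x) only looks at coordinates preceding each position, so after
    # k sweeps the first k coordinates of x are exact.
    shift, log_scale = tf.unstack(made(x), 2, axis=-1)
    x = shift + u * tf.math.exp(log_scale)
  return x
```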

Given a `tfb.AutoregressiveNetwork` layer `made`, an `AutoregressiveTransform` layer transforms an input `tfd.Distribution` p(u) to an output `tfd.Distribution` p(x), where `x = f(u)`.

For additional details, see the `tfb.MaskedAutoregressiveFlow` bijector and the `tfb.AutoregressiveNetwork` layer.
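
Outside of Keras, roughly the same distribution can be built directly from those TFP primitives. A minimal sketch, where `made` is a `tfb.AutoregressiveNetwork` with `params=2` and `p_u` is a base distribution with matching event shape (both names come from the surrounding text, not fixed API):

```python
# p(x) as a base distribution pushed through a masked autoregressive flow.
maf_bijector = tfb.MaskedAutoregressiveFlow(shift_and_log_scale_fn=made)
p_x = tfd.TransformedDistribution(distribution=p_u, bijector=maf_bijector)
```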

#### Example

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfpl = tfp.layers
tfb = tfp.bijectors
tfk = tf.keras

# Generate data -- as in Figure 1 of [Papamakarios et al. (2017)][1].
n = 2000
x2 = np.random.randn(n) * 2
x1 = np.random.randn(n) + (x2 * x2 / 4)
data = np.stack([x1, x2], axis=-1)

model = tfk.Sequential([
    # NOTE: This model takes no input and outputs a Distribution.  (We use
    # the batch_size and type of the input, but there are no actual input
    # values because the last dimension of the shape is 0.)
    #
    # For conditional density estimation, the model would take the
    # conditioning values as input.
    tfk.layers.InputLayer(input_shape=(0,), dtype=tf.float32),

    # Given the empty input, return a standard normal distribution with
    # matching batch_shape and event_shape of [2].
    tfpl.DistributionLambda(lambda t: tfd.MultivariateNormalDiag(
        loc=tf.zeros(tf.concat([tf.shape(t)[:-1], [2]], axis=0)),
        scale_diag=[1., 1.])),

    # Transform the standard normal distribution with event_shape of [2] to
    # the target distribution with event_shape of [2].
    tfpl.AutoregressiveTransform(tfb.AutoregressiveNetwork(
        params=2, hidden_units=[10], activation='relu')),
])

model.compile(optimizer=tf.optimizers.Adam(),
              loss=lambda y, rv_y: -rv_y.log_prob(y))

model.fit(x=np.zeros((n, 0)),
          y=data,
          batch_size=25,
          epochs=10,
          steps_per_epoch=n // 25,
          verbose=True)

# Use the fitted distribution.
distribution = model(np.zeros((0,)))
distribution.sample(4)
distribution.log_prob(np.zeros((5, 3, 2)))
```
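
To eyeball the fit, compare samples drawn from the learned flow with the training data; the two scatter plots should show the same shape. A minimal sketch, assuming `matplotlib` is installed and the model above has been fit:

```python
import matplotlib.pyplot as plt

# Batch of n copies of the fitted distribution; sample one point from each.
samples = model(np.zeros((n, 0))).sample().numpy()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4), sharex=True, sharey=True)
ax1.scatter(data[:, 0], data[:, 1], s=3)
ax1.set_title('training data')
ax2.scatter(samples[:, 0], samples[:, 1], s=3)
ax2.set_title('samples from the fitted flow')
plt.show()
```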

#### References

[1]: George Papamakarios, Theo Pavlakou, Iain Murray, Masked Autoregressive Flow for Density Estimation. In Neural Information Processing Systems, 2017. https://arxiv.org/abs/1705.07057

Args
`made` A `Made` layer, which must output two parameters for each input.
`**kwargs` Additional keyword arguments passed to `tf.keras.Layer`.

Attributes
`activity_regularizer` Optional regularizer function for the output of this layer.
`compute_dtype` The dtype of the layer's computations.

This is equivalent to `Layer.dtype_policy.compute_dtype`. Unless mixed precision is used, this is the same as `Layer.dtype`, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in `Layer.call`, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when `compute_dtype` is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.
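
The split between the two dtypes is easy to inspect. A minimal sketch, assuming the TF 2.4+ `tf.keras.mixed_precision` API:

```python
tf.keras.mixed_precision.set_global_policy('mixed_float16')
layer = tf.keras.layers.Dense(4)
layer.build((None, 8))
print(layer.compute_dtype)   # 'float16' -- computations and outputs
print(layer.variable_dtype)  # 'float32' -- the weights
tf.keras.mixed_precision.set_global_policy('float32')  # restore the default
```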

`dtype` The dtype of the layer weights.

This is equivalent to `Layer.dtype_policy.variable_dtype`. Unless mixed precision is used, this is the same as `Layer.compute_dtype`, the dtype of the layer's computations.

`dtype_policy` The dtype policy associated with this layer.

This is an instance of a `tf.keras.mixed_precision.Policy`.

`dynamic` Whether the layer is dynamic (eager-only); set in the constructor.
`input` Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

`input_spec` `InputSpec` instance(s) describing the input format for this layer.

When you create a layer subclass, you can set `self.input_spec` to enable the layer to run input compatibility checks when it is called. Consider a `Conv2D` layer: it can only be called on a single input tensor of rank 4. As such, you can set, in `__init__()`:

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape `(2,)`), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via `input_spec` include:

• Structure (e.g. a single input, a list of 2 inputs, etc.)
• Shape
• Rank (ndim)
• Dtype

For more information, see `tf.keras.layers.InputSpec`.
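
Beyond rank, an `input_spec` can pin the dtype and individual axes. A minimal sketch of a subclassed layer (the class name and the width of 16 are illustrative):

```python
class FixedWidthLayer(tf.keras.layers.Layer):
  def __init__(self):
    super().__init__()
    # Accept only float32 inputs of shape (batch, 16).
    self.input_spec = tf.keras.layers.InputSpec(
        ndim=2, axes={-1: 16}, dtype='float32')

  def call(self, inputs):
    return inputs
```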

`losses` List of losses added using the `add_loss()` API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a `tf.GradientTape` will propagate gradients back to the corresponding variables.

```python
>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
```
```python
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
```
```python
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
```
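
As noted above, the `losses` property is safe to use under a `tf.GradientTape`: adding the regularization terms to a task loss lets gradients flow back into the regularized variables. A minimal sketch:

```python
layer = tf.keras.layers.Dense(3, kernel_regularizer='l2')
x = tf.ones((2, 4))
with tf.GradientTape() as tape:
  y = layer(x)
  # Task loss plus the layer's regularization losses.
  loss = tf.reduce_sum(y) + tf.add_n(layer.losses)
grads = tape.gradient(loss, layer.trainable_weights)
```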

`metrics` List of metrics added using the `add_metric()` API.

```python
>>> input = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2)
>>> output = d(input)
>>> d.add_metric(tf.reduce_max(output), name='max')
>>> d.add_metric(tf.reduce_min(output), name='min')
>>> [m.name for m in d.metrics]
['max', 'min']
```

`name` Name of the layer (string), set in the constructor.
`name_scope` Returns a `tf.name_scope` instance for this class.
`non_trainable_weights` List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in `call()`.
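
For example, a layer can track how many times it has been called in a non-trainable weight that it updates itself inside `call()`; the optimizer never touches it. A minimal sketch (the class name is illustrative):

```python
class CallCounter(tf.keras.layers.Layer):
  def build(self, input_shape):
    self.calls = self.add_weight(
        name='calls', shape=(), dtype=tf.int64,
        initializer='zeros', trainable=False)

  def call(self, inputs):
    self.calls.assign_add(1)  # updated manually, not by gradient descent
    return inputs
```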

`output` Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

`submodules` Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

```python
>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
```

`supports_masking` Whether this layer supports computing a mask using `compute_mask`.
`trainable` Whether the layer should be trained (boolean), i.e., whether its potentially-trainable weights should be returned as part of `layer.trainable_weights`.

`trainable_weights` List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

`variable_dtype` Alias of `Layer.dtype`, the dtype of the weights.
`weights` Returns the list of all layer variables/weights.

## Methods

### `add_loss`

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs `a` and `b`, some entries in `layer.losses` may be dependent on `a` and some on `b`. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model's `call` function, in which case `losses` should be a Tensor or list of Tensors.

#### Example:

```python
class MyLayer(tf.keras.layers.Layer):
  def call(self, inputs):
    self.add_loss(tf.abs(tf.reduce_mean(inputs)))
    return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These losses become part of the model's topology and are tracked in `get_config`.

#### Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a `Variable` of one of the model's layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model's topology since they can't be serialized.

#### Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args
`losses` Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.
`**kwargs` Additional keyword arguments for backward compatibility. Accepted values: inputs - Deprecated, will be automatically inferred.

### `add_metric`

Adds metric tensor to the layer.

This method can be used inside the `call()` method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):
  def __init__(self):
    super(MyMetricLayer, self).__init__(name='my_metric_layer')
    self.mean = tf.keras.metrics.Mean(name='metric_1')

  def call(self, inputs):
    self.add_metric(self.mean(inputs))
    self.add_metric(tf.reduce_sum(inputs), name='metric_2')
    return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These metrics become part of the model's topology and are tracked when you save the model via `save()`.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.reduce_sum(x), name='metric_1')
```

Args
`value` Metric tensor.
`name` String metric name.
`**kwargs` Additional keyword arguments for backward compatibility. Accepted values: `aggregation` - When the `value` tensor provided is not the result of calling a `keras.Metric` instance, it will be aggregated by default using a `keras.Metric.Mean`.

### `build`


Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of `Layer` or `Model` can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of `Layer` subclasses.

Args
`input_shape` Instance of `TensorShape`, or list of instances of `TensorShape` if the layer expects a list of inputs (one instance per input).
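
A typical override uses the now-known input shape to create the layer's weights. A minimal sketch (the class name is illustrative):

```python
class SimpleDense(tf.keras.layers.Layer):
  def __init__(self, units):
    super().__init__()
    self.units = units

  def build(self, input_shape):
    # The input feature size is only known here, not at construction time.
    self.kernel = self.add_weight(
        name='kernel', shape=(int(input_shape[-1]), self.units),
        initializer='glorot_uniform')

  def call(self, inputs):
    return tf.matmul(inputs, self.kernel)
```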

### `compute_mask`

Computes an output mask tensor.

Args
`inputs` Tensor or list of tensors.
`mask` Tensor or list of tensors.

Returns
None or a tensor (or list of tensors, one per output tensor of the layer).

### `count_params`

Count the total number of scalars composing the weights.

Returns
An integer count.

Raises
`ValueError` if the layer isn't yet built (in which case its weights aren't yet defined).
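
For example, a `Dense(5)` layer built on 3 input features has a `(3, 5)` kernel and a `(5,)` bias:

```python
layer = tf.keras.layers.Dense(5)
layer.build((None, 3))
layer.count_params()  # 20 == 3 * 5 + 5
```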

### `from_config`

Creates a layer from its config.

This method is the reverse of `get_config`, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by `set_weights`).

Args
`config` A Python dictionary, typically the output of get_config.

Returns
A layer instance.
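
A round trip through `get_config` / `from_config` yields a layer with the same hyperparameters but freshly initialized weights. A minimal sketch:

```python
layer = tf.keras.layers.Dense(4, activation='relu')
clone = tf.keras.layers.Dense.from_config(layer.get_config())
# `clone` is configured identically; its weights are created on first call.
```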

### `get_config`


Returns the config of this layer.

This Layer's `make_distribution_fn` is serialized via a library built on Python pickle. This serialization of Python functions is provided for convenience, but:

1. The use of this format for long-term storage of models is discouraged. In particular, it may not be possible to deserialize in a different version of Python.

2. While serialization is generally supported for lambdas, local functions, and static methods (and closures over these constructs), complex functions may fail to serialize.

3. `Tensor` objects (and functions referencing `Tensor` objects) can only be serialized when the tensor value is statically known. (Such Tensors are serialized as numpy arrays.)

Instead of relying on `DistributionLambda.get_config`, consider subclassing `DistributionLambda` and directly implementing Keras serialization via `get_config` / `from_config`.
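
One way to follow that advice is to record the constructor arguments in the config instead of pickling the closure. A minimal, hypothetical sketch (`FixedNormal` and its `event_size` argument are illustrative, not part of the TFP API):

```python
class FixedNormal(tfp.layers.DistributionLambda):
  def __init__(self, event_size, **kwargs):
    super().__init__(
        lambda t: tfp.distributions.MultivariateNormalDiag(
            loc=tf.zeros([event_size])),
        **kwargs)
    self.event_size = event_size

  def get_config(self):
    # Serialize constructor arguments rather than a pickled lambda.
    return {'event_size': self.event_size, 'name': self.name}

  @classmethod
  def from_config(cls, config):
    return cls(**config)
```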

### `get_weights`

Returns the current weights of the layer, as NumPy arrays.

The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.

For example, a `Dense` layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another `Dense` layer:

```python
>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
```

Returns
Weights values as a list of NumPy arrays.

### `set_weights`

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, by calling the layer.

For example, a `Dense` layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another `Dense` layer:

```python
>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
```

Args
`weights` A list of NumPy arrays. The number of arrays and their shapes must match the number and shapes of the layer's weights (i.e., it should match the output of `get_weights`).

Raises
`ValueError` If the provided weights list does not match the layer's specifications.

### `with_name_scope`

Decorator to automatically enter the module name scope.

```python
>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)
```

Using the above module produces `tf.Variable`s and `tf.Tensor`s whose names include the module name:

```python
>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
```

Args
`method` The method to wrap.

Returns
The original method wrapped such that it enters the module's name scope.

### `__call__`


Wraps `call`, applying pre- and post-processing steps.

Args
`*args` Positional arguments to be passed to `self.call`.
`**kwargs` Keyword arguments to be passed to `self.call`.

Returns
Output tensor(s).

#### Note:

• The following optional keyword arguments are reserved for specific uses:
  • `training`: Boolean scalar tensor or Python boolean indicating whether the `call` is meant for training or inference.
  • `mask`: Boolean input mask.
• If the layer's `call` method takes a `mask` argument (as some Keras layers do), its default value will be set to the mask generated for `inputs` by the previous layer (if `input` did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
• If the layer is not built, the method will call `build`. (See the sketch after this list.)
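
For instance, a layer whose `call` accepts the reserved `training` argument behaves differently under training and inference. A minimal sketch (the class name is illustrative):

```python
class NoiseWhenTraining(tf.keras.layers.Layer):
  def call(self, inputs, training=None):
    if training:
      # Inject noise only when __call__ is invoked with training=True
      # (e.g., by model.fit).
      return inputs + tf.random.normal(tf.shape(inputs), stddev=0.1)
    return inputs
```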

Raises
`ValueError` if the layer's `call` method returns None (an invalid value).
`RuntimeError` if `super().__init__()` was not called in the constructor.
