
ConvolutionTranspose layer.

```
tfp.experimental.nn.ConvolutionTranspose(
input_size, output_size, filter_shape, rank=2, strides=1, padding='VALID',
dilations=1, output_padding=None, init_kernel_fn=None, init_bias_fn=None,
make_kernel_bias_fn=tfp.experimental.nn.util.make_kernel_bias, dtype=tf.float32,
activation_fn=None, name=None
)
```

This layer creates a ConvolutionTranspose kernel that is convolved (actually cross-correlated) with the layer input to produce a tensor of outputs.
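The "actually cross-correlated" remark can be illustrated in plain Python (no TensorFlow required): deep-learning "convolution" slides the kernel without flipping it, whereas true convolution flips the kernel first. The helper names below are illustrative, not part of the API.

```python
# Minimal 1-D sketch: cross-correlation vs. true convolution ('VALID' extent).
def cross_correlate_valid(x, k):
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(len(x) - n + 1)]

def convolve_valid(x, k):
    # True convolution is cross-correlation with a flipped kernel.
    return cross_correlate_valid(x, k[::-1])

x = [1, 2, 3, 4]
k = [1, 0, -1]
cross_correlate_valid(x, k)  # [-2, -2]
convolve_valid(x, k)         # [2, 2]
```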

This layer has two learnable parameters, `kernel` and `bias`.

- The `kernel` (aka the `filters` argument of `tf.nn.conv_transpose`) is a `tf.Variable` with `rank + 2` `ndims` and shape given by `concat([filter_shape, [input_size, output_size]], axis=0)`. Argument `filter_shape` is either a length-`rank` vector or expanded as one, i.e., `filter_size * tf.ones(rank)` when `filter_shape` is an `int` (which we denote as `filter_size`).
- The `bias` is a `tf.Variable` with `1` `ndims` and shape `[output_size]`.

In summary, the shape of the learnable parameters is governed by the following arguments: `filter_shape`, `input_size`, `output_size`, and possibly `rank` (if `filter_shape` needs expansion).
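The shape rules above can be sketched in plain Python (no TensorFlow required). The helper name `conv_transpose_param_shapes` is hypothetical, chosen only for this illustration:

```python
# Sketch: how the learnable-parameter shapes follow from the constructor
# arguments described above.
def conv_transpose_param_shapes(input_size, output_size, filter_shape, rank=2):
    # An int `filter_shape` is expanded to a length-`rank` vector,
    # mirroring `filter_size * tf.ones(rank)`.
    if isinstance(filter_shape, int):
        filter_shape = (filter_shape,) * rank
    # kernel shape: concat([filter_shape, [input_size, output_size]], axis=0)
    kernel_shape = tuple(filter_shape) + (input_size, output_size)
    # bias shape: [output_size]
    bias_shape = (output_size,)
    return kernel_shape, bias_shape

conv_transpose_param_shapes(3, 8, 5, rank=2)
# ((5, 5, 3, 8), (8,))
```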

For more information on convolution layers, we recommend the following:

- [Deconvolution Checkerboard](https://distill.pub/2016/deconv-checkerboard/)
- [Convolution Animations](https://github.com/vdumoulin/conv_arithmetic)
- [What are Deconvolutional Layers?](https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers)

#### Examples

```
import functools
import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfn = tfp.experimental.nn
ConvolutionTranspose1D = functools.partial(tfn.ConvolutionTranspose, rank=1)
ConvolutionTranspose2D = tfn.ConvolutionTranspose
ConvolutionTranspose3D = functools.partial(tfn.ConvolutionTranspose, rank=3)
```

#### Args:

`input_size`
: ... In Keras, this argument is inferred from the rightmost input shape, i.e., `tf.shape(inputs)[-1]`. This argument specifies the size of the second from the rightmost dimension of both `inputs` and `kernel`. Default value: `None`.

`output_size`
: ... In Keras, this argument is called `filters`. This argument specifies the rightmost dimension size of both `kernel` and `bias`.

`filter_shape`
: ... In Keras, this argument is called `kernel_size`. This argument specifies the leftmost `rank` dimensions' sizes of `kernel`.

`rank`
: An integer, the rank of the convolution, e.g., "2" for 2D convolution. This argument implies the number of `kernel` dimensions, i.e., `kernel.shape.rank == rank + 2`. In Keras, this argument has the same name and semantics. Default value: `2`.

`strides`
: An integer or tuple/list of n integers, specifying the stride length of the convolution. In Keras, this argument has the same name and semantics. Default value: `1`.

`padding`
: One of `"VALID"` or `"SAME"` (case-insensitive). In Keras, this argument has the same name and semantics (except we don't support `"CAUSAL"`). Default value: `'VALID'`.

`dilations`
: An integer or tuple/list of `rank` integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilations` value != 1 is incompatible with specifying any `strides` value != 1. In Keras, this argument is called `dilation_rate`. Default value: `1`.

`output_padding`
: An `int` or length-`rank` tuple/list representing the amount of padding along the input spatial dimensions (e.g., depth, height, width). A single `int` indicates the same value for all spatial dimensions. The amount of output padding along a given dimension must be lower than the stride along that same dimension. If set to `None` (default), the output shape is inferred. In Keras, this argument has the same name and semantics. Default value: `None` (i.e., inferred).

`init_kernel_fn`
: ... Default value: `None` (i.e., `tfp.experimental.nn.initializers.glorot_uniform()`).

`init_bias_fn`
: ... Default value: `None` (i.e., `tf.zeros`).

`make_kernel_bias_fn`
: ... Default value: `tfp.experimental.nn.util.make_kernel_bias`.

`dtype`
: ... Default value: `tf.float32`.

`activation_fn`
: ... Default value: `None`.

`name`
: ... Default value: `None` (i.e., `'ConvolutionTranspose'`).

#### Attributes:

`activation_fn`

`also_track`

`bias`

`dtype`

`extra_loss`

`extra_result`

`kernel`

`name`
: Returns the name of this module as passed or determined in the ctor.

  NOTE: This is not the same as `self.name_scope.name`, which includes parent module names.

`name_scope`
: Returns a `tf.name_scope` instance for this class.

`submodules`
: Sequence of all sub-modules.

  Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

```
a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
list(a.submodules) == [b, c]
# True
list(b.submodules) == [c]
# True
list(c.submodules) == []
# True
```

`trainable_variables`
: Sequence of trainable variables owned by this module and its submodules.

`variables`
: Sequence of variables owned by this module and its submodules.

## Methods

`__call__`

```
__call__(
inputs, **kwargs
)
```

Call self as a function.

`eval`

```
eval(
x, is_training=True
)
```

`load`

```
load(
filename
)
```

`save`

```
save(
filename
)
```

`summary`

```
summary()
```

`with_name_scope`

```
@classmethod
with_name_scope(
cls, method
)
```

Decorator to automatically enter the module name scope.

```
class MyModule(tf.Module):

  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
```

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names include the module name:

```
mod = MyModule()
mod(tf.ones([1, 2]))
# <tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
# <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
#  numpy=..., dtype=float32)>
```

#### Args:

`method`
: The method to wrap.

#### Returns:

The original method wrapped such that it enters the module's name scope.