ConvolutionTranspose layer.
Inherits From: Layer
tfp.experimental.nn.ConvolutionTranspose(
    input_size,
    output_size,
    filter_shape,
    rank=2,
    strides=1,
    padding='VALID',
    dilations=1,
    output_padding=None,
    method='auto',
    kernel_initializer=None,
    bias_initializer=None,
    make_kernel_bias_fn=tfp.experimental.nn.util.make_kernel_bias,
    dtype=tf.float32,
    index_dtype=tf.int32,
    activation_fn=None,
    validate_args=False,
    name=None
)
This layer creates a ConvolutionTranspose kernel that is convolved (actually cross-correlated) with the layer input to produce a tensor of outputs.
This layer has two learnable parameters, `kernel` and `bias`.
- The `kernel` (aka `filters` argument of `tf.nn.conv_transpose`) is a `tf.Variable` with `rank + 2` `ndims` and shape given by `concat([filter_shape, [input_size, output_size]], axis=0)`. Argument `filter_shape` is either a length-`rank` vector or expanded as one, i.e., `filter_size * tf.ones(rank)` when `filter_shape` is an `int` (which we denote as `filter_size`).
- The `bias` is a `tf.Variable` with `1` `ndims` and shape `[output_size]`.
In summary, the shape of learnable parameters is governed by the following arguments: `filter_shape`, `input_size`, `output_size`, and possibly `rank` (if `filter_shape` needs expansion).
For more information on convolution layers, we recommend the following:
- [Deconvolution Checkerboard](https://distill.pub/2016/deconv-checkerboard/)
- [Convolution Animations](https://github.com/vdumoulin/conv_arithmetic)
- [What are Deconvolutional Layers?](https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers)
Examples
import functools
import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfn = tfp.experimental.nn
ConvolutionTranspose1D = functools.partial(tfn.ConvolutionTranspose, rank=1)
ConvolutionTranspose2D = tfn.ConvolutionTranspose
ConvolutionTranspose3D = functools.partial(tfn.ConvolutionTranspose, rank=3)
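Building on the aliases above, a minimal usage sketch (sizes are chosen purely for illustration):

x = tf.random.normal([32, 8, 8, 3])  # [batch, height, width, channels]
layer = ConvolutionTranspose2D(
    input_size=3,        # channels of `x`
    output_size=16,      # desired output channels
    filter_shape=4,      # expanded to [4, 4] since rank=2
    strides=2,
    padding='SAME',
    activation_fn=tf.nn.relu)
y = layer(x)
print(y.shape)  # (32, 16, 16, 16) -- spatial dims doubled by strides=2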
Args | |
---|---|
`input_size` | ... In Keras, this argument is inferred from the rightmost input shape, i.e., `tf.shape(inputs)[-1]`. This argument specifies the size of the second from the rightmost dimension of both `inputs` and `kernel`. Default value: `None`. |
`output_size` | ... In Keras, this argument is called `filters`. This argument specifies the rightmost dimension size of both `kernel` and `bias`. |
`filter_shape` | ... In Keras, this argument is called `kernel_size`. This argument specifies the leftmost `rank` dimensions' sizes of `kernel`. |
`rank` | An integer, the rank of the convolution, e.g., `2` for 2D convolution. This argument implies the number of `kernel` dimensions, i.e., `kernel.shape.rank == rank + 2`. In Keras, this argument has the same name and semantics. Default value: `2`. |
`strides` | An integer or tuple/list of `n` integers, specifying the stride length of the convolution. In Keras, this argument has the same name and semantics. Default value: `1`. |
`padding` | One of `"VALID"` or `"SAME"` (case-insensitive). In Keras, this argument has the same name and semantics (except we don't support `"CAUSAL"`). Default value: `'VALID'`. |
`dilations` | An integer or tuple/list of `rank` integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any `dilations` value != 1 is incompatible with specifying any `strides` value != 1. In Keras, this argument is called `dilation_rate`. Default value: `1`. |
`output_padding` | An `int` or length-`rank` tuple/list representing the amount of padding along the input spatial dimensions (e.g., depth, height, width). A single `int` indicates the same value for all spatial dimensions. The amount of output padding along a given dimension must be lower than the stride along that same dimension. If set to `None` (default), the output shape is inferred. In Keras, this argument has the same name and semantics. Default value: `None` (i.e., inferred). |
`method` | ... |
`kernel_initializer` | ... Default value: `None` (i.e., `tfp.experimental.nn.initializers.glorot_uniform()`). |
`bias_initializer` | ... Default value: `None` (i.e., `tf.initializers.zeros()`). |
`make_kernel_bias_fn` | ... Default value: `tfp.experimental.nn.util.make_kernel_bias`. |
`dtype` | ... Default value: `tf.float32`. |
`index_dtype` | ... |
`activation_fn` | ... Default value: `None`. |
`validate_args` | ... |
`name` | ... Default value: `None` (i.e., `'ConvolutionTranspose'`). |
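The interaction of `strides`, `padding`, and `filter_shape` with the output's spatial shape follows the usual transposed-convolution arithmetic: with `'VALID'` padding each spatial dimension grows to `(input - 1) * stride + filter_size`, and with `'SAME'` padding to `input * stride`. A short sketch to check this (sizes are arbitrary):

import tensorflow as tf
import tensorflow_probability as tfp

tfn = tfp.experimental.nn

x = tf.random.normal([1, 8, 8, 3])  # [batch, height, width, channels]
layer = tfn.ConvolutionTranspose(
    input_size=3, output_size=4, filter_shape=5,
    strides=2, padding='VALID')
y = layer(x)
# 'VALID': (8 - 1) * 2 + 5 = 19 along each spatial dimension.
print(y.shape)  # (1, 19, 19, 4)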
Attributes | |
---|---|
`activation_fn` | |
`also_track` | |
`bias` | |
`dtype` | |
`kernel` | |
`name` | Returns the name of this module as passed or determined in the ctor. |
`name_scope` | Returns a `tf.name_scope` instance for this class. |
`non_trainable_variables` | Sequence of non-trainable variables owned by this module and its submodules. |
`submodules` | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). |
`trainable_variables` | Sequence of trainable variables owned by this module and its submodules. |
`validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
`variables` | Sequence of variables owned by this module and its submodules. |
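Because the layer is a `tf.Module`, `trainable_variables` (here, `kernel` and `bias`) plugs directly into a standard gradient step. A minimal sketch with a placeholder loss:

import tensorflow as tf
import tensorflow_probability as tfp

tfn = tfp.experimental.nn

layer = tfn.ConvolutionTranspose(input_size=3, output_size=8, filter_shape=5)
optimizer = tf.optimizers.SGD(learning_rate=0.01)

x = tf.random.normal([4, 8, 8, 3])
with tf.GradientTape() as tape:
    y = layer(x)
    loss = tf.reduce_mean(tf.square(y))  # placeholder loss for illustration
grads = tape.gradient(loss, layer.trainable_variables)
optimizer.apply_gradients(zip(grads, layer.trainable_variables))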
Methods
load
load(
    filename
)
save
save(
    filename
)
summary
summary()
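The docstrings for these methods are elided above. Assuming, as the names suggest, that `save` writes the layer's variables to `filename` and `load` restores them in place, usage would look like this (the path is hypothetical):

layer = tfn.ConvolutionTranspose(input_size=3, output_size=8, filter_shape=5)
layer.save('/tmp/conv_transpose_weights')  # hypothetical path

restored = tfn.ConvolutionTranspose(input_size=3, output_size=8, filter_shape=5)
restored.load('/tmp/conv_transpose_weights')  # restores variables in place (assumed)
restored.summary()  # short description of the layer (behavior assumed)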
with_name_scope
@classmethod
with_name_scope(
    method
)

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args | |
---|---|
`method` | The method to wrap. |

Returns | |
---|---|
The original method wrapped such that it enters the module's name scope. |
__call__
__call__(
    x
)
Call self as a function.