Convolution layer.
Inherits From: Layer
tfp.experimental.nn.ConvolutionV2(
    input_size,
    output_size,
    filter_shape,
    rank=2,
    strides=1,
    padding='VALID',
    dilations=1,
    kernel_initializer=None,
    bias_initializer=None,
    make_kernel_bias_fn=tfp.experimental.nn.util.make_kernel_bias,
    dtype=tf.float32,
    index_dtype=tf.int32,
    batch_shape=(),
    activation_fn=None,
    validate_args=False,
    name=None
)
This layer creates a Convolution kernel that is convolved (actually cross-correlated) with the layer input to produce a tensor of outputs.
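The distinction matters when the kernel is asymmetric: cross-correlation slides the kernel over the input as-is, whereas true convolution flips the kernel first. A minimal pure-Python sketch of the 1-D case (an illustrative stand-in for what convolution layers actually compute, not the tfp implementation):

```python
def cross_correlate_1d(x, k):
    """Slide kernel k over x without flipping (what 'convolution' layers compute)."""
    n = len(x) - len(k) + 1  # 'VALID'-style: no zero-padding at the edges
    return [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]

def convolve_1d(x, k):
    """True (mathematical) convolution: flip the kernel, then cross-correlate."""
    return cross_correlate_1d(x, k[::-1])

x = [1.0, 2.0, 3.0, 4.0]
k = [1.0, 0.0, -1.0]             # asymmetric kernel, so the two results differ
print(cross_correlate_1d(x, k))  # [-2.0, -2.0]
print(convolve_1d(x, k))         # [2.0, 2.0]
```

For symmetric kernels the two operations coincide, which is why the naming mismatch is usually harmless.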
This V2 version supports alternative batch semantics. V1 layers with batch size B produced outputs of shape [N, H, W, B, C] (where N, H, W, C are minibatch size, height, width, and number of channels, as usual). V2 layers reorder these to [N, B, H, W, C].
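The reordering is a simple axis permutation; a pure-Python sketch (on an actual tensor this would correspond to something like tf.transpose(x, [0, 3, 1, 2, 4]), given here only for illustration):

```python
def v1_to_v2_shape(v1_shape):
    """Reorder a V1 output shape [N, H, W, B, C] to the V2 layout [N, B, H, W, C]."""
    n, h, w, b, c = v1_shape
    return [n, b, h, w, c]

# Minibatch of 32, 28x28 spatial extent, layer batch size 5, 3 channels:
print(v1_to_v2_shape([32, 28, 28, 5, 3]))  # [32, 5, 28, 28, 3]
```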
This layer has two learnable parameters, kernel and bias.

- The kernel (aka the filters argument of tf.nn.convolution) is a tf.Variable with rank + 2 ndims and shape given by concat([filter_shape, [input_size, output_size]], axis=0). Argument filter_shape is either a length-rank vector or is expanded as one, i.e., filter_size * tf.ones(rank) when filter_shape is an int (which we denote as filter_size).
- The bias is a tf.Variable with 1 ndims and shape [output_size].
In summary, the shape of learnable parameters is governed by the following arguments: filter_shape, input_size, output_size, and possibly rank (if filter_shape needs expansion).
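These shape rules can be checked with a small pure-Python sketch (a stand-in for the TF shape arithmetic; the helper name is illustrative, not part of the tfp API):

```python
def param_shapes(filter_shape, input_size, output_size, rank=2):
    """Compute kernel and bias shapes per the rules above."""
    if isinstance(filter_shape, int):
        # An int filter_shape is expanded to a length-`rank` vector,
        # i.e. filter_size * tf.ones(rank).
        filter_shape = [filter_shape] * rank
    assert len(filter_shape) == rank
    # concat([filter_shape, [input_size, output_size]], axis=0): rank + 2 ndims.
    kernel_shape = list(filter_shape) + [input_size, output_size]
    bias_shape = [output_size]
    return kernel_shape, bias_shape

# A 3x3 2-D convolution mapping 3 input channels to 8 output channels:
print(param_shapes(3, input_size=3, output_size=8))       # ([3, 3, 3, 8], [8])
# A non-square 5x3 filter:
print(param_shapes([5, 3], input_size=3, output_size=8))  # ([5, 3, 3, 8], [8])
```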
For more information on convolution layers, we recommend the following:
- [Deconvolution Checkerboard](https://distill.pub/2016/deconv-checkerboard/)
- [Convolution Animations](https://github.com/vdumoulin/conv_arithmetic)
- [What are Deconvolutional Layers?](https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers)
Examples
import functools

import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfn = tfp.experimental.nn

Convolution1DV2 = functools.partial(tfn.ConvolutionV2, rank=1)
Convolution2DV2 = tfn.ConvolutionV2
Convolution3DV2 = functools.partial(tfn.ConvolutionV2, rank=3)
Args | |
---|---|
input_size
|
...
In Keras, this argument is inferred from the rightmost input shape,
i.e., tf.shape(inputs)[-1] . This argument specifies the size of the
second from the rightmost dimension of both inputs and kernel .
Default value: None .
|
output_size
|
...
In Keras, this argument is called filters . This argument specifies the
rightmost dimension size of both kernel and bias .
|
filter_shape
|
...
In Keras, this argument is called kernel_size . This argument specifies
the leftmost rank dimensions' sizes of kernel .
|
rank
|
An integer, the rank of the convolution, e.g. "2" for 2D
convolution. This argument implies the number of kernel dimensions,
i.e., kernel.shape.rank == rank + 2 .
In Keras, this argument has the same name and semantics.
Default value: 2 .
|
strides
|
An integer or tuple/list of n integers, specifying the stride
length of the convolution.
In Keras, this argument has the same name and semantics.
Default value: 1 .
|
padding
|
One of "VALID" or "SAME" (case-insensitive).
In Keras, this argument has the same name and semantics (except we don't
support "CAUSAL" ).
Default value: 'VALID' .
|
dilations
|
An integer or tuple/list of rank integers, specifying the
dilation rate to use for dilated convolution. Currently, specifying any
dilations value != 1 is incompatible with specifying any strides
value != 1.
In Keras, this argument is called dilation_rate .
Default value: 1 .
|
kernel_initializer
|
...
Default value: None (i.e.,
tfp.experimental.nn.initializers.glorot_uniform() ).
|
bias_initializer
|
...
Default value: None (i.e., tf.initializers.zeros() ).
|
make_kernel_bias_fn
|
...
Default value: tfp.experimental.nn.util.make_kernel_bias .
|
dtype
|
...
Default value: tf.float32 .
|
index_dtype
|
... |
batch_shape
|
...
Default value: () .
|
activation_fn
|
...
Default value: None .
|
validate_args
|
... |
name
|
...
Default value: None (i.e., 'ConvolutionV2' ).
|
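The strides, padding, and dilations arguments above combine via the standard output-size arithmetic. A hedged pure-Python sketch of the per-spatial-dimension formula (intended to match tf.nn.convolution's conventions; the helper name is illustrative):

```python
import math

def conv_output_size(input_size, filter_size, stride=1, padding='VALID', dilation=1):
    """Output size along one spatial dimension of a convolution."""
    # Dilation inflates the filter's receptive field without adding weights.
    effective_filter = filter_size + (filter_size - 1) * (dilation - 1)
    if padding == 'VALID':
        # No zero-padding: the filter must fit entirely inside the input.
        return math.ceil((input_size - effective_filter + 1) / stride)
    elif padding == 'SAME':
        # Zero-pad so output size depends only on input size and stride.
        return math.ceil(input_size / stride)
    raise ValueError('unknown padding: %r' % (padding,))

print(conv_output_size(28, 3))                  # 26
print(conv_output_size(28, 3, padding='SAME'))  # 28
print(conv_output_size(28, 3, stride=2))        # 13
print(conv_output_size(28, 3, dilation=2))      # 24  (effective filter size 5)
```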
Attributes | |
---|---|
activation_fn
|
|
also_track
|
|
bias
|
|
dtype
|
|
kernel
|
|
name
|
Returns the name of this module as passed or determined in the ctor. |
name_scope
|
Returns a tf.name_scope instance for this class.
|
non_trainable_variables
|
Sequence of non-trainable variables owned by this module and its submodules. |
submodules
|
Sequence of all sub-modules.
Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).
|
trainable_variables
|
Sequence of trainable variables owned by this module and its submodules. |
validate_args
|
Python bool indicating possibly expensive checks are enabled.
|
variables
|
Sequence of variables owned by this module and its submodules. |
Methods
load
load(filename)
save
save(filename)
summary
summary()
with_name_scope
@classmethod
with_name_scope( method )
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
Using the above module would produce tf.Variables and tf.Tensors whose names include the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=..., dtype=float32)>
Args | |
---|---|
method
|
The method to wrap. |
Returns | |
---|---|
The original method wrapped such that it enters the module's name scope. |
__call__
__call__(x)
Call self as a function.