
tfp.experimental.nn.Convolution


Convolution layer.

tfp.experimental.nn.Convolution(
    input_size, output_size, filter_shape, rank=2, strides=1, padding='VALID',
    dilations=1, init_kernel_fn=None, init_bias_fn=None,
    make_kernel_bias_fn=tfp.experimental.nn.util.make_kernel_bias, dtype=tf.float32,
    batch_shape=(), activation_fn=None, name=None
)

This layer creates a convolution kernel that is convolved (actually cross-correlated) with the layer input to produce a tensor of outputs.

This layer has two learnable parameters, kernel and bias.

  • The kernel (aka filters argument of tf.nn.convolution) is a tf.Variable with rank + 2 ndims and shape given by concat([filter_shape, [input_size, output_size]], axis=0). Argument filter_shape is either a length-rank vector or expanded as one, i.e., filter_size * tf.ones(rank) when filter_shape is an int (which we denote as filter_size).
  • The bias is a tf.Variable with 1 ndims and shape [output_size].

In summary, the shape of learnable parameters is governed by the following arguments: filter_shape, input_size, output_size and possibly rank (if filter_shape needs expansion).
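The shape bookkeeping can be verified directly. A minimal sketch (the sizes below are illustrative assumptions; per the defaults above, construction creates the kernel and bias variables eagerly):

import tensorflow as tf
import tensorflow_probability as tfp
tfn = tfp.experimental.nn

# rank=2 (the default), so filter_shape=5 expands to (5, 5).
layer = tfn.Convolution(input_size=3, output_size=8, filter_shape=5)
layer.kernel.shape  # [5, 5, 3, 8] == concat([filter_shape, [input_size, output_size]])
layer.bias.shape    # [8] == [output_size]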

For more information on convolution layers, we recommend the following:

  • [Deconvolution Checkerboard](https://distill.pub/2016/deconv-checkerboard/)
  • [Convolution Animations](https://github.com/vdumoulin/conv_arithmetic)
  • [What are Deconvolutional Layers?](https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers)

Examples

import functools

import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfn = tfp.experimental.nn

Convolution1D = functools.partial(tfn.Convolution, rank=1)
Convolution2D = tfn.Convolution
Convolution3D = functools.partial(tfn.Convolution, rank=3)
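A minimal usage sketch (the input shape and sizes below are illustrative assumptions; inputs use a channels-last layout, matching tf.nn.convolution):

conv = Convolution2D(input_size=3, output_size=16, filter_shape=3,
                     padding='SAME', activation_fn=tf.nn.relu)
images = tf.random.normal([32, 28, 28, 3])   # a batch of 28x28, 3-channel inputs
features = conv(images)                      # shape: [32, 28, 28, 16]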

Args:

  • input_size: ... In Keras, this argument is inferred from the rightmost dimension of the input, i.e., tf.shape(inputs)[-1]. This argument specifies the size of the second-from-rightmost dimension of both inputs and kernel. Default value: None.
  • output_size: ... In Keras, this argument is called filters. This argument specifies the rightmost dimension size of both kernel and bias.
  • filter_shape: ... In Keras, this argument is called kernel_size. This argument specifies the leftmost rank dimensions' sizes of kernel.
  • rank: An integer, the rank of the convolution, e.g. "2" for 2D convolution. This argument implies the number of kernel dimensions, i.e., kernel.shape.rank == rank + 2. In Keras, this argument has the same name and semantics. Default value: 2.
  • strides: An integer or tuple/list of rank integers, specifying the stride length of the convolution. In Keras, this argument has the same name and semantics. Default value: 1.
  • padding: One of "VALID" or "SAME" (case-insensitive); see the sketch after this list. In Keras, this argument has the same name and semantics (except we don't support "CAUSAL"). Default value: 'VALID'.
  • dilations: An integer or tuple/list of rank integers, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilations value != 1 is incompatible with specifying any strides value != 1. In Keras, this argument is called dilation_rate. Default value: 1.
  • init_kernel_fn: ... Default value: None (i.e., tfp.experimental.nn.initializers.glorot_uniform()).
  • init_bias_fn: ... Default value: None (i.e., tf.zeros).
  • make_kernel_bias_fn: ... Default value: tfp.experimental.nn.util.make_kernel_bias.
  • dtype: ... Default value: tf.float32.
  • batch_shape: ... Default value: ().
  • activation_fn: ... Default value: None.
  • name: ... Default value: None (i.e., 'Convolution').
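To make the strides and padding semantics concrete, here is a small sketch using tf.nn.convolution directly (as noted above, the kernel plays the role of its filters argument; shapes here are illustrative assumptions):

x = tf.ones([1, 8, 8, 3])   # one 8x8 input with 3 channels
k = tf.ones([3, 3, 3, 4])   # 3x3 filters, 3 in-channels, 4 out-channels

tf.nn.convolution(x, k, padding='VALID').shape            # [1, 6, 6, 4]: no padding
tf.nn.convolution(x, k, padding='SAME').shape             # [1, 8, 8, 4]: zero-padded
tf.nn.convolution(x, k, strides=2, padding='SAME').shape  # [1, 4, 4, 4]: downsampled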

Attributes:

  • activation_fn
  • also_track
  • bias
  • dtype
  • extra_loss
  • extra_result
  • kernel
  • name: Returns the name of this module as passed or determined in the ctor.

    NOTE: This is not the same as the self.name_scope.name which includes parent module names.

  • name_scope: Returns a tf.name_scope instance for this class.

  • submodules: Sequence of all sub-modules.

    Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

  >>> a = tf.Module()
  >>> b = tf.Module()
  >>> c = tf.Module()
  >>> a.b = b
  >>> b.c = c
  >>> list(a.submodules) == [b, c]
  True
  >>> list(b.submodules) == [c]
  True
  >>> list(c.submodules) == []
  True
  • trainable_variables: Sequence of trainable variables owned by this module and its submodules.

  • variables: Sequence of variables owned by this module and its submodules.

Methods

__call__


__call__(
    inputs, **kwargs
)

Call self as a function.

eval


eval(
    x, is_training=True
)

load


load(
    filename
)

save


save(
    filename
)

summary


summary()

with_name_scope

@classmethod
with_name_scope(
    cls, method
)

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>

Args:

  • method: The method to wrap.

Returns:

The original method wrapped such that it enters the module's name scope.