# tf.contrib.layers.separable_conv2d

### Aliases:

• tf.contrib.layers.separable_conv2d
• tf.contrib.layers.separable_convolution2d
```
tf.contrib.layers.separable_conv2d(
    inputs,
    num_outputs,
    kernel_size,
    depth_multiplier,
    stride=1,
    padding='SAME',
    data_format=DATA_FORMAT_NHWC,
    rate=1,
    activation_fn=tf.nn.relu,
    normalizer_fn=None,
    normalizer_params=None,
    weights_initializer=initializers.xavier_initializer(),
    weights_regularizer=None,
    biases_initializer=tf.zeros_initializer(),
    biases_regularizer=None,
    reuse=None,
    variables_collections=None,
    outputs_collections=None,
    trainable=True,
    scope=None
)
```


Adds a depth-separable 2D convolution with optional batch_norm layer.

This op first performs a depthwise convolution that acts separately on channels, creating a variable called depthwise_weights. If num_outputs is not None, it adds a pointwise convolution that mixes channels, creating a variable called pointwise_weights. Then, if normalizer_fn is None, it adds a bias to the result, creating a variable called biases; otherwise, normalizer_fn is applied. Finally, it applies an activation function to produce the end result.
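As a rough sketch of the two stages above (not the TensorFlow implementation, which uses optimized kernels), the depthwise and pointwise steps can be written as a plain NumPy reference for stride 1 and VALID padding. The function name and loop structure here are illustrative only:

```python
import numpy as np

def separable_conv2d_nhwc(inputs, depthwise_weights, pointwise_weights=None):
    """Loop-based depthwise-separable conv, stride 1, VALID padding.

    inputs:            [batch, height, width, in_channels]
    depthwise_weights: [kh, kw, in_channels, depth_multiplier]
    pointwise_weights: [1, 1, in_channels * depth_multiplier, num_outputs]
                       or None to skip the pointwise stage.
    """
    b, h, w, cin = inputs.shape
    kh, kw, _, dm = depthwise_weights.shape
    oh, ow = h - kh + 1, w - kw + 1
    # Depthwise stage: each input channel is filtered independently,
    # producing depth_multiplier output channels per input channel.
    dw = np.zeros((b, oh, ow, cin * dm))
    for c in range(cin):
        for m in range(dm):
            k = depthwise_weights[:, :, c, m]
            for i in range(oh):
                for j in range(ow):
                    patch = inputs[:, i:i + kh, j:j + kw, c]
                    dw[:, i, j, c * dm + m] = np.sum(patch * k, axis=(1, 2))
    if pointwise_weights is None:
        # Mirrors num_outputs=None: the pointwise stage is skipped.
        return dw
    # Pointwise stage: a 1x1 convolution that mixes channels, i.e. a
    # matrix multiply over the channel dimension.
    return dw @ pointwise_weights[0, 0]
```

Note how the depthwise output has num_filters_in * depth_multiplier channels, and only the pointwise 1x1 matrix mixes information across channels.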

#### Args:

• inputs: A tensor of size [batch_size, height, width, channels].
• num_outputs: The number of pointwise convolution output filters. If it is None, the pointwise convolution stage is skipped.
• kernel_size: A list of length 2: [kernel_height, kernel_width] of the filters. Can be an int if both values are the same.
• depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier.
• stride: A list of length 2: [stride_height, stride_width], specifying the depthwise convolution stride. Can be an int if both strides are the same.
• padding: One of 'VALID' or 'SAME'.
• data_format: A string. NHWC (default) and NCHW are supported.
• rate: A list of length 2: [rate_height, rate_width], specifying the dilation rates for atrous convolution. Can be an int if both rates are the same. If any value is larger than one, then both stride values need to be one.
• activation_fn: Activation function. The default value is a ReLU function. Explicitly set it to None to skip it and maintain a linear activation.
• normalizer_fn: Normalization function to use instead of biases. If normalizer_fn is provided, then biases_initializer and biases_regularizer are ignored and biases are neither created nor added. Defaults to None for no normalizer function.
• normalizer_params: Normalization function parameters.
• weights_initializer: An initializer for the weights.
• weights_regularizer: Optional regularizer for the weights.
• biases_initializer: An initializer for the biases. If None skip biases.
• biases_regularizer: Optional regularizer for the biases.
• reuse: Whether or not the layer and its variables should be reused. To reuse the layer, scope must be given.
• variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
• outputs_collections: Collection to add the outputs.
• trainable: Whether or not the variables should be trainable.
• scope: Optional scope for variable_scope.
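The stride, rate, and padding arguments above jointly determine the output's spatial size via standard convolution arithmetic: with a dilation rate, the kernel covers an effective extent of kernel_size + (kernel_size - 1) * (rate - 1) input pixels. A small helper (illustrative, not part of the API) makes this concrete:

```python
import math

def atrous_output_size(in_size, kernel_size, stride=1, rate=1, padding="SAME"):
    """Spatial output size of a (possibly dilated) convolution.

    Follows the standard SAME/VALID conventions used by TensorFlow.
    """
    # Dilation spreads the kernel taps apart, enlarging its receptive field.
    effective = kernel_size + (kernel_size - 1) * (rate - 1)
    if padding == "SAME":
        return math.ceil(in_size / stride)
    # VALID: only positions where the (dilated) kernel fits entirely.
    return math.ceil((in_size - effective + 1) / stride)
```

For example, a 3x3 kernel with rate=2 on a 32-pixel input under VALID padding behaves like a 5x5 kernel and yields 28 output pixels, which is why rate > 1 requires stride == 1.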

#### Returns:

A Tensor representing the output of the operation.

#### Raises:

• ValueError: If data_format is invalid.