Regularizer base class.
Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes.
Regularization penalties are applied on a per-layer basis. The exact API will
depend on the layer, but many layers (e.g.
Dense, Conv1D, Conv2D and Conv3D) have a unified API.
These layers expose 3 keyword arguments:
kernel_regularizer: Regularizer to apply a penalty on the layer's kernel
bias_regularizer: Regularizer to apply a penalty on the layer's bias
activity_regularizer: Regularizer to apply a penalty on the layer's output
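A minimal sketch of the three keyword arguments above, attached to a single Dense layer (the units, input shape, and penalty strengths here are illustrative, not from the original):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(
    units=4,
    kernel_regularizer=tf.keras.regularizers.L2(0.01),    # penalty on the kernel
    bias_regularizer=tf.keras.regularizers.L2(0.01),      # penalty on the bias
    activity_regularizer=tf.keras.regularizers.L2(0.01),  # penalty on the output
)
out = layer(tf.ones(shape=(2, 8)))
# After the layer has been called, each regularizer contributes one
# scalar loss tensor to layer.losses.
print(len(layer.losses))
```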
All layers (including custom layers) expose
activity_regularizer as a
settable property, whether or not it is in the constructor arguments.
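For example, the property can be set after construction even on a layer whose constructor has no regularizer arguments (ReLU is used here purely as an illustration):

```python
import tensorflow as tf

layer = tf.keras.layers.ReLU()  # constructor takes no regularizer arguments
layer.activity_regularizer = tf.keras.regularizers.L2(0.01)
out = layer(tf.ones(shape=(2, 3)))
# The activity penalty is recorded in layer.losses once the layer is called.
print(len(layer.losses))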
The value returned by the
activity_regularizer is divided by the input
batch size so that the relative weighting between the weight regularizers and
the activity regularizers does not change with the batch size.
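A sketch of that batch-size division, assuming identical rows: doubling the batch leaves the reported activity penalty unchanged (the pass-through Activation layer is used here only to isolate the activity term):

```python
import tensorflow as tf

reg = tf.keras.regularizers.L2(0.5)

# Batch of 2 identical rows of ones.
layer_a = tf.keras.layers.Activation('linear', activity_regularizer=reg)
layer_a(tf.ones(shape=(2, 3)))
loss_a = float(layer_a.losses[0])

# Batch of 4 identical rows of ones.
layer_b = tf.keras.layers.Activation('linear', activity_regularizer=reg)
layer_b(tf.ones(shape=(4, 3)))
loss_b = float(layer_b.losses[0])

# Both equal the per-row penalty: 0.5 * sum of squares of one row = 1.5.
print(loss_a, loss_b)
```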
You can access a layer's regularization penalties by calling
layer.losses after calling the layer on inputs.
layer = tf.keras.layers.Dense(
    5, input_dim=5,
    kernel_initializer='ones',
    kernel_regularizer=tf.keras.regularizers.L1(0.01),
    activity_regularizer=tf.keras.regularizers.L2(0.01))
tensor = tf.ones(shape=(5, 5)) * 2.0
out = layer(tensor)
# The kernel regularization term is 0.25
# The activity regularization term (after dividing by the batch size) is 5
tf.math.reduce_sum(layer.losses)