Batch Normalization layer from http://arxiv.org/abs/1502.03167.
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"
Sergey Ioffe, Christian Szegedy
Keras APIs handle BatchNormalization updates to the moving_mean and moving_variance as part of their fit() and evaluate() loops. However, if a custom training loop is used with an instance of Model, these updates need to be explicitly included. Here's a simple example of how it can be done:
```python
# `model` is an instance of `Model` with `tf.keras.layers.BatchNormalization`
update_ops = model.get_updates_for(None) + model.get_updates_for(features)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
```
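As a point of reference, here is a minimal sketch, assuming a TF 2.x eager environment, of what those updates do: in eager execution they run automatically whenever the layer is called with training=True, so the explicit update_ops pattern above is only needed in graph mode.

```python
import tensorflow as tf

# Minimal sketch (assumes TF 2.x eager execution): calling the layer with
# training=True applies the moving_mean/moving_variance updates directly.
bn = tf.keras.layers.BatchNormalization(momentum=0.9)
x = tf.random.normal([32, 4]) + 5.0   # batch with a non-zero mean

bn(x, training=True)                  # one training step's worth of updates
print(bn.moving_mean.numpy())         # has moved from 0 toward the batch mean
bn(x, training=False)                 # inference uses the moving statistics
print(bn.moving_mean.numpy())         # unchanged by the inference call
```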
axis: An int or list of int, the axis or axes that should be normalized, typically the features axis/axes. For instance, after a Conv2D layer with data_format="channels_first", set axis=1 (see the sketch below). If a list of axes is provided, each axis in axis will be normalized simultaneously. Default is -1, which uses the last axis. Note: when using multi-axis batch norm, the beta, gamma, moving_mean, and moving_variance variables are the same rank as the input Tensor, with dimension size 1 in all reduced (non-axis) dimensions.
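A minimal sketch of the channels-first case mentioned above (the filter count and input shape are illustrative assumptions):

```python
import tensorflow as tf

# Minimal sketch: for channels-first inputs the features live on axis 1,
# so that is the axis to normalize.
inputs = tf.keras.Input(shape=(3, 32, 32))                        # (channels, height, width)
x = tf.keras.layers.Conv2D(16, 3, data_format="channels_first")(inputs)
x = tf.keras.layers.BatchNormalization(axis=1)(x)                 # normalize the channel axis
model = tf.keras.Model(inputs, x)
```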
momentum: Momentum for the moving average.
epsilon: Small float added to variance to avoid dividing by zero.
center: If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling can be done by the next layer (see the sketch below).
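A minimal sketch of the transform these arguments parameterize (the shapes and values are illustrative assumptions): normalize by the batch statistics, then apply gamma and beta.

```python
import tensorflow as tf

# Minimal sketch of the training-time transform: normalize by the batch
# mean/variance, then apply the learned scale (gamma) and offset (beta).
# epsilon keeps the division numerically safe.
x = tf.random.normal([32, 8])
mean, variance = tf.nn.moments(x, axes=[0])
gamma, beta, epsilon = tf.ones([8]), tf.zeros([8]), 1e-3
y = gamma * (x - mean) / tf.sqrt(variance + epsilon) + beta
```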
beta_initializer: Initializer for the beta weight.
gamma_initializer: Initializer for the gamma weight.
moving_mean_initializer: Initializer for the moving mean.
moving_variance_initializer: Initializer for the moving variance.
beta_regularizer: Optional regularizer for the beta weight.
gamma_regularizer: Optional regularizer for the gamma weight.
beta_constraint: An optional projection function to be applied to the beta weight after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
gamma_constraint: An optional projection function to be applied to the gamma weight after being updated by an Optimizer.
renorm: Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
renorm_momentum: Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference (see the constructor sketch below).
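A minimal constructor sketch for the renorm arguments (the clipping values and momentum are illustrative assumptions, not recommended defaults):

```python
import tensorflow as tf

# Minimal sketch: Batch Renormalization with a clipped correction and its own
# momentum for the renorm moving statistics.
bn = tf.keras.layers.BatchNormalization(
    renorm=True,
    renorm_clipping={"rmax": 3.0, "rmin": 1.0 / 3.0, "dmax": 5.0},
    renorm_momentum=0.9,
)
```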
fused: If None or True, use a faster, fused implementation if possible. If False, use the system recommended implementation.
trainable: Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
virtual_batch_size: An int. By default, virtual_batch_size is None, which means batch normalization is performed across the whole batch. When virtual_batch_size is not None, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution (see the sketch below).
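A minimal sketch of Ghost Batch Normalization (the batch and sub-batch sizes are illustrative assumptions):

```python
import tensorflow as tf

# Minimal sketch: a batch of 64 is split into virtual sub-batches of 16,
# each normalized separately with shared gamma, beta, and moving statistics.
bn = tf.keras.layers.BatchNormalization(virtual_batch_size=16)
x = tf.random.normal([64, 8])          # 64 is divisible by 16
y = bn(x, training=True)
```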
adjustment: A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis == -1, adjustment = lambda shape: (tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1)) will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta (see the sketch below). If None, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
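A minimal sketch of passing the adjustment function from the example above to the layer (the input shape is an illustrative assumption):

```python
import tensorflow as tf

# Minimal sketch: the random scale/shift from the example above, applied to
# the normalized values during training, before gamma and beta.
adjustment = lambda shape: (
    tf.random.uniform(shape[-1:], 0.93, 1.07),
    tf.random.uniform(shape[-1:], -0.1, 0.1),
)
bn = tf.keras.layers.BatchNormalization(adjustment=adjustment)
y = bn(tf.random.normal([32, 8]), training=True)
```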
name: A string, the name of the layer.
```python
__init__(
    axis=-1,
    momentum=0.99,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer=tf.zeros_initializer(),
    gamma_initializer=tf.ones_initializer(),
    moving_mean_initializer=tf.zeros_initializer(),
    moving_variance_initializer=tf.ones_initializer(),
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    renorm=False,
    renorm_clipping=None,
    renorm_momentum=0.99,
    fused=None,
    trainable=True,
    virtual_batch_size=None,
    adjustment=None,
    name=None,
    **kwargs
)
```
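A minimal usage sketch of the constructor (the layer sizes, optimizer, and loss are illustrative assumptions):

```python
import tensorflow as tf

# Minimal sketch: a Dense -> BatchNormalization -> ReLU stack with a couple of
# non-default constructor arguments.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(16,)),
    tf.keras.layers.BatchNormalization(momentum=0.99, epsilon=1e-3),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```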