tf.compat.v1.layers.batch_normalization

Functional interface for the batch normalization layer (Ioffe et al., 2015).

Migrate to TF2

This API is not compatible with eager execution or tf.function.

Please refer to the tf.layers section of the migration guide to migrate a TensorFlow v1 model to Keras. The corresponding TensorFlow v2 layer is tf.keras.layers.BatchNormalization.

The batch updating pattern with tf.control_dependencies(tf.GraphKeys.UPDATE_OPS) should not be used in native TF2. Consult the tf.keras.layers.BatchNormalization documentation for further information.
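
For comparison, here is a minimal native TF2 sketch (the toy model, optimizer, and random data are illustrative assumptions, not part of this API): tf.keras.layers.BatchNormalization updates moving_mean and moving_variance automatically whenever it runs with training=True, for example inside Model.fit.

 import numpy as np
 import tensorflow as tf

 # Minimal sketch: no UPDATE_OPS collection or control dependencies are needed;
 # the layer updates its moving statistics as a side effect of training.
 model = tf.keras.Sequential([
     tf.keras.layers.BatchNormalization(input_shape=(8,)),
     tf.keras.layers.Dense(1),
 ])
 model.compile(optimizer="sgd", loss="mse")

 x = np.random.rand(32, 8).astype("float32")  # toy inputs (assumption)
 y = np.random.rand(32, 1).astype("float32")  # toy targets (assumption)
 model.fit(x, y, epochs=1, verbose=0)  # moving_mean/variance updated here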

Structural Mapping to Native TF2

None of the supported arguments have changed name.

Before:

 x_norm = tf.compat.v1.layers.batch_normalization(x)

After:

To migrate code using TF1 functional layers, use the Keras Functional API:

 x = tf.keras.Input(shape=(28, 28, 1))
 y = tf.keras.layers.BatchNormalization()(x)
 model = tf.keras.Model(x, y)
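
Once migrated, training versus inference behaviour is selected with the training argument when the Keras model is called; the example batch below is an illustrative assumption.

 # training=True normalizes with batch statistics and updates the moving
 # averages; training=False normalizes with the stored moving averages.
 images = tf.ones((4, 28, 28, 1))  # placeholder batch (assumption)
 train_out = model(images, training=True)
 infer_out = model(images, training=False)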

How to Map Arguments

TF1 Arg Name TF2 Arg Name Note
name name Layer base class
trainable trainable Layer base class
axis axis -
momentum momentum -
epsilon epsilon -
center center -
scale scale -
beta_initializer beta_initializer -
gamma_initializer gamma_initializer -
moving_mean_initializer moving_mean_initializer -
beta_regularizer beta_regularizer -
gamma_regularizer gamma_regularizer -
beta_constraint beta_constraint -
gamma_constraint gamma_constraint -
renorm Not supported -
renorm_clipping Not supported -
renorm_momentum Not supported -
fused Not supported -
virtual_batch_size Not supported -
adjustment Not supported -
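
Since the shared arguments keep their names, a TF1 call maps directly onto the TF2 layer constructor. A sketch (the argument values are illustrative, x is assumed to be an input tensor as in the examples above, and the TF1 call still requires graph mode as noted earlier):

 # TF1 (compat, graph mode only):
 x_norm = tf.compat.v1.layers.batch_normalization(
     x, axis=-1, momentum=0.99, epsilon=1e-3, center=True, scale=True)

 # TF2 / Keras, same argument names:
 bn = tf.keras.layers.BatchNormalization(
     axis=-1, momentum=0.99, epsilon=1e-3, center=True, scale=True)
 x_norm = bn(x)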

Description

When training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be executed alongside the train_op. For example:

  x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)

  # ...

  # Execute the moving-average update ops together with the training op.
  update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
  train_op = optimizer.minimize(loss)
  train_op = tf.group([train_op, update_ops])
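
The same requirement can be written with an explicit control dependency, which is the pattern referenced in the migration note above; a minimal sketch (the optimizer and loss are placeholders):

  # Equivalent TF1-style pattern using tf.control_dependencies.
  update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
  with tf.control_dependencies(update_ops):
      train_op = optimizer.minimize(loss)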

Args

inputs Tensor input.
axis An int, the axis that should be normalized (typically the features axis). For instance, after a Convolution2D layer with data_format="channels_first", set axis=1 in BatchNormalization.
momentum Momentum for the moving average.
epsilon Small float added to variance to avoid dividing by zero.
center If True, add offset of beta to normalized tensor. If False, beta is ignored.