
tf.nn.scale_regularization_loss


Scales the sum of the given regularization losses by number of replicas.

tf.nn.scale_regularization_loss(regularization_loss)

Usage with distribution strategy and custom training loop:

with strategy.scope():
  def compute_loss(labels, predictions, sample_weight):
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)

    # Compute loss that is scaled by sample_weight and by global batch size.
    loss = tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)

    # Add scaled regularization losses.
    loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights))
    return loss

Args:

  • regularization_loss: The sum of the regularization losses to scale, e.g. the output of tf.nn.l2_loss.

Returns:

Scalar loss value.
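As a minimal sketch of the scaling described above (the function name and signature below are hypothetical, not the TensorFlow API): each replica divides the regularization loss by the number of replicas in sync, so that when every replica adds its own copy of the penalty and the per-replica gradients are summed across replicas, the regularization term is counted exactly once.

```python
def scale_regularization_loss_sketch(regularization_loss, num_replicas_in_sync):
    # Each replica contributes regularization_loss / num_replicas_in_sync;
    # summing across replicas recovers regularization_loss exactly once.
    return regularization_loss / num_replicas_in_sync

# With 4 replicas, each replica contributes a quarter of the L2 penalty.
per_replica = scale_regularization_loss_sketch(8.0, 4)
total = per_replica * 4  # what the cross-replica sum reconstructs
```

This is why the example above calls tf.nn.scale_regularization_loss inside the per-replica loss function rather than adding the unscaled penalty on every replica.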