Scales per-example losses with sample_weights and computes their average.
```python
tf.nn.compute_average_loss(
    per_example_loss, sample_weight=None, global_batch_size=None
)
```
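For intuition, here is a minimal standalone sketch (outside any distribution strategy) using made-up per-example losses and weights. It illustrates that the result is the weighted sum of the per-example losses divided by the global batch size:

```python
import tensorflow as tf

# Hypothetical per-example losses for a global batch of 4 examples.
per_example_loss = tf.constant([1.0, 2.0, 3.0, 4.0])
sample_weight = tf.constant([1.0, 0.5, 0.5, 1.0])

# Weighted sum divided by the global batch size:
# (1*1.0 + 2*0.5 + 3*0.5 + 4*1.0) / 4 = 7.5 / 4 = 1.875
loss = tf.nn.compute_average_loss(
    per_example_loss,
    sample_weight=sample_weight,
    global_batch_size=4)
print(loss.numpy())  # 1.875
```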
Usage with distribution strategy and custom training loop:
```python
with strategy.scope():
  def compute_loss(labels, predictions, sample_weight=None):
    # If you are using a `Loss` class instead, set reduction to `NONE` so
    # that we can do the reduction afterwards and divide by the global
    # batch size.
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)

    # Compute loss that is scaled by sample_weight and by global batch size.
    return tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)
```
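A sketch of how this `compute_loss` could be driven from a custom training loop under the same strategy. The names `model`, `optimizer`, and the dataset element unpacking are placeholders and not part of this API; `GLOBAL_BATCH_SIZE` is assumed to be the per-replica batch size multiplied by `strategy.num_replicas_in_sync`:

```python
# Assumed to be defined elsewhere: strategy, model, optimizer, compute_loss,
# and GLOBAL_BATCH_SIZE = per_replica_batch_size * strategy.num_replicas_in_sync.

@tf.function
def train_step(inputs):
  def step_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
      predictions = model(features, training=True)
      # Each replica divides by the *global* batch size.
      loss = compute_loss(labels, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

  per_replica_losses = strategy.run(step_fn, args=(inputs,))
  return strategy.reduce(
      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
```

Because every replica divides by the global batch size, summing the per-replica losses with `strategy.reduce` yields the correct average loss over the full global batch.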
| Returns |
| --- |
| Scalar loss value. |