
tf.keras.losses.CategoricalCrossentropy

Computes the crossentropy loss between the labels and predictions.

Inherits From: Loss


Use this crossentropy loss function when there are two or more label classes. Labels are expected to be provided in a one-hot representation. If you want to provide labels as integers, use the SparseCategoricalCrossentropy loss instead. There should be num_classes floating point values per feature.

In the snippet below, there are num_classes floating point values per example. The shape of both y_pred and y_true is [batch_size, num_classes].
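Under the hood, the per-example loss is the negative log-probability assigned to the true class. As a minimal sketch (pure Python, independent of TensorFlow; the helper name is hypothetical), the computation over one-hot labels looks like this:

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """Per-example loss: -sum_c y_true[c] * log(y_pred[c]).

    The `t > 0` guard skips zero-probability entries whose one-hot
    weight is 0, avoiding a spurious log(0).
    """
    return [
        -sum(t * math.log(p) for t, p in zip(t_row, p_row) if t > 0)
        for t_row, p_row in zip(y_true, y_pred)
    ]

y_true = [[0, 1, 0], [0, 0, 1]]
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]
losses = categorical_crossentropy(y_true, y_pred)
# losses ≈ [0.0513, 2.3026]; their mean ≈ 1.177, matching the
# default 'sum_over_batch_size' reduction in the snippet below.
```

This also makes clear why the default reduction reports 1.177 for the example batch: it is simply the mean of the two per-example losses.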

Standalone usage:

y_true = [[0, 1, 0], [0, 0, 1]]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
# Using 'auto'/'sum_over_batch_size' reduction type.
cce = tf.keras.losses.CategoricalCrossentropy()
cce(y_true, y_pred).numpy()
1.177
# Calling with 'sample_weight'.
cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()
0.814
# Using 'sum' reduction type.
cce = tf.keras.losses.CategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM)
cce(y_true, y_pred).numpy()
2.354
# Using 'none' reduction type.
cce = tf.keras.losses.CategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)
cce(y_true, y_pred).numpy()
array([0.0513, 2.303], dtype=float32)
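The reduction modes differ only in how the per-example losses are combined. A hedged pure-Python check of the numbers above (reimplementing the reduction arithmetic by hand, not calling TensorFlow):

```python
import math

# Per-example losses from the batch above: -log(p_true) for each example.
per_example = [-math.log(0.95), -math.log(0.1)]  # ≈ [0.0513, 2.3026]

# 'auto' / 'sum_over_batch_size': mean over the batch.
mean_loss = sum(per_example) / len(per_example)   # ≈ 1.177

# With sample_weight: weight each loss, then divide by the batch size.
weights = [0.3, 0.7]
weighted = sum(w * l for w, l in zip(weights, per_example)) / len(per_example)
# weighted ≈ 0.814

# 'sum': add the per-example losses.
total = sum(per_example)                          # ≈ 2.354

# 'none': return the per-example losses unreduced, i.e. [0.0513, 2.3026].
```

Note that sample weighting still divides by the batch size, not by the sum of the weights, which is why 0.3 × 0.0513 + 0.7 × 2.3026 averaged over 2 examples gives 0.814.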