AUTO: Indicates that the reduction option will be determined by the usage
context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When
used with tf.distribute.Strategy, outside of built-in training loops such
as tf.keras compile and fit, we expect the reduction value to be
SUM or NONE. Using AUTO in that case will raise an error.
NONE: No additional reduction is applied to the output of the wrapped
loss function. When non-scalar losses are returned to Keras functions like
fit/evaluate, the unreduced vector loss is passed to the optimizer
but the reported loss will be a scalar value.
SUM: Scalar sum of weighted losses.
SUM_OVER_BATCH_SIZE: Scalar SUM divided by number of elements in losses.
This reduction type is not supported when used with
tf.distribute.Strategy outside of built-in training loops like tf.keras compile/fit.
You can implement 'SUM_OVER_BATCH_SIZE' using the global batch size like:
  loss_obj = tf.keras.losses.CategoricalCrossentropy(
      reduction=tf.keras.losses.Reduction.NONE)
  loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
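As a concrete illustration, the reduction semantics above and the global-batch workaround can be sketched in plain Python without TensorFlow. This is a hypothetical numeric example: `per_example_losses` and the values in it are made up, standing in for the vector a Keras loss returns with reduction=NONE.

```python
# Hypothetical per-example loss vector, as produced by a loss with
# reduction=NONE (values invented for illustration).
per_example_losses = [0.5, 1.5, 2.0, 4.0]

# NONE: the vector is left unreduced.
none_reduced = per_example_losses

# SUM: scalar sum of the losses.
sum_reduced = sum(per_example_losses)                 # 8.0

# SUM_OVER_BATCH_SIZE: SUM divided by the number of elements.
sobs_reduced = sum_reduced / len(per_example_losses)  # 2.0

# The workaround above: with 2 replicas each holding half the global
# batch, summing per-replica losses scaled by 1/global_batch_size
# reproduces SUM_OVER_BATCH_SIZE over the full global batch.
global_batch_size = len(per_example_losses)
replica_a, replica_b = per_example_losses[:2], per_example_losses[2:]
per_replica = [sum(replica_a) * (1.0 / global_batch_size),
               sum(replica_b) * (1.0 / global_batch_size)]
assert abs(sum(per_replica) - sobs_reduced) < 1e-9

print(sum_reduced, sobs_reduced)  # 8.0 2.0
```

This shows why reduction=NONE (or SUM) plus manual scaling is required under tf.distribute.Strategy: each replica only sees its local slice, so dividing by a local batch size would over-weight the loss, while dividing by the global batch size keeps the distributed result equal to the single-machine SUM_OVER_BATCH_SIZE.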