Adds a Log Loss term to the training procedure.
tf.compat.v1.losses.log_loss(
    labels, predictions, weights=1.0, epsilon=1e-07, scope=None,
    loss_collection=tf.GraphKeys.LOSSES,
    reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weights vector. If the shape of weights matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weights.
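The scaling rules above can be sketched in plain NumPy. This is a minimal illustration of the log-loss formula and the default SUM_BY_NONZERO_WEIGHTS reduction (sum of weighted per-element losses divided by the number of nonzero-weight elements), not the library implementation:

```python
import numpy as np

def log_loss_sketch(labels, predictions, weights=1.0, epsilon=1e-7):
    """NumPy sketch of tf.compat.v1.losses.log_loss with the default
    SUM_BY_NONZERO_WEIGHTS reduction (an assumption for illustration)."""
    labels = np.asarray(labels, dtype=np.float64)
    predictions = np.asarray(predictions, dtype=np.float64)
    # Per-element log loss; epsilon guards against log(0).
    losses = (-labels * np.log(predictions + epsilon)
              - (1.0 - labels) * np.log(1.0 - predictions + epsilon))
    # A scalar weight scales every element; a broadcastable tensor
    # scales per sample or per element, as described above.
    weighted = losses * weights
    # Divide by the number of elements carrying a nonzero weight.
    nonzero = np.count_nonzero(np.broadcast_to(np.asarray(weights), losses.shape))
    return weighted.sum() / max(nonzero, 1)
```

For example, with labels [1.0, 0.0] and predictions [0.9, 0.1], each element contributes -log(0.9) ≈ 0.105, so the reduced loss is about 0.105; a scalar weight of 2.0 doubles it.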
Args:
labels: The ground truth output tensor, same dimensions as 'predictions'.
predictions: The predicted outputs.
weights: Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension).
epsilon: A small increment to add to avoid taking a log of zero.
scope: The scope for the operations performed in computing the loss.
loss_collection: collection to which the loss will be added.
reduction: Type of reduction to apply to loss.
Returns:
Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is scalar.
Raises:
ValueError: If the shape of predictions doesn't match that of labels or if the shape of weights is invalid. Also if labels or predictions is None.

Eager Compatibility:
The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.