Computes the weighted loss.
tf.compat.v1.losses.compute_weighted_loss(
    losses,
    weights=1.0,
    scope=None,
    loss_collection=tf.GraphKeys.LOSSES,
    reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
Args:
  losses: Tensor of shape [batch_size, d1, ... dN].
  weights: Optional Tensor whose rank is either 0, or the same rank as losses, and must be broadcastable to losses (i.e., all dimensions must be either 1, or the same as the corresponding losses dimension).
  scope: the scope for the operations performed in computing the loss.
  loss_collection: the loss will be added to these collections.
  reduction: Type of reduction to apply to loss.
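The broadcasting rule for weights and the default reduction can be sketched in NumPy. This is an approximation, assuming SUM_BY_NONZERO_WEIGHTS divides the weighted sum by the number of nonzero weight elements after broadcasting; compute_weighted_loss_sketch is a hypothetical helper, not part of TensorFlow:

```python
import numpy as np

def compute_weighted_loss_sketch(losses, weights=1.0):
    # Sketch of the SUM_BY_NONZERO_WEIGHTS reduction (an assumption based
    # on the reduction's name, not TensorFlow's implementation).
    losses = np.asarray(losses, dtype=np.float64)
    # weights must be rank 0 or broadcastable to the shape of losses.
    weights = np.broadcast_to(np.asarray(weights, dtype=np.float64), losses.shape)
    num_nonzero = np.count_nonzero(weights)
    if num_nonzero == 0:
        return 0.0
    return float(np.sum(losses * weights) / num_nonzero)

# Per-example weights of shape [batch_size, 1] broadcast across the
# feature dimension; the second example is masked out by a zero weight.
losses = [[0.5, 1.5], [2.0, 4.0]]
weights = [[1.0], [0.0]]
print(compute_weighted_loss_sketch(losses, weights))  # 1.0
```

Note that masked-out elements do not inflate the denominator: only nonzero weights are counted, so the result is a mean over the examples that actually contribute.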
Returns:
  Weighted loss Tensor of the same type as losses. If reduction is NONE, this has the same shape as losses; otherwise, it is scalar.
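The shape difference between the default reduction and Reduction.NONE can be seen directly (a usage sketch; the tensor values are illustrative):

```python
import tensorflow as tf

losses = tf.constant([[0.5, 1.5], [2.0, 4.0]])
weights = tf.constant([[1.0], [0.0]])

# Default SUM_BY_NONZERO_WEIGHTS reduction returns a scalar.
scalar = tf.compat.v1.losses.compute_weighted_loss(losses, weights=weights)

# Reduction.NONE preserves the element-wise shape of `losses`.
elementwise = tf.compat.v1.losses.compute_weighted_loss(
    losses, weights=weights, reduction=tf.compat.v1.losses.Reduction.NONE)

print(scalar.shape)       # ()
print(elementwise.shape)  # (2, 2)
```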
Raises:
  ValueError: If weights is None or the shape is not compatible with losses, or if the number of dimensions (rank) of either losses or weights is missing.
Note:
  When calculating the gradient of a weighted loss, contributions from both losses and weights are considered. If your weights depend on some model parameters but you do not want this to affect the loss gradient, you need to apply tf.stop_gradient to weights before passing them to compute_weighted_loss.
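A minimal sketch of that note, assuming eager execution with tf.GradientTape; the weights here depend on a variable x, and tf.stop_gradient severs that dependency so no gradient flows to x through the loss:

```python
import tensorflow as tf

losses = tf.constant([1.0, 2.0])
x = tf.Variable(2.0)  # model parameter the weights depend on

with tf.GradientTape() as tape:
    # Without stop_gradient, the loss gradient would also flow through
    # the weights' dependence on x.
    weights = tf.stop_gradient(tf.stack([x, x]))
    loss = tf.compat.v1.losses.compute_weighted_loss(losses, weights=weights)

grad = tape.gradient(loss, x)
print(grad)  # None: no gradient path to x remains
```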
Eager Compatibility:
  The loss_collection argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a tf.keras.Model.