Computes the npairs loss between y_true and y_pred.
tfa.losses.NpairsLoss(
    name: str = 'npairs_loss'
)
Npairs loss expects paired data where a pair is composed of samples from
the same labels and each pair in the minibatch has a different label.
The loss takes each row of the pair-wise similarity matrix, y_pred,
as logits and the remapped multi-class labels, y_true, as labels.
The similarity matrix y_pred between two embedding matrices a and b,
each with shape [batch_size, hidden_size], can be computed as follows:
# y_pred = a * b^T
y_pred = tf.matmul(a, b, transpose_a=False, transpose_b=True)
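
The following is a minimal usage sketch. It assumes this page documents tfa.losses.NpairsLoss from TensorFlow Addons (consistent with the default name 'npairs_loss' above); the embedding tensors and labels are illustrative placeholders, not part of the API.

import tensorflow as tf
import tensorflow_addons as tfa

batch_size, hidden_size = 4, 8

# Two embedding matrices; row i of a and row i of b form a positive pair.
a = tf.random.normal([batch_size, hidden_size])
b = tf.random.normal([batch_size, hidden_size])

# Pair-wise similarity matrix used as logits.
y_pred = tf.matmul(a, b, transpose_a=False, transpose_b=True)

# Remapped multi-class labels: each pair in the minibatch has a different label.
y_true = tf.constant([0, 1, 2, 3])

loss = tfa.losses.NpairsLoss()(y_true, y_pred)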
name: (Optional) name for the loss.
reduction: (Optional) Type of tf.keras.losses.Reduction to apply to
loss. Default value is AUTO. AUTO indicates that the reduction
option will be determined by the usage context. For almost all cases
this defaults to SUM_OVER_BATCH_SIZE. When used with
tf.distribute.Strategy, outside of built-in training loops such as
tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE
will raise an error. Please see this custom training tutorial
for more details. A short sketch of setting an explicit reduction
follows this argument list.
name: Optional name for the op.
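
As an illustration of the reduction argument described above, here is a hedged sketch using tf.keras.losses.MeanSquaredError, a built-in loss whose constructor exposes reduction (the NpairsLoss signature shown above exposes only name). Passing an explicit reduction such as SUM is the usual way to avoid the AUTO/SUM_OVER_BATCH_SIZE error in a custom training loop under tf.distribute.Strategy; the tensors below are placeholders.

import tensorflow as tf

# Explicit reduction: the loss returns an unaveraged sum that the caller
# scales (e.g. by the global batch size) inside a custom training loop.
mse = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM)

y_true = tf.constant([[0.0, 1.0], [1.0, 0.0]])
y_pred = tf.constant([[0.1, 0.8], [0.9, 0.2]])

per_example_sum = mse(y_true, y_pred)
loss = per_example_sum / y_true.shape[0]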
from_config: Instantiates a Loss from its config (output of get_config()).
get_config: Returns the config dictionary for a Loss instance.
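
A brief sketch of the config round trip, again assuming the tfa.losses.NpairsLoss class named above:

import tensorflow_addons as tfa

loss_fn = tfa.losses.NpairsLoss()
config = loss_fn.get_config()   # config dictionary, including the loss name
# from_config re-instantiates a Loss from such a dictionary.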
__call__(y_true, y_pred, sample_weight=None): Invokes the Loss instance.
y_true: Ground truth values. shape = [batch_size, d0, .. dN], except for
sparse loss functions such as sparse categorical crossentropy where
shape = [batch_size, d0, .. dN-1].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
sample_weight: Optional sample_weight acts as a
coefficient for the loss. If a scalar is provided, then the loss is
simply scaled by the given value. If sample_weight is a tensor of size
[batch_size], then the total loss for each sample of the batch is
rescaled by the corresponding element in the sample_weight vector. If
the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be
broadcasted to this shape), then each loss element of y_pred is scaled
by the corresponding value of sample_weight. (Note on dN-1: all loss
functions reduce by 1 dimension, usually axis=-1.)
Returns: Weighted loss float Tensor. If reduction is NONE, this has shape
[batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1
because all loss functions reduce by 1 dimension, usually axis=-1.)
Raises: ValueError, if the shape of sample_weight is invalid.
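
To make the call signature concrete, here is a minimal sketch of invoking the loss instance directly; the tensors are placeholders, and the scalar sample_weight simply scales the result as described above.

import tensorflow as tf
import tensorflow_addons as tfa

loss_fn = tfa.losses.NpairsLoss()

y_true = tf.constant([0, 1, 2, 3])    # one remapped label per pair
y_pred = tf.random.normal([4, 4])     # pair-wise similarity matrix (logits)

loss = loss_fn(y_true, y_pred)
scaled_loss = loss_fn(y_true, y_pred, sample_weight=0.5)  # scalar weight scales the loss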