Computes the triplet loss with hard negative and hard positive mining.
tfa.losses.TripletHardLoss(
    margin: tfa.types.FloatTensorLike = 1.0,
    soft: bool = False,
    distance_metric: Union[str, Callable] = 'L2',
    name: Optional[str] = None,
    **kwargs
)
The loss encourages the maximum positive distance (between a pair of embeddings
with the same labels) to be smaller than the minimum negative distance plus the
margin constant in the mini-batch.
The loss selects the hardest positive and the hardest negative samples
within the batch when forming the triplets.
We expect labels `y_true` to be provided as a 1-D integer `Tensor` with shape
`[batch_size]` of multi-class integer labels, and embeddings `y_pred` to be a
2-D float `Tensor` of l2-normalized embedding vectors.
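A minimal usage sketch under those input expectations. The network below (layer sizes, input shape, random data) is purely illustrative and not part of the `TripletHardLoss` API; only the loss construction and the label/embedding shapes follow the documentation above:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Hypothetical embedding network; the architecture is illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(32),
    # The loss expects l2-normalized embedding vectors.
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)),
])
model.compile(optimizer="adam", loss=tfa.losses.TripletHardLoss(margin=1.0))

# y_true: 1-D integer labels of shape [batch_size];
# y_pred (the model output): 2-D l2-normalized embeddings.
x = tf.random.normal((128, 784))
y = tf.random.uniform((128,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, batch_size=32)
```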
Args:
    margin: Float, margin term in the loss definition. Default value is 1.0.
    soft: Boolean, if set, use the soft margin version. Default value is False.
    distance_metric: str or a Callable that determines the distance metric
        used between embeddings. Default value is 'L2'.
    name: Optional name for the op.
Methods

from_config: Instantiates a `Loss` from its config (output of `get_config()`).

get_config: Returns the config dictionary for a `Loss` instance.
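A short sketch of the config round trip. This is the standard Keras `Loss` serialization pattern; whether the printed comparison holds exactly depends on the installed Keras/TFA versions, so treat it as an assumption rather than a guarantee:

```python
import tensorflow_addons as tfa

loss = tfa.losses.TripletHardLoss(margin=0.5, soft=True)
config = loss.get_config()  # dictionary describing this Loss instance
restored = tfa.losses.TripletHardLoss.from_config(config)
print(restored.get_config() == config)  # expected True if the round trip preserves the config
```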
__call__(y_true, y_pred, sample_weight=None)

Invokes the `Loss` instance.

Args:
    y_true: Ground truth values, with shape = `[batch_size, d0, .. dN]`, except
        for sparse loss functions such as sparse categorical crossentropy,
        where shape = `[batch_size, d0, .. dN-1]`.
    y_pred: The predicted values, with shape = `[batch_size, d0, .. dN]`.
    sample_weight: Optional `sample_weight` acts as a coefficient for the
        loss. If a scalar is provided, then the loss is simply scaled by the
        given value. If `sample_weight` is a tensor of size `[batch_size]`,
        then the total loss for each sample of the batch is rescaled by the
        corresponding element in the `sample_weight` vector. If the shape of
        `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcast to
        this shape), then each loss element of `y_pred` is scaled by the
        corresponding value of `sample_weight`. (Note on `dN-1`: all loss
        functions reduce by 1 dimension, usually `axis=-1`.)

Returns:
    Weighted loss float `Tensor`. If `reduction` is `NONE`, this has shape
    `[batch_size, d0, .. dN-1]`; otherwise, it is scalar. (Note `dN-1` because
    all loss functions reduce by 1 dimension, usually `axis=-1`.)

Raises:
    ValueError: If the shape of `sample_weight` is invalid.
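A direct-call sketch showing the scalar `sample_weight` case described above. The batch contents are made up for illustration; only the call pattern follows the documented `__call__` signature:

```python
import tensorflow as tf
import tensorflow_addons as tfa

loss_fn = tfa.losses.TripletHardLoss(margin=1.0)

# Made-up batch: 6 samples from 3 classes, 16-dim l2-normalized embeddings.
labels = tf.constant([0, 0, 1, 1, 2, 2])
embeddings = tf.math.l2_normalize(tf.random.normal((6, 16)), axis=1)

loss = loss_fn(labels, embeddings)                       # unweighted
scaled = loss_fn(labels, embeddings, sample_weight=0.5)  # scalar weight scales the loss
```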