Computes the triplet loss with semi-hard negative mining.
tf.contrib.losses.metric_learning.triplet_semihard_loss(
    labels, embeddings, margin=1.0
)
The loss encourages the positive distance (between a pair of embeddings with
the same label) to be smaller, by at least the margin constant, than the
minimum negative distance among the negatives that are farther from the anchor
than the positive (the semi-hard negatives) in the mini-batch. If no such
negative exists, the largest negative distance is used instead.
See: https://arxiv.org/abs/1503.03832
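
As a rough illustration of the selection rule above, here is a minimal NumPy
sketch for a single anchor-positive pair. The helper name and inputs are
illustrative only and are not part of the TensorFlow API.

import numpy as np

def semihard_triplet_term(pos_dist, neg_dists, margin=1.0):
    # Semi-hard negatives: farther from the anchor than the positive.
    neg_dists = np.asarray(neg_dists, dtype=np.float32)
    semihard = neg_dists[neg_dists > pos_dist]
    # Use the closest semi-hard negative; otherwise fall back to the
    # largest negative distance.
    neg_dist = semihard.min() if semihard.size else neg_dists.max()
    # Triplet hinge: the positive distance should beat the chosen negative
    # distance by at least the margin.
    return max(pos_dist - neg_dist + margin, 0.0)

# Example: positive at 0.5; 1.0 is the closest semi-hard negative.
print(semihard_triplet_term(0.5, [0.25, 1.0, 2.0], margin=1.0))  # -> 0.5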
Args:
  labels: 1-D tf.int32 Tensor with shape [batch_size] of multiclass integer labels.
  embeddings: 2-D float Tensor of embedding vectors. Embeddings should be l2 normalized.
  margin: Float, margin term in the loss definition.
Returns:
  triplet_loss: tf.float32 scalar.
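
A minimal usage sketch, assuming a TensorFlow 1.x runtime where tf.contrib is
available; the labels and embeddings below are random placeholders rather than
the output of a real embedding network.

import tensorflow as tf

# Toy mini-batch: 8 examples, 4 classes, 16-dim embeddings.
labels = tf.constant([0, 0, 1, 1, 2, 2, 3, 3], dtype=tf.int32)
raw_embeddings = tf.random_normal([8, 16])

# The loss expects l2-normalized embeddings.
embeddings = tf.nn.l2_normalize(raw_embeddings, axis=1)

loss = tf.contrib.losses.metric_learning.triplet_semihard_loss(
    labels=labels, embeddings=embeddings, margin=1.0)

with tf.Session() as sess:
    print(sess.run(loss))  # tf.float32 scalar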