tf.nn.nce_loss

Computes and returns the noise-contrastive estimation training loss.

See "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models" (Gutmann and Hyvärinen, 2010). Also see our Candidate Sampling Algorithms Reference.

A common pattern is to use this method for training, then compute the full sigmoid loss for evaluation or inference, as in the following example:

if mode == "train":
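  # Training: score the true classes against num_sampled randomly drawn negatives.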
  loss = tf.nn.nce_loss(
      weights=weights,
      biases=biases,
      labels=labels,
      inputs=inputs,
      ...)
elif mode == "eval":
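  # Evaluation: compute the exact sigmoid loss over all num_classes classes.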
  logits = tf.matmul(inputs, tf.transpose(weights))
  logits = tf.nn.bias_add(logits, biases)
  labels_one_hot = tf.one_hot(labels, num_classes)
  loss = tf.nn.sigmoid_cross_entropy_with_logits(
      labels=labels_one_hot,
      logits=logits)
  loss = tf.reduce_sum(loss, axis=1)
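
The sampled branch scales with num_sampled rather than num_classes, which is what makes NCE practical for training; the dense eval branch scores every class, so it is exact but costs proportionally more.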

Args:
  weights: A Tensor of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings.
  biases: A Tensor of shape [num_classes]. The class biases.
  labels: A Tensor of type int64 and shape [batch_size, num_true]. The target classes.
  inputs: A Tensor of shape [batch_size, dim]. The forward activations of the input network.
  num_sampled: An int. The number of negative classes to randomly sample per batch. This single sample of negative classes is evaluated for each element in the batch.
  num_classes: An int. The number of possible classes.
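
For concreteness, here is a minimal end-to-end sketch of the training call. The vocabulary size, embedding dimension, batch size, and random placeholder data are illustrative assumptions, not part of the API:

import tensorflow as tf

num_classes = 10000  # illustrative vocabulary size
dim = 128            # illustrative embedding dimension
batch_size = 32
num_sampled = 64     # negative classes sampled per batch

# Trainable class embeddings and biases.
weights = tf.Variable(tf.random.normal([num_classes, dim], stddev=0.05))
biases = tf.Variable(tf.zeros([num_classes]))

# Stand-ins for the forward activations and the int64 target classes.
inputs = tf.random.normal([batch_size, dim])
labels = tf.random.uniform([batch_size, 1], maxval=num_classes, dtype=tf.int64)

# tf.nn.nce_loss returns a [batch_size] tensor of per-example losses;
# reduce it to a scalar before handing it to an optimizer.
loss = tf.reduce_mean(
    tf.nn.nce_loss(
        weights=weights,
        biases=biases,
        labels=labels,
        inputs=inputs,
        num_sampled=num_sampled,
        num_classes=num_classes))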