
nsl.lib.kl_divergence


Adds a KL-divergence to the training procedure.

nsl.lib.kl_divergence(
    labels,
    predictions,
    axis=None,
    weights=1.0,
    scope=None,
    loss_collection=tf.compat.v1.GraphKeys.LOSSES,
    reduction=tf.compat.v1.losses.Reduction.SUM_BY_NONZERO_WEIGHTS
)

For brevity, let P = labels and Q = predictions. The Kullback-Leibler divergence KL(P||Q) is

losses = P * log(P) - P * log(Q)

Note that the function assumes predictions and labels contain the values of a multinomial distribution, i.e., each value is the probability of the corresponding class.
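To make the formula above concrete, here is a minimal hand-rolled sketch (not part of the library) that computes the per-element losses for a single distribution; the tensor values are made up for illustration, and the weighting/reduction that kl_divergence applies afterwards is omitted:

import tensorflow as tf

# P and Q are probability vectors (non-negative, summing to 1).
P = tf.constant([0.7, 0.2, 0.1])
Q = tf.constant([0.6, 0.3, 0.1])

losses = P * tf.math.log(P) - P * tf.math.log(Q)  # per-element terms
kl = tf.reduce_sum(losses)                        # KL(P || Q)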

For details on how weights and reduction are applied, refer to tf.losses.
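A minimal usage sketch, assuming TensorFlow 2.x eager execution; the tensor values are illustrative only:

import tensorflow as tf
import neural_structured_learning as nsl

# Two examples over 3 classes; each row sums to 1 along axis 1.
labels = tf.constant([[0.7, 0.2, 0.1],
                      [0.4, 0.4, 0.2]])
predictions = tf.constant([[0.6, 0.3, 0.1],
                           [0.3, 0.5, 0.2]])

# Scalar loss with the default SUM_BY_NONZERO_WEIGHTS reduction.
loss = nsl.lib.kl_divergence(labels, predictions, axis=1)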

Args:

  • labels: Tensor of type float32 or float64, with shape [d1, ..., dN, num_classes], representing the target distribution.
  • predictions: Tensor of the same type and shape as labels, representing the predicted distribution.
  • axis: The dimension along which the KL divergence is computed. Note that the values of labels and predictions along axis must form valid (multinomial) probability distributions.
  • weights: (optional) Tensor whose rank is either 0 or the same as that of labels, and which must be broadcastable to labels (i.e., all dimensions must be either 1 or the same as the corresponding losses dimension).
  • scope: The scope for the operations performed in computing the loss.
  • loss_collection: Collection to which the loss will be added.
  • reduction: Type of reduction to apply to loss.

Returns:

Weighted loss float Tensor. If reduction is NONE, this has the same shape as labels; otherwise, it is a scalar.
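For example (reusing the illustrative labels and predictions tensors from the sketch above), passing Reduction.NONE returns the unreduced per-element losses with the same shape as labels:

per_element = nsl.lib.kl_divergence(
    labels, predictions, axis=1,
    reduction=tf.compat.v1.losses.Reduction.NONE)
# per_element.shape == (2, 3), i.e. the shape of labels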

Raises:

  • InvalidArgumentError: If labels or predictions does not form a valid (multinomial) probability distribution along axis.
  • ValueError: If axis is None, if the shape of predictions doesn't match that of labels, or if the shape of weights is invalid.