Losses

public class Losses

Built-in loss functions.

Constants

int CHANNELS_FIRST
int CHANNELS_LAST
float EPSILON The default fuzz factor.

Public Constructors

Public Methods

static <T extends TNumber> Operand<T>
binaryCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, float labelSmoothing)
Computes the binary crossentropy loss between labels and predictions.
static <T extends TNumber> Operand<T>
categoricalCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, float labelSmoothing, int axis)
Computes the categorical crossentropy loss between labels and predictions.
static <T extends TNumber> Operand<T>
categoricalHinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the categorical hinge loss between labels and predictions.
static <T extends TNumber> Operand<T>
cosineSimilarity(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, int[] axis)
Computes the cosine similarity loss between labels and predictions.
static <T extends TNumber> Operand<T>
hinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the hinge loss between labels and predictions.

loss = reduceMean(maximum(1 - labels * predictions, 0))

static <T extends TNumber> Operand<T>
huber(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, float delta)
Computes the Huber loss between labels and predictions.
static <T extends TNumber> Operand<T>
kullbackLeiblerDivergence(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the Kullback-Leibler divergence loss between labels and predictions.
static <T extends TNumber> Operand<T>
l2Normalize(Ops tf, Operand<T> x, int[] axis)
Normalizes along dimension axis using an L2 norm.
static <T extends TNumber> Operand<T>
logCosh(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the hyperbolic cosine loss between labels and predictions.
static <T extends TNumber> Operand<T>
meanAbsoluteError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Calculates the mean absolute error between labels and predictions.
static <T extends TNumber> Operand<T>
meanAbsolutePercentageError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Calculates the mean absolute percentage error between labels and predictions.
static <T extends TNumber> Operand<T>
meanSquaredError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the mean squared error between labels and predictions.
static <T extends TNumber> Operand<T>
meanSquaredLogarithmicError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Calculates the mean squared logarithmic error between labels and predictions.
static <T extends TNumber> Operand<T>
poisson(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the Poisson loss between labels and predictions.
static <T extends TNumber> Operand<T>
sparseCategoricalCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, int axis)
Computes the sparse categorical crossentropy loss between labels and predictions.
static <T extends TNumber> Operand<T>
squaredHinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the squared hinge loss between labels and predictions.

Inherited Methods

Constants

public static final int CHANNELS_FIRST

Constant Value: 1

public static final int CHANNELS_LAST

Constant Value: -1

public static final float EPSILON

The default fuzz factor.

Constant Value: 1.0E-7

Public Constructors

public Losses ()

Public Methods

public static <T extends TNumber> Operand<T> binaryCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, float labelSmoothing)

Computes the binary crossentropy loss between labels and predictions.

Parameters
tf the TensorFlow Ops
labels true targets
predictions the predictions
fromLogits Whether to interpret predictions as a tensor of logit values
labelSmoothing A number in the range [0, 1]. When 0, no smoothing occurs. When > 0, compute the loss between the predicted labels and a smoothed version of the true labels, where the smoothing squeezes the labels towards 0.5. Larger values of labelSmoothing correspond to heavier smoothing.
Returns
  • the binary crossentropy loss.
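The label-smoothing arithmetic described above can be illustrated without the TensorFlow API. This is a plain-Java sketch (the class and method names are illustrative, not part of Losses) of the squeeze toward 0.5:

```java
public class BinarySmoothingDemo {
    // Squeeze a hard 0/1 label toward 0.5: smoothed = label * (1 - s) + 0.5 * s.
    static double smooth(double label, double s) {
        return label * (1.0 - s) + 0.5 * s;
    }
}
```

With labelSmoothing = 0.2, a true label of 1 becomes 0.9 and a true label of 0 becomes 0.1; with labelSmoothing = 0, the labels pass through unchanged.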

public static <T extends TNumber> Operand<T> categoricalCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, float labelSmoothing, int axis)

Computes the categorical crossentropy loss between labels and predictions.

Parameters
tf the TensorFlow Ops
labels true targets
predictions the predictions
fromLogits Whether to interpret predictions as a tensor of logit values
labelSmoothing A float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed. For example, labelSmoothing=0.2 means we use a value of 0.1 for label 0 and 0.9 for label 1.
axis the axis along which to compute the crossentropy
Returns
  • the categorical crossentropy loss.
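The categorical smoothing described above distributes the smoothing mass evenly over the classes rather than toward 0.5. A plain-Java sketch (the names are illustrative, not part of the Losses API), which reproduces the 0.1/0.9 example from the parameter description for two classes:

```java
public class CategoricalSmoothingDemo {
    // Smooth a one-hot label entry: smoothed = label * (1 - s) + s / numClasses.
    static double smooth(double label, double s, int numClasses) {
        return label * (1.0 - s) + s / numClasses;
    }
}
```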

public static <T extends TNumber> Operand<T> categoricalHinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)

Computes the categorical hinge loss between labels and predictions.

Parameters
tf the TensorFlow Ops
labels true targets, values are expected to be 0 or 1.
predictions the predictions
Returns
  • the categorical hinge loss

public static <T extends TNumber> Operand<T> cosineSimilarity(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, int[] axis)

Computes the cosine similarity loss between labels and predictions.

Note that the result is a number between -1 and 1, with the sign inverted relative to the mathematical definition of cosine similarity (where 1 means similar and -1 means dissimilar). Because the value is negated to serve as a loss, values closer to -1 indicate greater similarity, 0 indicates orthogonality, and values closer to 1 indicate greater dissimilarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either labels or predictions is a zero vector, the cosine similarity will be 0 regardless of the proximity between predictions and targets.

loss = -sum(l2Norm(labels) * l2Norm(predictions))

Parameters
tf the TensorFlow Ops
labels true targets
predictions the predictions
axis Axis along which to determine similarity.
Returns
  • the cosine similarity loss
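The formula above can be checked with a plain-Java sketch over a single vector (the class and method names are illustrative, not part of the Losses API):

```java
public class CosineSimilarityDemo {
    // Scale a vector to unit L2 norm; a zero vector stays zero.
    static double[] l2Normalize(double[] v) {
        double sq = 0;
        for (double x : v) sq += x * x;
        double norm = Math.sqrt(sq);
        double[] out = new double[v.length];
        for (int i = 0; i < v.length; i++) {
            out[i] = norm == 0 ? 0 : v[i] / norm;
        }
        return out;
    }

    // loss = -sum(l2Norm(labels) * l2Norm(predictions))
    static double cosineSimilarityLoss(double[] labels, double[] predictions) {
        double[] a = l2Normalize(labels);
        double[] b = l2Normalize(predictions);
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return -sum;
    }
}
```

Vectors pointing in the same direction give -1, and orthogonal vectors give 0, matching the sign convention described above.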

public static <T extends TNumber> Operand<T> hinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)

Computes the hinge loss between labels and predictions.

loss = reduceMean(maximum(1 - labels * predictions, 0))

Parameters
tf the TensorFlow Ops
labels true targets, values are expected to be -1 or 1. If binary (0 or 1) labels are provided, they will be converted to -1 or 1.
predictions the predictions
Returns
  • the hinge loss
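The formula above is straightforward to verify by hand. A plain-Java sketch (names are illustrative, not part of the Losses API), assuming labels are already in {-1, 1}:

```java
public class HingeDemo {
    // loss = reduceMean(maximum(1 - labels * predictions, 0)),
    // with labels already converted to -1 or 1.
    static double hinge(double[] labels, double[] predictions) {
        double sum = 0;
        for (int i = 0; i < labels.length; i++) {
            sum += Math.max(1.0 - labels[i] * predictions[i], 0.0);
        }
        return sum / labels.length;
    }
}
```

Correctly classified examples with margin >= 1 contribute zero loss; misclassified or low-margin examples contribute linearly.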

public static <T extends TNumber> Operand<T> huber(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, float delta)

Computes the Huber loss between labels and predictions.

For each value x in error = labels - predictions:

     loss = 0.5 * x^2                  if |x| <= d
     loss = 0.5 * d^2 + d * (|x| - d)  if |x| > d
 

where d is delta.

Parameters
tf the TensorFlow Ops
labels true targets
predictions the predictions
delta the point where the Huber loss function changes from quadratic to linear.
Returns
  • the Huber loss
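The piecewise definition above can be sketched in plain Java (the names are illustrative, not part of the Losses API); the mean is taken over all elements:

```java
public class HuberDemo {
    // Per element x = label - prediction:
    //   0.5 * x^2                          if |x| <= delta
    //   0.5 * delta^2 + delta * (|x| - delta)  otherwise
    // The two branches agree at |x| = delta, so the loss is continuous.
    static double huber(double[] labels, double[] predictions, double delta) {
        double sum = 0;
        for (int i = 0; i < labels.length; i++) {
            double ax = Math.abs(labels[i] - predictions[i]);
            sum += ax <= delta
                    ? 0.5 * ax * ax
                    : 0.5 * delta * delta + delta * (ax - delta);
        }
        return sum / labels.length;
    }
}
```

Small errors are penalized quadratically (like mean squared error) while large errors grow only linearly, which makes the loss less sensitive to outliers.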

public static <T extends TNumber> Operand<T> kullbackLeiblerDivergence(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)

Computes the Kullback-Leibler divergence loss between labels and predictions.

Parameters
tf the TensorFlow Ops
labels true targets
predictions the predictions
Returns
  • the Kullback-Leibler divergence loss
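The underlying formula is loss = sum(labels * log(labels / predictions)). A plain-Java sketch (names are illustrative, not part of the Losses API), which clips both inputs away from zero to keep the logarithm finite; the clipping bound mirrors the EPSILON constant above, and a similar clip is applied by reference implementations:

```java
public class KLDivergenceDemo {
    static final double EPSILON = 1e-7; // mirrors Losses.EPSILON

    // loss = sum(labels * log(labels / predictions)), with both
    // distributions clipped into [EPSILON, 1] to avoid log(0).
    static double kld(double[] labels, double[] predictions) {
        double sum = 0;
        for (int i = 0; i < labels.length; i++) {
            double p = Math.min(Math.max(labels[i], EPSILON), 1.0);
            double q = Math.min(Math.max(predictions[i], EPSILON), 1.0);
            sum += p * Math.log(p / q);
        }
        return sum;
    }
}
```

The divergence is zero when the two distributions match and grows as predictions drift from the labels.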

public static <T extends TNumber> Operand<T> l2Normalize(Ops tf, Operand<T> x, int[] axis)

Normalizes along dimension axis using an L2 norm.

Parameters
tf The TensorFlow Ops
x the input
axis Dimension along which to normalize.
Returns
  • the normalized values based on L2 norm

public static <T extends TNumber> Operand<T> logCosh(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)

Computes the hyperbolic cosine loss between labels and predictions.

log(cosh(x)) is approximately equal to (x**2) / 2 for small x and to abs(x) - log(2) for large x. This means that logCosh works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.

Parameters
tf the TensorFlow Ops
labels true targets
predictions the predictions
Returns
  • the hyperbolic cosine loss
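The two asymptotic regimes described above are easy to check numerically. A plain-Java sketch (names are illustrative, not part of the Losses API):

```java
public class LogCoshDemo {
    // loss = reduceMean(log(cosh(predictions - labels))).
    // Note: Math.cosh overflows for very large |x|; a production
    // implementation would use a numerically stable formulation,
    // but this direct form suffices for small examples.
    static double logCosh(double[] labels, double[] predictions) {
        double sum = 0;
        for (int i = 0; i < labels.length; i++) {
            sum += Math.log(Math.cosh(predictions[i] - labels[i]));
        }
        return sum / labels.length;
    }
}
```

For a small error of 0.01 the loss is close to 0.01**2 / 2, and for an error of 10 it is close to 10 - log(2), illustrating the quadratic-to-linear transition.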

public static <T extends TNumber> Operand<T> meanAbsoluteError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)

Calculates the mean absolute error between labels and predictions.

loss = reduceMean(abs(labels - predictions))

Parameters
tf The TensorFlow Ops
labels the labels
predictions the predictions
Returns
  • the mean absolute error
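The formula above reduces to a simple average of absolute differences. A plain-Java sketch (names are illustrative, not part of the Losses API):

```java
public class MaeDemo {
    // loss = reduceMean(abs(labels - predictions))
    static double mae(double[] labels, double[] predictions) {
        double sum = 0;
        for (int i = 0; i < labels.length; i++) {
            sum += Math.abs(labels[i] - predictions[i]);
        }
        return sum / labels.length;
    }
}
```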