Computes softmax cross entropy between logits and labels.
```python
tf.nn.softmax_cross_entropy_with_logits(
    labels, logits, axis=-1, name=None
)
```
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
If using exclusive labels (wherein one and only one class is true at a time), see tf.nn.sparse_softmax_cross_entropy_with_logits.
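For illustration, a minimal sketch of the sparse variant, assuming integer class indices in place of per-class probability vectors:

```python
import tensorflow as tf

logits = tf.constant([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
# One integer class index per example, instead of a one-hot/soft vector.
class_ids = tf.constant([0, 1])
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=class_ids, logits=logits)
```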
Usage:

```python
>>> logits = [[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]]
>>> labels = [[1.0, 0.0, 0.0], [0.0, 0.8, 0.2]]
>>> tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
<tf.Tensor: shape=(2,), dtype=float32,
numpy=array([0.16984604, 0.82474494], dtype=float32)>
```
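The result is equivalent to applying tf.nn.log_softmax and reducing along the class axis, computed in a fused, numerically stable way. A minimal sketch of that equivalence (the variable names are illustrative):

```python
import tensorflow as tf

logits = tf.constant([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
labels = tf.constant([[1.0, 0.0, 0.0], [0.0, 0.8, 0.2]])

# Fused, numerically stable op.
fused = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
# Manual equivalent: -sum(labels * log_softmax(logits)) over the class axis.
manual = -tf.reduce_sum(labels * tf.nn.log_softmax(logits), axis=-1)
# fused and manual agree up to floating-point error.
```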
A common use case is to have logits and labels of shape [batch_size, num_classes], but higher dimensions are supported, with the axis argument specifying the class dimension.
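For instance, a minimal sketch with the class dimension in the middle of a 3-D tensor (shapes chosen purely for illustration):

```python
import tensorflow as tf

# Shape [batch_size=2, num_classes=3, width=4]; classes live on axis 1.
logits = tf.random.normal([2, 3, 4])
# Build valid probability distributions along axis 1 for the labels.
labels = tf.nn.softmax(tf.random.normal([2, 3, 4]), axis=1)

loss = tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=logits, axis=1)
# The class axis is reduced away, leaving a loss of shape [2, 4].
```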
logits and labels must have the same dtype (either float16, float32, or float64).
Backpropagation will happen into both logits and labels. To disallow backpropagation into labels, pass label tensors through tf.stop_gradient before feeding it to this function.
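A minimal sketch of blocking backpropagation into soft labels (soft_labels is an illustrative name, e.g. teacher outputs in distillation):

```python
import tensorflow as tf

logits = tf.Variable([[4.0, 2.0, 1.0]])
soft_labels = tf.Variable([[0.7, 0.2, 0.1]])

with tf.GradientTape() as tape:
    # tf.stop_gradient treats the labels as a constant, so no gradient
    # flows back into soft_labels.
    loss = tf.nn.softmax_cross_entropy_with_logits(
        labels=tf.stop_gradient(soft_labels), logits=logits)

grads = tape.gradient(loss, [logits, soft_labels])
# grads[0] is a tensor; grads[1] is None because backprop into the
# labels was disallowed.
```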
Note that to avoid confusion, it is required to pass only named arguments to this function.
Args

|labels|Each vector along the class dimension should hold a valid probability distribution, e.g. for the case in which labels are of shape [batch_size, num_classes], each row of labels[i] must be a valid probability distribution.|
|logits|Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.|
|axis|The class dimension. Defaulted to -1, which is the last dimension.|
|name|A name for the operation (optional).|
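As noted for labels above, each vector along the class dimension must be a valid probability distribution. If starting from unnormalized weights, one way to satisfy this requirement (a minimal sketch; raw is an illustrative name):

```python
import tensorflow as tf

# Hypothetical unnormalized per-class weights.
raw = tf.constant([[2.0, 1.0, 1.0], [0.0, 3.0, 1.0]])
# Normalize each row to sum to 1 before passing it as labels.
labels = raw / tf.reduce_sum(raw, axis=-1, keepdims=True)
```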