tf.nn.log_softmax(logits, dim=-1, name=None)

See the guide: Neural Network > Classification

Computes log softmax activations.

For each batch i and class j we have

logsoftmax = logits - log(reduce_sum(exp(logits), dim))

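The identity above can be sketched in plain Python. This is an illustrative stand-in, not TensorFlow's implementation: it handles a flat list of logits (so `dim` is effectively fixed at -1) and subtracts the maximum logit first, a standard trick for numerical stability that leaves the result unchanged.

```python
import math

def log_softmax(logits, dim=-1):
    # Sketch of: logsoftmax = logits - log(reduce_sum(exp(logits), dim)).
    # Subtracting the max before exponentiating avoids overflow;
    # the shift cancels out, so the result is identical.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum for x in logits]

out = log_softmax([1.0, 2.0, 3.0])
# exponentiating the outputs recovers softmax probabilities summing to 1
probs = [math.exp(v) for v in out]
```

Note that the output has the same shape as the input, and differences between output entries equal differences between the corresponding logits.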

Args:
  • logits: A non-empty Tensor. Must be one of the following types: half, float32, float64.
  • dim: The dimension along which the softmax is performed. Defaults to -1, which indicates the last dimension.
  • name: A name for the operation (optional).

Returns:
A Tensor. Has the same type as logits. Same shape as logits.

Raises:
  • InvalidArgumentError: if logits is empty or dim is beyond the last dimension of logits.

Defined in tensorflow/python/ops/