Calculates the CTC Loss (log probability) for each batch entry.
tf.raw_ops.CTCLoss(
    inputs,
    labels_indices,
    labels_values,
    sequence_length,
    preprocess_collapse_repeated=False,
    ctc_merge_repeated=True,
    ignore_longer_outputs_than_inputs=False,
    name=None
)
Also calculates the gradient. This op performs the softmax operation for you,
so inputs should be, e.g., linear projections of LSTM outputs.
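
For illustration, here is a minimal eager-mode sketch of calling the op. The shapes, label values, and random logits are invented for the example; it also follows the convention that the last class index serves as the blank label.

import tensorflow as tf

# Illustrative shapes (assumptions for this sketch, not prescribed by the op).
max_time, batch_size, num_classes = 5, 2, 4   # last class index assumed to be the blank label
logits = tf.random.normal([max_time, batch_size, num_classes])

# Sparse labels: batch 0 has label sequence [1, 2], batch 1 has [1].
labels_indices = tf.constant([[0, 0], [0, 1], [1, 0]], dtype=tf.int64)
labels_values = tf.constant([1, 2, 1], dtype=tf.int32)
sequence_length = tf.constant([max_time, max_time], dtype=tf.int32)

loss, gradient = tf.raw_ops.CTCLoss(
    inputs=logits,
    labels_indices=labels_indices,
    labels_values=labels_values,
    sequence_length=sequence_length,
)
print(loss.shape)      # (2,)      -- one loss value per batch entry
print(gradient.shape)  # (5, 2, 4) -- same shape as inputs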
Args:

  inputs: A Tensor. Must be one of the following types: float32, float64.
    3-D, shape (max_time x batch_size x num_classes), the logits.
  labels_indices: A Tensor of type int64. The indices of a
    SparseTensor<int32, 2>. labels_indices(i, :) == [b, t] means
    labels_values(i) stores the id for (batch b, time t). See the sketch
    after this list for one way to build the sparse labels from dense labels.
  labels_values: A Tensor of type int32. The values (labels) associated with
    the given batch and time.
  sequence_length: A Tensor of type int32. A vector of length batch_size
    containing the sequence lengths.
  preprocess_collapse_repeated: An optional bool. Defaults to False. Scalar;
    if True, repeated labels are collapsed prior to the CTC calculation.
  ctc_merge_repeated: An optional bool. Defaults to True. Scalar. If set to
    False, repeated non-blank labels are not merged during the CTC
    calculation and are interpreted as individual labels. This is a
    simplified version of CTC.
  ignore_longer_outputs_than_inputs: An optional bool. Defaults to False.
    Scalar. If set to True, items whose output (label) sequences are longer
    than their input sequences are skipped during the CTC calculation: they
    do not contribute to the loss term and have zero gradient.
  name: A name for the operation (optional).
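The labels_indices and labels_values inputs are the index/value pair of a SparseTensor of labels, indexed by (batch, position in the label sequence). As a minimal sketch, assuming dense labels padded with -1 (the padding value is an assumption for this example), they can be derived like this:

import tensorflow as tf

# Hypothetical dense labels, padded with -1 (shape: batch_size x max_label_length).
dense_labels = tf.constant([[1, 2, -1],
                            [1, -1, -1]], dtype=tf.int32)

mask = dense_labels >= 0
labels_indices = tf.where(mask)                      # int64, shape (num_labels, 2), row-major order
labels_values = tf.boolean_mask(dense_labels, mask)  # int32, shape (num_labels,)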
Returns:

  A tuple of Tensor objects (loss, gradient).

  loss: A Tensor. Has the same type as inputs.
  gradient: A Tensor. Has the same type as inputs.
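Because a gradient function is registered for this op, the returned gradient can be cross-checked against automatic differentiation. The following is a hedged sketch continuing the earlier example (logits, labels_indices, labels_values, and sequence_length come from that sketch), assuming eager execution:

with tf.GradientTape() as tape:
    tape.watch(logits)  # logits is a plain Tensor, so it must be watched explicitly
    loss, gradient = tf.raw_ops.CTCLoss(
        inputs=logits,
        labels_indices=labels_indices,
        labels_values=labels_values,
        sequence_length=sequence_length,
    )
    total_loss = tf.reduce_sum(loss)  # scalar objective, e.g. for training

# With a plain sum over the batch, the tape gradient should match the op's
# second output (assuming the registered CTCLoss gradient is used).
autodiff_grad = tape.gradient(total_loss, logits)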