Computes CTC (Connectionist Temporal Classification) loss.
```python
tf.compat.v1.nn.ctc_loss_v2(
    labels,
    logits,
    label_length,
    logit_length,
    logits_time_major=True,
    unique=None,
    blank_index=None,
    name=None
)
```
This op implements the CTC loss as presented in (Graves et al., 2006).
Notes:
- Same as the "Classic CTC" in TensorFlow 1.x's `tf.compat.v1.nn.ctc_loss`, with the settings preprocess_collapse_repeated=False and ctc_merge_repeated=True.
- Labels may be supplied as either a dense, zero-padded tensor with a vector of label sequence lengths OR as a SparseTensor.
- On TPU and GPU: Only dense padded labels are supported.
- On CPU: The caller may use either a SparseTensor or dense padded labels, but calling with a SparseTensor will be significantly faster.
- The default blank label is 0 rather than num_classes - 1, unless overridden by blank_index.
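The notes above can be illustrated with a minimal dense-label call. The batch size, sequence lengths, and random logits below are illustrative, not part of the API; the key points are the time-major logits shape `[max_time, batch_size, num_classes]`, the zero-padded label tensor with a separate `label_length` vector, and label values kept above 0 because 0 is the default blank index:

```python
import numpy as np
import tensorflow as tf

# Illustrative sizes (not part of the API).
batch_size, max_time, num_classes = 2, 10, 5

# Dense, zero-padded labels; values start at 1 because index 0 is the
# default blank label. label_length gives the true length of each row.
labels = np.array([[1, 2, 3, 0],
                   [2, 4, 0, 0]], dtype=np.int32)
label_length = np.array([3, 2], dtype=np.int32)

# Unnormalized logits in time-major layout: [max_time, batch_size, num_classes].
logits = np.random.randn(max_time, batch_size, num_classes).astype(np.float32)
logit_length = np.array([10, 8], dtype=np.int32)

loss = tf.compat.v1.nn.ctc_loss_v2(
    labels=labels,
    logits=logits,
    label_length=label_length,
    logit_length=logit_length,
    logits_time_major=True,
    blank_index=0,
)
# loss has shape [batch_size]: one negative log probability per example.
print(loss.shape)
```

Passing `logits_time_major=False` instead expects logits of shape `[batch_size, max_time, num_classes]`.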
Returns | |
---|---|
`loss` | A tensor of shape [batch_size] containing negative log probabilities. |
References | |
---|---|
Connectionist Temporal Classification - Labeling Unsegmented Sequence Data with Recurrent Neural Networks: Graves et al., 2006 (pdf) |