TF.Text Metrics

Overview

TensorFlow Text provides a collection of text-metric classes and ops ready to use with TensorFlow 2.0. The library contains implementations of text-similarity metrics such as ROUGE-L, which are needed for the automatic evaluation of text generation models.

A benefit of using these ops to evaluate your models is that they are compatible with TPU evaluation and work well with TF streaming metric APIs (see the aggregation sketch at the end of this guide).

Setup

pip install -q tensorflow-text

import tensorflow as tf
import tensorflow_text as text

ROUGE-L

The ROUGE-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, ROUGE-L is the weighted harmonic mean (or F-measure) of the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS).

Source: https://www.microsoft.com/en-us/research/publication/rouge-a-package-for-automatic-evaluation-of-summaries/

The TF.Text implementation returns the F-measure, Precision, and Recall for each (hypothesis, reference) pair.

Consider the following hypothesis/reference pairs:

hypotheses = tf.ragged.constant([['captain', 'of', 'the', 'delta', 'flight'],
                                 ['the', '1990', 'transcript']])
references = tf.ragged.constant([['delta', 'air', 'lines', 'flight'],
                                 ['this', 'concludes', 'the', 'transcript']])

The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks.

Now we can call text.metrics.rouge_l and get our result back:

result = text.metrics.rouge_l(hypotheses, references)
print('F-Measure: %s' % result.f_measure)
print('P-Measure: %s' % result.p_measure)
print('R-Measure: %s' % result.r_measure)
F-Measure: tf.Tensor([0.44444448 0.57142854], shape=(2,), dtype=float32)
P-Measure: tf.Tensor([0.4       0.6666667], shape=(2,), dtype=float32)
R-Measure: tf.Tensor([0.5 0.5], shape=(2,), dtype=float32)
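
As a sanity check, the first pair's scores can be reproduced by hand: the LCS of ['captain', 'of', 'the', 'delta', 'flight'] and ['delta', 'air', 'lines', 'flight'] is ['delta', 'flight'], so precision is 2/5 = 0.4, recall is 2/4 = 0.5, and their harmonic mean is 2 * (0.4 * 0.5) / (0.4 + 0.5) ≈ 0.444. The pure-Python sketch below repeats that computation; the lcs_length helper is illustrative only, not part of TF.Text:

def lcs_length(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x == y:
                table[i + 1][j + 1] = table[i][j] + 1
            else:
                table[i + 1][j + 1] = max(table[i][j + 1], table[i + 1][j])
    return table[-1][-1]

hyp = ['captain', 'of', 'the', 'delta', 'flight']
ref = ['delta', 'air', 'lines', 'flight']
lcs = lcs_length(hyp, ref)   # 2 ('delta' and 'flight')
precision = lcs / len(hyp)   # 0.4
recall = lcs / len(ref)      # 0.5
print(2 * precision * recall / (precision + recall))  # ~0.4444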

ROUGE-L has an additional hyperparameter, alpha, which determines the weighting of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to 0.5, which corresponds to equal weight for Precision and Recall.
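
Concretely, the weighted mean being computed can be written as F = (P * R) / ((1 - alpha) * P + alpha * R). This expression is inferred here for illustration from the alpha=0 and alpha=1 behavior described above; consult the API docs for the exact definition. A small sketch:

def weighted_f_measure(p, r, alpha):
    # alpha=0 reduces to recall; alpha=1 reduces to precision;
    # alpha=0.5 is the ordinary harmonic mean.
    return (p * r) / ((1 - alpha) * p + alpha * r)

# Using the first pair's LCS precision (0.4) and recall (0.5):
print(weighted_f_measure(0.4, 0.5, 0.5))  # ~0.4444
print(weighted_f_measure(0.4, 0.5, 0.0))  # 0.5 (pure recall)
print(weighted_f_measure(0.4, 0.5, 1.0))  # 0.4 (pure precision)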

# Compute ROUGE-L with alpha=0
result = text.metrics.rouge_l(hypotheses, references, alpha=0)
print('F-Measure (alpha=0): %s' % result.f_measure)
print('P-Measure (alpha=0): %s' % result.p_measure)
print('R-Measure (alpha=0): %s' % result.r_measure)
F-Measure (alpha=0): tf.Tensor([0.5 0.5], shape=(2,), dtype=float32)
P-Measure (alpha=0): tf.Tensor([0.4       0.6666667], shape=(2,), dtype=float32)
R-Measure (alpha=0): tf.Tensor([0.5 0.5], shape=(2,), dtype=float32)

# Compute ROUGE-L with alpha=1
result = text.metrics.rouge_l(hypotheses, references, alpha=1)
print('F-Measure (alpha=1): %s' % result.f_measure)
print('P-Measure (alpha=1): %s' % result.p_measure)
print('R-Measure (alpha=1): %s' % result.r_measure)
F-Measure (alpha=1): tf.Tensor([0.4       0.6666667], shape=(2,), dtype=float32)
P-Measure (alpha=1): tf.Tensor([0.4       0.6666667], shape=(2,), dtype=float32)
R-Measure (alpha=1): tf.Tensor([0.5 0.5], shape=(2,), dtype=float32)
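
As noted in the overview, these ops also compose with TF's streaming metric APIs. The following is a minimal sketch (an assumed usage pattern, not code from the original guide) that aggregates a running mean of the per-example F-measure across evaluation batches with tf.keras.metrics.Mean:

# Aggregate ROUGE-L F-measure across batches with a streaming Keras metric.
mean_f = tf.keras.metrics.Mean(name='rouge_l_f_measure')
for batch_hyp, batch_ref in [(hypotheses, references)]:  # substitute your eval batches
    scores = text.metrics.rouge_l(batch_hyp, batch_ref)
    mean_f.update_state(scores.f_measure)
print('Mean F-Measure: %s' % mean_f.result().numpy())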