In this tutorial, you will learn how to use Fairness Indicators to evaluate embeddings from TF Hub. This notebook uses the Civil Comments dataset.
Setup
Install the required libraries.
!pip install -q -U pip==20.2
!pip install fairness-indicators \
"absl-py==0.12.0" \
"pyarrow==2.0.0" \
"apache-beam==2.40.0" \
"avro-python3==1.9.1"
Import other required libraries.
import os
import tempfile
import apache_beam as beam
from datetime import datetime
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.view import widget_view
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from fairness_indicators import example_model
from fairness_indicators.tutorial_utils import util
Dataset
In this notebook, you work with the Civil Comments dataset, an archive of approximately 2 million public comments released by the Civil Comments platform in 2017 to support ongoing research. This effort was sponsored by Jigsaw, which has hosted Kaggle competitions to help classify toxic comments and to minimize unintended model bias.
Each individual text comment in the dataset has a toxicity label, with the label being 1 if the comment is toxic and 0 if the comment is non-toxic. Within the data, a subset of comments are labeled with a variety of identity attributes, including categories for gender, sexual orientation, religion, and race or ethnicity.
Prepare the data
TensorFlow parses features from data using tf.io.FixedLenFeature and tf.io.VarLenFeature. Map out the input feature, the output feature, and all other slicing features of interest.
BASE_DIR = tempfile.gettempdir()
# The input and output features of the classifier
TEXT_FEATURE = 'comment_text'
LABEL = 'toxicity'
FEATURE_MAP = {
    # Input and output features.
    LABEL: tf.io.FixedLenFeature([], tf.float32),
    TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),

    # Slicing features.
    'sexual_orientation': tf.io.VarLenFeature(tf.string),
    'gender': tf.io.VarLenFeature(tf.string),
    'religion': tf.io.VarLenFeature(tf.string),
    'race': tf.io.VarLenFeature(tf.string),
    'disability': tf.io.VarLenFeature(tf.string)
}
IDENTITY_TERMS = ['gender', 'sexual_orientation', 'race', 'religion', 'disability']
By default, the notebook downloads a preprocessed version of this dataset, but you may use the original dataset and re-run the processing steps if desired.
In the original dataset, each comment is labeled with the percentage of raters who believed that the comment corresponds to a particular identity. For example, a comment might be labeled with the following: { male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8, homosexual_gay_or_lesbian: 1.0 }.
The processing step groups identities by category (gender, sexual_orientation, and so on) and removes any identity with a score below 0.5. So the example above would be converted to the following:
{ gender: [female], sexual_orientation: [heterosexual, homosexual_gay_or_lesbian] }
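The helper util.convert_comments_data performs this conversion for you. Purely to illustrate the idea, the grouping step is roughly equivalent to the following sketch; the category map and function here are illustrative, not the library's actual implementation.
# Illustrative only: a minimal version of the grouping/thresholding step.
# The real conversion is done by util.convert_comments_data; this partial
# category map is a hypothetical example.
IDENTITY_CATEGORIES = {
    'gender': ['male', 'female', 'transgender'],
    'sexual_orientation': ['heterosexual', 'homosexual_gay_or_lesbian'],
}

def group_identities(raw_scores, threshold=0.5):
  """Groups per-identity rater scores into per-category lists of identities."""
  grouped = {}
  for category, identities in IDENTITY_CATEGORIES.items():
    kept = [i for i in identities if raw_scores.get(i, 0.0) >= threshold]
    if kept:
      grouped[category] = kept
  return grouped

group_identities({'male': 0.3, 'female': 1.0, 'transgender': 0.0,
                  'heterosexual': 0.8, 'homosexual_gay_or_lesbian': 1.0})
# {'gender': ['female'],
#  'sexual_orientation': ['heterosexual', 'homosexual_gay_or_lesbian']}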
Download the dataset.
download_original_data = False

if download_original_data:
  train_tf_file = tf.keras.utils.get_file(
      'train_tf.tfrecord',
      'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')
  validate_tf_file = tf.keras.utils.get_file(
      'validate_tf.tfrecord',
      'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')

  # The identity terms list will be grouped together by their categories
  # (see 'IDENTITY_COLUMNS') on threshold 0.5. Only the identity term column,
  # text column and label column will be kept after processing.
  train_tf_file = util.convert_comments_data(train_tf_file)
  validate_tf_file = util.convert_comments_data(validate_tf_file)
else:
  train_tf_file = tf.keras.utils.get_file(
      'train_tf_processed.tfrecord',
      'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')
  validate_tf_file = tf.keras.utils.get_file(
      'validate_tf_processed.tfrecord',
      'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')
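You can sanity-check the downloaded records by parsing a single example with the FEATURE_MAP defined above. This is an optional inspection step and assumes eager execution (the TF2 default); the exact values printed depend on the first record in the file.
# Parse and print one record from the processed training file to confirm that
# the feature spec matches the data. The dense features come back as scalar
# tensors and the identity columns come back as sparse string tensors.
raw_dataset = tf.data.TFRecordDataset([train_tf_file])
for serialized_example in raw_dataset.take(1):
  parsed = tf.io.parse_single_example(serialized_example, FEATURE_MAP)
  print('toxicity:', parsed[LABEL].numpy())
  print('comment_text:', parsed[TEXT_FEATURE].numpy()[:80])
  print('gender:', tf.sparse.to_dense(parsed['gender'], default_value=b'').numpy())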
Create a TensorFlow Model Analysis Pipeline
Fairness Indicators is built on top of TensorFlow Model Analysis (TFMA). TFMA wraps TensorFlow models with additional functionality to evaluate and visualize their results. The evaluation itself runs inside an Apache Beam pipeline.
The steps you follow to create a TFMA pipeline are:
- Build a TensorFlow model
- Build a TFMA model on top of the TensorFlow model
- Run the model analysis in an orchestrator. The example model in this notebook uses Apache Beam as the orchestrator.
def embedding_fairness_result(embedding, identity_term='gender'):
  """Trains a classifier using the given TF Hub embedding and evaluates it with TFMA."""
  model_dir = os.path.join(BASE_DIR, 'train',
                           datetime.now().strftime('%Y%m%d-%H%M%S'))

  print("Training classifier for " + embedding)
  classifier = example_model.train_model(model_dir,
                                         train_tf_file,
                                         LABEL,
                                         TEXT_FEATURE,
                                         FEATURE_MAP,
                                         embedding)

  # Create a unique path to store the results for this embedding.
  embedding_name = embedding.split('/')[-2]
  eval_result_path = os.path.join(BASE_DIR, 'eval_result', embedding_name)

  example_model.evaluate_model(classifier,
                               validate_tf_file,
                               eval_result_path,
                               identity_term,
                               LABEL,
                               FEATURE_MAP)
  return tfma.load_eval_result(output_path=eval_result_path)
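example_model.evaluate_model hides that Beam pipeline behind a helper. Written out by hand, the evaluation stage might look roughly like the sketch below. This is a sketch only: it assumes the classifier was exported as an EvalSavedModel at a hypothetical eval_saved_model_path, uses Beam's local runner, and the exact TFMA arguments (for example slice_spec versus eval_config) vary between TFMA versions.
# Rough, hand-written equivalent of the evaluation step. Not the helper's
# actual implementation; paths and arguments here are assumptions.
def run_tfma_manually(eval_saved_model_path, output_path):
  eval_shared_model = tfma.default_eval_shared_model(
      eval_saved_model_path=eval_saved_model_path,
      add_metrics_callbacks=[
          # Compute Fairness Indicators metrics at several decision thresholds.
          tfma.post_export_metrics.fairness_indicators(
              thresholds=[0.1, 0.3, 0.5, 0.7, 0.9])
      ])
  slice_spec = [
      tfma.slicer.SingleSliceSpec(),                    # overall metrics
      tfma.slicer.SingleSliceSpec(columns=['gender']),  # metrics per gender slice
  ]
  with beam.Pipeline() as pipeline:
    _ = (pipeline
         | 'ReadData' >> beam.io.ReadFromTFRecord(validate_tf_file)
         | 'ExtractEvaluateAndWriteResults' >>
             tfma.ExtractEvaluateAndWriteResults(
                 eval_shared_model=eval_shared_model,
                 slice_spec=slice_spec,
                 output_path=output_path))
  return tfma.load_eval_result(output_path=output_path)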
Run TFMA & Fairness Indicators
Fairness Indicators Metrics
Some of the metrics available with Fairness Indicators are:
- Negative Rate, False Negative Rate (FNR), and True Negative Rate (TNR)
- Positive Rate, False Positive Rate (FPR), and True Positive Rate (TPR)
- Accuracy
- Precision and Recall
- Precision-Recall AUC
- ROC AUC
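As a reminder of what the rate metrics measure for a thresholded binary classifier, here is a tiny worked example with toy data; it is purely illustrative and not part of the Fairness Indicators API.
import numpy as np

# Toy labels and scores; predictions are thresholded at 0.5.
labels = np.array([1, 0, 1, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.4, 0.2, 0.7, 0.3])
preds = (scores >= 0.5).astype(int)

tp = int(np.sum((preds == 1) & (labels == 1)))
fp = int(np.sum((preds == 1) & (labels == 0)))
tn = int(np.sum((preds == 0) & (labels == 0)))
fn = int(np.sum((preds == 0) & (labels == 1)))

print('FPR:', fp / (fp + tn))            # false positive rate
print('FNR:', fn / (fn + tp))            # false negative rate
print('TPR (recall):', tp / (tp + fn))   # true positive rate
print('Precision:', tp / (tp + fp))

Fairness Indicators computes these metrics at each configured threshold for every slice of the data, which is what lets you compare, say, the false positive rate for one identity group against the overall rate.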
Text Embeddings
TF Hub provides several text embeddings. These embeddings serve as the feature column for the different models (a sketch of how that works follows this list). This tutorial uses the following embeddings:
- random-nnlm-en-dim128: random text embeddings; this serves as a convenient baseline.
- nnlm-en-dim128: a text embedding based on A Neural Probabilistic Language Model.
- universal-sentence-encoder: a text embedding based on Universal Sentence Encoder.
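Each module URL above is passed to embedding_fairness_result, which hands it to example_model.train_model. Inside that helper, the embedding typically becomes the model's text feature column via TF Hub, roughly as in this sketch (the helper's actual implementation may differ; the variable names here are illustrative):
# Sketch: turning a TF Hub module URL into a text feature column for an
# estimator-style classifier. Not necessarily how example_model does it.
embedding_url = 'https://tfhub.dev/google/nnlm-en-dim128/1'
embedded_text_feature_column = hub.text_embedding_column(
    key=TEXT_FEATURE,          # the 'comment_text' input feature
    module_spec=embedding_url)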
Fairness Indicator Results
For each of the embeddings above, compute fairness indicators with the embedding_fairness_result pipeline, and then render the results in the Fairness Indicator UI widget with widget_view.render_fairness_indicator.
Random NNLM
eval_result_random_nnlm = embedding_fairness_result('https://tfhub.dev/google/random-nnlm-en-dim128/1')
widget_view.render_fairness_indicator(eval_result=eval_result_random_nnlm)
NNLM
eval_result_nnlm = embedding_fairness_result('https://tfhub.dev/google/nnlm-en-dim128/1')
widget_view.render_fairness_indicator(eval_result=eval_result_nnlm)
Universal Sentence Encoder
eval_result_use = embedding_fairness_result('https://tfhub.dev/google/universal-sentence-encoder/2')
widget_view.render_fairness_indicator(eval_result=eval_result_use)
Comparing Embeddings
You can also use Fairness Indicators to compare embeddings directly. For example, compare the models generated from the NNLM and USE embeddings.
widget_view.render_fairness_indicator(multi_eval_results={'nnlm': eval_result_nnlm, 'use': eval_result_use})
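To compare slices numerically rather than visually, you can also read metrics directly from each loaded EvalResult. The sketch below assumes the metric key naming commonly used by Fairness Indicators ('fairness_indicators_metrics/false_positive_rate@0.5') and the nested slicing_metrics layout; both can vary across TFMA versions, so inspect eval_result_nnlm.slicing_metrics first if the lookup comes back empty.
# Print the false positive rate at the 0.5 threshold for every slice of the
# NNLM and USE models. The key name and nesting below are assumptions about
# the Fairness Indicators output format.
FPR_KEY = 'fairness_indicators_metrics/false_positive_rate@0.5'

def fpr_by_slice(eval_result):
  rates = {}
  for slice_key, metrics in eval_result.slicing_metrics:
    # metrics is nested as {output_name: {sub_key: {metric_name: value_dict}}};
    # single-output models use '' for both outer keys.
    value = metrics.get('', {}).get('', {}).get(FPR_KEY, {})
    if value:
      rates[slice_key] = value.get('doubleValue')
  return rates

for name, result in [('nnlm', eval_result_nnlm), ('use', eval_result_use)]:
  print(name, fpr_by_slice(result))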