In this example, we consider the task of predicting whether a discussion comment posted on a Wiki talk page contains toxic content (i.e., content that is "rude, disrespectful or unreasonable"). We use a public dataset released by the Conversation AI project, which contains over 100k comments from English Wikipedia that were annotated by crowd workers (see the paper for the labeling methodology).
One of the challenges with this dataset is that only a very small fraction of the comments cover sensitive topics such as sexuality or religion. As such, training a neural network model on this dataset can lead to disparate performance on the smaller sensitive topics. This can mean that innocuous statements about those topics get incorrectly flagged as "toxic" at higher rates, causing speech to be unfairly censored.
By imposing constraints during training, we can train a model that performs more equitably across the different topic groups.
We will use the TFCO library to optimize for our fairness goal during training.
Installation
Let's first install and import the relevant libraries. Note that you may have to restart your colab once after running the first cell because of outdated packages in the runtime. Once you do so, there should be no further issues with imports.
pip installs
pip install -q -U pip==20.2
pip install git+https://github.com/google-research/tensorflow_constrained_optimization
pip install git+https://github.com/tensorflow/fairness-indicators
Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning, as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.
Import modules
import io
import os
import shutil
import sys
import tempfile
import time
import urllib
import zipfile
import apache_beam as beam
from IPython.display import display
from IPython.display import HTML
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.preprocessing import text
import tensorflow_constrained_optimization as tfco
import tensorflow_model_analysis as tfma
import fairness_indicators as fi
from tensorflow_model_analysis.addons.fairness.view import widget_view
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict
While TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default. To ensure nothing breaks, eager execution is enabled in the cell below.
Enable eager execution and print versions
if tf.__version__ < "2.0.0":
  tf.enable_eager_execution()
  print("Eager execution enabled.")
else:
  print("Eager execution enabled by default.")

print("TensorFlow " + tf.__version__)
print("TFMA " + tfma.__version__)
print("FI " + fi.version.__version__)
Eager execution enabled by default.
TensorFlow 2.3.2
TFMA 0.26.0
FI 0.27.0.dev
Hyper-parameters
First, we set some of the hyper-parameters needed for the data preprocessing and model training.
hparams = {
"batch_size": 128,
"cnn_filter_sizes": [128, 128, 128],
"cnn_kernel_sizes": [5, 5, 5],
"cnn_pooling_sizes": [5, 5, 40],
"constraint_learning_rate": 0.01,
"embedding_dim": 100,
"embedding_trainable": False,
"learning_rate": 0.005,
"max_num_words": 10000,
"max_sequence_length": 250
}
Load and pre-process dataset
Next, we download the dataset and preprocess it. The train, test and validation sets are provided as separate CSV files.
toxicity_data_url = ("https://raw.githubusercontent.com/conversationai/"
"unintended-ml-bias-analysis/master/data/")
data_train = pd.read_csv(toxicity_data_url + "wiki_train.csv")
data_test = pd.read_csv(toxicity_data_url + "wiki_test.csv")
data_vali = pd.read_csv(toxicity_data_url + "wiki_dev.csv")
data_train.head()
The comment column contains the discussion comments, and the is_toxic column indicates whether or not a comment is annotated as toxic.
In the following, we will:
- Separate out the labels
- Tokenize the text comments
- Identify comments that contain sensitive topic terms
First, we separate the labels from the train, test and validation sets. The labels are all binary (0 or 1).
labels_train = data_train["is_toxic"].values.reshape(-1, 1) * 1.0
labels_test = data_test["is_toxic"].values.reshape(-1, 1) * 1.0
labels_vali = data_vali["is_toxic"].values.reshape(-1, 1) * 1.0
Next, we tokenize the textual comments using the Tokenizer provided by Keras. We use the training set comments alone to build a vocabulary of tokens, and use it to convert all the comments into (padded) sequences of tokens of the same length.
tokenizer = text.Tokenizer(num_words=hparams["max_num_words"])
tokenizer.fit_on_texts(data_train["comment"])
def prep_text(texts, tokenizer, max_sequence_length):
  # Turns text into padded sequences of token indices.
  text_sequences = tokenizer.texts_to_sequences(texts)
  return sequence.pad_sequences(text_sequences, maxlen=max_sequence_length)
text_train = prep_text(data_train["comment"], tokenizer, hparams["max_sequence_length"])
text_test = prep_text(data_test["comment"], tokenizer, hparams["max_sequence_length"])
text_vali = prep_text(data_vali["comment"], tokenizer, hparams["max_sequence_length"])
Finally, we identify comments related to certain sensitive topic groups. We consider a subset of the identity terms provided with the dataset, and group them into four broad topic groups: sexuality, gender identity, religion, and race.
terms = {
'sexuality': ['gay', 'lesbian', 'bisexual', 'homosexual', 'straight', 'heterosexual'],
'gender identity': ['trans', 'transgender', 'cis', 'nonbinary'],
'religion': ['christian', 'muslim', 'jewish', 'buddhist', 'catholic', 'protestant', 'sikh', 'taoist'],
'race': ['african', 'african american', 'black', 'white', 'european', 'hispanic', 'latino', 'latina',
'latinx', 'mexican', 'canadian', 'american', 'asian', 'indian', 'middle eastern', 'chinese',
'japanese']}
group_names = list(terms.keys())
num_groups = len(group_names)
We then create separate group membership matrices for the train, test and validation sets, where the rows correspond to comments, the columns correspond to the four sensitive groups, and each entry is a boolean indicating whether the comment contains a term from the topic group.
def get_groups(text):
  # Returns a boolean NumPy array of shape (n, k), where n is the number of comments,
  # and k is the number of groups. Each entry (i, j) indicates if the i-th comment
  # contains a term from the j-th group.
  groups = np.zeros((text.shape[0], num_groups))
  for ii in range(num_groups):
    groups[:, ii] = text.str.contains('|'.join(terms[group_names[ii]]), case=False)
  return groups
groups_train = get_groups(data_train["comment"])
groups_test = get_groups(data_test["comment"])
groups_vali = get_groups(data_vali["comment"])
As shown below, all four topic groups constitute only a small fraction of the overall dataset, and have varying proportions of toxic comments.
print("Overall label proportion = %.1f%%" % (labels_train.mean() * 100))

group_stats = []
for ii in range(num_groups):
  group_proportion = groups_train[:, ii].mean()
  group_pos_proportion = labels_train[groups_train[:, ii] == 1].mean()
  group_stats.append([group_names[ii],
                      "%.2f%%" % (group_proportion * 100),
                      "%.1f%%" % (group_pos_proportion * 100)])
group_stats = pd.DataFrame(group_stats,
                           columns=["Topic group", "Group proportion", "Label proportion"])
group_stats
Overall label proportion = 9.7%
We see that only 1.3% of the dataset contains comments related to sexuality. Among them, 37% of the comments have been annotated as being toxic. Note that this is significantly larger than the overall proportion of comments annotated as toxic. This could be because the few comments that used those identity terms did so in pejorative contexts. As mentioned above, this could cause our model to disproportionately misclassify comments as toxic when they include those terms. Since this is our concern, we will make sure to look at the false positive rate when we evaluate the model's performance.
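As a side note, the metric we will keep an eye on can be sketched directly in NumPy. This is a minimal illustration (not part of the tutorial's pipeline), using small hypothetical label, prediction, and group-membership arrays:

```python
import numpy as np

# Hypothetical stand-ins: true labels, binarized predictions,
# and membership in one sensitive topic group.
labels = np.array([0, 0, 1, 0, 1, 0])          # 1 = annotated toxic
predictions = np.array([1, 0, 1, 1, 1, 0])     # 1 = flagged toxic
in_group = np.array([1, 0, 1, 1, 0, 0], bool)  # comment contains a group term

def false_positive_rate(labels, predictions, mask):
  # FPR = fraction of truly non-toxic comments (in the slice) flagged toxic.
  negatives = (labels == 0) & mask
  return (predictions[negatives] == 1).mean()

overall_fpr = false_positive_rate(labels, predictions, np.ones_like(in_group))
group_fpr = false_positive_rate(labels, predictions, in_group)
```

With these toy arrays, the group slice has a higher false positive rate than the overall population, which is exactly the kind of gap the fairness indicators below will surface.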
Build a CNN toxicity prediction model
Having prepared the dataset, we now build a Keras model for predicting toxicity. The model we use is a convolutional neural network (CNN) with the same architecture used by the Conversation AI project for their debiasing analysis. We adapt code provided by them to construct the model layers.
The model uses an embedding layer to convert the text tokens into fixed-length vectors. This layer converts the input text sequence into a sequence of vectors, passes them through several layers of convolution and pooling operations, and finally through a fully-connected layer.
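To see how the hyper-parameters above interact, here is a quick back-of-the-envelope walk-through of how the pooling layers shrink the 250-token input: Conv1D with padding='same' preserves the sequence length, while MaxPooling1D with padding='same' produces ceil(length / pool_size) steps.

```python
import math

seq_len = 250  # hparams["max_sequence_length"]
lengths = []
for pool_size in [5, 5, 40]:  # hparams["cnn_pooling_sizes"]
  # Each Conv1D (padding='same') keeps seq_len; each pooling divides it.
  seq_len = math.ceil(seq_len / pool_size)
  lengths.append(seq_len)

print(lengths)  # [50, 10, 1]
```

So after the three convolution/pooling stages, the Flatten layer sees a single time step of 128 filters, i.e. 128 features, before the dense layers.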
We will make use of pre-trained GloVe word vector embeddings, which we download below. This may take a few minutes to complete.
zip_file_url = "http://nlp.stanford.edu/data/glove.6B.zip"
zip_file = urllib.request.urlopen(zip_file_url)
archive = zipfile.ZipFile(io.BytesIO(zip_file.read()))
We use the downloaded GloVe embeddings to create an embedding matrix, where the rows contain the word embeddings for the tokens in the Tokenizer's vocabulary.
embeddings_index = {}
glove_file = "glove.6B.100d.txt"
with archive.open(glove_file) as f:
  for line in f:
    values = line.split()
    word = values[0].decode("utf-8")
    coefs = np.asarray(values[1:], dtype="float32")
    embeddings_index[word] = coefs

embedding_matrix = np.zeros((len(tokenizer.word_index) + 1, hparams["embedding_dim"]))
num_words_in_embedding = 0
for word, i in tokenizer.word_index.items():
  embedding_vector = embeddings_index.get(word)
  if embedding_vector is not None:
    num_words_in_embedding += 1
    embedding_matrix[i] = embedding_vector
We are now ready to specify the Keras layers. We write a function to create a new model, which we will invoke each time we wish to train a new model.
def create_model():
  model = keras.Sequential()

  # Embedding layer.
  embedding_layer = layers.Embedding(
      embedding_matrix.shape[0],
      embedding_matrix.shape[1],
      weights=[embedding_matrix],
      input_length=hparams["max_sequence_length"],
      trainable=hparams['embedding_trainable'])
  model.add(embedding_layer)

  # Convolution layers.
  for filter_size, kernel_size, pool_size in zip(
      hparams['cnn_filter_sizes'], hparams['cnn_kernel_sizes'],
      hparams['cnn_pooling_sizes']):
    conv_layer = layers.Conv1D(
        filter_size, kernel_size, activation='relu', padding='same')
    model.add(conv_layer)
    pooled_layer = layers.MaxPooling1D(pool_size, padding='same')
    model.add(pooled_layer)

  # Add a flatten layer, a fully-connected layer and an output layer.
  model.add(layers.Flatten())
  model.add(layers.Dense(128, activation='relu'))
  model.add(layers.Dense(1))

  return model
We also define a method to set random seeds. This is done to ensure reproducible results.
def set_seeds():
  np.random.seed(121212)
  tf.compat.v1.set_random_seed(212121)
Fairness indicators
We also write functions to plot fairness indicators.
def create_examples(labels, predictions, groups, group_names):
  # Returns tf.examples with given labels, predictions, and group information.
  examples = []
  sigmoid = lambda x: 1 / (1 + np.exp(-x))
  for ii in range(labels.shape[0]):
    example = tf.train.Example()
    example.features.feature['toxicity'].float_list.value.append(
        labels[ii])
    example.features.feature['prediction'].float_list.value.append(
        sigmoid(predictions[ii]))  # predictions need to be in [0, 1].
    for jj in range(groups.shape[1]):
      example.features.feature[group_names[jj]].bytes_list.value.append(
          b'Yes' if groups[ii, jj] else b'No')
    examples.append(example)
  return examples
def evaluate_results(labels, predictions, groups, group_names):
  # Evaluates fairness indicators for given labels, predictions and group
  # membership info.
  examples = create_examples(labels, predictions, groups, group_names)

  # Create feature map for labels, predictions and each group.
  feature_map = {
      'prediction': tf.io.FixedLenFeature([], tf.float32),
      'toxicity': tf.io.FixedLenFeature([], tf.float32),
  }
  for group in group_names:
    feature_map[group] = tf.io.FixedLenFeature([], tf.string)

  # Serialize the examples.
  serialized_examples = [e.SerializeToString() for e in examples]

  BASE_DIR = tempfile.gettempdir()
  OUTPUT_DIR = os.path.join(BASE_DIR, 'output')

  with beam.Pipeline() as pipeline:
    model_agnostic_config = agnostic_predict.ModelAgnosticConfig(
        label_keys=['toxicity'],
        prediction_keys=['prediction'],
        feature_spec=feature_map)

    slices = [tfma.slicer.SingleSliceSpec()]
    for group in group_names:
      slices.append(
          tfma.slicer.SingleSliceSpec(columns=[group]))

    extractors = [
        model_agnostic_extractor.ModelAgnosticExtractor(
            model_agnostic_config=model_agnostic_config),
        tfma.extractors.slice_key_extractor.SliceKeyExtractor(slices)
    ]

    metrics_callbacks = [
        tfma.post_export_metrics.fairness_indicators(
            thresholds=[0.5],
            target_prediction_keys=['prediction'],
            labels_key='toxicity'),
        tfma.post_export_metrics.example_count()]

    # Create a model agnostic aggregator.
    eval_shared_model = tfma.types.EvalSharedModel(
        add_metrics_callbacks=metrics_callbacks,
        construct_fn=model_agnostic_evaluate_graph.make_construct_fn(
            add_metrics_callbacks=metrics_callbacks,
            config=model_agnostic_config))

    # Run Model Agnostic Eval.
    _ = (
        pipeline
        | beam.Create(serialized_examples)
        | 'ExtractEvaluateAndWriteResults' >>
        tfma.ExtractEvaluateAndWriteResults(
            eval_shared_model=eval_shared_model,
            output_path=OUTPUT_DIR,
            extractors=extractors,
            compute_confidence_intervals=True
        )
    )

  fairness_ind_result = tfma.load_eval_result(output_path=OUTPUT_DIR)

  # Also evaluate accuracy of the model.
  accuracy = np.mean(labels == (predictions > 0.0))

  return fairness_ind_result, accuracy
def plot_fairness_indicators(eval_result, title):
  fairness_ind_result, accuracy = eval_result
  display(HTML("<center><h2>" + title +
               " (Accuracy = %.2f%%)" % (accuracy * 100) + "</h2></center>"))
  widget_view.render_fairness_indicator(fairness_ind_result)

def plot_multi_fairness_indicators(multi_eval_results):
  multi_results = {}
  multi_accuracy = {}
  for title, (fairness_ind_result, accuracy) in multi_eval_results.items():
    multi_results[title] = fairness_ind_result
    multi_accuracy[title] = accuracy

  title_str = "<center><h2>"
  for title in multi_eval_results.keys():
    title_str += title + " (Accuracy = %.2f%%)" % (multi_accuracy[title] * 100) + "; "
  title_str = title_str[:-2]
  title_str += "</h2></center>"

  display(HTML(title_str))
  widget_view.render_fairness_indicator(multi_eval_results=multi_results)
Train unconstrained model
For the first model we train, we optimize a simple cross-entropy loss without any constraints.
# Set random seed for reproducible results.
set_seeds()
# Optimizer and loss.
optimizer = tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"])
loss = lambda y_true, y_pred: tf.keras.losses.binary_crossentropy(
y_true, y_pred, from_logits=True)
# Create, compile and fit model.
model_unconstrained = create_model()
model_unconstrained.compile(optimizer=optimizer, loss=loss)
model_unconstrained.fit(
x=text_train, y=labels_train, batch_size=hparams["batch_size"], epochs=2)
Epoch 1/2
748/748 [==============================] - 51s 69ms/step - loss: 0.1590
Epoch 2/2
748/748 [==============================] - 48s 65ms/step - loss: 0.1217
<tensorflow.python.keras.callbacks.History at 0x7f55603a1d30>
Having trained the unconstrained model, we plot various evaluation metrics for the model on the test set.
scores_unconstrained_test = model_unconstrained.predict(text_test)
eval_result_unconstrained = evaluate_results(
labels_test, scores_unconstrained_test, groups_test, group_names)
As explained above, we are concentrating on the false positive rate. In their current version (0.1.2), Fairness Indicators select false negative rate by default. After running the line below, go ahead and deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.
plot_fairness_indicators(eval_result_unconstrained, "Unconstrained")
While the overall false positive rate is less than 2%, the false positive rate on the sexuality-related comments is significantly higher. This is because the sexuality group is very small in size, and has a disproportionately higher fraction of comments annotated as toxic. Hence, training a model without constraints results in the model believing that sexuality-related terms are a strong indicator of toxicity.
Train with constraints on false positive rates
To avoid large differences in false positive rates across groups, we next train a model by constraining the false positive rates for each group to be within a desired limit. In this case, we will optimize the error rate of the model subject to the per-group false positive rates being less than or equal to 2%.
Training on minibatches with per-group constraints can be challenging for this dataset, as the groups we wish to constrain are all small in size, and the individual minibatches are likely to contain very few examples from each group. Hence the gradients we compute during training will be noisy, and the model will converge very slowly.
To mitigate this problem, we recommend using two streams of minibatches, with the first stream formed as before from the entire training set, and the second stream formed solely from the sensitive group examples. We will compute the objective using minibatches from the first stream, and the per-group constraints using minibatches from the second stream. Because the batches from the second stream are likely to contain a larger number of examples from each group, we expect our updates to be less noisy.
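A rough calculation, using the group proportion reported in the statistics above, shows why a single stream is problematic:

```python
batch_size = 128           # hparams["batch_size"]
group_proportion = 0.013   # "sexuality" group share, from the group statistics table

# Expected number of sexuality-related comments in one single-stream minibatch.
expected_per_batch = batch_size * group_proportion
print(expected_per_batch)
```

With fewer than two relevant examples per batch on average, a per-batch estimate of the group's false positive rate is extremely noisy, which motivates the dedicated second stream of sensitive-group examples.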
To hold the minibatches from the two streams, we create separate features, labels, and groups tensors.
# Set random seed.
set_seeds()
# Features tensors.
batch_shape = (hparams["batch_size"], hparams['max_sequence_length'])
features_tensor = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x')
features_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x_sen')
# Labels tensors.
batch_shape = (hparams["batch_size"], 1)
labels_tensor = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels')
labels_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels_sen')
# Groups tensors.
batch_shape = (hparams["batch_size"], num_groups)
groups_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='groups_sen')
We instantiate a new model, and compute predictions for minibatches from the two streams.
# Create model, and separate prediction functions for the two streams.
# For the predictions, we use a nullary function returning a Tensor to support eager mode.
model_constrained = create_model()
def predictions():
  return model_constrained(features_tensor)

def predictions_sen():
  return model_constrained(features_tensor_sen)
We then set up a constrained optimization problem with the error rate as the objective and with constraints on the per-group false positive rates.
epsilon = 0.02 # Desired false-positive rate threshold.
# Set up separate contexts for the two minibatch streams.
context = tfco.rate_context(predictions, lambda:labels_tensor)
context_sen = tfco.rate_context(predictions_sen, lambda:labels_tensor_sen)
# Compute the objective using the first stream.
objective = tfco.error_rate(context)
# Compute the constraint using the second stream.
# Subset the examples belonging to the "sexuality" group from the second stream
# and add a constraint on the group's false positive rate.
context_sen_subset = context_sen.subset(lambda: groups_tensor_sen[:, 0] > 0)
constraint = [tfco.false_positive_rate(context_sen_subset) <= epsilon]
# Create a rate minimization problem.
problem = tfco.RateMinimizationProblem(objective, constraint)
# Set up a constrained optimizer.
optimizer = tfco.ProxyLagrangianOptimizerV2(
optimizer=tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"]),
num_constraints=problem.num_constraints)
# List of variables to optimize include the model weights,
# and the trainable variables from the rate minimization problem and
# the constrained optimizer.
var_list = (model_constrained.trainable_weights + problem.trainable_variables +
optimizer.trainable_variables())
We are now ready to train the model. We maintain separate counters for the two minibatch streams. Every time we perform a gradient update, we will have to copy the minibatch contents from the first stream to the tensors features_tensor and labels_tensor, and the minibatch contents from the second stream to the tensors features_tensor_sen, labels_tensor_sen and groups_tensor_sen.
# Indices of sensitive group members.
protected_group_indices = np.nonzero(groups_train.sum(axis=1))[0]

num_examples = text_train.shape[0]
num_examples_sen = protected_group_indices.shape[0]
batch_size = hparams["batch_size"]

# Number of steps needed for one epoch over the training sample.
num_steps = int(num_examples / batch_size)

start_time = time.time()

# Loop over minibatches.
for batch_index in range(num_steps):
  # Indices for current minibatch in the first stream.
  batch_indices = np.arange(
      batch_index * batch_size, (batch_index + 1) * batch_size)
  batch_indices = [ind % num_examples for ind in batch_indices]

  # Indices for current minibatch in the second stream.
  batch_indices_sen = np.arange(
      batch_index * batch_size, (batch_index + 1) * batch_size)
  batch_indices_sen = [protected_group_indices[ind % num_examples_sen]
                       for ind in batch_indices_sen]

  # Assign features, labels, groups from the minibatches to the respective tensors.
  features_tensor.assign(text_train[batch_indices, :])
  labels_tensor.assign(labels_train[batch_indices])

  features_tensor_sen.assign(text_train[batch_indices_sen, :])
  labels_tensor_sen.assign(labels_train[batch_indices_sen])
  groups_tensor_sen.assign(groups_train[batch_indices_sen, :])

  # Gradient update.
  optimizer.minimize(problem, var_list=var_list)

  # Record and print batch training stats every 10 steps.
  if (batch_index + 1) % 10 == 0 or batch_index in (0, num_steps - 1):
    hinge_loss = problem.objective()
    max_violation = max(problem.constraints())

    elapsed_time = time.time() - start_time
    sys.stdout.write(
        "\rStep %d / %d: Elapsed time = %ds, Loss = %.3f, Violation = %.3f" %
        (batch_index + 1, num_steps, elapsed_time, hinge_loss, max_violation))
Step 747 / 747: Elapsed time = 180s, Loss = 0.068, Violation = -0.020
Having trained the constrained model, we plot various evaluation metrics for the model on the test set.
scores_constrained_test = model_constrained.predict(text_test)
eval_result_constrained = evaluate_results(
labels_test, scores_constrained_test, groups_test, group_names)
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
INFO:tensorflow:ExampleCount post export metric: could not find any of the standard keys in predictions_dict (keys were: dict_keys(['prediction']))
INFO:tensorflow:Using the first key from predictions_dict: prediction
WARNING:apache_beam.io.filebasedsink:Deleting 1 existing files in target path matching:
As before, remember to select false_positive_rate in the Fairness Indicators widget.
plot_fairness_indicators(eval_result_constrained, "Constrained")
multi_results = {
    'constrained': eval_result_constrained,
    'unconstrained': eval_result_unconstrained,
}
plot_multi_fairness_indicators(multi_eval_results=multi_results)
As the Fairness Indicators show, compared to the unconstrained model, the constrained model achieves a substantially lower false positive rate on sexuality-related comments, at the cost of only a slight drop in overall accuracy.
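The per-group false positive rate that this comparison is based on can be sketched in plain NumPy. Note that `group_false_positive_rates` and the toy arrays below are hypothetical, introduced only for illustration; the notebook itself computes these metrics through TFMA and the Fairness Indicators widget rather than with this helper.

```python
import numpy as np

def group_false_positive_rates(labels, scores, groups, threshold=0.5):
    """Per-group false positive rates for a binary classifier.

    labels: binary ground-truth array (1 = toxic);
    scores: model scores in [0, 1];
    groups: boolean membership matrix of shape (n_examples, n_groups).
    """
    preds = scores >= threshold
    rates = []
    for g in range(groups.shape[1]):
        # Negative (non-toxic) examples belonging to group g.
        negatives = (labels == 0) & groups[:, g]
        # False positives: negatives the model flagged as toxic.
        fp = preds & negatives
        rates.append(fp.sum() / max(negatives.sum(), 1))
    return np.array(rates)

# Toy data: 2 groups; group 0 has one false positive, group 1 has none.
labels = np.array([0, 0, 0, 0, 1, 1])
scores = np.array([0.2, 0.7, 0.1, 0.3, 0.9, 0.6])
groups = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [1, 0], [0, 1]], dtype=bool)
print(group_false_positive_rates(labels, scores, groups))  # [0.5 0. ]
```

A large gap between the entries of this vector is exactly the disparity that the rate constraints imposed during training are designed to shrink.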