Wiki Talk Comments Toxicity Prediction


In this example, we consider the task of predicting whether a discussion comment posted on a Wiki talk page contains toxic content (i.e. contains content that is "rude, disrespectful or unreasonable"). We use a public dataset released by the Conversation AI project, which contains over 100k comments from the English Wikipedia that are annotated by crowd workers (see the paper for the labeling methodology).

One of the challenges with this dataset is that only a very small fraction of the comments cover sensitive topics such as sexuality or religion. As such, training a neural network model on this dataset leads to disparate performance on the smaller sensitive topics. This can mean that innocuous statements about those topics might get incorrectly flagged as "toxic" at higher rates, causing speech to be unfairly censored.

By imposing constraints during training, we can train a fairer model that performs more equitably across the different topic groups.

We will use the TFCO library to optimize for our fairness goal during training.

Installation

Let's first install and import the relevant libraries. Note that you may need to restart your colab once after running the first cell, because of outdated packages in the runtime. After doing so, there should be no further issues with imports.

pip installs
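
The contents of the install cell are not shown above. A minimal sketch of what such a cell might contain, with package names inferred from the libraries used later in this notebook (the original cell's exact packages and version pins may differ):

# Hypothetical install cell: package names inferred from the imports
# used below; the original notebook's pins may differ.
!pip install -q tensorflow_model_analysis
!pip install -q fairness-indicators
!pip install -q git+https://github.com/google-research/tensorflow_constrained_optimization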

Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning, as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.

Import Modules
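
The import cell itself is not shown above. A minimal sketch of the imports it likely contains, inferred from the modules referenced throughout this notebook (the original cell may differ):

# Hypothetical import cell: modules inferred from the code used below.
import io
import os
import sys
import tempfile
import time
import urllib.request
import zipfile

import apache_beam as beam
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing import sequence, text

import tensorflow_constrained_optimization as tfco
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.view import widget_view
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict

from IPython.display import display, HTML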

While TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default. To ensure that nothing breaks, eager execution will be enabled in the cell below.

Enable Eager Execution and Print Versions
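
The cell itself is not shown; a minimal sketch of what it likely does, assuming the version strings printed below (the original cell may differ):

# Hypothetical cell: enables eager execution on TF 1.x (it is on by
# default in TF 2.x) and prints the library versions shown below.
if tf.__version__.startswith("1."):
    tf.compat.v1.enable_eager_execution()
    print("Eager execution enabled.")
else:
    print("Eager execution enabled by default.")
print("TensorFlow " + tf.__version__)
print("TFMA " + tfma.VERSION_STRING)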

Eager execution enabled by default.
TensorFlow 2.3.2
TFMA 0.26.0
FI 0.27.0.dev

Hyper-parameters

First, we set some hyper-parameters needed for preprocessing the data and training the model.

hparams = {
    "batch_size": 128,
    "cnn_filter_sizes": [128, 128, 128],
    "cnn_kernel_sizes": [5, 5, 5],
    "cnn_pooling_sizes": [5, 5, 40],
    "constraint_learning_rate": 0.01,
    "embedding_dim": 100,
    "embedding_trainable": False,
    "learning_rate": 0.005,
    "max_num_words": 10000,
    "max_sequence_length": 250
}

Load and pre-process the dataset

Next, we download the dataset and preprocess it. The train, test and validation sets are provided as separate CSV files.

toxicity_data_url = ("https://raw.githubusercontent.com/conversationai/"
                     "unintended-ml-bias-analysis/master/data/")

data_train = pd.read_csv(toxicity_data_url + "wiki_train.csv")
data_test = pd.read_csv(toxicity_data_url + "wiki_test.csv")
data_vali = pd.read_csv(toxicity_data_url + "wiki_dev.csv")

data_train.head()

The comment column contains the discussion comments, and the is_toxic column indicates whether or not a comment is annotated as toxic.

In the following, we:

  1. Separate out the labels
  2. Tokenize the text comments
  3. Identify comments that contain sensitive topic terms

First, we separate out the labels from the train, test and validation sets. The labels are all binary (0 or 1).

labels_train = data_train["is_toxic"].values.reshape(-1, 1) * 1.0
labels_test = data_test["is_toxic"].values.reshape(-1, 1) * 1.0
labels_vali = data_vali["is_toxic"].values.reshape(-1, 1) * 1.0

Next, we tokenize the text comments using the Tokenizer provided by Keras. We use the training set comments alone to build a vocabulary of tokens, and use it to convert all of the comments into a (padded) sequence of tokens of the same length.

tokenizer = text.Tokenizer(num_words=hparams["max_num_words"])
tokenizer.fit_on_texts(data_train["comment"])

def prep_text(texts, tokenizer, max_sequence_length):
    # Turns text into padded sequences.
    text_sequences = tokenizer.texts_to_sequences(texts)
    return sequence.pad_sequences(text_sequences, maxlen=max_sequence_length)

text_train = prep_text(data_train["comment"], tokenizer, hparams["max_sequence_length"])
text_test = prep_text(data_test["comment"], tokenizer, hparams["max_sequence_length"])
text_vali = prep_text(data_vali["comment"], tokenizer, hparams["max_sequence_length"])

Finally, we identify comments related to certain sensitive topic groups. We consider a subset of the identity terms provided with the dataset and group them into four broad topic groups: sexuality, gender identity, religion, and race.

terms = {
    'sexuality': ['gay', 'lesbian', 'bisexual', 'homosexual', 'straight', 'heterosexual'], 
    'gender identity': ['trans', 'transgender', 'cis', 'nonbinary'],
    'religion': ['christian', 'muslim', 'jewish', 'buddhist', 'catholic', 'protestant', 'sikh', 'taoist'],
    'race': ['african', 'african american', 'black', 'white', 'european', 'hispanic', 'latino', 'latina', 
             'latinx', 'mexican', 'canadian', 'american', 'asian', 'indian', 'middle eastern', 'chinese', 
             'japanese']}

group_names = list(terms.keys())
num_groups = len(group_names)

We then create separate group membership matrices for the train, test and validation sets, where the rows correspond to comments, the columns correspond to the four sensitive groups, and each entry is a boolean indicating whether the comment contains a term from the topic group.

def get_groups(text):
    # Returns a boolean NumPy array of shape (n, k), where n is the number of comments, 
    # and k is the number of groups. Each entry (i, j) indicates if the i-th comment 
    # contains a term from the j-th group.
    groups = np.zeros((text.shape[0], num_groups))
    for ii in range(num_groups):
        groups[:, ii] = text.str.contains('|'.join(terms[group_names[ii]]), case=False)
    return groups

groups_train = get_groups(data_train["comment"])
groups_test = get_groups(data_test["comment"])
groups_vali = get_groups(data_vali["comment"])

As shown below, all four topic groups constitute only a small fraction of the overall dataset, and have varying proportions of toxic comments.

print("Overall label proportion = %.1f%%" % (labels_train.mean() * 100))

group_stats = []
for ii in range(num_groups):
    group_proportion = groups_train[:, ii].mean()
    group_pos_proportion = labels_train[groups_train[:, ii] == 1].mean()
    group_stats.append([group_names[ii],
                        "%.2f%%" % (group_proportion * 100), 
                        "%.1f%%" % (group_pos_proportion * 100)])
group_stats = pd.DataFrame(group_stats, 
                           columns=["Topic group", "Group proportion", "Label proportion"])
group_stats
Overall label proportion = 9.7%

We see that only 1.3% of the dataset contains comments related to sexuality. Among them, 37% of the comments have been annotated as toxic. Note that this is significantly larger than the overall proportion of comments annotated as toxic. This could be because the few comments that used those identity terms did so in pejorative contexts. As mentioned above, this could cause our model to disproportionately misclassify comments as toxic when they include those terms. Since this is the concern, we will make sure to look at the false positive rate when we evaluate the model's performance.
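
As a quick reminder of the metric we will be watching: the false positive rate is the fraction of truly non-toxic comments that the model flags as toxic. Below is a minimal NumPy sketch of how it could be computed for a topic group (a hypothetical helper for illustration only, not part of the original notebook):

# Hypothetical helper, for illustration only: computes the false
# positive rate FP / (FP + TN), optionally restricted to one group.
def false_positive_rate(labels, predicted, group_mask=None):
    labels = np.asarray(labels).flatten()        # true labels in {0, 1}
    predicted = np.asarray(predicted).flatten()  # predicted labels in {0, 1}
    if group_mask is not None:
        labels, predicted = labels[group_mask], predicted[group_mask]
    negatives = labels == 0                      # truly non-toxic comments
    return predicted[negatives].mean()           # fraction flagged as toxic

# E.g. FPR on comments in the "sexuality" topic group (column 0):
# false_positive_rate(labels_train, preds, groups_train[:, 0] == 1)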

Build a CNN toxicity prediction model

Having prepared the dataset, we now build a Keras model for predicting toxicity. The model we use is a convolutional neural network (CNN) with the same architecture used by the Conversation AI project for their debiasing analysis. We adapt code provided by them to construct the model layers.

The model uses an embedding layer to convert the text tokens into fixed-length vectors. This layer converts the input text sequence into a sequence of vectors, and passes them through several layers of convolution and pooling operations, followed by a final fully-connected layer.

We use pre-trained GloVe word vector embeddings, which we download below. This may take a few minutes to complete.

zip_file_url = "http://nlp.stanford.edu/data/glove.6B.zip"
zip_file = urllib.request.urlopen(zip_file_url)
archive = zipfile.ZipFile(io.BytesIO(zip_file.read()))

We use the downloaded GloVe embeddings to create an embedding matrix, where the rows contain the word embeddings for the tokens in the Tokenizer's vocabulary.

embeddings_index = {}
glove_file = "glove.6B.100d.txt"

with archive.open(glove_file) as f:
    for line in f:
        values = line.split()
        word = values[0].decode("utf-8") 
        coefs = np.asarray(values[1:], dtype="float32")
        embeddings_index[word] = coefs

embedding_matrix = np.zeros((len(tokenizer.word_index) + 1, hparams["embedding_dim"]))
num_words_in_embedding = 0
for word, i in tokenizer.word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        num_words_in_embedding += 1
        embedding_matrix[i] = embedding_vector

We are now ready to specify the Keras layers. We write a function to create a new model, which we will invoke whenever we wish to train a new model.

def create_model():
    model = keras.Sequential()

    # Embedding layer.
    embedding_layer = layers.Embedding(
        embedding_matrix.shape[0],
        embedding_matrix.shape[1],
        weights=[embedding_matrix],
        input_length=hparams["max_sequence_length"],
        trainable=hparams['embedding_trainable'])
    model.add(embedding_layer)

    # Convolution layers.
    for filter_size, kernel_size, pool_size in zip(
        hparams['cnn_filter_sizes'], hparams['cnn_kernel_sizes'],
        hparams['cnn_pooling_sizes']):

        conv_layer = layers.Conv1D(
            filter_size, kernel_size, activation='relu', padding='same')
        model.add(conv_layer)

        pooled_layer = layers.MaxPooling1D(pool_size, padding='same')
        model.add(pooled_layer)

    # Add a flatten layer, a fully-connected layer and an output layer.
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation='relu'))
    model.add(layers.Dense(1))

    return model

We also define a method to set random seeds. This is done to ensure reproducible results.

def set_seeds():
  np.random.seed(121212)
  tf.compat.v1.set_random_seed(212121)

Fairness indicators

We also write functions to plot fairness indicators.

def create_examples(labels, predictions, groups, group_names):
  # Returns tf.examples with given labels, predictions, and group information.  
  examples = []
  sigmoid = lambda x: 1/(1 + np.exp(-x)) 
  for ii in range(labels.shape[0]):
    example = tf.train.Example()
    example.features.feature['toxicity'].float_list.value.append(
        labels[ii])
    example.features.feature['prediction'].float_list.value.append(
        sigmoid(predictions[ii]))  # predictions need to be in [0, 1].
    for jj in range(groups.shape[1]):
      example.features.feature[group_names[jj]].bytes_list.value.append(
          b'Yes' if groups[ii, jj] else b'No')
    examples.append(example)
  return examples
def evaluate_results(labels, predictions, groups, group_names):
  # Evaluates fairness indicators for given labels, predictions and group
  # membership info.
  examples = create_examples(labels, predictions, groups, group_names)

  # Create feature map for labels, predictions and each group.
  feature_map = {
      'prediction': tf.io.FixedLenFeature([], tf.float32),
      'toxicity': tf.io.FixedLenFeature([], tf.float32),
  }
  for group in group_names:
    feature_map[group] = tf.io.FixedLenFeature([], tf.string)

  # Serialize the examples.
  serialized_examples = [e.SerializeToString() for e in examples]

  BASE_DIR = tempfile.gettempdir()
  OUTPUT_DIR = os.path.join(BASE_DIR, 'output')

  with beam.Pipeline() as pipeline:
    model_agnostic_config = agnostic_predict.ModelAgnosticConfig(
              label_keys=['toxicity'],
              prediction_keys=['prediction'],
              feature_spec=feature_map)

    slices = [tfma.slicer.SingleSliceSpec()]
    for group in group_names:
      slices.append(
          tfma.slicer.SingleSliceSpec(columns=[group]))

    extractors = [
            model_agnostic_extractor.ModelAgnosticExtractor(
                model_agnostic_config=model_agnostic_config),
            tfma.extractors.slice_key_extractor.SliceKeyExtractor(slices)
        ]

    metrics_callbacks = [
      tfma.post_export_metrics.fairness_indicators(
          thresholds=[0.5],
          target_prediction_keys=['prediction'],
          labels_key='toxicity'),
      tfma.post_export_metrics.example_count()]

    # Create a model agnostic aggregator.
    eval_shared_model = tfma.types.EvalSharedModel(
        add_metrics_callbacks=metrics_callbacks,
        construct_fn=model_agnostic_evaluate_graph.make_construct_fn(
            add_metrics_callbacks=metrics_callbacks,
            config=model_agnostic_config))

    # Run Model Agnostic Eval.
    _ = (
        pipeline
        | beam.Create(serialized_examples)
        | 'ExtractEvaluateAndWriteResults' >>
          tfma.ExtractEvaluateAndWriteResults(
              eval_shared_model=eval_shared_model,
              output_path=OUTPUT_DIR,
              extractors=extractors,
              compute_confidence_intervals=True
          )
    )

  fairness_ind_result = tfma.load_eval_result(output_path=OUTPUT_DIR)

  # Also evaluate accuracy of the model.
  accuracy = np.mean(labels == (predictions > 0.0))

  return fairness_ind_result, accuracy
def plot_fairness_indicators(eval_result, title):
  fairness_ind_result, accuracy = eval_result
  display(HTML("<center><h2>" + title + 
               " (Accuracy = %.2f%%)" % (accuracy * 100) + "</h2></center>"))
  widget_view.render_fairness_indicator(fairness_ind_result)
def plot_multi_fairness_indicators(multi_eval_results):

  multi_results = {}
  multi_accuracy = {}
  for title, (fairness_ind_result, accuracy) in multi_eval_results.items():
    multi_results[title] = fairness_ind_result
    multi_accuracy[title] = accuracy

  title_str = "<center><h2>"
  for title in multi_eval_results.keys():
      title_str+=title + " (Accuracy = %.2f%%)" % (multi_accuracy[title] * 100) + "; "
  title_str=title_str[:-2]
  title_str+="</h2></center>"
  display(HTML(title_str))
  widget_view.render_fairness_indicator(multi_eval_results=multi_results)

Train unconstrained model

For the first model we train, we optimize a simple cross-entropy loss without any constraints.

# Set random seed for reproducible results.
set_seeds()
# Optimizer and loss.
optimizer = tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"])
loss = lambda y_true, y_pred: tf.keras.losses.binary_crossentropy(
    y_true, y_pred, from_logits=True)

# Create, compile and fit model.
model_unconstrained = create_model()
model_unconstrained.compile(optimizer=optimizer, loss=loss)

model_unconstrained.fit(
    x=text_train, y=labels_train, batch_size=hparams["batch_size"], epochs=2)
Epoch 1/2
748/748 [==============================] - 51s 69ms/step - loss: 0.1590
Epoch 2/2
748/748 [==============================] - 48s 65ms/step - loss: 0.1217
<tensorflow.python.keras.callbacks.History at 0x7f55603a1d30>

Having trained the unconstrained model, we plot various evaluation metrics for the model on the test set.

scores_unconstrained_test = model_unconstrained.predict(text_test)
eval_result_unconstrained = evaluate_results(
    labels_test, scores_unconstrained_test, groups_test, group_names)
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
INFO:tensorflow:ExampleCount post export metric: could not find any of the standard keys in predictions_dict (keys were: dict_keys(['prediction']))
INFO:tensorflow:Using the first key from predictions_dict: prediction
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:113: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`

As explained above, we are focusing on the false positive rate. In their current version (0.1.2), Fairness Indicators select the false negative rate by default. After running the line below, go ahead and deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.

plot_fairness_indicators(eval_result_unconstrained, "Unconstrained")

While the overall false positive rate is less than 2%, the false positive rate on the sexuality-related comments is significantly higher. This is because the sexuality group is very small in size, and has a disproportionately higher fraction of comments annotated as toxic. Hence, training a model without constraints results in the model believing that sexuality-related terms are a strong indicator of toxicity.

Train with constraints on false positive rates

To avoid large differences in false positive rates across the different groups, we next train a model by constraining the false positive rates for each group to be within a desired limit. In this case, we will optimize the error rate of the model subject to the per-group false positive rates being less than or equal to 2%.

Training on minibatches with per-group constraints can be challenging for this dataset, however, as the groups we wish to constrain are all small in size, and the individual minibatches are likely to contain very few examples from each group. Hence the gradients we compute during training will be noisy, and cause the model to converge very slowly.

To mitigate this problem, we recommend using two streams of minibatches, with the first stream formed as before from the entire training set, and the second stream formed solely from the sensitive group examples. We will compute the objective using minibatches from the first stream, and the per-group constraints using minibatches from the second stream. Because the batches from the second stream are likely to contain a larger number of examples from each group, we expect our updates to be less noisy.

We create separate features, labels and groups tensors to hold the minibatches from the two streams.

# Set random seed.
set_seeds()

# Features tensors.
batch_shape = (hparams["batch_size"], hparams['max_sequence_length'])
features_tensor = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x')
features_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x_sen')

# Labels tensors.
batch_shape = (hparams["batch_size"], 1)
labels_tensor = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels')
labels_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels_sen')

# Groups tensors.
batch_shape = (hparams["batch_size"], num_groups)
groups_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='groups_sen')

We instantiate a new model, and compute predictions for minibatches from the two streams.

# Create model, and separate prediction functions for the two streams. 
# For the predictions, we use a nullary function returning a Tensor to support eager mode.
model_constrained = create_model()

def predictions():
  return model_constrained(features_tensor)

def predictions_sen():
  return model_constrained(features_tensor_sen)

We then set up a constrained optimization problem with the error rate as the objective and with constraints on the per-group false positive rates.

epsilon = 0.02  # Desired false-positive rate threshold.

# Set up separate contexts for the two minibatch streams.
context = tfco.rate_context(predictions, lambda:labels_tensor)
context_sen = tfco.rate_context(predictions_sen, lambda:labels_tensor_sen)

# Compute the objective using the first stream.
objective = tfco.error_rate(context)

# Compute the constraint using the second stream.
# Subset the examples belonging to the "sexuality" group from the second stream 
# and add a constraint on the group's false positive rate.
context_sen_subset = context_sen.subset(lambda: groups_tensor_sen[:, 0] > 0)
constraint = [tfco.false_positive_rate(context_sen_subset) <= epsilon]

# Create a rate minimization problem.
problem = tfco.RateMinimizationProblem(objective, constraint)

# Set up a constrained optimizer.
optimizer = tfco.ProxyLagrangianOptimizerV2(
    optimizer=tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"]),
    num_constraints=problem.num_constraints)

# List of variables to optimize include the model weights, 
# and the trainable variables from the rate minimization problem and 
# the constrained optimizer.
var_list = (model_constrained.trainable_weights + problem.trainable_variables +
            optimizer.trainable_variables())

We are now ready to train the model. We maintain a separate counter for the two minibatch streams. Every time we perform a gradient update, we will have to copy the minibatch contents from the first stream to the tensors features_tensor and labels_tensor, and the minibatch contents from the second stream to the tensors features_tensor_sen, labels_tensor_sen and groups_tensor_sen.

# Indices of sensitive group members.
protected_group_indices = np.nonzero(groups_train.sum(axis=1))[0]

num_examples = text_train.shape[0]
num_examples_sen = protected_group_indices.shape[0]
batch_size = hparams["batch_size"]

# Number of steps needed for one epoch over the training sample.
num_steps = int(num_examples / batch_size)

start_time = time.time()

# Loop over minibatches.
for batch_index in range(num_steps):
    # Indices for current minibatch in the first stream.
    batch_indices = np.arange(
        batch_index * batch_size, (batch_index + 1) * batch_size)
    batch_indices = [ind % num_examples for ind in batch_indices]

    # Indices for current minibatch in the second stream.
    batch_indices_sen = np.arange(
        batch_index * batch_size, (batch_index + 1) * batch_size)
    batch_indices_sen = [protected_group_indices[ind % num_examples_sen]
                         for ind in batch_indices_sen]

    # Assign features, labels, groups from the minibatches to the respective tensors.
    features_tensor.assign(text_train[batch_indices, :])
    labels_tensor.assign(labels_train[batch_indices])

    features_tensor_sen.assign(text_train[batch_indices_sen, :])
    labels_tensor_sen.assign(labels_train[batch_indices_sen])
    groups_tensor_sen.assign(groups_train[batch_indices_sen, :])

    # Gradient update.
    optimizer.minimize(problem, var_list=var_list)

    # Record and print batch training stats every 10 steps.
    if (batch_index + 1) % 10 == 0 or batch_index in (0, num_steps - 1):
      hinge_loss = problem.objective()
      max_violation = max(problem.constraints())

      elapsed_time = time.time() - start_time
      sys.stdout.write(
          "\rStep %d / %d: Elapsed time = %ds, Loss = %.3f, Violation = %.3f" % 
          (batch_index + 1, num_steps, elapsed_time, hinge_loss, max_violation))
Step 747 / 747: Elapsed time = 180s, Loss = 0.068, Violation = -0.020

Having trained the constrained model, we plot various evaluation metrics for the model on the test set.

scores_constrained_test = model_constrained.predict(text_test)
eval_result_constrained = evaluate_results(
    labels_test, scores_constrained_test, groups_test, group_names)
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
INFO:tensorflow:ExampleCount post export metric: could not find any of the standard keys in predictions_dict (keys were: dict_keys(['prediction']))
INFO:tensorflow:Using the first key from predictions_dict: prediction
WARNING:apache_beam.io.filebasedsink:Deleting 1 existing files in target path matching: 

As last time, remember to select false_positive_rate.

plot_fairness_indicators(eval_result_constrained, "Constrained")
multi_results = {
    'constrained':eval_result_constrained,
    'unconstrained':eval_result_unconstrained,
}
plot_multi_fairness_indicators(multi_eval_results=multi_results)

As we can see from the fairness indicators, compared to the unconstrained model, the constrained model yields significantly lower false positive rates for the sexuality-related comments, and does so with only a slight dip in overall accuracy.