
Fairness Indicators Lineage Case Study


COMPAS Dataset

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset containing roughly 18,000 criminal cases from Broward County, Florida between January 2013 and December 2014. The data includes information on about 11,000 unique defendants, including criminal history, demographics, and a risk score intended to represent a defendant's likelihood of reoffending (recidivism). A machine learning model trained on this data has been used by judges and parole officers to help decide whether to set bail and whether to grant parole.

In 2016, an article published by ProPublica found that the COMPAS model incorrectly predicted that African-American defendants would recidivate at much higher rates than their Caucasian counterparts, while for Caucasian defendants the model made errors in the opposite direction, incorrectly predicting that they would not commit another crime. The authors subsequently showed that these biases were likely due to an uneven distribution of the data between African-American and Caucasian defendants. Specifically, the ground-truth labels for negative examples (the defendant would not commit another crime) and positive examples (the defendant would commit another crime) were disproportionate between the two races. Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature [1, 2, 3], with researchers using it to demonstrate techniques for identifying and remediating fairness concerns. This tutorial from the 2018 FAT* conference illustrates how COMPAS can dramatically affect a defendant's prospects in the real world.

It is important to note that developing a machine learning model to predict pre-trial detention raises a number of important ethical considerations. You can learn more about these issues in the Partnership on AI's "Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System." The Partnership on AI is a multi-stakeholder organization, of which Google is a member, that creates guidelines around AI.

We use the COMPAS dataset only as an example of how to identify and remediate fairness concerns in data. This dataset is canonical in the algorithmic fairness literature.

About the tools in this case study

  • TensorFlow Extended (TFX) is a Google production-scale machine learning platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate the common components needed to define, launch, and monitor your machine learning system.

  • TensorFlow Model Analysis is a library for evaluating machine learning models. Users can evaluate their models on large amounts of data in a distributed manner and view metrics over different slices within a notebook.

  • Fairness Indicators is a suite of tools built on top of TensorFlow Model Analysis that enables regular evaluation of fairness metrics in product pipelines.

  • ML Metadata is a library for recording and retrieving the lineage and metadata of artifacts such as ML models, datasets, and statistics. Within TFX, ML Metadata will help us understand the artifacts created in a pipeline, where an artifact is a unit of data passed between TFX components. A minimal query sketch follows this list.

  • TensorFlow Data Validation is a library for analyzing your data and checking for errors that may affect model training or serving.
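
To make the ML Metadata description above concrete, here is a minimal sketch of opening a metadata store and listing the artifact types it contains. The SQLite path below is only a placeholder; later in this case study we will instead query the database that the InteractiveContext creates for us.

# A minimal ML Metadata sketch (the SQLite path is a placeholder).
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = '/tmp/metadata.sqlite'
connection_config.sqlite.connection_mode = (
    metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE)

store = metadata_store.MetadataStore(connection_config)
# List the artifact types (e.g. Examples, Schema, Model) recorded so far.
print(store.get_artifact_types())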

Case study overview

For the duration of this case study, we will define "fairness concerns" as bias within a model that negatively impacts a slice of our data. Specifically, we are trying to limit any recidivism prediction that might be biased with respect to race.

The case study will proceed as follows:

  1. Download, preprocess, and explore the initial dataset.
  2. Build a TFX pipeline with the COMPAS dataset using a Keras binary classifier.
  3. Run our results through TensorFlow Model Analysis and TensorFlow Data Validation, and load Fairness Indicators to explore any potential fairness concerns within our model.
  4. Use ML Metadata to track all the artifacts of a model that we trained with TFX.
  5. Weight the initial COMPAS dataset for our second model to account for the uneven distribution between recidivism and race.
  6. Examine the performance changes with the new dataset.
  7. Check the underlying changes within our TFX pipeline with ML Metadata to understand what changed between the two models.

Helpful resources

This case study is an extension of the case studies below. It is recommended to work through those case studies first.

Setup

To start, we will install the necessary packages, download the data, and import the modules required for the case study.

To install the required packages for this case study in your notebook, run the pip command below.


  1. Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199

  2. Chouldechova, A., G'Sell, M. (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046

  3. Berk et al. (2017). Fairness in Criminal Justice Risk Assessments: The State of the Art. https://arxiv.org/abs/1703.09207

!python -m pip install -q -U pip==20.2

!python -m pip install -q -U \
  tensorflow==2.4.1 \
  tfx==0.28.0 \
  tensorflow-model-analysis==0.28.0 \
  tensorflow_data_validation==0.28.0 \
  tensorflow-metadata==0.28.0 \
  tensorflow-transform==0.28.0 \
  ml-metadata==0.28.0 \
  tfx-bsl==0.28.1 \
  absl-py==0.9

 # If prompted, please restart the Colab environment after the pip installs
 # as you might run into import errors.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import tempfile
import six.moves.urllib as urllib

from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

import pandas as pd
from google.protobuf import text_format
from sklearn.utils import shuffle
import tensorflow as tf
import tensorflow_data_validation as tfdv

import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view

import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import trainer_pb2

Download and preprocess the dataset

# Download the COMPAS dataset and setup the required filepaths.
_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')
_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'
_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'compas-scores-two-years.csv')

data = urllib.request.urlopen(_DATA_PATH)
_COMPAS_DF = pd.read_csv(data)

# To simplify the case study, we will only use the columns that will be used for
# our model.
_COLUMN_NAMES = [
  'age',
  'c_charge_desc',
  'c_charge_degree',
  'c_days_from_compas',
  'is_recid',
  'juv_fel_count',
  'juv_misd_count',
  'juv_other_count',
  'priors_count',
  'r_days_from_arrest',
  'race',
  'sex',
  'vr_charge_desc',                
]
_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]

# We will use 'is_recid' as our ground truth label, which is a boolean value
# indicating whether a defendant committed another crime. Some rows have -1,
# indicating that there is no data; we will drop those rows from training.
_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]

# Given the distribution between races in this dataset, we will only focus on
# recidivism for African-Americans and Caucasians.
_COMPAS_DF = _COMPAS_DF[
  _COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]

# Add a weight feature that will be used during the second part of this
# case study to help address fairness concerns.
_COMPAS_DF['sample_weight'] = 0.8

# Load the DataFrame back to a CSV file for our TFX model.
_COMPAS_DF.to_csv(_DATA_FILEPATH, index=False, na_rep='')
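
As a quick optional check of the label imbalance described earlier, you can look at how recidivism is distributed across the two races in the filtered DataFrame. This snippet only reads the _COMPAS_DF created above and does not change the pipeline.

# Optional: inspect how the ground truth label is distributed across races.
print(_COMPAS_DF.groupby(['race', 'is_recid']).size())
print(_COMPAS_DF['race'].value_counts(normalize=True))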

Build a TFX pipeline


There are several TFX pipeline components that can be used for a production model, but for the purposes of this case study we will focus on using only the components below:

  • ExampleGen to read our dataset.
  • StatisticsGen to compute statistics over our dataset.
  • SchemaGen to create a data schema.
  • Transform for feature engineering.
  • Trainer to train our machine learning model.

Create the InteractiveContext

To run TFX in a notebook, we first need to create an InteractiveContext to run the components interactively.

InteractiveContext will use a temporary directory with an ephemeral ML Metadata database instance. To use your own pipeline root or database, the optional pipeline_root and metadata_connection_config properties can be passed to InteractiveContext.

context = InteractiveContext()
WARNING:absl:InteractiveContext pipeline_root argument not provided: using temporary directory /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r as root for pipeline outputs.
WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/metadata.sqlite.
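
If you would rather keep the pipeline outputs and the ML Metadata database in a location of your own, the optional arguments can be passed explicitly. This is a minimal sketch; the paths are placeholders, and the rest of the case study continues to use the default context created above.

# Optional sketch: persist pipeline outputs and metadata to explicit locations.
_PIPELINE_ROOT = tempfile.mkdtemp(prefix='tfx-pipeline-root')
_METADATA_PATH = os.path.join(_PIPELINE_ROOT, 'metadata.sqlite')

metadata_config = metadata_store_pb2.ConnectionConfig()
metadata_config.sqlite.filename_uri = _METADATA_PATH
metadata_config.sqlite.connection_mode = (
    metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE)

custom_context = InteractiveContext(
    pipeline_root=_PIPELINE_ROOT,
    metadata_connection_config=metadata_config)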

TFX ExampleGen component

# The ExampleGen TFX Pipeline component ingests data into TFX pipelines.
# It consumes external files/services to generate Examples which will be read by
# other TFX components. It also provides consistent and configurable partition,
# and shuffles the dataset for ML best practice.

example_gen = CsvExampleGen(input_base=_DATA_ROOT)
context.run(example_gen)
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
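
To confirm what ExampleGen produced, you can peek at the output artifact. This is an optional check that assumes the interactive run above completed; the .get() accessor is available because we are working in an InteractiveContext.

# Optional: inspect the Examples artifact produced by ExampleGen.
examples_artifact = example_gen.outputs['examples'].get()[0]
print('split names:', examples_artifact.split_names)
print('artifact uri:', examples_artifact.uri)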

TFX StatisticsGen component

# The StatisticsGen TFX pipeline component generates features statistics over
# both training and serving data, which can be used by other pipeline
# components. StatisticsGen uses Beam to scale to large datasets.

statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
context.run(statistics_gen)
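
To browse the computed statistics inside the notebook, the InteractiveContext can render the StatisticsGen output directly (optional):

# Optional: visualize the dataset statistics computed by StatisticsGen.
context.show(statistics_gen.outputs['statistics'])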

TFX SchemaGen component

# Some TFX components use a description of your input data called a schema. The
# schema is an instance of schema.proto. It can specify data types for feature
# values, whether a feature has to be present in all examples, allowed value
# ranges, and other properties. A SchemaGen pipeline component will
# automatically generate a schema by inferring types, categories, and ranges
# from the training data.

infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])
context.run(infer_schema)
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_data_validation/utils/stats_util.py:247: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_data_validation/utils/stats_util.py:247: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`
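
Similarly, you can display the inferred schema in the notebook to verify the feature types and domains before moving on (optional):

# Optional: display the schema inferred by SchemaGen.
context.show(infer_schema.outputs['schema'])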

TFX Transform component

The Transform component performs data transformations and feature engineering. The output includes an input TensorFlow graph that is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same and only needs to be written once.

The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering you may need for the data and/or the model you are working with.

Define constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook.

The transformations we will perform in this case study are as follows:

  • For string values, generate a vocabulary that maps each value to an integer via tft.compute_and_apply_vocabulary.
  • For integer values, standardize each column to mean 0 and variance 1 via tft.scale_to_z_score.
  • Remove empty row values and replace them with an empty string or 0 depending on the feature type.
  • Append '_xf' to column names to denote the features that were processed in the Transform component.

Now we will define a module containing the preprocessing_fn() function that we will pass to the Transform component:

# Setup paths for the Transform Component.
_transform_module_file = 'compas_transform.py'
%%writefile {_transform_module_file}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import tensorflow_transform as tft

CATEGORICAL_FEATURE_KEYS = [
    'sex',
    'race',
    'c_charge_desc',
    'c_charge_degree',
]

INT_FEATURE_KEYS = [
    'age',
    'c_days_from_compas',
    'juv_fel_count',
    'juv_misd_count',
    'juv_other_count',
    'priors_count',
    'sample_weight',
]

LABEL_KEY = 'is_recid'

# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.
MAX_CATEGORICAL_FEATURE_VALUES = [
    2,
    6,
    513,
    14,
]


def transformed_name(key):
  return '{}_xf'.format(key)


def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.

  Args:
    inputs: Map from feature keys to raw features.

  Returns:
    Map from string feature key to transformed feature operations.
  """
  outputs = {}
  for key in CATEGORICAL_FEATURE_KEYS:
    outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
        _fill_in_missing(inputs[key]),
        vocab_filename=key)

  for key in INT_FEATURE_KEYS:
    outputs[transformed_name(key)] = tft.scale_to_z_score(
        _fill_in_missing(inputs[key]))

  # Target label will be to see if the defendant is charged for another crime.
  outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
  return outputs


def _fill_in_missing(tensor_value):
  """Replaces a missing values in a SparseTensor.

  Fills in missing values of `tensor_value` with '' or 0, and converts to a
  dense tensor.

  Args:
    tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size
      at most 1 in the second dimension.

  Returns:
    A rank 1 tensor where missing values of `tensor_value` are filled in.
  """
  if not isinstance(tensor_value, tf.sparse.SparseTensor):
    return tensor_value
  default_value = '' if tensor_value.dtype == tf.string else 0
  sparse_tensor = tf.SparseTensor(
      tensor_value.indices,
      tensor_value.values,
      [tensor_value.dense_shape[0], 1])
  dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)
  return tf.squeeze(dense_tensor, axis=1)
Writing compas_transform.py
# Build and run the Transform Component.
transform = Transform(
    examples=example_gen.outputs['examples'],
    schema=infer_schema.outputs['schema'],
    module_file=_transform_module_file
)
context.run(transform)
WARNING:absl:The default value of `force_tf_compat_v1` will change in a future release from `True` to `False`. Since this pipeline has TF 2 behaviors enabled, Transform will use native TF 2 at that point. You can test this behavior now by passing `force_tf_compat_v1=False` or disable it by explicitly setting `force_tf_compat_v1=True` in the Transform component.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tfx/components/transform/executor.py:573: Schema (from tensorflow_transform.tf_metadata.dataset_schema) is deprecated and will be removed in a future version.
Instructions for updating:
Schema is a deprecated, use schema_utils.schema_from_feature_spec to create a `Schema`
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tfx/components/transform/executor.py:573: Schema (from tensorflow_transform.tf_metadata.dataset_schema) is deprecated and will be removed in a future version.
Instructions for updating:
Schema is a deprecated, use schema_utils.schema_from_feature_spec to create a `Schema`
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:266: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:266: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType]] instead.
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType]] instead.
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:No assets to write.
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/34923099dd2444f1a12dd79e9e93b9d2/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/34923099dd2444f1a12dd79e9e93b9d2/saved_model.pb
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:No assets to write.
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/2d5bc9f0641646379cb0c6d04efedee6/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/2d5bc9f0641646379cb0c6d04efedee6/saved_model.pb
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/8fb9d0492a5f4c0b994fd3acb409dff6/assets
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/8fb9d0492a5f4c0b994fd3acb409dff6/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/8fb9d0492a5f4c0b994fd3acb409dff6/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/8fb9d0492a5f4c0b994fd3acb409dff6/saved_model.pb
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore

TFX Trainer component

The Trainer component trains a specified TensorFlow model.

To run the Trainer component, we need to create a Python module containing a trainer_fn function that TFX will call and that returns an Estimator for our model. If you prefer to create a Keras model, you can do so and then convert it to an Estimator using tf.keras.estimator.model_to_estimator().

For this case study, we will build a Keras model and convert it to an Estimator with tf.keras.estimator.model_to_estimator().

# Setup paths for the Trainer Component.
_trainer_module_file = 'compas_trainer.py'
%%writefile {_trainer_module_file}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils

from compas_transform import *

_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999


def transformed_names(keys):
  return [transformed_name(key) for key in keys]


def transformed_name(key):
  return '{}_xf'.format(key)


def _gzip_reader_fn(filenames):
  """Returns a record reader that can read gzip'ed files.

  Args:
    filenames: A tf.string tensor or tf.data.Dataset containing one or more
      filenames.

  Returns:
    A TFRecordDataset that reads the gzip-compressed TFRecord files in
    `filenames`.
  """
  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')


# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
  """Generates a feature spec from a Schema proto.

  Args:
    schema: A Schema proto.

  Returns:
    A feature spec defined as a dict whose keys are feature names and values are
      instances of FixedLenFeature, VarLenFeature or SparseFeature.
  """
  return schema_utils.schema_as_feature_spec(schema).feature_spec


def _example_serving_receiver_fn(tf_transform_output, schema):
  """Builds the serving in inputs.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    TensorFlow graph which parses examples, applying tf-transform to them.
  """
  raw_feature_spec = _get_raw_feature_spec(schema)
  raw_feature_spec.pop(LABEL_KEY)

  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec)
  serving_input_receiver = raw_input_fn()

  transformed_features = tf_transform_output.transform_raw_features(
      serving_input_receiver.features)
  transformed_features.pop(transformed_name(LABEL_KEY))
  return tf.estimator.export.ServingInputReceiver(
      transformed_features, serving_input_receiver.receiver_tensors)


def _eval_input_receiver_fn(tf_transform_output, schema):
  """Builds everything needed for the tf-model-analysis to run the model.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    EvalInputReceiver function, which contains:
      - TensorFlow graph which parses raw untransformed features, applies the
          tf-transform preprocessing operators.
      - Set of raw, untransformed features.
      - Label against which predictions will be compared.
  """
  # Notice that the inputs are raw features, not transformed features here.
  raw_feature_spec = _get_raw_feature_spec(schema)

  serialized_tf_example = tf.compat.v1.placeholder(
      dtype=tf.string, shape=[None], name='input_example_tensor')

  # Add a parse_example operator to the tensorflow graph, which will parse
  # raw, untransformed, tf examples.
  features = tf.io.parse_example(
      serialized=serialized_tf_example, features=raw_feature_spec)

  transformed_features = tf_transform_output.transform_raw_features(features)
  labels = transformed_features.pop(transformed_name(LABEL_KEY))

  receiver_tensors = {'examples': serialized_tf_example}

  return tfma.export.EvalInputReceiver(
      features=transformed_features,
      receiver_tensors=receiver_tensors,
      labels=labels)


def _input_fn(filenames, tf_transform_output, batch_size=200):
  """Generates features and labels for training or evaluation.

  Args:
    filenames: List of TFRecord files to read data from.
    tf_transform_output: A TFTransformOutput.
    batch_size: First dimension size of the Tensors returned by input_fn.

  Returns:
    A (features, indices) tuple where features is a dictionary of
      Tensors, and indices is a single Tensor of label indices.
  """
  transformed_feature_spec = (
      tf_transform_output.transformed_feature_spec().copy())

  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
      filenames,
      batch_size,
      transformed_feature_spec,
      shuffle=False,
      reader=_gzip_reader_fn)

  transformed_features = dataset.make_one_shot_iterator().get_next()

  # We pop the label because we do not want to use it as a feature while we're
  # training.
  return transformed_features, transformed_features.pop(
      transformed_name(LABEL_KEY))


def _keras_model_builder():
  """Build a keras model for COMPAS dataset classification.

  Returns:
    A compiled Keras model.
  """
  feature_columns = []
  feature_layer_inputs = {}

  for key in transformed_names(INT_FEATURE_KEYS):
    feature_columns.append(tf.feature_column.numeric_column(key))
    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)

  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
                              MAX_CATEGORICAL_FEATURE_VALUES):
    feature_columns.append(
        tf.feature_column.indicator_column(
            tf.feature_column.categorical_column_with_identity(
                key, num_buckets=num_buckets)))
    feature_layer_inputs[key] = tf.keras.Input(
        shape=(1,), name=key, dtype=tf.dtypes.int32)

  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
  feature_layer_outputs = feature_columns_input(feature_layer_inputs)

  dense_layers = tf.keras.layers.Dense(
      20, activation='relu', name='dense_1')(feature_layer_outputs)
  dense_layers = tf.keras.layers.Dense(
      10, activation='relu', name='dense_2')(dense_layers)
  output = tf.keras.layers.Dense(
      1, name='predictions')(dense_layers)

  model = tf.keras.Model(
      inputs=[v for v in feature_layer_inputs.values()], outputs=output)

  model.compile(
      loss=tf.keras.losses.MeanAbsoluteError(),
      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))

  return model


# TFX will call this function.
def trainer_fn(hparams, schema):
  """Build the estimator using the high level API.

  Args:
    hparams: Hyperparameters used to train the model as name/value pairs.
    schema: Holds the schema of the training examples.

  Returns:
    A dict of the following:
      - estimator: The estimator that will be used for training and eval.
      - train_spec: Spec for training.
      - eval_spec: Spec for eval.
      - eval_input_receiver_fn: Input function for eval.
  """
  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)

  train_input_fn = lambda: _input_fn(
      hparams.train_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  eval_input_fn = lambda: _input_fn(
      hparams.eval_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  train_spec = tf.estimator.TrainSpec(
      train_input_fn,
      max_steps=hparams.train_steps)

  serving_receiver_fn = lambda: _example_serving_receiver_fn(
      tf_transform_output, schema)

  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
  eval_spec = tf.estimator.EvalSpec(
      eval_input_fn,
      steps=hparams.eval_steps,
      exporters=[exporter],
      name='compas-eval')

  run_config = tf.estimator.RunConfig(
      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
      keep_checkpoint_max=_MAX_CHECKPOINTS)

  run_config = run_config.replace(model_dir=hparams.serving_model_dir)

  estimator = tf.keras.estimator.model_to_estimator(
      keras_model=_keras_model_builder(), config=run_config)

  # Create an input receiver for TFMA processing.
  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)

  return {
      'estimator': estimator,
      'train_spec': train_spec,
      'eval_spec': eval_spec,
      'eval_input_receiver_fn': receiver_fn
  }
Writing compas_trainer.py
# Uses user-provided Python function that implements a model using TensorFlow's
# Estimators API.
trainer = Trainer(
    module_file=_trainer_module_file,
    transformed_examples=transform.outputs['transformed_examples'],
    schema=infer_schema.outputs['schema'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer)
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using the Keras model provided.
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:434: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
  warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
WARNING:tensorflow:From compas_trainer.py:136: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
WARNING:tensorflow:From compas_trainer.py:136: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt
INFO:tensorflow:Warm-starting from: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt
INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES.
INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES.
INFO:tensorflow:Warm-started 6 variables.
INFO:tensorflow:Warm-started 6 variables.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 0.47416827, step = 0
INFO:tensorflow:loss = 0.47416827, step = 0
INFO:tensorflow:global_step/sec: 103.552
INFO:tensorflow:global_step/sec: 103.552
INFO:tensorflow:loss = 0.4922419, step = 100 (0.968 sec)
INFO:tensorflow:loss = 0.4922419, step = 100 (0.968 sec)
INFO:tensorflow:global_step/sec: 106.369
INFO:tensorflow:global_step/sec: 106.369
INFO:tensorflow:loss = 0.50697845, step = 200 (0.939 sec)
INFO:tensorflow:loss = 0.50697845, step = 200 (0.939 sec)
INFO:tensorflow:global_step/sec: 108.028
INFO:tensorflow:global_step/sec: 108.028
INFO:tensorflow:loss = 0.50335556, step = 300 (0.926 sec)
INFO:tensorflow:loss = 0.50335556, step = 300 (0.926 sec)
INFO:tensorflow:global_step/sec: 106.316
INFO:tensorflow:global_step/sec: 106.316
INFO:tensorflow:loss = 0.47721145, step = 400 (0.941 sec)
INFO:tensorflow:loss = 0.47721145, step = 400 (0.941 sec)
INFO:tensorflow:global_step/sec: 107.036
INFO:tensorflow:global_step/sec: 107.036
INFO:tensorflow:loss = 0.45895657, step = 500 (0.934 sec)
INFO:tensorflow:loss = 0.45895657, step = 500 (0.934 sec)
INFO:tensorflow:global_step/sec: 106.896
INFO:tensorflow:global_step/sec: 106.896
INFO:tensorflow:loss = 0.45208624, step = 600 (0.935 sec)
INFO:tensorflow:loss = 0.45208624, step = 600 (0.935 sec)
INFO:tensorflow:global_step/sec: 105.365
INFO:tensorflow:global_step/sec: 105.365
INFO:tensorflow:loss = 0.4489294, step = 700 (0.949 sec)
INFO:tensorflow:loss = 0.4489294, step = 700 (0.949 sec)
INFO:tensorflow:global_step/sec: 107.341
INFO:tensorflow:global_step/sec: 107.341
INFO:tensorflow:loss = 0.46455735, step = 800 (0.932 sec)
INFO:tensorflow:loss = 0.46455735, step = 800 (0.932 sec)
INFO:tensorflow:global_step/sec: 103.443
INFO:tensorflow:global_step/sec: 103.443
INFO:tensorflow:loss = 0.47789398, step = 900 (0.967 sec)
INFO:tensorflow:loss = 0.47789398, step = 900 (0.967 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 999...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 999...
INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 999...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 999...
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py:2325: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
  warnings.warn('`Model.state_updates` will be removed in a future version. '
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-04-23T09:10:14Z
INFO:tensorflow:Starting evaluation at 2021-04-23T09:10:14Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-999
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-999
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 48.79983s
INFO:tensorflow:Inference Time : 48.79983s
INFO:tensorflow:Finished evaluation at 2021-04-23-09:11:03
INFO:tensorflow:Finished evaluation at 2021-04-23-09:11:03
INFO:tensorflow:Saving dict for global step 999: global_step = 999, loss = 0.4798829
INFO:tensorflow:Saving dict for global step 999: global_step = 999, loss = 0.4798829
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-999
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-999
INFO:tensorflow:global_step/sec: 1.99761
INFO:tensorflow:global_step/sec: 1.99761
INFO:tensorflow:loss = 0.49395803, step = 1000 (50.059 sec)
INFO:tensorflow:loss = 0.49395803, step = 1000 (50.059 sec)
INFO:tensorflow:global_step/sec: 103.094
INFO:tensorflow:global_step/sec: 103.094
INFO:tensorflow:loss = 0.48954606, step = 1100 (0.970 sec)
INFO:tensorflow:loss = 0.48954606, step = 1100 (0.970 sec)
INFO:tensorflow:global_step/sec: 101.109
INFO:tensorflow:global_step/sec: 101.109
INFO:tensorflow:loss = 0.49123546, step = 1200 (0.989 sec)
INFO:tensorflow:loss = 0.49123546, step = 1200 (0.989 sec)
INFO:tensorflow:global_step/sec: 100.528
INFO:tensorflow:global_step/sec: 100.528
INFO:tensorflow:loss = 0.4701535, step = 1300 (0.995 sec)
INFO:tensorflow:loss = 0.4701535, step = 1300 (0.995 sec)
INFO:tensorflow:global_step/sec: 100.192
INFO:tensorflow:global_step/sec: 100.192
INFO:tensorflow:loss = 0.46582404, step = 1400 (0.999 sec)
INFO:tensorflow:loss = 0.46582404, step = 1400 (0.999 sec)
INFO:tensorflow:global_step/sec: 100.13
INFO:tensorflow:global_step/sec: 100.13
INFO:tensorflow:loss = 0.45980436, step = 1500 (0.998 sec)
INFO:tensorflow:loss = 0.45980436, step = 1500 (0.998 sec)
INFO:tensorflow:global_step/sec: 101.085
INFO:tensorflow:global_step/sec: 101.085
INFO:tensorflow:loss = 0.46045718, step = 1600 (0.989 sec)
INFO:tensorflow:loss = 0.46045718, step = 1600 (0.989 sec)
INFO:tensorflow:global_step/sec: 100.746
INFO:tensorflow:global_step/sec: 100.746
INFO:tensorflow:loss = 0.47194332, step = 1700 (0.995 sec)
INFO:tensorflow:loss = 0.47194332, step = 1700 (0.995 sec)
INFO:tensorflow:global_step/sec: 99.8541
INFO:tensorflow:global_step/sec: 99.8541
INFO:tensorflow:loss = 0.45978338, step = 1800 (0.999 sec)
INFO:tensorflow:loss = 0.45978338, step = 1800 (0.999 sec)
INFO:tensorflow:global_step/sec: 97.982
INFO:tensorflow:global_step/sec: 97.982
INFO:tensorflow:loss = 0.45745283, step = 1900 (1.021 sec)
INFO:tensorflow:loss = 0.45745283, step = 1900 (1.021 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1998...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1998...
INFO:tensorflow:Saving checkpoints for 1998 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 1998 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1998...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1998...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 96.2637
INFO:tensorflow:global_step/sec: 96.2637
INFO:tensorflow:loss = 0.44210017, step = 2000 (1.039 sec)
INFO:tensorflow:loss = 0.44210017, step = 2000 (1.039 sec)
INFO:tensorflow:global_step/sec: 104.181
INFO:tensorflow:global_step/sec: 104.181
INFO:tensorflow:loss = 0.4267306, step = 2100 (0.960 sec)
INFO:tensorflow:loss = 0.4267306, step = 2100 (0.960 sec)
INFO:tensorflow:global_step/sec: 100.628
INFO:tensorflow:global_step/sec: 100.628
INFO:tensorflow:loss = 0.43270233, step = 2200 (0.994 sec)
INFO:tensorflow:loss = 0.43270233, step = 2200 (0.994 sec)
INFO:tensorflow:global_step/sec: 102.274
INFO:tensorflow:global_step/sec: 102.274
INFO:tensorflow:loss = 0.42014548, step = 2300 (0.978 sec)
INFO:tensorflow:loss = 0.42014548, step = 2300 (0.978 sec)
INFO:tensorflow:global_step/sec: 99.5664
INFO:tensorflow:global_step/sec: 99.5664
INFO:tensorflow:loss = 0.42362845, step = 2400 (1.004 sec)
INFO:tensorflow:loss = 0.42362845, step = 2400 (1.004 sec)
INFO:tensorflow:global_step/sec: 101.008
INFO:tensorflow:global_step/sec: 101.008
INFO:tensorflow:loss = 0.43012613, step = 2500 (0.990 sec)
INFO:tensorflow:loss = 0.43012613, step = 2500 (0.990 sec)
INFO:tensorflow:global_step/sec: 102.62
INFO:tensorflow:global_step/sec: 102.62
INFO:tensorflow:loss = 0.435121, step = 2600 (0.974 sec)
INFO:tensorflow:loss = 0.435121, step = 2600 (0.974 sec)
INFO:tensorflow:global_step/sec: 102.1
INFO:tensorflow:global_step/sec: 102.1
INFO:tensorflow:loss = 0.42686707, step = 2700 (0.981 sec)
INFO:tensorflow:loss = 0.42686707, step = 2700 (0.981 sec)
INFO:tensorflow:global_step/sec: 103.746
INFO:tensorflow:global_step/sec: 103.746
INFO:tensorflow:loss = 0.41858014, step = 2800 (0.964 sec)
INFO:tensorflow:loss = 0.41858014, step = 2800 (0.964 sec)
INFO:tensorflow:global_step/sec: 102.04
INFO:tensorflow:global_step/sec: 102.04
INFO:tensorflow:loss = 0.41823772, step = 2900 (0.978 sec)
INFO:tensorflow:loss = 0.41823772, step = 2900 (0.978 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 2997...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 2997...
INFO:tensorflow:Saving checkpoints for 2997 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 2997 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 2997...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 2997...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 100.291
INFO:tensorflow:global_step/sec: 100.291
INFO:tensorflow:loss = 0.40824187, step = 3000 (0.997 sec)
INFO:tensorflow:loss = 0.40824187, step = 3000 (0.997 sec)
INFO:tensorflow:global_step/sec: 106.907
INFO:tensorflow:global_step/sec: 106.907
INFO:tensorflow:loss = 0.40978715, step = 3100 (0.936 sec)
INFO:tensorflow:loss = 0.40978715, step = 3100 (0.936 sec)
INFO:tensorflow:global_step/sec: 104.101
INFO:tensorflow:global_step/sec: 104.101
INFO:tensorflow:loss = 0.417184, step = 3200 (0.960 sec)
INFO:tensorflow:loss = 0.417184, step = 3200 (0.960 sec)
INFO:tensorflow:global_step/sec: 99.6517
INFO:tensorflow:global_step/sec: 99.6517
INFO:tensorflow:loss = 0.43127513, step = 3300 (1.004 sec)
INFO:tensorflow:loss = 0.43127513, step = 3300 (1.004 sec)
INFO:tensorflow:global_step/sec: 99.7764
INFO:tensorflow:global_step/sec: 99.7764
INFO:tensorflow:loss = 0.41585788, step = 3400 (1.002 sec)
INFO:tensorflow:loss = 0.41585788, step = 3400 (1.002 sec)
INFO:tensorflow:global_step/sec: 104.479
INFO:tensorflow:global_step/sec: 104.479
INFO:tensorflow:loss = 0.40642825, step = 3500 (0.957 sec)
INFO:tensorflow:loss = 0.40642825, step = 3500 (0.957 sec)
INFO:tensorflow:global_step/sec: 99.2027
INFO:tensorflow:global_step/sec: 99.2027
INFO:tensorflow:loss = 0.40078893, step = 3600 (1.008 sec)
INFO:tensorflow:loss = 0.40078893, step = 3600 (1.008 sec)
INFO:tensorflow:global_step/sec: 99.5083
INFO:tensorflow:global_step/sec: 99.5083
INFO:tensorflow:loss = 0.4084859, step = 3700 (1.005 sec)
INFO:tensorflow:loss = 0.4084859, step = 3700 (1.005 sec)
INFO:tensorflow:global_step/sec: 101.837
INFO:tensorflow:global_step/sec: 101.837
INFO:tensorflow:loss = 0.38706055, step = 3800 (0.982 sec)
INFO:tensorflow:loss = 0.38706055, step = 3800 (0.982 sec)
INFO:tensorflow:global_step/sec: 100.761
INFO:tensorflow:global_step/sec: 100.761
INFO:tensorflow:loss = 0.38369697, step = 3900 (0.992 sec)
INFO:tensorflow:loss = 0.38369697, step = 3900 (0.992 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3996...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3996...
INFO:tensorflow:Saving checkpoints for 3996 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 3996 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3996...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3996...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 99.897
INFO:tensorflow:global_step/sec: 99.897
INFO:tensorflow:loss = 0.4063977, step = 4000 (1.001 sec)
INFO:tensorflow:loss = 0.4063977, step = 4000 (1.001 sec)
INFO:tensorflow:global_step/sec: 99.4043
INFO:tensorflow:global_step/sec: 99.4043
INFO:tensorflow:loss = 0.42966503, step = 4100 (1.005 sec)
INFO:tensorflow:loss = 0.42966503, step = 4100 (1.005 sec)
INFO:tensorflow:global_step/sec: 99.4718
INFO:tensorflow:global_step/sec: 99.4718
INFO:tensorflow:loss = 0.43339205, step = 4200 (1.006 sec)
INFO:tensorflow:loss = 0.43339205, step = 4200 (1.006 sec)
INFO:tensorflow:global_step/sec: 99.881
INFO:tensorflow:global_step/sec: 99.881
INFO:tensorflow:loss = 0.41945544, step = 4300 (1.001 sec)
INFO:tensorflow:loss = 0.41945544, step = 4300 (1.001 sec)
INFO:tensorflow:global_step/sec: 99.7086
INFO:tensorflow:global_step/sec: 99.7086
INFO:tensorflow:loss = 0.39942062, step = 4400 (1.003 sec)
INFO:tensorflow:loss = 0.39942062, step = 4400 (1.003 sec)
INFO:tensorflow:global_step/sec: 100.605
INFO:tensorflow:global_step/sec: 100.605
INFO:tensorflow:loss = 0.40324017, step = 4500 (0.994 sec)
INFO:tensorflow:loss = 0.40324017, step = 4500 (0.994 sec)
INFO:tensorflow:global_step/sec: 103.285
INFO:tensorflow:global_step/sec: 103.285
INFO:tensorflow:loss = 0.40799192, step = 4600 (0.968 sec)
INFO:tensorflow:loss = 0.40799192, step = 4600 (0.968 sec)
INFO:tensorflow:global_step/sec: 105.19
INFO:tensorflow:global_step/sec: 105.19
INFO:tensorflow:loss = 0.4159081, step = 4700 (0.951 sec)
INFO:tensorflow:loss = 0.4159081, step = 4700 (0.951 sec)
INFO:tensorflow:global_step/sec: 104.719
INFO:tensorflow:global_step/sec: 104.719
INFO:tensorflow:loss = 0.43424368, step = 4800 (0.955 sec)
INFO:tensorflow:loss = 0.43424368, step = 4800 (0.955 sec)
INFO:tensorflow:global_step/sec: 107.189
INFO:tensorflow:global_step/sec: 107.189
INFO:tensorflow:loss = 0.41860652, step = 4900 (0.933 sec)
INFO:tensorflow:loss = 0.41860652, step = 4900 (0.933 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4995...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4995...
INFO:tensorflow:Saving checkpoints for 4995 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 4995 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/saver.py:970: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/saver.py:970: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4995...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4995...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 103.085
INFO:tensorflow:global_step/sec: 103.085
INFO:tensorflow:loss = 0.3955871, step = 5000 (0.970 sec)
INFO:tensorflow:loss = 0.3955871, step = 5000 (0.970 sec)
INFO:tensorflow:global_step/sec: 102.244
INFO:tensorflow:global_step/sec: 102.244
INFO:tensorflow:loss = 0.38054687, step = 5100 (0.979 sec)
INFO:tensorflow:loss = 0.38054687, step = 5100 (0.979 sec)
INFO:tensorflow:global_step/sec: 102.199
INFO:tensorflow:global_step/sec: 102.199
INFO:tensorflow:loss = 0.37835938, step = 5200 (0.979 sec)
INFO:tensorflow:loss = 0.37835938, step = 5200 (0.979 sec)
INFO:tensorflow:global_step/sec: 102.192
INFO:tensorflow:global_step/sec: 102.192
INFO:tensorflow:loss = 0.3742793, step = 5300 (0.978 sec)
INFO:tensorflow:loss = 0.3742793, step = 5300 (0.978 sec)
INFO:tensorflow:global_step/sec: 100.049
INFO:tensorflow:global_step/sec: 100.049
INFO:tensorflow:loss = 0.37766984, step = 5400 (0.999 sec)
INFO:tensorflow:loss = 0.37766984, step = 5400 (0.999 sec)
INFO:tensorflow:global_step/sec: 101.413
INFO:tensorflow:global_step/sec: 101.413
INFO:tensorflow:loss = 0.37288016, step = 5500 (0.989 sec)
INFO:tensorflow:loss = 0.37288016, step = 5500 (0.989 sec)
INFO:tensorflow:global_step/sec: 99.4785
INFO:tensorflow:global_step/sec: 99.4785
INFO:tensorflow:loss = 0.39033508, step = 5600 (1.002 sec)
INFO:tensorflow:loss = 0.39033508, step = 5600 (1.002 sec)
INFO:tensorflow:global_step/sec: 101.706
INFO:tensorflow:global_step/sec: 101.706
INFO:tensorflow:loss = 0.3888662, step = 5700 (0.983 sec)
INFO:tensorflow:loss = 0.3888662, step = 5700 (0.983 sec)
INFO:tensorflow:global_step/sec: 103.171
INFO:tensorflow:global_step/sec: 103.171
INFO:tensorflow:loss = 0.39443827, step = 5800 (0.969 sec)
INFO:tensorflow:loss = 0.39443827, step = 5800 (0.969 sec)
INFO:tensorflow:global_step/sec: 100.242
INFO:tensorflow:global_step/sec: 100.242
INFO:tensorflow:loss = 0.3824133, step = 5900 (0.998 sec)
INFO:tensorflow:loss = 0.3824133, step = 5900 (0.998 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5994...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5994...
INFO:tensorflow:Saving checkpoints for 5994 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 5994 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5994...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5994...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 101.746
INFO:tensorflow:global_step/sec: 101.746
INFO:tensorflow:loss = 0.38710442, step = 6000 (0.983 sec)
INFO:tensorflow:loss = 0.38710442, step = 6000 (0.983 sec)
INFO:tensorflow:global_step/sec: 100.1
INFO:tensorflow:global_step/sec: 100.1
INFO:tensorflow:loss = 0.37636378, step = 6100 (0.999 sec)
INFO:tensorflow:loss = 0.37636378, step = 6100 (0.999 sec)
INFO:tensorflow:global_step/sec: 99.9325
INFO:tensorflow:global_step/sec: 99.9325
INFO:tensorflow:loss = 0.37966123, step = 6200 (1.001 sec)
INFO:tensorflow:loss = 0.37966123, step = 6200 (1.001 sec)
INFO:tensorflow:global_step/sec: 99.0218
INFO:tensorflow:global_step/sec: 99.0218
INFO:tensorflow:loss = 0.36940622, step = 6300 (1.010 sec)
INFO:tensorflow:loss = 0.36940622, step = 6300 (1.010 sec)
INFO:tensorflow:global_step/sec: 102.772
INFO:tensorflow:global_step/sec: 102.772
INFO:tensorflow:loss = 0.37147108, step = 6400 (0.972 sec)
INFO:tensorflow:loss = 0.37147108, step = 6400 (0.972 sec)
INFO:tensorflow:global_step/sec: 105.027
INFO:tensorflow:global_step/sec: 105.027
INFO:tensorflow:loss = 0.36456805, step = 6500 (0.952 sec)
INFO:tensorflow:loss = 0.36456805, step = 6500 (0.952 sec)
INFO:tensorflow:global_step/sec: 103.18
INFO:tensorflow:global_step/sec: 103.18
INFO:tensorflow:loss = 0.3684589, step = 6600 (0.969 sec)
INFO:tensorflow:loss = 0.3684589, step = 6600 (0.969 sec)
INFO:tensorflow:global_step/sec: 99.3375
INFO:tensorflow:global_step/sec: 99.3375
INFO:tensorflow:loss = 0.376545, step = 6700 (1.007 sec)
INFO:tensorflow:loss = 0.376545, step = 6700 (1.007 sec)
INFO:tensorflow:global_step/sec: 105.682
INFO:tensorflow:global_step/sec: 105.682
INFO:tensorflow:loss = 0.3895915, step = 6800 (0.947 sec)
INFO:tensorflow:loss = 0.3895915, step = 6800 (0.947 sec)
INFO:tensorflow:global_step/sec: 114.848
INFO:tensorflow:global_step/sec: 114.848
INFO:tensorflow:loss = 0.37849602, step = 6900 (0.870 sec)
INFO:tensorflow:loss = 0.37849602, step = 6900 (0.870 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 6993...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 6993...
INFO:tensorflow:Saving checkpoints for 6993 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 6993 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 6993...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 6993...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 109.616
INFO:tensorflow:global_step/sec: 109.616
INFO:tensorflow:loss = 0.35964197, step = 7000 (0.912 sec)
INFO:tensorflow:loss = 0.35964197, step = 7000 (0.912 sec)
INFO:tensorflow:global_step/sec: 105.581
INFO:tensorflow:global_step/sec: 105.581
INFO:tensorflow:loss = 0.36216918, step = 7100 (0.947 sec)
INFO:tensorflow:loss = 0.36216918, step = 7100 (0.947 sec)
INFO:tensorflow:global_step/sec: 106.131
INFO:tensorflow:global_step/sec: 106.131
INFO:tensorflow:loss = 0.3937424, step = 7200 (0.942 sec)
INFO:tensorflow:loss = 0.3937424, step = 7200 (0.942 sec)
INFO:tensorflow:global_step/sec: 105.7
INFO:tensorflow:global_step/sec: 105.7
INFO:tensorflow:loss = 0.38952962, step = 7300 (0.946 sec)
INFO:tensorflow:loss = 0.38952962, step = 7300 (0.946 sec)
INFO:tensorflow:global_step/sec: 102.797
INFO:tensorflow:global_step/sec: 102.797
INFO:tensorflow:loss = 0.37355947, step = 7400 (0.973 sec)
INFO:tensorflow:loss = 0.37355947, step = 7400 (0.973 sec)
INFO:tensorflow:global_step/sec: 102.454
INFO:tensorflow:global_step/sec: 102.454
INFO:tensorflow:loss = 0.36603284, step = 7500 (0.976 sec)
INFO:tensorflow:loss = 0.36603284, step = 7500 (0.976 sec)
INFO:tensorflow:global_step/sec: 103.682
INFO:tensorflow:global_step/sec: 103.682
INFO:tensorflow:loss = 0.3693564, step = 7600 (0.964 sec)
INFO:tensorflow:loss = 0.3693564, step = 7600 (0.964 sec)
INFO:tensorflow:global_step/sec: 104.262
INFO:tensorflow:global_step/sec: 104.262
INFO:tensorflow:loss = 0.37061787, step = 7700 (0.959 sec)
INFO:tensorflow:loss = 0.37061787, step = 7700 (0.959 sec)
INFO:tensorflow:global_step/sec: 104.767
INFO:tensorflow:global_step/sec: 104.767
INFO:tensorflow:loss = 0.39289498, step = 7800 (0.955 sec)
INFO:tensorflow:loss = 0.39289498, step = 7800 (0.955 sec)
INFO:tensorflow:global_step/sec: 105.669
INFO:tensorflow:global_step/sec: 105.669
INFO:tensorflow:loss = 0.39648676, step = 7900 (0.946 sec)
INFO:tensorflow:loss = 0.39648676, step = 7900 (0.946 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 7992...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 7992...
INFO:tensorflow:Saving checkpoints for 7992 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 7992 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 7992...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 7992...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 105.931
INFO:tensorflow:global_step/sec: 105.931
INFO:tensorflow:loss = 0.4102661, step = 8000 (0.944 sec)
INFO:tensorflow:loss = 0.4102661, step = 8000 (0.944 sec)
INFO:tensorflow:global_step/sec: 104.541
INFO:tensorflow:global_step/sec: 104.541
INFO:tensorflow:loss = 0.38024917, step = 8100 (0.957 sec)
INFO:tensorflow:loss = 0.38024917, step = 8100 (0.957 sec)
INFO:tensorflow:global_step/sec: 102.663
INFO:tensorflow:global_step/sec: 102.663
INFO:tensorflow:loss = 0.37263972, step = 8200 (0.974 sec)
INFO:tensorflow:loss = 0.37263972, step = 8200 (0.974 sec)
INFO:tensorflow:global_step/sec: 101.803
INFO:tensorflow:global_step/sec: 101.803
INFO:tensorflow:loss = 0.35875428, step = 8300 (0.982 sec)
INFO:tensorflow:loss = 0.35875428, step = 8300 (0.982 sec)
INFO:tensorflow:global_step/sec: 101.443
INFO:tensorflow:global_step/sec: 101.443
INFO:tensorflow:loss = 0.35559803, step = 8400 (0.986 sec)
INFO:tensorflow:loss = 0.35559803, step = 8400 (0.986 sec)
INFO:tensorflow:global_step/sec: 100.077
INFO:tensorflow:global_step/sec: 100.077
INFO:tensorflow:loss = 0.3563253, step = 8500 (0.999 sec)
INFO:tensorflow:loss = 0.3563253, step = 8500 (0.999 sec)
INFO:tensorflow:global_step/sec: 100.147
INFO:tensorflow:global_step/sec: 100.147
INFO:tensorflow:loss = 0.34861985, step = 8600 (0.998 sec)
INFO:tensorflow:loss = 0.34861985, step = 8600 (0.998 sec)
INFO:tensorflow:global_step/sec: 99.9734
INFO:tensorflow:global_step/sec: 99.9734
INFO:tensorflow:loss = 0.35559162, step = 8700 (1.000 sec)
INFO:tensorflow:loss = 0.35559162, step = 8700 (1.000 sec)
INFO:tensorflow:global_step/sec: 99.5136
INFO:tensorflow:global_step/sec: 99.5136
INFO:tensorflow:loss = 0.36242756, step = 8800 (1.005 sec)
INFO:tensorflow:loss = 0.36242756, step = 8800 (1.005 sec)
INFO:tensorflow:global_step/sec: 104.811
INFO:tensorflow:global_step/sec: 104.811
INFO:tensorflow:loss = 0.3742514, step = 8900 (0.954 sec)
INFO:tensorflow:loss = 0.3742514, step = 8900 (0.954 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991...
INFO:tensorflow:Saving checkpoints for 8991 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 8991 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 106.372
INFO:tensorflow:global_step/sec: 106.372
INFO:tensorflow:loss = 0.3587474, step = 9000 (0.940 sec)
INFO:tensorflow:loss = 0.3587474, step = 9000 (0.940 sec)
INFO:tensorflow:global_step/sec: 104.249
INFO:tensorflow:global_step/sec: 104.249
INFO:tensorflow:loss = 0.35512, step = 9100 (0.960 sec)
INFO:tensorflow:loss = 0.35512, step = 9100 (0.960 sec)
INFO:tensorflow:global_step/sec: 106.583
INFO:tensorflow:global_step/sec: 106.583
INFO:tensorflow:loss = 0.35559082, step = 9200 (0.938 sec)
INFO:tensorflow:loss = 0.35559082, step = 9200 (0.938 sec)
INFO:tensorflow:global_step/sec: 105.826
INFO:tensorflow:global_step/sec: 105.826
INFO:tensorflow:loss = 0.35460055, step = 9300 (0.945 sec)
INFO:tensorflow:loss = 0.35460055, step = 9300 (0.945 sec)
INFO:tensorflow:global_step/sec: 106.072
INFO:tensorflow:global_step/sec: 106.072
INFO:tensorflow:loss = 0.34970692, step = 9400 (0.944 sec)
INFO:tensorflow:loss = 0.34970692, step = 9400 (0.944 sec)
INFO:tensorflow:global_step/sec: 105.836
INFO:tensorflow:global_step/sec: 105.836
INFO:tensorflow:loss = 0.3449042, step = 9500 (0.943 sec)
INFO:tensorflow:loss = 0.3449042, step = 9500 (0.943 sec)
INFO:tensorflow:global_step/sec: 108.679
INFO:tensorflow:global_step/sec: 108.679
INFO:tensorflow:loss = 0.34985757, step = 9600 (0.920 sec)
INFO:tensorflow:loss = 0.34985757, step = 9600 (0.920 sec)
INFO:tensorflow:global_step/sec: 106.07
INFO:tensorflow:global_step/sec: 106.07
INFO:tensorflow:loss = 0.3453308, step = 9700 (0.943 sec)
INFO:tensorflow:loss = 0.3453308, step = 9700 (0.943 sec)
INFO:tensorflow:global_step/sec: 100.979
INFO:tensorflow:global_step/sec: 100.979
INFO:tensorflow:loss = 0.34995228, step = 9800 (0.990 sec)
INFO:tensorflow:loss = 0.34995228, step = 9800 (0.990 sec)
INFO:tensorflow:global_step/sec: 104.247
INFO:tensorflow:global_step/sec: 104.247
INFO:tensorflow:loss = 0.35693988, step = 9900 (0.959 sec)
INFO:tensorflow:loss = 0.35693988, step = 9900 (0.959 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990...
INFO:tensorflow:Saving checkpoints for 9990 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 9990 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000...
INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-04-23T09:12:31Z
INFO:tensorflow:Starting evaluation at 2021-04-23T09:12:31Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 47.01670s
INFO:tensorflow:Inference Time : 47.01670s
INFO:tensorflow:Finished evaluation at 2021-04-23-09:13:18
INFO:tensorflow:Finished evaluation at 2021-04-23-09:13:18
INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.39696866
INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.39696866
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Performing the final export in the end of training.
INFO:tensorflow:Performing the final export in the end of training.
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/export/compas/temp-1619169198/assets
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/export/compas/temp-1619169198/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/export/compas/temp-1619169198/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/export/compas/temp-1619169198/saved_model.pb
INFO:tensorflow:Loss for final step: 0.3658929.
INFO:tensorflow:Loss for final step: 0.3658929.
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
WARNING:tensorflow:Export includes no default signature!
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/eval_model_dir/temp-1619169198/assets
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/eval_model_dir/temp-1619169198/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/eval_model_dir/temp-1619169198/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/eval_model_dir/temp-1619169198/saved_model.pb
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/serving_model_dir/saved_model.pb"
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/eval_model_dir/saved_model.pb"

TensorFlow Model Analysis

Now that our model has been developed and trained in TFX, we can use several additional components in the TFX ecosystem to understand the model's performance in more detail. By examining different metrics, we get a better picture of the model's overall performance across different slices of our data and can make sure the model is not underperforming for any subgroup.

We will first look at TensorFlow Model Analysis, a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their Trainer. These metrics can be computed over different slices of data and visualized in a notebook.

For a list of the metrics that can be added within TensorFlow Model Analysis, see here.

# Uses TensorFlow Model Analysis to compute evaluation statistics over
# features of a model.
model_analyzer = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],

    eval_config = text_format.Parse("""
      model_specs {
        label_key: 'is_recid'
      }
      metrics_specs {
        metrics {class_name: "BinaryAccuracy"}
        metrics {class_name: "AUC"}
        metrics {
          class_name: "FairnessIndicators"
          config: '{"thresholds": [0.25, 0.5, 0.75]}'
        }
      }
      slicing_specs {
        feature_keys: 'race'
      }
    """, tfma.EvalConfig())
)
context.run(model_analyzer)

Fairness Indicators

Load Fairness Indicators to examine the underlying data.

evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri
eval_result = tfma.load_eval_result(evaluation_uri)
tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)
FairnessIndicatorViewer(slicingMetrics=[{'sliceValue': 'Caucasian', 'slice': 'race:Caucasian', 'metrics': {'bi…

Fairness Indicators lets us explore how the different slices perform and is designed to help teams evaluate and improve models with respect to fairness concerns. It enables easy computation of commonly used fairness metrics for binary and multiclass classifiers and can be used for use cases of any size.

We will load Fairness Indicators in this notebook and analyze the results. After exploring Fairness Indicators for a while, look at the False Positive Rate and False Negative Rate tabs within the tool. In this case study we are concerned with reducing false predictions of recidivism, which corresponds to the false positive rate. Beyond the widget, the same slice metrics can also be inspected programmatically, as sketched below.
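The sketch below is not part of the original pipeline; it simply shows how the slice metrics rendered by the widget can be read programmatically from the eval_result object loaded above. The exact nesting of slicing_metrics can vary slightly between TensorFlow Model Analysis versions, so treat it as illustrative.

# Illustrative only: iterate over the slice metrics loaded with
# tfma.load_eval_result() in the cell above.
for slice_key, metrics_by_output in eval_result.slicing_metrics:
  # slice_key is a tuple such as (('race', 'Caucasian'),); () is the overall slice.
  print('Slice: {}'.format(slice_key))
  for output_name, metrics_by_sub_key in metrics_by_output.items():
    for sub_key, metric_values in metrics_by_sub_key.items():
      for metric_name, value in metric_values.items():
        print('  {} / {} / {} = {}'.format(output_name, sub_key, metric_name, value))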

Type I and Type II errors

Within the Fairness Indicators tool, you will see two dropdown options:

  1. A "Baseline" option, which is defined by column_for_slicing.
  2. A "Thresholds" option, which is set by fairness_indicator_thresholds.

The "Baseline" is the slice you want to compare all the other slices against. Most commonly it is the overall slice, but it can also be one of the specific slices.

"Threshold" is the value set for a given binary classification model to indicate where a prediction should be bucketed. When setting a threshold, there are two things to keep in mind.

  1. Precision: What is the downside if your prediction results in a Type I error? In this case study, a higher threshold makes us less likely to wrongly predict that a defendant will commit another crime when in fact they will not.
  2. Recall: What is the downside of a Type II error? In this case study, a higher threshold makes us more likely to wrongly predict that a defendant will not commit another crime when in fact they will.

We will set an arbitrary threshold of 0.75 and focus only on the fairness metrics for African-American and Caucasian defendants, given that the sample sizes for the other races are too small to draw statistically significant conclusions.

The rates below may differ slightly depending on how the data was shuffled at the beginning of this case study, but take a look at the difference between African-American and Caucasian defendants. At a lower threshold, our model is more likely to predict that a Caucasian defendant will commit a second crime than an African-American defendant. However, this prediction reverses as we increase the threshold.

  • False positive rate @ 0.75
    • African-American: ~30%
      • AUC: 0.71
      • Binary accuracy: 0.67
    • Caucasian: ~8%
      • AUC: 0.71
      • Binary accuracy: 0.67

More information on Type I/II errors and threshold tuning can be found here.
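To make the relationship between the threshold and the false positive rate concrete, here is a small, self-contained sketch that computes the false positive rate per group at a few thresholds. The labels, scores, and group values are made up purely for illustration; the real numbers for the COMPAS slices are the ones reported by the Fairness Indicators widget above.

import numpy as np

def false_positive_rate(labels, scores, threshold):
  # FPR = FP / (FP + TN): the fraction of actual negatives predicted positive.
  labels = np.asarray(labels)
  predicted_positive = np.asarray(scores) >= threshold
  actual_negative = labels == 0
  if actual_negative.sum() == 0:
    return float('nan')
  return float((predicted_positive & actual_negative).sum() / actual_negative.sum())

# Hypothetical values, only to illustrate the computation.
labels = np.array([0, 0, 1, 0, 1, 0, 0, 1])
scores = np.array([0.9, 0.2, 0.8, 0.8, 0.4, 0.1, 0.3, 0.7])
group = np.array(['group_a', 'group_a', 'group_a', 'group_b',
                  'group_b', 'group_b', 'group_a', 'group_b'])

for threshold in (0.25, 0.5, 0.75):
  for g in np.unique(group):
    mask = group == g
    fpr = false_positive_rate(labels[mask], scores[mask], threshold)
    print('threshold={:.2f} {} FPR={:.2f}'.format(threshold, g, fpr))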

ML Metadata

To understand where the disparity might come from and to take a snapshot of our current model, we can use ML Metadata to record and retrieve the metadata associated with our model. ML Metadata is an integral part of TFX, but it is designed so that it can also be used independently.

For this case study, we will list all of the artifacts that we developed earlier in this case study. By walking through the artifacts, executions, and contexts, we get a high-level view of our TFX model and can determine where potential problems come from. This gives us a baseline overview of how our model was developed and of the TFX components that helped build the initial model.

We will start by listing the high-level artifact, execution, and context types in our model.

# Connect to the TFX database.
connection_config = metadata_store_pb2.ConnectionConfig()

connection_config.sqlite.filename_uri = os.path.join(
  context.pipeline_root, 'metadata.sqlite')
store = metadata_store.MetadataStore(connection_config)

def _mlmd_type_to_dataframe(mlmd_type):
  """Helper function to turn MLMD into a Pandas DataFrame.

  Args:
    mlmd_type: Metadata store type.

  Returns:
    DataFrame containing type ID, Name, and Properties.
  """
  pd.set_option('display.max_columns', None)  
  pd.set_option('display.expand_frame_repr', False)

  column_names = ['ID', 'Name', 'Properties']
  df = pd.DataFrame(columns=column_names)
  for a_type in mlmd_type:
    mlmd_row = pd.DataFrame([[a_type.id, a_type.name, a_type.properties]],
                            columns=column_names)
    df = df.append(mlmd_row)
  return df

# ML Metadata stores strong-typed Artifacts, Executions, and Contexts.
# First, we can use type APIs to understand what is defined in ML Metadata
# by the current version of TFX. We'll be able to view all the previous runs
# that created our initial model.
print('Artifact Types:')
display(_mlmd_type_to_dataframe(store.get_artifact_types()))

print('\nExecution Types:')
display(_mlmd_type_to_dataframe(store.get_execution_types()))

print('\nContext Types:')
display(_mlmd_type_to_dataframe(store.get_context_types()))
Artifact Types:
Execution Types:
Context Types:

Identifying where the fairness problem might come from

For each of the artifact, execution, and context types above, we can use ML Metadata to explore their attributes and how each part of our ML pipeline was developed.

We will start by diving into StatisticsGen to examine the underlying data that we first fed into the model. By knowing the artifacts within our model, we can use ML Metadata and TensorFlow Data Validation to look backwards and forwards through the model to identify where a potential problem is coming from.

After running the cell below, select Lift (Y=1) in the Chart to show dropdown of the second chart to see the lift across the different data slices. Within race, the lift for African-American is approximately 1.08, while for Caucasian it is approximately 0.86. (A short sketch of how lift is computed follows the cell output below.)

statistics_gen = StatisticsGen(
    examples=example_gen.outputs['examples'],
    schema=infer_schema.outputs['schema'],
    stats_options=tfdv.StatsOptions(label_feature='is_recid'))
exec_result = context.run(statistics_gen)

for event in store.get_events_by_execution_ids([exec_result.execution_id]):
  if event.path.steps[0].key == 'statistics':
    statistics_w_schema_uri = store.get_artifacts_by_id([event.artifact_id])[0].uri

model_stats = tfdv.load_statistics(
    os.path.join(statistics_w_schema_uri, 'eval/stats_tfrecord/'))
tfdv.visualize_statistics(model_stats)
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.
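For reference, the Lift (Y=1) value shown for a feature value is, roughly, the ratio between the label rate within that slice and the label rate overall, so a value above 1 means the positive label is over-represented in that slice. The small pandas sketch below illustrates that idea on made-up data, not the actual COMPAS records, whose lift values are the ones reported above.

import pandas as pd

# Hypothetical toy data, for illustration only.
df = pd.DataFrame({
    'race': ['African-American', 'Caucasian', 'African-American',
             'Caucasian', 'African-American', 'Caucasian'],
    'is_recid': [1, 0, 1, 1, 0, 0],
})

overall_rate = df['is_recid'].mean()                    # P(Y=1)
per_slice_rate = df.groupby('race')['is_recid'].mean()  # P(Y=1 | race)
lift = per_slice_rate / overall_rate                    # Lift(Y=1) per slice
print(lift)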

Tracking a model change

Now that we have an idea of how we might improve the fairness of our model, we will first document our initial run in ML Metadata for our own records and for anyone else who may review our changes later.

ML Metadata can keep a log of our past models along with any notes we would like to add between runs. We will add a simple note to our first run indicating that it was trained on the full COMPAS dataset.

_MODEL_NOTE_TO_ADD = 'First model that contains fairness concerns in the model.'

first_trained_model = store.get_artifacts_by_type('Model')[-1]

# Add the note above to the ML Metadata store.
first_trained_model.custom_properties['note'].string_value = _MODEL_NOTE_TO_ADD
store.put_artifacts([first_trained_model])

def _mlmd_model_to_dataframe(model, model_number):
  """Helper function to turn a MLMD modle into a Pandas DataFrame.

  Args:
    model: Metadata store model.
    model_number: Number of model run within ML Metadata.

  Returns:
    DataFrame containing the ML Metadata model.
  """
  pd.set_option('display.max_columns', None)  
  pd.set_option('display.expand_frame_repr', False)

  df = pd.DataFrame()
  custom_properties = ['name', 'note', 'state', 'producer_component',
                       'pipeline_name']
  df['id'] = [model[model_number].id]
  df['uri'] = [model[model_number].uri]
  for prop in custom_properties:
    df[prop] = model[model_number].custom_properties.get(prop)
    df[prop] = df[prop].astype(str).map(
        lambda x: x.lstrip('string_value: "').rstrip('"\n'))
  return df

# Print the current model to see the results of the ML Metadata for the model.
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))

Improving the fairness concerns by weighting the model

There are several ways to approach addressing fairness concerns within a model. Manipulating the observed data or labels, implementing fairness constraints, and removing prejudice through regularization are some of the techniques [1] that have been used to address fairness concerns. In this case study, we will reweight the model by implementing a custom loss function in Keras.

The code below is the same as the Trainer component above, except for a new class called LogisticEndpoint that we will use for our loss within Keras, and a few parameter changes.


  1. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. https://arxiv.org/pdf/1908.09635.pdf
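As an aside, and not part of the pipeline below, one common reweighting scheme simply gives each example a weight inversely proportional to the frequency of its (group, label) combination, so that under-represented combinations contribute more to the loss. The sketch below uses hypothetical toy data to illustrate that idea; the trainer module that follows instead implements the weighting through the custom LogisticEndpoint loss, which by default references a transformed sample_weight_xf feature.

import pandas as pd

# Hypothetical toy data, for illustration only.
df = pd.DataFrame({
    'race': ['African-American', 'Caucasian', 'African-American', 'Caucasian'],
    'is_recid': [1, 0, 0, 1],
})

# Balanced-style weights: n_examples / (n_cells * count of the example's cell).
cell_counts = df.groupby(['race', 'is_recid']).size()
df['sample_weight'] = df.apply(
    lambda row: len(df) / (len(cell_counts) * cell_counts[(row['race'], row['is_recid'])]),
    axis=1)
print(df)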
%%writefile {_trainer_module_file}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils

from compas_transform import *

_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999


def transformed_names(keys):
  return [transformed_name(key) for key in keys]


def transformed_name(key):
  return '{}_xf'.format(key)


def _gzip_reader_fn(filenames):
  """Returns a record reader that can read gzip'ed files.

  Args:
    filenames: A tf.string tensor or tf.data.Dataset containing one or more
      filenames.

  Returns:
    A TFRecordDataset that reads the given gzip-compressed TFRecord files.
  """
  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')


# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
  """Generates a feature spec from a Schema proto.

  Args:
    schema: A Schema proto.

  Returns:
    A feature spec defined as a dict whose keys are feature names and values are
      instances of FixedLenFeature, VarLenFeature or SparseFeature.
  """
  return schema_utils.schema_as_feature_spec(schema).feature_spec


def _example_serving_receiver_fn(tf_transform_output, schema):
  """Builds the serving in inputs.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    TensorFlow graph which parses examples, applying tf-transform to them.
  """
  raw_feature_spec = _get_raw_feature_spec(schema)
  raw_feature_spec.pop(LABEL_KEY)

  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec)
  serving_input_receiver = raw_input_fn()

  transformed_features = tf_transform_output.transform_raw_features(
      serving_input_receiver.features)
  transformed_features.pop(transformed_name(LABEL_KEY))
  return tf.estimator.export.ServingInputReceiver(
      transformed_features, serving_input_receiver.receiver_tensors)


def _eval_input_receiver_fn(tf_transform_output, schema):
  """Builds everything needed for the tf-model-analysis to run the model.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    EvalInputReceiver function, which contains:
      - TensorFlow graph which parses raw untransformed features, applies the
          tf-transform preprocessing operators.
      - Set of raw, untransformed features.
      - Label against which predictions will be compared.
  """
  # Notice that the inputs are raw features, not transformed features here.
  raw_feature_spec = _get_raw_feature_spec(schema)

  serialized_tf_example = tf.compat.v1.placeholder(
      dtype=tf.string, shape=[None], name='input_example_tensor')

  # Add a parse_example operator to the tensorflow graph, which will parse
  # raw, untransformed, tf examples.
  features = tf.io.parse_example(
      serialized=serialized_tf_example, features=raw_feature_spec)

  transformed_features = tf_transform_output.transform_raw_features(features)
  labels = transformed_features.pop(transformed_name(LABEL_KEY))

  receiver_tensors = {'examples': serialized_tf_example}

  return tfma.export.EvalInputReceiver(
      features=transformed_features,
      receiver_tensors=receiver_tensors,
      labels=labels)


def _input_fn(filenames, tf_transform_output, batch_size=200):
  """Generates features and labels for training or evaluation.

  Args:
    filenames: List of gzip-compressed TFRecord files to read data from.
    tf_transform_output: A TFTransformOutput.
    batch_size: First dimension size of the Tensors returned by input_fn.

  Returns:
    A (features, indices) tuple where features is a dictionary of
      Tensors, and indices is a single Tensor of label indices.
  """
  transformed_feature_spec = (
      tf_transform_output.transformed_feature_spec().copy())

  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
      filenames,
      batch_size,
      transformed_feature_spec,
      shuffle=False,
      reader=_gzip_reader_fn)

  transformed_features = dataset.make_one_shot_iterator().get_next()

  # We pop the label because we do not want to use it as a feature while we're
  # training.
  return transformed_features, transformed_features.pop(
      transformed_name(LABEL_KEY))


# TFX will call this function.
def trainer_fn(hparams, schema):
  """Build the estimator using the high level API.

  Args:
    hparams: Hyperparameters used to train the model as name/value pairs.
    schema: Holds the schema of the training examples.

  Returns:
    A dict of the following:
      - estimator: The estimator that will be used for training and eval.
      - train_spec: Spec for training.
      - eval_spec: Spec for eval.
      - eval_input_receiver_fn: Input function for eval.
  """
  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)

  train_input_fn = lambda: _input_fn(
      hparams.train_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  eval_input_fn = lambda: _input_fn(
      hparams.eval_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  train_spec = tf.estimator.TrainSpec(
      train_input_fn,
      max_steps=hparams.train_steps)

  serving_receiver_fn = lambda: _example_serving_receiver_fn(
      tf_transform_output, schema)

  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
  eval_spec = tf.estimator.EvalSpec(
      eval_input_fn,
      steps=hparams.eval_steps,
      exporters=[exporter],
      name='compas-eval')

  run_config = tf.estimator.RunConfig(
      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
      keep_checkpoint_max=_MAX_CHECKPOINTS)

  run_config = run_config.replace(model_dir=hparams.serving_model_dir)

  estimator = tf.keras.estimator.model_to_estimator(
      keras_model=_keras_model_builder(), config=run_config)

  # Create an input receiver for TFMA processing.
  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)

  return {
      'estimator': estimator,
      'train_spec': train_spec,
      'eval_spec': eval_spec,
      'eval_input_receiver_fn': receiver_fn
  }


def _keras_model_builder():
  """Build a keras model for COMPAS dataset classification.

  Returns:
    A compiled Keras model.
  """
  feature_columns = []
  feature_layer_inputs = {}

  for key in transformed_names(INT_FEATURE_KEYS):
    feature_columns.append(tf.feature_column.numeric_column(key))
    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)

  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
                              MAX_CATEGORICAL_FEATURE_VALUES):
    feature_columns.append(
        tf.feature_column.indicator_column(
            tf.feature_column.categorical_column_with_identity(
                key, num_buckets=num_buckets)))
    feature_layer_inputs[key] = tf.keras.Input(
        shape=(1,), name=key, dtype=tf.dtypes.int32)

  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
  feature_layer_outputs = feature_columns_input(feature_layer_inputs)

  dense_layers = tf.keras.layers.Dense(
      20, activation='relu', name='dense_1')(feature_layer_outputs)
  dense_layers = tf.keras.layers.Dense(
      10, activation='relu', name='dense_2')(dense_layers)
  output = tf.keras.layers.Dense(
      1, name='predictions')(dense_layers)

  model = tf.keras.Model(
      inputs=[v for v in feature_layer_inputs.values()], outputs=output)

  # To weight our model we will develop a custom loss class within Keras.
  # The old loss is commented out below and the new LogisticEndpoint loss is
  # used instead.
  model.compile(
      # loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
      loss=LogisticEndpoint(),
      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))

  return model


class LogisticEndpoint(tf.keras.layers.Layer):
  """Custom loss layer that registers a weighted binary cross-entropy loss."""

  def __init__(self, name=None):
    super(LogisticEndpoint, self).__init__(name=name)
    self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)

  def __call__(self, y_true, y_pred, sample_weight=None):
    # When no explicit per-example weight is passed, fall back to the
    # transformed sample weight feature ('sample_weight_xf').
    inputs = [y_true, y_pred]
    inputs += sample_weight or ['sample_weight_xf']
    return super(LogisticEndpoint, self).__call__(inputs)

  def call(self, inputs):
    y_true, y_pred = inputs[0], inputs[1]
    if len(inputs) == 3:
      sample_weight = inputs[2]
    else:
      sample_weight = None
    # Register the (optionally weighted) cross-entropy as the model's loss.
    loss = self.loss_fn(y_true, y_pred, sample_weight)
    self.add_loss(loss)
    # Return a normalized score so the layer still yields an output tensor.
    reduce_loss = tf.math.divide_no_nan(
        tf.math.reduce_sum(tf.nn.softmax(y_pred)), _BATCH_SIZE)
    return reduce_loss
Overwriting compas_trainer.py

Retraining the TFX model with the weighted model

In this next part, we will use the weighted trainer module to rerun the same Trainer as before and see how the fairness metrics improve once the weighting is applied.

trainer_weighted = Trainer(
    module_file=_trainer_module_file,
    transformed_examples=transform.outputs['transformed_examples'],
    schema=infer_schema.outputs['schema'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer_weighted)
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
INFO:tensorflow:Using the Keras model provided.
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:434: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
  warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/keras/keras_model.ckpt
INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES.
INFO:tensorflow:Warm-started 6 variables.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt.
INFO:tensorflow:loss = 0.47077793, step = 0
INFO:tensorflow:loss = 0.49240756, step = 100 (0.966 sec)
INFO:tensorflow:loss = 0.5130932, step = 200 (0.934 sec)
INFO:tensorflow:loss = 0.50732946, step = 300 (0.929 sec)
INFO:tensorflow:loss = 0.478406, step = 400 (0.917 sec)
INFO:tensorflow:loss = 0.46235517, step = 500 (0.937 sec)
INFO:tensorflow:loss = 0.45720923, step = 600 (0.949 sec)
INFO:tensorflow:loss = 0.45070276, step = 700 (0.925 sec)
INFO:tensorflow:loss = 0.46355185, step = 800 (0.915 sec)
INFO:tensorflow:loss = 0.48339045, step = 900 (0.914 sec)
INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt.
INFO:tensorflow:Starting evaluation at 2021-04-23T09:13:43Z
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt-999
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 46.00220s
INFO:tensorflow:Finished evaluation at 2021-04-23-09:14:29
INFO:tensorflow:Saving dict for global step 999: global_step = 999, loss = 0.48788843
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt-999
INFO:tensorflow:loss = 0.5041351, step = 1000 (47.193 sec)
INFO:tensorflow:loss = 0.45515984, step = 2000 (0.947 sec)
INFO:tensorflow:loss = 0.39609566, step = 3000 (0.958 sec)
INFO:tensorflow:loss = 0.40979695, step = 4000 (0.940 sec)
INFO:tensorflow:loss = 0.40682846, step = 5000 (0.935 sec)
INFO:tensorflow:loss = 0.37979886, step = 6000 (0.944 sec)
INFO:tensorflow:loss = 0.35651168, step = 7000 (0.919 sec)
INFO:tensorflow:loss = 0.41632578, step = 8000 (0.903 sec)
INFO:tensorflow:loss = 0.35852998, step = 9000 (0.908 sec)
INFO:tensorflow:loss = 0.36380777, step = 9900 (0.892 sec)
INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt.
INFO:tensorflow:Starting evaluation at 2021-04-23T09:15:52Z
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 45.40978s
INFO:tensorflow:Finished evaluation at 2021-04-23-09:16:37
INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.40231007
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Performing the final export in the end of training.
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/export/compas/temp-1619169397/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/export/compas/temp-1619169397/saved_model.pb
INFO:tensorflow:Loss for final step: 0.37667033.
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/eval_model_dir/temp-1619169397/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/eval_model_dir/temp-1619169397/saved_model.pb
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/serving_model_dir/saved_model.pb"
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/eval_model_dir/saved_model.pb"
# Again, we will run TensorFlow Model Analysis and load Fairness Indicators
# to examine the performance change in our weighted model.
model_analyzer_weighted = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer_weighted.outputs['model'],
    eval_config=text_format.Parse("""
      model_specs {
        label_key: 'is_recid'
      }
      metrics_specs {
        metrics {class_name: 'BinaryAccuracy'}
        metrics {class_name: 'AUC'}
        metrics {
          class_name: 'FairnessIndicators'
          config: '{"thresholds": [0.25, 0.5, 0.75]}'
        }
      }
      slicing_specs {
        feature_keys: 'race'
      }
    """, tfma.EvalConfig())
)
context.run(model_analyzer_weighted)
evaluation_uri_weighted = model_analyzer_weighted.outputs['evaluation'].get()[0].uri
eval_result_weighted = tfma.load_eval_result(evaluation_uri_weighted)

multi_eval_results = {
    'Unweighted Model': eval_result,
    'Weighted Model': eval_result_weighted
}
tfma.addons.fairness.view.widget_view.render_fairness_indicator(
    multi_eval_results=multi_eval_results)
FairnessIndicatorViewer(evalName='Unweighted Model', evalNameCompare='Weighted Model', slicingMetrics=[{'slice…

After retraining with the weighted model, we can again examine the fairness metrics to assess any improvement. This time, however, we will use the model-comparison feature of Fairness Indicators to see the difference between the weighted and the unweighted model. Although we still see fairness concerns with the weighted model, the gap is far less pronounced.

The downside, however, is that our AUC and binary accuracy also dropped after weighting the model; the sketch after the list below shows how to read these sliced values programmatically.

  • False positive rate at the 0.75 threshold
    • African-American: ~1%
      • AUC: 0.47
      • Binary accuracy: 0.59
    • Caucasian: ~0%
      • AUC: 0.47
      • Binary accuracy: 0.58
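
To see where the numbers above come from, the small sketch below (using the eval_result and eval_result_weighted objects loaded earlier; the exact nesting of the metric dictionaries can vary across TFMA versions) prints the sliced metrics that back the Fairness Indicators widget.

# Print the per-slice metrics for both evaluations so the widget values
# (false positive rate, AUC, binary accuracy per race) can also be read in code.
for name, result in [('Unweighted Model', eval_result),
                     ('Weighted Model', eval_result_weighted)]:
  print(name)
  for slice_key, metrics_by_output in result.slicing_metrics:
    print('  slice:', slice_key)
    for output_name, metrics_by_sub_key in metrics_by_output.items():
      for sub_key, metrics in metrics_by_sub_key.items():
        for metric_name, value in metrics.items():
          print('   ', metric_name, value)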

Examine the data from the second run

Finally, we can visualize the data with TensorFlow Data Validation, overlay the data changes between the two models, and add an additional note to ML Metadata indicating that this model improved on the fairness concerns.

# Pull the URIs for the example statistics from the two runs in this case study.
first_model_uri = store.get_artifacts_by_type('ExampleStatistics')[-1].uri
second_model_uri = store.get_artifacts_by_type('ExampleStatistics')[0].uri

# Load the stats for both models.
first_model_stats = tfdv.load_statistics(os.path.join(
    first_model_uri, 'eval/stats_tfrecord/'))
second_model_stats = tfdv.load_statistics(os.path.join(
    second_model_uri, 'eval/stats_tfrecord/'))

# Visualize the statistics between the two models.
tfdv.visualize_statistics(
    lhs_statistics=second_model_stats,
    lhs_name='Sampled Model',
    rhs_statistics=first_model_stats,
    rhs_name='COMPAS Original')
# Add a new note within ML Metadata describing the weighted model.
_NOTE_TO_ADD = 'Weighted model between race and is_recid.'

# Pull the artifact for the weighted trained model.
second_trained_model = store.get_artifacts_by_type('Model')[-1]

# Add the note to ML Metadata.
second_trained_model.custom_properties['note'].string_value = _NOTE_TO_ADD
store.put_artifacts([second_trained_model])

display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), -1))
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
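
As a quick check, the note can be read back from ML Metadata later, for example when auditing the model's lineage. This is a minimal sketch using the same store handle and 'note' custom property set above.

# Read the note back from the most recently registered model artifact.
latest_model = store.get_artifacts_by_type('Model')[-1]
print(latest_model.custom_properties['note'].string_value)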

Conclusion

In this case study, we developed a Keras classifier within a TFX pipeline on the COMPAS dataset to examine any fairness concerns within the dataset. After initially building the TFX pipeline, the fairness concerns were not immediately apparent until we examined individual slices of our model by our sensitive features, in our case race. Once the issues were identified, we were able to locate the source of the fairness concern with TensorFlow Data Validation and identify a way to mitigate it through model weighting, while tracking and annotating the changes through ML Metadata. Although we were not able to fully resolve all of the fairness concerns within the dataset, adding a note for future developers to follow will help others understand the issues we faced while developing this model.

Finally, it is important to note that this case study did not resolve the fairness concerns that are present in the COMPAS dataset. By improving the fairness concerns in the model, we also reduced the model's AUC and accuracy. What we were able to do, however, was build a model that surfaced the fairness concerns and locate where those concerns might come from by tracking and generating the model's lineage, while annotating the model's issues in the metadata.

For more information on the concerns that predicting pre-trial detention can raise, see the FAT* 2018 conference tutorial "Understanding the Context and Consequences of Pre-trial Detention".