
Fairness Indicators Lineage Case Study


COMPAS Dataset

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset containing approximately 18,000 criminal cases from Broward County, Florida between January 2013 and December 2014. The data include information about 11,000 unique defendants, such as criminal history, demographics, and a risk score intended to represent the defendant's likelihood of reoffending (recidivism). A machine learning model trained on this data has been used by judges and parole officers to determine whether or not to set bail and whether or not to grant parole.

In 2016, an article published by ProPublica found that the COMPAS model incorrectly predicted that African-American defendants would recidivate at much higher rates than their white counterparts, while Caucasian defendants who did recidivate were flagged at a much lower rate. For Caucasian defendants, the model made errors in the opposite direction, incorrectly predicting that they would not commit another crime. The authors went on to show that these biases were likely due to an uneven distribution in the data between African-American and Caucasian defendants. Specifically, the ground truth labels for negative examples (a defendant not committing another crime) and positive examples (a defendant committing another crime) were disproportionate between the two races. Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature 1, 2, 3, with researchers using it to demonstrate techniques for identifying and remediating fairness concerns. This tutorial from the FAT* 2018 conference illustrates how COMPAS can dramatically affect a defendant's prospects in the real world.
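
To make the disproportionate label distribution concrete, a quick way to inspect it is a normalized cross-tabulation of race against the ground truth label. The sketch below is illustrative only; it assumes the raw COMPAS CSV used later in this case study and the 'race' and 'is_recid' columns it contains.

import pandas as pd

# Sketch: share of positive (recidivism) vs. negative labels per race.
# Uses the same CSV that the preprocessing code below downloads, and drops
# rows with is_recid == -1 (no label), as the case study does.
df = pd.read_csv(
    'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv')
df = df[df['is_recid'] != -1]
print(pd.crosstab(df['race'], df['is_recid'], normalize='index'))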

It is important to note that developing a machine learning model to predict pre-trial detention raises a number of important ethical considerations. You can learn more about these issues in the Partnership on AI "Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System". The Partnership on AI is a multi-stakeholder organization (of which Google is a member) that creates guidelines around AI.

We are using the COMPAS dataset only as an example of how to identify and remediate fairness concerns in data. This dataset is canonical in the algorithmic fairness literature.

About the tools in this case study

  • TensorFlow Extended (TFX) is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate the common components needed to define, launch, and monitor your machine learning system.

  • TensorFlow Model Analysis is a library for evaluating machine learning models. Users can evaluate their models on large amounts of data in a distributed manner and visualize metrics over different slices in a notebook.

  • Fairness Indicators is a suite of tools built on top of TensorFlow Model Analysis that enables regular evaluation of fairness metrics in product pipelines.

  • ML Metadata is a library for recording and retrieving the lineage and metadata of ML artifacts such as models, datasets, and metrics. Within TFX, ML Metadata will help us understand the artifacts created in a pipeline, where an artifact is a unit of data passed between TFX components. (A short sketch of querying the metadata store follows this list.)

  • TensorFlow Data Validation is a library for analyzing your data and checking for errors that can affect model training or serving.
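
To make the ML Metadata piece more tangible, the sketch below shows one way to connect to a SQLite-backed metadata store and list the artifacts it has recorded. It is a minimal sketch: the file path is a hypothetical placeholder, and later in this case study we simply use the store that TFX's InteractiveContext creates for us.

from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Hypothetical path for illustration; InteractiveContext creates its own store.
_METADATA_PATH = '/tmp/metadata.sqlite'

connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = _METADATA_PATH
connection_config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE

store = metadata_store.MetadataStore(connection_config)

# List every artifact (Examples, Schema, Model, ...) recorded in the store.
for artifact in store.get_artifacts():
  print(artifact.id, artifact.type_id, artifact.uri)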

Case study overview

Throughout this case study, we will define a "fairness concern" as a bias within a model that negatively impacts a slice of our data. Specifically, we are trying to limit any recidivism prediction that might be biased with respect to race. (A sketch of slicing evaluation metrics by race follows the list below.)

The case study analysis will proceed as follows:

  1. Download the data, preprocess it, and explore the initial dataset.
  2. Build a TFX pipeline with the COMPAS dataset using a Keras binary classifier.
  3. Run our results through TensorFlow Model Analysis and TensorFlow Data Validation, loading Fairness Indicators to explore any potential fairness concerns within our model.
  4. Use ML Metadata to track all the artifacts of a model that we trained with TFX.
  5. Weight the initial COMPAS dataset for our second model to account for the uneven distribution between recidivism and race.
  6. Review the performance changes on the new dataset.
  7. Check the underlying changes in our TFX pipeline with ML Metadata to understand what changes were made between the two models.
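
As a hedged preview of step 3, the sketch below shows one way to configure a TensorFlow Model Analysis evaluation that computes Fairness Indicators and slices metrics by race. The label key and the 0.5 decision threshold are assumptions for illustration; the actual Evaluator configuration used in this case study appears later in the tutorial.

import tensorflow_model_analysis as tfma

# Sketch only: overall metrics plus metrics sliced by 'race', with
# Fairness Indicators computed at an assumed threshold of 0.5.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='is_recid')],
    slicing_specs=[
        tfma.SlicingSpec(),                       # overall
        tfma.SlicingSpec(feature_keys=['race']),  # per-race slices
    ],
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(
                class_name='FairnessIndicators',
                config='{"thresholds": [0.5]}'),
        ]),
    ])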

Useful Resources

This case study is an extension of the case studies below. It is recommended to work through those case studies first.

Setup

To get started, we will install the necessary packages, download the data, and import the modules required for the case study.

To install the required packages for this case study in your notebook, run the pip command below.


  1. Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199

  2. Chouldechova, A., G'Sell, M. (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046

  3. Berk et al. (2017). Fairness in Criminal Justice Risk Assessments: The State of the Art. https://arxiv.org/abs/1703.09207

!python -m pip install -q -U pip==20.2

!python -m pip install -q -U \
  tensorflow==2.4.1 \
  tfx==0.28.0 \
  tensorflow-model-analysis==0.28.0 \
  tensorflow_data_validation==0.28.0 \
  tensorflow-metadata==0.28.0 \
  tensorflow-transform==0.28.0 \
  ml-metadata==0.28.0 \
  tfx-bsl==0.28.1 \
  absl-py==0.9

 # If prompted, please restart the Colab environment after the pip installs
 # as you might run into import errors.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import tempfile
import six.moves.urllib as urllib

from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

import pandas as pd
from google.protobuf import text_format
from sklearn.utils import shuffle
import tensorflow as tf
import tensorflow_data_validation as tfdv

import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view

import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import trainer_pb2

Download and preprocess the dataset

# Download the COMPAS dataset and setup the required filepaths.
_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')
_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'
_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'compas-scores-two-years.csv')

data = urllib.request.urlopen(_DATA_PATH)
_COMPAS_DF = pd.read_csv(data)

# To simplify the case study, we will only use the columns that will be used for
# our model.
_COLUMN_NAMES = [
  'age',
  'c_charge_desc',
  'c_charge_degree',
  'c_days_from_compas',
  'is_recid',
  'juv_fel_count',
  'juv_misd_count',
  'juv_other_count',
  'priors_count',
  'r_days_from_arrest',
  'race',
  'sex',
  'vr_charge_desc',                
]
_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]

# We will use 'is_recid' as our ground truth label, which is a boolean value
# indicating whether a defendant committed another crime. Some rows contain -1,
# indicating that there is no data; we will drop these rows from training.
_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]

# Given the distribution between races in this dataset, we will only focus on
# recidivism for African-Americans and Caucasians.
_COMPAS_DF = _COMPAS_DF[
  _COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]

# Add a sample weight feature that will be used during the second part of this
# case study to help address fairness concerns.
_COMPAS_DF['sample_weight'] = 0.8

# Load the DataFrame back to a CSV file for our TFX model.
_COMPAS_DF.to_csv(_DATA_FILEPATH, index=False, na_rep='')

Building a TFX Pipeline


There are several TFX pipeline components that could be used in a production model, but for the purposes of this case study we will focus on using only the components below:

  • ExampleGen to read our dataset.
  • StatisticsGen to compute statistics over our dataset.
  • SchemaGen to create a data schema.
  • Transform for feature engineering.
  • Trainer to run our machine learning model.

Create the InteractiveContext

To run TFX inside a notebook, we first need to create an InteractiveContext to run the components interactively.

InteractiveContext will use a temporary directory with an ephemeral ML Metadata database instance. To use your own pipeline root or database, the optional properties pipeline_root and metadata_connection_config can be passed to InteractiveContext, as sketched below.
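
A minimal sketch of passing those optional properties, assuming a local pipeline root and a SQLite-backed store (both paths are hypothetical placeholders; this case study simply uses the defaults, as in the next cell):

# Sketch: an InteractiveContext with an explicit pipeline root and ML Metadata
# connection. Paths are hypothetical placeholders.
from tfx.orchestration import metadata
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext

_PIPELINE_ROOT = '/tmp/my_pipeline_root'                   # hypothetical
_METADATA_PATH = '/tmp/my_pipeline_root/metadata.sqlite'   # hypothetical

context = InteractiveContext(
    pipeline_root=_PIPELINE_ROOT,
    metadata_connection_config=metadata.sqlite_metadata_connection_config(
        _METADATA_PATH))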

context = InteractiveContext()
WARNING:absl:InteractiveContext pipeline_root argument not provided: using temporary directory /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r as root for pipeline outputs.
WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/metadata.sqlite.

The TFX ExampleGen Component

# The ExampleGen TFX Pipeline component ingests data into TFX pipelines.
# It consumes external files/services to generate Examples which will be read by
# other TFX components. It also provides consistent and configurable partition,
# and shuffles the dataset for ML best practice.

example_gen = CsvExampleGen(input_base=_DATA_ROOT)
context.run(example_gen)
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.

The TFX StatisticsGen Component

# The StatisticsGen TFX pipeline component generates features statistics over
# both training and serving data, which can be used by other pipeline
# components. StatisticsGen uses Beam to scale to large datasets.

statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
context.run(statistics_gen)

The TFX SchemaGen Component

# Some TFX components use a description of your input data called a schema. The
# schema is an instance of schema.proto. It can specify data types for feature
# values, whether a feature has to be present in all examples, allowed value
# ranges, and other properties. A SchemaGen pipeline component will
# automatically generate a schema by inferring types, categories, and ranges
# from the training data.

infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])
context.run(infer_schema)
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_data_validation/utils/stats_util.py:247: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_data_validation/utils/stats_util.py:247: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`

The TFX Transform Component

The Transform component performs data transformations and feature engineering. The results include an input TensorFlow graph that is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that results from model training. Because the same input graph is used for both training and serving, the preprocessing will always be the same and only needs to be written once.

The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering you may need for the data and/or model you are working with.

Define some constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook.

The transformations we will perform in this case study are as follows:

  • For string values, we will generate a vocabulary that maps each value to an integer via tft.compute_and_apply_vocabulary.
  • For integer values, we will standardize the column to mean 0 and variance 1 via tft.scale_to_z_score.
  • Remove empty row values and replace them with an empty string or 0, depending on the feature type.
  • Append '_xf' to column names to denote the features that were processed in the Transform component.

Now let's define a module containing the preprocessing_fn() function that we will pass to the Transform component:

# Setup paths for the Transform Component.
_transform_module_file = 'compas_transform.py'
%%writefile {_transform_module_file}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import tensorflow_transform as tft

CATEGORICAL_FEATURE_KEYS = [
    'sex',
    'race',
    'c_charge_desc',
    'c_charge_degree',
]

INT_FEATURE_KEYS = [
    'age',
    'c_days_from_compas',
    'juv_fel_count',
    'juv_misd_count',
    'juv_other_count',
    'priors_count',
    'sample_weight',
]

LABEL_KEY = 'is_recid'

# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.
MAX_CATEGORICAL_FEATURE_VALUES = [
    2,
    6,
    513,
    14,
]


def transformed_name(key):
  return '{}_xf'.format(key)


def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.

  Args:
    inputs: Map from feature keys to raw features.

  Returns:
    Map from string feature key to transformed feature operations.
  """
  outputs = {}
  for key in CATEGORICAL_FEATURE_KEYS:
    outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
        _fill_in_missing(inputs[key]),
        vocab_filename=key)

  for key in INT_FEATURE_KEYS:
    outputs[transformed_name(key)] = tft.scale_to_z_score(
        _fill_in_missing(inputs[key]))

  # The target label is whether the defendant is charged with another crime.
  outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
  return outputs


def _fill_in_missing(tensor_value):
  """Replaces a missing values in a SparseTensor.

  Fills in missing values of `tensor_value` with '' or 0, and converts to a
  dense tensor.

  Args:
    tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size
      at most 1 in the second dimension.

  Returns:
    A rank 1 tensor where missing values of `tensor_value` are filled in.
  """
  if not isinstance(tensor_value, tf.sparse.SparseTensor):
    return tensor_value
  default_value = '' if tensor_value.dtype == tf.string else 0
  sparse_tensor = tf.SparseTensor(
      tensor_value.indices,
      tensor_value.values,
      [tensor_value.dense_shape[0], 1])
  dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)
  return tf.squeeze(dense_tensor, axis=1)
Writing compas_transform.py
# Build and run the Transform Component.
transform = Transform(
    examples=example_gen.outputs['examples'],
    schema=infer_schema.outputs['schema'],
    module_file=_transform_module_file
)
context.run(transform)
WARNING:absl:The default value of `force_tf_compat_v1` will change in a future release from `True` to `False`. Since this pipeline has TF 2 behaviors enabled, Transform will use native TF 2 at that point. You can test this behavior now by passing `force_tf_compat_v1=False` or disable it by explicitly setting `force_tf_compat_v1=True` in the Transform component.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tfx/components/transform/executor.py:573: Schema (from tensorflow_transform.tf_metadata.dataset_schema) is deprecated and will be removed in a future version.
Instructions for updating:
Schema is a deprecated, use schema_utils.schema_from_feature_spec to create a `Schema`
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tfx/components/transform/executor.py:573: Schema (from tensorflow_transform.tf_metadata.dataset_schema) is deprecated and will be removed in a future version.
Instructions for updating:
Schema is a deprecated, use schema_utils.schema_from_feature_spec to create a `Schema`
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:266: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:266: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType]] instead.
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType]] instead.
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:No assets to write.
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/34923099dd2444f1a12dd79e9e93b9d2/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/34923099dd2444f1a12dd79e9e93b9d2/saved_model.pb
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:No assets to write.
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/2d5bc9f0641646379cb0c6d04efedee6/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/2d5bc9f0641646379cb0c6d04efedee6/saved_model.pb
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'>
WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'>
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/8fb9d0492a5f4c0b994fd3acb409dff6/assets
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/8fb9d0492a5f4c0b994fd3acb409dff6/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/8fb9d0492a5f4c0b994fd3acb409dff6/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Transform/transform_graph/4/.temp_path/tftransform_tmp/8fb9d0492a5f4c0b994fd3acb409dff6/saved_model.pb
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore

The TFX Trainer Component

The Trainer component trains a specified TensorFlow model.

In order to run the Trainer component, we need to create a Python module containing a function named trainer_fn that TFX will call and that returns an estimator for our model. If you prefer to create a Keras model, you can do so and then convert it to an estimator using keras.model_to_estimator().

For our case study, we will build a Keras model and convert it to an estimator with tf.keras.estimator.model_to_estimator().

# Setup paths for the Trainer Component.
_trainer_module_file = 'compas_trainer.py'
%%writefile {_trainer_module_file}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils

from compas_transform import *

_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999


def transformed_names(keys):
  return [transformed_name(key) for key in keys]


def transformed_name(key):
  return '{}_xf'.format(key)


def _gzip_reader_fn(filenames):
  """Returns a record reader that can read gzip'ed files.

  Args:
    filenames: A tf.string tensor or tf.data.Dataset containing one or more
      filenames.

  Returns: A TFRecordDataset that reads the GZIP-compressed TFRecord files given
    by `filenames`.
  """
  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')


# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
  """Generates a feature spec from a Schema proto.

  Args:
    schema: A Schema proto.

  Returns:
    A feature spec defined as a dict whose keys are feature names and values are
      instances of FixedLenFeature, VarLenFeature or SparseFeature.
  """
  return schema_utils.schema_as_feature_spec(schema).feature_spec


def _example_serving_receiver_fn(tf_transform_output, schema):
  """Builds the serving in inputs.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    TensorFlow graph which parses examples, applying tf-transform to them.
  """
  raw_feature_spec = _get_raw_feature_spec(schema)
  raw_feature_spec.pop(LABEL_KEY)

  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec)
  serving_input_receiver = raw_input_fn()

  transformed_features = tf_transform_output.transform_raw_features(
      serving_input_receiver.features)
  transformed_features.pop(transformed_name(LABEL_KEY))
  return tf.estimator.export.ServingInputReceiver(
      transformed_features, serving_input_receiver.receiver_tensors)


def _eval_input_receiver_fn(tf_transform_output, schema):
  """Builds everything needed for the tf-model-analysis to run the model.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    EvalInputReceiver function, which contains:

      - TensorFlow graph which parses raw untransformed features, applies the
          tf-transform preprocessing operators.
      - Set of raw, untransformed features.
      - Label against which predictions will be compared.
  """
  # Notice that the inputs are raw features, not transformed features here.
  raw_feature_spec = _get_raw_feature_spec(schema)

  serialized_tf_example = tf.compat.v1.placeholder(
      dtype=tf.string, shape=[None], name='input_example_tensor')

  # Add a parse_example operator to the tensorflow graph, which will parse
  # raw, untransformed, tf examples.
  features = tf.io.parse_example(
      serialized=serialized_tf_example, features=raw_feature_spec)

  transformed_features = tf_transform_output.transform_raw_features(features)
  labels = transformed_features.pop(transformed_name(LABEL_KEY))

  receiver_tensors = {'examples': serialized_tf_example}

  return tfma.export.EvalInputReceiver(
      features=transformed_features,
      receiver_tensors=receiver_tensors,
      labels=labels)


def _input_fn(filenames, tf_transform_output, batch_size=200):
  """Generates features and labels for training or evaluation.

  Args:
    filenames: List of CSV files to read data from.
    tf_transform_output: A TFTransformOutput.
    batch_size: First dimension size of the Tensors returned by input_fn.

  Returns:
    A (features, indices) tuple where features is a dictionary of
      Tensors, and indices is a single Tensor of label indices.
  """
  transformed_feature_spec = (
      tf_transform_output.transformed_feature_spec().copy())

  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
      filenames,
      batch_size,
      transformed_feature_spec,
      shuffle=False,
      reader=_gzip_reader_fn)

  transformed_features = dataset.make_one_shot_iterator().get_next()

  # We pop the label because we do not want to use it as a feature while we're
  # training.
  return transformed_features, transformed_features.pop(
      transformed_name(LABEL_KEY))


def _keras_model_builder():
  """Build a keras model for COMPAS dataset classification.

  Returns:
    A compiled Keras model.
  """
  feature_columns = []
  feature_layer_inputs = {}

  for key in transformed_names(INT_FEATURE_KEYS):
    feature_columns.append(tf.feature_column.numeric_column(key))
    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)

  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
                              MAX_CATEGORICAL_FEATURE_VALUES):
    feature_columns.append(
        tf.feature_column.indicator_column(
            tf.feature_column.categorical_column_with_identity(
                key, num_buckets=num_buckets)))
    feature_layer_inputs[key] = tf.keras.Input(
        shape=(1,), name=key, dtype=tf.dtypes.int32)

  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
  feature_layer_outputs = feature_columns_input(feature_layer_inputs)

  dense_layers = tf.keras.layers.Dense(
      20, activation='relu', name='dense_1')(feature_layer_outputs)
  dense_layers = tf.keras.layers.Dense(
      10, activation='relu', name='dense_2')(dense_layers)
  output = tf.keras.layers.Dense(
      1, name='predictions')(dense_layers)

  model = tf.keras.Model(
      inputs=[v for v in feature_layer_inputs.values()], outputs=output)

  model.compile(
      loss=tf.keras.losses.MeanAbsoluteError(),
      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))

  return model


# TFX will call this function.
def trainer_fn(hparams, schema):
  """Build the estimator using the high level API.

  Args:
    hparams: Hyperparameters used to train the model as name/value pairs.
    schema: Holds the schema of the training examples.

  Returns:
    A dict of the following:

      - estimator: The estimator that will be used for training and eval.
      - train_spec: Spec for training.
      - eval_spec: Spec for eval.
      - eval_input_receiver_fn: Input function for eval.
  """
  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)

  train_input_fn = lambda: _input_fn(
      hparams.train_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  eval_input_fn = lambda: _input_fn(
      hparams.eval_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  train_spec = tf.estimator.TrainSpec(
      train_input_fn,
      max_steps=hparams.train_steps)

  serving_receiver_fn = lambda: _example_serving_receiver_fn(
      tf_transform_output, schema)

  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
  eval_spec = tf.estimator.EvalSpec(
      eval_input_fn,
      steps=hparams.eval_steps,
      exporters=[exporter],
      name='compas-eval')

  run_config = tf.estimator.RunConfig(
      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
      keep_checkpoint_max=_MAX_CHECKPOINTS)

  run_config = run_config.replace(model_dir=hparams.serving_model_dir)

  estimator = tf.keras.estimator.model_to_estimator(
      keras_model=_keras_model_builder(), config=run_config)

  # Create an input receiver for TFMA processing.
  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)

  return {
      'estimator': estimator,
      'train_spec': train_spec,
      'eval_spec': eval_spec,
      'eval_input_receiver_fn': receiver_fn
  }
Writing compas_trainer.py
# Uses user-provided Python function that implements a model using TensorFlow's
# Estimators API.
trainer = Trainer(
    module_file=_trainer_module_file,
    transformed_examples=transform.outputs['transformed_examples'],
    schema=infer_schema.outputs['schema'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer)
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using the Keras model provided.
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:434: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
  warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
WARNING:tensorflow:From compas_trainer.py:136: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
WARNING:tensorflow:From compas_trainer.py:136: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt
INFO:tensorflow:Warm-starting from: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt
INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES.
INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES.
INFO:tensorflow:Warm-started 6 variables.
INFO:tensorflow:Warm-started 6 variables.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 0.47416827, step = 0
INFO:tensorflow:loss = 0.47416827, step = 0
INFO:tensorflow:global_step/sec: 103.552
INFO:tensorflow:global_step/sec: 103.552
INFO:tensorflow:loss = 0.4922419, step = 100 (0.968 sec)
INFO:tensorflow:loss = 0.4922419, step = 100 (0.968 sec)
INFO:tensorflow:global_step/sec: 106.369
INFO:tensorflow:global_step/sec: 106.369
INFO:tensorflow:loss = 0.50697845, step = 200 (0.939 sec)
INFO:tensorflow:loss = 0.50697845, step = 200 (0.939 sec)
INFO:tensorflow:global_step/sec: 108.028
INFO:tensorflow:global_step/sec: 108.028
INFO:tensorflow:loss = 0.50335556, step = 300 (0.926 sec)
INFO:tensorflow:loss = 0.50335556, step = 300 (0.926 sec)
INFO:tensorflow:global_step/sec: 106.316
INFO:tensorflow:global_step/sec: 106.316
INFO:tensorflow:loss = 0.47721145, step = 400 (0.941 sec)
INFO:tensorflow:loss = 0.47721145, step = 400 (0.941 sec)
INFO:tensorflow:global_step/sec: 107.036
INFO:tensorflow:global_step/sec: 107.036
INFO:tensorflow:loss = 0.45895657, step = 500 (0.934 sec)
INFO:tensorflow:loss = 0.45895657, step = 500 (0.934 sec)
INFO:tensorflow:global_step/sec: 106.896
INFO:tensorflow:global_step/sec: 106.896
INFO:tensorflow:loss = 0.45208624, step = 600 (0.935 sec)
INFO:tensorflow:loss = 0.45208624, step = 600 (0.935 sec)
INFO:tensorflow:global_step/sec: 105.365
INFO:tensorflow:global_step/sec: 105.365
INFO:tensorflow:loss = 0.4489294, step = 700 (0.949 sec)
INFO:tensorflow:loss = 0.4489294, step = 700 (0.949 sec)
INFO:tensorflow:global_step/sec: 107.341
INFO:tensorflow:global_step/sec: 107.341
INFO:tensorflow:loss = 0.46455735, step = 800 (0.932 sec)
INFO:tensorflow:loss = 0.46455735, step = 800 (0.932 sec)
INFO:tensorflow:global_step/sec: 103.443
INFO:tensorflow:global_step/sec: 103.443
INFO:tensorflow:loss = 0.47789398, step = 900 (0.967 sec)
INFO:tensorflow:loss = 0.47789398, step = 900 (0.967 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 999...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 999...
INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 999...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 999...
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py:2325: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
  warnings.warn('`Model.state_updates` will be removed in a future version. '
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-04-23T09:10:14Z
INFO:tensorflow:Starting evaluation at 2021-04-23T09:10:14Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-999
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-999
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 48.79983s
INFO:tensorflow:Inference Time : 48.79983s
INFO:tensorflow:Finished evaluation at 2021-04-23-09:11:03
INFO:tensorflow:Finished evaluation at 2021-04-23-09:11:03
INFO:tensorflow:Saving dict for global step 999: global_step = 999, loss = 0.4798829
INFO:tensorflow:Saving dict for global step 999: global_step = 999, loss = 0.4798829
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-999
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-999
INFO:tensorflow:global_step/sec: 1.99761
INFO:tensorflow:global_step/sec: 1.99761
INFO:tensorflow:loss = 0.49395803, step = 1000 (50.059 sec)
INFO:tensorflow:loss = 0.49395803, step = 1000 (50.059 sec)
INFO:tensorflow:global_step/sec: 103.094
INFO:tensorflow:global_step/sec: 103.094
INFO:tensorflow:loss = 0.48954606, step = 1100 (0.970 sec)
INFO:tensorflow:loss = 0.48954606, step = 1100 (0.970 sec)
INFO:tensorflow:global_step/sec: 101.109
INFO:tensorflow:global_step/sec: 101.109
INFO:tensorflow:loss = 0.49123546, step = 1200 (0.989 sec)
INFO:tensorflow:loss = 0.49123546, step = 1200 (0.989 sec)
INFO:tensorflow:global_step/sec: 100.528
INFO:tensorflow:global_step/sec: 100.528
INFO:tensorflow:loss = 0.4701535, step = 1300 (0.995 sec)
INFO:tensorflow:loss = 0.4701535, step = 1300 (0.995 sec)
INFO:tensorflow:global_step/sec: 100.192
INFO:tensorflow:global_step/sec: 100.192
INFO:tensorflow:loss = 0.46582404, step = 1400 (0.999 sec)
INFO:tensorflow:loss = 0.46582404, step = 1400 (0.999 sec)
INFO:tensorflow:global_step/sec: 100.13
INFO:tensorflow:global_step/sec: 100.13
INFO:tensorflow:loss = 0.45980436, step = 1500 (0.998 sec)
INFO:tensorflow:loss = 0.45980436, step = 1500 (0.998 sec)
INFO:tensorflow:global_step/sec: 101.085
INFO:tensorflow:global_step/sec: 101.085
INFO:tensorflow:loss = 0.46045718, step = 1600 (0.989 sec)
INFO:tensorflow:loss = 0.46045718, step = 1600 (0.989 sec)
INFO:tensorflow:global_step/sec: 100.746
INFO:tensorflow:global_step/sec: 100.746
INFO:tensorflow:loss = 0.47194332, step = 1700 (0.995 sec)
INFO:tensorflow:loss = 0.47194332, step = 1700 (0.995 sec)
INFO:tensorflow:global_step/sec: 99.8541
INFO:tensorflow:global_step/sec: 99.8541
INFO:tensorflow:loss = 0.45978338, step = 1800 (0.999 sec)
INFO:tensorflow:loss = 0.45978338, step = 1800 (0.999 sec)
INFO:tensorflow:global_step/sec: 97.982
INFO:tensorflow:global_step/sec: 97.982
INFO:tensorflow:loss = 0.45745283, step = 1900 (1.021 sec)
INFO:tensorflow:loss = 0.45745283, step = 1900 (1.021 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1998...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1998...
INFO:tensorflow:Saving checkpoints for 1998 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 1998 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1998...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1998...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 96.2637
INFO:tensorflow:global_step/sec: 96.2637
INFO:tensorflow:loss = 0.44210017, step = 2000 (1.039 sec)
INFO:tensorflow:loss = 0.44210017, step = 2000 (1.039 sec)
INFO:tensorflow:global_step/sec: 104.181
INFO:tensorflow:global_step/sec: 104.181
INFO:tensorflow:loss = 0.4267306, step = 2100 (0.960 sec)
INFO:tensorflow:loss = 0.4267306, step = 2100 (0.960 sec)
INFO:tensorflow:global_step/sec: 100.628
INFO:tensorflow:global_step/sec: 100.628
INFO:tensorflow:loss = 0.43270233, step = 2200 (0.994 sec)
INFO:tensorflow:loss = 0.43270233, step = 2200 (0.994 sec)
INFO:tensorflow:global_step/sec: 102.274
INFO:tensorflow:global_step/sec: 102.274
INFO:tensorflow:loss = 0.42014548, step = 2300 (0.978 sec)
INFO:tensorflow:loss = 0.42014548, step = 2300 (0.978 sec)
INFO:tensorflow:global_step/sec: 99.5664
INFO:tensorflow:global_step/sec: 99.5664
INFO:tensorflow:loss = 0.42362845, step = 2400 (1.004 sec)
INFO:tensorflow:loss = 0.42362845, step = 2400 (1.004 sec)
INFO:tensorflow:global_step/sec: 101.008
INFO:tensorflow:global_step/sec: 101.008
INFO:tensorflow:loss = 0.43012613, step = 2500 (0.990 sec)
INFO:tensorflow:loss = 0.43012613, step = 2500 (0.990 sec)
INFO:tensorflow:global_step/sec: 102.62
INFO:tensorflow:global_step/sec: 102.62
INFO:tensorflow:loss = 0.435121, step = 2600 (0.974 sec)
INFO:tensorflow:loss = 0.435121, step = 2600 (0.974 sec)
INFO:tensorflow:global_step/sec: 102.1
INFO:tensorflow:global_step/sec: 102.1
INFO:tensorflow:loss = 0.42686707, step = 2700 (0.981 sec)
INFO:tensorflow:loss = 0.42686707, step = 2700 (0.981 sec)
INFO:tensorflow:global_step/sec: 103.746
INFO:tensorflow:global_step/sec: 103.746
INFO:tensorflow:loss = 0.41858014, step = 2800 (0.964 sec)
INFO:tensorflow:loss = 0.41858014, step = 2800 (0.964 sec)
INFO:tensorflow:global_step/sec: 102.04
INFO:tensorflow:global_step/sec: 102.04
INFO:tensorflow:loss = 0.41823772, step = 2900 (0.978 sec)
INFO:tensorflow:loss = 0.41823772, step = 2900 (0.978 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 2997...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 2997...
INFO:tensorflow:Saving checkpoints for 2997 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 2997 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 2997...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 2997...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 100.291
INFO:tensorflow:global_step/sec: 100.291
INFO:tensorflow:loss = 0.40824187, step = 3000 (0.997 sec)
INFO:tensorflow:loss = 0.40824187, step = 3000 (0.997 sec)
INFO:tensorflow:global_step/sec: 106.907
INFO:tensorflow:global_step/sec: 106.907
INFO:tensorflow:loss = 0.40978715, step = 3100 (0.936 sec)
INFO:tensorflow:loss = 0.40978715, step = 3100 (0.936 sec)
INFO:tensorflow:global_step/sec: 104.101
INFO:tensorflow:global_step/sec: 104.101
INFO:tensorflow:loss = 0.417184, step = 3200 (0.960 sec)
INFO:tensorflow:loss = 0.417184, step = 3200 (0.960 sec)
INFO:tensorflow:global_step/sec: 99.6517
INFO:tensorflow:global_step/sec: 99.6517
INFO:tensorflow:loss = 0.43127513, step = 3300 (1.004 sec)
INFO:tensorflow:loss = 0.43127513, step = 3300 (1.004 sec)
INFO:tensorflow:global_step/sec: 99.7764
INFO:tensorflow:global_step/sec: 99.7764
INFO:tensorflow:loss = 0.41585788, step = 3400 (1.002 sec)
INFO:tensorflow:loss = 0.41585788, step = 3400 (1.002 sec)
INFO:tensorflow:global_step/sec: 104.479
INFO:tensorflow:global_step/sec: 104.479
INFO:tensorflow:loss = 0.40642825, step = 3500 (0.957 sec)
INFO:tensorflow:loss = 0.40642825, step = 3500 (0.957 sec)
INFO:tensorflow:global_step/sec: 99.2027
INFO:tensorflow:global_step/sec: 99.2027
INFO:tensorflow:loss = 0.40078893, step = 3600 (1.008 sec)
INFO:tensorflow:loss = 0.40078893, step = 3600 (1.008 sec)
INFO:tensorflow:global_step/sec: 99.5083
INFO:tensorflow:global_step/sec: 99.5083
INFO:tensorflow:loss = 0.4084859, step = 3700 (1.005 sec)
INFO:tensorflow:loss = 0.4084859, step = 3700 (1.005 sec)
INFO:tensorflow:global_step/sec: 101.837
INFO:tensorflow:global_step/sec: 101.837
INFO:tensorflow:loss = 0.38706055, step = 3800 (0.982 sec)
INFO:tensorflow:loss = 0.38706055, step = 3800 (0.982 sec)
INFO:tensorflow:global_step/sec: 100.761
INFO:tensorflow:global_step/sec: 100.761
INFO:tensorflow:loss = 0.38369697, step = 3900 (0.992 sec)
INFO:tensorflow:loss = 0.38369697, step = 3900 (0.992 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3996...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3996...
INFO:tensorflow:Saving checkpoints for 3996 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 3996 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3996...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3996...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 99.897
INFO:tensorflow:global_step/sec: 99.897
INFO:tensorflow:loss = 0.4063977, step = 4000 (1.001 sec)
INFO:tensorflow:loss = 0.4063977, step = 4000 (1.001 sec)
INFO:tensorflow:global_step/sec: 99.4043
INFO:tensorflow:global_step/sec: 99.4043
INFO:tensorflow:loss = 0.42966503, step = 4100 (1.005 sec)
INFO:tensorflow:loss = 0.42966503, step = 4100 (1.005 sec)
INFO:tensorflow:global_step/sec: 99.4718
INFO:tensorflow:global_step/sec: 99.4718
INFO:tensorflow:loss = 0.43339205, step = 4200 (1.006 sec)
INFO:tensorflow:loss = 0.43339205, step = 4200 (1.006 sec)
INFO:tensorflow:global_step/sec: 99.881
INFO:tensorflow:global_step/sec: 99.881
INFO:tensorflow:loss = 0.41945544, step = 4300 (1.001 sec)
INFO:tensorflow:loss = 0.41945544, step = 4300 (1.001 sec)
INFO:tensorflow:global_step/sec: 99.7086
INFO:tensorflow:global_step/sec: 99.7086
INFO:tensorflow:loss = 0.39942062, step = 4400 (1.003 sec)
INFO:tensorflow:loss = 0.39942062, step = 4400 (1.003 sec)
INFO:tensorflow:global_step/sec: 100.605
INFO:tensorflow:global_step/sec: 100.605
INFO:tensorflow:loss = 0.40324017, step = 4500 (0.994 sec)
INFO:tensorflow:loss = 0.40324017, step = 4500 (0.994 sec)
INFO:tensorflow:global_step/sec: 103.285
INFO:tensorflow:global_step/sec: 103.285
INFO:tensorflow:loss = 0.40799192, step = 4600 (0.968 sec)
INFO:tensorflow:loss = 0.40799192, step = 4600 (0.968 sec)
INFO:tensorflow:global_step/sec: 105.19
INFO:tensorflow:global_step/sec: 105.19
INFO:tensorflow:loss = 0.4159081, step = 4700 (0.951 sec)
INFO:tensorflow:loss = 0.4159081, step = 4700 (0.951 sec)
INFO:tensorflow:global_step/sec: 104.719
INFO:tensorflow:global_step/sec: 104.719
INFO:tensorflow:loss = 0.43424368, step = 4800 (0.955 sec)
INFO:tensorflow:loss = 0.43424368, step = 4800 (0.955 sec)
INFO:tensorflow:global_step/sec: 107.189
INFO:tensorflow:global_step/sec: 107.189
INFO:tensorflow:loss = 0.41860652, step = 4900 (0.933 sec)
INFO:tensorflow:loss = 0.41860652, step = 4900 (0.933 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4995...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4995...
INFO:tensorflow:Saving checkpoints for 4995 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 4995 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/saver.py:970: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/saver.py:970: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4995...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4995...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 103.085
INFO:tensorflow:global_step/sec: 103.085
INFO:tensorflow:loss = 0.3955871, step = 5000 (0.970 sec)
INFO:tensorflow:loss = 0.3955871, step = 5000 (0.970 sec)
INFO:tensorflow:global_step/sec: 102.244
INFO:tensorflow:global_step/sec: 102.244
INFO:tensorflow:loss = 0.38054687, step = 5100 (0.979 sec)
INFO:tensorflow:loss = 0.38054687, step = 5100 (0.979 sec)
INFO:tensorflow:global_step/sec: 102.199
INFO:tensorflow:global_step/sec: 102.199
INFO:tensorflow:loss = 0.37835938, step = 5200 (0.979 sec)
INFO:tensorflow:loss = 0.37835938, step = 5200 (0.979 sec)
INFO:tensorflow:global_step/sec: 102.192
INFO:tensorflow:global_step/sec: 102.192
INFO:tensorflow:loss = 0.3742793, step = 5300 (0.978 sec)
INFO:tensorflow:loss = 0.3742793, step = 5300 (0.978 sec)
INFO:tensorflow:global_step/sec: 100.049
INFO:tensorflow:global_step/sec: 100.049
INFO:tensorflow:loss = 0.37766984, step = 5400 (0.999 sec)
INFO:tensorflow:loss = 0.37766984, step = 5400 (0.999 sec)
INFO:tensorflow:global_step/sec: 101.413
INFO:tensorflow:global_step/sec: 101.413
INFO:tensorflow:loss = 0.37288016, step = 5500 (0.989 sec)
INFO:tensorflow:loss = 0.37288016, step = 5500 (0.989 sec)
INFO:tensorflow:global_step/sec: 99.4785
INFO:tensorflow:global_step/sec: 99.4785
INFO:tensorflow:loss = 0.39033508, step = 5600 (1.002 sec)
INFO:tensorflow:loss = 0.39033508, step = 5600 (1.002 sec)
INFO:tensorflow:global_step/sec: 101.706
INFO:tensorflow:global_step/sec: 101.706
INFO:tensorflow:loss = 0.3888662, step = 5700 (0.983 sec)
INFO:tensorflow:loss = 0.3888662, step = 5700 (0.983 sec)
INFO:tensorflow:global_step/sec: 103.171
INFO:tensorflow:global_step/sec: 103.171
INFO:tensorflow:loss = 0.39443827, step = 5800 (0.969 sec)
INFO:tensorflow:loss = 0.39443827, step = 5800 (0.969 sec)
INFO:tensorflow:global_step/sec: 100.242
INFO:tensorflow:global_step/sec: 100.242
INFO:tensorflow:loss = 0.3824133, step = 5900 (0.998 sec)
INFO:tensorflow:loss = 0.3824133, step = 5900 (0.998 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5994...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5994...
INFO:tensorflow:Saving checkpoints for 5994 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 5994 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5994...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5994...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 101.746
INFO:tensorflow:global_step/sec: 101.746
INFO:tensorflow:loss = 0.38710442, step = 6000 (0.983 sec)
INFO:tensorflow:loss = 0.38710442, step = 6000 (0.983 sec)
INFO:tensorflow:global_step/sec: 100.1
INFO:tensorflow:global_step/sec: 100.1
INFO:tensorflow:loss = 0.37636378, step = 6100 (0.999 sec)
INFO:tensorflow:loss = 0.37636378, step = 6100 (0.999 sec)
INFO:tensorflow:global_step/sec: 99.9325
INFO:tensorflow:global_step/sec: 99.9325
INFO:tensorflow:loss = 0.37966123, step = 6200 (1.001 sec)
INFO:tensorflow:loss = 0.37966123, step = 6200 (1.001 sec)
INFO:tensorflow:global_step/sec: 99.0218
INFO:tensorflow:global_step/sec: 99.0218
INFO:tensorflow:loss = 0.36940622, step = 6300 (1.010 sec)
INFO:tensorflow:loss = 0.36940622, step = 6300 (1.010 sec)
INFO:tensorflow:global_step/sec: 102.772
INFO:tensorflow:global_step/sec: 102.772
INFO:tensorflow:loss = 0.37147108, step = 6400 (0.972 sec)
INFO:tensorflow:loss = 0.37147108, step = 6400 (0.972 sec)
INFO:tensorflow:global_step/sec: 105.027
INFO:tensorflow:global_step/sec: 105.027
INFO:tensorflow:loss = 0.36456805, step = 6500 (0.952 sec)
INFO:tensorflow:loss = 0.36456805, step = 6500 (0.952 sec)
INFO:tensorflow:global_step/sec: 103.18
INFO:tensorflow:global_step/sec: 103.18
INFO:tensorflow:loss = 0.3684589, step = 6600 (0.969 sec)
INFO:tensorflow:loss = 0.3684589, step = 6600 (0.969 sec)
INFO:tensorflow:global_step/sec: 99.3375
INFO:tensorflow:global_step/sec: 99.3375
INFO:tensorflow:loss = 0.376545, step = 6700 (1.007 sec)
INFO:tensorflow:loss = 0.376545, step = 6700 (1.007 sec)
INFO:tensorflow:global_step/sec: 105.682
INFO:tensorflow:global_step/sec: 105.682
INFO:tensorflow:loss = 0.3895915, step = 6800 (0.947 sec)
INFO:tensorflow:loss = 0.3895915, step = 6800 (0.947 sec)
INFO:tensorflow:global_step/sec: 114.848
INFO:tensorflow:global_step/sec: 114.848
INFO:tensorflow:loss = 0.37849602, step = 6900 (0.870 sec)
INFO:tensorflow:loss = 0.37849602, step = 6900 (0.870 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 6993...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 6993...
INFO:tensorflow:Saving checkpoints for 6993 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 6993 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 6993...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 6993...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 109.616
INFO:tensorflow:global_step/sec: 109.616
INFO:tensorflow:loss = 0.35964197, step = 7000 (0.912 sec)
INFO:tensorflow:loss = 0.35964197, step = 7000 (0.912 sec)
INFO:tensorflow:global_step/sec: 105.581
INFO:tensorflow:global_step/sec: 105.581
INFO:tensorflow:loss = 0.36216918, step = 7100 (0.947 sec)
INFO:tensorflow:loss = 0.36216918, step = 7100 (0.947 sec)
INFO:tensorflow:global_step/sec: 106.131
INFO:tensorflow:global_step/sec: 106.131
INFO:tensorflow:loss = 0.3937424, step = 7200 (0.942 sec)
INFO:tensorflow:loss = 0.3937424, step = 7200 (0.942 sec)
INFO:tensorflow:global_step/sec: 105.7
INFO:tensorflow:global_step/sec: 105.7
INFO:tensorflow:loss = 0.38952962, step = 7300 (0.946 sec)
INFO:tensorflow:loss = 0.38952962, step = 7300 (0.946 sec)
INFO:tensorflow:global_step/sec: 102.797
INFO:tensorflow:global_step/sec: 102.797
INFO:tensorflow:loss = 0.37355947, step = 7400 (0.973 sec)
INFO:tensorflow:loss = 0.37355947, step = 7400 (0.973 sec)
INFO:tensorflow:global_step/sec: 102.454
INFO:tensorflow:global_step/sec: 102.454
INFO:tensorflow:loss = 0.36603284, step = 7500 (0.976 sec)
INFO:tensorflow:loss = 0.36603284, step = 7500 (0.976 sec)
INFO:tensorflow:global_step/sec: 103.682
INFO:tensorflow:global_step/sec: 103.682
INFO:tensorflow:loss = 0.3693564, step = 7600 (0.964 sec)
INFO:tensorflow:loss = 0.3693564, step = 7600 (0.964 sec)
INFO:tensorflow:global_step/sec: 104.262
INFO:tensorflow:global_step/sec: 104.262
INFO:tensorflow:loss = 0.37061787, step = 7700 (0.959 sec)
INFO:tensorflow:loss = 0.37061787, step = 7700 (0.959 sec)
INFO:tensorflow:global_step/sec: 104.767
INFO:tensorflow:global_step/sec: 104.767
INFO:tensorflow:loss = 0.39289498, step = 7800 (0.955 sec)
INFO:tensorflow:loss = 0.39289498, step = 7800 (0.955 sec)
INFO:tensorflow:global_step/sec: 105.669
INFO:tensorflow:global_step/sec: 105.669
INFO:tensorflow:loss = 0.39648676, step = 7900 (0.946 sec)
INFO:tensorflow:loss = 0.39648676, step = 7900 (0.946 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 7992...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 7992...
INFO:tensorflow:Saving checkpoints for 7992 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 7992 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 7992...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 7992...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 105.931
INFO:tensorflow:global_step/sec: 105.931
INFO:tensorflow:loss = 0.4102661, step = 8000 (0.944 sec)
INFO:tensorflow:loss = 0.4102661, step = 8000 (0.944 sec)
INFO:tensorflow:global_step/sec: 104.541
INFO:tensorflow:global_step/sec: 104.541
INFO:tensorflow:loss = 0.38024917, step = 8100 (0.957 sec)
INFO:tensorflow:loss = 0.38024917, step = 8100 (0.957 sec)
INFO:tensorflow:global_step/sec: 102.663
INFO:tensorflow:global_step/sec: 102.663
INFO:tensorflow:loss = 0.37263972, step = 8200 (0.974 sec)
INFO:tensorflow:loss = 0.37263972, step = 8200 (0.974 sec)
INFO:tensorflow:global_step/sec: 101.803
INFO:tensorflow:global_step/sec: 101.803
INFO:tensorflow:loss = 0.35875428, step = 8300 (0.982 sec)
INFO:tensorflow:loss = 0.35875428, step = 8300 (0.982 sec)
INFO:tensorflow:global_step/sec: 101.443
INFO:tensorflow:global_step/sec: 101.443
INFO:tensorflow:loss = 0.35559803, step = 8400 (0.986 sec)
INFO:tensorflow:loss = 0.35559803, step = 8400 (0.986 sec)
INFO:tensorflow:global_step/sec: 100.077
INFO:tensorflow:global_step/sec: 100.077
INFO:tensorflow:loss = 0.3563253, step = 8500 (0.999 sec)
INFO:tensorflow:loss = 0.3563253, step = 8500 (0.999 sec)
INFO:tensorflow:global_step/sec: 100.147
INFO:tensorflow:global_step/sec: 100.147
INFO:tensorflow:loss = 0.34861985, step = 8600 (0.998 sec)
INFO:tensorflow:loss = 0.34861985, step = 8600 (0.998 sec)
INFO:tensorflow:global_step/sec: 99.9734
INFO:tensorflow:global_step/sec: 99.9734
INFO:tensorflow:loss = 0.35559162, step = 8700 (1.000 sec)
INFO:tensorflow:loss = 0.35559162, step = 8700 (1.000 sec)
INFO:tensorflow:global_step/sec: 99.5136
INFO:tensorflow:global_step/sec: 99.5136
INFO:tensorflow:loss = 0.36242756, step = 8800 (1.005 sec)
INFO:tensorflow:loss = 0.36242756, step = 8800 (1.005 sec)
INFO:tensorflow:global_step/sec: 104.811
INFO:tensorflow:global_step/sec: 104.811
INFO:tensorflow:loss = 0.3742514, step = 8900 (0.954 sec)
INFO:tensorflow:loss = 0.3742514, step = 8900 (0.954 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991...
INFO:tensorflow:Saving checkpoints for 8991 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 8991 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 106.372
INFO:tensorflow:global_step/sec: 106.372
INFO:tensorflow:loss = 0.3587474, step = 9000 (0.940 sec)
INFO:tensorflow:loss = 0.3587474, step = 9000 (0.940 sec)
INFO:tensorflow:global_step/sec: 104.249
INFO:tensorflow:global_step/sec: 104.249
INFO:tensorflow:loss = 0.35512, step = 9100 (0.960 sec)
INFO:tensorflow:loss = 0.35512, step = 9100 (0.960 sec)
INFO:tensorflow:global_step/sec: 106.583
INFO:tensorflow:global_step/sec: 106.583
INFO:tensorflow:loss = 0.35559082, step = 9200 (0.938 sec)
INFO:tensorflow:loss = 0.35559082, step = 9200 (0.938 sec)
INFO:tensorflow:global_step/sec: 105.826
INFO:tensorflow:global_step/sec: 105.826
INFO:tensorflow:loss = 0.35460055, step = 9300 (0.945 sec)
INFO:tensorflow:loss = 0.35460055, step = 9300 (0.945 sec)
INFO:tensorflow:global_step/sec: 106.072
INFO:tensorflow:global_step/sec: 106.072
INFO:tensorflow:loss = 0.34970692, step = 9400 (0.944 sec)
INFO:tensorflow:loss = 0.34970692, step = 9400 (0.944 sec)
INFO:tensorflow:global_step/sec: 105.836
INFO:tensorflow:global_step/sec: 105.836
INFO:tensorflow:loss = 0.3449042, step = 9500 (0.943 sec)
INFO:tensorflow:loss = 0.3449042, step = 9500 (0.943 sec)
INFO:tensorflow:global_step/sec: 108.679
INFO:tensorflow:global_step/sec: 108.679
INFO:tensorflow:loss = 0.34985757, step = 9600 (0.920 sec)
INFO:tensorflow:loss = 0.34985757, step = 9600 (0.920 sec)
INFO:tensorflow:global_step/sec: 106.07
INFO:tensorflow:global_step/sec: 106.07
INFO:tensorflow:loss = 0.3453308, step = 9700 (0.943 sec)
INFO:tensorflow:loss = 0.3453308, step = 9700 (0.943 sec)
INFO:tensorflow:global_step/sec: 100.979
INFO:tensorflow:global_step/sec: 100.979
INFO:tensorflow:loss = 0.34995228, step = 9800 (0.990 sec)
INFO:tensorflow:loss = 0.34995228, step = 9800 (0.990 sec)
INFO:tensorflow:global_step/sec: 104.247
INFO:tensorflow:global_step/sec: 104.247
INFO:tensorflow:loss = 0.35693988, step = 9900 (0.959 sec)
INFO:tensorflow:loss = 0.35693988, step = 9900 (0.959 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990...
INFO:tensorflow:Saving checkpoints for 9990 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 9990 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000...
INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-04-23T09:12:31Z
INFO:tensorflow:Starting evaluation at 2021-04-23T09:12:31Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 47.01670s
INFO:tensorflow:Inference Time : 47.01670s
INFO:tensorflow:Finished evaluation at 2021-04-23-09:13:18
INFO:tensorflow:Finished evaluation at 2021-04-23-09:13:18
INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.39696866
INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.39696866
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Performing the final export in the end of training.
INFO:tensorflow:Performing the final export in the end of training.
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/export/compas/temp-1619169198/assets
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/export/compas/temp-1619169198/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/export/compas/temp-1619169198/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/export/compas/temp-1619169198/saved_model.pb
INFO:tensorflow:Loss for final step: 0.3658929.
INFO:tensorflow:Loss for final step: 0.3658929.
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\003sex"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\004race"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\rc_charge_desc"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\017c_charge_degree"
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
WARNING:tensorflow:Export includes no default signature!
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/eval_model_dir/temp-1619169198/assets
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/eval_model_dir/temp-1619169198/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/eval_model_dir/temp-1619169198/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/5/eval_model_dir/temp-1619169198/saved_model.pb
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/serving_model_dir/saved_model.pb"
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/eval_model_dir/saved_model.pb"

TensorFlow Model Analysis

Now that our model has been developed and trained within TFX, we can use several additional components in the TFX ecosystem to understand our model's performance in a bit more detail. By looking at different metrics, we can get a better picture of how the overall model behaves for different slices within our data and make sure the model is not underperforming for any subgroup.

First, we will look at TensorFlow Model Analysis, a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in a notebook.

For a list of the metrics that can be added to TensorFlow Model Analysis, see here.

# Uses TensorFlow Model Analysis to compute evaluation statistics over
# features of a model.
model_analyzer = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],

    eval_config = text_format.Parse("""
      model_specs {
        label_key: 'is_recid'
      }
      metrics_specs {
        metrics {class_name: "BinaryAccuracy"}
        metrics {class_name: "AUC"}
        metrics {
          class_name: "FairnessIndicators"
          config: '{"thresholds": [0.25, 0.5, 0.75]}'
        }
      }
      slicing_specs {
        feature_keys: 'race'
      }
    """, tfma.EvalConfig())
)
context.run(model_analyzer)

Fairness Indicators

Load Fairness Indicators to examine the underlying data.

evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri
eval_result = tfma.load_eval_result(evaluation_uri)
tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)
FairnessIndicatorViewer(slicingMetrics=[{'sliceValue': 'Caucasian', 'slice': 'race:Caucasian', 'metrics': {'bi…

Fairness Indicators lets us drill down to see how different slices perform, and it is designed to support teams in evaluating and improving models for fairness concerns. It makes it easy to compute commonly identified fairness metrics for binary and multiclass classifiers, and it scales to use cases of any size.

We will load Fairness Indicators in this notebook and take a look at the results. After exploring Fairness Indicators for a moment, examine the False Positive Rate and False Negative Rate tabs within the tool. In this case study, we are concerned with reducing the number of false recidivism predictions, which corresponds to the false positive rate.
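
If you prefer to inspect the numbers outside of the interactive widget, the evaluation result loaded above also exposes the metrics for every slice directly. The sketch below is a minimal example under a couple of assumptions: that Fairness Indicators names the metric with its usual 'fairness_indicators_metrics/<metric>@<threshold>' pattern, and that a single-output model nests its per-slice metrics under empty-string output and sub-key names; if your TFMA version differs, print the raw entries first and adjust the key.

# Minimal sketch for reading per-slice metrics from `eval_result` without the
# widget. The metric key below is an assumption about the Fairness Indicators
# naming convention; adjust it to the keys you actually see.
METRIC_KEY = 'fairness_indicators_metrics/false_positive_rate@0.75'

for slice_key, metrics_by_output in eval_result.slicing_metrics:
  # `slice_key` is a tuple of (feature_name, feature_value) pairs; the empty
  # tuple corresponds to the "Overall" slice.
  slice_name = ', '.join('%s=%s' % kv for kv in slice_key) or 'Overall'
  # A single-output model typically nests its metrics under empty-string keys.
  metrics = metrics_by_output.get('', {}).get('', {})
  if METRIC_KEY in metrics:
    print('%s -> %s' % (slice_name, metrics[METRIC_KEY]))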

Type I and Type II errors

Within the Fairness Indicators tool, you will see two dropdown options:

  1. A "Baseline" option, which is defined by column_for_slicing.
  2. A "Thresholds" option, which is defined by fairness_indicator_thresholds.

"Baseline" is the slice you want to compare all the other slices against. Most commonly, it is the Overall slice, but it can also be one of the specific slices.

"Limite" é um valor definido em um determinado modelo de classificação binária para indicar onde uma previsão deve ser colocada. Ao definir um limite, há duas coisas que você deve ter em mente.

  1. Precision: What is the downside if your prediction results in a Type I error? In this case study, a lower threshold would mean we are predicting that more defendants will commit another crime when in fact they will not.
  2. Recall: What is the downside of a Type II error? In this case study, a higher threshold would mean we are predicting that more defendants will not commit another crime when in fact they will. (A toy numeric example follows this list.)
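
To make the trade-off concrete, here is a toy sketch with made-up scores and labels (not the COMPAS data) that shows how raising the threshold trades Type I errors (false positives) for Type II errors (false negatives):

import numpy as np

# Purely illustrative scores and labels, not taken from the COMPAS dataset.
scores = np.array([0.10, 0.30, 0.55, 0.62, 0.70, 0.80, 0.90, 0.95])
labels = np.array([0, 0, 0, 1, 0, 1, 1, 1])

for threshold in (0.25, 0.5, 0.75):
  predictions = scores >= threshold
  false_positives = np.sum(predictions & (labels == 0))   # Type I errors
  false_negatives = np.sum(~predictions & (labels == 1))  # Type II errors
  print('threshold=%.2f  false positives=%d  false negatives=%d'
        % (threshold, false_positives, false_negatives))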

We will set an arbitrary threshold at 0.75 and focus only on the fairness metrics for African-American and Caucasian defendants, given that the sample sizes for the other races are not large enough to draw statistically significant conclusions.

The rates below may differ slightly depending on how the data was shuffled at the beginning of this case study, but note the difference in rates between African-American and Caucasian defendants. At a lower threshold, our model is more likely to predict that a Caucasian defendant will commit a second crime than an African-American defendant. However, this prediction flips as we increase the threshold.

  • False Positive Rate @ 0.75
    • African-American: ~30%
      • AUC: 0.71
      • Binary Accuracy: 0.67
    • Caucasian: ~8%
      • AUC: 0.71
      • Binary Accuracy: 0.67

More information on Type I/II errors and setting thresholds can be found here.

ML Metadata

To understand where the disparity comes from, and to take a snapshot of our current model, we can use ML Metadata to record and retrieve the metadata associated with our model. ML Metadata is an integral part of TFX, but it is designed so that it can also be used independently.

For this case study, we will list all of the artifacts that we developed earlier in this case study. By walking through the artifacts, executions, and contexts, we get a high-level view of our TFX pipeline and can track down where any potential problems come from. This gives us a baseline overview of how our model was developed and which TFX components helped to develop our initial model.

We will start by listing the high-level artifact, execution, and context types in our pipeline.

# Connect to the TFX database.
connection_config = metadata_store_pb2.ConnectionConfig()

connection_config.sqlite.filename_uri = os.path.join(
  context.pipeline_root, 'metadata.sqlite')
store = metadata_store.MetadataStore(connection_config)

def _mlmd_type_to_dataframe(mlmd_type):
  """Helper function to turn MLMD into a Pandas DataFrame.

  Args:
    mlmd_type: Metadata store type.

  Returns:
    DataFrame containing type ID, Name, and Properties.
  """
  pd.set_option('display.max_columns', None)  
  pd.set_option('display.expand_frame_repr', False)

  column_names = ['ID', 'Name', 'Properties']
  df = pd.DataFrame(columns=column_names)
  for a_type in mlmd_type:
    mlmd_row = pd.DataFrame([[a_type.id, a_type.name, a_type.properties]],
                            columns=column_names)
    df = df.append(mlmd_row)
  return df

# ML Metadata stores strong-typed Artifacts, Executions, and Contexts.
# First, we can use type APIs to understand what is defined in ML Metadata
# by the current version of TFX. We'll be able to view all the previous runs
# that created our initial model.
print('Artifact Types:')
display(_mlmd_type_to_dataframe(store.get_artifact_types()))

print('\nExecution Types:')
display(_mlmd_type_to_dataframe(store.get_execution_types()))

print('\nContext Types:')
display(_mlmd_type_to_dataframe(store.get_context_types()))
Artifact Types:
Execution Types:
Context Types:

Identify where the fairness issue is coming from

For each of the artifact, execution, and context types above, we can use ML Metadata to dig into the attributes and into how each part of our ML pipeline was developed.

Let's start by diving into StatisticsGen to examine the underlying data that was initially fed into the model. Knowing the artifacts in our pipeline, we can use ML Metadata and TensorFlow Data Validation to look backwards and forwards through the model and identify where a potential problem is coming from.

After running the cell below, select Lift (Y=1) in the second chart on the Chart to show tab to see the lift across the different data slices. Within race, the lift for African-American is approximately 1.08 while for Caucasian it is approximately 0.86.

statistics_gen = StatisticsGen(
    examples=example_gen.outputs['examples'],
    schema=infer_schema.outputs['schema'],
    stats_options=tfdv.StatsOptions(label_feature='is_recid'))
exec_result = context.run(statistics_gen)

for event in store.get_events_by_execution_ids([exec_result.execution_id]):
  if event.path.steps[0].key == 'statistics':
    statistics_w_schema_uri = store.get_artifacts_by_id([event.artifact_id])[0].uri

model_stats = tfdv.load_statistics(
    os.path.join(statistics_w_schema_uri, 'eval/stats_tfrecord/'))
tfdv.visualize_statistics(model_stats)
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.

Tracking a Model Change

Now that we have an idea of how we might improve the fairness of our model, we will first document our initial run in ML Metadata for our own records and for anyone who may review our changes in the future.

ML Metadata can keep a record of our previous models, along with any notes we would like to add between runs. We will add a simple note to our first run, noting that this run was trained on the full COMPAS dataset.

_MODEL_NOTE_TO_ADD = 'First model that contains fairness concerns in the model.'

first_trained_model = store.get_artifacts_by_type('Model')[-1]

# Add the note above to the ML Metadata.
first_trained_model.custom_properties['note'].string_value = _MODEL_NOTE_TO_ADD
store.put_artifacts([first_trained_model])

def _mlmd_model_to_dataframe(model, model_number):
  """Helper function to turn a MLMD modle into a Pandas DataFrame.

  Args:
    model: Metadata store model.
    model_number: Number of model run within ML Metadata.

  Returns:
    DataFrame containing the ML Metadata model.
  """
  pd.set_option('display.max_columns', None)  
  pd.set_option('display.expand_frame_repr', False)

  df = pd.DataFrame()
  custom_properties = ['name', 'note', 'state', 'producer_component',
                       'pipeline_name']
  df['id'] = [model[model_number].id]
  df['uri'] = [model[model_number].uri]
  for prop in custom_properties:
    df[prop] = model[model_number].custom_properties.get(prop)
    df[prop] = df[prop].astype(str).map(
        lambda x: x.lstrip('string_value: "').rstrip('"\n'))
  return df

# Print the current model to see the results of the ML Metadata for the model.
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))

Addressing fairness concerns by weighting the model

There are a number of ways to address fairness concerns within a model. Manipulating the observed data/labels, implementing fairness constraints, or removing prejudice through regularization are some of the techniques 1 that have been used to remediate fairness concerns. In this case study, we will reweight the model by implementing a custom loss function within Keras.
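
To make the first of these techniques concrete before looking at the full Trainer module, here is a purely illustrative sketch of one common reweighting scheme: give every (group, label) combination an inverse-frequency weight so that under-represented combinations contribute proportionally more to the loss. The DataFrame df and its rows below are made up for illustration only; in the pipeline itself, the weights arrive through the transformed sample_weight feature ('sample_weight_xf') consumed by the custom loss further down.

import pandas as pd

# Hypothetical toy data; only the column names mirror the case study.
df = pd.DataFrame({
    'race': ['African-American', 'African-American', 'Caucasian',
             'Caucasian', 'Caucasian', 'African-American'],
    'is_recid': [1, 0, 0, 0, 1, 1],
})

# Inverse-frequency weight per (race, is_recid) cell: rarer combinations get
# larger weights, so each cell contributes roughly equally to the loss.
cell_counts = df.groupby(['race', 'is_recid']).size()
df['sample_weight'] = df.apply(
    lambda row: len(df) / cell_counts[(row['race'], row['is_recid'])], axis=1)

print(df)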

The code that follows is the same as the Trainer module above, except for a new class called LogisticEndpoint that we will use for our loss within Keras, plus a few parameter changes.


  1. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. https://arxiv.org/pdf/1908.09635.pdf
%%writefile {_trainer_module_file}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils

from compas_transform import *

_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999


def transformed_names(keys):
  return [transformed_name(key) for key in keys]


def transformed_name(key):
  return '{}_xf'.format(key)


def _gzip_reader_fn(filenames):
  """Returns a record reader that can read gzip'ed files.

  Args:
    filenames: A tf.string tensor or tf.data.Dataset containing one or more
      filenames.

  Returns:
    A TFRecordDataset that reads the given gzip'ed TFRecord files.
  """
  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')


# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
  """Generates a feature spec from a Schema proto.

  Args:
    schema: A Schema proto.

  Returns:
    A feature spec defined as a dict whose keys are feature names and values are
      instances of FixedLenFeature, VarLenFeature or SparseFeature.
  """
  return schema_utils.schema_as_feature_spec(schema).feature_spec


def _example_serving_receiver_fn(tf_transform_output, schema):
  """Builds the serving in inputs.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    TensorFlow graph which parses examples, applying tf-transform to them.
  """
  raw_feature_spec = _get_raw_feature_spec(schema)
  raw_feature_spec.pop(LABEL_KEY)

  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec)
  serving_input_receiver = raw_input_fn()

  transformed_features = tf_transform_output.transform_raw_features(
      serving_input_receiver.features)
  transformed_features.pop(transformed_name(LABEL_KEY))
  return tf.estimator.export.ServingInputReceiver(
      transformed_features, serving_input_receiver.receiver_tensors)


def _eval_input_receiver_fn(tf_transform_output, schema):
  """Builds everything needed for the tf-model-analysis to run the model.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    EvalInputReceiver function, which contains:

      - TensorFlow graph which parses raw untransformed features, applies the
          tf-transform preprocessing operators.
      - Set of raw, untransformed features.
      - Label against which predictions will be compared.
  """
  # Notice that the inputs are raw features, not transformed features here.
  raw_feature_spec = _get_raw_feature_spec(schema)

  serialized_tf_example = tf.compat.v1.placeholder(
      dtype=tf.string, shape=[None], name='input_example_tensor')

  # Add a parse_example operator to the tensorflow graph, which will parse
  # raw, untransformed, tf examples.
  features = tf.io.parse_example(
      serialized=serialized_tf_example, features=raw_feature_spec)

  transformed_features = tf_transform_output.transform_raw_features(features)
  labels = transformed_features.pop(transformed_name(LABEL_KEY))

  receiver_tensors = {'examples': serialized_tf_example}

  return tfma.export.EvalInputReceiver(
      features=transformed_features,
      receiver_tensors=receiver_tensors,
      labels=labels)


def _input_fn(filenames, tf_transform_output, batch_size=200):
  """Generates features and labels for training or evaluation.

  Args:
    filenames: List of TFRecord files to read data from.
    tf_transform_output: A TFTransformOutput.
    batch_size: First dimension size of the Tensors returned by input_fn.

  Returns:
    A (features, indices) tuple where features is a dictionary of
      Tensors, and indices is a single Tensor of label indices.
  """
  transformed_feature_spec = (
      tf_transform_output.transformed_feature_spec().copy())

  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
      filenames,
      batch_size,
      transformed_feature_spec,
      shuffle=False,
      reader=_gzip_reader_fn)

  transformed_features = dataset.make_one_shot_iterator().get_next()

  # We pop the label because we do not want to use it as a feature while we're
  # training.
  return transformed_features, transformed_features.pop(
      transformed_name(LABEL_KEY))


# TFX will call this function.
def trainer_fn(hparams, schema):
  """Build the estimator using the high level API.

  Args:
    hparams: Hyperparameters used to train the model as name/value pairs.
    schema: Holds the schema of the training examples.

  Returns:
    A dict of the following:

      - estimator: The estimator that will be used for training and eval.
      - train_spec: Spec for training.
      - eval_spec: Spec for eval.
      - eval_input_receiver_fn: Input function for eval.
  """
  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)

  train_input_fn = lambda: _input_fn(
      hparams.train_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  eval_input_fn = lambda: _input_fn(
      hparams.eval_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  train_spec = tf.estimator.TrainSpec(
      train_input_fn,
      max_steps=hparams.train_steps)

  serving_receiver_fn = lambda: _example_serving_receiver_fn(
      tf_transform_output, schema)

  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
  eval_spec = tf.estimator.EvalSpec(
      eval_input_fn,
      steps=hparams.eval_steps,
      exporters=[exporter],
      name='compas-eval')

  run_config = tf.estimator.RunConfig(
      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
      keep_checkpoint_max=_MAX_CHECKPOINTS)

  run_config = run_config.replace(model_dir=hparams.serving_model_dir)

  estimator = tf.keras.estimator.model_to_estimator(
      keras_model=_keras_model_builder(), config=run_config)

  # Create an input receiver for TFMA processing.
  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)

  return {
      'estimator': estimator,
      'train_spec': train_spec,
      'eval_spec': eval_spec,
      'eval_input_receiver_fn': receiver_fn
  }


def _keras_model_builder():
  """Build a keras model for COMPAS dataset classification.

  Returns:
    A compiled Keras model.
  """
  feature_columns = []
  feature_layer_inputs = {}

  for key in transformed_names(INT_FEATURE_KEYS):
    feature_columns.append(tf.feature_column.numeric_column(key))
    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)

  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
                              MAX_CATEGORICAL_FEATURE_VALUES):
    feature_columns.append(
        tf.feature_column.indicator_column(
            tf.feature_column.categorical_column_with_identity(
                key, num_buckets=num_buckets)))
    feature_layer_inputs[key] = tf.keras.Input(
        shape=(1,), name=key, dtype=tf.dtypes.int32)

  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
  feature_layer_outputs = feature_columns_input(feature_layer_inputs)

  dense_layers = tf.keras.layers.Dense(
      20, activation='relu', name='dense_1')(feature_layer_outputs)
  dense_layers = tf.keras.layers.Dense(
      10, activation='relu', name='dense_2')(dense_layers)
  output = tf.keras.layers.Dense(
      1, name='predictions')(dense_layers)

  model = tf.keras.Model(
      inputs=[v for v in feature_layer_inputs.values()], outputs=output)

  # To weight our model we will develop a custom loss class within Keras.
  # The old loss is commented out below and the new one is used in its place.
  model.compile(
      # loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
      loss=LogisticEndpoint(),
      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))

  return model


class LogisticEndpoint(tf.keras.layers.Layer):
  """Custom Keras layer that computes the (optionally weighted) training loss."""

  def __init__(self, name=None):
    super(LogisticEndpoint, self).__init__(name=name)
    self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)

  def __call__(self, y_true, y_pred, sample_weight=None):
    # If no sample weight is passed explicitly, fall back to the transformed
    # sample-weight feature name ('sample_weight_xf').
    inputs = [y_true, y_pred]
    inputs += sample_weight or ['sample_weight_xf']
    return super(LogisticEndpoint, self).__call__(inputs)

  def call(self, inputs):
    y_true, y_pred = inputs[0], inputs[1]
    if len(inputs) == 3:
      sample_weight = inputs[2]
    else:
      sample_weight = None
    # Compute the weighted binary cross-entropy and register it via add_loss
    # so that Keras applies it during training.
    loss = self.loss_fn(y_true, y_pred, sample_weight)
    self.add_loss(loss)
    reduce_loss = tf.math.divide_no_nan(
        tf.math.reduce_sum(tf.nn.softmax(y_pred)), _BATCH_SIZE)
    return reduce_loss
Overwriting compas_trainer.py

Retrain the TFX model with the weighted model

In this next part, we will use the weighted Transform component to rerun the same Trainer model as before, to see the improvement in fairness after applying the weighting.

trainer_weighted = Trainer(
    module_file=_trainer_module_file,
    transformed_examples=transform.outputs['transformed_examples'],
    schema=infer_schema.outputs['schema'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer_weighted)
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
INFO:tensorflow:Using the Keras model provided.
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:434: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
  warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-started 6 variables.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt.
INFO:tensorflow:loss = 0.47077793, step = 0
INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt.
INFO:tensorflow:Starting evaluation at 2021-04-23T09:13:43Z
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt-999
INFO:tensorflow:Finished evaluation at 2021-04-23-09:14:29
INFO:tensorflow:Saving dict for global step 999: global_step = 999, loss = 0.48788843
INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt.
INFO:tensorflow:Starting evaluation at 2021-04-23T09:15:52Z
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Finished evaluation at 2021-04-23-09:16:37
INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.40231007
INFO:tensorflow:Performing the final export in the end of training.
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/serving_model_dir/export/compas/temp-1619169397/saved_model.pb
INFO:tensorflow:Loss for final step: 0.37667033.
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-04-23T09_09_30.909861-b_me_83r/Trainer/model_run/8/eval_model_dir/temp-1619169397/saved_model.pb
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/serving_model_dir/saved_model.pb"
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/eval_model_dir/saved_model.pb"
# Again, we will run TensorFlow Model Analysis and load Fairness Indicators
# to examine the performance change in our weighted model.
model_analyzer_weighted = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer_weighted.outputs['model'],

    eval_config = text_format.Parse("""
      model_specs {
        label_key: 'is_recid'
      }
      metrics_specs {
        metrics {class_name: 'BinaryAccuracy'}
        metrics {class_name: 'AUC'}
        metrics {
          class_name: 'FairnessIndicators'
          config: '{"thresholds": [0.25, 0.5, 0.75]}'
        }
      }
      slicing_specs {
        feature_keys: 'race'
      }
    """, tfma.EvalConfig())
)
context.run(model_analyzer_weighted)
evaluation_uri_weighted = model_analyzer_weighted.outputs['evaluation'].get()[0].uri
eval_result_weighted = tfma.load_eval_result(evaluation_uri_weighted)

multi_eval_results = {
    'Unweighted Model': eval_result,
    'Weighted Model': eval_result_weighted
}
tfma.addons.fairness.view.widget_view.render_fairness_indicator(
    multi_eval_results=multi_eval_results)
FairnessIndicatorViewer(evalName='Unweighted Model', evalNameCompare='Weighted Model', slicingMetrics=[{'slice…

After retraining with the weighted model, we can once again look at the fairness metrics to evaluate any improvements. This time, however, we will use the model-comparison feature in Fairness Indicators to see the difference between the weighted and unweighted models. Although we still see some fairness concerns with the weighted model, the discrepancy is much less pronounced.

The downside, however, is that our AUC and binary accuracy also dropped after weighting the model. A minimal sketch for reading these per-slice values programmatically follows the list below.

  • False Positive Rate @ 0.75
    • African-American: ~1%
      • AUC: 0.47
      • Binary Accuracy: 0.59
    • Caucasian: ~0%
      • AUC: 0.47
      • Binary Accuracy: 0.58
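
The sketch below iterates over the TFMA result loaded above to print each metric per race slice. It is a minimal sketch, assuming the nesting of `EvalResult.slicing_metrics` used by this version of TensorFlow Model Analysis (slice key, then output name, then sub-key, then metric); the exact keys may differ across TFMA versions.

# Minimal sketch: print every metric for every race slice of the weighted model.
# Assumes `eval_result_weighted` was loaded above with tfma.load_eval_result.
for slice_key, metrics_by_output in eval_result_weighted.slicing_metrics:
    # slice_key looks like (('race', 'African-American'),); () is the overall slice.
    slice_name = ', '.join('%s=%s' % kv for kv in slice_key) or 'Overall'
    for output_name, metrics_by_sub_key in metrics_by_output.items():
        for sub_key, metrics in metrics_by_sub_key.items():
            for metric_name, value in metrics.items():
                # Each value is a small dict such as {'doubleValue': 0.47}.
                print(slice_name, metric_name, value)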

Examine the data from the second run

Finally, we can visualize the data with TensorFlow Data Validation, overlay the data changes between the two models, and add an additional note to ML Metadata indicating that this model improved on the fairness issues.

# Pull the URI for the two models that we ran in this case study.
first_model_uri = store.get_artifacts_by_type('ExampleStatistics')[-1].uri
second_model_uri = store.get_artifacts_by_type('ExampleStatistics')[0].uri

# Load the stats for both models.
first_model_stats = tfdv.load_statistics(os.path.join(
    first_model_uri, 'eval/stats_tfrecord/'))
second_model_stats = tfdv.load_statistics(os.path.join(
    second_model_uri, 'eval/stats_tfrecord/'))

# Visualize the statistics between the two models.
tfdv.visualize_statistics(
    lhs_statistics=second_model_stats,
    lhs_name='Sampled Model',
    rhs_statistics=first_model_stats,
    rhs_name='COMPAS Original')
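
As an optional complement to the side-by-side visualization, TFDV can also flag differences programmatically. The sketch below is not part of the original pipeline; it assumes we treat the original COMPAS statistics as the baseline, infers a schema from them, and validates the weighted run's statistics against that schema.

# Minimal sketch: surface data differences between the two runs as TFDV anomalies.
# Assumes `first_model_stats` and `second_model_stats` were loaded above.
baseline_schema = tfdv.infer_schema(statistics=first_model_stats)
anomalies = tfdv.validate_statistics(statistics=second_model_stats,
                                     schema=baseline_schema)
tfdv.display_anomalies(anomalies)
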
# Add a new note within ML Metadata describing the weighted model.
_NOTE_TO_ADD = 'Weighted model between race and is_recid.'

# Pulling the URI for the weighted trained model.
second_trained_model = store.get_artifacts_by_type('Model')[-1]

# Add the note to ML Metadata.
second_trained_model.custom_properties['note'].string_value = _NOTE_TO_ADD
store.put_artifacts([second_trained_model])

display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), -1))
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
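
Because the note is meant for future readers of this pipeline, it is worth showing how it can be read back. The sketch below uses the same `store` handle and artifact-type query as above; it is a minimal example, not part of the original notebook.

# Minimal sketch: read the fairness note back from ML Metadata.
latest_model = store.get_artifacts_by_type('Model')[-1]
if 'note' in latest_model.custom_properties:
    print('Note on model %d: %s'
          % (latest_model.id, latest_model.custom_properties['note'].string_value))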

Conclusion

In this case study, we developed a Keras classifier within a TFX pipeline on the COMPAS dataset to examine fairness issues within the data. After the initial TFX run, the fairness concerns were not immediately apparent until we examined the individual slices of our model by our sensitive feature (in our case, race). After identifying the problems, we were able to trace the source of the fairness issue with TensorFlow Data Validation, identify a method to mitigate the fairness concerns by weighting the model, and track and annotate the changes through ML Metadata. Although we could not fully fix every fairness issue within the dataset, adding a note for future developers to follow allows others to understand the problems we faced while developing this model.

Finally, it is important to note that this case study did not fix the fairness problems that are present in the COMPAS dataset. By improving the fairness concerns in the model, we also reduced the model's AUC and accuracy. What we were able to do, however, was build a model that surfaced the fairness concerns and trace where those problems might be coming from, by following the model's lineage and annotating any concerns about the model in the metadata.

For more information on the problems that pretrial detention prediction can pose, see the FAT* 2018 talk "Understanding the Context and Consequences of Pre-trial Detention".