COMPAS Dataset
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset that contains approximately 18,000 criminal cases from Broward County, Florida, between January 2013 and December 2014. The data contains information about 11,000 unique defendants, including criminal history, demographics, and a risk score intended to represent the defendant’s likelihood of reoffending (recidivism). A machine learning model trained on this data has been used by judges and parole officers to determine whether or not to set bail and whether or not to grant parole.
In 2016, an article published in ProPublica found that the COMPAS model incorrectly predicted that African-American defendants would recidivate at much higher rates than they actually did, while for Caucasian defendants the model made mistakes in the opposite direction, incorrectly predicting that they wouldn’t commit another crime. The authors went on to show that these biases were likely due to an uneven distribution in the data between African-American and Caucasian defendants. Specifically, the ground truth labels of negative examples (a defendant will not commit another crime) and positive examples (a defendant will commit another crime) were disproportionate between the two races. Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature 1, 2, 3, with researchers using it to demonstrate techniques for identifying and remediating fairness concerns. This tutorial from the FAT* 2018 conference illustrates how COMPAS can dramatically impact a defendant’s prospects in the real world.
It is important to note that developing a machine learning model to predict pre-trial detention has a number of important ethical considerations. You can learn more about these issues in the Partnership on AI “Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System.” The Partnership on AI is a multi-stakeholder organization -- of which Google is a member -- that creates guidelines around AI.
We’re using the COMPAS dataset only as an example of how to identify and remediate fairness concerns in data. This dataset is canonical in the algorithmic fairness literature.
About the Tools in this Case Study
TensorFlow Extended (TFX) is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system.
TensorFlow Model Analysis is a library for evaluating machine learning models. Users can evaluate their models on a large amount of data in a distributed manner and view metrics over different slices within a notebook.
Fairness Indicators is a suite of tools built on top of TensorFlow Model Analysis that enables regular evaluation of fairness metrics in product pipelines.
ML Metadata is a library for recording and retrieving the lineage and metadata of ML artifacts such as models, datasets, and metrics. Within TFX, ML Metadata will help us understand the artifacts created in a pipeline; an artifact is the unit of data that is passed between TFX components (see the short sketch after the tool descriptions below).
TensorFlow Data Validation is a library to analyze your data and check for errors that can affect model training or serving.
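As a small illustration of the ML Metadata API described above, the sketch below connects to a SQLite-backed store and lists the recorded artifacts. This is a minimal, optional example; the database path is a placeholder, and later in the case study we will instead query the store created by the TFX InteractiveContext.
# A minimal sketch, assuming a SQLite-backed ML Metadata store. The database
# path is a placeholder and is not used elsewhere in this case study.
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = '/tmp/example_mlmd.sqlite'
connection_config.sqlite.connection_mode = (
    metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE)
store = metadata_store.MetadataStore(connection_config)

# List every artifact (datasets, models, metrics, ...) recorded in the store.
for artifact in store.get_artifacts():
    print(artifact.id, artifact.type_id, artifact.uri)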
Case Study Overview
For the duration of this case study we will define “fairness concerns” as a bias within a model that negatively impacts a slice within our data. Specifically, we’re trying to limit any recidivism prediction that could be biased towards race.
The walkthrough of the case study will proceed as follows:
- Download the data, preprocess, and explore the initial dataset.
- Build a TFX pipeline with the COMPAS dataset using a Keras binary classifier.
- Run our results through TensorFlow Model Analysis and TensorFlow Data Validation, and load Fairness Indicators to explore any potential fairness concerns within our model.
- Use ML Metadata to track all the artifacts for a model that we trained with TFX.
- Weight the initial COMPAS dataset for our second model to account for the uneven distribution between recidivism and race.
- Review the performance changes within the new dataset.
- Check the underlying changes within our TFX pipeline with ML Metadata to understand what changes were made between the two models.
Helpful Resources
This case study is an extension of the case studies below. It is recommended that you work through them first.
Setup
To start, we will install the necessary packages, download the data, and import the required modules for the case study.
To install the required packages for this case study in your notebook, run the pip command below.
1. Wadsworth, C., Vera, F., Piech, C. (2018). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199
2. Chouldechova, A., G’Sell, M. (2017). Fairer and More Accurate, But for Whom? https://arxiv.org/abs/1707.00046
3. Berk, R., et al. (2017). Fairness in Criminal Justice Risk Assessments: The State of the Art. https://arxiv.org/abs/1703.09207
!python -m pip install -q -U pip==20.2
!python -m pip install -q -U \
tensorflow==2.4.1 \
tfx==0.27.0 \
tensorflow-model-analysis==0.27.0 \
tensorflow_data_validation==0.27.0 \
tensorflow-metadata==0.27.0 \
tensorflow-transform==0.27.0 \
ml-metadata==0.27.0 \
tfx-bsl==0.27.1 \
absl-py==0.9
# If prompted, please restart the Colab environment after the pip installs
# as you might run into import errors.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tempfile
import six.moves.urllib as urllib
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
import pandas as pd
from google.protobuf import text_format
from sklearn.utils import shuffle
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import trainer_pb2
from tfx.utils.dsl_utils import external_input
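After the imports, it can be useful to confirm that the installed package versions match the pins above. This is an optional check, not part of the original case study.
# Optional: confirm that the installed package versions match the pins above.
print('TensorFlow version:', tf.__version__)
print('TFX version:', tfx.__version__)
print('TFDV version:', tfdv.__version__)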
Download and preprocess the dataset
# Download the COMPAS dataset and set up the required filepaths.
_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')
_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'
_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'compas-scores-two-years.csv')
data = urllib.request.urlopen(_DATA_PATH)
_COMPAS_DF = pd.read_csv(data)
# To simplify the case study, we will only use the columns that will be used for
# our model.
_COLUMN_NAMES = [
'age',
'c_charge_desc',
'c_charge_degree',
'c_days_from_compas',
'is_recid',
'juv_fel_count',
'juv_misd_count',
'juv_other_count',
'priors_count',
'r_days_from_arrest',
'race',
'sex',
'vr_charge_desc',
]
_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]
# We will use 'is_recid' as our ground truth label, which is a boolean value
# indicating if a defendant committed another crime. Some rows contain -1,
# indicating that there is no data; we will drop these rows from training.
_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]
# Given the distribution between races in this dataset, we will only focus on
# recidivism for African-Americans and Caucasians.
_COMPAS_DF = _COMPAS_DF[
_COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]
# Add a sample weight feature that will be used during the second part of this
# case study to help address fairness concerns.
_COMPAS_DF['sample_weight'] = 0.8
# Write the DataFrame back to a CSV file for our TFX model.
_COMPAS_DF.to_csv(_DATA_FILEPATH, index=False, na_rep='')
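Before building the pipeline, it is worth confirming the uneven label distribution described in the ProPublica analysis. The snippet below is an optional sanity check, not part of the TFX pipeline itself.
# Optional sanity check: tabulate the ground truth label by race to see the
# uneven distribution discussed earlier.
print(_COMPAS_DF.groupby('race')['is_recid'].value_counts(normalize=True))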
Building a TFX Pipeline
There are several TFX Pipeline Components that can be used for a production model, but for the purposes of this case study we will focus on only the components below:
- ExampleGen to read our dataset.
- StatisticsGen to calculate the statistics of our dataset.
- SchemaGen to create a data schema.
- Transform for feature engineering.
- Trainer to run our machine learning model.
Create the InteractiveContext
To run TFX within a notebook, we first will need to create an InteractiveContext to run the components interactively. InteractiveContext will use a temporary directory with an ephemeral ML Metadata database instance. To use your own pipeline root or database, the optional properties pipeline_root and metadata_connection_config may be passed to InteractiveContext.
context = InteractiveContext()
WARNING:absl:InteractiveContext pipeline_root argument not provided: using temporary directory /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_ as root for pipeline outputs. WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/metadata.sqlite.
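If you would rather keep the pipeline outputs and metadata after the notebook session ends, a hedged sketch of passing the optional arguments described above might look like the following; the directory path is a placeholder.
# A minimal sketch, assuming you want a persistent pipeline root and metadata
# database instead of the temporary ones created above. The path below is a
# placeholder; the rest of this case study keeps using the temporary context.
_persistent_root = '/tmp/compas-pipeline'
os.makedirs(_persistent_root, exist_ok=True)
_metadata_config = metadata_store_pb2.ConnectionConfig()
_metadata_config.sqlite.filename_uri = os.path.join(_persistent_root, 'metadata.sqlite')
_metadata_config.sqlite.connection_mode = (
    metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE)
# context = InteractiveContext(
#     pipeline_root=_persistent_root,
#     metadata_connection_config=_metadata_config)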
TFX ExampleGen Component
# The ExampleGen TFX Pipeline component ingests data into TFX pipelines.
# It consumes external files/services to generate Examples which will be read by
# other TFX components. It also provides consistent and configurable
# partitioning, and shuffles the dataset for ML best practice.
example_gen = CsvExampleGen(input=external_input(_DATA_ROOT))
context.run(example_gen)
WARNING:absl:From <ipython-input-1-c7eccc81d86d>:6: external_input (from tfx.utils.dsl_utils) is deprecated and will be removed in a future version. Instructions for updating: external_input is deprecated, directly pass the uri to ExampleGen. WARNING:absl:The "input" argument to the CsvExampleGen component has been deprecated by "input_base". Please update your usage as support for this argument will be removed soon. WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features. WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
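The warnings above note that external_input and the input argument are deprecated. Based on those messages, the equivalent call on newer TFX releases would pass the data directory directly; it is left commented out here so the component built above is not re-run.
# Equivalent, non-deprecated usage suggested by the warnings above
# (shown for reference only):
# example_gen = CsvExampleGen(input_base=_DATA_ROOT)
# context.run(example_gen)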
TFX StatisticsGen Component
# The StatisticsGen TFX pipeline component generates feature statistics over
# both training and serving data, which can be used by other pipeline
# components. StatisticsGen uses Beam to scale to large datasets.
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
context.run(statistics_gen)
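Optionally, the generated statistics can be rendered directly in the notebook with the InteractiveContext; a one-line example:
# Optional: visualize the generated statistics inside the notebook.
context.show(statistics_gen.outputs['statistics'])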
TFX SchemaGen Component
# Some TFX components use a description of your input data called a schema. The
# schema is an instance of schema.proto. It can specify data types for feature
# values, whether a feature has to be present in all examples, allowed value
# ranges, and other properties. A SchemaGen pipeline component will
# automatically generate a schema by inferring types, categories, and ranges
# from the training data.
infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])
context.run(infer_schema)
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_data_validation/utils/stats_util.py:247: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)` WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_data_validation/utils/stats_util.py:247: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)`
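As with the statistics, the inferred schema can be displayed in the notebook; an optional one-liner:
# Optional: display the inferred schema inside the notebook.
context.show(infer_schema.outputs['schema'])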
TFX Transform Component
The Transform component performs data transformations and feature engineering. The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with.
Define some constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook.
The transformations that we will perform in this case study are as follows:
- For string values we will generate a vocabulary that maps each value to an integer via tft.compute_and_apply_vocabulary.
- For integer values we will standardize the column to mean 0 and variance 1 via tft.scale_to_z_score.
- Replace missing values with an empty string or 0, depending on the feature type.
- Append ‘_xf’ to column names to denote the features that were processed in the Transform Component.
Now let's define a module containing the preprocessing_fn() function that we will pass to the Transform component:
# Setup paths for the Transform Component.
_transform_module_file = 'compas_transform.py'
%%writefile {_transform_module_file}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import tensorflow_transform as tft
CATEGORICAL_FEATURE_KEYS = [
'sex',
'race',
'c_charge_desc',
'c_charge_degree',
]
INT_FEATURE_KEYS = [
'age',
'c_days_from_compas',
'juv_fel_count',
'juv_misd_count',
'juv_other_count',
'priors_count',
'sample_weight',
]
LABEL_KEY = 'is_recid'
# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.
MAX_CATEGORICAL_FEATURE_VALUES = [
2,
6,
513,
14,
]
def transformed_name(key):
return '{}_xf'.format(key)
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: Map from feature keys to raw features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in CATEGORICAL_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
vocab_filename=key)
for key in INT_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
# Target label will be to see if the defendant is charged for another crime.
outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
return outputs
def _fill_in_missing(tensor_value):
"""Replaces a missing values in a SparseTensor.
Fills in missing values of `tensor_value` with '' or 0, and converts to a
dense tensor.
Args:
tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size
at most 1 in the second dimension.
Returns:
A rank 1 tensor where missing values of `tensor_value` are filled in.
"""
if not isinstance(tensor_value, tf.sparse.SparseTensor):
return tensor_value
default_value = '' if tensor_value.dtype == tf.string else 0
sparse_tensor = tf.SparseTensor(
tensor_value.indices,
tensor_value.values,
[tensor_value.dense_shape[0], 1])
dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)
return tf.squeeze(dense_tensor, axis=1)
Writing compas_transform.py
# Build and run the Transform Component.
transform = Transform(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
module_file=_transform_module_file
)
context.run(transform)
WARNING:absl:The default value of `force_tf_compat_v1` will change in a future release from `True` to `False`. Since this pipeline has TF 2 behaviors enabled, Transform will use native TF 2 at that point. You can test this behavior now by passing `force_tf_compat_v1=False` or disable it by explicitly setting `force_tf_compat_v1=True` in the Transform component. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tfx/components/transform/executor.py:573: Schema (from tensorflow_transform.tf_metadata.dataset_schema) is deprecated and will be removed in a future version. Instructions for updating: Schema is a deprecated, use schema_utils.schema_from_feature_spec to create a `Schema` WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tfx/components/transform/executor.py:573: Schema (from tensorflow_transform.tf_metadata.dataset_schema) is deprecated and will be removed in a future version. Instructions for updating: Schema is a deprecated, use schema_utils.schema_from_feature_spec to create a `Schema` WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:266: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Use ref() instead. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:266: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Use ref() instead. WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType]] instead. WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType]] instead. WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info. 
INFO:tensorflow:Assets added to graph. INFO:tensorflow:Assets added to graph. INFO:tensorflow:No assets to write. INFO:tensorflow:No assets to write. WARNING:tensorflow:Issue encountered when serializing tft_mapper_use. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. 'Counter' object has no attribute 'name' WARNING:tensorflow:Issue encountered when serializing tft_mapper_use. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. 'Counter' object has no attribute 'name' INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Transform/transform_graph/4/.temp_path/tftransform_tmp/c321aa608c674966aea7cfe4b9d0fea4/saved_model.pb INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Transform/transform_graph/4/.temp_path/tftransform_tmp/c321aa608c674966aea7cfe4b9d0fea4/saved_model.pb INFO:tensorflow:Assets added to graph. INFO:tensorflow:Assets added to graph. INFO:tensorflow:No assets to write. INFO:tensorflow:No assets to write. WARNING:tensorflow:Issue encountered when serializing tft_mapper_use. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. 'Counter' object has no attribute 'name' WARNING:tensorflow:Issue encountered when serializing tft_mapper_use. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. 'Counter' object has no attribute 'name' INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Transform/transform_graph/4/.temp_path/tftransform_tmp/64b846963f9f4ff2a4e9d06a3b0d16f2/saved_model.pb INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Transform/transform_graph/4/.temp_path/tftransform_tmp/64b846963f9f4ff2a4e9d06a3b0d16f2/saved_model.pb WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'> WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. WARNING:tensorflow:Tensorflow version (2.4.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring send_type hint: <class 'NoneType'> WARNING:apache_beam.typehints.typehints:Ignoring return_type hint: <class 'NoneType'> INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Assets added to graph. INFO:tensorflow:Assets added to graph. INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Transform/transform_graph/4/.temp_path/tftransform_tmp/d659944c37cb4a6f9f2fb789116747e5/assets INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Transform/transform_graph/4/.temp_path/tftransform_tmp/d659944c37cb4a6f9f2fb789116747e5/assets INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Transform/transform_graph/4/.temp_path/tftransform_tmp/d659944c37cb4a6f9f2fb789116747e5/saved_model.pb INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Transform/transform_graph/4/.temp_path/tftransform_tmp/d659944c37cb4a6f9f2fb789116747e5/saved_model.pb WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Saver not created because there are no variables in the graph to restore WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: 
"type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Saver not created because there are no variables in the graph to restore WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Saver not created because there are no variables in the graph to restore
TFX Trainer Component
The Trainer component trains a specified TensorFlow model. In order to run the Trainer component we need to create a Python module containing a trainer_fn function that TFX will call and that returns an estimator for our model. If you prefer creating a Keras model, you can do so and then convert it to an estimator using keras.model_to_estimator(). For this case study we will build a Keras model and convert it to an estimator inside trainer_fn.
# Setup paths for the Trainer Component.
_trainer_module_file = 'compas_trainer.py'
%%writefile {_trainer_module_file}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from compas_transform import *
_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999
def transformed_names(keys):
return [transformed_name(key) for key in keys]
def transformed_name(key):
return '{}_xf'.format(key)
def _gzip_reader_fn(filenames):
"""Returns a record reader that can read gzip'ed files.
Args:
filenames: A tf.string tensor or tf.data.Dataset containing one or more
filenames.
Returns:
A TFRecordDataset that reads gzip'ed TFRecord files.
"""
return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
# tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
"""Generates a feature spec from a Schema proto.
Args:
schema: A Schema proto.
Returns:
A feature spec defined as a dict whose keys are feature names and values are
instances of FixedLenFeature, VarLenFeature or SparseFeature.
"""
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _example_serving_receiver_fn(tf_transform_output, schema):
"""Builds the serving in inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
TensorFlow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
transformed_features.pop(transformed_name(LABEL_KEY))
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
"""Builds everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- TensorFlow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(
serialized=serialized_tf_example, features=raw_feature_spec)
transformed_features = tf_transform_output.transform_raw_features(features)
labels = transformed_features.pop(transformed_name(LABEL_KEY))
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=labels)
def _input_fn(filenames, tf_transform_output, batch_size=200):
"""Generates features and labels for training or evaluation.
Args:
filenames: List of TFRecord files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: First dimension size of the Tensors returned by input_fn.
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
filenames,
batch_size,
transformed_feature_spec,
shuffle=False,
reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
def _keras_model_builder():
"""Build a keras model for COMPAS dataset classification.
Returns:
A compiled Keras model.
"""
feature_columns = []
feature_layer_inputs = {}
for key in transformed_names(INT_FEATURE_KEYS):
feature_columns.append(tf.feature_column.numeric_column(key))
feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)
for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES):
feature_columns.append(
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key, num_buckets=num_buckets)))
feature_layer_inputs[key] = tf.keras.Input(
shape=(1,), name=key, dtype=tf.dtypes.int32)
feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_outputs = feature_columns_input(feature_layer_inputs)
dense_layers = tf.keras.layers.Dense(
20, activation='relu', name='dense_1')(feature_layer_outputs)
dense_layers = tf.keras.layers.Dense(
10, activation='relu', name='dense_2')(dense_layers)
output = tf.keras.layers.Dense(
1, name='predictions')(dense_layers)
model = tf.keras.Model(
inputs=[v for v in feature_layer_inputs.values()], outputs=output)
model.compile(
loss=tf.keras.losses.MeanAbsoluteError(),
optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))
return model
# TFX will call this function.
def trainer_fn(hparams, schema):
"""Build the estimator using the high level API.
Args:
hparams: Hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='compas-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
keep_checkpoint_max=_MAX_CHECKPOINTS)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = tf.keras.estimator.model_to_estimator(
keras_model=_keras_model_builder(), config=run_config)
# Create an input receiver for TFMA processing.
receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
Writing compas_trainer.py
# Uses a user-provided Python module that implements a model using TensorFlow's
# Estimator API.
trainer = Trainer(
module_file=_trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer)
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE INFO:tensorflow:Using the Keras model provided. INFO:tensorflow:Using the Keras model provided. /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:434: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model. warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and ' INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } } , '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1} INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } } , '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1} INFO:tensorflow:Not using Distribute Coordinator. INFO:tensorflow:Not using Distribute Coordinator. INFO:tensorflow:Running training and evaluation locally (non-distributed). INFO:tensorflow:Running training and evaluation locally (non-distributed). INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None. INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version. 
Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. WARNING:tensorflow:From compas_trainer.py:136: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code. WARNING:tensorflow:From compas_trainer.py:136: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through `tf.compat.v1`. In all other situations -- namely, eager mode and inside `tf.function` -- you can consume dataset elements using `for elem in dataset: ...` or by explicitly creating iterator via `iterator = iter(dataset)` and fetching its elements via `values = next(iterator)`. Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)` to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code. INFO:tensorflow:Calling model_fn. INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Done calling model_fn. 
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={}) INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={}) INFO:tensorflow:Warm-starting from: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt INFO:tensorflow:Warm-starting from: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/keras/keras_model.ckpt INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES. INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES. INFO:tensorflow:Warm-started 6 variables. INFO:tensorflow:Warm-started 6 variables. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0... INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0... 
INFO:tensorflow:loss = 0.47922394, step = 0 INFO:tensorflow:loss = 0.47922394, step = 0 INFO:tensorflow:global_step/sec: 94.3811 INFO:tensorflow:global_step/sec: 94.3811 INFO:tensorflow:loss = 0.49894086, step = 100 (1.061 sec) INFO:tensorflow:loss = 0.49894086, step = 100 (1.061 sec) INFO:tensorflow:global_step/sec: 96.7212 INFO:tensorflow:global_step/sec: 96.7212 INFO:tensorflow:loss = 0.51356375, step = 200 (1.034 sec) INFO:tensorflow:loss = 0.51356375, step = 200 (1.034 sec) INFO:tensorflow:global_step/sec: 98.2638 INFO:tensorflow:global_step/sec: 98.2638 INFO:tensorflow:loss = 0.50377905, step = 300 (1.017 sec) INFO:tensorflow:loss = 0.50377905, step = 300 (1.017 sec) INFO:tensorflow:global_step/sec: 97.2782 INFO:tensorflow:global_step/sec: 97.2782 INFO:tensorflow:loss = 0.48146102, step = 400 (1.028 sec) INFO:tensorflow:loss = 0.48146102, step = 400 (1.028 sec) INFO:tensorflow:global_step/sec: 97.244 INFO:tensorflow:global_step/sec: 97.244 INFO:tensorflow:loss = 0.46280116, step = 500 (1.028 sec) INFO:tensorflow:loss = 0.46280116, step = 500 (1.028 sec) INFO:tensorflow:global_step/sec: 97.3269 INFO:tensorflow:global_step/sec: 97.3269 INFO:tensorflow:loss = 0.46365348, step = 600 (1.028 sec) INFO:tensorflow:loss = 0.46365348, step = 600 (1.028 sec) INFO:tensorflow:global_step/sec: 97.7121 INFO:tensorflow:global_step/sec: 97.7121 INFO:tensorflow:loss = 0.45898142, step = 700 (1.023 sec) INFO:tensorflow:loss = 0.45898142, step = 700 (1.023 sec) INFO:tensorflow:global_step/sec: 98.9776 INFO:tensorflow:global_step/sec: 98.9776 INFO:tensorflow:loss = 0.47444424, step = 800 (1.010 sec) INFO:tensorflow:loss = 0.47444424, step = 800 (1.010 sec) INFO:tensorflow:global_step/sec: 97.1598 INFO:tensorflow:global_step/sec: 97.1598 INFO:tensorflow:loss = 0.48700526, step = 900 (1.029 sec) INFO:tensorflow:loss = 0.48700526, step = 900 (1.029 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 999... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 999... INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 999... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 999... INFO:tensorflow:Calling model_fn. INFO:tensorflow:Calling model_fn. /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py:2325: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically. warnings.warn('`Model.state_updates` will be removed in a future version. ' INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2021-02-12T10:08:36Z INFO:tensorflow:Starting evaluation at 2021-02-12T10:08:36Z INFO:tensorflow:Graph was finalized. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt-999 INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt-999 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Running local_init_op. 
INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Evaluation [500/5000] INFO:tensorflow:Evaluation [500/5000] INFO:tensorflow:Evaluation [1000/5000] INFO:tensorflow:Evaluation [1000/5000] INFO:tensorflow:Evaluation [1500/5000] INFO:tensorflow:Evaluation [1500/5000] INFO:tensorflow:Evaluation [2000/5000] INFO:tensorflow:Evaluation [2000/5000] INFO:tensorflow:Evaluation [2500/5000] INFO:tensorflow:Evaluation [2500/5000] INFO:tensorflow:Evaluation [3000/5000] INFO:tensorflow:Evaluation [3000/5000] INFO:tensorflow:Evaluation [3500/5000] INFO:tensorflow:Evaluation [3500/5000] INFO:tensorflow:Evaluation [4000/5000] INFO:tensorflow:Evaluation [4000/5000] INFO:tensorflow:Evaluation [4500/5000] INFO:tensorflow:Evaluation [4500/5000] INFO:tensorflow:Evaluation [5000/5000] INFO:tensorflow:Evaluation [5000/5000] INFO:tensorflow:Inference Time : 51.91331s INFO:tensorflow:Inference Time : 51.91331s INFO:tensorflow:Finished evaluation at 2021-02-12-10:09:28 INFO:tensorflow:Finished evaluation at 2021-02-12-10:09:28 INFO:tensorflow:Saving dict for global step 999: global_step = 999, loss = 0.4890774 INFO:tensorflow:Saving dict for global step 999: global_step = 999, loss = 0.4890774 INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt-999 INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt-999 INFO:tensorflow:global_step/sec: 1.87828 INFO:tensorflow:global_step/sec: 1.87828 INFO:tensorflow:loss = 0.5009028, step = 1000 (53.240 sec) INFO:tensorflow:loss = 0.5009028, step = 1000 (53.240 sec) INFO:tensorflow:global_step/sec: 94.1559 INFO:tensorflow:global_step/sec: 94.1559 INFO:tensorflow:loss = 0.4985329, step = 1100 (1.063 sec) INFO:tensorflow:loss = 0.4985329, step = 1100 (1.063 sec) INFO:tensorflow:global_step/sec: 94.9667 INFO:tensorflow:global_step/sec: 94.9667 INFO:tensorflow:loss = 0.49361834, step = 1200 (1.053 sec) INFO:tensorflow:loss = 0.49361834, step = 1200 (1.053 sec) INFO:tensorflow:global_step/sec: 96.5863 INFO:tensorflow:global_step/sec: 96.5863 INFO:tensorflow:loss = 0.4756228, step = 1300 (1.037 sec) INFO:tensorflow:loss = 0.4756228, step = 1300 (1.037 sec) INFO:tensorflow:global_step/sec: 95 INFO:tensorflow:global_step/sec: 95 INFO:tensorflow:loss = 0.47359818, step = 1400 (1.051 sec) INFO:tensorflow:loss = 0.47359818, step = 1400 (1.051 sec) INFO:tensorflow:global_step/sec: 96.1671 INFO:tensorflow:global_step/sec: 96.1671 INFO:tensorflow:loss = 0.46600685, step = 1500 (1.040 sec) INFO:tensorflow:loss = 0.46600685, step = 1500 (1.040 sec) INFO:tensorflow:global_step/sec: 96.0579 INFO:tensorflow:global_step/sec: 96.0579 INFO:tensorflow:loss = 0.4596146, step = 1600 (1.041 sec) INFO:tensorflow:loss = 0.4596146, step = 1600 (1.041 sec) INFO:tensorflow:global_step/sec: 95.5226 INFO:tensorflow:global_step/sec: 95.5226 INFO:tensorflow:loss = 0.46728122, step = 1700 (1.047 sec) INFO:tensorflow:loss = 0.46728122, step = 1700 (1.047 sec) INFO:tensorflow:global_step/sec: 96.9115 INFO:tensorflow:global_step/sec: 96.9115 INFO:tensorflow:loss = 0.46324626, step = 1800 (1.032 sec) INFO:tensorflow:loss = 0.46324626, step = 1800 (1.032 sec) INFO:tensorflow:global_step/sec: 95.911 INFO:tensorflow:global_step/sec: 95.911 INFO:tensorflow:loss = 0.46125287, step = 1900 (1.043 sec) INFO:tensorflow:loss = 
0.46125287, step = 1900 (1.043 sec)
[Training log abridged: the Trainer runs to step 10,000, logging the loss roughly every 100 steps (falling from ~0.46 at step 1,900 to ~0.36 by step 9,900) and saving a checkpoint every 999 steps to /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt.]
INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/model.ckpt.
INFO:tensorflow:Starting evaluation at 2021-02-12T10:11:01Z
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 51.21220s
INFO:tensorflow:Finished evaluation at 2021-02-12-10:11:53
INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.4067987
INFO:tensorflow:Performing the final export in the end of training.
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/serving_model_dir/export/compas/temp-1613124713/saved_model.pb
INFO:tensorflow:Loss for final step: 0.3785927.
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/5/eval_model_dir/temp-1613124713/saved_model.pb
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/serving_model_dir/saved_model.pb
TensorFlow Model Analysis
Now that our model is developed and trained within TFX, we can use several additional components within the TFX ecosystem to understand our model's performance in more detail. By looking at different metrics we can get a better picture of how the model performs for different slices of our data and make sure it is not underperforming for any subgroup.
First we'll examine TensorFlow Model Analysis, which is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in a notebook.
For a list of the metrics that can be added to TensorFlow Model Analysis, see here.
# Uses TensorFlow Model Analysis to compute evaluation statistics over
# features of a model.
model_analyzer = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    eval_config=text_format.Parse("""
        model_specs {
          label_key: 'is_recid'
        }
        metrics_specs {
          metrics {class_name: "BinaryAccuracy"}
          metrics {class_name: "AUC"}
          metrics {
            class_name: "FairnessIndicators"
            config: '{"thresholds": [0.25, 0.5, 0.75]}'
          }
        }
        slicing_specs {
          feature_keys: 'race'
        }
        """, tfma.EvalConfig())
)
context.run(model_analyzer)
Fairness Indicators
Load Fairness Indicators to examine the underlying data.
evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri
eval_result = tfma.load_eval_result(evaluation_uri)
tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)
FairnessIndicatorViewer(slicingMetrics=[{'sliceValue': 'Caucasian', 'slice': 'race:Caucasian', 'metrics': {'bi…
Fairness Indicators allows us to drill down into the performance of different slices and is designed to support teams in evaluating and improving models for fairness concerns. It enables easy computation of commonly used fairness metrics for binary and multiclass classifiers and scales to use cases of any size.
We will load Fairness Indicators into this notebook and take a look at the results. After you have had a moment to explore Fairness Indicators, examine the False Positive Rate and False Negative Rate tabs in the tool. In this case study, we're concerned with reducing the number of false predictions of recidivism, which corresponds to the False Positive Rate.
Within the Fairness Indicators tool you'll see two dropdown options:
- A "Baseline" option that is set by column_for_slicing.
- A "Thresholds" option that is set by fairness_indicator_thresholds.
“Baseline” is the slice you want to compare all other slices to. Most commonly it is the overall slice, but it can also be one of the specific slices.
"Threshold" is the value within a binary classification model above which a prediction is treated as positive. When setting a threshold there are two things to keep in mind.
- Precision: What is the downside if your prediction results in a Type I error? In this case study, a lower threshold means we're predicting that more defendants will commit another crime when they actually won't.
- Recall: What is the downside of a Type II error? In this case study, a higher threshold means we're predicting that more defendants will not commit another crime when they actually do.
We will set an arbitrary threshold of 0.75 and focus only on the fairness metrics for African-American and Caucasian defendants, since the sample sizes for the other races aren't large enough to draw statistically significant conclusions.
The exact rates below might differ slightly depending on how the data was shuffled at the beginning of this case study, but take a look at the difference between African-American and Caucasian defendants. At a lower threshold our model is more likely to predict that a Caucasian defendant will commit a second crime than an African-American defendant; this prediction inverts as we increase the threshold.
- False Positive Rate @ 0.75
  - African-American: ~30%
    - AUC: 0.71
    - Binary Accuracy: 0.67
  - Caucasian: ~8%
    - AUC: 0.71
    - Binary Accuracy: 0.67
More information on Type I/II errors and threshold setting can be found here.
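The widget above is interactive, but the same per-slice numbers can also be read programmatically from the eval_result object loaded earlier. The sketch below is not part of the original pipeline; it assumes the usual TFMA result layout, in which eval_result.slicing_metrics is a list of (slice key, nested metrics dict) pairs and the Fairness Indicators metric names contain the string "false_positive_rate". Adjust the lookup if your TFMA version structures the result differently.
# Sketch (not from the original notebook): print per-slice false positive rates.
# The nesting of slicing_metrics (output name -> sub key -> metric name -> value)
# is an assumption based on the TFMA result format.
for slice_key, metrics_by_output in eval_result.slicing_metrics:
  # slice_key is a tuple of (feature, value) pairs, e.g. (('race', 'Caucasian'),).
  slice_name = ', '.join('{}={}'.format(f, v) for f, v in slice_key) or 'Overall'
  for output_name, metrics_by_sub_key in metrics_by_output.items():
    for sub_key, metric_values in metrics_by_sub_key.items():
      for metric_name, value in metric_values.items():
        if 'false_positive_rate' in metric_name:
          print(slice_name, metric_name, value)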
ML Metadata
To understand where disparity could be coming from and to take a snapshot of our current model, we can use ML Metadata for recording and retrieving metadata associated with our model. ML Metadata is an integral part of TFX, but is designed so that it can be used independently.
For this case study, we will list all of the artifacts that we developed earlier. By cycling through the artifacts, executions, and contexts we get a high-level view of our TFX pipeline and can dig into where any potential issues are coming from. This gives us a baseline overview of how our model was developed and which TFX components helped to produce it.
We will start by laying out the high-level artifact, execution, and context types in our pipeline.
# Connect to the TFX database.
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = os.path.join(
    context.pipeline_root, 'metadata.sqlite')
store = metadata_store.MetadataStore(connection_config)


def _mlmd_type_to_dataframe(mlmd_type):
  """Helper function to turn MLMD types into a Pandas DataFrame.

  Args:
    mlmd_type: Metadata store type.

  Returns:
    DataFrame containing type ID, Name, and Properties.
  """
  pd.set_option('display.max_columns', None)
  pd.set_option('display.expand_frame_repr', False)
  column_names = ['ID', 'Name', 'Properties']
  df = pd.DataFrame(columns=column_names)
  for a_type in mlmd_type:
    mlmd_row = pd.DataFrame([[a_type.id, a_type.name, a_type.properties]],
                            columns=column_names)
    df = df.append(mlmd_row)
  return df
# ML Metadata stores strong-typed Artifacts, Executions, and Contexts.
# First, we can use type APIs to understand what is defined in ML Metadata
# by the current version of TFX. We'll be able to view all the previous runs
# that created our initial model.
print('Artifact Types:')
display(_mlmd_type_to_dataframe(store.get_artifact_types()))
print('\nExecution Types:')
display(_mlmd_type_to_dataframe(store.get_execution_types()))
print('\nContext Types:')
display(_mlmd_type_to_dataframe(store.get_context_types()))
Artifact Types:
Execution Types:
Context Types:
Identify where the fairness issue could be coming from
For each of the artifact, execution, and context types above, we can use ML Metadata to dig into the attributes and see how each part of our ML pipeline was developed.
We'll start by diving into the StatisticsGen artifact to examine the underlying data that we initially fed into the model. By knowing the artifacts within our model we can use ML Metadata and TensorFlow Data Validation to look backward and forward within the pipeline to identify where a potential problem is coming from.
After running the cell below, select Lift (Y=1) in the second chart on the Chart to show tab to see the lift between the different data slices. Within race, the lift for African-American is approximately 1.08 whereas the lift for Caucasian is approximately 0.86.
statistics_gen = StatisticsGen(
    examples=example_gen.outputs['examples'],
    schema=infer_schema.outputs['schema'],
    stats_options=tfdv.StatsOptions(label_feature='is_recid'))
exec_result = context.run(statistics_gen)

for event in store.get_events_by_execution_ids([exec_result.execution_id]):
  if event.path.steps[0].key == 'statistics':
    statistics_w_schema_uri = store.get_artifacts_by_id([event.artifact_id])[0].uri

model_stats = tfdv.load_statistics(
    os.path.join(statistics_w_schema_uri, 'eval/stats_tfrecord/'))
tfdv.visualize_statistics(model_stats)
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[Union[NoneType, str], RecordBatch], _SlicedYKey] instead.
WARNING:root:This input type hint will be ignored and not used for type-checking purposes. Typically, input type hints for a PTransform are single (or nested) types wrapped by a PCollection, or PBegin. Got: Tuple[Tuple[_SlicedXKey, Union[float, int]], _SlicedYKey] instead.
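For reference, the lift that TensorFlow Data Validation reports roughly compares the recidivism rate within a slice to the overall recidivism rate, so a lift above 1.0 means the positive label is over-represented in that slice. The short sketch below only illustrates that calculation; compas_df is a hypothetical pandas DataFrame holding the raw race and is_recid columns and is not defined anywhere in this notebook.
import pandas as pd

# Hedged illustration of the lift calculation: P(is_recid=1 | slice) divided by
# P(is_recid=1) overall. `compas_df` is a hypothetical DataFrame with the raw
# 'race' and 'is_recid' columns; it is not created in this notebook.
def lift_by_slice(df, slice_col, label_col):
  overall_rate = df[label_col].mean()
  return df.groupby(slice_col)[label_col].mean() / overall_rate

# Example (hypothetical): lift_by_slice(compas_df, 'race', 'is_recid')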
Tracking a Model Change
Now that we have an idea of how we could improve the fairness of our model, we will first document our initial run within ML Metadata for our own records and for anyone else who might review our changes in the future.
ML Metadata can keep a log of our past models along with any notes that we would like to add between runs. We'll add a simple note on our first run denoting that it was trained on the full COMPAS dataset.
_MODEL_NOTE_TO_ADD = 'First model that contains fairness concerns in the model.'
first_trained_model = store.get_artifacts_by_type('Model')[-1]
# Add the note above to the ML Metadata.
first_trained_model.custom_properties['note'].string_value = _MODEL_NOTE_TO_ADD
store.put_artifacts([first_trained_model])
def _mlmd_model_to_dataframe(model, model_number):
  """Helper function to turn an MLMD model into a Pandas DataFrame.

  Args:
    model: Metadata store model.
    model_number: Number of model run within ML Metadata.

  Returns:
    DataFrame containing the ML Metadata model.
  """
  pd.set_option('display.max_columns', None)
  pd.set_option('display.expand_frame_repr', False)
  df = pd.DataFrame()
  custom_properties = ['name', 'note', 'state', 'producer_component',
                       'pipeline_name']
  df['id'] = [model[model_number].id]
  df['uri'] = [model[model_number].uri]
  for prop in custom_properties:
    df[prop] = model[model_number].custom_properties.get(prop)
    df[prop] = df[prop].astype(str).map(
        lambda x: x.lstrip('string_value: "').rstrip('"\n'))
  return df
# Print the current model to see the results of the ML Metadata for the model.
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
Improving fairness concerns by weighting the model
There are several ways we can approach fixing fairness concerns within a model. Manipulating the observed data or labels, implementing fairness constraints, and prejudice removal by regularization are some of the techniques1 that have been used to address fairness concerns. In this case study we will reweight the model by implementing a custom loss function in Keras.
The code below is the same as the Trainer component code above, with the exception of a new class called LogisticEndpoint that we will use for our loss within Keras, and a few parameter changes.
- Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. https://arxiv.org/pdf/1908.09635.pdf
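Before looking at the new trainer code, it may help to see what "reweighting" means in practice. The weights consumed below (the sample_weight_xf feature) are produced upstream by the Transform component and are not recomputed here; purely as an illustration of the general idea, the hedged sketch below derives per-example weights that are inversely proportional to the frequency of each (race, label) cell, using a hypothetical compas_df DataFrame.
import pandas as pd

# Illustration only: one common reweighting scheme makes every (group, label)
# cell contribute equally by weighting each example inversely to the size of
# its cell. `compas_df` is a hypothetical DataFrame; the weights the trainer
# below actually consumes ('sample_weight_xf') come from the Transform
# component, not from this function.
def inverse_frequency_weights(df, group_col, label_col):
  cell_size = df.groupby([group_col, label_col])[label_col].transform('count')
  weights = len(df) / cell_size
  return weights / weights.mean()  # normalize so the average weight is 1.0

# Example (hypothetical): inverse_frequency_weights(compas_df, 'race', 'is_recid')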
%%writefile {_trainer_module_file}

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils

from compas_transform import *

_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999


def transformed_names(keys):
  return [transformed_name(key) for key in keys]


def transformed_name(key):
  return '{}_xf'.format(key)


def _gzip_reader_fn(filenames):
  """Returns a record reader that can read gzip'ed files.

  Args:
    filenames: A tf.string tensor or tf.data.Dataset containing one or more
      filenames.

  Returns: A nested structure of tf.TypeSpec objects matching the structure of
    an element of this dataset and specifying the type of individual components.
  """
  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
  """Generates a feature spec from a Schema proto.

  Args:
    schema: A Schema proto.

  Returns:
    A feature spec defined as a dict whose keys are feature names and values
    are instances of FixedLenFeature, VarLenFeature or SparseFeature.
  """
  return schema_utils.schema_as_feature_spec(schema).feature_spec


def _example_serving_receiver_fn(tf_transform_output, schema):
  """Builds the serving inputs.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    TensorFlow graph which parses examples, applying tf-transform to them.
  """
  raw_feature_spec = _get_raw_feature_spec(schema)
  raw_feature_spec.pop(LABEL_KEY)
  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec)
  serving_input_receiver = raw_input_fn()

  transformed_features = tf_transform_output.transform_raw_features(
      serving_input_receiver.features)
  transformed_features.pop(transformed_name(LABEL_KEY))

  return tf.estimator.export.ServingInputReceiver(
      transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
  """Builds everything needed for the tf-model-analysis to run the model.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    EvalInputReceiver function, which contains:
      - TensorFlow graph which parses raw untransformed features, applies the
        tf-transform preprocessing operators.
      - Set of raw, untransformed features.
      - Label against which predictions will be compared.
  """
  # Notice that the inputs are raw features, not transformed features here.
  raw_feature_spec = _get_raw_feature_spec(schema)

  serialized_tf_example = tf.compat.v1.placeholder(
      dtype=tf.string, shape=[None], name='input_example_tensor')

  # Add a parse_example operator to the tensorflow graph, which will parse
  # raw, untransformed, tf examples.
  features = tf.io.parse_example(
      serialized=serialized_tf_example, features=raw_feature_spec)
  transformed_features = tf_transform_output.transform_raw_features(features)

  labels = transformed_features.pop(transformed_name(LABEL_KEY))
  receiver_tensors = {'examples': serialized_tf_example}

  return tfma.export.EvalInputReceiver(
      features=transformed_features,
      receiver_tensors=receiver_tensors,
      labels=labels)
def _input_fn(filenames, tf_transform_output, batch_size=200):
  """Generates features and labels for training or evaluation.

  Args:
    filenames: List of CSV files to read data from.
    tf_transform_output: A TFTransformOutput.
    batch_size: First dimension size of the Tensors returned by input_fn.

  Returns:
    A (features, indices) tuple where features is a dictionary of
    Tensors, and indices is a single Tensor of label indices.
  """
  transformed_feature_spec = (
      tf_transform_output.transformed_feature_spec().copy())

  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
      filenames,
      batch_size,
      transformed_feature_spec,
      shuffle=False,
      reader=_gzip_reader_fn)
  transformed_features = dataset.make_one_shot_iterator().get_next()

  # We pop the label because we do not want to use it as a feature while we're
  # training.
  return transformed_features, transformed_features.pop(
      transformed_name(LABEL_KEY))
# TFX will call this function.
def trainer_fn(hparams, schema):
  """Build the estimator using the high level API.

  Args:
    hparams: Hyperparameters used to train the model as name/value pairs.
    schema: Holds the schema of the training examples.

  Returns:
    A dict of the following:
      - estimator: The estimator that will be used for training and eval.
      - train_spec: Spec for training.
      - eval_spec: Spec for eval.
      - eval_input_receiver_fn: Input function for eval.
  """
  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)

  train_input_fn = lambda: _input_fn(
      hparams.train_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  eval_input_fn = lambda: _input_fn(
      hparams.eval_files,
      tf_transform_output,
      batch_size=_BATCH_SIZE)

  train_spec = tf.estimator.TrainSpec(
      train_input_fn,
      max_steps=hparams.train_steps)

  serving_receiver_fn = lambda: _example_serving_receiver_fn(
      tf_transform_output, schema)

  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
  eval_spec = tf.estimator.EvalSpec(
      eval_input_fn,
      steps=hparams.eval_steps,
      exporters=[exporter],
      name='compas-eval')

  run_config = tf.estimator.RunConfig(
      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
      keep_checkpoint_max=_MAX_CHECKPOINTS)
  run_config = run_config.replace(model_dir=hparams.serving_model_dir)

  estimator = tf.keras.estimator.model_to_estimator(
      keras_model=_keras_model_builder(), config=run_config)

  # Create an input receiver for TFMA processing.
  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)

  return {
      'estimator': estimator,
      'train_spec': train_spec,
      'eval_spec': eval_spec,
      'eval_input_receiver_fn': receiver_fn
  }
def _keras_model_builder():
  """Build a keras model for COMPAS dataset classification.

  Returns:
    A compiled Keras model.
  """
  feature_columns = []
  feature_layer_inputs = {}

  for key in transformed_names(INT_FEATURE_KEYS):
    feature_columns.append(tf.feature_column.numeric_column(key))
    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)

  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
                              MAX_CATEGORICAL_FEATURE_VALUES):
    feature_columns.append(
        tf.feature_column.indicator_column(
            tf.feature_column.categorical_column_with_identity(
                key, num_buckets=num_buckets)))
    feature_layer_inputs[key] = tf.keras.Input(
        shape=(1,), name=key, dtype=tf.dtypes.int32)

  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
  feature_layer_outputs = feature_columns_input(feature_layer_inputs)

  dense_layers = tf.keras.layers.Dense(
      20, activation='relu', name='dense_1')(feature_layer_outputs)
  dense_layers = tf.keras.layers.Dense(
      10, activation='relu', name='dense_2')(dense_layers)
  output = tf.keras.layers.Dense(
      1, name='predictions')(dense_layers)

  model = tf.keras.Model(
      inputs=[v for v in feature_layer_inputs.values()], outputs=output)

  # To weight our model we will develop a custom loss class within Keras.
  # The old loss is commented out and the new one is added below.
  model.compile(
      # loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
      loss=LogisticEndpoint(),
      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))

  return model
class LogisticEndpoint(tf.keras.layers.Layer):

  def __init__(self, name=None):
    super(LogisticEndpoint, self).__init__(name=name)
    self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)

  def __call__(self, y_true, y_pred, sample_weight=None):
    inputs = [y_true, y_pred]
    inputs += sample_weight or ['sample_weight_xf']
    return super(LogisticEndpoint, self).__call__(inputs)

  def call(self, inputs):
    y_true, y_pred = inputs[0], inputs[1]
    if len(inputs) == 3:
      sample_weight = inputs[2]
    else:
      sample_weight = None
    loss = self.loss_fn(y_true, y_pred, sample_weight)
    self.add_loss(loss)
    reduce_loss = tf.math.divide_no_nan(
        tf.math.reduce_sum(tf.nn.softmax(y_pred)), _BATCH_SIZE)
    return reduce_loss
Overwriting compas_trainer.py
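The LogisticEndpoint class above ultimately relies on a sample-weighted binary cross-entropy. As a self-contained sanity check, not taken from the case study, the snippet below uses made-up tensors to show how passing a sample weight to tf.keras.losses.BinaryCrossentropy scales each example's contribution to the loss.
import tensorflow as tf

# Self-contained check with made-up tensors: the weighted binary cross-entropy
# used inside LogisticEndpoint scales each example's loss by its sample weight,
# so up-weighted examples pull the model harder during training.
y_true = tf.constant([[1.0], [0.0], [1.0]])
logits = tf.constant([[1.2], [-0.7], [0.3]])
sample_weight = tf.constant([2.0, 1.0, 0.5])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
print('unweighted loss:', bce(y_true, logits).numpy())
print('weighted loss  :', bce(y_true, logits, sample_weight=sample_weight).numpy())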
Retrain the TFX model with the weighted model
In this next part we will use the weighted Transform component output to rerun the same Trainer as before and see the improvement in fairness after the weighting is applied.
trainer_weighted = Trainer(
    module_file=_trainer_module_file,
    transformed_examples=transform.outputs['transformed_examples'],
    schema=infer_schema.outputs['schema'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer_weighted)
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir', ...}
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
INFO:tensorflow:Warm-starting from: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/keras/keras_model.ckpt
INFO:tensorflow:loss = 0.57258624, step = 0
[Training log abridged: the reweighted Trainer logs its loss roughly every 100 steps, saves a checkpoint every 999 steps, and evaluates the first checkpoint at step 999 (loss = 0.5356221, Inference Time : 50.70643s) before continuing training.]
INFO:tensorflow:loss = 0.5072418, step = 1900 (1.022 sec) INFO:tensorflow:loss
= 0.5072418, step = 1900 (1.022 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1998... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1998... INFO:tensorflow:Saving checkpoints for 1998 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 1998 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1998... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1998... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:global_step/sec: 95.1403 INFO:tensorflow:global_step/sec: 95.1403 INFO:tensorflow:loss = 0.49312633, step = 2000 (1.051 sec) INFO:tensorflow:loss = 0.49312633, step = 2000 (1.051 sec) INFO:tensorflow:global_step/sec: 101.574 INFO:tensorflow:global_step/sec: 101.574 INFO:tensorflow:loss = 0.46199903, step = 2100 (0.985 sec) INFO:tensorflow:loss = 0.46199903, step = 2100 (0.985 sec) INFO:tensorflow:global_step/sec: 98.4059 INFO:tensorflow:global_step/sec: 98.4059 INFO:tensorflow:loss = 0.4787897, step = 2200 (1.016 sec) INFO:tensorflow:loss = 0.4787897, step = 2200 (1.016 sec) INFO:tensorflow:global_step/sec: 100.457 INFO:tensorflow:global_step/sec: 100.457 INFO:tensorflow:loss = 0.45680276, step = 2300 (0.995 sec) INFO:tensorflow:loss = 0.45680276, step = 2300 (0.995 sec) INFO:tensorflow:global_step/sec: 97.968 INFO:tensorflow:global_step/sec: 97.968 INFO:tensorflow:loss = 0.46752274, step = 2400 (1.022 sec) INFO:tensorflow:loss = 0.46752274, step = 2400 (1.022 sec) INFO:tensorflow:global_step/sec: 98.5731 INFO:tensorflow:global_step/sec: 98.5731 INFO:tensorflow:loss = 0.46715197, step = 2500 (1.014 sec) INFO:tensorflow:loss = 0.46715197, step = 2500 (1.014 sec) INFO:tensorflow:global_step/sec: 97.9695 INFO:tensorflow:global_step/sec: 97.9695 INFO:tensorflow:loss = 0.48805335, step = 2600 (1.020 sec) INFO:tensorflow:loss = 0.48805335, step = 2600 (1.020 sec) INFO:tensorflow:global_step/sec: 97.8165 INFO:tensorflow:global_step/sec: 97.8165 INFO:tensorflow:loss = 0.4729743, step = 2700 (1.023 sec) INFO:tensorflow:loss = 0.4729743, step = 2700 (1.023 sec) INFO:tensorflow:global_step/sec: 98.3728 INFO:tensorflow:global_step/sec: 98.3728 INFO:tensorflow:loss = 0.47308907, step = 2800 (1.016 sec) INFO:tensorflow:loss = 0.47308907, step = 2800 (1.016 sec) INFO:tensorflow:global_step/sec: 97.3659 INFO:tensorflow:global_step/sec: 97.3659 INFO:tensorflow:loss = 0.46293324, step = 2900 (1.027 sec) INFO:tensorflow:loss = 0.46293324, step = 2900 (1.027 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 2997... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 2997... INFO:tensorflow:Saving checkpoints for 2997 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 2997 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 2997... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 2997... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). 
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:global_step/sec: 98.4144 INFO:tensorflow:global_step/sec: 98.4144 INFO:tensorflow:loss = 0.4564404, step = 3000 (1.016 sec) INFO:tensorflow:loss = 0.4564404, step = 3000 (1.016 sec) INFO:tensorflow:global_step/sec: 99.926 INFO:tensorflow:global_step/sec: 99.926 INFO:tensorflow:loss = 0.465839, step = 3100 (1.002 sec) INFO:tensorflow:loss = 0.465839, step = 3100 (1.002 sec) INFO:tensorflow:global_step/sec: 99.6795 INFO:tensorflow:global_step/sec: 99.6795 INFO:tensorflow:loss = 0.4838277, step = 3200 (1.003 sec) INFO:tensorflow:loss = 0.4838277, step = 3200 (1.003 sec) INFO:tensorflow:global_step/sec: 97.5392 INFO:tensorflow:global_step/sec: 97.5392 INFO:tensorflow:loss = 0.5124685, step = 3300 (1.025 sec) INFO:tensorflow:loss = 0.5124685, step = 3300 (1.025 sec) INFO:tensorflow:global_step/sec: 100.332 INFO:tensorflow:global_step/sec: 100.332 INFO:tensorflow:loss = 0.49425405, step = 3400 (0.997 sec) INFO:tensorflow:loss = 0.49425405, step = 3400 (0.997 sec) INFO:tensorflow:global_step/sec: 98.303 INFO:tensorflow:global_step/sec: 98.303 INFO:tensorflow:loss = 0.47847462, step = 3500 (1.017 sec) INFO:tensorflow:loss = 0.47847462, step = 3500 (1.017 sec) INFO:tensorflow:global_step/sec: 100.119 INFO:tensorflow:global_step/sec: 100.119 INFO:tensorflow:loss = 0.45442203, step = 3600 (0.999 sec) INFO:tensorflow:loss = 0.45442203, step = 3600 (0.999 sec) INFO:tensorflow:global_step/sec: 98.3327 INFO:tensorflow:global_step/sec: 98.3327 INFO:tensorflow:loss = 0.46036306, step = 3700 (1.017 sec) INFO:tensorflow:loss = 0.46036306, step = 3700 (1.017 sec) INFO:tensorflow:global_step/sec: 98.1763 INFO:tensorflow:global_step/sec: 98.1763 INFO:tensorflow:loss = 0.449889, step = 3800 (1.019 sec) INFO:tensorflow:loss = 0.449889, step = 3800 (1.019 sec) INFO:tensorflow:global_step/sec: 98.1824 INFO:tensorflow:global_step/sec: 98.1824 INFO:tensorflow:loss = 0.47323698, step = 3900 (1.018 sec) INFO:tensorflow:loss = 0.47323698, step = 3900 (1.018 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3996... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3996... INFO:tensorflow:Saving checkpoints for 3996 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 3996 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3996... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3996... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:global_step/sec: 97.0251 INFO:tensorflow:global_step/sec: 97.0251 WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 4000 vs previous value: 4000. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize. WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 4000 vs previous value: 4000. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize. 
INFO:tensorflow:loss = 0.47969532, step = 4000 (1.032 sec) INFO:tensorflow:loss = 0.47969532, step = 4000 (1.032 sec) INFO:tensorflow:global_step/sec: 100.087 INFO:tensorflow:global_step/sec: 100.087 INFO:tensorflow:loss = 0.49813706, step = 4100 (0.998 sec) INFO:tensorflow:loss = 0.49813706, step = 4100 (0.998 sec) INFO:tensorflow:global_step/sec: 100 INFO:tensorflow:global_step/sec: 100 INFO:tensorflow:loss = 0.5126548, step = 4200 (1.000 sec) INFO:tensorflow:loss = 0.5126548, step = 4200 (1.000 sec) INFO:tensorflow:global_step/sec: 98.4966 INFO:tensorflow:global_step/sec: 98.4966 INFO:tensorflow:loss = 0.49735364, step = 4300 (1.015 sec) INFO:tensorflow:loss = 0.49735364, step = 4300 (1.015 sec) INFO:tensorflow:global_step/sec: 98.7413 INFO:tensorflow:global_step/sec: 98.7413 INFO:tensorflow:loss = 0.49041206, step = 4400 (1.013 sec) INFO:tensorflow:loss = 0.49041206, step = 4400 (1.013 sec) INFO:tensorflow:global_step/sec: 98.6944 INFO:tensorflow:global_step/sec: 98.6944 INFO:tensorflow:loss = 0.48148644, step = 4500 (1.013 sec) INFO:tensorflow:loss = 0.48148644, step = 4500 (1.013 sec) INFO:tensorflow:global_step/sec: 97.2658 INFO:tensorflow:global_step/sec: 97.2658 INFO:tensorflow:loss = 0.4657409, step = 4600 (1.028 sec) INFO:tensorflow:loss = 0.4657409, step = 4600 (1.028 sec) INFO:tensorflow:global_step/sec: 100.851 INFO:tensorflow:global_step/sec: 100.851 INFO:tensorflow:loss = 0.46458036, step = 4700 (0.992 sec) INFO:tensorflow:loss = 0.46458036, step = 4700 (0.992 sec) INFO:tensorflow:global_step/sec: 96.6637 INFO:tensorflow:global_step/sec: 96.6637 INFO:tensorflow:loss = 0.4591354, step = 4800 (1.035 sec) INFO:tensorflow:loss = 0.4591354, step = 4800 (1.035 sec) INFO:tensorflow:global_step/sec: 97.7683 INFO:tensorflow:global_step/sec: 97.7683 INFO:tensorflow:loss = 0.47176364, step = 4900 (1.023 sec) INFO:tensorflow:loss = 0.47176364, step = 4900 (1.023 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4995... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4995... INFO:tensorflow:Saving checkpoints for 4995 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 4995 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4995... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4995... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). 
INFO:tensorflow:global_step/sec: 96.4806 INFO:tensorflow:global_step/sec: 96.4806 INFO:tensorflow:loss = 0.45831317, step = 5000 (1.036 sec) INFO:tensorflow:loss = 0.45831317, step = 5000 (1.036 sec) INFO:tensorflow:global_step/sec: 98.8629 INFO:tensorflow:global_step/sec: 98.8629 INFO:tensorflow:loss = 0.4510315, step = 5100 (1.012 sec) INFO:tensorflow:loss = 0.4510315, step = 5100 (1.012 sec) INFO:tensorflow:global_step/sec: 98.2905 INFO:tensorflow:global_step/sec: 98.2905 INFO:tensorflow:loss = 0.446196, step = 5200 (1.018 sec) INFO:tensorflow:loss = 0.446196, step = 5200 (1.018 sec) INFO:tensorflow:global_step/sec: 97.242 INFO:tensorflow:global_step/sec: 97.242 INFO:tensorflow:loss = 0.43933666, step = 5300 (1.028 sec) INFO:tensorflow:loss = 0.43933666, step = 5300 (1.028 sec) INFO:tensorflow:global_step/sec: 97.4923 INFO:tensorflow:global_step/sec: 97.4923 INFO:tensorflow:loss = 0.45289323, step = 5400 (1.026 sec) INFO:tensorflow:loss = 0.45289323, step = 5400 (1.026 sec) INFO:tensorflow:global_step/sec: 98.7767 INFO:tensorflow:global_step/sec: 98.7767 INFO:tensorflow:loss = 0.43395495, step = 5500 (1.012 sec) INFO:tensorflow:loss = 0.43395495, step = 5500 (1.012 sec) INFO:tensorflow:global_step/sec: 98.7646 INFO:tensorflow:global_step/sec: 98.7646 INFO:tensorflow:loss = 0.45283514, step = 5600 (1.012 sec) INFO:tensorflow:loss = 0.45283514, step = 5600 (1.012 sec) INFO:tensorflow:global_step/sec: 97.4594 INFO:tensorflow:global_step/sec: 97.4594 INFO:tensorflow:loss = 0.44984227, step = 5700 (1.026 sec) INFO:tensorflow:loss = 0.44984227, step = 5700 (1.026 sec) INFO:tensorflow:global_step/sec: 98.772 INFO:tensorflow:global_step/sec: 98.772 INFO:tensorflow:loss = 0.4434341, step = 5800 (1.013 sec) INFO:tensorflow:loss = 0.4434341, step = 5800 (1.013 sec) INFO:tensorflow:global_step/sec: 96.5804 INFO:tensorflow:global_step/sec: 96.5804 INFO:tensorflow:loss = 0.4415862, step = 5900 (1.035 sec) INFO:tensorflow:loss = 0.4415862, step = 5900 (1.035 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5994... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5994... INFO:tensorflow:Saving checkpoints for 5994 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 5994 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5994... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5994... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). 
INFO:tensorflow:global_step/sec: 98.6752 INFO:tensorflow:global_step/sec: 98.6752 INFO:tensorflow:loss = 0.42540377, step = 6000 (1.015 sec) INFO:tensorflow:loss = 0.42540377, step = 6000 (1.015 sec) INFO:tensorflow:global_step/sec: 97.8003 INFO:tensorflow:global_step/sec: 97.8003 INFO:tensorflow:loss = 0.4296548, step = 6100 (1.021 sec) INFO:tensorflow:loss = 0.4296548, step = 6100 (1.021 sec) INFO:tensorflow:global_step/sec: 98.751 INFO:tensorflow:global_step/sec: 98.751 INFO:tensorflow:loss = 0.42561662, step = 6200 (1.013 sec) INFO:tensorflow:loss = 0.42561662, step = 6200 (1.013 sec) INFO:tensorflow:global_step/sec: 98.4394 INFO:tensorflow:global_step/sec: 98.4394 INFO:tensorflow:loss = 0.4394623, step = 6300 (1.016 sec) INFO:tensorflow:loss = 0.4394623, step = 6300 (1.016 sec) INFO:tensorflow:global_step/sec: 98.7076 INFO:tensorflow:global_step/sec: 98.7076 INFO:tensorflow:loss = 0.4530936, step = 6400 (1.015 sec) INFO:tensorflow:loss = 0.4530936, step = 6400 (1.015 sec) INFO:tensorflow:global_step/sec: 100.425 INFO:tensorflow:global_step/sec: 100.425 INFO:tensorflow:loss = 0.44297406, step = 6500 (0.994 sec) INFO:tensorflow:loss = 0.44297406, step = 6500 (0.994 sec) INFO:tensorflow:global_step/sec: 96.0748 INFO:tensorflow:global_step/sec: 96.0748 INFO:tensorflow:loss = 0.4397682, step = 6600 (1.041 sec) INFO:tensorflow:loss = 0.4397682, step = 6600 (1.041 sec) INFO:tensorflow:global_step/sec: 99.3254 INFO:tensorflow:global_step/sec: 99.3254 INFO:tensorflow:loss = 0.42386428, step = 6700 (1.007 sec) INFO:tensorflow:loss = 0.42386428, step = 6700 (1.007 sec) INFO:tensorflow:global_step/sec: 98.7462 INFO:tensorflow:global_step/sec: 98.7462 INFO:tensorflow:loss = 0.42940405, step = 6800 (1.012 sec) INFO:tensorflow:loss = 0.42940405, step = 6800 (1.012 sec) INFO:tensorflow:global_step/sec: 98.4818 INFO:tensorflow:global_step/sec: 98.4818 INFO:tensorflow:loss = 0.42235553, step = 6900 (1.015 sec) INFO:tensorflow:loss = 0.42235553, step = 6900 (1.015 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 6993... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 6993... INFO:tensorflow:Saving checkpoints for 6993 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 6993 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 6993... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 6993... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). 
INFO:tensorflow:global_step/sec: 98.073 INFO:tensorflow:global_step/sec: 98.073 INFO:tensorflow:loss = 0.41164792, step = 7000 (1.019 sec) INFO:tensorflow:loss = 0.41164792, step = 7000 (1.019 sec) INFO:tensorflow:global_step/sec: 99.1551 INFO:tensorflow:global_step/sec: 99.1551 INFO:tensorflow:loss = 0.42683214, step = 7100 (1.009 sec) INFO:tensorflow:loss = 0.42683214, step = 7100 (1.009 sec) INFO:tensorflow:global_step/sec: 98.0736 INFO:tensorflow:global_step/sec: 98.0736 INFO:tensorflow:loss = 0.44137853, step = 7200 (1.020 sec) INFO:tensorflow:loss = 0.44137853, step = 7200 (1.020 sec) INFO:tensorflow:global_step/sec: 102.075 INFO:tensorflow:global_step/sec: 102.075 INFO:tensorflow:loss = 0.45727393, step = 7300 (0.980 sec) INFO:tensorflow:loss = 0.45727393, step = 7300 (0.980 sec) INFO:tensorflow:global_step/sec: 97.5912 INFO:tensorflow:global_step/sec: 97.5912 INFO:tensorflow:loss = 0.44265467, step = 7400 (1.025 sec) INFO:tensorflow:loss = 0.44265467, step = 7400 (1.025 sec) INFO:tensorflow:global_step/sec: 97.5234 INFO:tensorflow:global_step/sec: 97.5234 INFO:tensorflow:loss = 0.43695024, step = 7500 (1.025 sec) INFO:tensorflow:loss = 0.43695024, step = 7500 (1.025 sec) INFO:tensorflow:global_step/sec: 97.7086 INFO:tensorflow:global_step/sec: 97.7086 INFO:tensorflow:loss = 0.4321368, step = 7600 (1.024 sec) INFO:tensorflow:loss = 0.4321368, step = 7600 (1.024 sec) INFO:tensorflow:global_step/sec: 99.064 INFO:tensorflow:global_step/sec: 99.064 INFO:tensorflow:loss = 0.42050543, step = 7700 (1.009 sec) INFO:tensorflow:loss = 0.42050543, step = 7700 (1.009 sec) INFO:tensorflow:global_step/sec: 99.7688 INFO:tensorflow:global_step/sec: 99.7688 INFO:tensorflow:loss = 0.43072397, step = 7800 (1.004 sec) INFO:tensorflow:loss = 0.43072397, step = 7800 (1.004 sec) INFO:tensorflow:global_step/sec: 98.3964 INFO:tensorflow:global_step/sec: 98.3964 INFO:tensorflow:loss = 0.4334941, step = 7900 (1.015 sec) INFO:tensorflow:loss = 0.4334941, step = 7900 (1.015 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 7992... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 7992... INFO:tensorflow:Saving checkpoints for 7992 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 7992 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 7992... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 7992... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). 
INFO:tensorflow:global_step/sec: 100.873 INFO:tensorflow:global_step/sec: 100.873 INFO:tensorflow:loss = 0.43893683, step = 8000 (0.991 sec) INFO:tensorflow:loss = 0.43893683, step = 8000 (0.991 sec) INFO:tensorflow:global_step/sec: 98.1373 INFO:tensorflow:global_step/sec: 98.1373 INFO:tensorflow:loss = 0.42160377, step = 8100 (1.019 sec) INFO:tensorflow:loss = 0.42160377, step = 8100 (1.019 sec) INFO:tensorflow:global_step/sec: 98.9972 INFO:tensorflow:global_step/sec: 98.9972 INFO:tensorflow:loss = 0.42020878, step = 8200 (1.009 sec) INFO:tensorflow:loss = 0.42020878, step = 8200 (1.009 sec) INFO:tensorflow:global_step/sec: 100.468 INFO:tensorflow:global_step/sec: 100.468 INFO:tensorflow:loss = 0.41984835, step = 8300 (0.996 sec) INFO:tensorflow:loss = 0.41984835, step = 8300 (0.996 sec) INFO:tensorflow:global_step/sec: 98.1667 INFO:tensorflow:global_step/sec: 98.1667 INFO:tensorflow:loss = 0.4014363, step = 8400 (1.019 sec) INFO:tensorflow:loss = 0.4014363, step = 8400 (1.019 sec) INFO:tensorflow:global_step/sec: 98.6839 INFO:tensorflow:global_step/sec: 98.6839 INFO:tensorflow:loss = 0.40265986, step = 8500 (1.013 sec) INFO:tensorflow:loss = 0.40265986, step = 8500 (1.013 sec) INFO:tensorflow:global_step/sec: 98.179 INFO:tensorflow:global_step/sec: 98.179 INFO:tensorflow:loss = 0.39198336, step = 8600 (1.019 sec) INFO:tensorflow:loss = 0.39198336, step = 8600 (1.019 sec) INFO:tensorflow:global_step/sec: 98.7311 INFO:tensorflow:global_step/sec: 98.7311 INFO:tensorflow:loss = 0.39926583, step = 8700 (1.012 sec) INFO:tensorflow:loss = 0.39926583, step = 8700 (1.012 sec) INFO:tensorflow:global_step/sec: 98.4281 INFO:tensorflow:global_step/sec: 98.4281 INFO:tensorflow:loss = 0.40075237, step = 8800 (1.016 sec) INFO:tensorflow:loss = 0.40075237, step = 8800 (1.016 sec) INFO:tensorflow:global_step/sec: 99.6468 INFO:tensorflow:global_step/sec: 99.6468 INFO:tensorflow:loss = 0.40891582, step = 8900 (1.003 sec) INFO:tensorflow:loss = 0.40891582, step = 8900 (1.003 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991... INFO:tensorflow:Saving checkpoints for 8991 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 8991 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). 
INFO:tensorflow:global_step/sec: 93.0185 INFO:tensorflow:global_step/sec: 93.0185 INFO:tensorflow:loss = 0.40045896, step = 9000 (1.075 sec) INFO:tensorflow:loss = 0.40045896, step = 9000 (1.075 sec) INFO:tensorflow:global_step/sec: 93.7229 INFO:tensorflow:global_step/sec: 93.7229 INFO:tensorflow:loss = 0.39390317, step = 9100 (1.067 sec) INFO:tensorflow:loss = 0.39390317, step = 9100 (1.067 sec) INFO:tensorflow:global_step/sec: 94.2949 INFO:tensorflow:global_step/sec: 94.2949 INFO:tensorflow:loss = 0.39068067, step = 9200 (1.060 sec) INFO:tensorflow:loss = 0.39068067, step = 9200 (1.060 sec) INFO:tensorflow:global_step/sec: 94.8195 INFO:tensorflow:global_step/sec: 94.8195 INFO:tensorflow:loss = 0.39105687, step = 9300 (1.055 sec) INFO:tensorflow:loss = 0.39105687, step = 9300 (1.055 sec) INFO:tensorflow:global_step/sec: 93.4232 INFO:tensorflow:global_step/sec: 93.4232 INFO:tensorflow:loss = 0.38374734, step = 9400 (1.072 sec) INFO:tensorflow:loss = 0.38374734, step = 9400 (1.072 sec) INFO:tensorflow:global_step/sec: 95.8094 INFO:tensorflow:global_step/sec: 95.8094 INFO:tensorflow:loss = 0.3901114, step = 9500 (1.042 sec) INFO:tensorflow:loss = 0.3901114, step = 9500 (1.042 sec) INFO:tensorflow:global_step/sec: 94.098 INFO:tensorflow:global_step/sec: 94.098 INFO:tensorflow:loss = 0.39067653, step = 9600 (1.063 sec) INFO:tensorflow:loss = 0.39067653, step = 9600 (1.063 sec) INFO:tensorflow:global_step/sec: 94.4483 INFO:tensorflow:global_step/sec: 94.4483 INFO:tensorflow:loss = 0.38588935, step = 9700 (1.059 sec) INFO:tensorflow:loss = 0.38588935, step = 9700 (1.059 sec) INFO:tensorflow:global_step/sec: 95.1051 INFO:tensorflow:global_step/sec: 95.1051 INFO:tensorflow:loss = 0.3792027, step = 9800 (1.052 sec) INFO:tensorflow:loss = 0.3792027, step = 9800 (1.052 sec) INFO:tensorflow:global_step/sec: 94.9405 INFO:tensorflow:global_step/sec: 94.9405 INFO:tensorflow:loss = 0.3875277, step = 9900 (1.053 sec) INFO:tensorflow:loss = 0.3875277, step = 9900 (1.053 sec) INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990... INFO:tensorflow:Saving checkpoints for 9990 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 9990 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000... INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000... INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt. INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000... INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000... INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). 
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs). INFO:tensorflow:Calling model_fn. INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2021-02-12T10:14:42Z INFO:tensorflow:Starting evaluation at 2021-02-12T10:14:42Z INFO:tensorflow:Graph was finalized. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt-10000 INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt-10000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Evaluation [500/5000] INFO:tensorflow:Evaluation [500/5000] INFO:tensorflow:Evaluation [1000/5000] INFO:tensorflow:Evaluation [1000/5000] INFO:tensorflow:Evaluation [1500/5000] INFO:tensorflow:Evaluation [1500/5000] INFO:tensorflow:Evaluation [2000/5000] INFO:tensorflow:Evaluation [2000/5000] INFO:tensorflow:Evaluation [2500/5000] INFO:tensorflow:Evaluation [2500/5000] INFO:tensorflow:Evaluation [3000/5000] INFO:tensorflow:Evaluation [3000/5000] INFO:tensorflow:Evaluation [3500/5000] INFO:tensorflow:Evaluation [3500/5000] INFO:tensorflow:Evaluation [4000/5000] INFO:tensorflow:Evaluation [4000/5000] INFO:tensorflow:Evaluation [4500/5000] INFO:tensorflow:Evaluation [4500/5000] INFO:tensorflow:Evaluation [5000/5000] INFO:tensorflow:Evaluation [5000/5000] INFO:tensorflow:Inference Time : 51.00252s INFO:tensorflow:Inference Time : 51.00252s INFO:tensorflow:Finished evaluation at 2021-02-12-10:15:33 INFO:tensorflow:Finished evaluation at 2021-02-12-10:15:33 INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.42188707 INFO:tensorflow:Saving dict for global step 10000: global_step = 10000, loss = 0.42188707 INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt-10000 INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt-10000 INFO:tensorflow:Performing the final export in the end of training. INFO:tensorflow:Performing the final export in the end of training. 
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Calling model_fn. INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Signatures INCLUDED in export for Classify: None INFO:tensorflow:Signatures INCLUDED in export for Classify: None INFO:tensorflow:Signatures INCLUDED in export for Regress: None INFO:tensorflow:Signatures INCLUDED in export for Regress: None INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default'] INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default'] INFO:tensorflow:Signatures INCLUDED in export for Train: None INFO:tensorflow:Signatures INCLUDED in export for Train: None INFO:tensorflow:Signatures INCLUDED in export for Eval: None INFO:tensorflow:Signatures INCLUDED in export for Eval: None INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt-10000 INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt-10000 INFO:tensorflow:Assets added to graph. INFO:tensorflow:Assets added to graph. INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/export/compas/temp-1613124933/assets INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/export/compas/temp-1613124933/assets INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/export/compas/temp-1613124933/saved_model.pb INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/export/compas/temp-1613124933/saved_model.pb INFO:tensorflow:Loss for final step: 0.39652213. INFO:tensorflow:Loss for final step: 0.39652213. 
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_3:0\022\003sex" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_5:0\022\004race" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_7:0\022\rc_charge_desc" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef" value: "\n\013\n\tConst_9:0\022\017c_charge_degree" INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Saver not created because there are no variables in the graph to restore INFO:tensorflow:Calling model_fn. INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Signatures INCLUDED in export for Classify: None INFO:tensorflow:Signatures INCLUDED in export for Classify: None INFO:tensorflow:Signatures INCLUDED in export for Regress: None INFO:tensorflow:Signatures INCLUDED in export for Regress: None INFO:tensorflow:Signatures INCLUDED in export for Predict: None INFO:tensorflow:Signatures INCLUDED in export for Predict: None INFO:tensorflow:Signatures INCLUDED in export for Train: None INFO:tensorflow:Signatures INCLUDED in export for Train: None INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval'] INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval'] WARNING:tensorflow:Export includes no default signature! WARNING:tensorflow:Export includes no default signature! INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt-10000 INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/serving_model_dir/model.ckpt-10000 INFO:tensorflow:Assets added to graph. INFO:tensorflow:Assets added to graph. INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/eval_model_dir/temp-1613124933/assets INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/eval_model_dir/temp-1613124933/assets INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/eval_model_dir/temp-1613124933/saved_model.pb INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2021-02-12T10_07_52.895000-0wa4k2c_/Trainer/model_run/8/eval_model_dir/temp-1613124933/saved_model.pb WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. 
Please use export structure <ModelExportPath>/serving_model_dir/saved_model.pb" WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/eval_model_dir/saved_model.pb"
# Again, we will run TensorFlow Model Analysis and load Fairness Indicators
# to examine the performance change in our weighted model.
model_analyzer_weighted = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer_weighted.outputs['model'],
    eval_config=text_format.Parse("""
        model_specs {
          label_key: 'is_recid'
        }
        metrics_specs {
          metrics {class_name: 'BinaryAccuracy'}
          metrics {class_name: 'AUC'}
          metrics {
            class_name: 'FairnessIndicators'
            config: '{"thresholds": [0.25, 0.5, 0.75]}'
          }
        }
        slicing_specs {
          feature_keys: 'race'
        }
        """, tfma.EvalConfig())
)
context.run(model_analyzer_weighted)

evaluation_uri_weighted = model_analyzer_weighted.outputs['evaluation'].get()[0].uri
eval_result_weighted = tfma.load_eval_result(evaluation_uri_weighted)

multi_eval_results = {
    'Unweighted Model': eval_result,
    'Weighted Model': eval_result_weighted
}
tfma.addons.fairness.view.widget_view.render_fairness_indicator(
    multi_eval_results=multi_eval_results)
After retraining with the weighted model, we can once again look at the fairness metrics to gauge any improvement. This time, however, we will use the model comparison feature within Fairness Indicators to see the difference between the weighted and unweighted models. Although the weighted model still shows some fairness concerns, the discrepancy is far less pronounced.
The drawback, however, is that AUC and binary accuracy have also dropped after weighting the model. The approximate per-slice values are listed below, followed by a short sketch for reading them programmatically.
- False Positive Rate @ 0.75
  - African-American: ~1%
  - Caucasian: ~0%
- AUC
  - African-American: 0.47
  - Caucasian: 0.47
- Binary Accuracy
  - African-American: 0.59
  - Caucasian: 0.58
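These values are read off the Fairness Indicators widget, but they can also be pulled programmatically from the evaluation output. Below is a minimal sketch, assuming the eval_result_weighted object loaded above and the usual slicing_metrics layout of a TFMA EvalResult; the exact nesting of the metric dictionaries and the metric key names can vary between TFMA versions.
# Sketch: dump per-race AUC, binary accuracy, and false positive rate metrics
# from the weighted run. The metric dictionary is assumed to be nested as
# output_name -> sub_key -> metric_name -> value, which may differ by version.
for slice_key, metric_map in eval_result_weighted.slicing_metrics:
    # slice_key is a tuple of (feature, value) pairs, e.g. (('race', 'Caucasian'),).
    slice_name = ', '.join('%s=%s' % pair for pair in slice_key) or 'Overall'
    print(slice_name)
    for output_name, sub_keys in metric_map.items():
        for sub_key, metrics in sub_keys.items():
            for metric_name, value in metrics.items():
                if ('auc' in metric_name or 'accuracy' in metric_name
                        or 'false_positive_rate' in metric_name):
                    print('  %s: %s' % (metric_name, value))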
Examine the data of the second run
Finally, we can visualize the data with TensorFlow Data Validation, overlay the statistics from the two runs to see how the data changed, and add a note to ML Metadata indicating that this model improved on the fairness concerns.
# Pull the URIs for the statistics computed in the two runs of this case study.
first_model_uri = store.get_artifacts_by_type('ExampleStatistics')[-1].uri
second_model_uri = store.get_artifacts_by_type('ExampleStatistics')[0].uri

# Load the stats for both runs.
first_model_stats = tfdv.load_statistics(os.path.join(
    first_model_uri, 'eval/stats_tfrecord/'))
second_model_stats = tfdv.load_statistics(os.path.join(
    second_model_uri, 'eval/stats_tfrecord/'))

# Visualize the statistics between the two runs.
tfdv.visualize_statistics(
    lhs_statistics=second_model_stats,
    lhs_name='Sampled Model',
    rhs_statistics=first_model_stats,
    rhs_name='COMPAS Original')
# Add a new note within ML Metadata describing the weighted model.
_NOTE_TO_ADD = 'Weighted model between race and is_recid.'
# Pull the artifact for the weighted trained model.
second_trained_model = store.get_artifacts_by_type('Model')[-1]
# Add the note to ML Metadata.
second_trained_model.custom_properties['note'].string_value = _NOTE_TO_ADD
store.put_artifacts([second_trained_model])
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), -1))
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
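If a later run needs to find the annotated model again, the note can be read back out of ML Metadata. The following is a small sketch, not part of the pipeline itself, that uses only the store handle and the 'note' custom property set above; models without the property are simply skipped.
# Sketch: list every Model artifact in ML Metadata that carries a 'note'
# custom property, such as the weighted model annotated above.
for model_artifact in store.get_artifacts_by_type('Model'):
    if 'note' in model_artifact.custom_properties:
        print('Model id=%d uri=%s' % (model_artifact.id, model_artifact.uri))
        print('  note: %s' % model_artifact.custom_properties['note'].string_value)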
Conclusion
Within this case study we developed a Keras classifier in a TFX pipeline on the COMPAS dataset to examine fairness concerns within the data. After initially developing the TFX pipeline, the fairness concerns were not apparent until we examined the individual slices of our model by our sensitive feature, in this case race. After identifying the issues, we used TensorFlow Data Validation to track down their source and mitigated the fairness concerns via model weighting, while tracking and annotating the changes with ML Metadata. Although we were not able to fully fix all the fairness concerns within the dataset, adding a note for future developers allows others to understand the issues we faced while developing this model.
Finally, it is important to note that this case study did not fix the fairness issues that are present in the COMPAS dataset. By reducing the fairness concerns in the model we also reduced its AUC and accuracy. What we were able to do, however, was build a model that surfaced the fairness concerns and trace where the problems could be coming from by tracking our model's lineage, while annotating any concerns within the metadata.
For more information on the issues that predicting pre-trial detention can raise, see the FAT* 2018 talk "Understanding the Context and Consequences of Pre-trial Detention".