TFX Estimator Component Tutorial

A Component-by-Component Introduction to TensorFlow Extended (TFX)

This Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).

It covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.

When you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.

Background

This notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.

Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It also serves as a lightweight development environment for your own pipelines, but you should be aware that interactive notebooks differ from production deployments in how they are orchestrated and how they access metadata artifacts.

Orchestration

In a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
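
For reference, here is roughly what an exported pipeline definition looks like when it is handed to an orchestrator. This is a minimal sketch and is not run in this notebook: the component variables (example_gen through trainer) are the ones constructed later in this notebook, the paths are hypothetical, and BeamDagRunner is just one of the available runners.

from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner

_pipeline_root = '/tmp/tfx_pipeline_output'       # hypothetical output root
_metadata_path = '/tmp/tfx_metadata/metadata.db'  # hypothetical MLMD database

tfx_pipeline = pipeline.Pipeline(
    pipeline_name='chicago_taxi_beam',
    pipeline_root=_pipeline_root,
    components=[example_gen, statistics_gen, schema_gen,
                example_validator, transform, trainer],
    metadata_connection_config=metadata.sqlite_metadata_connection_config(
        _metadata_path),
    enable_cache=True)

BeamDagRunner().run(tfx_pipeline)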

Metadata

In a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as on your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the /tmp directory on the Jupyter notebook or Colab server.
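
For illustration, the same store can be opened directly with the ml_metadata package that ships with TFX. This is a minimal sketch, not part of the original notebook; the SQLite path is hypothetical, since the InteractiveContext created below picks a temporary one for you.

from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = '/tmp/some-pipeline-root/metadata.sqlite'  # hypothetical path
connection_config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE

store = metadata_store.MetadataStore(connection_config)

# List every artifact recorded so far (Examples, Schema, Model, ...).
for artifact in store.get_artifacts():
  print(artifact.type_id, artifact.uri)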

Setup

First, we install and import the necessary packages, set up paths, and download data.

Install TFX

pip install -q "tfx>=0.21.1,<0.22" "tensorflow>=2.1,<2.2" "tensorboard>=2.1,<2.3"
ERROR: tensorflow 2.1.1 has requirement tensorboard<2.2.0,>=2.1.0, but you'll have tensorboard 2.2.1 which is incompatible.
ERROR: kubernetes 10.1.0 has requirement pyyaml~=3.12, but you'll have pyyaml 5.3.1 which is incompatible.

Did you restart the runtime?

If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.

Import packages

We import necessary packages, including standard TFX component classes.

import os
import pprint
import tempfile
import urllib

import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()

import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import ExampleValidator
from tfx.components import Pusher
from tfx.components import ResolverNode
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.proto.evaluator_pb2 import SingleSlicingSpec
from tfx.utils.dsl_utils import external_input
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing

%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tfx/orchestration/data_types.py:191: UserWarning: RuntimeParameter is only supported on KubeflowDagRunner currently.
  warnings.warn('RuntimeParameter is only supported on KubeflowDagRunner '

Let's check the library versions.

print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
TensorFlow version: 2.1.1
TFX version: 0.21.4

Set up pipeline paths

# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]

# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')

# This is the path where your model will be pushed for serving.
_serving_model_dir = os.path.join(
    tempfile.mkdtemp(), 'serving_model/taxi_simple')

# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)

Download example data

We download the example dataset for use in our TFX pipeline.

The dataset we're using is the Taxi Trips dataset released by the City of Chicago. The columns in this dataset are:

pickup_community_area, fare, trip_start_month
trip_start_hour, trip_start_day, trip_start_timestamp
pickup_latitude, pickup_longitude, dropoff_latitude
dropoff_longitude, trip_miles, pickup_census_tract
dropoff_census_tract, payment_type, company
trip_seconds, dropoff_community_area, tips

With this dataset, we will build a model that predicts whether a trip results in a large tip (more than 20% of the fare).

_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
('/tmp/tfx-dataec5cgw8n/data.csv', <http.client.HTTPMessage at 0x7fa6a7949828>)

Take a quick look at the CSV file.

%%skip_for_export

!head {_data_filepath}
pickup_community_area,fare,trip_start_month,trip_start_hour,trip_start_day,trip_start_timestamp,pickup_latitude,pickup_longitude,dropoff_latitude,dropoff_longitude,trip_miles,pickup_census_tract,dropoff_census_tract,payment_type,company,trip_seconds,dropoff_community_area,tips
60,27.05,10,2,3,1380593700,41.836150155,-87.648787952,,,12.6,,,Cash,Taxi Affiliation Services,1380,,0.0
10,5.85,10,1,2,1382319000,41.985015101,-87.804532006,,,0.0,,,Cash,Taxi Affiliation Services,180,,0.0
14,16.65,5,7,5,1369897200,41.968069,-87.721559063,,,0.0,,,Cash,Dispatch Taxi Affiliation,1080,,0.0
13,16.45,11,12,3,1446554700,41.983636307,-87.723583185,,,6.9,,,Cash,,780,,0.0
16,32.05,12,1,1,1417916700,41.953582125,-87.72345239,,,15.4,,,Cash,,1200,,0.0
30,38.45,10,10,5,1444301100,41.839086906,-87.714003807,,,14.6,,,Cash,,2580,,0.0
11,14.65,1,1,3,1358213400,41.978829526,-87.771166703,,,5.81,,,Cash,,1080,,0.0
33,3.25,5,17,1,1368985500,41.849246754,-87.624135298,,,0.0,,,Cash,Taxi Affiliation Services,0,,0.0
19,47.65,6,15,4,1372258800,41.927260956,-87.765501609,,,0.0,,,Cash,Taxi Affiliation Services,3480,,0.0
This cell will be skipped during export to pipeline.

Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.

Create the InteractiveContext

Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.

# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
WARNING:absl:InteractiveContext pipeline_root argument not provided: using temporary directory /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy as root for pipeline outputs.
WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/metadata.sqlite.

Run TFX components interactively

In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts.

ExampleGen

The ExampleGen component is usually at the start of a TFX pipeline. It will:

  1. Split data into training and evaluation sets (by default, 2/3 training + 1/3 eval; a sketch of customizing this split follows the list)
  2. Convert data into the tf.Example format
  3. Copy data into the pipeline root directory for other components to access
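
If the default 2:1 split is not what you want, you can pass an output_config to ExampleGen. The following is a minimal sketch that is not used in this notebook; the 4:1 bucket ratio is illustrative.

from tfx.proto import example_gen_pb2

# Hash examples into 4 training buckets and 1 eval bucket (an 80/20 split).
output_config = example_gen_pb2.Output(
    split_config=example_gen_pb2.SplitConfig(splits=[
        example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=4),
        example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1),
    ]))

# This config would then be passed as:
#   CsvExampleGen(input=external_input(_data_root), output_config=output_config)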

ExampleGen takes as input the path to your data source. In our case, this is the _data_root path that contains the downloaded CSV.

example_gen = CsvExampleGen(input=external_input(_data_root))
context.run(example_gen)
INFO:absl:Running driver for CsvExampleGen
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for CsvExampleGen
INFO:absl:Generating examples.
INFO:absl:Using 1 process(es) for Beam pipeline execution.
INFO:absl:Processing input csv data /tmp/tfx-dataec5cgw8n/* to TFExample.
WARNING:root:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
INFO:absl:Examples generated.
INFO:absl:Running publisher for CsvExampleGen
INFO:absl:MetadataStore with DB connection initialized

Let's examine the output artifacts of ExampleGen. This component produces an examples artifact that contains both the training and evaluation splits:

%%skip_for_export

artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
["train", "eval"] /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/CsvExampleGen/examples/1
This cell will be skipped during export to pipeline.

We can also take a look at the first three training examples:

%%skip_for_export

# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'train')

# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
                      for name in os.listdir(train_uri)]

# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")

# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
  serialized_example = tfrecord.numpy()
  example = tf.train.Example()
  example.ParseFromString(serialized_example)
  pp.pprint(example)
features {
  feature {
    key: "company"
    value {
      bytes_list {
        value: "Taxi Affiliation Services"
      }
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 27.049999237060547
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      bytes_list {
        value: "Cash"
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
        value: 60
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      float_list {
        value: 41.836151123046875
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      float_list {
        value: -87.64878845214844
      }
    }
  }
  feature {
    key: "tips"
    value {
      float_list {
        value: 0.0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: 12.600000381469727
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      int64_list {
        value: 1380
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 2
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 10
      }
    }
  }
  feature {
    key: "trip_start_timestamp"
    value {
      int64_list {
        value: 1380593700
      }
    }
  }
}

features {
  feature {
    key: "company"
    value {
      bytes_list {
        value: "Dispatch Taxi Affiliation"
      }
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 16.649999618530273
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      bytes_list {
        value: "Cash"
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
        value: 14
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      float_list {
        value: 41.96806716918945
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      float_list {
        value: -87.7215576171875
      }
    }
  }
  feature {
    key: "tips"
    value {
      float_list {
        value: 0.0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: 0.0
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      int64_list {
        value: 1080
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 5
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 7
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 5
      }
    }
  }
  feature {
    key: "trip_start_timestamp"
    value {
      int64_list {
        value: 1369897200
      }
    }
  }
}

features {
  feature {
    key: "company"
    value {
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 16.450000762939453
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      bytes_list {
        value: "Cash"
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
        value: 13
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      float_list {
        value: 41.98363494873047
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      float_list {
        value: -87.72357940673828
      }
    }
  }
  feature {
    key: "tips"
    value {
      float_list {
        value: 0.0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: 6.900000095367432
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      int64_list {
        value: 780
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 12
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 11
      }
    }
  }
  feature {
    key: "trip_start_timestamp"
    value {
      int64_list {
        value: 1446554700
      }
    }
  }
}

This cell will be skipped during export to pipeline.

Now that ExampleGen has finished ingesting the data, the next step is data analysis.

StatisticsGen

The StatisticsGen component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the TensorFlow Data Validation library.

StatisticsGen takes as input the dataset we just ingested using ExampleGen.

statistics_gen = StatisticsGen(
    examples=example_gen.outputs['examples'])
context.run(statistics_gen)
INFO:absl:Running driver for StatisticsGen
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for StatisticsGen
INFO:absl:Using 1 process(es) for Beam pipeline execution.
INFO:absl:Generating statistics for split train
INFO:absl:Statistics for split train written to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/StatisticsGen/statistics/2/train.
INFO:absl:Generating statistics for split eval
INFO:absl:Statistics for split eval written to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/StatisticsGen/statistics/2/eval.
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_data_validation/arrow/arrow_util.py:239: FutureWarning: Calling .data on ChunkedArray is provided for compatibility after Column was removed, simply drop this attribute
  types.FeaturePath([column_name]), column.data.chunk(0), weights):
INFO:absl:Running publisher for StatisticsGen
INFO:absl:MetadataStore with DB connection initialized

After StatisticsGen finishes running, we can visualize the output statistics. Try playing with the different plots!

%%skip_for_export

context.show(statistics_gen.outputs['statistics'])
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_data_validation/utils/stats_gen_lib.py:366: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`

This cell will be skipped during export to pipeline.
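
The same statistics can also be loaded with TensorFlow Data Validation and compared split by split. This is a minimal sketch, not part of the original notebook: it assumes tensorflow_data_validation is importable (it is installed with TFX) and that each split directory contains a file named stats_tfrecord, the filename written by this TFX release.

import os
import tensorflow_data_validation as tfdv

stats_uri = statistics_gen.outputs['statistics'].get()[0].uri
train_stats = tfdv.load_statistics(os.path.join(stats_uri, 'train', 'stats_tfrecord'))
eval_stats = tfdv.load_statistics(os.path.join(stats_uri, 'eval', 'stats_tfrecord'))

# Render both splits in a single Facets view for easy comparison.
tfdv.visualize_statistics(lhs_statistics=train_stats, rhs_statistics=eval_stats,
                          lhs_name='TRAIN', rhs_name='EVAL')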

SchemaGen

The SchemaGen component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the TensorFlow Data Validation library.

SchemaGen will take as input the statistics that we generated with StatisticsGen, looking at the training split by default.

schema_gen = SchemaGen(
    statistics=statistics_gen.outputs['statistics'],
    infer_feature_shape=False)
context.run(schema_gen)
INFO:absl:Running driver for SchemaGen
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for SchemaGen
INFO:absl:Infering schema from statistics.
INFO:absl:Schema written to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/SchemaGen/schema/3/schema.pbtxt.
INFO:absl:Running publisher for SchemaGen
INFO:absl:MetadataStore with DB connection initialized

After SchemaGen finishes running, we can visualize the generated schema as a table.

%%skip_for_export

context.show(schema_gen.outputs['schema'])
This cell will be skipped during export to pipeline.

Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.

To learn more about schemas, see the SchemaGen documentation.

ExampleValidator

The ExampleValidator component detects anomalies in your data, based on the expectations defined by the schema. It also uses the TensorFlow Data Validation library.

ExampleValidator will take as input the statistics from StatisticsGen, and the schema from SchemaGen.

By default, it compares the statistics from the evaluation split to the schema from the training split.

example_validator = ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_gen.outputs['schema'])
context.run(example_validator)
INFO:absl:Running driver for ExampleValidator
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for ExampleValidator
INFO:absl:Validating schema against the computed statistics.
INFO:absl:Validation complete. Anomalies written to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/ExampleValidator/anomalies/4.
INFO:absl:Running publisher for ExampleValidator
INFO:absl:MetadataStore with DB connection initialized

After ExampleValidator finishes running, we can visualize the anomalies as a table.

%%skip_for_export

context.show(example_validator.outputs['anomalies'])
This cell will be skipped during export to pipeline.

In the anomalies table, we can see that the company feature takes on new values that were not in the training split. This information can be used to debug model performance, understand how your data evolves over time, and identify data errors.

In our case, this company anomaly is innocuous, but the payment_type anomaly could be fixed by revising the schema, as sketched below. We then move on to the next step of transforming the data.
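
One way to revise the schema is with the TensorFlow Data Validation library directly. The following is a minimal sketch, not run in this notebook: it assumes tensorflow_data_validation is importable, that the generated schema file is named schema.pbtxt (as shown in the SchemaGen log above), and that the appended payment_type value is hypothetical. The curated schema file could then be supplied to later runs in place of the auto-generated one.

import os
import tensorflow_data_validation as tfdv

schema_uri = schema_gen.outputs['schema'].get()[0].uri
schema = tfdv.load_schema_text(os.path.join(schema_uri, 'schema.pbtxt'))

# Relax the 'company' domain so values unseen in training are not flagged.
tfdv.get_feature(schema, 'company').distribution_constraints.min_domain_mass = 0.9

# Hypothetical: append whichever value the anomalies table reports as missing
# from the payment_type domain.
tfdv.get_domain(schema, 'payment_type').value.append('Prcard')

# Save the curated schema to a local file for use in later runs.
tfdv.write_schema_text(schema, 'schema_curated.pbtxt')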

Transform

The Transform component performs feature engineering for both training and serving. It uses the TensorFlow Transform library.

Transform will take as input the data from ExampleGen, the schema from SchemaGen, as well as a module that contains user-defined Transform code.

Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, see the tutorial). First, we define a few constants for feature engineering:

_taxi_constants_module_file = 'taxi_constants.py'
%%skip_for_export
%%writefile {_taxi_constants_module_file}

# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]

CATEGORICAL_FEATURE_KEYS = [
    'trip_start_hour', 'trip_start_day', 'trip_start_month',
    'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
    'dropoff_community_area'
]

DENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']

# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10

BUCKET_FEATURE_KEYS = [
    'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
    'dropoff_longitude'
]

# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 1000

# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10

VOCAB_FEATURE_KEYS = [
    'payment_type',
    'company',
]

# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'

def transformed_name(key):
  return key + '_xf'
Writing taxi_constants.py
This cell will be skipped during export to pipeline.

Next, we write a preprocessing_fn that takes in raw data as input, and returns transformed features that our model can train on:

_taxi_transform_module_file = 'taxi_transform.py'
%%skip_for_export
%%writefile {_taxi_transform_module_file}

import tensorflow as tf
import tensorflow_transform as tft

import taxi_constants

_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY
_transformed_name = taxi_constants.transformed_name


def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.
  Args:
    inputs: map from feature keys to raw not-yet-transformed features.
  Returns:
    Map from string feature key to transformed feature operations.
  """
  outputs = {}
  for key in _DENSE_FLOAT_FEATURE_KEYS:
    # Preserve this feature as a dense float, setting nan's to the mean.
    outputs[_transformed_name(key)] = tft.scale_to_z_score(
        _fill_in_missing(inputs[key]))

  for key in _VOCAB_FEATURE_KEYS:
    # Build a vocabulary for this feature.
    outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
        _fill_in_missing(inputs[key]),
        top_k=_VOCAB_SIZE,
        num_oov_buckets=_OOV_SIZE)

  for key in _BUCKET_FEATURE_KEYS:
    outputs[_transformed_name(key)] = tft.bucketize(
        _fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)

  for key in _CATEGORICAL_FEATURE_KEYS:
    outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])

  # Was this passenger a big tipper?
  taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
  tips = _fill_in_missing(inputs[_LABEL_KEY])
  outputs[_transformed_name(_LABEL_KEY)] = tf.where(
      tf.math.is_nan(taxi_fare),
      tf.cast(tf.zeros_like(taxi_fare), tf.int64),
      # Test if the tip was > 20% of the fare.
      tf.cast(
          tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))

  return outputs


def _fill_in_missing(x):
  """Replace missing values in a SparseTensor.
  Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
  Args:
    x: A `SparseTensor` of rank 2.  Its dense shape should have size at most 1
      in the second dimension.
  Returns:
    A rank 1 tensor where missing values of `x` have been filled in.
  """
  default_value = '' if x.dtype == tf.string else 0
  return tf.squeeze(
      tf.sparse.to_dense(
          tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
          default_value),
      axis=1)
Writing taxi_transform.py
This cell will be skipped during export to pipeline.

Now, we pass this feature engineering code to the Transform component and run it to transform our data.

transform = Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform)
INFO:absl:Running driver for Transform
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for Transform

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tfx/components/transform/executor.py:511: Schema (from tensorflow_transform.tf_metadata.dataset_schema) is deprecated and will be removed in a future version.
Instructions for updating:
Schema is a deprecated, use schema_utils.schema_from_feature_spec to create a `Schema`

INFO:absl:Using 1 process(es) for Beam pipeline execution.

Warning:tensorflow:Tensorflow version (2.1.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_core/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Transform/transform_graph/5/.temp_path/tftransform_tmp/dd22fca53876474cba56cb819d8804ba/saved_model.pb
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
WARNING:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Transform/transform_graph/5/.temp_path/tftransform_tmp/281f5c983fc946e4ab458488fe1ce391/saved_model.pb
WARNING:tensorflow:Tensorflow version (2.1.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
WARNING:tensorflow:Tensorflow version (2.1.1) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Transform/transform_graph/5/.temp_path/tftransform_tmp/cbb9359a25544998bbf9e19aa618c67e/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Transform/transform_graph/5/.temp_path/tftransform_tmp/cbb9359a25544998bbf9e19aa618c67e/saved_model.pb
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022-vocab_compute_and_apply_vocabulary_vocabulary"

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022/vocab_compute_and_apply_vocabulary_1_vocabulary"

INFO:tensorflow:Saver not created because there are no variables in the graph to restore
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022-vocab_compute_and_apply_vocabulary_vocabulary"

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022/vocab_compute_and_apply_vocabulary_1_vocabulary"

INFO:tensorflow:Saver not created because there are no variables in the graph to restore
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022-vocab_compute_and_apply_vocabulary_vocabulary"

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022/vocab_compute_and_apply_vocabulary_1_vocabulary"

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:absl:Running publisher for Transform
INFO:absl:MetadataStore with DB connection initialized

Let's examine the output artifacts of Transform. This component produces two types of outputs:

  • transform_graph is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
  • transformed_examples represents the preprocessed training and evaluation data.
%%skip_for_export

transform.outputs
{'transform_graph': Channel(
    type_name: TransformGraph
    artifacts: [Artifact(type_name: TransformGraph, uri: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Transform/transform_graph/5, id: 6)]
), 'transformed_examples': Channel(
    type_name: Examples
    artifacts: [Artifact(type_name: Examples, uri: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Transform/transformed_examples/5, id: 7)]
)}
This cell will be skipped during export to pipeline.

Take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.

%%skip_for_export

train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
['transform_fn', 'transformed_metadata', 'metadata']
This cell will be skipped during export to pipeline.

The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data.
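
These directories can be loaded back as a tft.TFTransformOutput, which is exactly what the Trainer module below does. Here is a minimal sketch, assuming tensorflow_transform is importable (it is installed with TFX).

import tensorflow_transform as tft

tft_output = tft.TFTransformOutput(
    transform.outputs['transform_graph'].get()[0].uri)

# Feature spec the model will see after preprocessing ...
pp.pprint(tft_output.transformed_feature_spec())

# ... and the post-transform schema.
print(tft_output.transformed_metadata.schema)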

We can also take a look at the first three transformed examples:

%%skip_for_export

# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'train')

# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
                      for name in os.listdir(train_uri)]

# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")

# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
  serialized_example = tfrecord.numpy()
  example = tf.train.Example()
  example.ParseFromString(serialized_example)
  pp.pprint(example)
features {
  feature {
    key: "company_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_census_tract_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_community_area_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_latitude_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_longitude_xf"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "fare_xf"
    value {
      float_list {
        value: 1.5174685716629028
      }
    }
  }
  feature {
    key: "payment_type_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_census_tract_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_community_area_xf"
    value {
      int64_list {
        value: 60
      }
    }
  }
  feature {
    key: "pickup_latitude_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_longitude_xf"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "tips_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "trip_miles_xf"
    value {
      float_list {
        value: 1.6439942121505737
      }
    }
  }
  feature {
    key: "trip_seconds_xf"
    value {
      float_list {
        value: 0.7374526262283325
      }
    }
  }
  feature {
    key: "trip_start_day_xf"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "trip_start_hour_xf"
    value {
      int64_list {
        value: 2
      }
    }
  }
  feature {
    key: "trip_start_month_xf"
    value {
      int64_list {
        value: 10
      }
    }
  }
}

features {
  feature {
    key: "company_xf"
    value {
      int64_list {
        value: 1
      }
    }
  }
  feature {
    key: "dropoff_census_tract_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_community_area_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_latitude_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_longitude_xf"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "fare_xf"
    value {
      float_list {
        value: 0.4919655919075012
      }
    }
  }
  feature {
    key: "payment_type_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_census_tract_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_community_area_xf"
    value {
      int64_list {
        value: 14
      }
    }
  }
  feature {
    key: "pickup_latitude_xf"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "pickup_longitude_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "tips_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "trip_miles_xf"
    value {
      float_list {
        value: -0.44967928528785706
      }
    }
  }
  feature {
    key: "trip_seconds_xf"
    value {
      float_list {
        value: 0.37378188967704773
      }
    }
  }
  feature {
    key: "trip_start_day_xf"
    value {
      int64_list {
        value: 5
      }
    }
  }
  feature {
    key: "trip_start_hour_xf"
    value {
      int64_list {
        value: 7
      }
    }
  }
  feature {
    key: "trip_start_month_xf"
    value {
      int64_list {
        value: 5
      }
    }
  }
}

features {
  feature {
    key: "company_xf"
    value {
      int64_list {
        value: 56
      }
    }
  }
  feature {
    key: "dropoff_census_tract_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_community_area_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_latitude_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_longitude_xf"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "fare_xf"
    value {
      float_list {
        value: 0.4722445011138916
      }
    }
  }
  feature {
    key: "payment_type_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_census_tract_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_community_area_xf"
    value {
      int64_list {
        value: 13
      }
    }
  }
  feature {
    key: "pickup_latitude_xf"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "pickup_longitude_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "tips_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "trip_miles_xf"
    value {
      float_list {
        value: 0.6968561410903931
      }
    }
  }
  feature {
    key: "trip_seconds_xf"
    value {
      float_list {
        value: 0.010111115872859955
      }
    }
  }
  feature {
    key: "trip_start_day_xf"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "trip_start_hour_xf"
    value {
      int64_list {
        value: 12
      }
    }
  }
  feature {
    key: "trip_start_month_xf"
    value {
      int64_list {
        value: 11
      }
    }
  }
}

This cell will be skipped during export to pipeline.

After the Transform component has transformed your data into features, the next step is to train a model.

Trainer

The Trainer component will train a model that you define in TensorFlow (either using the Estimator API or the Keras API with model_to_estimator).

Trainer takes as input the schema from SchemaGen, the transformed data and graph from Transform, training parameters, as well as a module that contains user-defined model code.

Let's see an example of user-defined model code below (for an introduction to the TensorFlow Estimator APIs, see the tutorial):

_taxi_trainer_module_file = 'taxi_trainer.py'
%%skip_for_export
%%writefile {_taxi_trainer_module_file}

import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils

import taxi_constants

_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = taxi_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = taxi_constants.LABEL_KEY
_transformed_name = taxi_constants.transformed_name


def _transformed_names(keys):
  return [_transformed_name(key) for key in keys]


# Tf.Transform considers these features as "raw"
def _get_raw_feature_spec(schema):
  return schema_utils.schema_as_feature_spec(schema).feature_spec


def _gzip_reader_fn(filenames):
  """Small utility returning a record reader that can read gzip'ed files."""
  return tf.data.TFRecordDataset(
      filenames,
      compression_type='GZIP')


def _build_estimator(config, hidden_units=None, warm_start_from=None):
  """Build an estimator for predicting the tipping behavior of taxi riders.
  Args:
    config: tf.estimator.RunConfig defining the runtime environment for the
      estimator (including model_dir).
    hidden_units: [int], the layer sizes of the DNN (input layer first)
    warm_start_from: Optional directory to warm start from.
  Returns:
    The estimator that will be used for training and eval.
  """
  real_valued_columns = [
      tf.feature_column.numeric_column(key, shape=())
      for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
  ]
  categorical_columns = [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
      for key in _transformed_names(_VOCAB_FEATURE_KEYS)
  ]
  categorical_columns += [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
      for key in _transformed_names(_BUCKET_FEATURE_KEYS)
  ]
  categorical_columns += [
      tf.feature_column.categorical_column_with_identity(  # pylint: disable=g-complex-comprehension
          key,
          num_buckets=num_buckets,
          default_value=0) for key, num_buckets in zip(
              _transformed_names(_CATEGORICAL_FEATURE_KEYS),
              _MAX_CATEGORICAL_FEATURE_VALUES)
  ]
  return tf.estimator.DNNLinearCombinedClassifier(
      config=config,
      linear_feature_columns=categorical_columns,
      dnn_feature_columns=real_valued_columns,
      dnn_hidden_units=hidden_units or [100, 70, 50, 25],
      warm_start_from=warm_start_from)


def _example_serving_receiver_fn(tf_transform_graph, schema):
  """Build the serving in inputs.
  Args:
    tf_transform_graph: A TFTransformOutput.
    schema: the schema of the input data.
  Returns:
    Tensorflow graph which parses examples, applying tf-transform to them.
  """
  raw_feature_spec = _get_raw_feature_spec(schema)
  raw_feature_spec.pop(_LABEL_KEY)

  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec, default_batch_size=None)
  serving_input_receiver = raw_input_fn()

  transformed_features = tf_transform_graph.transform_raw_features(
      serving_input_receiver.features)

  return tf.estimator.export.ServingInputReceiver(
      transformed_features, serving_input_receiver.receiver_tensors)


def _eval_input_receiver_fn(tf_transform_graph, schema):
  """Build everything needed for the tf-model-analysis to run the model.
  Args:
    tf_transform_graph: A TFTransformOutput.
    schema: the schema of the input data.
  Returns:
    EvalInputReceiver function, which contains:

      - Tensorflow graph which parses raw untransformed features, applies the
        tf-transform preprocessing operators.
      - Set of raw, untransformed features.
      - Label against which predictions will be compared.
  """
  # Notice that the inputs are raw features, not transformed features here.
  raw_feature_spec = _get_raw_feature_spec(schema)

  serialized_tf_example = tf.compat.v1.placeholder(
      dtype=tf.string, shape=[None], name='input_example_tensor')

  # Add a parse_example operator to the tensorflow graph, which will parse
  # raw, untransformed, tf examples.
  features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)

  # Now that we have our raw examples, process them through the tf-transform
  # function computed during the preprocessing step.
  transformed_features = tf_transform_graph.transform_raw_features(
      features)

  # The key name MUST be 'examples'.
  receiver_tensors = {'examples': serialized_tf_example}

  # NOTE: Model is driven by transformed features (since training works on the
  # materialized output of TFT), but slicing will happen on raw features.
  features.update(transformed_features)

  return tfma.export.EvalInputReceiver(
      features=features,
      receiver_tensors=receiver_tensors,
      labels=transformed_features[_transformed_name(_LABEL_KEY)])


def _input_fn(filenames, tf_transform_graph, batch_size=200):
  """Generates features and labels for training or evaluation.
  Args:
    filenames: [str] list of gzipped TFRecord files to read data from.
    tf_transform_graph: A TFTransformOutput.
    batch_size: int, first dimension size of the Tensors returned by input_fn.
  Returns:
    A (features, indices) tuple where features is a dictionary of
      Tensors, and indices is a single Tensor of label indices.
  """
  transformed_feature_spec = (
      tf_transform_graph.transformed_feature_spec().copy())

  dataset = tf.data.experimental.make_batched_features_dataset(
      filenames, batch_size, transformed_feature_spec, reader=_gzip_reader_fn)

  transformed_features = (
      tf.compat.v1.data.make_one_shot_iterator(dataset).get_next())
  # We pop the label because we do not want to use it as a feature while we're
  # training.
  return transformed_features, transformed_features.pop(
      _transformed_name(_LABEL_KEY))


# TFX will call this function
def trainer_fn(trainer_fn_args, schema):
  """Build the estimator using the high level API.
  Args:
    trainer_fn_args: Holds args used to train the model as name/value pairs.
    schema: Holds the schema of the training examples.
  Returns:
    A dict of the following:

      - estimator: The estimator that will be used for training and eval.
      - train_spec: Spec for training.
      - eval_spec: Spec for eval.
      - eval_input_receiver_fn: Input function for eval.
  """
  # Number of nodes in the first layer of the DNN
  first_dnn_layer_size = 100
  num_dnn_layers = 4
  dnn_decay_factor = 0.7

  train_batch_size = 40
  eval_batch_size = 40

  tf_transform_graph = tft.TFTransformOutput(trainer_fn_args.transform_output)

  train_input_fn = lambda: _input_fn(  # pylint: disable=g-long-lambda
      trainer_fn_args.train_files,
      tf_transform_graph,
      batch_size=train_batch_size)

  eval_input_fn = lambda: _input_fn(  # pylint: disable=g-long-lambda
      trainer_fn_args.eval_files,
      tf_transform_graph,
      batch_size=eval_batch_size)

  train_spec = tf.estimator.TrainSpec(  # pylint: disable=g-long-lambda
      train_input_fn,
      max_steps=trainer_fn_args.train_steps)

  serving_receiver_fn = lambda: _example_serving_receiver_fn(  # pylint: disable=g-long-lambda
      tf_transform_graph, schema)

  exporter = tf.estimator.FinalExporter('chicago-taxi', serving_receiver_fn)
  eval_spec = tf.estimator.EvalSpec(
      eval_input_fn,
      steps=trainer_fn_args.eval_steps,
      exporters=[exporter],
      name='chicago-taxi-eval')

  run_config = tf.estimator.RunConfig(
      save_checkpoints_steps=999, keep_checkpoint_max=1)

  run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)

  estimator = _build_estimator(
      # Construct layer sizes with exponential decay
      hidden_units=[
          max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
          for i in range(num_dnn_layers)
      ],
      config=run_config,
      warm_start_from=trainer_fn_args.base_model)

  # Create an input receiver for TFMA processing
  receiver_fn = lambda: _eval_input_receiver_fn(  # pylint: disable=g-long-lambda
      tf_transform_graph, schema)

  return {
      'estimator': estimator,
      'train_spec': train_spec,
      'eval_spec': eval_spec,
      'eval_input_receiver_fn': receiver_fn
  }
Writing taxi_trainer.py
This cell will be skipped during export to pipeline.

Now, we pass in this model code to the Trainer component and run it to train the model.

trainer = Trainer(
    module_file=os.path.abspath(_taxi_trainer_module_file),
    transformed_examples=transform.outputs['transformed_examples'],
    schema=schema_gen.outputs['schema'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000))
context.run(trainer)
INFO:absl:Running driver for Trainer
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for Trainer

INFO:tensorflow:Using config: {'_model_dir': '/tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}

INFO:absl:Training model.

INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1635: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:518: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_core/python/keras/optimizer_v2/adagrad.py:103: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:loss = 0.6984333, step = 0
INFO:tensorflow:global_step/sec: 72.2149
INFO:tensorflow:loss = 0.5862382, step = 100 (1.386 sec)
INFO:tensorflow:global_step/sec: 104.783
INFO:tensorflow:loss = 0.54194295, step = 200 (0.954 sec)
INFO:tensorflow:global_step/sec: 102.054
INFO:tensorflow:loss = 0.49397546, step = 300 (0.980 sec)
INFO:tensorflow:global_step/sec: 102.861
INFO:tensorflow:loss = 0.5485538, step = 400 (0.972 sec)
INFO:tensorflow:global_step/sec: 103.501
INFO:tensorflow:loss = 0.5183837, step = 500 (0.966 sec)
INFO:tensorflow:global_step/sec: 101.767
INFO:tensorflow:loss = 0.50588, step = 600 (0.982 sec)
INFO:tensorflow:global_step/sec: 103.562
INFO:tensorflow:loss = 0.5133789, step = 700 (0.966 sec)
INFO:tensorflow:global_step/sec: 102.542
INFO:tensorflow:loss = 0.45994943, step = 800 (0.975 sec)
INFO:tensorflow:global_step/sec: 103.509
INFO:tensorflow:loss = 0.45925412, step = 900 (0.966 sec)
INFO:tensorflow:Saving checkpoints for 999 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py:963: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-05-27T09:11:36Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt-999
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 49.52512s
INFO:tensorflow:Finished evaluation at 2020-05-27-09:12:26
INFO:tensorflow:Saving dict for global step 999: accuracy = 0.777735, accuracy_baseline = 0.777735, auc = 0.9272522, auc_precision_recall = 0.6637888, average_loss = 0.4552333, global_step = 999, label/mean = 0.222265, loss = 0.4552343, precision = 0.0, prediction/mean = 0.2485783, recall = 0.0
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt-999
INFO:tensorflow:global_step/sec: 1.92066
INFO:tensorflow:loss = 0.44744802, step = 1000 (52.065 sec)
INFO:tensorflow:global_step/sec: 102.351
INFO:tensorflow:loss = 0.44944835, step = 1100 (0.977 sec)
INFO:tensorflow:global_step/sec: 100.169
INFO:tensorflow:loss = 0.53610814, step = 1200 (0.998 sec)
INFO:tensorflow:global_step/sec: 103.944
INFO:tensorflow:loss = 0.47364718, step = 1300 (0.962 sec)
INFO:tensorflow:global_step/sec: 103.717
INFO:tensorflow:loss = 0.44870013, step = 1400 (0.964 sec)
INFO:tensorflow:global_step/sec: 103.965
INFO:tensorflow:loss = 0.4782989, step = 1500 (0.962 sec)
INFO:tensorflow:global_step/sec: 103.03
INFO:tensorflow:loss = 0.54422456, step = 1600 (0.970 sec)
INFO:tensorflow:global_step/sec: 104.707
INFO:tensorflow:loss = 0.45471936, step = 1700 (0.955 sec)
INFO:tensorflow:global_step/sec: 104.11
INFO:tensorflow:loss = 0.46723112, step = 1800 (0.961 sec)
INFO:tensorflow:global_step/sec: 106.103
INFO:tensorflow:loss = 0.39712003, step = 1900 (0.942 sec)
INFO:tensorflow:Saving checkpoints for 1998 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 91.0664
INFO:tensorflow:loss = 0.46371168, step = 2000 (1.098 sec)
INFO:tensorflow:global_step/sec: 102.839
INFO:tensorflow:loss = 0.35536176, step = 2100 (0.972 sec)
INFO:tensorflow:global_step/sec: 101.013
INFO:tensorflow:loss = 0.41226754, step = 2200 (0.990 sec)
INFO:tensorflow:global_step/sec: 101.671
INFO:tensorflow:loss = 0.43293634, step = 2300 (0.984 sec)
INFO:tensorflow:global_step/sec: 104.42
INFO:tensorflow:loss = 0.42857465, step = 2400 (0.958 sec)
INFO:tensorflow:global_step/sec: 104.676
INFO:tensorflow:loss = 0.39183232, step = 2500 (0.955 sec)
INFO:tensorflow:global_step/sec: 102.295
INFO:tensorflow:loss = 0.49129695, step = 2600 (0.978 sec)
INFO:tensorflow:global_step/sec: 104.71
INFO:tensorflow:loss = 0.3808311, step = 2700 (0.955 sec)
INFO:tensorflow:global_step/sec: 105.753
INFO:tensorflow:loss = 0.44571137, step = 2800 (0.946 sec)
INFO:tensorflow:global_step/sec: 102.607
INFO:tensorflow:loss = 0.37970456, step = 2900 (0.975 sec)
INFO:tensorflow:Saving checkpoints for 2997 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 92.2494
INFO:tensorflow:loss = 0.40763655, step = 3000 (1.084 sec)
INFO:tensorflow:global_step/sec: 105.348
INFO:tensorflow:loss = 0.42827383, step = 3100 (0.949 sec)
INFO:tensorflow:global_step/sec: 101.901
INFO:tensorflow:loss = 0.42003018, step = 3200 (0.981 sec)
INFO:tensorflow:global_step/sec: 102.603
INFO:tensorflow:loss = 0.37676257, step = 3300 (0.975 sec)
INFO:tensorflow:global_step/sec: 104.961
INFO:tensorflow:loss = 0.48305735, step = 3400 (0.953 sec)
INFO:tensorflow:global_step/sec: 102.215
INFO:tensorflow:loss = 0.350054, step = 3500 (0.979 sec)
INFO:tensorflow:global_step/sec: 99.1871
INFO:tensorflow:loss = 0.37264428, step = 3600 (1.008 sec)
INFO:tensorflow:global_step/sec: 102.044
INFO:tensorflow:loss = 0.29440594, step = 3700 (0.980 sec)
INFO:tensorflow:global_step/sec: 103.792
INFO:tensorflow:loss = 0.37270775, step = 3800 (0.963 sec)
INFO:tensorflow:global_step/sec: 102.582
INFO:tensorflow:loss = 0.38087654, step = 3900 (0.975 sec)
INFO:tensorflow:Saving checkpoints for 3996 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 87.4711
INFO:tensorflow:loss = 0.37357658, step = 4000 (1.143 sec)
INFO:tensorflow:global_step/sec: 98.9954
INFO:tensorflow:loss = 0.38262895, step = 4100 (1.010 sec)
INFO:tensorflow:global_step/sec: 102.568
INFO:tensorflow:loss = 0.42870012, step = 4200 (0.975 sec)
INFO:tensorflow:global_step/sec: 101.973
INFO:tensorflow:loss = 0.2687908, step = 4300 (0.981 sec)
INFO:tensorflow:global_step/sec: 102.092
INFO:tensorflow:loss = 0.35723084, step = 4400 (0.979 sec)
INFO:tensorflow:global_step/sec: 101.43
INFO:tensorflow:loss = 0.35634392, step = 4500 (0.986 sec)
INFO:tensorflow:global_step/sec: 100.863
INFO:tensorflow:loss = 0.4163393, step = 4600 (0.992 sec)
INFO:tensorflow:global_step/sec: 103.592
INFO:tensorflow:loss = 0.44617748, step = 4700 (0.965 sec)
INFO:tensorflow:global_step/sec: 104.887
INFO:tensorflow:loss = 0.33811408, step = 4800 (0.953 sec)
INFO:tensorflow:global_step/sec: 102.68
INFO:tensorflow:loss = 0.4346467, step = 4900 (0.974 sec)
INFO:tensorflow:Saving checkpoints for 4995 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 89.7358
INFO:tensorflow:loss = 0.44326392, step = 5000 (1.114 sec)
INFO:tensorflow:global_step/sec: 102.324
INFO:tensorflow:loss = 0.47261944, step = 5100 (0.977 sec)
INFO:tensorflow:global_step/sec: 102.366
INFO:tensorflow:loss = 0.42687616, step = 5200 (0.977 sec)
INFO:tensorflow:global_step/sec: 102.552
INFO:tensorflow:loss = 0.3577564, step = 5300 (0.975 sec)
INFO:tensorflow:global_step/sec: 102.35
INFO:tensorflow:loss = 0.37536854, step = 5400 (0.977 sec)
INFO:tensorflow:global_step/sec: 101.251
INFO:tensorflow:loss = 0.3709663, step = 5500 (0.988 sec)
INFO:tensorflow:global_step/sec: 101.645
INFO:tensorflow:loss = 0.3255383, step = 5600 (0.984 sec)
INFO:tensorflow:global_step/sec: 99.3509
INFO:tensorflow:loss = 0.34593886, step = 5700 (1.007 sec)
INFO:tensorflow:global_step/sec: 100.429
INFO:tensorflow:loss = 0.43063337, step = 5800 (0.996 sec)
INFO:tensorflow:global_step/sec: 102.542
INFO:tensorflow:loss = 0.3622859, step = 5900 (0.975 sec)
INFO:tensorflow:Saving checkpoints for 5994 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 88.9693
INFO:tensorflow:loss = 0.39426452, step = 6000 (1.124 sec)
INFO:tensorflow:global_step/sec: 102.542
INFO:tensorflow:loss = 0.30851725, step = 6100 (0.975 sec)
INFO:tensorflow:global_step/sec: 97.5997
INFO:tensorflow:loss = 0.40363818, step = 6200 (1.025 sec)
INFO:tensorflow:global_step/sec: 89.9901
INFO:tensorflow:loss = 0.37745276, step = 6300 (1.111 sec)
INFO:tensorflow:global_step/sec: 89.8046
INFO:tensorflow:loss = 0.45378247, step = 6400 (1.114 sec)
INFO:tensorflow:global_step/sec: 91.2902
INFO:tensorflow:loss = 0.33474225, step = 6500 (1.095 sec)
INFO:tensorflow:global_step/sec: 92.1437
INFO:tensorflow:loss = 0.36932656, step = 6600 (1.085 sec)
INFO:tensorflow:global_step/sec: 91.4902
INFO:tensorflow:loss = 0.37926945, step = 6700 (1.093 sec)
INFO:tensorflow:global_step/sec: 91.9756
INFO:tensorflow:loss = 0.33420405, step = 6800 (1.087 sec)
INFO:tensorflow:global_step/sec: 91.0478
INFO:tensorflow:loss = 0.33151346, step = 6900 (1.098 sec)
INFO:tensorflow:Saving checkpoints for 6993 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 83.2524
INFO:tensorflow:loss = 0.42657262, step = 7000 (1.201 sec)
INFO:tensorflow:global_step/sec: 89.8794
INFO:tensorflow:loss = 0.43752956, step = 7100 (1.113 sec)
INFO:tensorflow:global_step/sec: 98.2997
INFO:tensorflow:loss = 0.3497017, step = 7200 (1.017 sec)
INFO:tensorflow:global_step/sec: 100.4
INFO:tensorflow:loss = 0.37151724, step = 7300 (0.996 sec)
INFO:tensorflow:global_step/sec: 100.238
INFO:tensorflow:loss = 0.34255582, step = 7400 (0.998 sec)
INFO:tensorflow:global_step/sec: 100.587
INFO:tensorflow:loss = 0.39731765, step = 7500 (0.994 sec)
INFO:tensorflow:global_step/sec: 101.008
INFO:tensorflow:loss = 0.23524141, step = 7600 (0.990 sec)
INFO:tensorflow:global_step/sec: 101.665
INFO:tensorflow:loss = 0.3849465, step = 7700 (0.984 sec)
INFO:tensorflow:global_step/sec: 99.4729
INFO:tensorflow:loss = 0.33175096, step = 7800 (1.005 sec)
INFO:tensorflow:global_step/sec: 100.996
INFO:tensorflow:loss = 0.33854565, step = 7900 (0.990 sec)
INFO:tensorflow:Saving checkpoints for 7992 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 88.6455
INFO:tensorflow:loss = 0.32627138, step = 8000 (1.128 sec)
INFO:tensorflow:global_step/sec: 100.88
INFO:tensorflow:loss = 0.34629735, step = 8100 (0.991 sec)
INFO:tensorflow:global_step/sec: 100.147
INFO:tensorflow:loss = 0.38391894, step = 8200 (0.999 sec)
INFO:tensorflow:global_step/sec: 100.506
INFO:tensorflow:loss = 0.2810569, step = 8300 (0.995 sec)
INFO:tensorflow:global_step/sec: 100.636
INFO:tensorflow:loss = 0.34451574, step = 8400 (0.994 sec)
INFO:tensorflow:global_step/sec: 101.492
INFO:tensorflow:loss = 0.36056346, step = 8500 (0.985 sec)
INFO:tensorflow:global_step/sec: 101.646
INFO:tensorflow:loss = 0.4275628, step = 8600 (0.984 sec)
INFO:tensorflow:global_step/sec: 101.286
INFO:tensorflow:loss = 0.3214528, step = 8700 (0.987 sec)
INFO:tensorflow:global_step/sec: 99.9094
INFO:tensorflow:loss = 0.30571482, step = 8800 (1.001 sec)
INFO:tensorflow:global_step/sec: 101.482
INFO:tensorflow:loss = 0.35132694, step = 8900 (0.985 sec)
INFO:tensorflow:Saving checkpoints for 8991 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 90.4366
INFO:tensorflow:loss = 0.3960054, step = 9000 (1.106 sec)
INFO:tensorflow:global_step/sec: 102.044
INFO:tensorflow:loss = 0.29289806, step = 9100 (0.980 sec)
INFO:tensorflow:global_step/sec: 102.681
INFO:tensorflow:loss = 0.44199973, step = 9200 (0.974 sec)
INFO:tensorflow:global_step/sec: 103.016
INFO:tensorflow:loss = 0.3032685, step = 9300 (0.971 sec)
INFO:tensorflow:global_step/sec: 100.677
INFO:tensorflow:loss = 0.3266808, step = 9400 (0.993 sec)
INFO:tensorflow:global_step/sec: 100.77
INFO:tensorflow:loss = 0.35128814, step = 9500 (0.992 sec)
INFO:tensorflow:global_step/sec: 100.742
INFO:tensorflow:loss = 0.30496806, step = 9600 (0.993 sec)
INFO:tensorflow:global_step/sec: 99.9584
INFO:tensorflow:loss = 0.35444087, step = 9700 (1.000 sec)
INFO:tensorflow:global_step/sec: 100.3
INFO:tensorflow:loss = 0.47141695, step = 9800 (0.997 sec)
INFO:tensorflow:global_step/sec: 99.5938
INFO:tensorflow:loss = 0.32136053, step = 9900 (1.004 sec)
INFO:tensorflow:Saving checkpoints for 9990 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Saving checkpoints for 10000 into /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt.
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-05-27T09:13:58Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 49.00349s
INFO:tensorflow:Finished evaluation at 2020-05-27-09:14:47
INFO:tensorflow:Saving dict for global step 10000: accuracy = 0.797965, accuracy_baseline = 0.777725, auc = 0.93858325, auc_precision_recall = 0.69958484, average_loss = 0.3372825, global_step = 10000, label/mean = 0.222275, loss = 0.33728203, precision = 0.71008927, prediction/mean = 0.2254097, recall = 0.15388595
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Performing the final export in the end of training.
WARNING:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022-vocab_compute_and_apply_vocabulary_vocabulary"

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022/vocab_compute_and_apply_vocabulary_1_vocabulary"

INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: ['serving_default', 'classification']
INFO:tensorflow:Signatures INCLUDED in export for Regress: ['regression']
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/export/chicago-taxi/temp-b'1590570887'/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/export/chicago-taxi/temp-b'1590570887'/saved_model.pb
INFO:tensorflow:Loss for final step: 0.4001155.

INFO:absl:Training complete.  Model written to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir
INFO:absl:Exporting eval_savedmodel for TFMA.

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022-vocab_compute_and_apply_vocabulary_vocabulary"

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022/vocab_compute_and_apply_vocabulary_1_vocabulary"

INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/serving_model_dir/model.ckpt-10000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/eval_model_dir/temp-b'1590570888'/assets
INFO:tensorflow:SavedModel written to: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/eval_model_dir/temp-b'1590570888'/saved_model.pb

INFO:absl:Exported eval_savedmodel to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/eval_model_dir.
INFO:absl:Running publisher for Trainer
INFO:absl:MetadataStore with DB connection initialized

Analyze Training with TensorBoard

Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.

%%skip_for_export

# Get the URI of the output artifact representing the training logs, which is a directory
model_dir = trainer.outputs['model'].get()[0].uri

%load_ext tensorboard
%tensorboard --logdir {model_dir}
This cell will be skipped during export to pipeline.

Evaluator

The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library. The Evaluator can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we only train one model, so the Evaluator will automatically label the model as "good".

Evaluator will take as input the data from ExampleGen, the trained model from Trainer, and a slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:

eval_config = tfma.EvalConfig(
    model_specs=[
        # Using signature 'eval' implies the use of an EvalSavedModel. To use
        # a serving model instead, remove the signature so that it defaults to
        # 'serving_default' and add a label_key.
        tfma.ModelSpec(signature_name='eval')
    ],
    metrics_specs=[
        tfma.MetricsSpec(
            # The metrics added here are in addition to those saved with the
            # model (assuming either a keras model or EvalSavedModel is used).
            # Any metrics added into the saved model (for example using
            # model.compile(..., metrics=[...]), etc) will be computed
            # automatically.
            metrics=[
                tfma.MetricConfig(class_name='ExampleCount')
            ],
            # To add validation thresholds for metrics saved with the model,
            # add them keyed by metric name to the thresholds map.
            thresholds = {
                'accuracy': tfma.MetricThreshold(
                    value_threshold=tfma.GenericValueThreshold(
                        lower_bound={'value': 0.5}),
                    change_threshold=tfma.GenericChangeThreshold(
                       direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                       absolute={'value': -1e-10}))
            }
        )
    ],
    slicing_specs=[
        # An empty slice spec means the overall slice, i.e. the whole dataset.
        tfma.SlicingSpec(),
        # Data can be sliced along a feature column. In this case, data is
        # sliced along feature column trip_start_hour.
        tfma.SlicingSpec(feature_keys=['trip_start_hour'])
    ])

Next, we give this configuration to Evaluator and run it.

# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.

# The model resolver is only required if performing model validation in addition
# to evaluation. In this case we validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make our
# candidate the first blessed model.
model_resolver = ResolverNode(
      instance_name='latest_blessed_model_resolver',
      resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
      model=Channel(type=Model),
      model_blessing=Channel(type=ModelBlessing))
context.run(model_resolver)

evaluator = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    #baseline_model=model_resolver.outputs['model'],
    # Change threshold will be ignored if there is no baseline (first run).
    eval_config=eval_config)
context.run(evaluator)
INFO:absl:Running driver for ResolverNode.latest_blessed_model_resolver
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running publisher for ResolverNode.latest_blessed_model_resolver
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running driver for Evaluator
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for Evaluator
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
  signature_name: "eval"
}
slicing_specs {
}
slicing_specs {
  feature_keys: "trip_start_hour"
}
metrics_specs {
  metrics {
    class_name: "ExampleCount"
    threshold {
    }
  }
  thresholds {
    key: "accuracy"
    value {
      value_threshold {
        lower_bound {
          value: 0.5
        }
      }
    }
  }
}

INFO:absl:Using /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/eval_model_dir/1590570888 as  model.
INFO:absl:Evaluating model.
INFO:absl:Using 1 process(es) for Beam pipeline execution.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/eval_saved_model/load.py:169: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
INFO:tensorflow:Restoring parameters from /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Trainer/model/6/eval_model_dir/1590570888/variables/variables
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/eval_saved_model/graph_ref.py:189: get_tensor_from_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info or tf.compat.v1.saved_model.get_tensor_from_tensor_info.

INFO:absl:Evaluation complete. Results written to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Evaluator/evaluation/8.
INFO:absl:Checking validation results.
INFO:absl:Blessing result True written to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Evaluator/blessing/8.
INFO:absl:Running publisher for Evaluator
INFO:absl:MetadataStore with DB connection initialized

Now let's examine the output artifacts of Evaluator.

%%skip_for_export

evaluator.outputs
{'evaluation': Channel(
    type_name: ModelEvaluation
    artifacts: [Artifact(type_name: ModelEvaluation, uri: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Evaluator/evaluation/8, id: 9)]
), 'blessing': Channel(
    type_name: ModelBlessing
    artifacts: [Artifact(type_name: ModelBlessing, uri: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Evaluator/blessing/8, id: 10)]
)}
This cell will be skipped during export to pipeline.

Using the evaluation output we can show the default visualization of global metrics on the entire evaluation set.

%%skip_for_export

context.show(evaluator.outputs['evaluation'])
This cell will be skipped during export to pipeline.

To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.

%%skip_for_export

import tensorflow_model_analysis as tfma

# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)

# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
    tfma_result, slicing_column='trip_start_hour')
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'trip_start_hour:1', …
This cell will be skipped during export to pipeline.

This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of on the entire evaluation set.
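
If you want these sliced numbers outside the interactive viewer, the loaded result also exposes them programmatically. The snippet below is a small, hedged sketch (it reuses tfma_result from the cell above); the exact nesting of the metrics dictionary depends on your TFMA version, so we simply pretty-print it.

%%skip_for_export

# A hedged sketch: read the sliced metrics from the loaded EvalResult directly,
# instead of through the notebook visualization. The overall (unsliced) result
# is stored under an empty slice key.
for slice_key, metrics in tfma_result.slicing_metrics:
  if not slice_key:  # the empty tuple is the overall slice
    pp.pprint(metrics)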

TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see the tutorial.

Since we added thresholds to our config, validation output is also available. The presence of a blessing artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.

%%skip_for_export

blessing_uri = evaluator.outputs.blessing.get()[0].uri
!ls -l {blessing_uri}
total 0
-rw-rw-r-- 1 kbuilder kbuilder 0 May 27 09:15 BLESSED
This cell will be skipped during export to pipeline.

We can also verify the success by loading the validation result record:

%%skip_for_export

PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
validation_ok: true

This cell will be skipped during export to pipeline.

Pusher

The Pusher component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to _serving_model_dir.

pusher = Pusher(
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory=_serving_model_dir)))
context.run(pusher)
INFO:absl:Running driver for Pusher
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for Pusher
INFO:absl:Model pushing.
INFO:absl:Model version is 1590570887
INFO:absl:Model written to /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Pusher/pushed_model/9.
INFO:absl:Model written to serving path /tmp/tmpak9gtad_/serving_model/taxi_simple/1590570887.
INFO:absl:Model pushed to /tmp/tmpak9gtad_/serving_model/taxi_simple/1590570887.
INFO:absl:Running publisher for Pusher
INFO:absl:MetadataStore with DB connection initialized

Let's examine the output artifacts of Pusher.

%%skip_for_export

pusher.outputs
{'pushed_model': Channel(
    type_name: PushedModel
    artifacts: [Artifact(type_name: PushedModel, uri: /tmp/tfx-interactive-2020-05-27T09_10_54.525791-2i1nvuqy/Pusher/pushed_model/9, id: 11)]
)}
This cell will be skipped during export to pipeline.

In particular, the Pusher will export your model in the SavedModel format, which looks like this:

%%skip_for_export

push_uri = pusher.outputs['pushed_model'].get()[0].uri
latest_version = max(os.listdir(push_uri))
latest_version_path = os.path.join(push_uri, latest_version)
model = tf.saved_model.load(latest_version_path)

for item in model.signatures.items():
  pp.pprint(item)
('predict',
 <tensorflow.python.eager.wrap_function.WrappedFunction object at 0x7fa5d77f7da0>)
('classification',
 <tensorflow.python.eager.wrap_function.WrappedFunction object at 0x7fa5d4136e80>)
('regression',
 <tensorflow.python.eager.wrap_function.WrappedFunction object at 0x7fa5b04b2f28>)
('serving_default',
 <tensorflow.python.eager.wrap_function.WrappedFunction object at 0x7fa5d42d9d30>)
This cell will be skipped during export to pipeline.
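
You can also inspect what inputs each signature expects before calling it. This is a small exploratory sketch (not part of the original pipeline); structured_input_signature lists the TensorSpecs that each signature accepts.

%%skip_for_export

# A hedged sketch: print the input spec of every exported signature, reusing
# the `model` loaded in the previous cell.
for name, fn in model.signatures.items():
  print(name)
  pp.pprint(fn.structured_input_signature)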

We've finished our tour of built-in TFX components!

Once you're happy experimenting with the TFX components and code in this notebook, you may want to export it as a pipeline to be orchestrated with Apache Airflow or Apache Beam. See the final section.

Export to pipeline

To export the contents of this notebook as a pipeline to be orchestrated with Airflow or Beam, follow the instructions below.

If you're using Colab, make sure to save this notebook to Google Drive (File > Save a Copy in Drive) before exporting.

1. Mount Google Drive (Colab-only)

If you're using Colab, this notebook needs to mount your Google Drive to be able to access its own .ipynb file.

%%skip_for_export



import sys

if 'google.colab' in sys.modules:
  # Colab.
  from google.colab import drive

  drive.mount('/content/drive')

2. Select an orchestrator

_runner_type = 'beam' 
_pipeline_name = 'chicago_taxi_%s' % _runner_type

3. Set up paths for the pipeline

# For Colab notebooks only.
# TODO(USER): Fill out the path to this notebook.
_notebook_filepath = (
    '/content/drive/My Drive/Colab Notebooks/components.ipynb')

# For Jupyter notebooks only.
# _notebook_filepath = os.path.join(os.getcwd(),
#                                   'taxi_pipeline_interactive.ipynb')

# TODO(USER): Fill out the paths for the exported pipeline.
_tfx_root = os.path.join(os.environ['HOME'], 'tfx')
_taxi_root = os.path.join(os.environ['HOME'], 'taxi')
_serving_model_dir = os.path.join(_taxi_root, 'serving_model')
_data_root = os.path.join(_taxi_root, 'data', 'simple')
_pipeline_root = os.path.join(_tfx_root, 'pipelines', _pipeline_name)
_metadata_path = os.path.join(_tfx_root, 'metadata', _pipeline_name,
                              'metadata.db')

4. Choose components to include in the pipeline

# TODO(USER): Specify components to be included in the exported pipeline.
components = [
    example_gen, statistics_gen, schema_gen, example_validator, transform,
    trainer, evaluator, pusher
]

5. Generate pipeline files

%%skip_for_export



if get_ipython().magics_manager.auto_magic:
  print('Warning: %automagic is ON. Line magics specified without the % prefix '
        'will not be scrubbed during export to pipeline.')

_pipeline_export_filepath = 'export_%s.py' % _pipeline_name
context.export_to_pipeline(notebook_filepath=_notebook_filepath,
                           export_filepath=_pipeline_export_filepath,
                           runner_type=_runner_type)

6. Download pipeline files

%%skip_for_export



if 'google.colab' in sys.modules:
  from google.colab import files
  import zipfile

  zip_export_path = os.path.join(
      tempfile.mkdtemp(), 'export.zip')
  with zipfile.ZipFile(zip_export_path, mode='w') as export_zip:
    export_zip.write(_pipeline_export_filepath)
    export_zip.write(_taxi_constants_module_file)
    export_zip.write(_taxi_transform_module_file)
    export_zip.write(_taxi_trainer_module_file)

  files.download(zip_export_path)

To learn how to run the orchestrated pipeline with Apache Airflow, please refer to the TFX Orchestration Tutorial.
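
For the Beam runner selected above, the exported pipeline file generally defines its own entry point, so it can usually be run directly with Python on a machine where TFX is installed. This is a rough, hedged sketch; inspect the generated file first to confirm how it is meant to be invoked.

# A hedged sketch: run the exported Beam pipeline directly. Verify the
# generated file's entry point before relying on this.
!python {_pipeline_export_filepath}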