
Preprocessing data with TensorFlow Transform

The Feature Engineering Component of TensorFlow Extended (TFX)

This example colab notebook provides a somewhat more advanced example of how TensorFlow Transform (tf.Transform) can be used to preprocess data using exactly the same code for both training a model and serving inferences in production.

TensorFlow Transform is a library for preprocessing input data for TensorFlow, including creating features that require a full pass over the training dataset. For example, using TensorFlow Transform you could (a short sketch follows this list):

  • Normalize an input value by using the mean and standard deviation
  • Convert strings to integers by generating a vocabulary over all of the input values
  • Convert floats to integers by assigning them to buckets, based on the observed data distribution
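
As a minimal sketch of what those three operations can look like in code (not part of the census example below; the feature names 'height', 'color', and 'income' are made up), a preprocessing function might use the corresponding tf.Transform analyzers and mappers:

import tensorflow_transform as tft

def example_preprocessing_fn(inputs):
  """Sketch of the three transformations listed above."""
  return {
      # Normalize a numeric feature using the mean and standard deviation
      # computed over the full training dataset.
      'height_normalized': tft.scale_to_z_score(inputs['height']),
      # Convert strings to integer ids using a vocabulary generated over
      # all of the input values.
      'color_id': tft.compute_and_apply_vocabulary(inputs['color']),
      # Convert floats to integer bucket indices based on the observed
      # data distribution (quantile buckets).
      'income_bucket': tft.bucketize(inputs['income'], num_buckets=4),
  }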

TensorFlow has built-in support for manipulations on a single example or a batch of examples. tf.Transform extends these capabilities to support full passes over the entire training dataset.

The output of tf.Transform is exported as a TensorFlow graph which you can use for both training and serving. Using the same graph for both training and serving prevents skew, since the same transformations are applied in both stages.

What we're doing in this example

In this example we'll be processing a widely used dataset containing census data, and training a model to do classification. Along the way we'll be transforming the data using tf.Transform.

Python check, imports, and globals

First we'll make sure that we're using Python 3, and then go ahead and install and import the stuff we need.

 import sys

# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
 
 import os
import pprint

import tensorflow as tf
print('TF: {}'.format(tf.__version__))

print('Installing Apache Beam')
!pip install -Uq apache_beam==2.21.0
import apache_beam as beam
print('Beam: {}'.format(beam.__version__))

print('Installing TensorFlow Transform')
!pip install -q tensorflow-transform==0.22.0
import tensorflow_transform as tft
print('Transform: {}'.format(tft.__version__))

import tensorflow_transform.beam as tft_beam

!wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.data
!wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.test

train = './adult.data'
test = './adult.test'
 
TF: 2.2.0
Installing Apache Beam
Beam: 2.21.0
Installing TensorFlow Transform
Transform: 0.22.0
--2020-07-27 09:13:34--  https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.data
Resolving storage.googleapis.com (storage.googleapis.com)... 64.233.187.128, 74.125.203.128, 74.125.23.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|64.233.187.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3974305 (3.8M) [application/octet-stream]
Saving to: ‘adult.data’

adult.data          100%[===================>]   3.79M  --.-KB/s    in 0.02s   

2020-07-27 09:13:34 (244 MB/s) - ‘adult.data’ saved [3974305/3974305]

--2020-07-27 09:13:34--  https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.test
Resolving storage.googleapis.com (storage.googleapis.com)... 64.233.187.128, 74.125.203.128, 74.125.23.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|64.233.187.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2003153 (1.9M) [application/octet-stream]
Saving to: ‘adult.test’

adult.test          100%[===================>]   1.91M  --.-KB/s    in 0.02s   

2020-07-27 09:13:35 (107 MB/s) - ‘adult.test’ saved [2003153/2003153]


Name our columns

We'll create some handy lists for referencing the columns in our dataset.

 CATEGORICAL_FEATURE_KEYS = [
    'workclass',
    'education',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'native-country',
]
NUMERIC_FEATURE_KEYS = [
    'age',
    'capital-gain',
    'capital-loss',
    'hours-per-week',
]
OPTIONAL_NUMERIC_FEATURE_KEYS = [
    'education-num',
]
LABEL_KEY = 'label'
 

Define our features and schema

Let's define a schema based on what types the columns are in our input. Among other things this will help with importing them correctly.

 RAW_DATA_FEATURE_SPEC = dict(
    [(name, tf.io.FixedLenFeature([], tf.string))
     for name in CATEGORICAL_FEATURE_KEYS] +
    [(name, tf.io.FixedLenFeature([], tf.float32))
     for name in NUMERIC_FEATURE_KEYS] +
    [(name, tf.io.VarLenFeature(tf.float32))
     for name in OPTIONAL_NUMERIC_FEATURE_KEYS] +
    [(LABEL_KEY, tf.io.FixedLenFeature([], tf.string))]
)

RAW_DATA_METADATA = tft.tf_metadata.dataset_metadata.DatasetMetadata(
    tft.tf_metadata.dataset_schema.schema_utils.schema_from_feature_spec(RAW_DATA_FEATURE_SPEC))
 

Setting hyperparameters and basic housekeeping

Constants and hyperparameters used for training. The bucket size includes all listed categories in the dataset description as well as one extra for "?" which represents unknown.

 testing = os.getenv("WEB_TEST_BROWSER", False)
if testing:
  TRAIN_NUM_EPOCHS = 1
  NUM_TRAIN_INSTANCES = 1
  TRAIN_BATCH_SIZE = 1
  NUM_TEST_INSTANCES = 1
else:
  TRAIN_NUM_EPOCHS = 16
  NUM_TRAIN_INSTANCES = 32561
  TRAIN_BATCH_SIZE = 128
  NUM_TEST_INSTANCES = 16281

# Names of temp files
TRANSFORMED_TRAIN_DATA_FILEBASE = 'train_transformed'
TRANSFORMED_TEST_DATA_FILEBASE = 'test_transformed'
EXPORTED_MODEL_DIR = 'exported_model_dir'
 

Cleaning

Create a Beam Transform for cleaning our input data

We'll create a Beam Transform by creating a subclass of Apache Beam's PTransform class and overriding the expand method to specify the actual processing logic. A PTransform represents a data processing operation, or a step, in your pipeline. Every PTransform takes one or more PCollection objects as input, performs a processing function that you provide on the elements of that PCollection, and produces zero or more output PCollection objects.

Our transform class will apply Beam's ParDo to the input PCollection containing our census dataset, producing clean data in an output PCollection.

 class MapAndFilterErrors(beam.PTransform):
  """Like beam.Map but filters out errors in the map_fn."""

  class _MapAndFilterErrorsDoFn(beam.DoFn):
    """Count the bad examples using a beam metric."""

    def __init__(self, fn):
      self._fn = fn
      # Create a counter to measure number of bad elements.
      self._bad_elements_counter = beam.metrics.Metrics.counter(
          'census_example', 'bad_elements')

    def process(self, element):
      try:
        yield self._fn(element)
      except Exception:  # pylint: disable=broad-except
        # Catch any exception raised by the above call.
        self._bad_elements_counter.inc(1)

  def __init__(self, fn):
    self._fn = fn

  def expand(self, pcoll):
    return pcoll | beam.ParDo(self._MapAndFilterErrorsDoFn(self._fn))
 

Preprocessing with tf.Transform

Create a tf.Transform preprocessing_fn

The preprocessing function is the most important concept of tf.Transform. A preprocessing function is where the transformation of the dataset really happens. It accepts and returns a dictionary of tensors, where a tensor means a Tensor or SparseTensor. There are two main groups of API calls that typically form the heart of a preprocessing function:

  1. TensorFlow Ops: Any function that accepts and returns tensors, which usually means TensorFlow ops. These add TensorFlow operations to the graph that transform raw data into transformed data, one feature vector at a time. These will run for every example, during both training and serving.
  2. TensorFlow Transform Analyzers: Any of the analyzers provided by tf.Transform. Analyzers also accept and return tensors, but unlike TensorFlow ops they only run once, during training, and typically make a full pass over the entire training dataset. They create tensor constants, which are added to your graph. For example, tft.min computes the minimum of a tensor over the training dataset. tf.Transform provides a fixed set of analyzers, but this will be extended in future versions. (A minimal sketch contrasting ops and analyzers follows this list, ahead of the full preprocessing_fn for the census data.)
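
As a minimal illustration of the difference (not part of the census pipeline below; the feature name 'x' is made up), the following sketch combines two analyzers with ordinary TensorFlow ops, which is essentially what tft.scale_to_0_1 does internally:

import tensorflow_transform as tft

def op_vs_analyzer_preprocessing_fn(inputs):
  x = inputs['x']
  # Analyzers: tft.min and tft.max each make a full pass over the training
  # dataset during analysis and end up as constants in the exported graph.
  x_min = tft.min(x)
  x_max = tft.max(x)
  # TensorFlow ops: ordinary tensor math that runs on every example, at both
  # training and serving time, using the constants produced by the analyzers.
  return {'x_scaled': (x - x_min) / (x_max - x_min)}
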
 def preprocessing_fn(inputs):
  """Preprocess input columns into transformed columns."""
  # Since we are modifying some features and leaving others unchanged, we
  # start by setting `outputs` to a copy of `inputs`.
  outputs = inputs.copy()

  # Scale numeric columns to have range [0, 1].
  for key in NUMERIC_FEATURE_KEYS:
    outputs[key] = tft.scale_to_0_1(outputs[key])

  for key in OPTIONAL_NUMERIC_FEATURE_KEYS:
    # This is a SparseTensor because it is optional. Here we fill in a default
    # value when it is missing.
    sparse = tf.sparse.SparseTensor(outputs[key].indices, outputs[key].values,
                                    [outputs[key].dense_shape[0], 1])
    dense = tf.sparse.to_dense(sp_input=sparse, default_value=0.)
    # Reshaping from a batch of vectors of size 1 to a batch of scalars.
    dense = tf.squeeze(dense, axis=1)
    outputs[key] = tft.scale_to_0_1(dense)

  # For all categorical columns except the label column, we generate a
  # vocabulary but do not modify the feature.  This vocabulary is instead
  # used in the trainer, by means of a feature column, to convert the feature
  # from a string to an integer id.
  for key in CATEGORICAL_FEATURE_KEYS:
    tft.vocabulary(inputs[key], vocab_filename=key)

  # For the label column we provide the mapping from string to index.
  table_keys = ['>50K', '<=50K']
  initializer = tf.lookup.KeyValueTensorInitializer(
      keys=table_keys,
      values=tf.cast(tf.range(len(table_keys)), tf.int64),
      key_dtype=tf.string,
      value_dtype=tf.int64)
  table = tf.lookup.StaticHashTable(initializer, default_value=-1)
  outputs[LABEL_KEY] = table.lookup(outputs[LABEL_KEY])

  return outputs
 

Transform the data

Now we're ready to start transforming our data in an Apache Beam pipeline.

  1. Read in the data using the CSV reader
  2. Clean it using our new MapAndFilterErrors transform
  3. Transform it using a preprocessing pipeline that scales numeric data and converts categorical data from strings to int64 value indices, by creating a vocabulary for each category
  4. Write out the result as a TFRecord of Example protos, which we will use for training a model later
 def transform_data(train_data_file, test_data_file, working_dir):
  """Transform the data and write out as a TFRecord of Example protos.

  Read in the data using the CSV reader, and transform it using a
  preprocessing pipeline that scales numeric data and converts categorical data
  from strings to int64 value indices, by creating a vocabulary for each
  category.

  Args:
    train_data_file: File containing training data
    test_data_file: File containing test data
    working_dir: Directory to write transformed data and metadata to
  """

  # The "with" block will create a pipeline, and run that pipeline at the exit
  # of the block.
  with beam.Pipeline() as pipeline:
    with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
      # Create a coder to read the census data with the schema.  To do this we
      # need to list all columns in order since the schema doesn't specify the
      # order of columns in the csv.
      ordered_columns = [
          'age', 'workclass', 'fnlwgt', 'education', 'education-num',
          'marital-status', 'occupation', 'relationship', 'race', 'sex',
          'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
          'label'
      ]
      converter = tft.coders.CsvCoder(ordered_columns, RAW_DATA_METADATA.schema)

      # Read in raw data and convert using CSV converter.  Note that we apply
      # some Beam transformations here, which will not be encoded in the TF
      # graph since we don't do them from within tf.Transform's methods
      # (AnalyzeDataset, TransformDataset etc.).  These transformations are just
      # to get data into a format that the CSV converter can read, in particular
      # removing spaces after commas.
      #
      # We use MapAndFilterErrors instead of Map to filter out decode errors in
      # convert.decode which should only occur for the trailing blank line.
      raw_data = (
          pipeline
          | 'ReadTrainData' >> beam.io.ReadFromText(train_data_file)
          | 'FixCommasTrainData' >> beam.Map(
              lambda line: line.replace(', ', ','))
          | 'DecodeTrainData' >> MapAndFilterErrors(converter.decode))

      # Combine data and schema into a dataset tuple.  Note that we already used
      # the schema to read the CSV data, but we also need it to interpret
      # raw_data.
      raw_dataset = (raw_data, RAW_DATA_METADATA)
      transformed_dataset, transform_fn = (
          raw_dataset | tft_beam.AnalyzeAndTransformDataset(preprocessing_fn))
      transformed_data, transformed_metadata = transformed_dataset
      transformed_data_coder = tft.coders.ExampleProtoCoder(
          transformed_metadata.schema)

      _ = (
          transformed_data
          | 'EncodeTrainData' >> beam.Map(transformed_data_coder.encode)
          | 'WriteTrainData' >> beam.io.WriteToTFRecord(
              os.path.join(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE)))

      # Now apply transform function to test data.  In this case we remove the
      # trailing period at the end of each line, and also ignore the header line
      # that is present in the test data file.
      raw_test_data = (
          pipeline
          | 'ReadTestData' >> beam.io.ReadFromText(test_data_file,
                                                   skip_header_lines=1)
          | 'FixCommasTestData' >> beam.Map(
              lambda line: line.replace(', ', ','))
          | 'RemoveTrailingPeriodsTestData' >> beam.Map(lambda line: line[:-1])
          | 'DecodeTestData' >> MapAndFilterErrors(converter.decode))

      raw_test_dataset = (raw_test_data, RAW_DATA_METADATA)

      transformed_test_dataset = (
          (raw_test_dataset, transform_fn) | tft_beam.TransformDataset())
      # Don't need transformed data schema, it's the same as before.
      transformed_test_data, _ = transformed_test_dataset

      _ = (
          transformed_test_data
          | 'EncodeTestData' >> beam.Map(transformed_data_coder.encode)
          | 'WriteTestData' >> beam.io.WriteToTFRecord(
              os.path.join(working_dir, TRANSFORMED_TEST_DATA_FILEBASE)))

      # Will write a SavedModel and metadata to working_dir, which can then
      # be read by the tft.TFTransformOutput class.
      _ = (
          transform_fn
          | 'WriteTransformFn' >> tft_beam.WriteTransformFn(working_dir))
 

Using our preprocessed data to train a model

To show how tf.Transform enables us to use the same code for both training and serving, and thus prevent skew, we're going to train a model. To train our model and prepare our trained model for production we need to create input functions. The main difference between our training input function and our serving input function is that the training data contains labels, while the production data does not. The arguments and return values are also somewhat different.

Create an input function for training

 def _make_training_input_fn(tf_transform_output, transformed_examples,
                            batch_size):
  """Creates an input function reading from transformed data.

  Args:
    tf_transform_output: Wrapper around output of tf.Transform.
    transformed_examples: Base filename of examples.
    batch_size: Batch size.

  Returns:
    The input function for training or eval.
  """
  def input_fn():
    """Input function for training and eval."""
    dataset = tf.data.experimental.make_batched_features_dataset(
        file_pattern=transformed_examples,
        batch_size=batch_size,
        features=tf_transform_output.transformed_feature_spec(),
        reader=tf.data.TFRecordDataset,
        shuffle=True)

    transformed_features = tf.compat.v1.data.make_one_shot_iterator(
        dataset).get_next()

    # Extract features and label from the transformed tensors.
    transformed_labels = transformed_features.pop(LABEL_KEY)

    return transformed_features, transformed_labels

  return input_fn
 

Create an input function for serving

Let's create an input function that we could use in production, and prepare our trained model for serving.

 def _make_serving_input_fn(tf_transform_output):
  """Creates an input function reading from raw data.

  Args:
    tf_transform_output: Wrapper around output of tf.Transform.

  Returns:
    The serving input function.
  """
  raw_feature_spec = RAW_DATA_FEATURE_SPEC.copy()
  # Remove label since it is not available during serving.
  raw_feature_spec.pop(LABEL_KEY)

  def serving_input_fn():
    """Input function for serving."""
    # Get raw features by generating the basic serving input_fn and calling it.
    # Here we generate an input_fn that expects a parsed Example proto to be fed
    # to the model at serving time.  See also
    # tf.estimator.export.build_raw_serving_input_receiver_fn.
    raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
        raw_feature_spec, default_batch_size=None)
    serving_input_receiver = raw_input_fn()

    # Apply the transform function that was used to generate the materialized
    # data.
    raw_features = serving_input_receiver.features
    transformed_features = tf_transform_output.transform_raw_features(
        raw_features)

    return tf.estimator.export.ServingInputReceiver(
        transformed_features, serving_input_receiver.receiver_tensors)

  return serving_input_fn
 

Wrap our input data in FeatureColumns

Our model will expect our data in TensorFlow FeatureColumns.

 def get_feature_columns(tf_transform_output):
  """Returns the FeatureColumns for the model.

  Args:
    tf_transform_output: A `TFTransformOutput` object.

  Returns:
    A list of FeatureColumns.
  """
  # Wrap scalars as real valued columns.
  real_valued_columns = [tf.feature_column.numeric_column(key, shape=())
                         for key in NUMERIC_FEATURE_KEYS]

  # Wrap categorical columns.
  one_hot_columns = [
      tf.feature_column.categorical_column_with_vocabulary_file(
          key=key,
          vocabulary_file=tf_transform_output.vocabulary_file_by_name(
              vocab_filename=key))
      for key in CATEGORICAL_FEATURE_KEYS]

  return real_valued_columns + one_hot_columns
 

Train, Evaluate, and Export our model

 def train_and_evaluate(working_dir, num_train_instances=NUM_TRAIN_INSTANCES,
                       num_test_instances=NUM_TEST_INSTANCES):
  """Train the model on training data and evaluate on test data.

  Args:
    working_dir: Directory to read transformed data and metadata from and to
        write exported model to.
    num_train_instances: Number of instances in train set
    num_test_instances: Number of instances in test set

  Returns:
    The results from the estimator's 'evaluate' method
  """
  tf_transform_output = tft.TFTransformOutput(working_dir)

  run_config = tf.estimator.RunConfig()

  estimator = tf.estimator.LinearClassifier(
      feature_columns=get_feature_columns(tf_transform_output),
      config=run_config,
      loss_reduction=tf.losses.Reduction.SUM)

  # Fit the model using the default optimizer.
  train_input_fn = _make_training_input_fn(
      tf_transform_output,
      os.path.join(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE + '*'),
      batch_size=TRAIN_BATCH_SIZE)
  estimator.train(
      input_fn=train_input_fn,
      max_steps=TRAIN_NUM_EPOCHS * num_train_instances / TRAIN_BATCH_SIZE)

  # Evaluate model on test dataset.
  eval_input_fn = _make_training_input_fn(
      tf_transform_output,
      os.path.join(working_dir, TRANSFORMED_TEST_DATA_FILEBASE + '*'),
      batch_size=1)

  # Export the model.
  serving_input_fn = _make_serving_input_fn(tf_transform_output)
  exported_model_dir = os.path.join(working_dir, EXPORTED_MODEL_DIR)
  estimator.export_saved_model(exported_model_dir, serving_input_fn)

  return estimator.evaluate(input_fn=eval_input_fn, steps=num_test_instances)
 

Put it all together

We've created everything we need to preprocess our census data, train a model, and prepare it for serving. So far we've just been getting things ready. It's time to start running!

 import tempfile
temp = tempfile.gettempdir()

transform_data(train, test, temp)
results = train_and_evaluate(temp)
pprint.pprint(results)
 
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.

Warning:tensorflow:Tensorflow version (2.2.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 

Warning:tensorflow:Tensorflow version (2.2.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 

Warning:tensorflow:Tensorflow version (2.2.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 

Warning:tensorflow:Tensorflow version (2.2.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:220: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_transform/tf_utils.py:220: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:No assets to write.

INFO:tensorflow:No assets to write.

Warning:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'

Warning:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'

Warning:tensorflow:Issue encountered when serializing tft_analyzer_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'

Warning:tensorflow:Issue encountered when serializing tft_analyzer_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'

INFO:tensorflow:SavedModel written to: /tmp/tmpbflj3wnm/tftransform_tmp/dcabb80023c14178be58171288b007ff/saved_model.pb

INFO:tensorflow:SavedModel written to: /tmp/tmpbflj3wnm/tftransform_tmp/dcabb80023c14178be58171288b007ff/saved_model.pb

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:No assets to write.

INFO:tensorflow:No assets to write.

Warning:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'

Warning:tensorflow:Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'

Warning:tensorflow:Issue encountered when serializing tft_analyzer_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'

Warning:tensorflow:Issue encountered when serializing tft_analyzer_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'

INFO:tensorflow:SavedModel written to: /tmp/tmpbflj3wnm/tftransform_tmp/49dab3a217e64749824fc5066de4bf7f/saved_model.pb

INFO:tensorflow:SavedModel written to: /tmp/tmpbflj3wnm/tftransform_tmp/49dab3a217e64749824fc5066de4bf7f/saved_model.pb

Warning:tensorflow:Tensorflow version (2.2.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 

Warning:tensorflow:Tensorflow version (2.2.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 

Warning:tensorflow:Tensorflow version (2.2.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 

Warning:tensorflow:Tensorflow version (2.2.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended. 
WARNING:apache_beam.utils.interactive_utils:Failed to alter the label of a transform with the ipython prompt metadata. Cannot figure out the pipeline that the given pvalueish ((<PCollection[DecodeTestData/ParDo(_MapAndFilterErrorsDoFn).None] at 0x7fe7780bb2b0>, {'_schema': feature {
  name: "age"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "capital-gain"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "capital-loss"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "education"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "education-num"
  type: FLOAT
}
feature {
  name: "hours-per-week"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "label"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "marital-status"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "native-country"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "occupation"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "race"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "relationship"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "sex"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "workclass"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
}), (<PCollection[AnalyzeAndTransformDataset/AnalyzeDataset/CreateSavedModel/BindTensors/ReplaceWithConstants.None] at 0x7fe778158f60>, BeamDatasetMetadata(dataset_metadata={'_schema': feature {
  name: "age"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "capital-gain"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "capital-loss"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "education"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "education-num"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "hours-per-week"
  type: FLOAT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "label"
  type: INT
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "marital-status"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "native-country"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "occupation"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "race"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "relationship"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "sex"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
feature {
  name: "workclass"
  type: BYTES
  presence {
    min_fraction: 1.0
  }
  shape {
  }
}
}, deferred_metadata=<PCollection[AnalyzeAndTransformDataset/AnalyzeDataset/ComputeDeferredMetadata.None] at 0x7fe7780ed7b8>))) belongs to. Thus noop.

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:Assets written to: /tmp/tmpbflj3wnm/tftransform_tmp/94352f4b103a4aeb8a141e3efbaafea3/assets

INFO:tensorflow:Assets written to: /tmp/tmpbflj3wnm/tftransform_tmp/94352f4b103a4aeb8a141e3efbaafea3/assets

INFO:tensorflow:SavedModel written to: /tmp/tmpbflj3wnm/tftransform_tmp/94352f4b103a4aeb8a141e3efbaafea3/saved_model.pb

INFO:tensorflow:SavedModel written to: /tmp/tmpbflj3wnm/tftransform_tmp/94352f4b103a4aeb8a141e3efbaafea3/saved_model.pb

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\tworkclass"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\tworkclass"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\teducation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\teducation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\016marital-status"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\016marital-status"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\noccupation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\noccupation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_11:0\022\014relationship"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_11:0\022\014relationship"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_13:0\022\004race"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_13:0\022\004race"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_15:0\022\003sex"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_15:0\022\003sex"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_17:0\022\016native-country"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_17:0\022\016native-country"


INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\tworkclass"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\tworkclass"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\teducation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\teducation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\016marital-status"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\016marital-status"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\noccupation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\noccupation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_11:0\022\014relationship"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_11:0\022\014relationship"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_13:0\022\004race"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_13:0\022\004race"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_15:0\022\003sex"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_15:0\022\003sex"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_17:0\022\016native-country"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_17:0\022\016native-country"


INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Saver not created because there are no variables in the graph to restore
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\tworkclass"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\tworkclass"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\teducation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\teducation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\016marital-status"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\016marital-status"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\noccupation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\noccupation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_11:0\022\014relationship"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_11:0\022\014relationship"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_13:0\022\004race"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_13:0\022\004race"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_15:0\022\003sex"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_15:0\022\003sex"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_17:0\022\016native-country"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_17:0\022\016native-country"


INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:vocabulary_size = 9 in workclass is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/workclass.

INFO:tensorflow:vocabulary_size = 9 in workclass is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/workclass.

INFO:tensorflow:vocabulary_size = 16 in education is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/education.

INFO:tensorflow:vocabulary_size = 16 in education is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/education.

INFO:tensorflow:vocabulary_size = 7 in marital-status is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/marital-status.

INFO:tensorflow:vocabulary_size = 7 in marital-status is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/marital-status.

INFO:tensorflow:vocabulary_size = 15 in occupation is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/occupation.

INFO:tensorflow:vocabulary_size = 15 in occupation is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/occupation.

INFO:tensorflow:vocabulary_size = 6 in relationship is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/relationship.

INFO:tensorflow:vocabulary_size = 6 in relationship is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/relationship.

INFO:tensorflow:vocabulary_size = 5 in race is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/race.

INFO:tensorflow:vocabulary_size = 5 in race is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/race.

INFO:tensorflow:vocabulary_size = 2 in sex is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/sex.

INFO:tensorflow:vocabulary_size = 2 in sex is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/sex.

INFO:tensorflow:vocabulary_size = 42 in native-country is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/native-country.

INFO:tensorflow:vocabulary_size = 42 in native-country is inferred from the number of elements in the vocabulary_file /tmp/transform_fn/assets/native-country.

Warning:tensorflow:Using temporary folder as model directory: /tmp/tmp1z2zsfze

Warning:tensorflow:Using temporary folder as model directory: /tmp/tmp1z2zsfze

INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmp1z2zsfze', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}

INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmp1z2zsfze', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1666: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1666: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.

INFO:tensorflow:Calling model_fn.

INFO:tensorflow:Calling model_fn.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py:540: Layer.add_variable (from tensorflow.python.keras.engine.base_layer_v1) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py:540: Layer.add_variable (from tensorflow.python.keras.engine.base_layer_v1) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/ftrl.py:144: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor

Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/ftrl.py:144: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor

INFO:tensorflow:Done calling model_fn.

INFO:tensorflow:Done calling model_fn.

INFO:tensorflow:Create CheckpointSaverHook.

INFO:tensorflow:Create CheckpointSaverHook.

INFO:tensorflow:Graph was finalized.

INFO:tensorflow:Graph was finalized.

INFO:tensorflow:Running local_init_op.

INFO:tensorflow:Running local_init_op.

INFO:tensorflow:Done running local_init_op.

INFO:tensorflow:Done running local_init_op.

INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...

INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...

INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmp1z2zsfze/model.ckpt.

INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmp1z2zsfze/model.ckpt.

INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...

INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...

INFO:tensorflow:loss = 88.72284, step = 0

INFO:tensorflow:loss = 88.72284, step = 0

INFO:tensorflow:global_step/sec: 104.093

INFO:tensorflow:global_step/sec: 104.093

INFO:tensorflow:loss = 43.419994, step = 100 (0.962 sec)

INFO:tensorflow:loss = 43.419994, step = 100 (0.962 sec)

INFO:tensorflow:global_step/sec: 142.387

INFO:tensorflow:global_step/sec: 142.387

INFO:tensorflow:loss = 49.636517, step = 200 (0.702 sec)

INFO:tensorflow:loss = 49.636517, step = 200 (0.702 sec)

INFO:tensorflow:global_step/sec: 143.583

INFO:tensorflow:global_step/sec: 143.583

INFO:tensorflow:loss = 48.2191, step = 300 (0.696 sec)

INFO:tensorflow:loss = 48.2191, step = 300 (0.696 sec)

INFO:tensorflow:global_step/sec: 132.771

INFO:tensorflow:global_step/sec: 132.771

INFO:tensorflow:loss = 43.51023, step = 400 (0.753 sec)

INFO:tensorflow:loss = 43.51023, step = 400 (0.753 sec)

INFO:tensorflow:global_step/sec: 136.735

INFO:tensorflow:global_step/sec: 136.735

INFO:tensorflow:loss = 39.578957, step = 500 (0.731 sec)

INFO:tensorflow:loss = 39.578957, step = 500 (0.731 sec)

INFO:tensorflow:global_step/sec: 141.726

INFO:tensorflow:global_step/sec: 141.726

INFO:tensorflow:loss = 31.377352, step = 600 (0.706 sec)

INFO:tensorflow:loss = 31.377352, step = 600 (0.706 sec)

INFO:tensorflow:global_step/sec: 140.978

INFO:tensorflow:global_step/sec: 140.978

INFO:tensorflow:loss = 43.05789, step = 700 (0.709 sec)

INFO:tensorflow:loss = 43.05789, step = 700 (0.709 sec)

INFO:tensorflow:global_step/sec: 141.092

INFO:tensorflow:global_step/sec: 141.092

INFO:tensorflow:loss = 31.173666, step = 800 (0.709 sec)

INFO:tensorflow:loss = 31.173666, step = 800 (0.709 sec)

INFO:tensorflow:global_step/sec: 140.01

INFO:tensorflow:global_step/sec: 140.01

INFO:tensorflow:loss = 51.318253, step = 900 (0.714 sec)

INFO:tensorflow:loss = 51.318253, step = 900 (0.714 sec)

INFO:tensorflow:global_step/sec: 143.036

INFO:tensorflow:global_step/sec: 143.036

INFO:tensorflow:loss = 44.896477, step = 1000 (0.699 sec)

INFO:tensorflow:loss = 44.896477, step = 1000 (0.699 sec)

INFO:tensorflow:global_step/sec: 142.212

INFO:tensorflow:global_step/sec: 142.212

INFO:tensorflow:loss = 40.37133, step = 1100 (0.703 sec)

INFO:tensorflow:loss = 40.37133, step = 1100 (0.703 sec)

INFO:tensorflow:global_step/sec: 137.841

INFO:tensorflow:global_step/sec: 137.841

INFO:tensorflow:loss = 39.548, step = 1200 (0.726 sec)

INFO:tensorflow:loss = 39.548, step = 1200 (0.726 sec)

INFO:tensorflow:global_step/sec: 140.89

INFO:tensorflow:global_step/sec: 140.89

INFO:tensorflow:loss = 54.550842, step = 1300 (0.710 sec)

INFO:tensorflow:loss = 54.550842, step = 1300 (0.710 sec)

INFO:tensorflow:global_step/sec: 143.092

INFO:tensorflow:global_step/sec: 143.092

INFO:tensorflow:loss = 39.631958, step = 1400 (0.699 sec)

INFO:tensorflow:loss = 39.631958, step = 1400 (0.699 sec)

INFO:tensorflow:global_step/sec: 139.315

INFO:tensorflow:global_step/sec: 139.315

INFO:tensorflow:loss = 49.820618, step = 1500 (0.718 sec)

INFO:tensorflow:loss = 49.820618, step = 1500 (0.718 sec)

INFO:tensorflow:global_step/sec: 139.872

INFO:tensorflow:global_step/sec: 139.872

INFO:tensorflow:loss = 45.730972, step = 1600 (0.715 sec)

INFO:tensorflow:loss = 45.730972, step = 1600 (0.715 sec)

INFO:tensorflow:global_step/sec: 138.04

INFO:tensorflow:global_step/sec: 138.04

INFO:tensorflow:loss = 36.139393, step = 1700 (0.724 sec)

INFO:tensorflow:loss = 36.139393, step = 1700 (0.724 sec)

INFO:tensorflow:global_step/sec: 140.354

INFO:tensorflow:global_step/sec: 140.354

INFO:tensorflow:loss = 40.87008, step = 1800 (0.712 sec)

INFO:tensorflow:loss = 40.87008, step = 1800 (0.712 sec)

INFO:tensorflow:global_step/sec: 142.71

INFO:tensorflow:global_step/sec: 142.71

INFO:tensorflow:loss = 53.13055, step = 1900 (0.701 sec)

INFO:tensorflow:loss = 53.13055, step = 1900 (0.701 sec)

INFO:tensorflow:global_step/sec: 141.784

INFO:tensorflow:global_step/sec: 141.784

INFO:tensorflow:loss = 40.821915, step = 2000 (0.705 sec)

INFO:tensorflow:loss = 40.821915, step = 2000 (0.705 sec)

INFO:tensorflow:global_step/sec: 140.922

INFO:tensorflow:global_step/sec: 140.922

INFO:tensorflow:loss = 48.694305, step = 2100 (0.710 sec)

INFO:tensorflow:loss = 48.694305, step = 2100 (0.710 sec)

INFO:tensorflow:global_step/sec: 142.652

INFO:tensorflow:global_step/sec: 142.652

INFO:tensorflow:loss = 39.659786, step = 2200 (0.701 sec)

INFO:tensorflow:loss = 39.659786, step = 2200 (0.701 sec)

INFO:tensorflow:global_step/sec: 124.202

INFO:tensorflow:global_step/sec: 124.202

INFO:tensorflow:loss = 42.969543, step = 2300 (0.805 sec)

INFO:tensorflow:loss = 42.969543, step = 2300 (0.805 sec)

INFO:tensorflow:global_step/sec: 122.688

INFO:tensorflow:global_step/sec: 122.688

INFO:tensorflow:loss = 38.666878, step = 2400 (0.815 sec)

INFO:tensorflow:loss = 38.666878, step = 2400 (0.815 sec)

INFO:tensorflow:global_step/sec: 121.456

INFO:tensorflow:global_step/sec: 121.456

INFO:tensorflow:loss = 41.72638, step = 2500 (0.824 sec)

INFO:tensorflow:loss = 41.72638, step = 2500 (0.824 sec)

INFO:tensorflow:global_step/sec: 124.748

INFO:tensorflow:global_step/sec: 124.748

INFO:tensorflow:loss = 43.654724, step = 2600 (0.801 sec)

INFO:tensorflow:loss = 43.654724, step = 2600 (0.801 sec)

INFO:tensorflow:global_step/sec: 122.324

INFO:tensorflow:global_step/sec: 122.324

INFO:tensorflow:loss = 31.018797, step = 2700 (0.818 sec)

INFO:tensorflow:loss = 31.018797, step = 2700 (0.818 sec)

INFO:tensorflow:global_step/sec: 125.791

INFO:tensorflow:global_step/sec: 125.791

INFO:tensorflow:loss = 44.809124, step = 2800 (0.795 sec)

INFO:tensorflow:loss = 44.809124, step = 2800 (0.795 sec)

INFO:tensorflow:global_step/sec: 123.229

INFO:tensorflow:global_step/sec: 123.229

INFO:tensorflow:loss = 39.58447, step = 2900 (0.811 sec)

INFO:tensorflow:loss = 39.58447, step = 2900 (0.811 sec)

INFO:tensorflow:global_step/sec: 126.829

INFO:tensorflow:global_step/sec: 126.829

INFO:tensorflow:loss = 55.24282, step = 3000 (0.789 sec)

INFO:tensorflow:loss = 55.24282, step = 3000 (0.789 sec)

INFO:tensorflow:global_step/sec: 123.026

INFO:tensorflow:global_step/sec: 123.026

INFO:tensorflow:loss = 45.018, step = 3100 (0.813 sec)

INFO:tensorflow:loss = 45.018, step = 3100 (0.813 sec)

INFO:tensorflow:global_step/sec: 125.527

INFO:tensorflow:global_step/sec: 125.527

INFO:tensorflow:loss = 32.267475, step = 3200 (0.797 sec)

INFO:tensorflow:loss = 32.267475, step = 3200 (0.797 sec)

INFO:tensorflow:global_step/sec: 125.278

INFO:tensorflow:global_step/sec: 125.278

INFO:tensorflow:loss = 39.706387, step = 3300 (0.799 sec)

INFO:tensorflow:loss = 39.706387, step = 3300 (0.799 sec)

INFO:tensorflow:global_step/sec: 123.954

INFO:tensorflow:global_step/sec: 123.954

INFO:tensorflow:loss = 44.980774, step = 3400 (0.806 sec)

INFO:tensorflow:loss = 44.980774, step = 3400 (0.806 sec)

INFO:tensorflow:global_step/sec: 122.245

INFO:tensorflow:global_step/sec: 122.245

INFO:tensorflow:loss = 34.893124, step = 3500 (0.818 sec)

INFO:tensorflow:loss = 34.893124, step = 3500 (0.818 sec)

INFO:tensorflow:global_step/sec: 142.51

INFO:tensorflow:global_step/sec: 142.51

INFO:tensorflow:loss = 46.19402, step = 3600 (0.701 sec)

INFO:tensorflow:loss = 46.19402, step = 3600 (0.701 sec)

INFO:tensorflow:global_step/sec: 143.921

INFO:tensorflow:global_step/sec: 143.921

INFO:tensorflow:loss = 40.14846, step = 3700 (0.695 sec)

INFO:tensorflow:loss = 40.14846, step = 3700 (0.695 sec)

INFO:tensorflow:global_step/sec: 141.015

INFO:tensorflow:global_step/sec: 141.015

INFO:tensorflow:loss = 39.931377, step = 3800 (0.709 sec)

INFO:tensorflow:loss = 39.931377, step = 3800 (0.709 sec)

INFO:tensorflow:global_step/sec: 139.846

INFO:tensorflow:global_step/sec: 139.846

INFO:tensorflow:loss = 41.347256, step = 3900 (0.716 sec)

INFO:tensorflow:loss = 41.347256, step = 3900 (0.716 sec)

INFO:tensorflow:global_step/sec: 142.112

INFO:tensorflow:global_step/sec: 142.112

INFO:tensorflow:loss = 41.49544, step = 4000 (0.703 sec)

INFO:tensorflow:loss = 41.49544, step = 4000 (0.703 sec)

INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4071...

INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4071...

INFO:tensorflow:Saving checkpoints for 4071 into /tmp/tmp1z2zsfze/model.ckpt.

INFO:tensorflow:Saving checkpoints for 4071 into /tmp/tmp1z2zsfze/model.ckpt.

INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4071...

INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4071...

INFO:tensorflow:Loss for final step: 48.927376.

INFO:tensorflow:Loss for final step: 48.927376.

Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\tworkclass"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_3:0\022\tworkclass"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\teducation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_5:0\022\teducation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\016marital-status"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_7:0\022\016marital-status"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\noccupation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\013\n\tConst_9:0\022\noccupation"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_11:0\022\014relationship"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_11:0\022\014relationship"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_13:0\022\004race"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_13:0\022\004race"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_15:0\022\003sex"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_15:0\022\003sex"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_17:0\022\016native-country"


Warning:tensorflow:Expected binary or unicode string, got type_url: "type.googleapis.com/tensorflow.AssetFileDef"
value: "\n\014\n\nConst_17:0\022\016native-country"


INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Saver not created because there are no variables in the graph to restore

INFO:tensorflow:Calling model_fn.

INFO:tensorflow:Calling model_fn.

INFO:tensorflow:Done calling model_fn.

INFO:tensorflow:Done calling model_fn.

INFO:tensorflow:Signatures INCLUDED in export for Classify: ['serving_default', 'classification']

INFO:tensorflow:Signatures INCLUDED in export for Classify: ['serving_default', 'classification']

INFO:tensorflow:Signatures INCLUDED in export for Regress: ['regression']

INFO:tensorflow:Signatures INCLUDED in export for Regress: ['regression']

INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']

INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']

INFO:tensorflow:Signatures INCLUDED in export for Train: None

INFO:tensorflow:Signatures INCLUDED in export for Train: None

INFO:tensorflow:Signatures INCLUDED in export for Eval: None

INFO:tensorflow:Signatures INCLUDED in export for Eval: None

INFO:tensorflow:Restoring parameters from /tmp/tmp1z2zsfze/model.ckpt-4071

INFO:tensorflow:Restoring parameters from /tmp/tmp1z2zsfze/model.ckpt-4071

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:Assets added to graph.

INFO:tensorflow:Assets written to: /tmp/exported_model_dir/temp-1595841268/assets

INFO:tensorflow:Assets written to: /tmp/exported_model_dir/temp-1595841268/assets

INFO:tensorflow:SavedModel written to: /tmp/exported_model_dir/temp-1595841268/saved_model.pb

INFO:tensorflow:SavedModel written to: /tmp/exported_model_dir/temp-1595841268/saved_model.pb

INFO:tensorflow:Calling model_fn.

INFO:tensorflow:Calling model_fn.

INFO:tensorflow:Done calling model_fn.

INFO:tensorflow:Done calling model_fn.

INFO:tensorflow:Starting evaluation at 2020-07-27T09:14:30Z

INFO:tensorflow:Starting evaluation at 2020-07-27T09:14:30Z

INFO:tensorflow:Graph was finalized.

INFO:tensorflow:Graph was finalized.

INFO:tensorflow:Restoring parameters from /tmp/tmp1z2zsfze/model.ckpt-4071

INFO:tensorflow:Restoring parameters from /tmp/tmp1z2zsfze/model.ckpt-4071

INFO:tensorflow:Running local_init_op.

INFO:tensorflow:Running local_init_op.

INFO:tensorflow:Done running local_init_op.

INFO:tensorflow:Done running local_init_op.

INFO:tensorflow:Evaluation [1628/16281]

INFO:tensorflow:Evaluation [1628/16281]

INFO:tensorflow:Evaluation [3256/16281]

INFO:tensorflow:Evaluation [3256/16281]

INFO:tensorflow:Evaluation [4884/16281]

INFO:tensorflow:Evaluation [4884/16281]

INFO:tensorflow:Evaluation [6512/16281]

INFO:tensorflow:Evaluation [6512/16281]

INFO:tensorflow:Evaluation [8140/16281]

INFO:tensorflow:Evaluation [8140/16281]

INFO:tensorflow:Evaluation [9768/16281]

INFO:tensorflow:Evaluation [9768/16281]

INFO:tensorflow:Evaluation [11396/16281]

INFO:tensorflow:Evaluation [11396/16281]

INFO:tensorflow:Evaluation [13024/16281]

INFO:tensorflow:Evaluation [13024/16281]

INFO:tensorflow:Evaluation [14652/16281]

INFO:tensorflow:Evaluation [14652/16281]

INFO:tensorflow:Evaluation [16280/16281]

INFO:tensorflow:Evaluation [16280/16281]

INFO:tensorflow:Evaluation [16281/16281]

INFO:tensorflow:Evaluation [16281/16281]

INFO:tensorflow:Inference Time : 118.49958s

INFO:tensorflow:Inference Time : 118.49958s

INFO:tensorflow:Finished evaluation at 2020-07-27-09:16:29

INFO:tensorflow:Finished evaluation at 2020-07-27-09:16:29

INFO:tensorflow:Saving dict for global step 4071: accuracy = 0.850562, accuracy_baseline = 0.76377374, auc = 0.9016513, auc_precision_recall = 0.96709853, average_loss = 0.3241848, global_step = 4071, label/mean = 0.76377374, loss = 0.3241848, precision = 0.8778903, prediction/mean = 0.7650077, recall = 0.93429834

INFO:tensorflow:Saving dict for global step 4071: accuracy = 0.850562, accuracy_baseline = 0.76377374, auc = 0.9016513, auc_precision_recall = 0.96709853, average_loss = 0.3241848, global_step = 4071, label/mean = 0.76377374, loss = 0.3241848, precision = 0.8778903, prediction/mean = 0.7650077, recall = 0.93429834

INFO:tensorflow:Saving 'checkpoint_path' summary for global step 4071: /tmp/tmp1z2zsfze/model.ckpt-4071

INFO:tensorflow:Saving 'checkpoint_path' summary for global step 4071: /tmp/tmp1z2zsfze/model.ckpt-4071

{'accuracy': 0.850562,
 'accuracy_baseline': 0.76377374,
 'auc': 0.9016513,
 'auc_precision_recall': 0.96709853,
 'average_loss': 0.3241848,
 'global_step': 4071,
 'label/mean': 0.76377374,
 'loss': 0.3241848,
 'precision': 0.8778903,
 'prediction/mean': 0.7650077,
 'recall': 0.93429834}

What we did

In this example we used tf.Transform to preprocess a dataset of census data, and train a model with the cleaned and transformed data. We also created an input function that we could use when we deploy our trained model in a production environment to perform inference (a hypothetical sketch of that follows below). By using the same code for both training and inference we avoid any issues with data skew. Along the way we learned about creating an Apache Beam transform to perform the transformation that we needed to clean the data, and we wrapped our data in TensorFlow FeatureColumns. This is just a small piece of what TensorFlow Transform can do! We encourage you to dive into tf.Transform and discover what it can do for you.
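
As a closing illustration, here is a minimal, hypothetical sketch (not part of the original notebook) of loading the exported SavedModel and running inference on a single raw example. It assumes the export landed under temp/EXPORTED_MODEL_DIR as in the cells above, and that the 'predict' signature accepts serialized Example protos under the key 'examples'; inspect loaded.signatures and the printed input signature if your export differs. The feature values come from the first row of the census training data.

import os
import tensorflow as tf

# Hypothetical: locate the timestamped export directory written by
# estimator.export_saved_model() inside working_dir/EXPORTED_MODEL_DIR.
export_base = os.path.join(temp, EXPORTED_MODEL_DIR)
latest_export = os.path.join(export_base, sorted(os.listdir(export_base))[-1])

loaded = tf.saved_model.load(latest_export)
print(list(loaded.signatures.keys()))  # e.g. ['serving_default', 'predict', ...]

def _bytes_feature(value):
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value.encode()]))

def _float_feature(value):
  return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

# One raw (untransformed) example; the transform graph baked into the
# SavedModel will preprocess it before the linear model runs.
raw_example = tf.train.Example(features=tf.train.Features(feature={
    'age': _float_feature(39.0),
    'capital-gain': _float_feature(2174.0),
    'capital-loss': _float_feature(0.0),
    'hours-per-week': _float_feature(40.0),
    'education-num': _float_feature(13.0),
    'workclass': _bytes_feature('State-gov'),
    'education': _bytes_feature('Bachelors'),
    'marital-status': _bytes_feature('Never-married'),
    'occupation': _bytes_feature('Adm-clerical'),
    'relationship': _bytes_feature('Not-in-family'),
    'race': _bytes_feature('White'),
    'sex': _bytes_feature('Male'),
    'native-country': _bytes_feature('United-States'),
}))

predict_fn = loaded.signatures['predict']
print(predict_fn.structured_input_signature)  # shows the expected input key
# Assumption: the parsing serving input receiver exposes serialized Example
# protos under the key 'examples'.
print(predict_fn(examples=tf.constant([raw_example.SerializeToString()])))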