TFX Keras Component Tutorial

A Component-by-Component Introduction to TensorFlow Extended (TFX)

This Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).

It covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.

When you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.

Background

This notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.

Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful as a lightweight development environment when developing your own pipelines, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.

Orchestration

In a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
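Production pipelines and interactive runs share the same component definitions. As a rough sketch of the production path (using the tfx.v1 API that we import below; the pipeline name and paths here are hypothetical), the components built in this notebook could be assembled into a pipeline and handed to a runner such as the local Beam-based one:

# A minimal sketch, not run in this tutorial. The components referenced here
# (example_gen, statistics_gen) are defined later in this notebook, and the
# pipeline name and paths are hypothetical placeholders.
from tfx import v1 as tfx

pipeline = tfx.dsl.Pipeline(
    pipeline_name='taxi_pipeline',             # hypothetical name
    pipeline_root='/tmp/pipeline_root',        # hypothetical output root
    components=[example_gen, statistics_gen],  # defined later in this notebook
    metadata_connection_config=(
        tfx.orchestration.metadata.sqlite_metadata_connection_config(
            '/tmp/metadata.sqlite')))          # hypothetical MLMD database
# LocalDagRunner plays the role that Airflow or Kubeflow plays in production.
tfx.orchestration.LocalDagRunner().run(pipeline)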

Metadata

In a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the /tmp directory on the Jupyter notebook or Colab server.
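For illustration, here is a minimal sketch of opening such an MLMD store directly with the ml_metadata package; the SQLite path is a hypothetical placeholder:

# A minimal sketch, assuming an existing SQLite-backed MLMD database at a
# hypothetical path. Lists the artifact types the store knows about.
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = '/tmp/metadata.sqlite'  # hypothetical
store = metadata_store.MetadataStore(connection_config)
print([t.name for t in store.get_artifact_types()])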

Setup

First, we install and import the necessary packages, set up paths, and download data.

Upgrade Pip

To avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately.

try:
  # This import only succeeds inside Colab, so the upgrade runs there only.
  import colab
  !pip install --upgrade pip
except:
  pass

Install TFX

pip install -U tfx

Did you restart the runtime?

If you are using Google Colab, the first time that you run the cell above you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.

Import packages

We import the necessary packages, including the standard TFX component classes.

import os
import pprint
import tempfile
import urllib

import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()

from tfx import v1 as tfx
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext

%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip

Let's check the library versions.

print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
TensorFlow version: 2.7.0
TFX version: 1.5.0

Set up pipeline paths

# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]

# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')

# This is the path where your model will be pushed for serving.
_serving_model_dir = os.path.join(
    tempfile.mkdtemp(), 'serving_model/taxi_simple')

# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)

Download example data

We download the example dataset for use in our TFX pipeline.

The dataset we're using is the Taxi Trips dataset released by the City of Chicago. The columns in this dataset are:

pickup_community_area  fare  trip_start_month
trip_start_hour  trip_start_day  trip_start_timestamp
pickup_latitude  pickup_longitude  dropoff_latitude
dropoff_longitude  trip_miles  pickup_census_tract
dropoff_census_tract  payment_type  company
trip_seconds  dropoff_community_area  tips

With this dataset, we will build a model that predicts the tips of a trip.

_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
('/tmp/tfx-datacz9xjro6/data.csv', <http.client.HTTPMessage at 0x7f889af49250>)

Take a quick look at the CSV file.

head {_data_filepath}
pickup_community_area,fare,trip_start_month,trip_start_hour,trip_start_day,trip_start_timestamp,pickup_latitude,pickup_longitude,dropoff_latitude,dropoff_longitude,trip_miles,pickup_census_tract,dropoff_census_tract,payment_type,company,trip_seconds,dropoff_community_area,tips
,12.45,5,19,6,1400269500,,,,,0.0,,,Credit Card,Chicago Elite Cab Corp. (Chicago Carriag,0,,0.0
,0,3,19,5,1362683700,,,,,0,,,Unknown,Chicago Elite Cab Corp.,300,,0
60,27.05,10,2,3,1380593700,41.836150155,-87.648787952,,,12.6,,,Cash,Taxi Affiliation Services,1380,,0.0
10,5.85,10,1,2,1382319000,41.985015101,-87.804532006,,,0.0,,,Cash,Taxi Affiliation Services,180,,0.0
14,16.65,5,7,5,1369897200,41.968069,-87.721559063,,,0.0,,,Cash,Dispatch Taxi Affiliation,1080,,0.0
13,16.45,11,12,3,1446554700,41.983636307,-87.723583185,,,6.9,,,Cash,,780,,0.0
16,32.05,12,1,1,1417916700,41.953582125,-87.72345239,,,15.4,,,Cash,,1200,,0.0
30,38.45,10,10,5,1444301100,41.839086906,-87.714003807,,,14.6,,,Cash,,2580,,0.0
11,14.65,1,1,3,1358213400,41.978829526,-87.771166703,,,5.81,,,Cash,,1080,,0.0

Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one's own risk.

Create the InteractiveContext

Finally, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.

# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
WARNING:absl:InteractiveContext pipeline_root argument not provided: using temporary directory /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq as root for pipeline outputs.
WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/metadata.sqlite.

Run TFX components interactively

In the cells that follow, we create TFX components one by one, run each of them, and visualize their output artifacts.

ExampleGen

The ExampleGen component is usually at the start of a TFX pipeline. It will:

  1. Split the data into training and evaluation sets (by default, 2/3 training + 1/3 evaluation)
  2. Convert the data into the tf.Example format (learn more here)
  3. Copy the data into the _tfx_root directory for other components to access

ExampleGen takes the path to your data source as input. In our case, this is the _data_root path that contains the downloaded CSV.

example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
context.run(example_gen)
INFO:absl:Running driver for CsvExampleGen
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:select span and version = (0, None)
INFO:absl:latest span and version = (0, None)
INFO:absl:Running executor for CsvExampleGen
INFO:absl:Generating examples.
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
INFO:absl:Processing input csv data /tmp/tfx-datacz9xjro6/* to TFExample.
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
INFO:absl:Examples generated.
INFO:absl:Running publisher for CsvExampleGen
INFO:absl:MetadataStore with DB connection initialized

Let's examine the output artifacts of ExampleGen. This component produces two artifacts, training examples and evaluation examples:

artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
["train", "eval"] /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/CsvExampleGen/examples/1

We can also take a look at the first three training examples:

# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')

# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
                      for name in os.listdir(train_uri)]

# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")

# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
  serialized_example = tfrecord.numpy()
  example = tf.train.Example()
  example.ParseFromString(serialized_example)
  pp.pprint(example)
features {
  feature {
    key: "company"
    value {
      bytes_list {
        value: "Chicago Elite Cab Corp. (Chicago Carriag"
      }
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
      float_list {
      }
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
      float_list {
      }
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 12.449999809265137
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      bytes_list {
        value: "Credit Card"
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      float_list {
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      float_list {
      }
    }
  }
  feature {
    key: "tips"
    value {
      float_list {
        value: 0.0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: 0.0
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 6
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 19
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 5
      }
    }
  }
  feature {
    key: "trip_start_timestamp"
    value {
      int64_list {
        value: 1400269500
      }
    }
  }
}

features {
  feature {
    key: "company"
    value {
      bytes_list {
        value: "Taxi Affiliation Services"
      }
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
      float_list {
      }
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
      float_list {
      }
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 27.049999237060547
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      bytes_list {
        value: "Cash"
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
        value: 60
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      float_list {
        value: 41.836151123046875
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      float_list {
        value: -87.64878845214844
      }
    }
  }
  feature {
    key: "tips"
    value {
      float_list {
        value: 0.0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: 12.600000381469727
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      int64_list {
        value: 1380
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 2
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 10
      }
    }
  }
  feature {
    key: "trip_start_timestamp"
    value {
      int64_list {
        value: 1380593700
      }
    }
  }
}

features {
  feature {
    key: "company"
    value {
      bytes_list {
      }
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
      float_list {
      }
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
      float_list {
      }
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 16.450000762939453
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      bytes_list {
        value: "Cash"
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
      int64_list {
      }
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
        value: 13
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      float_list {
        value: 41.98363494873047
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      float_list {
        value: -87.72357940673828
      }
    }
  }
  feature {
    key: "tips"
    value {
      float_list {
        value: 0.0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: 6.900000095367432
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      int64_list {
        value: 780
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 12
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 11
      }
    }
  }
  feature {
    key: "trip_start_timestamp"
    value {
      int64_list {
        value: 1446554700
      }
    }
  }
}

Now that ExampleGen has finished ingesting the data, the next step is data analysis.

StatisticsGen

The StatisticsGen component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the TensorFlow Data Validation library.

StatisticsGen takes as input the dataset we just ingested using ExampleGen.

statistics_gen = tfx.components.StatisticsGen(
    examples=example_gen.outputs['examples'])
context.run(statistics_gen)
INFO:absl:Excluding no splits because exclude_splits is not set.
INFO:absl:Running driver for StatisticsGen
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for StatisticsGen
INFO:absl:Generating statistics for split train.
INFO:absl:Statistics for split train written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/StatisticsGen/statistics/2/Split-train.
INFO:absl:Generating statistics for split eval.
INFO:absl:Statistics for split eval written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/StatisticsGen/statistics/2/Split-eval.
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
INFO:absl:Running publisher for StatisticsGen
INFO:absl:MetadataStore with DB connection initialized

After StatisticsGen finishes running, we can visualize the output statistics. Try playing with the different plots!

context.show(statistics_gen.outputs['statistics'])
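If you prefer programmatic access over the notebook visualization, the written statistics can also be loaded with TensorFlow Data Validation. The sketch below assumes the per-split Split-train/FeatureStats.pb layout that this version of StatisticsGen writes (see the log output above):

# A minimal sketch, assuming the Split-train/FeatureStats.pb layout shown in
# the StatisticsGen logs above.
import tensorflow_data_validation as tfdv

stats_uri = statistics_gen.outputs['statistics'].get()[0].uri
stats = tfdv.load_stats_binary(
    os.path.join(stats_uri, 'Split-train', 'FeatureStats.pb'))
tfdv.visualize_statistics(stats)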

SchemaGen

The SchemaGen component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the TensorFlow Data Validation library.

SchemaGen will take as input the statistics that we generated with StatisticsGen, looking at the training split by default.

schema_gen = tfx.components.SchemaGen(
    statistics=statistics_gen.outputs['statistics'],
    infer_feature_shape=False)
context.run(schema_gen)
INFO:absl:Excluding no splits because exclude_splits is not set.
INFO:absl:Running driver for SchemaGen
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for SchemaGen
INFO:absl:Processing schema from statistics for split train.
INFO:absl:Processing schema from statistics for split eval.
INFO:absl:Schema written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/SchemaGen/schema/3/schema.pbtxt.
INFO:absl:Running publisher for SchemaGen
INFO:absl:MetadataStore with DB connection initialized

After SchemaGen finishes running, we can visualize the generated schema as a table.

context.show(schema_gen.outputs['schema'])

Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.

To learn more about schemas, see the SchemaGen documentation.
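The schema itself is stored as a schema.pbtxt file inside the artifact directory (see the SchemaGen log output above), so you can also load and inspect it programmatically. A minimal sketch using TensorFlow Data Validation:

# A minimal sketch: load the generated schema.pbtxt and display it with TFDV.
import tensorflow_data_validation as tfdv

schema_uri = schema_gen.outputs['schema'].get()[0].uri
schema = tfdv.load_schema_text(os.path.join(schema_uri, 'schema.pbtxt'))
tfdv.display_schema(schema)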

ExampleValidator

The ExampleValidator component detects anomalies in your data, based on the expectations defined by the schema. It also uses the TensorFlow Data Validation library.

ExampleValidator will take as input the statistics from StatisticsGen, and the schema from SchemaGen.

example_validator = tfx.components.ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_gen.outputs['schema'])
context.run(example_validator)
INFO:absl:Excluding no splits because exclude_splits is not set.
INFO:absl:Running driver for ExampleValidator
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for ExampleValidator
INFO:absl:Validating schema against the computed statistics for split train.
INFO:absl:Validation complete for split train. Anomalies written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/ExampleValidator/anomalies/4/Split-train.
INFO:absl:Validating schema against the computed statistics for split eval.
INFO:absl:Validation complete for split eval. Anomalies written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/ExampleValidator/anomalies/4/Split-eval.
INFO:absl:Running publisher for ExampleValidator
INFO:absl:MetadataStore with DB connection initialized

After ExampleValidator finishes running, we can visualize the anomalies as a table.

context.show(example_validator.outputs['anomalies'])

In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this is the first dataset that we've analyzed and the schema is tailored to it. You should review this schema; anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and the anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.
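The same check can be reproduced by hand with TensorFlow Data Validation, which is what ExampleValidator uses under the hood. This sketch reuses the stats and schema objects loaded in the earlier snippets:

# A minimal sketch, reusing the stats and schema loaded above: validate the
# statistics against the schema and display any anomalies found.
import tensorflow_data_validation as tfdv

anomalies = tfdv.validate_statistics(statistics=stats, schema=schema)
tfdv.display_anomalies(anomalies)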

Transform

The Transform component performs feature engineering for both training and serving. It uses the TensorFlow Transform library.

Transform will take as input the data from ExampleGen, the schema from SchemaGen, as well as a module that contains user-defined Transform code.

Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, see the tutorial). First, we define a few constants for feature engineering:

_taxi_constants_module_file = 'taxi_constants.py'
%%writefile {_taxi_constants_module_file}

# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]

CATEGORICAL_FEATURE_KEYS = [
    'trip_start_hour', 'trip_start_day', 'trip_start_month',
    'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
    'dropoff_community_area'
]

DENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']

# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10

BUCKET_FEATURE_KEYS = [
    'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
    'dropoff_longitude'
]

# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 1000

# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10

VOCAB_FEATURE_KEYS = [
    'payment_type',
    'company',
]

# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
Writing taxi_constants.py

Next, we write a preprocessing_fn which takes in raw data as input, and returns transformed features that our model can train on:

_taxi_transform_module_file = 'taxi_transform.py'
%%writefile {_taxi_transform_module_file}

import tensorflow as tf
import tensorflow_transform as tft

import taxi_constants

_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY


def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.
  Args:
    inputs: map from feature keys to raw not-yet-transformed features.
  Returns:
    Map from string feature key to transformed feature operations.
  """
  outputs = {}
  for key in _DENSE_FLOAT_FEATURE_KEYS:
    # If sparse make it dense, setting nan's to 0 or '', and apply zscore.
    outputs[key] = tft.scale_to_z_score(
        _fill_in_missing(inputs[key]))

  for key in _VOCAB_FEATURE_KEYS:
    # Build a vocabulary for this feature.
    outputs[key] = tft.compute_and_apply_vocabulary(
        _fill_in_missing(inputs[key]),
        top_k=_VOCAB_SIZE,
        num_oov_buckets=_OOV_SIZE)

  for key in _BUCKET_FEATURE_KEYS:
    outputs[key] = tft.bucketize(
        _fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)

  for key in _CATEGORICAL_FEATURE_KEYS:
    outputs[key] = _fill_in_missing(inputs[key])

  # Was this passenger a big tipper?
  taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
  tips = _fill_in_missing(inputs[_LABEL_KEY])
  outputs[_LABEL_KEY] = tf.where(
      tf.math.is_nan(taxi_fare),
      tf.cast(tf.zeros_like(taxi_fare), tf.int64),
      # Test if the tip was > 20% of the fare.
      tf.cast(
          tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))

  return outputs


def _fill_in_missing(x):
  """Replace missing values in a SparseTensor.
  Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
  Args:
    x: A `SparseTensor` of rank 2.  Its dense shape should have size at most 1
      in the second dimension.
  Returns:
    A rank 1 tensor where missing values of `x` have been filled in.
  """
  if not isinstance(x, tf.sparse.SparseTensor):
    return x

  default_value = '' if x.dtype == tf.string else 0
  return tf.squeeze(
      tf.sparse.to_dense(
          tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
          default_value),
      axis=1)
Writing taxi_transform.py
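To see what _fill_in_missing does in isolation, here is a small standalone check on toy data (hypothetical values, not part of the pipeline):

# Toy example: a rank-2 SparseTensor with at most one value per row; row 1 is
# missing. Densify with a default of 0, then squeeze to rank 1, mirroring
# _fill_in_missing above.
import tensorflow as tf

sparse = tf.sparse.SparseTensor(
    indices=[[0, 0], [2, 0]], values=[1.5, 2.5], dense_shape=[3, 1])
dense = tf.squeeze(tf.sparse.to_dense(sparse, default_value=0.0), axis=1)
print(dense.numpy())  # [1.5 0.  2.5]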

Now, we pass in this feature engineering code to the Transform component and run it to transform the data.

transform = tfx.components.Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform)
INFO:absl:Generating ephemeral wheel package for '/tmpfs/src/temp/docs/tutorials/tfx/taxi_transform.py' (including modules: ['taxi_transform', 'taxi_constants']).
INFO:absl:User module package has hash fingerprint version f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '/tmp/tmp9qnpryw9/_tfx_generated_setup.py', 'bdist_wheel', '--bdist-dir', '/tmp/tmppaskl3va', '--dist-dir', '/tmp/tmpr6oorqji']
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  setuptools.SetuptoolsDeprecationWarning,
INFO:absl:Successfully built user code wheel distribution at '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'; target user module is 'taxi_transform'.
INFO:absl:Full user module path is 'taxi_transform@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'
INFO:absl:Running driver for Transform
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for Transform
INFO:absl:Analyze the 'train' split and transform all splits when splits_config is not set.
INFO:absl:udf_utils.get_fn {'module_file': None, 'module_path': 'taxi_transform@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl', 'preprocessing_fn': None} 'preprocessing_fn'
INFO:absl:Installing '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmpbvbj9r5b', '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl']
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying taxi_transform.py -> build/lib
copying taxi_constants.py -> build/lib
running install
running install_lib
running install_egg_info
running egg_info
creating tfx_user_code_Transform.egg-info
writing manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt'
writing manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt'
Copying tfx_user_code_Transform.egg-info to /tmp/tmppaskl3va/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3.7.egg-info
running install_scripts
Processing /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl
INFO:absl:Successfully installed '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'.
INFO:absl:udf_utils.get_fn {'module_file': None, 'module_path': 'taxi_transform@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl', 'stats_options_updater_fn': None} 'stats_options_updater_fn'
INFO:absl:Installing '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmpbzwdie1a', '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl']
Installing collected packages: tfx-user-code-Transform
Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424
Processing /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl
INFO:absl:Successfully installed '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'.
INFO:absl:Installing '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmp09euava5', '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl']
Installing collected packages: tfx-user-code-Transform
Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424
Processing /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl
INFO:absl:Successfully installed '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'.
INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature trip_seconds has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor.
INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor.
Installing collected packages: tfx-user-code-Transform
Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_transform/tf_utils.py:289: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType], int] instead.
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
WARNING:absl:Tables initialized inside a tf.function  will be re-initialized on every invocation of the function. This  re-initialization can have significant impact on performance. Consider lifting  them out of the graph context using  `tf.init_scope`.: compute_and_apply_vocabulary/apply_vocab/text_file_init/InitializeTableFromTextFileV2
WARNING:absl:Tables initialized inside a tf.function  will be re-initialized on every invocation of the function. This  re-initialization can have significant impact on performance. Consider lifting  them out of the graph context using  `tf.init_scope`.: compute_and_apply_vocabulary_1/apply_vocab/text_file_init/InitializeTableFromTextFileV2
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
2021-12-21 10:10:18.679569: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/transform_graph/5/.temp_path/tftransform_tmp/80dbc09e6ded4a93b5c506e252c8f536/assets
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:struct2tensor is not available.
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/transform_graph/5/.temp_path/tftransform_tmp/572eacb7c64f4f6e9262f7d496a95f86/assets
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:struct2tensor is not available.
INFO:absl:Running publisher for Transform
INFO:absl:MetadataStore with DB connection initialized

Let's examine the output artifacts of Transform. This component produces two types of outputs:

  • transform_graph is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
  • transformed_examples represents the preprocessed training and evaluation data.
transform.outputs
{'transform_graph': Channel(
     type_name: TransformGraph
     artifacts: [Artifact(artifact: id: 5
 type_id: 22
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/transform_graph/5"
 custom_properties {
   key: "name"
   value {
     string_value: "transform_graph"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Transform"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 22
 name: "TransformGraph"
 )]
     additional_properties: {}
     additional_custom_properties: {}
 ),
 'transformed_examples': Channel(
     type_name: Examples
     artifacts: [Artifact(artifact: id: 6
 type_id: 14
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/transformed_examples/5"
 properties {
   key: "split_names"
   value {
     string_value: "[\"train\", \"eval\"]"
   }
 }
 custom_properties {
   key: "name"
   value {
     string_value: "transformed_examples"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Transform"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 14
 name: "Examples"
 properties {
   key: "span"
   value: INT
 }
 properties {
   key: "split_names"
   value: STRING
 }
 properties {
   key: "version"
   value: INT
 }
 base_type: DATASET
 )]
     additional_properties: {}
     additional_custom_properties: {}
 ),
 'updated_analyzer_cache': Channel(
     type_name: TransformCache
     artifacts: [Artifact(artifact: id: 7
 type_id: 23
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/updated_analyzer_cache/5"
 custom_properties {
   key: "name"
   value {
     string_value: "updated_analyzer_cache"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Transform"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 23
 name: "TransformCache"
 )]
     additional_properties: {}
     additional_custom_properties: {}
 ),
 'pre_transform_schema': Channel(
     type_name: Schema
     artifacts: [Artifact(artifact: id: 8
 type_id: 18
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/pre_transform_schema/5"
 custom_properties {
   key: "name"
   value {
     string_value: "pre_transform_schema"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Transform"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 18
 name: "Schema"
 )]
     additional_properties: {}
     additional_custom_properties: {}
 ),
 'pre_transform_stats': Channel(
     type_name: ExampleStatistics
     artifacts: [Artifact(artifact: id: 9
 type_id: 16
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/pre_transform_stats/5"
 custom_properties {
   key: "name"
   value {
     string_value: "pre_transform_stats"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Transform"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 16
 name: "ExampleStatistics"
 properties {
   key: "span"
   value: INT
 }
 properties {
   key: "split_names"
   value: STRING
 }
 base_type: STATISTICS
 )]
     additional_properties: {}
     additional_custom_properties: {}
 ),
 'post_transform_schema': Channel(
     type_name: Schema
     artifacts: [Artifact(artifact: id: 10
 type_id: 18
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/post_transform_schema/5"
 custom_properties {
   key: "name"
   value {
     string_value: "post_transform_schema"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Transform"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 18
 name: "Schema"
 )]
     additional_properties: {}
     additional_custom_properties: {}
 ),
 'post_transform_stats': Channel(
     type_name: ExampleStatistics
     artifacts: [Artifact(artifact: id: 11
 type_id: 16
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/post_transform_stats/5"
 custom_properties {
   key: "name"
   value {
     string_value: "post_transform_stats"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Transform"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 16
 name: "ExampleStatistics"
 properties {
   key: "span"
   value: INT
 }
 properties {
   key: "split_names"
   value: STRING
 }
 base_type: STATISTICS
 )]
     additional_properties: {}
     additional_custom_properties: {}
 ),
 'post_transform_anomalies': Channel(
     type_name: ExampleAnomalies
     artifacts: [Artifact(artifact: id: 12
 type_id: 20
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/post_transform_anomalies/5"
 custom_properties {
   key: "name"
   value {
     string_value: "post_transform_anomalies"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Transform"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 20
 name: "ExampleAnomalies"
 properties {
   key: "span"
   value: INT
 }
 properties {
   key: "split_names"
   value: STRING
 }
 )]
     additional_properties: {}
     additional_custom_properties: {}
 )}

Take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.

train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
['transform_fn', 'transformed_metadata', 'metadata']

The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data.
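
These outputs can also be loaded programmatically. Here is a minimal sketch (assuming the train_uri obtained above and the tensorflow_transform package that is installed alongside TFX) using tft.TFTransformOutput to wrap this directory:

import tensorflow_transform as tft

# Minimal sketch: wrap the transform_graph artifact directory and inspect
# the schema and parsing spec of the transformed data.
tf_transform_output = tft.TFTransformOutput(train_uri)
print(tf_transform_output.transformed_metadata.schema)
pp.pprint(tf_transform_output.transformed_feature_spec())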

We can also take a look at the first three transformed examples:

# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')

# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
                      for name in os.listdir(train_uri)]

# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")

# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
  serialized_example = tfrecord.numpy()
  example = tf.train.Example()
  example.ParseFromString(serialized_example)
  pp.pprint(example)
features {
  feature {
    key: "company"
    value {
      int64_list {
        value: 8
      }
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 0.061060599982738495
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      int64_list {
        value: 1
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "tips"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: -0.15886741876602173
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      float_list {
        value: -0.7118487358093262
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 6
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 19
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 5
      }
    }
  }
}

features {
  feature {
    key: "company"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 1.2521240711212158
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
        value: 60
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "tips"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: 0.532160758972168
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      float_list {
        value: 0.5509493350982666
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 2
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 10
      }
    }
  }
}

features {
  feature {
    key: "company"
    value {
      int64_list {
        value: 48
      }
    }
  }
  feature {
    key: "dropoff_census_tract"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_community_area"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_latitude"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "dropoff_longitude"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "fare"
    value {
      float_list {
        value: 0.3873794376850128
      }
    }
  }
  feature {
    key: "payment_type"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_census_tract"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "pickup_community_area"
    value {
      int64_list {
        value: 13
      }
    }
  }
  feature {
    key: "pickup_latitude"
    value {
      int64_list {
        value: 9
      }
    }
  }
  feature {
    key: "pickup_longitude"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "tips"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "trip_miles"
    value {
      float_list {
        value: 0.21955277025699615
      }
    }
  }
  feature {
    key: "trip_seconds"
    value {
      float_list {
        value: 0.0019067146349698305
      }
    }
  }
  feature {
    key: "trip_start_day"
    value {
      int64_list {
        value: 3
      }
    }
  }
  feature {
    key: "trip_start_hour"
    value {
      int64_list {
        value: 12
      }
    }
  }
  feature {
    key: "trip_start_month"
    value {
      int64_list {
        value: 11
      }
    }
  }
}

Now that the Transform component has transformed the data into features, the next step is to train a model.

Trainer

The Trainer component will train a model that you define in TensorFlow. By default, Trainer supports the Estimator API; to use the Keras API, you need to select the Generic Trainer by setting custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor) in Trainer's constructor.
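
For reference, below is a hedged sketch of that older-style constructor. It assumes a pre-1.0 TFX release; the tfx.v1 Trainer used later in this notebook already defaults to the Keras-capable generic executor, so no custom_executor_spec is needed there.

# Sketch for older (pre-1.0) TFX releases only; not needed with the tfx.v1
# API used elsewhere in this notebook.
from tfx.components import Trainer
from tfx.components.trainer.executor import GenericExecutor
from tfx.dsl.components.base import executor_spec
from tfx.proto import trainer_pb2

trainer = Trainer(
    module_file=os.path.abspath(_taxi_trainer_module_file),  # defined below
    examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    # Selects the Keras-capable generic executor instead of the
    # Estimator-based default executor of those releases.
    custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000))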

Trainer takes as input the schema from SchemaGen, the transformed data and graph from Transform, training parameters, as well as a module that contains user-defined model code.

Let's look at an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, see the tutorial):

_taxi_trainer_module_file = 'taxi_trainer.py'
%%writefile {_taxi_trainer_module_file}

from typing import List, Text

import os
from absl import logging

import datetime
import tensorflow as tf
import tensorflow_transform as tft

from tfx import v1 as tfx
from tfx_bsl.public import tfxio

import taxi_constants

_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = taxi_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = taxi_constants.LABEL_KEY


def _get_tf_examples_serving_signature(model, tf_transform_output):
  """Returns a serving signature that accepts `tensorflow.Example`."""

  # We need to track the layers in the model in order to save it.
  # TODO(b/162357359): Revise once the bug is resolved.
  model.tft_layer_inference = tf_transform_output.transform_features_layer()

  @tf.function(input_signature=[
      tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
  ])
  def serve_tf_examples_fn(serialized_tf_example):
    """Returns the output to be used in the serving signature."""
    raw_feature_spec = tf_transform_output.raw_feature_spec()
    # Remove label feature since these will not be present at serving time.
    raw_feature_spec.pop(_LABEL_KEY)
    raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
    transformed_features = model.tft_layer_inference(raw_features)
    logging.info('serve_transformed_features = %s', transformed_features)

    outputs = model(transformed_features)
    # TODO(b/154085620): Convert the predicted labels from the model using a
    # reverse-lookup (opposite of transform.py).
    return {'outputs': outputs}

  return serve_tf_examples_fn


def _get_transform_features_signature(model, tf_transform_output):
  """Returns a serving signature that applies tf.Transform to features."""

  # We need to track the layers in the model in order to save it.
  # TODO(b/162357359): Revise once the bug is resolved.
  model.tft_layer_eval = tf_transform_output.transform_features_layer()

  @tf.function(input_signature=[
      tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
  ])
  def transform_features_fn(serialized_tf_example):
    """Returns the transformed_features to be fed as input to evaluator."""
    raw_feature_spec = tf_transform_output.raw_feature_spec()
    raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
    transformed_features = model.tft_layer_eval(raw_features)
    logging.info('eval_transformed_features = %s', transformed_features)
    return transformed_features

  return transform_features_fn


def _input_fn(file_pattern: List[Text],
              data_accessor: tfx.components.DataAccessor,
              tf_transform_output: tft.TFTransformOutput,
              batch_size: int = 200) -> tf.data.Dataset:
  """Generates features and label for tuning/training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    tf_transform_output: A TFTransformOutput.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch

  Returns:
    A dataset that contains (features, indices) tuple where features is a
      dictionary of Tensors, and indices is a single Tensor of label indices.
  """
  return data_accessor.tf_dataset_factory(
      file_pattern,
      tfxio.TensorFlowDatasetOptions(
          batch_size=batch_size, label_key=_LABEL_KEY),
      tf_transform_output.transformed_metadata.schema)


def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
  """Creates a DNN Keras model for classifying taxi data.

  Args:
    hidden_units: [int], the layer sizes of the DNN (input layer first).

  Returns:
    A keras Model.
  """
  real_valued_columns = [
      tf.feature_column.numeric_column(key, shape=())
      for key in _DENSE_FLOAT_FEATURE_KEYS
  ]
  categorical_columns = [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
      for key in _VOCAB_FEATURE_KEYS
  ]
  categorical_columns += [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
      for key in _BUCKET_FEATURE_KEYS
  ]
  categorical_columns += [
      tf.feature_column.categorical_column_with_identity(  # pylint: disable=g-complex-comprehension
          key,
          num_buckets=num_buckets,
          default_value=0) for key, num_buckets in zip(
              _CATEGORICAL_FEATURE_KEYS,
              _MAX_CATEGORICAL_FEATURE_VALUES)
  ]
  indicator_column = [
      tf.feature_column.indicator_column(categorical_column)
      for categorical_column in categorical_columns
  ]

  model = _wide_and_deep_classifier(
      # TODO(b/139668410) replace with premade wide_and_deep keras model
      wide_columns=indicator_column,
      deep_columns=real_valued_columns,
      dnn_hidden_units=hidden_units or [100, 70, 50, 25])
  return model


def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
  """Build a simple keras wide and deep model.

  Args:
    wide_columns: Feature columns wrapped in indicator_column for wide (linear)
      part of the model.
    deep_columns: Feature columns for deep part of the model.
    dnn_hidden_units: [int], the layer sizes of the hidden DNN.

  Returns:
    A Wide and Deep Keras model
  """
  # Following values are hard coded for simplicity in this example,
  # However, preferably they should be passed in as hparams.

  # Keras needs the feature definitions at compile time.
  # TODO(b/139081439): Automate generation of input layers from FeatureColumn.
  input_layers = {
      colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
      for colname in _DENSE_FLOAT_FEATURE_KEYS
  }
  input_layers.update({
      colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
      for colname in _VOCAB_FEATURE_KEYS
  })
  input_layers.update({
      colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
      for colname in _BUCKET_FEATURE_KEYS
  })
  input_layers.update({
      colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
      for colname in _CATEGORICAL_FEATURE_KEYS
  })

  # TODO(b/161952382): Replace with Keras preprocessing layers.
  deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
  for numnodes in dnn_hidden_units:
    deep = tf.keras.layers.Dense(numnodes)(deep)
  wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)

  output = tf.keras.layers.Dense(1)(
          tf.keras.layers.concatenate([deep, wide]))

  model = tf.keras.Model(input_layers, output)
  model.compile(
      loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
      optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
      metrics=[tf.keras.metrics.BinaryAccuracy()])
  model.summary(print_fn=logging.info)
  return model


# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """
  # Number of nodes in the first layer of the DNN
  first_dnn_layer_size = 100
  num_dnn_layers = 4
  dnn_decay_factor = 0.7

  tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)

  train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor, 
                            tf_transform_output, 40)
  eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor, 
                           tf_transform_output, 40)

  model = _build_keras_model(
      # Construct layer sizes with exponential decay
      hidden_units=[
          max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
          for i in range(num_dnn_layers)
      ])

  tensorboard_callback = tf.keras.callbacks.TensorBoard(
      log_dir=fn_args.model_run_dir, update_freq='batch')
  model.fit(
      train_dataset,
      steps_per_epoch=fn_args.train_steps,
      validation_data=eval_dataset,
      validation_steps=fn_args.eval_steps,
      callbacks=[tensorboard_callback])

  signatures = {
      'serving_default':
          _get_tf_examples_serving_signature(model, tf_transform_output),
      'transform_features':
          _get_transform_features_signature(model, tf_transform_output),
  }
  model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
Writing taxi_trainer.py

Now we pass in this model code to the Trainer component and run it to train the model.

trainer = tfx.components.Trainer(
    module_file=os.path.abspath(_taxi_trainer_module_file),
    examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    train_args=tfx.proto.TrainArgs(num_steps=10000),
    eval_args=tfx.proto.EvalArgs(num_steps=5000))
context.run(trainer)
INFO:absl:Generating ephemeral wheel package for '/tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py' (including modules: ['taxi_transform', 'taxi_constants', 'taxi_trainer']).
INFO:absl:User module package has hash fingerprint version ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '/tmp/tmpzxd5b1yc/_tfx_generated_setup.py', 'bdist_wheel', '--bdist-dir', '/tmp/tmpbg9ly6tr', '--dist-dir', '/tmp/tmpx43qh690']
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  setuptools.SetuptoolsDeprecationWarning,
INFO:absl:Successfully built user code wheel distribution at '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl'; target user module is 'taxi_trainer'.
INFO:absl:Full user module path is 'taxi_trainer@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl'
INFO:absl:Running driver for Trainer
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for Trainer
INFO:absl:Train on the 'train' split when train_args.splits is not set.
INFO:absl:Evaluate on the 'eval' split when eval_args.splits is not set.
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
INFO:absl:udf_utils.get_fn {'train_args': '{\n  "num_steps": 10000\n}', 'eval_args': '{\n  "num_steps": 5000\n}', 'module_file': None, 'run_fn': None, 'trainer_fn': None, 'custom_config': 'null', 'module_path': 'taxi_trainer@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl'} 'run_fn'
INFO:absl:Installing '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl' to a temporary directory.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmp1osq6e1x', '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl']
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying taxi_transform.py -> build/lib
copying taxi_constants.py -> build/lib
copying taxi_trainer.py -> build/lib
running install
running install_lib
running install_egg_info
running egg_info
creating tfx_user_code_Trainer.egg-info
writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'
writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'
Copying tfx_user_code_Trainer.egg-info to /tmp/tmpbg9ly6tr/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3.7.egg-info
running install_scripts
Processing /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl
INFO:absl:Successfully installed '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl'.
INFO:absl:Training model.
INFO:absl:Feature company has a shape . Setting to DenseTensor.
INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor.
INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor.
INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor.
INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor.
INFO:absl:Feature fare has a shape . Setting to DenseTensor.
INFO:absl:Feature payment_type has a shape . Setting to DenseTensor.
INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor.
INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor.
INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor.
INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor.
INFO:absl:Feature tips has a shape . Setting to DenseTensor.
INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor.
INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor.
INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor.
INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor.
INFO:absl:Feature trip_start_month has a shape . Setting to DenseTensor.
Installing collected packages: tfx-user-code-Trainer
Successfully installed tfx-user-code-Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af
INFO:absl:Model: "model"
INFO:absl:__________________________________________________________________________________________________
INFO:absl: Layer (type)                   Output Shape         Param #     Connected to                     
INFO:absl:==================================================================================================
INFO:absl: company (InputLayer)           [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: dropoff_census_tract (InputLay  [(None,)]           0           []                               
INFO:absl: er)                                                                                              
INFO:absl:                                                                                                  
INFO:absl: dropoff_community_area (InputL  [(None,)]           0           []                               
INFO:absl: ayer)                                                                                            
INFO:absl:                                                                                                  
INFO:absl: dropoff_latitude (InputLayer)  [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: dropoff_longitude (InputLayer)  [(None,)]           0           []                               
INFO:absl:                                                                                                  
INFO:absl: fare (InputLayer)              [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: payment_type (InputLayer)      [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: pickup_census_tract (InputLaye  [(None,)]           0           []                               
INFO:absl: r)                                                                                               
INFO:absl:                                                                                                  
INFO:absl: pickup_community_area (InputLa  [(None,)]           0           []                               
INFO:absl: yer)                                                                                             
INFO:absl:                                                                                                  
INFO:absl: pickup_latitude (InputLayer)   [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: pickup_longitude (InputLayer)  [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: trip_miles (InputLayer)        [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: trip_seconds (InputLayer)      [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: trip_start_day (InputLayer)    [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: trip_start_hour (InputLayer)   [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: trip_start_month (InputLayer)  [(None,)]            0           []                               
INFO:absl:                                                                                                  
INFO:absl: dense_features (DenseFeatures)  (None, 3)           0           ['company[0][0]',                
INFO:absl:                                                                  'dropoff_census_tract[0][0]',   
INFO:absl:                                                                  'dropoff_community_area[0][0]', 
INFO:absl:                                                                  'dropoff_latitude[0][0]',       
INFO:absl:                                                                  'dropoff_longitude[0][0]',      
INFO:absl:                                                                  'fare[0][0]',                   
INFO:absl:                                                                  'payment_type[0][0]',           
INFO:absl:                                                                  'pickup_census_tract[0][0]',    
INFO:absl:                                                                  'pickup_community_area[0][0]',  
INFO:absl:                                                                  'pickup_latitude[0][0]',        
INFO:absl:                                                                  'pickup_longitude[0][0]',       
INFO:absl:                                                                  'trip_miles[0][0]',             
INFO:absl:                                                                  'trip_seconds[0][0]',           
INFO:absl:                                                                  'trip_start_day[0][0]',         
INFO:absl:                                                                  'trip_start_hour[0][0]',        
INFO:absl:                                                                  'trip_start_month[0][0]']       
INFO:absl:                                                                                                  
INFO:absl: dense (Dense)                  (None, 100)          400         ['dense_features[0][0]']         
INFO:absl:                                                                                                  
INFO:absl: dense_1 (Dense)                (None, 70)           7070        ['dense[0][0]']                  
INFO:absl:                                                                                                  
INFO:absl: dense_2 (Dense)                (None, 48)           3408        ['dense_1[0][0]']                
INFO:absl:                                                                                                  
INFO:absl: dense_3 (Dense)                (None, 34)           1666        ['dense_2[0][0]']                
INFO:absl:                                                                                                  
INFO:absl: dense_features_1 (DenseFeature  (None, 2127)        0           ['company[0][0]',                
INFO:absl: s)                                                               'dropoff_census_tract[0][0]',   
INFO:absl:                                                                  'dropoff_community_area[0][0]', 
INFO:absl:                                                                  'dropoff_latitude[0][0]',       
INFO:absl:                                                                  'dropoff_longitude[0][0]',      
INFO:absl:                                                                  'fare[0][0]',                   
INFO:absl:                                                                  'payment_type[0][0]',           
INFO:absl:                                                                  'pickup_census_tract[0][0]',    
INFO:absl:                                                                  'pickup_community_area[0][0]',  
INFO:absl:                                                                  'pickup_latitude[0][0]',        
INFO:absl:                                                                  'pickup_longitude[0][0]',       
INFO:absl:                                                                  'trip_miles[0][0]',             
INFO:absl:                                                                  'trip_seconds[0][0]',           
INFO:absl:                                                                  'trip_start_day[0][0]',         
INFO:absl:                                                                  'trip_start_hour[0][0]',        
INFO:absl:                                                                  'trip_start_month[0][0]']       
INFO:absl:                                                                                                  
INFO:absl: concatenate (Concatenate)      (None, 2161)         0           ['dense_3[0][0]',                
INFO:absl:                                                                  'dense_features_1[0][0]']       
INFO:absl:                                                                                                  
INFO:absl: dense_4 (Dense)                (None, 1)            2162        ['concatenate[0][0]']            
INFO:absl:                                                                                                  
INFO:absl:==================================================================================================
INFO:absl:Total params: 14,706
INFO:absl:Trainable params: 14,706
INFO:absl:Non-trainable params: 0
INFO:absl:__________________________________________________________________________________________________
10000/10000 [==============================] - 100s 10ms/step - loss: 0.2372 - binary_accuracy: 0.8605 - val_loss: 0.2222 - val_binary_accuracy: 0.8709
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:struct2tensor is not available.
WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.Socket(zmq.PUSH) at 0x7f88b5e27910>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
INFO:absl:serve_transformed_features = {'pickup_latitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:9' shape=(None,) dtype=int64>, 'trip_start_hour': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:15' shape=(None,) dtype=int64>, 'fare': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:5' shape=(None,) dtype=float32>, 'trip_miles': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:12' shape=(None,) dtype=float32>, 'trip_start_day': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:14' shape=(None,) dtype=int64>, 'dropoff_latitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:3' shape=(None,) dtype=int64>, 'trip_start_month': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:16' shape=(None,) dtype=int64>, 'dropoff_community_area': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:2' shape=(None,) dtype=int64>, 'dropoff_longitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:4' shape=(None,) dtype=int64>, 'payment_type': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:6' shape=(None,) dtype=int64>, 'pickup_longitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:10' shape=(None,) dtype=int64>, 'pickup_community_area': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:8' shape=(None,) dtype=int64>, 'company': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:0' shape=(None,) dtype=int64>, 'pickup_census_tract': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:7' shape=(None,) dtype=int64>, 'dropoff_census_tract': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:1' shape=(None,) dtype=int64>, 'trip_seconds': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:13' shape=(None,) dtype=float32>}
INFO:absl:eval_transformed_features = {'pickup_latitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:9' shape=(None,) dtype=int64>, 'trip_start_hour': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:15' shape=(None,) dtype=int64>, 'fare': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:5' shape=(None,) dtype=float32>, 'trip_miles': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:12' shape=(None,) dtype=float32>, 'trip_start_day': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:14' shape=(None,) dtype=int64>, 'dropoff_latitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:3' shape=(None,) dtype=int64>, 'trip_start_month': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:16' shape=(None,) dtype=int64>, 'dropoff_community_area': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:2' shape=(None,) dtype=int64>, 'dropoff_longitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:4' shape=(None,) dtype=int64>, 'payment_type': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:6' shape=(None,) dtype=int64>, 'pickup_longitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:10' shape=(None,) dtype=int64>, 'pickup_community_area': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:8' shape=(None,) dtype=int64>, 'company': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:0' shape=(None,) dtype=int64>, 'pickup_census_tract': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:7' shape=(None,) dtype=int64>, 'tips': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:11' shape=(None,) dtype=int64>, 'dropoff_census_tract': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:1' shape=(None,) dtype=int64>, 'trip_seconds': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:13' shape=(None,) dtype=float32>}
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model/6/Format-Serving/assets
INFO:absl:Training complete. Model written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model/6/Format-Serving. ModelRun written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model_run/6
INFO:absl:Running publisher for Trainer
INFO:absl:MetadataStore with DB connection initialized

Analyze Training with TensorBoard

Take a peek at the trainer artifact. It points to a directory containing the model subdirectories.

model_artifact_dir = trainer.outputs['model'].get()[0].uri
pp.pprint(os.listdir(model_artifact_dir))
model_dir = os.path.join(model_artifact_dir, 'Format-Serving')
pp.pprint(os.listdir(model_dir))
['Format-Serving']
['variables', 'assets', 'keras_metadata.pb', 'saved_model.pb']
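
As a quick sanity check, here is a minimal sketch (using the model_dir computed above) that loads the exported SavedModel and lists its serving signatures, which should match the signatures dictionary defined in run_fn:

# Minimal sketch: load the exported SavedModel and list its signatures.
loaded_model = tf.saved_model.load(model_dir)
pp.pprint(list(loaded_model.signatures.keys()))
# Expected to include 'serving_default' and 'transform_features'.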

Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.

model_run_artifact_dir = trainer.outputs['model_run'].get()[0].uri

%load_ext tensorboard
%tensorboard --logdir {model_run_artifact_dir}

Evaluator

The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library. Evaluator can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook we only train one model, so the Evaluator will automatically label the model as "good".

Evaluator will take as input the data from ExampleGen, the trained model from Trainer, and the slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g., how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:

eval_config = tfma.EvalConfig(
    model_specs=[
        # This assumes a serving model with signature 'serving_default'. If
        # using estimator based EvalSavedModel, add signature_name: 'eval' and
        # remove the label_key.
        tfma.ModelSpec(
            signature_name='serving_default',
            label_key='tips',
            preprocessing_function_names=['transform_features'],
            )
        ],
    metrics_specs=[
        tfma.MetricsSpec(
            # The metrics added here are in addition to those saved with the
            # model (assuming either a keras model or EvalSavedModel is used).
            # Any metrics added into the saved model (for example using
            # model.compile(..., metrics=[...]), etc) will be computed
            # automatically.
            # To add validation thresholds for metrics saved with the model,
            # add them keyed by metric name to the thresholds map.
            metrics=[
                tfma.MetricConfig(class_name='ExampleCount'),
                tfma.MetricConfig(class_name='BinaryAccuracy',
                  threshold=tfma.MetricThreshold(
                      value_threshold=tfma.GenericValueThreshold(
                          lower_bound={'value': 0.5}),
                      # Change threshold will be ignored if there is no
                      # baseline model resolved from MLMD (first run).
                      change_threshold=tfma.GenericChangeThreshold(
                          direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                          absolute={'value': -1e-10})))
            ]
        )
    ],
    slicing_specs=[
        # An empty slice spec means the overall slice, i.e. the whole dataset.
        tfma.SlicingSpec(),
        # Data can be sliced along a feature column. In this case, data is
        # sliced along feature column trip_start_hour.
        tfma.SlicingSpec(feature_keys=['trip_start_hour'])
    ])

Next, we give this configuration to Evaluator and run it.

# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.

# The model resolver is only required if performing model validation in addition
# to evaluation. In this case we validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make our
# candidate the first blessed model.
model_resolver = tfx.dsl.Resolver(
      strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
      model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
      model_blessing=tfx.dsl.Channel(
          type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
              'latest_blessed_model_resolver')
context.run(model_resolver)

evaluator = tfx.components.Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    baseline_model=model_resolver.outputs['model'],
    eval_config=eval_config)
context.run(evaluator)
INFO:absl:Running driver for latest_blessed_model_resolver
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running publisher for latest_blessed_model_resolver
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running driver for Evaluator
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for Evaluator
INFO:absl:Nonempty beam arg extra_packages already includes dependency
INFO:absl:udf_utils.get_fn {'eval_config': '{\n  "metrics_specs": [\n    {\n      "metrics": [\n        {\n          "class_name": "ExampleCount"\n        },\n        {\n          "class_name": "BinaryAccuracy",\n          "threshold": {\n            "change_threshold": {\n              "absolute": -1e-10,\n              "direction": "HIGHER_IS_BETTER"\n            },\n            "value_threshold": {\n              "lower_bound": 0.5\n            }\n          }\n        }\n      ]\n    }\n  ],\n  "model_specs": [\n    {\n      "label_key": "tips",\n      "preprocessing_function_names": [\n        "transform_features"\n      ],\n      "signature_name": "serving_default"\n    }\n  ],\n  "slicing_specs": [\n    {},\n    {\n      "feature_keys": [\n        "trip_start_hour"\n      ]\n    }\n  ]\n}', 'feature_slicing_spec': None, 'fairness_indicator_thresholds': 'null', 'example_splits': 'null', 'module_file': None, 'module_path': None} 'custom_eval_shared_model'
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
  signature_name: "serving_default"
  label_key: "tips"
  preprocessing_function_names: "transform_features"
}
slicing_specs {
}
slicing_specs {
  feature_keys: "trip_start_hour"
}
metrics_specs {
  metrics {
    class_name: "ExampleCount"
  }
  metrics {
    class_name: "BinaryAccuracy"
    threshold {
      value_threshold {
        lower_bound {
          value: 0.5
        }
      }
    }
  }
}

INFO:absl:Using /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model/6/Format-Serving as  model.
WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.

Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f87bc0f5e50> and <keras.engine.input_layer.InputLayer object at 0x7f87bc0f5b50>).
INFO:absl:The 'example_splits' parameter is not set, using 'eval' split.
INFO:absl:Evaluating model.
INFO:absl:udf_utils.get_fn {'eval_config': '{\n  "metrics_specs": [\n    {\n      "metrics": [\n        {\n          "class_name": "ExampleCount"\n        },\n        {\n          "class_name": "BinaryAccuracy",\n          "threshold": {\n            "change_threshold": {\n              "absolute": -1e-10,\n              "direction": "HIGHER_IS_BETTER"\n            },\n            "value_threshold": {\n              "lower_bound": 0.5\n            }\n          }\n        }\n      ]\n    }\n  ],\n  "model_specs": [\n    {\n      "label_key": "tips",\n      "preprocessing_function_names": [\n        "transform_features"\n      ],\n      "signature_name": "serving_default"\n    }\n  ],\n  "slicing_specs": [\n    {},\n    {\n      "feature_keys": [\n        "trip_start_hour"\n      ]\n    }\n  ]\n}', 'feature_slicing_spec': None, 'fairness_indicator_thresholds': 'null', 'example_splits': 'null', 'module_file': None, 'module_path': None} 'custom_extractors'
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
  signature_name: "serving_default"
  label_key: "tips"
  preprocessing_function_names: "transform_features"
}
slicing_specs {
}
slicing_specs {
  feature_keys: "trip_start_hour"
}
metrics_specs {
  metrics {
    class_name: "ExampleCount"
  }
  metrics {
    class_name: "BinaryAccuracy"
    threshold {
      value_threshold {
        lower_bound {
          value: 0.5
        }
      }
    }
  }
  model_names: ""
}

WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.

Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f87b0102150> and <keras.engine.input_layer.InputLayer object at 0x7f875454e810>).
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
INFO:absl:Evaluation complete. Results written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Evaluator/evaluation/8.
INFO:absl:Checking validation results.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:107: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`
INFO:absl:Blessing result True written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Evaluator/blessing/8.
INFO:absl:Running publisher for Evaluator
INFO:absl:MetadataStore with DB connection initialized

Now let's examine the output artifacts of Evaluator.

evaluator.outputs
{'evaluation': Channel(
     type_name: ModelEvaluation
     artifacts: [Artifact(artifact: id: 15
 type_id: 29
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Evaluator/evaluation/8"
 custom_properties {
   key: "name"
   value {
     string_value: "evaluation"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Evaluator"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 29
 name: "ModelEvaluation"
 )]
     additional_properties: {}
     additional_custom_properties: {}
 ),
 'blessing': Channel(
     type_name: ModelBlessing
     artifacts: [Artifact(artifact: id: 16
 type_id: 30
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Evaluator/blessing/8"
 custom_properties {
   key: "blessed"
   value {
     int_value: 1
   }
 }
 custom_properties {
   key: "current_model"
   value {
     string_value: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model/6"
   }
 }
 custom_properties {
   key: "current_model_id"
   value {
     int_value: 13
   }
 }
 custom_properties {
   key: "name"
   value {
     string_value: "blessing"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Evaluator"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 30
 name: "ModelBlessing"
 )]
     additional_properties: {}
     additional_custom_properties: {}
 )}

Using the evaluation output we can show the default visualization of global metrics on the entire evaluation set.

context.show(evaluator.outputs['evaluation'])

To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.

import tensorflow_model_analysis as tfma

# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)

# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
    tfma_result, slicing_column='trip_start_hour')
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'trip_start_hour:19',…

This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of on the whole evaluation set.
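
If you prefer programmatic access over the rendered widget, the loaded EvalResult exposes the same sliced metrics directly. Below is a minimal sketch; the nesting of each metrics dictionary (output name, then sub-key, then metric name) follows the TFMA result format.

# Each entry pairs a slice key, e.g. (('trip_start_hour', 19),),
# with a nested dict of metric values computed for that slice.
for slice_key, metric_values in tfma_result.slicing_metrics[:3]:
  print(slice_key)
  pp.pprint(metric_values)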

TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see the TFMA tutorial.
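
For example, once you have evaluation outputs from several pipeline runs, you can load them together and plot metrics over time. A minimal sketch, assuming you collect the evaluation artifact URIs of earlier runs into output_paths:

# Hypothetical list of evaluation output URIs from successive runs;
# here it contains only the single result produced above.
output_paths = [PATH_TO_RESULT]

eval_results = tfma.load_eval_results(
    output_paths, mode=tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results)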

Since we added thresholds to our config, validation output is also available. The presence of a blessing artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.

blessing_uri = evaluator.outputs['blessing'].get()[0].uri
!ls -l {blessing_uri}
total 0
-rw-rw-r-- 1 kbuilder kbuilder 0 Dec 21 10:13 BLESSED
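
You can also test for the marker file programmatically rather than shelling out; a minimal sketch:

import os

# Evaluator writes an empty BLESSED (or NOT_BLESSED) marker file
# into the blessing artifact directory.
print(tf.io.gfile.exists(os.path.join(blessing_uri, 'BLESSED')))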

We can also verify the success by loading the validation result record:

PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
validation_ok: true
validation_details {
  slicing_details {
    slicing_spec {
    }
    num_matching_slices: 25
  }
}
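
The same record can also drive an automated gate; a minimal sketch that fails loudly when validation did not pass:

validation_result = tfma.load_validation_result(PATH_TO_RESULT)
# validation_ok is True only if every configured threshold was met.
assert validation_result.validation_ok, 'Model failed validation thresholds!'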

Pusher

The Pusher component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to _serving_model_dir.

pusher = tfx.components.Pusher(
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    push_destination=tfx.proto.PushDestination(
        filesystem=tfx.proto.PushDestination.Filesystem(
            base_directory=_serving_model_dir)))
context.run(pusher)
INFO:absl:Running driver for Pusher
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Running executor for Pusher
INFO:absl:Model version: 1640081600
INFO:absl:Model written to serving path /tmp/tmpkvhhk5j5/serving_model/taxi_simple/1640081600.
INFO:absl:Model pushed to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Pusher/pushed_model/9.
INFO:absl:Running publisher for Pusher
INFO:absl:MetadataStore with DB connection initialized

Let's examine the output artifacts of Pusher.

pusher.outputs
{'pushed_model': Channel(
     type_name: PushedModel
     artifacts: [Artifact(artifact: id: 17
 type_id: 32
 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Pusher/pushed_model/9"
 custom_properties {
   key: "name"
   value {
     string_value: "pushed_model"
   }
 }
 custom_properties {
   key: "producer_component"
   value {
     string_value: "Pusher"
   }
 }
 custom_properties {
   key: "pushed"
   value {
     int_value: 1
   }
 }
 custom_properties {
   key: "pushed_destination"
   value {
     string_value: "/tmp/tmpkvhhk5j5/serving_model/taxi_simple/1640081600"
   }
 }
 custom_properties {
   key: "pushed_version"
   value {
     string_value: "1640081600"
   }
 }
 custom_properties {
   key: "state"
   value {
     string_value: "published"
   }
 }
 custom_properties {
   key: "tfx_version"
   value {
     string_value: "1.5.0"
   }
 }
 state: LIVE
 , artifact_type: id: 32
 name: "PushedModel"
 )]
     additional_properties: {}
     additional_custom_properties: {}
 )}

In particular, Pusher exports your model in the SavedModel format, which looks like this:

push_uri = pusher.outputs['pushed_model'].get()[0].uri
model = tf.saved_model.load(push_uri)

for item in model.signatures.items():
  pp.pprint(item)
('serving_default',
 <ConcreteFunction signature_wrapper(*, examples) at 0x7F82F31FDE50>)
('transform_features',
 <ConcreteFunction signature_wrapper(*, examples) at 0x7F82F31AC410>)
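
You can also inspect the exported signatures, including their input and output tensor specs, with the saved_model_cli tool that ships with TensorFlow:

!saved_model_cli show --dir {push_uri} --tag_set serve --signature_def serving_default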

We're finished with our tour of built-in TFX components!