Model Analysis using TFX Pipeline and TensorFlow Model Analysis

In this notebook-based tutorial, we will create and run a TFX pipeline which creates a simple classification model and analyzes its performance across multiple runs. This notebook is based on the TFX pipeline we built in the Simple TFX Pipeline Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook.

When you tweak your model or train it with a new dataset, you need to check whether your model has improved or gotten worse. Just checking top-level metrics like accuracy might not be enough. Every trained model should be evaluated before it is pushed to production.

We will add an Evaluator component to the pipeline created in the previous tutorial. The Evaluator component performs a deep analysis of your models and compares the new model against a baseline to determine whether it is "good enough". It is implemented using the TensorFlow Model Analysis library.

Please see Understanding TFX Pipelines to learn more about various concepts in TFX.

Setup

The setup process is the same as in the previous tutorial.

We first need to install the TFX Python package and download the dataset which we will use for our model.

Upgrade pip

To avoid upgrading pip in a system when running locally, check to make sure that we are running in Colab. Local systems can of course be upgraded separately.

try:
  import colab
  # Running in Colab: upgrade pip inside the hosted runtime.
  !pip install --upgrade pip
except ImportError:
  # Not running in Colab; leave the local environment untouched.
  pass

Install TFX

pip install -U tfx

Did you restart the runtime?

If you are using Google Colab, the first time that you run the cell above, you must restart the runtime by clicking the "RESTART RUNTIME" button above or using the "Runtime > Restart runtime ..." menu. This is because of the way that Colab loads packages.

Check the TensorFlow and TFX versions.

import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
TensorFlow version: 2.4.1
WARNING:absl:RuntimeParameter is only supported on Cloud-based DAG runner currently.
TFX version: 0.30.0

Set up variables

There are some variables used to define a pipeline. You can customize these variables as you want. By default, all output from the pipeline will be generated under the current directory.

import os

PIPELINE_NAME = "penguin-tfma"

# Output directory to store artifacts generated from the pipeline.
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)

from absl import logging
logging.set_verbosity(logging.INFO)  # Set default logging level.

Prepare example data

We will use the same Palmer Penguins dataset.

There are four numeric features in this dataset which were already normalized to the range [0,1]. We will build a classification model which predicts the species of penguins.

Because TFX ExampleGen reads inputs from a directory, we need to create a directory and copy the dataset into it.

import urllib.request
import tempfile

DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')  # Create a temporary directory.
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_url, _data_filepath)
('/tmp/tfx-data99f0r4mn/data.csv', <http.client.HTTPMessage at 0x7f7b42f1dd10>)
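
Before moving on, you can optionally peek at the first few lines of the downloaded CSV to confirm that the download worked. This quick sanity check is not part of the pipeline; it just prints the header row and two data rows:

# Optional sanity check: print the CSV header and the first two data rows.
with open(_data_filepath) as f:
  for _ in range(3):
    print(f.readline().rstrip())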

Create a pipeline

We will add an Evaluator component to the pipeline we created in the Simple TFX Pipeline Tutorial.

An Evaluator component requires input data from an ExampleGen component, a model from a Trainer component, and a tfma.EvalConfig object. Optionally, we can supply a baseline model whose metrics will be compared with those of the newly trained model.

An Evaluator creates two kinds of output artifacts, ModelEvaluation and ModelBlessing. ModelEvaluation contains the detailed evaluation result, which can be investigated and visualized further with the TFMA library. ModelBlessing contains a boolean result indicating whether the model met the given criteria, and it can be used as a signal in later components such as a Pusher.
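
For example, after a pipeline run, a downstream step could check the verdict directly. Below is a minimal sketch assuming the standard TFX Evaluator convention of writing an empty BLESSED or NOT_BLESSED marker file into the ModelBlessing artifact directory; blessing_uri is a hypothetical path to that artifact, not something produced by the code in this tutorial:

import os
import tensorflow as tf

def is_blessed(blessing_uri: str) -> bool:
  # The Evaluator records its verdict as an empty marker file named
  # BLESSED or NOT_BLESSED inside the blessing artifact directory.
  return tf.io.gfile.exists(os.path.join(blessing_uri, 'BLESSED'))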

Write model training code

We will use the same model code as in the Simple TFX Pipeline Tutorial.

_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}

# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple

from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils

from tfx.components.trainer.executor import TrainerFnArgs
from tfx.components.trainer.fn_args_utils import DataAccessor
from tfx_bsl.tfxio import dataset_options
from tensorflow_metadata.proto.v0 import schema_pb2

_FEATURE_KEYS = [
    'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'

_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10

# Since we're not generating or creating a schema, we will instead create
# a feature spec.  Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
    **{
        feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
           for feature in _FEATURE_KEYS
       },
    _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}


def _input_fn(file_pattern: List[str],
              data_accessor: DataAccessor,
              schema: schema_pb2.Schema,
              batch_size: int = 200) -> tf.data.Dataset:
  """Generates features and label for training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    schema: schema of the input data.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch

  Returns:
    A dataset that contains (features, indices) tuple where features is a
      dictionary of Tensors, and indices is a single Tensor of label indices.
  """
  return data_accessor.tf_dataset_factory(
      file_pattern,
      dataset_options.TensorFlowDatasetOptions(
          batch_size=batch_size, label_key=_LABEL_KEY),
      schema=schema).repeat()


def _build_keras_model() -> tf.keras.Model:
  """Creates a DNN Keras model for classifying penguin data.

  Returns:
    A Keras Model.
  """
  # The model below is built with Functional API, please refer to
  # https://www.tensorflow.org/guide/keras/overview for all API options.
  inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
  d = keras.layers.concatenate(inputs)
  for _ in range(2):
    d = keras.layers.Dense(8, activation='relu')(d)
  outputs = keras.layers.Dense(3)(d)

  model = keras.Model(inputs=inputs, outputs=outputs)
  model.compile(
      optimizer=keras.optimizers.Adam(1e-2),
      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
      metrics=[keras.metrics.SparseCategoricalAccuracy()])

  model.summary(print_fn=logging.info)
  return model


# TFX Trainer will call this function.
def run_fn(fn_args: TrainerFnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """

  # This schema is usually either an output of SchemaGen or a manually-curated
  # version provided by pipeline author. A schema can also be derived from the TFT
  # graph if a Transform component is used. In the case when either is missing,
  # `schema_from_feature_spec` could be used to generate schema from very simple
  # feature_spec, but the schema returned would be very primitive.
  schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)

  train_dataset = _input_fn(
      fn_args.train_files,
      fn_args.data_accessor,
      schema,
      batch_size=_TRAIN_BATCH_SIZE)
  eval_dataset = _input_fn(
      fn_args.eval_files,
      fn_args.data_accessor,
      schema,
      batch_size=_EVAL_BATCH_SIZE)

  model = _build_keras_model()
  model.fit(
      train_dataset,
      steps_per_epoch=fn_args.train_steps,
      validation_data=eval_dataset,
      validation_steps=fn_args.eval_steps)

  # The result of the training should be saved in `fn_args.serving_model_dir`
  # directory.
  model.save(fn_args.serving_model_dir, save_format='tf')
Writing penguin_trainer.py

Write a pipeline definition

We will define a function to create a TFX pipeline. In addition to the Evaluator component mentioned above, we will add one more node called Resolver. To check whether a new model is getting better than the previous one, we need to compare it against a previously published model, called the baseline. ML Metadata (MLMD) tracks all previous artifacts of the pipeline, and Resolver can find the latest blessed model (a model that passed the Evaluator successfully) from MLMD using a strategy class called LatestBlessedModelStrategy.

import tensorflow_model_analysis as tfma

def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
                     module_file: str, serving_model_dir: str,
                     metadata_path: str) -> tfx.dsl.Pipeline:
  """Creates a three component penguin pipeline with TFX."""
  # Brings data into the pipeline.
  example_gen = tfx.components.CsvExampleGen(input_base=data_root)

  # Uses user-provided Python function that trains a model.
  trainer = tfx.components.Trainer(
      module_file=module_file,
      examples=example_gen.outputs['examples'],
      train_args=tfx.proto.TrainArgs(num_steps=100),
      eval_args=tfx.proto.EvalArgs(num_steps=5))

  # NEW: Get the latest blessed model for Evaluator.
  model_resolver = tfx.dsl.Resolver(
      strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
      model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
      model_blessing=tfx.dsl.Channel(
          type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
              'latest_blessed_model_resolver')

  # NEW: Uses TFMA to compute evaluation statistics over features of a model and
  #   perform quality validation of a candidate model (compared to a baseline).

  eval_config = tfma.EvalConfig(
      model_specs=[tfma.ModelSpec(label_key='species')],
      slicing_specs=[
          # An empty slice spec means the overall slice, i.e. the whole dataset.
          tfma.SlicingSpec(),
          # Calculate metrics for each penguin species.
          tfma.SlicingSpec(feature_keys=['species']),
          ],
      metrics_specs=[
          tfma.MetricsSpec(per_slice_thresholds={
              'sparse_categorical_accuracy':
                  tfma.config.PerSliceMetricThresholds(thresholds=[
                      tfma.PerSliceMetricThreshold(
                          slicing_specs=[tfma.SlicingSpec()],
                          threshold=tfma.MetricThreshold(
                              value_threshold=tfma.GenericValueThreshold(
                                   lower_bound={'value': 0.6}),
                              # Change threshold will be ignored if there is no
                              # baseline model resolved from MLMD (first run).
                              change_threshold=tfma.GenericChangeThreshold(
                                  direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                                  absolute={'value': -1e-10}))
                       )]),
          })],
      )
  evaluator = tfx.components.Evaluator(
      examples=example_gen.outputs['examples'],
      model=trainer.outputs['model'],
      baseline_model=model_resolver.outputs['model'],
      eval_config=eval_config)

  # Checks whether the model passed the validation steps and pushes the model
  # to a file destination if check passed.
  pusher = tfx.components.Pusher(
      model=trainer.outputs['model'],
      model_blessing=evaluator.outputs['blessing'], # Pass an evaluation result.
      push_destination=tfx.proto.PushDestination(
          filesystem=tfx.proto.PushDestination.Filesystem(
              base_directory=serving_model_dir)))

  components = [
      example_gen,
      trainer,

      # Following two components were added to the pipeline.
      model_resolver,
      evaluator,

      pusher,
  ]

  return tfx.dsl.Pipeline(
      pipeline_name=pipeline_name,
      pipeline_root=pipeline_root,
      metadata_connection_config=tfx.orchestration.metadata
      .sqlite_metadata_connection_config(metadata_path),
      components=components)

We need to supply the following information to the Evaluator via eval_config:

  • Additional metrics to configure (if you want more metrics than those defined in the model).
  • Slices to configure.
  • Model validation thresholds that determine whether validation is included.

Since SparseCategoricalAccuracy was already included in the model.compile() call, it will be included in the analysis automatically, so we do not add any additional metrics here. SparseCategoricalAccuracy will also be used to decide whether the model is good enough.
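
If you did want metrics beyond those compiled into the model, you could declare them explicitly in a MetricsSpec. As a sketch (for illustration only; not added to the pipeline above), this adds ExampleCount, a standard TFMA metric:

# Hypothetical spec declaring a metric that is not defined in the model itself.
extra_metrics_spec = tfma.MetricsSpec(metrics=[
    tfma.MetricConfig(class_name='ExampleCount'),
])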

We compute the metrics for the whole dataset and for each penguin species. SlicingSpec specifies how we aggregate the declared metrics.
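
Besides slicing by a feature key, TFMA can also slice on specific feature values. A hedged example, restricting metrics to a single species value (the value '0' here is an assumed example, not used in the pipeline above):

# Compute metrics only over examples whose 'species' feature equals '0'.
single_species_slice = tfma.SlicingSpec(feature_values={'species': '0'})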

There are two thresholds that a new model has to pass: one is an absolute threshold of 0.6 and the other is a relative threshold requiring it to score higher than the baseline model. When you run the pipeline for the first time, the change_threshold will be ignored and only the value_threshold will be checked. If you run the pipeline more than once, the Resolver will find a model from the previous run, and it will be used as the baseline model for the comparison.
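
To make the threshold logic concrete, here is a plain-Python illustration (not TFX API) of how the two checks combine, using hypothetical accuracy numbers:

# Hypothetical metric values for the candidate model and the blessed baseline.
candidate_accuracy = 0.92
baseline_accuracy = 0.90

value_ok = candidate_accuracy >= 0.6                           # value_threshold lower bound
change_ok = (candidate_accuracy - baseline_accuracy) > -1e-10  # change_threshold, HIGHER_IS_BETTER
blessed = value_ok and change_ok  # both checks must pass for the model to be blessed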

Please see the Evaluator component guide for more information.

Run the pipeline

We will use LocalDagRunner as in the previous tutorial.

tfx.orchestration.LocalDagRunner().run(
  _create_pipeline(
      pipeline_name=PIPELINE_NAME,
      pipeline_root=PIPELINE_ROOT,
      data_root=DATA_ROOT,
      module_file=_trainer_module_file,
      serving_model_dir=SERVING_MODEL_DIR,
      metadata_path=METADATA_PATH))
INFO:absl:Generating ephemeral wheel package for '/tmpfs/src/temp/docs/tutorials/tfx/penguin_trainer.py' (including modules: ['penguin_trainer']).
INFO:absl:User module package has hash fingerprint version 1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '/tmp/tmpdpv7puz5/_tfx_generated_setup.py', 'bdist_wheel', '--bdist-dir', '/tmp/tmp1g7_otwo', '--dist-dir', '/tmp/tmpeg1szl61']
INFO:absl:Successfully built user code wheel distribution at 'pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl'; target user module is 'penguin_trainer'.
INFO:absl:Full user module path is 'penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl'
INFO:absl:Running pipeline:
 pipeline_info {
  id: "penguin-tfma"
}
nodes {
  pipeline_node {
    node_info {
      type {
        name: "tfx.components.example_gen.csv_example_gen.component.CsvExampleGen"
      }
      id: "CsvExampleGen"
    }
    contexts {
      contexts {
        type {
          name: "pipeline"
        }
        name {
          field_value {
            string_value: "penguin-tfma"
          }
        }
      }
      contexts {
        type {
          name: "pipeline_run"
        }
        name {
          field_value {
            string_value: "2021-06-02T09:14:35.912739"
          }
        }
      }
      contexts {
        type {
          name: "node"
        }
        name {
          field_value {
            string_value: "penguin-tfma.CsvExampleGen"
          }
        }
      }
    }
    outputs {
      outputs {
        key: "examples"
        value {
          artifact_spec {
            type {
              name: "Examples"
              properties {
                key: "span"
                value: INT
              }
              properties {
                key: "split_names"
                value: STRING
              }
              properties {
                key: "version"
                value: INT
              }
            }
          }
        }
      }
    }
    parameters {
      parameters {
        key: "input_base"
        value {
          field_value {
            string_value: "/tmp/tfx-data99f0r4mn"
          }
        }
      }
      parameters {
        key: "input_config"
        value {
          field_value {
            string_value: "{\n  \"splits\": [\n    {\n      \"name\": \"single_split\",\n      \"pattern\": \"*\"\n    }\n  ]\n}"
          }
        }
      }
      parameters {
        key: "output_config"
        value {
          field_value {
            string_value: "{\n  \"split_config\": {\n    \"splits\": [\n      {\n        \"hash_buckets\": 2,\n        \"name\": \"train\"\n      },\n      {\n        \"hash_buckets\": 1,\n        \"name\": \"eval\"\n      }\n    ]\n  }\n}"
          }
        }
      }
      parameters {
        key: "output_data_format"
        value {
          field_value {
            int_value: 6
          }
        }
      }
    }
    downstream_nodes: "Evaluator"
    downstream_nodes: "Trainer"
    execution_options {
      caching_options {
      }
    }
  }
}
nodes {
  pipeline_node {
    node_info {
      type {
        name: "tfx.dsl.components.common.resolver.Resolver"
      }
      id: "latest_blessed_model_resolver"
    }
    contexts {
      contexts {
        type {
          name: "pipeline"
        }
        name {
          field_value {
            string_value: "penguin-tfma"
          }
        }
      }
      contexts {
        type {
          name: "pipeline_run"
        }
        name {
          field_value {
            string_value: "2021-06-02T09:14:35.912739"
          }
        }
      }
      contexts {
        type {
          name: "node"
        }
        name {
          field_value {
            string_value: "penguin-tfma.latest_blessed_model_resolver"
          }
        }
      }
    }
    inputs {
      inputs {
        key: "model"
        value {
          channels {
            context_queries {
              type {
                name: "pipeline"
              }
              name {
                field_value {
                  string_value: "penguin-tfma"
                }
              }
            }
            artifact_query {
              type {
                name: "Model"
              }
            }
          }
        }
      }
      inputs {
        key: "model_blessing"
        value {
          channels {
            context_queries {
              type {
                name: "pipeline"
              }
              name {
                field_value {
                  string_value: "penguin-tfma"
                }
              }
            }
            artifact_query {
              type {
                name: "ModelBlessing"
              }
            }
          }
        }
      }
      resolver_config {
        resolver_steps {
          class_path: "tfx.dsl.input_resolution.strategies.latest_blessed_model_strategy.LatestBlessedModelStrategy"
          config_json: "{}"
          input_keys: "model"
          input_keys: "model_blessing"
        }
      }
    }
    downstream_nodes: "Evaluator"
    execution_options {
      caching_options {
      }
    }
  }
}
nodes {
  pipeline_node {
    node_info {
      type {
        name: "tfx.components.trainer.component.Trainer"
      }
      id: "Trainer"
    }
    contexts {
      contexts {
        type {
          name: "pipeline"
        }
        name {
          field_value {
            string_value: "penguin-tfma"
          }
        }
      }
      contexts {
        type {
          name: "pipeline_run"
        }
        name {
          field_value {
            string_value: "2021-06-02T09:14:35.912739"
          }
        }
      }
      contexts {
        type {
          name: "node"
        }
        name {
          field_value {
            string_value: "penguin-tfma.Trainer"
          }
        }
      }
    }
    inputs {
      inputs {
        key: "examples"
        value {
          channels {
            producer_node_query {
              id: "CsvExampleGen"
            }
            context_queries {
              type {
                name: "pipeline"
              }
              name {
                field_value {
                  string_value: "penguin-tfma"
                }
              }
            }
            context_queries {
              type {
                name: "pipeline_run"
              }
              name {
                field_value {
                  string_value: "2021-06-02T09:14:35.912739"
                }
              }
            }
            context_queries {
              type {
                name: "node"
              }
              name {
                field_value {
                  string_value: "penguin-tfma.CsvExampleGen"
                }
              }
            }
            artifact_query {
              type {
                name: "Examples"
              }
            }
            output_key: "examples"
          }
        }
      }
    }
    outputs {
      outputs {
        key: "model"
        value {
          artifact_spec {
            type {
              name: "Model"
            }
          }
        }
      }
      outputs {
        key: "model_run"
        value {
          artifact_spec {
            type {
              name: "ModelRun"
            }
          }
        }
      }
    }
    parameters {
      parameters {
        key: "custom_config"
        value {
          field_value {
            string_value: "null"
          }
        }
      }
      parameters {
        key: "eval_args"
        value {
          field_value {
            string_value: "{\n  \"num_steps\": 5\n}"
          }
        }
      }
      parameters {
        key: "module_path"
        value {
          field_value {
            string_value: "penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl"
          }
        }
      }
      parameters {
        key: "train_args"
        value {
          field_value {
            string_value: "{\n  \"num_steps\": 100\n}"
          }
        }
      }
    }
    upstream_nodes: "CsvExampleGen"
    downstream_nodes: "Evaluator"
    downstream_nodes: "Pusher"
    execution_options {
      caching_options {
      }
    }
  }
}
nodes {
  pipeline_node {
    node_info {
      type {
        name: "tfx.components.evaluator.component.Evaluator"
      }
      id: "Evaluator"
    }
    contexts {
      contexts {
        type {
          name: "pipeline"
        }
        name {
          field_value {
            string_value: "penguin-tfma"
          }
        }
      }
      contexts {
        type {
          name: "pipeline_run"
        }
        name {
          field_value {
            string_value: "2021-06-02T09:14:35.912739"
          }
        }
      }
      contexts {
        type {
          name: "node"
        }
        name {
          field_value {
            string_value: "penguin-tfma.Evaluator"
          }
        }
      }
    }
    inputs {
      inputs {
        key: "baseline_model"
        value {
          channels {
            producer_node_query {
              id: "latest_blessed_model_resolver"
            }
            context_queries {
              type {
                name: "pipeline"
              }
              name {
                field_value {
                  string_value: "penguin-tfma"
                }
              }
            }
            context_queries {
              type {
                name: "pipeline_run"
              }
              name {
                field_value {
                  string_value: "2021-06-02T09:14:35.912739"
                }
              }
            }
            context_queries {
              type {
                name: "node"
              }
              name {
                field_value {
                  string_value: "penguin-tfma.latest_blessed_model_resolver"
                }
              }
            }
            artifact_query {
              type {
                name: "Model"
              }
            }
            output_key: "model"
          }
        }
      }
      inputs {
        key: "examples"
        value {
          channels {
            producer_node_query {
              id: "CsvExampleGen"
            }
            context_queries {
              type {
                name: "pipeline"
              }
              name {
                field_value {
                  string_value: "penguin-tfma"
                }
              }
            }
            context_queries {
              type {
                name: "pipeline_run"
              }
              name {
                field_value {
                  string_value: "2021-06-02T09:14:35.912739"
                }
              }
            }
            context_queries {
              type {
                name: "node"
              }
              name {
                field_value {
                  string_value: "penguin-tfma.CsvExampleGen"
                }
              }
            }
            artifact_query {
              type {
                name: "Examples"
              }
            }
            output_key: "examples"
          }
        }
      }
      inputs {
        key: "model"
        value {
          channels {
            producer_node_query {
              id: "Trainer"
            }
            context_queries {
              type {
                name: "pipeline"
              }
              name {
                field_value {
                  string_value: "penguin-tfma"
                }
              }
            }
            context_queries {
              type {
                name: "pipeline_run"
              }
              name {
                field_value {
                  string_value: "2021-06-02T09:14:35.912739"
                }
              }
            }
            context_queries {
              type {
                name: "node"
              }
              name {
                field_value {
                  string_value: "penguin-tfma.Trainer"
                }
              }
            }
            artifact_query {
              type {
                name: "Model"
              }
            }
            output_key: "model"
          }
        }
      }
    }
    outputs {
      outputs {
        key: "blessing"
        value {
          artifact_spec {
            type {
              name: "ModelBlessing"
            }
          }
        }
      }
      outputs {
        key: "evaluation"
        value {
          artifact_spec {
            type {
              name: "ModelEvaluation"
            }
          }
        }
      }
    }
    parameters {
      parameters {
        key: "eval_config"
        value {
          field_value {
            string_value: "{\n  \"metrics_specs\": [\n    {\n      \"per_slice_thresholds\": {\n        \"sparse_categorical_accuracy\": {\n          \"thresholds\": [\n            {\n              \"slicing_specs\": [\n                {}\n              ],\n              \"threshold\": {\n                \"change_threshold\": {\n                  \"absolute\": -1e-10,\n                  \"direction\": \"HIGHER_IS_BETTER\"\n                },\n                \"value_threshold\": {\n                  \"lower_bound\": 0.6\n                }\n              }\n            }\n          ]\n        }\n      }\n    }\n  ],\n  \"model_specs\": [\n    {\n      \"label_key\": \"species\"\n    }\n  ],\n  \"slicing_specs\": [\n    {},\n    {\n      \"feature_keys\": [\n        \"species\"\n      ]\n    }\n  ]\n}"
          }
        }
      }
      parameters {
        key: "example_splits"
        value {
          field_value {
            string_value: "null"
          }
        }
      }
    }
    upstream_nodes: "CsvExampleGen"
    upstream_nodes: "Trainer"
    upstream_nodes: "latest_blessed_model_resolver"
    downstream_nodes: "Pusher"
    execution_options {
      caching_options {
      }
    }
  }
}
nodes {
  pipeline_node {
    node_info {
      type {
        name: "tfx.components.pusher.component.Pusher"
      }
      id: "Pusher"
    }
    contexts {
      contexts {
        type {
          name: "pipeline"
        }
        name {
          field_value {
            string_value: "penguin-tfma"
          }
        }
      }
      contexts {
        type {
          name: "pipeline_run"
        }
        name {
          field_value {
            string_value: "2021-06-02T09:14:35.912739"
          }
        }
      }
      contexts {
        type {
          name: "node"
        }
        name {
          field_value {
            string_value: "penguin-tfma.Pusher"
          }
        }
      }
    }
    inputs {
      inputs {
        key: "model"
        value {
          channels {
            producer_node_query {
              id: "Trainer"
            }
            context_queries {
              type {
                name: "pipeline"
              }
              name {
                field_value {
                  string_value: "penguin-tfma"
                }
              }
            }
            context_queries {
              type {
                name: "pipeline_run"
              }
              name {
                field_value {
                  string_value: "2021-06-02T09:14:35.912739"
                }
              }
            }
            context_queries {
              type {
                name: "node"
              }
              name {
                field_value {
                  string_value: "penguin-tfma.Trainer"
                }
              }
            }
            artifact_query {
              type {
                name: "Model"
              }
            }
            output_key: "model"
          }
        }
      }
      inputs {
        key: "model_blessing"
        value {
          channels {
            producer_node_query {
              id: "Evaluator"
            }
            context_queries {
              type {
                name: "pipeline"
              }
              name {
                field_value {
                  string_value: "penguin-tfma"
                }
              }
            }
            context_queries {
              type {
                name: "pipeline_run"
              }
              name {
                field_value {
                  string_value: "2021-06-02T09:14:35.912739"
                }
              }
            }
            context_queries {
              type {
                name: "node"
              }
              name {
                field_value {
                  string_value: "penguin-tfma.Evaluator"
                }
              }
            }
            artifact_query {
              type {
                name: "ModelBlessing"
              }
            }
            output_key: "blessing"
          }
        }
      }
    }
    outputs {
      outputs {
        key: "pushed_model"
        value {
          artifact_spec {
            type {
              name: "PushedModel"
            }
          }
        }
      }
    }
    parameters {
      parameters {
        key: "custom_config"
        value {
          field_value {
            string_value: "null"
          }
        }
      }
      parameters {
        key: "push_destination"
        value {
          field_value {
            string_value: "{\n  \"filesystem\": {\n    \"base_directory\": \"serving_model/penguin-tfma\"\n  }\n}"
          }
        }
      }
    }
    upstream_nodes: "Evaluator"
    upstream_nodes: "Trainer"
    execution_options {
      caching_options {
      }
    }
  }
}
runtime_spec {
  pipeline_root {
    field_value {
      string_value: "pipelines/penguin-tfma"
    }
  }
  pipeline_run_id {
    field_value {
      string_value: "2021-06-02T09:14:35.912739"
    }
  }
}
execution_mode: SYNC
deployment_config {
  type_url: "type.googleapis.com/tfx.orchestration.IntermediateDeploymentConfig"
  value: "\n\206\001\n\006Pusher\022|\nOtype.googleapis.com/tfx.orchestration.executable_spec.PythonClassExecutableSpec\022)\n\'tfx.components.pusher.executor.Executor\n\236\001\n\rCsvExampleGen\022\214\001\nHtype.googleapis.com/tfx.orchestration.executable_spec.BeamExecutableSpec\022@\n>\n<tfx.components.example_gen.csv_example_gen.executor.Executor\n\207\001\n\tEvaluator\022z\nHtype.googleapis.com/tfx.orchestration.executable_spec.BeamExecutableSpec\022.\n,\n*tfx.components.evaluator.executor.Executor\n\220\001\n\007Trainer\022\204\001\nOtype.googleapis.com/tfx.orchestration.executable_spec.PythonClassExecutableSpec\0221\n/tfx.components.trainer.executor.GenericExecutor\022\230\001\n\rCsvExampleGen\022\206\001\nOtype.googleapis.com/tfx.orchestration.executable_spec.PythonClassExecutableSpec\0223\n1tfx.components.example_gen.driver.FileBasedDriver*[\n0type.googleapis.com/ml_metadata.ConnectionConfig\022\'\032%\n!metadata/penguin-tfma/metadata.db\020\003"
}

INFO:absl:Using deployment config:
 executor_specs {
  key: "CsvExampleGen"
  value {
    beam_executable_spec {
      python_executor_spec {
        class_path: "tfx.components.example_gen.csv_example_gen.executor.Executor"
      }
    }
  }
}
executor_specs {
  key: "Evaluator"
  value {
    beam_executable_spec {
      python_executor_spec {
        class_path: "tfx.components.evaluator.executor.Executor"
      }
    }
  }
}
executor_specs {
  key: "Pusher"
  value {
    python_class_executable_spec {
      class_path: "tfx.components.pusher.executor.Executor"
    }
  }
}
executor_specs {
  key: "Trainer"
  value {
    python_class_executable_spec {
      class_path: "tfx.components.trainer.executor.GenericExecutor"
    }
  }
}
custom_driver_specs {
  key: "CsvExampleGen"
  value {
    python_class_executable_spec {
      class_path: "tfx.components.example_gen.driver.FileBasedDriver"
    }
  }
}
metadata_connection_config {
  sqlite {
    filename_uri: "metadata/penguin-tfma/metadata.db"
    connection_mode: READWRITE_OPENCREATE
  }
}

INFO:absl:Using connection config:
 sqlite {
  filename_uri: "metadata/penguin-tfma/metadata.db"
  connection_mode: READWRITE_OPENCREATE
}

INFO:absl:Component CsvExampleGen is running.
INFO:absl:Running launcher for node_info {
  type {
    name: "tfx.components.example_gen.csv_example_gen.component.CsvExampleGen"
  }
  id: "CsvExampleGen"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.CsvExampleGen"
      }
    }
  }
}
outputs {
  outputs {
    key: "examples"
    value {
      artifact_spec {
        type {
          name: "Examples"
          properties {
            key: "span"
            value: INT
          }
          properties {
            key: "split_names"
            value: STRING
          }
          properties {
            key: "version"
            value: INT
          }
        }
      }
    }
  }
}
parameters {
  parameters {
    key: "input_base"
    value {
      field_value {
        string_value: "/tmp/tfx-data99f0r4mn"
      }
    }
  }
  parameters {
    key: "input_config"
    value {
      field_value {
        string_value: "{\n  \"splits\": [\n    {\n      \"name\": \"single_split\",\n      \"pattern\": \"*\"\n    }\n  ]\n}"
      }
    }
  }
  parameters {
    key: "output_config"
    value {
      field_value {
        string_value: "{\n  \"split_config\": {\n    \"splits\": [\n      {\n        \"hash_buckets\": 2,\n        \"name\": \"train\"\n      },\n      {\n        \"hash_buckets\": 1,\n        \"name\": \"eval\"\n      }\n    ]\n  }\n}"
      }
    }
  }
  parameters {
    key: "output_data_format"
    value {
      field_value {
        int_value: 6
      }
    }
  }
}
downstream_nodes: "Evaluator"
downstream_nodes: "Trainer"
execution_options {
  caching_options {
  }
}

INFO:absl:MetadataStore with DB connection initialized
INFO:absl:select span and version = (0, None)
INFO:absl:latest span and version = (0, None)
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Going to run a new execution 1
INFO:absl:Going to run a new execution: ExecutionInfo(execution_id=1, input_dict={}, output_dict=defaultdict(<class 'list'>, {'examples': [Artifact(artifact: uri: "pipelines/penguin-tfma/CsvExampleGen/examples/1"
custom_properties {
  key: "input_fingerprint"
  value {
    string_value: "split:single_split,num_files:1,total_bytes:25648,xor_checksum:1622625275,sum_checksum:1622625275"
  }
}
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:CsvExampleGen:examples:0"
  }
}
custom_properties {
  key: "span"
  value {
    int_value: 0
  }
}
, artifact_type: name: "Examples"
properties {
  key: "span"
  value: INT
}
properties {
  key: "split_names"
  value: STRING
}
properties {
  key: "version"
  value: INT
}
)]}), exec_properties={'output_data_format': 6, 'input_config': '{\n  "splits": [\n    {\n      "name": "single_split",\n      "pattern": "*"\n    }\n  ]\n}', 'input_base': '/tmp/tfx-data99f0r4mn', 'output_config': '{\n  "split_config": {\n    "splits": [\n      {\n        "hash_buckets": 2,\n        "name": "train"\n      },\n      {\n        "hash_buckets": 1,\n        "name": "eval"\n      }\n    ]\n  }\n}', 'span': 0, 'version': None, 'input_fingerprint': 'split:single_split,num_files:1,total_bytes:25648,xor_checksum:1622625275,sum_checksum:1622625275'}, execution_output_uri='pipelines/penguin-tfma/CsvExampleGen/.system/executor_execution/1/executor_output.pb', stateful_working_dir='pipelines/penguin-tfma/CsvExampleGen/.system/stateful_working_dir/2021-06-02T09:14:35.912739', tmp_dir='pipelines/penguin-tfma/CsvExampleGen/.system/executor_execution/1/.temp/', pipeline_node=node_info {
  type {
    name: "tfx.components.example_gen.csv_example_gen.component.CsvExampleGen"
  }
  id: "CsvExampleGen"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.CsvExampleGen"
      }
    }
  }
}
outputs {
  outputs {
    key: "examples"
    value {
      artifact_spec {
        type {
          name: "Examples"
          properties {
            key: "span"
            value: INT
          }
          properties {
            key: "split_names"
            value: STRING
          }
          properties {
            key: "version"
            value: INT
          }
        }
      }
    }
  }
}
parameters {
  parameters {
    key: "input_base"
    value {
      field_value {
        string_value: "/tmp/tfx-data99f0r4mn"
      }
    }
  }
  parameters {
    key: "input_config"
    value {
      field_value {
        string_value: "{\n  \"splits\": [\n    {\n      \"name\": \"single_split\",\n      \"pattern\": \"*\"\n    }\n  ]\n}"
      }
    }
  }
  parameters {
    key: "output_config"
    value {
      field_value {
        string_value: "{\n  \"split_config\": {\n    \"splits\": [\n      {\n        \"hash_buckets\": 2,\n        \"name\": \"train\"\n      },\n      {\n        \"hash_buckets\": 1,\n        \"name\": \"eval\"\n      }\n    ]\n  }\n}"
      }
    }
  }
  parameters {
    key: "output_data_format"
    value {
      field_value {
        int_value: 6
      }
    }
  }
}
downstream_nodes: "Evaluator"
downstream_nodes: "Trainer"
execution_options {
  caching_options {
  }
}
, pipeline_info=id: "penguin-tfma"
, pipeline_run_id='2021-06-02T09:14:35.912739')
INFO:absl:Generating examples.
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
INFO:absl:Processing input csv data /tmp/tfx-data99f0r4mn/* to TFExample.
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
INFO:absl:Examples generated.
INFO:absl:Cleaning up stateless execution info.
INFO:absl:Execution 1 succeeded.
INFO:absl:Cleaning up stateful execution info.
INFO:absl:Publishing output artifacts defaultdict(<class 'list'>, {'examples': [Artifact(artifact: uri: "pipelines/penguin-tfma/CsvExampleGen/examples/1"
custom_properties {
  key: "input_fingerprint"
  value {
    string_value: "split:single_split,num_files:1,total_bytes:25648,xor_checksum:1622625275,sum_checksum:1622625275"
  }
}
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:CsvExampleGen:examples:0"
  }
}
custom_properties {
  key: "span"
  value {
    int_value: 0
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
, artifact_type: name: "Examples"
properties {
  key: "span"
  value: INT
}
properties {
  key: "split_names"
  value: STRING
}
properties {
  key: "version"
  value: INT
}
)]}) for execution 1
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Component CsvExampleGen is finished.
INFO:absl:Component latest_blessed_model_resolver is running.
INFO:absl:Running launcher for node_info {
  type {
    name: "tfx.dsl.components.common.resolver.Resolver"
  }
  id: "latest_blessed_model_resolver"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.latest_blessed_model_resolver"
      }
    }
  }
}
inputs {
  inputs {
    key: "model"
    value {
      channels {
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        artifact_query {
          type {
            name: "Model"
          }
        }
      }
    }
  }
  inputs {
    key: "model_blessing"
    value {
      channels {
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        artifact_query {
          type {
            name: "ModelBlessing"
          }
        }
      }
    }
  }
  resolver_config {
    resolver_steps {
      class_path: "tfx.dsl.input_resolution.strategies.latest_blessed_model_strategy.LatestBlessedModelStrategy"
      config_json: "{}"
      input_keys: "model"
      input_keys: "model_blessing"
    }
  }
}
downstream_nodes: "Evaluator"
execution_options {
  caching_options {
  }
}

INFO:absl:Running as an resolver node.
INFO:absl:MetadataStore with DB connection initialized
WARNING:absl:Artifact type Model is not found in MLMD.
WARNING:absl:Artifact type ModelBlessing is not found in MLMD.
INFO:absl:Component latest_blessed_model_resolver is finished.
INFO:absl:Component Trainer is running.
INFO:absl:Running launcher for node_info {
  type {
    name: "tfx.components.trainer.component.Trainer"
  }
  id: "Trainer"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.Trainer"
      }
    }
  }
}
inputs {
  inputs {
    key: "examples"
    value {
      channels {
        producer_node_query {
          id: "CsvExampleGen"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.CsvExampleGen"
            }
          }
        }
        artifact_query {
          type {
            name: "Examples"
          }
        }
        output_key: "examples"
      }
    }
  }
}
outputs {
  outputs {
    key: "model"
    value {
      artifact_spec {
        type {
          name: "Model"
        }
      }
    }
  }
  outputs {
    key: "model_run"
    value {
      artifact_spec {
        type {
          name: "ModelRun"
        }
      }
    }
  }
}
parameters {
  parameters {
    key: "custom_config"
    value {
      field_value {
        string_value: "null"
      }
    }
  }
  parameters {
    key: "eval_args"
    value {
      field_value {
        string_value: "{\n  \"num_steps\": 5\n}"
      }
    }
  }
  parameters {
    key: "module_path"
    value {
      field_value {
        string_value: "penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl"
      }
    }
  }
  parameters {
    key: "train_args"
    value {
      field_value {
        string_value: "{\n  \"num_steps\": 100\n}"
      }
    }
  }
}
upstream_nodes: "CsvExampleGen"
downstream_nodes: "Evaluator"
downstream_nodes: "Pusher"
execution_options {
  caching_options {
  }
}

INFO:absl:MetadataStore with DB connection initialized
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Going to run a new execution 3
INFO:absl:Going to run a new execution: ExecutionInfo(execution_id=3, input_dict={'examples': [Artifact(artifact: id: 1
type_id: 6
uri: "pipelines/penguin-tfma/CsvExampleGen/examples/1"
properties {
  key: "split_names"
  value {
    string_value: "[\"train\", \"eval\"]"
  }
}
custom_properties {
  key: "input_fingerprint"
  value {
    string_value: "split:single_split,num_files:1,total_bytes:25648,xor_checksum:1622625275,sum_checksum:1622625275"
  }
}
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:CsvExampleGen:examples:0"
  }
}
custom_properties {
  key: "payload_format"
  value {
    string_value: "FORMAT_TF_EXAMPLE"
  }
}
custom_properties {
  key: "span"
  value {
    int_value: 0
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
state: LIVE
create_time_since_epoch: 1622625277030
last_update_time_since_epoch: 1622625277030
, artifact_type: id: 6
name: "Examples"
properties {
  key: "span"
  value: INT
}
properties {
  key: "split_names"
  value: STRING
}
properties {
  key: "version"
  value: INT
}
)]}, output_dict=defaultdict(<class 'list'>, {'model_run': [Artifact(artifact: uri: "pipelines/penguin-tfma/Trainer/model_run/3"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Trainer:model_run:0"
  }
}
, artifact_type: name: "ModelRun"
)], 'model': [Artifact(artifact: uri: "pipelines/penguin-tfma/Trainer/model/3"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Trainer:model:0"
  }
}
, artifact_type: name: "Model"
)]}), exec_properties={'module_path': 'penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl', 'train_args': '{\n  "num_steps": 100\n}', 'custom_config': 'null', 'eval_args': '{\n  "num_steps": 5\n}'}, execution_output_uri='pipelines/penguin-tfma/Trainer/.system/executor_execution/3/executor_output.pb', stateful_working_dir='pipelines/penguin-tfma/Trainer/.system/stateful_working_dir/2021-06-02T09:14:35.912739', tmp_dir='pipelines/penguin-tfma/Trainer/.system/executor_execution/3/.temp/', pipeline_node=node_info {
  type {
    name: "tfx.components.trainer.component.Trainer"
  }
  id: "Trainer"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.Trainer"
      }
    }
  }
}
inputs {
  inputs {
    key: "examples"
    value {
      channels {
        producer_node_query {
          id: "CsvExampleGen"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.CsvExampleGen"
            }
          }
        }
        artifact_query {
          type {
            name: "Examples"
          }
        }
        output_key: "examples"
      }
    }
  }
}
outputs {
  outputs {
    key: "model"
    value {
      artifact_spec {
        type {
          name: "Model"
        }
      }
    }
  }
  outputs {
    key: "model_run"
    value {
      artifact_spec {
        type {
          name: "ModelRun"
        }
      }
    }
  }
}
parameters {
  parameters {
    key: "custom_config"
    value {
      field_value {
        string_value: "null"
      }
    }
  }
  parameters {
    key: "eval_args"
    value {
      field_value {
        string_value: "{\n  \"num_steps\": 5\n}"
      }
    }
  }
  parameters {
    key: "module_path"
    value {
      field_value {
        string_value: "penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl"
      }
    }
  }
  parameters {
    key: "train_args"
    value {
      field_value {
        string_value: "{\n  \"num_steps\": 100\n}"
      }
    }
  }
}
upstream_nodes: "CsvExampleGen"
downstream_nodes: "Evaluator"
downstream_nodes: "Pusher"
execution_options {
  caching_options {
  }
}
, pipeline_info=id: "penguin-tfma"
, pipeline_run_id='2021-06-02T09:14:35.912739')
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['-f', '/tmp/tmp3crfzb02.json', '--HistoryManager.hist_file=:memory:']
INFO:absl:Attempting to infer TFX Python dependency for beam
INFO:absl:Copying all content from install dir /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tfx to temp dir /tmp/tmpawrhm8mb/build/tfx
INFO:absl:Generating a temp setup file at /tmp/tmpawrhm8mb/build/tfx/setup.py
INFO:absl:Creating temporary sdist package, logs available at /tmp/tmpawrhm8mb/build/tfx/setup.log
INFO:absl:Added --extra_package=/tmp/tmpawrhm8mb/build/tfx/dist/tfx_ephemeral-0.30.0.tar.gz to beam args
INFO:absl:Train on the 'train' split when train_args.splits is not set.
INFO:absl:Evaluate on the 'eval' split when eval_args.splits is not set.
ERROR:absl:udf_utils.get_fn {'module_path': 'penguin_trainer@pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl', 'train_args': '{\n  "num_steps": 100\n}', 'custom_config': 'null', 'eval_args': '{\n  "num_steps": 5\n}'} 'run_fn'
INFO:absl:Installing 'pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl' to a temporary directory.
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmph3llqx07', 'pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl']
INFO:absl:Successfully installed 'pipelines/penguin-tfma/_wheels/tfx_user_code_Trainer-0.0+1e19049dced0ccb21e0af60dae1c6e0ef09b63d1ff0e370d7f699920c2735703-py3-none-any.whl'.
INFO:absl:Training model.
INFO:absl:Feature body_mass_g has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_depth_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_length_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature flipper_length_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature species has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature body_mass_g has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_depth_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_length_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature flipper_length_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature species has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature body_mass_g has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_depth_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_length_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature flipper_length_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature species has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature body_mass_g has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_depth_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature culmen_length_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature flipper_length_mm has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Feature species has a shape dim {
  size: 1
}
. Setting to DenseTensor.
INFO:absl:Model: "model"
INFO:absl:__________________________________________________________________________________________________
INFO:absl:Layer (type)                    Output Shape         Param #     Connected to                     
INFO:absl:==================================================================================================
INFO:absl:culmen_length_mm (InputLayer)   [(None, 1)]          0                                            
INFO:absl:__________________________________________________________________________________________________
INFO:absl:culmen_depth_mm (InputLayer)    [(None, 1)]          0                                            
INFO:absl:__________________________________________________________________________________________________
INFO:absl:flipper_length_mm (InputLayer)  [(None, 1)]          0                                            
INFO:absl:__________________________________________________________________________________________________
INFO:absl:body_mass_g (InputLayer)        [(None, 1)]          0                                            
INFO:absl:__________________________________________________________________________________________________
INFO:absl:concatenate (Concatenate)       (None, 4)            0           culmen_length_mm[0][0]           
INFO:absl:                                                                 culmen_depth_mm[0][0]            
INFO:absl:                                                                 flipper_length_mm[0][0]          
INFO:absl:                                                                 body_mass_g[0][0]                
INFO:absl:__________________________________________________________________________________________________
INFO:absl:dense (Dense)                   (None, 8)            40          concatenate[0][0]                
INFO:absl:__________________________________________________________________________________________________
INFO:absl:dense_1 (Dense)                 (None, 8)            72          dense[0][0]                      
INFO:absl:__________________________________________________________________________________________________
INFO:absl:dense_2 (Dense)                 (None, 3)            27          dense_1[0][0]                    
INFO:absl:==================================================================================================
INFO:absl:Total params: 139
INFO:absl:Trainable params: 139
INFO:absl:Non-trainable params: 0
INFO:absl:__________________________________________________________________________________________________
100/100 [==============================] - 1s 6ms/step - loss: 0.8119 - sparse_categorical_accuracy: 0.5550 - val_loss: 0.2805 - val_sparse_categorical_accuracy: 0.9200
INFO:tensorflow:Assets written to: pipelines/penguin-tfma/Trainer/model/3/Format-Serving/assets
INFO:absl:Training complete. Model written to pipelines/penguin-tfma/Trainer/model/3/Format-Serving. ModelRun written to pipelines/penguin-tfma/Trainer/model_run/3
INFO:absl:Cleaning up stateless execution info.
INFO:absl:Execution 3 succeeded.
INFO:absl:Cleaning up stateful execution info.
INFO:absl:Publishing output artifacts defaultdict(<class 'list'>, {'model_run': [Artifact(artifact: uri: "pipelines/penguin-tfma/Trainer/model_run/3"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Trainer:model_run:0"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
, artifact_type: name: "ModelRun"
)], 'model': [Artifact(artifact: uri: "pipelines/penguin-tfma/Trainer/model/3"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Trainer:model:0"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
, artifact_type: name: "Model"
)]}) for execution 3
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Component Trainer is finished.
INFO:absl:Component Evaluator is running.
INFO:absl:Running launcher for node_info {
  type {
    name: "tfx.components.evaluator.component.Evaluator"
  }
  id: "Evaluator"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.Evaluator"
      }
    }
  }
}
inputs {
  inputs {
    key: "baseline_model"
    value {
      channels {
        producer_node_query {
          id: "latest_blessed_model_resolver"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.latest_blessed_model_resolver"
            }
          }
        }
        artifact_query {
          type {
            name: "Model"
          }
        }
        output_key: "model"
      }
    }
  }
  inputs {
    key: "examples"
    value {
      channels {
        producer_node_query {
          id: "CsvExampleGen"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.CsvExampleGen"
            }
          }
        }
        artifact_query {
          type {
            name: "Examples"
          }
        }
        output_key: "examples"
      }
    }
  }
  inputs {
    key: "model"
    value {
      channels {
        producer_node_query {
          id: "Trainer"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.Trainer"
            }
          }
        }
        artifact_query {
          type {
            name: "Model"
          }
        }
        output_key: "model"
      }
    }
  }
}
outputs {
  outputs {
    key: "blessing"
    value {
      artifact_spec {
        type {
          name: "ModelBlessing"
        }
      }
    }
  }
  outputs {
    key: "evaluation"
    value {
      artifact_spec {
        type {
          name: "ModelEvaluation"
        }
      }
    }
  }
}
parameters {
  parameters {
    key: "eval_config"
    value {
      field_value {
        string_value: "{\n  \"metrics_specs\": [\n    {\n      \"per_slice_thresholds\": {\n        \"sparse_categorical_accuracy\": {\n          \"thresholds\": [\n            {\n              \"slicing_specs\": [\n                {}\n              ],\n              \"threshold\": {\n                \"change_threshold\": {\n                  \"absolute\": -1e-10,\n                  \"direction\": \"HIGHER_IS_BETTER\"\n                },\n                \"value_threshold\": {\n                  \"lower_bound\": 0.6\n                }\n              }\n            }\n          ]\n        }\n      }\n    }\n  ],\n  \"model_specs\": [\n    {\n      \"label_key\": \"species\"\n    }\n  ],\n  \"slicing_specs\": [\n    {},\n    {\n      \"feature_keys\": [\n        \"species\"\n      ]\n    }\n  ]\n}"
      }
    }
  }
  parameters {
    key: "example_splits"
    value {
      field_value {
        string_value: "null"
      }
    }
  }
}
upstream_nodes: "CsvExampleGen"
upstream_nodes: "Trainer"
upstream_nodes: "latest_blessed_model_resolver"
downstream_nodes: "Pusher"
execution_options {
  caching_options {
  }
}

INFO:absl:MetadataStore with DB connection initialized
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Going to run a new execution 4
INFO:absl:Going to run a new execution: ExecutionInfo(execution_id=4, input_dict={'examples': [Artifact(artifact: id: 1
type_id: 6
uri: "pipelines/penguin-tfma/CsvExampleGen/examples/1"
properties {
  key: "split_names"
  value {
    string_value: "[\"train\", \"eval\"]"
  }
}
custom_properties {
  key: "input_fingerprint"
  value {
    string_value: "split:single_split,num_files:1,total_bytes:25648,xor_checksum:1622625275,sum_checksum:1622625275"
  }
}
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:CsvExampleGen:examples:0"
  }
}
custom_properties {
  key: "payload_format"
  value {
    string_value: "FORMAT_TF_EXAMPLE"
  }
}
custom_properties {
  key: "span"
  value {
    int_value: 0
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
state: LIVE
create_time_since_epoch: 1622625277030
last_update_time_since_epoch: 1622625277030
, artifact_type: id: 6
name: "Examples"
properties {
  key: "span"
  value: INT
}
properties {
  key: "split_names"
  value: STRING
}
properties {
  key: "version"
  value: INT
}
)], 'baseline_model': [], 'model': [Artifact(artifact: id: 3
type_id: 10
uri: "pipelines/penguin-tfma/Trainer/model/3"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Trainer:model:0"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
state: LIVE
create_time_since_epoch: 1622625281771
last_update_time_since_epoch: 1622625281771
, artifact_type: id: 10
name: "Model"
)]}, output_dict=defaultdict(<class 'list'>, {'evaluation': [Artifact(artifact: uri: "pipelines/penguin-tfma/Evaluator/evaluation/4"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Evaluator:evaluation:0"
  }
}
, artifact_type: name: "ModelEvaluation"
)], 'blessing': [Artifact(artifact: uri: "pipelines/penguin-tfma/Evaluator/blessing/4"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Evaluator:blessing:0"
  }
}
, artifact_type: name: "ModelBlessing"
)]}), exec_properties={'eval_config': '{\n  "metrics_specs": [\n    {\n      "per_slice_thresholds": {\n        "sparse_categorical_accuracy": {\n          "thresholds": [\n            {\n              "slicing_specs": [\n                {}\n              ],\n              "threshold": {\n                "change_threshold": {\n                  "absolute": -1e-10,\n                  "direction": "HIGHER_IS_BETTER"\n                },\n                "value_threshold": {\n                  "lower_bound": 0.6\n                }\n              }\n            }\n          ]\n        }\n      }\n    }\n  ],\n  "model_specs": [\n    {\n      "label_key": "species"\n    }\n  ],\n  "slicing_specs": [\n    {},\n    {\n      "feature_keys": [\n        "species"\n      ]\n    }\n  ]\n}', 'example_splits': 'null'}, execution_output_uri='pipelines/penguin-tfma/Evaluator/.system/executor_execution/4/executor_output.pb', stateful_working_dir='pipelines/penguin-tfma/Evaluator/.system/stateful_working_dir/2021-06-02T09:14:35.912739', tmp_dir='pipelines/penguin-tfma/Evaluator/.system/executor_execution/4/.temp/', pipeline_node=node_info {
  type {
    name: "tfx.components.evaluator.component.Evaluator"
  }
  id: "Evaluator"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.Evaluator"
      }
    }
  }
}
inputs {
  inputs {
    key: "baseline_model"
    value {
      channels {
        producer_node_query {
          id: "latest_blessed_model_resolver"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.latest_blessed_model_resolver"
            }
          }
        }
        artifact_query {
          type {
            name: "Model"
          }
        }
        output_key: "model"
      }
    }
  }
  inputs {
    key: "examples"
    value {
      channels {
        producer_node_query {
          id: "CsvExampleGen"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.CsvExampleGen"
            }
          }
        }
        artifact_query {
          type {
            name: "Examples"
          }
        }
        output_key: "examples"
      }
    }
  }
  inputs {
    key: "model"
    value {
      channels {
        producer_node_query {
          id: "Trainer"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.Trainer"
            }
          }
        }
        artifact_query {
          type {
            name: "Model"
          }
        }
        output_key: "model"
      }
    }
  }
}
outputs {
  outputs {
    key: "blessing"
    value {
      artifact_spec {
        type {
          name: "ModelBlessing"
        }
      }
    }
  }
  outputs {
    key: "evaluation"
    value {
      artifact_spec {
        type {
          name: "ModelEvaluation"
        }
      }
    }
  }
}
parameters {
  parameters {
    key: "eval_config"
    value {
      field_value {
        string_value: "{\n  \"metrics_specs\": [\n    {\n      \"per_slice_thresholds\": {\n        \"sparse_categorical_accuracy\": {\n          \"thresholds\": [\n            {\n              \"slicing_specs\": [\n                {}\n              ],\n              \"threshold\": {\n                \"change_threshold\": {\n                  \"absolute\": -1e-10,\n                  \"direction\": \"HIGHER_IS_BETTER\"\n                },\n                \"value_threshold\": {\n                  \"lower_bound\": 0.6\n                }\n              }\n            }\n          ]\n        }\n      }\n    }\n  ],\n  \"model_specs\": [\n    {\n      \"label_key\": \"species\"\n    }\n  ],\n  \"slicing_specs\": [\n    {},\n    {\n      \"feature_keys\": [\n        \"species\"\n      ]\n    }\n  ]\n}"
      }
    }
  }
  parameters {
    key: "example_splits"
    value {
      field_value {
        string_value: "null"
      }
    }
  }
}
upstream_nodes: "CsvExampleGen"
upstream_nodes: "Trainer"
upstream_nodes: "latest_blessed_model_resolver"
downstream_nodes: "Pusher"
execution_options {
  caching_options {
  }
}
, pipeline_info=id: "penguin-tfma"
, pipeline_run_id='2021-06-02T09:14:35.912739')
ERROR:absl:udf_utils.get_fn {'eval_config': '{\n  "metrics_specs": [\n    {\n      "per_slice_thresholds": {\n        "sparse_categorical_accuracy": {\n          "thresholds": [\n            {\n              "slicing_specs": [\n                {}\n              ],\n              "threshold": {\n                "change_threshold": {\n                  "absolute": -1e-10,\n                  "direction": "HIGHER_IS_BETTER"\n                },\n                "value_threshold": {\n                  "lower_bound": 0.6\n                }\n              }\n            }\n          ]\n        }\n      }\n    }\n  ],\n  "model_specs": [\n    {\n      "label_key": "species"\n    }\n  ],\n  "slicing_specs": [\n    {},\n    {\n      "feature_keys": [\n        "species"\n      ]\n    }\n  ]\n}', 'example_splits': 'null'} 'custom_eval_shared_model'
ERROR:absl:There are change thresholds, but the baseline is missing. This is allowed only when rubber stamping (first run).
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
  label_key: "species"
}
slicing_specs {
}
slicing_specs {
  feature_keys: "species"
}
metrics_specs {
  per_slice_thresholds {
    key: "sparse_categorical_accuracy"
    value {
      thresholds {
        slicing_specs {
        }
        threshold {
          value_threshold {
            lower_bound {
              value: 0.6
            }
          }
        }
      }
    }
  }
}

INFO:absl:Using pipelines/penguin-tfma/Trainer/model/3/Format-Serving as  model.
INFO:absl:The 'example_splits' parameter is not set, using 'eval' split.
INFO:absl:Evaluating model.
ERROR:absl:udf_utils.get_fn {'eval_config': '{\n  "metrics_specs": [\n    {\n      "per_slice_thresholds": {\n        "sparse_categorical_accuracy": {\n          "thresholds": [\n            {\n              "slicing_specs": [\n                {}\n              ],\n              "threshold": {\n                "change_threshold": {\n                  "absolute": -1e-10,\n                  "direction": "HIGHER_IS_BETTER"\n                },\n                "value_threshold": {\n                  "lower_bound": 0.6\n                }\n              }\n            }\n          ]\n        }\n      }\n    }\n  ],\n  "model_specs": [\n    {\n      "label_key": "species"\n    }\n  ],\n  "slicing_specs": [\n    {},\n    {\n      "feature_keys": [\n        "species"\n      ]\n    }\n  ]\n}', 'example_splits': 'null'} 'custom_extractors'
INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=
model_specs {
  label_key: "species"
}
slicing_specs {
}
slicing_specs {
  feature_keys: "species"
}
metrics_specs {
  model_names: ""
  per_slice_thresholds {
    key: "sparse_categorical_accuracy"
    value {
      thresholds {
        slicing_specs {
        }
        threshold {
          value_threshold {
            lower_bound {
              value: 0.6
            }
          }
        }
      }
    }
  }
}

INFO:absl:Evaluation complete. Results written to pipelines/penguin-tfma/Evaluator/evaluation/4.
INFO:absl:Checking validation results.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:113: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`
INFO:absl:Blessing result True written to pipelines/penguin-tfma/Evaluator/blessing/4.
INFO:absl:Cleaning up stateless execution info.
INFO:absl:Execution 4 succeeded.
INFO:absl:Cleaning up stateful execution info.
INFO:absl:Publishing output artifacts defaultdict(<class 'list'>, {'evaluation': [Artifact(artifact: uri: "pipelines/penguin-tfma/Evaluator/evaluation/4"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Evaluator:evaluation:0"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
, artifact_type: name: "ModelEvaluation"
)], 'blessing': [Artifact(artifact: uri: "pipelines/penguin-tfma/Evaluator/blessing/4"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Evaluator:blessing:0"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
, artifact_type: name: "ModelBlessing"
)]}) for execution 4
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Component Evaluator is finished.
INFO:absl:Component Pusher is running.
INFO:absl:Running launcher for node_info {
  type {
    name: "tfx.components.pusher.component.Pusher"
  }
  id: "Pusher"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.Pusher"
      }
    }
  }
}
inputs {
  inputs {
    key: "model"
    value {
      channels {
        producer_node_query {
          id: "Trainer"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.Trainer"
            }
          }
        }
        artifact_query {
          type {
            name: "Model"
          }
        }
        output_key: "model"
      }
    }
  }
  inputs {
    key: "model_blessing"
    value {
      channels {
        producer_node_query {
          id: "Evaluator"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.Evaluator"
            }
          }
        }
        artifact_query {
          type {
            name: "ModelBlessing"
          }
        }
        output_key: "blessing"
      }
    }
  }
}
outputs {
  outputs {
    key: "pushed_model"
    value {
      artifact_spec {
        type {
          name: "PushedModel"
        }
      }
    }
  }
}
parameters {
  parameters {
    key: "custom_config"
    value {
      field_value {
        string_value: "null"
      }
    }
  }
  parameters {
    key: "push_destination"
    value {
      field_value {
        string_value: "{\n  \"filesystem\": {\n    \"base_directory\": \"serving_model/penguin-tfma\"\n  }\n}"
      }
    }
  }
}
upstream_nodes: "Evaluator"
upstream_nodes: "Trainer"
execution_options {
  caching_options {
  }
}

INFO:absl:MetadataStore with DB connection initialized
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Going to run a new execution 5
INFO:absl:Going to run a new execution: ExecutionInfo(execution_id=5, input_dict={'model': [Artifact(artifact: id: 3
type_id: 10
uri: "pipelines/penguin-tfma/Trainer/model/3"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Trainer:model:0"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
state: LIVE
create_time_since_epoch: 1622625281771
last_update_time_since_epoch: 1622625281771
, artifact_type: id: 10
name: "Model"
)], 'model_blessing': [Artifact(artifact: id: 5
type_id: 13
uri: "pipelines/penguin-tfma/Evaluator/blessing/4"
custom_properties {
  key: "blessed"
  value {
    int_value: 1
  }
}
custom_properties {
  key: "current_model"
  value {
    string_value: "pipelines/penguin-tfma/Trainer/model/3"
  }
}
custom_properties {
  key: "current_model_id"
  value {
    int_value: 3
  }
}
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Evaluator:blessing:0"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
state: LIVE
create_time_since_epoch: 1622625287275
last_update_time_since_epoch: 1622625287275
, artifact_type: id: 13
name: "ModelBlessing"
)]}, output_dict=defaultdict(<class 'list'>, {'pushed_model': [Artifact(artifact: uri: "pipelines/penguin-tfma/Pusher/pushed_model/5"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Pusher:pushed_model:0"
  }
}
, artifact_type: name: "PushedModel"
)]}), exec_properties={'push_destination': '{\n  "filesystem": {\n    "base_directory": "serving_model/penguin-tfma"\n  }\n}', 'custom_config': 'null'}, execution_output_uri='pipelines/penguin-tfma/Pusher/.system/executor_execution/5/executor_output.pb', stateful_working_dir='pipelines/penguin-tfma/Pusher/.system/stateful_working_dir/2021-06-02T09:14:35.912739', tmp_dir='pipelines/penguin-tfma/Pusher/.system/executor_execution/5/.temp/', pipeline_node=node_info {
  type {
    name: "tfx.components.pusher.component.Pusher"
  }
  id: "Pusher"
}
contexts {
  contexts {
    type {
      name: "pipeline"
    }
    name {
      field_value {
        string_value: "penguin-tfma"
      }
    }
  }
  contexts {
    type {
      name: "pipeline_run"
    }
    name {
      field_value {
        string_value: "2021-06-02T09:14:35.912739"
      }
    }
  }
  contexts {
    type {
      name: "node"
    }
    name {
      field_value {
        string_value: "penguin-tfma.Pusher"
      }
    }
  }
}
inputs {
  inputs {
    key: "model"
    value {
      channels {
        producer_node_query {
          id: "Trainer"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.Trainer"
            }
          }
        }
        artifact_query {
          type {
            name: "Model"
          }
        }
        output_key: "model"
      }
    }
  }
  inputs {
    key: "model_blessing"
    value {
      channels {
        producer_node_query {
          id: "Evaluator"
        }
        context_queries {
          type {
            name: "pipeline"
          }
          name {
            field_value {
              string_value: "penguin-tfma"
            }
          }
        }
        context_queries {
          type {
            name: "pipeline_run"
          }
          name {
            field_value {
              string_value: "2021-06-02T09:14:35.912739"
            }
          }
        }
        context_queries {
          type {
            name: "node"
          }
          name {
            field_value {
              string_value: "penguin-tfma.Evaluator"
            }
          }
        }
        artifact_query {
          type {
            name: "ModelBlessing"
          }
        }
        output_key: "blessing"
      }
    }
  }
}
outputs {
  outputs {
    key: "pushed_model"
    value {
      artifact_spec {
        type {
          name: "PushedModel"
        }
      }
    }
  }
}
parameters {
  parameters {
    key: "custom_config"
    value {
      field_value {
        string_value: "null"
      }
    }
  }
  parameters {
    key: "push_destination"
    value {
      field_value {
        string_value: "{\n  \"filesystem\": {\n    \"base_directory\": \"serving_model/penguin-tfma\"\n  }\n}"
      }
    }
  }
}
upstream_nodes: "Evaluator"
upstream_nodes: "Trainer"
execution_options {
  caching_options {
  }
}
, pipeline_info=id: "penguin-tfma"
, pipeline_run_id='2021-06-02T09:14:35.912739')
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['-f', '/tmp/tmp3crfzb02.json', '--HistoryManager.hist_file=:memory:']
INFO:absl:Attempting to infer TFX Python dependency for beam
INFO:absl:Copying all content from install dir /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tfx to temp dir /tmp/tmp4ysqo31p/build/tfx
INFO:absl:Generating a temp setup file at /tmp/tmp4ysqo31p/build/tfx/setup.py
INFO:absl:Creating temporary sdist package, logs available at /tmp/tmp4ysqo31p/build/tfx/setup.log
INFO:absl:Added --extra_package=/tmp/tmp4ysqo31p/build/tfx/dist/tfx_ephemeral-0.30.0.tar.gz to beam args
INFO:absl:Model version: 1622625288
INFO:absl:Model written to serving path serving_model/penguin-tfma/1622625288.
INFO:absl:Model pushed to pipelines/penguin-tfma/Pusher/pushed_model/5.
INFO:absl:Cleaning up stateless execution info.
INFO:absl:Execution 5 succeeded.
INFO:absl:Cleaning up stateful execution info.
INFO:absl:Publishing output artifacts defaultdict(<class 'list'>, {'pushed_model': [Artifact(artifact: uri: "pipelines/penguin-tfma/Pusher/pushed_model/5"
custom_properties {
  key: "name"
  value {
    string_value: "penguin-tfma:2021-06-02T09:14:35.912739:Pusher:pushed_model:0"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "0.30.0"
  }
}
, artifact_type: name: "PushedModel"
)]}) for execution 5
INFO:absl:MetadataStore with DB connection initialized
INFO:absl:Component Pusher is finished.

When the pipeline completes, you should be able to see the following:

INFO:absl:Blessing result True written to pipelines/penguin-tfma/Evaluator/blessing/4.

Alternatively, you can manually check the output directory where the generated artifacts are stored. If you visit pipelines/penguin-tfma/Evaluator/blessing/ with a file browser, you will see a file named either BLESSED or NOT_BLESSED, depending on the evaluation result.

If the blessing result is False, Pusher will refuse to push the model to the serving_model_dir, because the model is not good enough to be used in production.
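
You can also verify this from within the notebook. The following is a minimal sketch, assuming the SERVING_MODEL_DIR variable defined earlier; the blessing path is taken from the logs above, and the execution number (4 here) may differ in your run:

import os

# The blessing artifact directory contains an empty file named either
# BLESSED or NOT_BLESSED (execution number taken from this run's logs).
blessing_dir = 'pipelines/penguin-tfma/Evaluator/blessing/4'
print(os.listdir(blessing_dir))       # e.g. ['BLESSED']

# If the model was blessed, Pusher exports a timestamped version here.
print(os.listdir(SERVING_MODEL_DIR))  # e.g. ['1622625288']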

You can run the pipeline again, possibly with different evaluation configurations. Even if you run the pipeline with exactly the same configuration and dataset, the trained model may vary slightly due to the inherent randomness of model training, which can lead to a NOT_BLESSED model.
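
For reference, the serialized eval_config printed in the logs above corresponds to a TFMA EvalConfig along the lines of the following sketch (reconstructed from the logged JSON); adjusting the thresholds is an easy way to experiment with other configurations:

import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='species')],
    slicing_specs=[
        # An empty SlicingSpec denotes the overall (whole-dataset) slice.
        tfma.SlicingSpec(),
        tfma.SlicingSpec(feature_keys=['species']),
    ],
    metrics_specs=[
        tfma.MetricsSpec(per_slice_thresholds={
            'sparse_categorical_accuracy':
                tfma.PerSliceMetricThresholds(thresholds=[
                    tfma.PerSliceMetricThreshold(
                        slicing_specs=[tfma.SlicingSpec()],
                        threshold=tfma.MetricThreshold(
                            # Absolute quality bar for blessing.
                            value_threshold=tfma.GenericValueThreshold(
                                lower_bound={'value': 0.6}),
                            # The new model must not be worse than the baseline.
                            change_threshold=tfma.GenericChangeThreshold(
                                direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                                absolute={'value': -1e-10}),
                        ))
                ])
        })
    ])

For example, raising lower_bound to 0.9 would make the Evaluator mark any model with accuracy below 90% as NOT_BLESSED.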

Examine outputs of the pipeline

You can use TFMA to investigate and visualize the evaluation result stored in the ModelEvaluation artifact.

Get analysis result from output artifacts

You can use MLMD APIs to locate these outputs programmatically. First, we will define some utility functions to look up the output artifacts that were just produced.

from ml_metadata.proto import metadata_store_pb2
# Non-public APIs, just for showcase.
from tfx.orchestration.portable.mlmd import execution_lib

# TODO(b/171447278): Move these functions into the TFX library.

def get_latest_artifacts(metadata, pipeline_name, component_id):
  """Output artifacts of the latest run of the component."""
  context = metadata.store.get_context_by_type_and_name(
      'node', f'{pipeline_name}.{component_id}')
  executions = metadata.store.get_executions_by_context(context.id)
  latest_execution = max(executions,
                         key=lambda e: e.last_update_time_since_epoch)
  return execution_lib.get_artifacts_dict(metadata, latest_execution.id,
                                          metadata_store_pb2.Event.OUTPUT)

We can find the latest execution of the Evaluator component and get its output artifacts.

# Non-public APIs, just for showcase.
from tfx.orchestration.metadata import Metadata
from tfx.types import standard_component_specs

metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
    METADATA_PATH)

with Metadata(metadata_connection_config) as metadata_handler:
  # Find output artifacts from MLMD.
  evaluator_output = get_latest_artifacts(metadata_handler, PIPELINE_NAME,
                                          'Evaluator')
  eval_artifact = evaluator_output[standard_component_specs.EVALUATION_KEY][0]
INFO:absl:MetadataStore with DB connection initialized

Evaluator always returns an evaluation artifact, and we can visualize it using the TensorFlow Model Analysis library. For example, the following code renders the accuracy metrics for each penguin species.

import tensorflow_model_analysis as tfma

eval_result = tfma.load_eval_result(eval_artifact.uri)
tfma.view.render_slicing_metrics(eval_result, slicing_column='species')
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'species:0', 'metrics…
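
If the interactive viewer is not available in your environment, you can also read the metrics programmatically from the EvalResult object returned by tfma.load_eval_result; a minimal sketch:

# Each entry pairs a slice key (a tuple of (feature, value) pairs; empty for
# the overall slice) with a nested dictionary of metric values.
for slice_key, metrics in eval_result.slicing_metrics:
  print(slice_key, metrics)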

If you choose 'sparse_categorical_accuracy' in the Show drop-down list, you can see the accuracy values per species. You may want to add more slices and check whether your model performs well for all distributions and whether there is any possible bias.
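
To inspect the overall (unsliced) metrics instead, you can simply omit the slicing column; for example:

# Without slicing_column, the viewer shows the 'Overall' slice defined by the
# empty SlicingSpec in the eval_config.
tfma.view.render_slicing_metrics(eval_result)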

Next steps

Learn more about model analysis in the TensorFlow Model Analysis library tutorial.

You can find more resources at https://www.tensorflow.org/tfx/tutorials

Please see Understanding TFX Pipelines to learn more about various concepts in TFX.