
Create a TFX pipeline using templates

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Introduction

This document provides instructions for creating a TensorFlow Extended (TFX) pipeline using the templates provided with the TFX Python package. Many of the instructions are Linux shell commands that run on an AI Platform Notebooks instance; corresponding Jupyter Notebook code cells, which invoke those commands using !, are provided.

You will build a pipeline using the Taxi Trips dataset released by the City of Chicago. We strongly encourage you to try building your own pipeline with your own dataset, using this pipeline as a baseline.

Step 1. Set up your environment.

AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipelines cluster to run the newly built pipeline.

NOTE: To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.

NOTE: There might be some errors during package installation, for example:

"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors for now.

Install tfx, kfp, and skaffold, and add installation path to the PATH environment variable.

# Install tfx and kfp Python packages.
!pip install -q --user --upgrade tfx==0.21.2
!pip install -q --user --upgrade kfp==0.2.5
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/

# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin

Let's check the version of TFX.

!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"

In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines.

Let's set some environment variables to use Kubeflow Pipelines.

First, get your GCP project ID.

# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GCP_PROJECT_ID=shell_output[0]
print("GCP project ID:" + GCP_PROJECT_ID)

We also need to access your KFP cluster. You can access it in your Google Cloud Console under the "AI Platform > Pipelines" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or from the URL of the Getting Started page where you launched this notebook. Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint. ENDPOINT should contain only the hostname part of the URL. For example, if the URL of the KFP dashboard is https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start, the ENDPOINT value becomes 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com.

NOTE: You MUST set your ENDPOINT value below.

# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
    from absl import logging
    logging.error('Set your ENDPOINT in this cell.')

Set the image name as tfx-pipeline under the current GCP project.

# Docker image name for the pipeline image 
CUSTOM_TFX_IMAGE='gcr.io/' + GCP_PROJECT_ID + '/tfx-pipeline'

And, it's done. We are ready to create a pipeline.

Step 2. Copy the predefined template to your project directory.

In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.

You may give your pipeline a different name by changing the PIPELINE_NAME below. This will also become the name of the project directory where your files will be placed.

PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"AIHub",PIPELINE_NAME)

TFX includes the taxi template with the TFX Python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template can be used as a starting point.

The tfx template copy CLI command copies predefined template files into your project directory.

!tfx template copy \
  --pipeline_name={PIPELINE_NAME} \
  --destination_path={PROJECT_DIR} \
  --model=taxi

Change the working directory context in this notebook to the project directory.

%cd {PROJECT_DIR}

NOTE: Don't forget to change directory in File Browser on the left by clicking into the project directory once it is created.

Step 3. Browse your copied source files

The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The taxi template uses the same Chicago Taxi dataset and ML model as the Airflow Tutorial.

Here is a brief introduction to each of the Python files.

  • configs.py: defines common constants for pipeline runners.
  • pipeline.py: defines TFX components and a pipeline.
  • beam_dag_runner.py / kubeflow_dag_runner.py: define runners for each orchestration engine. Since you are using Kubeflow, you will not use the Beam orchestrator.
  • features.py / features_test.py: define and test the features for the model (see the sketch after this list).
  • hparams.py: defines hyperparameters of the model.
  • preprocessing.py / preprocessing_test.py: define and test preprocessing jobs using tf.Transform.
  • model.py / model_test.py: define and test a DNN model using the TF Estimator API.
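For orientation, the feature definitions in features.py of the taxi template look roughly like the sketch below. This is an illustrative outline only; the actual constants and helper functions are in the copied file, so treat that file as the source of truth.

# Illustrative sketch only -- open features.py in your project for the real definitions.

# Feature names grouped by how they will be preprocessed (example values).
DENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']
VOCAB_FEATURE_KEYS = ['payment_type', 'company']
BUCKET_FEATURE_KEYS = ['pickup_latitude', 'pickup_longitude']
CATEGORICAL_FEATURE_KEYS = ['trip_start_hour', 'trip_start_day']

# The column the model learns to predict.
LABEL_KEY = 'tips'


def transformed_name(key):
  # Name used for a feature after it has been transformed by tf.Transform.
  return key + '_xf'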

List the files in the project directory:

!ls

You might notice that there are some files with _test.py in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.
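If you add your own test, it can follow the same pattern as the generated *_test.py files, which are based on tf.test.TestCase. The file name and assertions below are hypothetical examples rather than part of the template.

# my_features_test.py -- a minimal sketch of an additional unit test (hypothetical).
import tensorflow as tf

import features  # the template's features module; adjust the import to your layout


class MyFeaturesTest(tf.test.TestCase):

  def testTransformedName(self):
    # Expects the transformed-feature naming helper to append a suffix.
    self.assertEqual('fare_xf', features.transformed_name('fare'))


if __name__ == '__main__':
  tf.test.main()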

You can run unit tests simply by supplying test files to the python binary. For example:

!python3 features_test.py

Step 4. Run your first TFX pipeline

Components in the TFX pipeline will generate outputs for each run as ML Metadata Artifacts, and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. It has a name starting with hostedkfp-default-.

To run this pipeline you MUST edit configs.py to set your GCS bucket name. You can list your current GCS buckets in this GCP project using the gsutil command.

# You can see your buckets using `gsutil`. The following command shows bucket names without the gs:// prefix and trailing slash.
!gsutil ls | cut -d / -f 3

Double-click to open configs.py. Set GCS_BUCKET_NAME to the name of your GCS bucket without the gs:// prefix or trailing /. For example, if gsutil ls displayed gs://my-bucket, set it to my-bucket.

GCS_BUCKET_NAME = 'my-bucket'

NOTE: You MUST set your GCS bucket name in the configs.py file before proceeding.

Let's create a TFX pipeline using the tfx pipeline create command.

!tfx pipeline create  \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}

While creating a pipeline, a Dockerfile and a build.yaml file will be generated to build a Docker image. Don't forget to add these files to your source control system (for example, git) along with the other source files.

A pipeline definition file for Argo will be generated, too. The name of this file is ${PIPELINE_NAME}.tar.gz; for example, it will be my_pipeline.tar.gz if the name of your pipeline is my_pipeline. It is recommended NOT to include this pipeline definition file in source control, because it is generated from the other Python files and is updated whenever you update the pipeline. For your convenience, it is already listed in the automatically generated .gitignore file.

NOTE: kubeflow will be automatically selected as an orchestration engine if airflow is not installed and --engine is not specified.

Now start an execution run with the newly created pipeline using the tfx run create command.

!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}

Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run.

We recommend visiting the KFP Dashboard either way. You can access it from the Cloud AI Platform Pipelines menu in the Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline and access a wealth of information about it. For example, you can find your runs under the Experiments menu, and when you open an execution run there you can find all of the artifacts the pipeline produced under the Artifacts menu.

One of the major sources of failure is permission-related problems. Make sure your KFP cluster has permission to access Google Cloud APIs. This can be configured when you create a KFP cluster in GCP; see the Troubleshooting document in GCP for details.

Step 5. Add components for data validation.

In this step, you will add components for data validation, including StatisticsGen, SchemaGen, and ExampleValidator. If you are interested in data validation, please see Get started with TensorFlow Data Validation.

Double-click to open pipeline.py. Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline. (Tip: search for comments containing TODO(step 5):). Make sure to save pipeline.py after you edit it.
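For orientation, the relevant section of the generated pipeline.py looks roughly like the sketch below, written against the TFX 0.21 component APIs. Variable names such as example_gen and components come from the template's usual layout and may differ slightly in your copy; follow the TODO comments in the file itself.

# A hedged sketch of the data-validation components in pipeline.py.

# Computes statistics over the ingested examples.
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
components.append(statistics_gen)  # uncommented in this step

# Infers a schema from the statistics.
schema_gen = SchemaGen(
    statistics=statistics_gen.outputs['statistics'],
    infer_feature_shape=False)
components.append(schema_gen)  # uncommented in this step

# Checks new examples for anomalies against the inferred schema.
example_validator = ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_gen.outputs['schema'])
components.append(example_validator)  # uncommented in this step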

You now need to update the existing pipeline with the modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to start a new execution run of the updated pipeline.

# Update the pipeline
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}

Check pipeline outputs

Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.

Step 6. Add components for training.

In this step, you will add components for training and model validation including Transform, Trainer, Evaluator, and Pusher.

Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, Evaluator and Pusher to the pipeline. (Tip: search for TODO(step 6):)
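The uncommented lines follow roughly the pattern below, sketched against the TFX 0.21 component APIs. Variable names (module_file, train_args, push_destination, and so on) are placeholders for whatever the template defines, so rely on the generated file rather than this sketch.

# A hedged sketch of the training-related components in pipeline.py.

# Performs feature engineering using the code in preprocessing.py.
transform = Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    module_file=module_file)
components.append(transform)

# Trains the model defined in model.py on the transformed examples.
trainer = Trainer(
    module_file=module_file,
    transformed_examples=transform.outputs['transformed_examples'],
    schema=schema_gen.outputs['schema'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=train_args,
    eval_args=eval_args)
components.append(trainer)

# Evaluates the trained model; the Pusher exports it only if it is "blessed".
evaluator = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'])
components.append(evaluator)

pusher = Pusher(
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    push_destination=push_destination)
components.append(pusher)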

As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create.

!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}

When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!

Step 7. (Optional) Try BigQueryExampleGen

BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.

Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the import statement and the query argument of the create_pipeline function.
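The change amounts to swapping the example source, roughly as sketched below. In TFX 0.21, BigQueryExampleGen can be imported from tfx.components; the surrounding code in your copy of pipeline.py may differ.

# A hedged sketch of switching the example source in pipeline.py.
from tfx.components import BigQueryExampleGen  # uncomment the corresponding import

# Comment out the CSV-based example source:
# example_gen = CsvExampleGen(input=external_input(data_path))

# And read training examples directly from BigQuery instead:
example_gen = BigQueryExampleGen(query=query)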

We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline.

Double-click to open configs.py. Uncomment the definition of GCP_PROJECT_ID, GCP_REGION, BIG_QUERY_BEAM_PIPELINE_ARGS, and BIG_QUERY_QUERY, and replace the project ID and region values in this file with the correct values for your GCP project; the result will look roughly like the sketch below.
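The constant names in this sketch come from the instructions above; the concrete Beam flags, region, and query shown here are illustrative placeholders rather than the template's actual defaults.

# Illustrative values only -- use your own project, region, bucket, and query.
GCP_PROJECT_ID = 'my-gcp-project'
GCP_REGION = 'us-central1'

# Beam options telling BigQueryExampleGen which project and temp location to use.
BIG_QUERY_BEAM_PIPELINE_ARGS = [
    '--project=' + GCP_PROJECT_ID,
    '--temp_location=gs://' + GCS_BUCKET_NAME + '/tmp',
]

# The query that selects training examples from BigQuery.
BIG_QUERY_QUERY = """
    SELECT ... FROM ...  -- replace with a query over your own data
"""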

Note: You MUST set your GCP project ID and region in the configs.py file before proceeding.

Double-click to open kubeflow_dag_runner.py. Uncomment two arguments, query and beam_pipeline_args, for the create_pipeline function.
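After uncommenting, the call in kubeflow_dag_runner.py passes the BigQuery settings from configs.py into the pipeline, roughly as below (a hedged sketch; the other arguments to create_pipeline are omitted and stay as generated by the template).

# A hedged sketch of the two uncommented arguments inside kubeflow_dag_runner.py.
create_pipeline(
    # ... other arguments stay as generated by the template ...
    query=configs.BIG_QUERY_QUERY,                            # uncommented in this step
    beam_pipeline_args=configs.BIG_QUERY_BEAM_PIPELINE_ARGS,  # uncommented in this step
)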

Now the pipeline is ready to use BigQuery as an example source. Update the pipeline and create a new execution run as we did in steps 5 and 6.

!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}

Step 8. (Optional) Try Dataflow with KFP

Several TFX components use Apache Beam to implement data-parallel pipelines, which means that you can distribute data processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.

Double-click to open configs.py. Uncomment the definition of GCP_PROJECT_ID, GCP_REGION, and BEAM_PIPELINE_ARGS.
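After uncommenting, BEAM_PIPELINE_ARGS in configs.py carries standard Beam-on-Dataflow options along the lines of the sketch below. The exact flags and values in the generated file may differ, so adjust them to your project rather than copying this sketch.

# Illustrative Beam options for running data processing on Dataflow.
BEAM_PIPELINE_ARGS = [
    '--runner=DataflowRunner',   # hand data-parallel work to Dataflow
    '--project=' + GCP_PROJECT_ID,
    '--region=' + GCP_REGION,
    '--temp_location=gs://' + GCS_BUCKET_NAME + '/tmp',
]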

Double-click to open kubeflow_dag_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out the beam_pipeline_args that you added in Step 7.)

Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.

!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}

You can find your Dataflow jobs on the Dataflow page in the Cloud Console.

Step 9. (Optional) Try Cloud AI Platform Training and Prediction with KFP

TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push it to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher components to use Cloud AI Platform services.

Before editing files, you might first have to enable the AI Platform Training & Prediction API.

Double-click to open configs.py. Uncomment the definition of GCP_PROJECT_ID, GCP_REGION, GCP_AI_PLATFORM_TRAINING_ARGS, and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom-built container image to train a model in Cloud AI Platform Training, so set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.
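For orientation, the uncommented constants end up looking roughly like the sketch below. The masterConfig.imageUri setting follows the instruction above; the remaining key names follow the AI Platform Training and Prediction request formats and are shown as an illustration, so double-check them against the comments in your configs.py.

# Illustrative sketch -- verify the key names against the generated configs.py.
GCP_AI_PLATFORM_TRAINING_ARGS = {
    'project': GCP_PROJECT_ID,
    'region': GCP_REGION,
    # Train with the same custom image that the pipeline itself uses.
    'masterConfig': {
        'imageUri': 'gcr.io/' + GCP_PROJECT_ID + '/tfx-pipeline',
    },
}

GCP_AI_PLATFORM_SERVING_ARGS = {
    'model_name': PIPELINE_NAME,
    'project_id': GCP_PROJECT_ID,
    'regions': [GCP_REGION],
}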

Double-click to open kubeflow_dag_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.

Update the pipeline and create an execution run as we did in steps 5 and 6.

!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}

You can find your training jobs in Cloud AI Platform Jobs. If your pipeline completed successfully, you can find your model in Cloud AI Platform Models.

Step 10. Ingest YOUR data to the pipeline

We made a pipeline for a model using the Chicago Taxi dataset. Now it's time to put your data into the pipeline.

Your data can be stored anywhere your pipeline can access, including GCS or BigQuery. You will need to modify the pipeline definition to access your data.

  1. If your data is stored in files, modify DATA_PATH in kubeflow_dag_runner.py or beam_dag_runner.py and set it to the location of your files. If your data is stored in BigQuery, modify BIG_QUERY_QUERY in configs.py to query your data. (See the sketch after this list.)
  2. Add features in features.py.
  3. Modify preprocessing.py to transform input data for training.
  4. Modify model.py and hparams.py to describe your ML model.
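As a hedged illustration of items 1 and 2, the edits might look like the following; the path, query, and feature names are placeholders for your own data.

# In kubeflow_dag_runner.py (or beam_dag_runner.py): point DATA_PATH at your files.
DATA_PATH = 'gs://my-bucket/my-training-data'   # placeholder location

# In configs.py: if your data lives in BigQuery, query it instead.
BIG_QUERY_QUERY = """
    SELECT ... FROM ...  -- replace with a query over your own table
"""

# In features.py: declare the columns your model will use (illustrative names).
DENSE_FLOAT_FEATURE_KEYS = ['my_numeric_feature']
VOCAB_FEATURE_KEYS = ['my_categorical_feature']
LABEL_KEY = 'my_label'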

Please see the Trainer component guide for more information.

Cleaning up

To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.

Alternatively, you can clean up individual resources by visiting the console for each service: