Premade Estimators


This tutorial shows you how to solve the Iris classification problem in TensorFlow using Estimators. An Estimator is TensorFlow's high-level representation of a complete model, and it has been designed for easy scaling and asynchronous training. For more details see Estimators.

Note that in TensorFlow 2.0, the Keras API can accomplish many of these same tasks, and is generally considered an easier API to learn. If you are starting fresh, we recommend you start with Keras. For more information about the available high-level APIs in TensorFlow 2.0, see Standardizing on Keras.

First things first

To get started, first import TensorFlow and the other libraries you will need.

import numpy as np
import pandas as pd
import tensorflow as tf

The data set

The sample program in this document builds and tests a model that classifies Iris flowers into three different species based on the size of their sepals and petals.

You will train a model using the Iris data set. The Iris data set contains four features and one label (the flower's species). The four features identify the following botanical characteristics of individual Iris flowers:

  • sepal length
  • sepal width
  • petal length
  • petal width

Based on this information, you can define a few helpful constants for parsing the data:

CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']

Next, download and parse the Iris data set using Keras and Pandas. Note that you keep distinct datasets for training and testing.

train_path = tf.keras.utils.get_file(
    "iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
    "iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")

train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)

You can inspect your data to see that you have four float feature columns and one int32 label.

train.head()

For each of the datasets, split out the labels, which the model will be trained to predict.

train_y = train.pop('Species')
test_y = test.pop('Species')

# The label column has now been removed from the features.
train.head()

Overview of programming with Estimators

Now that you have the data set up, you can define a model using a TensorFlow Estimator. An Estimator is any class derived from tf.estimator.Estimator. TensorFlow provides a collection of pre-made Estimators (for example, tf.estimator.LinearRegressor) that implement common ML algorithms. Beyond those, you may write your own custom Estimators. We recommend using pre-made Estimators when just getting started.
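
For reference, a custom Estimator is just tf.estimator.Estimator wired to a model_fn that you write yourself. The following is a minimal sketch, not part of the original example, assuming the my_feature_columns list built later in this tutorial; it uses the tf.compat.v1 layers because a model_fn runs in graph mode:

def my_model_fn(features, labels, mode):
    # Turn the features dict into a dense input tensor via the feature columns.
    net = tf.compat.v1.feature_column.input_layer(features, my_feature_columns)
    logits = tf.compat.v1.layers.dense(net, units=3, activation=None)
    predicted_classes = tf.argmax(logits, 1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {'class_ids': predicted_classes[:, tf.newaxis],
                       'probabilities': tf.nn.softmax(logits)}
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)
    loss = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)
    optimizer = tf.compat.v1.train.AdagradOptimizer(learning_rate=0.1)
    train_op = optimizer.minimize(
        loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

custom_classifier = tf.estimator.Estimator(model_fn=my_model_fn)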

To write a TensorFlow program based on pre-made Estimators, you must perform the following tasks:

  • Create one or more input functions.
  • Define the model's feature columns.
  • Instantiate an Estimator, specifying the feature columns and various hyperparameters.
  • Call one or more methods on the Estimator object, passing the appropriate input function as the source of the data.

Let's see how those tasks are implemented for Iris classification.

Create input functions

You must create input functions to supply data for training, evaluating, and prediction.

An input function is a function that returns a tf.data.Dataset object which outputs the following two-element tuple:

  • features - A Python dictionary in which:
    • Each key is the name of a feature.
    • Each value is an array containing all of that feature's values.
  • label - An array containing the values of the label for every example.

Just to demonstrate the format of the input function, here's a simple implementation:

def input_evaluation_set():
    features = {'SepalLength': np.array([6.4, 5.0]),
                'SepalWidth':  np.array([2.8, 2.3]),
                'PetalLength': np.array([5.6, 3.3]),
                'PetalWidth':  np.array([2.2, 1.0])}
    labels = np.array([2, 1])
    return features, labels

Your input function may generate the features dictionary and label array any way you like. However, we recommend using TensorFlow's Dataset API, which can parse all sorts of data.

The Dataset API can handle a lot of common cases for you. For example, using the Dataset API, you can easily read in records from a large collection of files in parallel and join them into a single stream.
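
For example, here is a hedged sketch of that pattern (the shard file names are hypothetical and not needed for this tutorial):

# Hypothetical example: read many CSV shards in parallel into one stream.
file_pattern = "/path/to/shards/iris-*.csv"  # placeholder pattern
dataset = tf.data.Dataset.list_files(file_pattern)
dataset = dataset.interleave(
    lambda path: tf.data.TextLineDataset(path).skip(1),  # skip each header row
    cycle_length=4,  # read up to 4 files at a time
    num_parallel_calls=tf.data.experimental.AUTOTUNE)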

To keep things simple in this example you are going to load the data with pandas, and build an input pipeline from this in-memory data:

def input_fn(features, labels, training=True, batch_size=256):
    """An input function for training or evaluating"""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))

    # Shuffle and repeat if you are in training mode.
    if training:
        dataset = dataset.shuffle(1000).repeat()
    
    return dataset.batch(batch_size)
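
As a quick sanity check (not in the original notebook), you can pull one batch from this input function and confirm it yields the (features dictionary, labels) pair described above:

# Peek at one batch; each feature name maps to a tensor of batch_size values.
features_batch, labels_batch = next(iter(input_fn(train, train_y, training=False)))
print(features_batch['SepalLength'][:5])
print(labels_batch[:5])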

Define the feature columns

A feature column is an object describing how the model should use raw input data from the features dictionary. When you build an Estimator model, you pass it a list of feature columns that describes each of the features you want the model to use. The tf.feature_column module provides many options for representing data to the model.

For Iris, the four raw features are numeric values, so we'll build a list of feature columns telling the Estimator model to represent each of the four features as 32-bit floating-point values. The code to create the feature columns is:

# Feature columns describe how to use the input.
my_feature_columns = []
for key in train.keys():
    my_feature_columns.append(tf.feature_column.numeric_column(key=key))

Feature columns can be far more sophisticated than those we're showing here. You can read more about Feature Columns in this guide.
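
As one illustration of a more sophisticated column, here is a small sketch, not used in this tutorial, that buckets the sepal length into ranges (the boundaries are illustrative):

# Hypothetical example: bucketize sepal length; the result is one-hot encoded.
sepal_length = tf.feature_column.numeric_column('SepalLength')
sepal_length_buckets = tf.feature_column.bucketized_column(
    sepal_length, boundaries=[5.0, 5.5, 6.0, 6.5])  # yields 5 buckets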

Now that you have the description of how you want the model to represent the raw features, you can build the estimator.

Instantiate an estimator

The Iris problem is a classic classification problem. Fortunately, TensorFlow provides several pre-made classifier Estimators, including:

  • tf.estimator.DNNClassifier for deep models that perform multi-class classification.
  • tf.estimator.DNNLinearCombinedClassifier for wide & deep models.
  • tf.estimator.LinearClassifier for classifiers based on linear models.

For the Iris problem, tf.estimator.DNNClassifier seems like the best choice. Here's how you instantiate this Estimator:

# Build a DNN with two hidden layers of 30 and 10 nodes, respectively.
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[30, 10],
    # The model must choose between 3 classes.
    n_classes=3)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpbhg2uvbr
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpbhg2uvbr', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}

Train, Evaluate, and Predict

Now that you have an Estimator object, you can call methods to do the following:

  • Train the model.
  • Evaluate the trained model.
  • Use the trained model to make predictions.

Train the model

Train the model by calling the Estimator's train method as follows:

# Train the Model.
classifier.train(
    input_fn=lambda: input_fn(train, train_y, training=True),
    steps=5000)
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:Layer dnn is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because its dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/adagrad.py:83: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpbhg2uvbr/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 1.1140382, step = 0
INFO:tensorflow:global_step/sec: 312.415
INFO:tensorflow:loss = 0.8781501, step = 100 (0.321 sec)
...
INFO:tensorflow:global_step/sec: 366.474
INFO:tensorflow:loss = 0.37167495, step = 4900 (0.273 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5000...
INFO:tensorflow:Saving checkpoints for 5000 into /tmp/tmpbhg2uvbr/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5000...
INFO:tensorflow:Loss for final step: 0.36297452.

<tensorflow_estimator.python.estimator.canned.dnn.DNNClassifierV2 at 0x7fc9983ed470>

Note that you wrap your input_fn call in a lambda to capture the arguments while providing an input function that takes no arguments, as the Estimator expects. The steps argument tells the method to stop training after the given number of training steps.
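
Equivalently (an alternative not shown in this tutorial), functools.partial can bind the arguments instead of a lambda:

from functools import partial

# partial(...) also yields a zero-argument callable, which Estimators accept.
classifier.train(
    input_fn=partial(input_fn, train, train_y, training=True),
    steps=5000)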

Evaluate the trained model

Now that the model has been trained, you can get some statistics on its performance. The following code block evaluates the accuracy of the trained model on the test data:

eval_result = classifier.evaluate(
    input_fn=lambda: input_fn(test, test_y, training=False))

print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:Layer dnn is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because its dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-09-10T01:40:47Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmpbhg2uvbr/model.ckpt-5000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.21153s
INFO:tensorflow:Finished evaluation at 2020-09-10-01:40:47
INFO:tensorflow:Saving dict for global step 5000: accuracy = 0.96666664, average_loss = 0.42594802, global_step = 5000, loss = 0.42594802
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 5000: /tmp/tmpbhg2uvbr/model.ckpt-5000

Test set accuracy: 0.967


Unlike the call to the train method, you did not pass the steps argument to evaluate. The input_fn for evaluation only yields a single epoch of data, so evaluation stops after one pass over the test set.

The eval_result dictionary also contains the average_loss (mean loss per sample), the loss (mean loss per mini-batch) and the value of the estimator's global_step (the number of training iterations it underwent).
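
To see everything evaluate returned, you can print the whole dictionary; the values will match the log output above:

# eval_result is a plain dict mapping metric names to values.
for name, value in eval_result.items():
    print('{}: {}'.format(name, value))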

Making predictions (inferring) from the trained model

You now have a trained model that produces good evaluation results. You can use the trained model to predict the species of an Iris flower based on some unlabeled measurements. As with training and evaluation, you make predictions using a single function call:

# Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
    'SepalLength': [5.1, 5.9, 6.9],
    'SepalWidth': [3.3, 3.0, 3.1],
    'PetalLength': [1.7, 4.2, 5.4],
    'PetalWidth': [0.5, 1.5, 2.1],
}

def input_fn(features, batch_size=256):
    """An input function for prediction."""
    # Convert the inputs to a Dataset without labels.
    return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)

predictions = classifier.predict(
    input_fn=lambda: input_fn(predict_x))

The predict method returns a Python iterable, yielding a dictionary of prediction results for each example. The following code prints a few predictions and their probabilities:

for pred_dict, expec in zip(predictions, expected):
    class_id = pred_dict['class_ids'][0]
    probability = pred_dict['probabilities'][class_id]

    print('Prediction is "{}" ({:.1f}%), expected "{}"'.format(
        SPECIES[class_id], 100 * probability, expec))
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmpbhg2uvbr/model.ckpt-5000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
Prediction is "Setosa" (91.3%), expected "Setosa"
Prediction is "Versicolor" (52.0%), expected "Versicolor"
Prediction is "Virginica" (63.5%), expected "Virginica"