Premade Estimators


This tutorial shows you how to solve the Iris classification problem in TensorFlow using Estimators. An Estimator is TensorFlow's high-level representation of a complete model, and it has been designed for easy scaling and asynchronous training. For more details, see Estimators.

Note that in TensorFlow 2.0, the Keras API can accomplish many of these same tasks, and is considered an easier API to learn. If you are just getting started, we recommend you start with Keras. For more information about the available high-level APIs in TensorFlow 2.0, see Standardizing on Keras.

First things first

To get started, first import TensorFlow and the other libraries you will need.

import tensorflow as tf

import pandas as pd

The data set

The sample program in this document builds and tests a model that classifies Iris flowers into three different species based on the size of their sepals and petals.

You will train a model using the Iris data set. The data set contains four features and one label. The four features identify the following botanical characteristics of individual Iris flowers:

  • sepal length
  • sepal width
  • petal length
  • petal width

Based on this information, you can define a few helpful constants for parsing the data:

CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']

Next, download and parse the Iris data set using Keras and Pandas. Note that you keep separate data sets for training and testing.

train_path = tf.keras.utils.get_file(
    "iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
    "iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")

train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv
2194/2194 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv
573/573 [==============================] - 0s 0us/step

Inspecting the data, you can see that there are four float feature columns and one int32 label.

train.head()
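
If you want to confirm the column types programmatically, a quick check like the one below should do it (a minimal sketch; note that pandas may report a wider integer type for the label column than the int32 the model ultimately uses):

# Inspect the parsed column types and the shape of the training frame.
print(train.dtypes)
print(train.shape)  # (number of examples, number of columns)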

For each of the data sets, split out the labels, which the model will be trained to predict.

train_y = train.pop('Species')
test_y = test.pop('Species')

# The label column has now been removed from the features
train.head()

Overview of programming with Estimators

Now that you have the data set up, you can define a model using a TensorFlow Estimator. An Estimator is any class derived from tf.estimator.Estimator. TensorFlow provides a collection of tf.estimator classes (for example, LinearRegressor) to implement common machine learning algorithms. Beyond those, you may write your own custom Estimators. We recommend using pre-made Estimators when just getting started.

To write a TensorFlow program based on pre-made Estimators, you must perform the following tasks:

  • Create one or more input functions.
  • Define the model's feature columns.
  • Instantiate an Estimator, specifying the feature columns and various hyperparameters.
  • Call one or more methods on the Estimator object, passing the appropriate input function as the source of data.

Let's see how those tasks are implemented for Iris classification.

Create input functions

You must create input functions to supply data for training, evaluating, and prediction.

An input function is a function that returns a tf.data.Dataset object which outputs the following two-element tuple:

  • features: A Python dictionary in which:
    • Each key is the name of a feature.
    • Each value is an array containing all of that feature's values.
  • label: An array containing the values of the label for every example.

Just to demonstrate the format of the input function, here's a simple implementation:

import numpy as np

def input_evaluation_set():
    features = {'SepalLength': np.array([6.4, 5.0]),
                'SepalWidth':  np.array([2.8, 2.3]),
                'PetalLength': np.array([5.6, 3.3]),
                'PetalWidth':  np.array([2.2, 1.0])}
    labels = np.array([2, 1])
    return features, labels

Your input function may generate the features dictionary and label list any way you like. However, we recommend using TensorFlow's Dataset API, which can parse all sorts of data.

The Dataset API can handle a lot of common cases for you. For example, using the Dataset API, you can easily read in records from a large collection of files in parallel and join them into a single stream.
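
As an illustration only (none of this is needed for the in-memory Iris data), reading several CSV shards in parallel with the Dataset API might look roughly like this; the file pattern iris_shard_*.csv is a hypothetical placeholder:

# Hypothetical sketch: read many CSV shards concurrently and merge them
# into a single stream of text lines, skipping each file's header row.
files = tf.data.Dataset.list_files("iris_shard_*.csv")
lines = files.interleave(
    lambda path: tf.data.TextLineDataset(path).skip(1),
    cycle_length=4,  # read up to 4 files at a time
    num_parallel_calls=tf.data.AUTOTUNE)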

To keep things simple in this example, you are going to load the data with pandas and build your input pipeline from this in-memory data:

def input_fn(features, labels, training=True, batch_size=256):
    """An input function for training or evaluating"""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))

    # Shuffle and repeat if you are in training mode.
    if training:
        dataset = dataset.shuffle(1000).repeat()

    return dataset.batch(batch_size)
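
To sanity-check the pipeline, you can pull a single batch out of the input function, for example (a small sketch using the train and train_y objects loaded earlier):

# Peek at one batch produced by the input function in training mode.
features_batch, labels_batch = next(iter(input_fn(train, train_y).take(1)))
print(labels_batch.shape)                 # shape (256,): one label per example
print(features_batch['SepalLength'][:5])  # first few values of one feature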

Define the feature columns

A feature column is an object describing how the model should use raw input data from the features dictionary. When you build an Estimator model, you pass it a list of feature columns that describes each of the features you want the model to use. The tf.feature_column module provides many options for representing data to the model.

For Iris, the 4 raw features are numeric values, so we'll build a list of feature columns to tell the Estimator model to represent each of the four features as 32-bit floating-point values. Therefore, the code to create the feature columns is:

# Feature columns describe how to use the input.
my_feature_columns = []
for key in train.keys():
    my_feature_columns.append(tf.feature_column.numeric_column(key=key))

Feature columns can be far more sophisticated than those we're showing here. You can read more about feature columns in this guide.
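
For instance (purely as an illustration, since the Iris features need nothing beyond numeric columns), a bucketized column could discretize a numeric feature into coarse ranges:

# Hypothetical illustration: bucketize petal length into coarse ranges.
petal_length = tf.feature_column.numeric_column('PetalLength')
petal_length_buckets = tf.feature_column.bucketized_column(
    petal_length, boundaries=[2.0, 4.0, 6.0])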

Now that you have a description of how you want the model to represent the raw features, you can build the Estimator.

Instantiate an Estimator

The Iris problem is a classic classification problem. Fortunately, TensorFlow provides several pre-made classifier Estimators, including:

  • tf.estimator.DNNClassifier for deep models that perform multi-class classification.
  • tf.estimator.DNNLinearCombinedClassifier for wide & deep models.
  • tf.estimator.LinearClassifier for classifiers based on linear models.

For the Iris problem, tf.estimator.DNNClassifier seems like the best choice. Here is how you instantiate this Estimator:

# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    # Two hidden layers of 30 and 10 nodes respectively.
    hidden_units=[30, 10],
    # The model must choose between 3 classes.
    n_classes=3)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmpfs/tmp/tmp3udubuum
INFO:tensorflow:Using config: {'_model_dir': '/tmpfs/tmp/tmp3udubuum', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
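
The log above warns that a temporary folder is being used as the model directory. If you want checkpoints and summaries to persist across runs, you can pass model_dir when constructing the Estimator, for example (a sketch; the path './iris_model' is an arbitrary choice):

# Optional: use a persistent model directory instead of a temporary folder.
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[30, 10],
    n_classes=3,
    model_dir='./iris_model')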

Train, Evaluate, and Predict

Now that you have an Estimator object, you can call methods to do the following:

  • Train the model.
  • Evaluate the trained model.
  • Use the trained model to make predictions.

Train the model

Train the model by calling the Estimator's train method as follows:

# Train the Model.
classifier.train(
    input_fn=lambda: input_fn(train, train_y, training=True),
    steps=5000)
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/training_util.py:396: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/optimizers/optimizer_v2/adagrad.py:86: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
2022-06-03 20:12:11.513179: W tensorflow/core/common_runtime/forward_type_inference.cc:231] Type inference failed. This indicates an invalid graph that escaped type checking. Error message: INVALID_ARGUMENT: expected compatible input types, but input 1:
type_id: TFT_OPTIONAL
args {
  type_id: TFT_PRODUCT
  args {
    type_id: TFT_TENSOR
    args {
      type_id: TFT_INT64
    }
  }
}
 is neither a subtype nor a supertype of the combined inputs preceding it:
type_id: TFT_OPTIONAL
args {
  type_id: TFT_PRODUCT
  args {
    type_id: TFT_TENSOR
    args {
      type_id: TFT_INT32
    }
  }
}

    while inferring type of node 'dnn/zero_fraction/cond/output/_18'
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmpfs/tmp/tmp3udubuum/model.ckpt.
INFO:tensorflow:/tmpfs/tmp/tmp3udubuum/model.ckpt-0.meta
INFO:tensorflow:100
INFO:tensorflow:/tmpfs/tmp/tmp3udubuum/model.ckpt-0.index
INFO:tensorflow:100
INFO:tensorflow:/tmpfs/tmp/tmp3udubuum/model.ckpt-0.data-00000-of-00001
INFO:tensorflow:100
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 1.1871419, step = 0
INFO:tensorflow:global_step/sec: 262.501
INFO:tensorflow:loss = 1.0586027, step = 100 (0.382 sec)
INFO:tensorflow:global_step/sec: 309.528
INFO:tensorflow:loss = 1.0224475, step = 200 (0.323 sec)
INFO:tensorflow:global_step/sec: 309.122
INFO:tensorflow:loss = 1.0021696, step = 300 (0.323 sec)
INFO:tensorflow:global_step/sec: 305.483
INFO:tensorflow:loss = 0.98385453, step = 400 (0.328 sec)
INFO:tensorflow:global_step/sec: 313.636
INFO:tensorflow:loss = 0.9666541, step = 500 (0.319 sec)
INFO:tensorflow:global_step/sec: 302.849
INFO:tensorflow:loss = 0.95699286, step = 600 (0.330 sec)
INFO:tensorflow:global_step/sec: 307.921
INFO:tensorflow:loss = 0.94715554, step = 700 (0.325 sec)
INFO:tensorflow:global_step/sec: 304.817
INFO:tensorflow:loss = 0.9227888, step = 800 (0.328 sec)
INFO:tensorflow:global_step/sec: 306.826
INFO:tensorflow:loss = 0.91765195, step = 900 (0.326 sec)
INFO:tensorflow:global_step/sec: 300.114
INFO:tensorflow:loss = 0.8983342, step = 1000 (0.333 sec)
INFO:tensorflow:global_step/sec: 307.921
INFO:tensorflow:loss = 0.8948569, step = 1100 (0.325 sec)
INFO:tensorflow:global_step/sec: 307.657
INFO:tensorflow:loss = 0.8928715, step = 1200 (0.325 sec)
INFO:tensorflow:global_step/sec: 307.464
INFO:tensorflow:loss = 0.8852857, step = 1300 (0.325 sec)
INFO:tensorflow:global_step/sec: 312.041
INFO:tensorflow:loss = 0.86307967, step = 1400 (0.320 sec)
INFO:tensorflow:global_step/sec: 309.883
INFO:tensorflow:loss = 0.8579792, step = 1500 (0.323 sec)
INFO:tensorflow:global_step/sec: 308.636
INFO:tensorflow:loss = 0.8394183, step = 1600 (0.324 sec)
INFO:tensorflow:global_step/sec: 314.715
INFO:tensorflow:loss = 0.84471893, step = 1700 (0.317 sec)
INFO:tensorflow:global_step/sec: 303.488
INFO:tensorflow:loss = 0.8450931, step = 1800 (0.330 sec)
INFO:tensorflow:global_step/sec: 307.971
INFO:tensorflow:loss = 0.82962644, step = 1900 (0.325 sec)
INFO:tensorflow:global_step/sec: 307.6
INFO:tensorflow:loss = 0.83097994, step = 2000 (0.325 sec)
INFO:tensorflow:global_step/sec: 304.299
INFO:tensorflow:loss = 0.8203571, step = 2100 (0.329 sec)
INFO:tensorflow:global_step/sec: 304.628
INFO:tensorflow:loss = 0.8007113, step = 2200 (0.328 sec)
INFO:tensorflow:global_step/sec: 308.211
INFO:tensorflow:loss = 0.8005485, step = 2300 (0.324 sec)
INFO:tensorflow:global_step/sec: 311.531
INFO:tensorflow:loss = 0.7959402, step = 2400 (0.321 sec)
INFO:tensorflow:global_step/sec: 309.918
INFO:tensorflow:loss = 0.78558564, step = 2500 (0.323 sec)
INFO:tensorflow:global_step/sec: 304.164
INFO:tensorflow:loss = 0.77327716, step = 2600 (0.329 sec)
INFO:tensorflow:global_step/sec: 309.352
INFO:tensorflow:loss = 0.7660598, step = 2700 (0.323 sec)
INFO:tensorflow:global_step/sec: 306.028
INFO:tensorflow:loss = 0.7698417, step = 2800 (0.327 sec)
INFO:tensorflow:global_step/sec: 305.184
INFO:tensorflow:loss = 0.74683523, step = 2900 (0.328 sec)
INFO:tensorflow:global_step/sec: 312.721
INFO:tensorflow:loss = 0.75409395, step = 3000 (0.320 sec)
INFO:tensorflow:global_step/sec: 313.607
INFO:tensorflow:loss = 0.7558953, step = 3100 (0.319 sec)
INFO:tensorflow:global_step/sec: 305.01
INFO:tensorflow:loss = 0.75001884, step = 3200 (0.328 sec)
INFO:tensorflow:global_step/sec: 309.859
INFO:tensorflow:loss = 0.73717, step = 3300 (0.323 sec)
INFO:tensorflow:global_step/sec: 310.813
INFO:tensorflow:loss = 0.73428655, step = 3400 (0.322 sec)
INFO:tensorflow:global_step/sec: 309.215
INFO:tensorflow:loss = 0.7339251, step = 3500 (0.323 sec)
INFO:tensorflow:global_step/sec: 308.821
INFO:tensorflow:loss = 0.7190552, step = 3600 (0.324 sec)
INFO:tensorflow:global_step/sec: 309.818
INFO:tensorflow:loss = 0.73737824, step = 3700 (0.323 sec)
INFO:tensorflow:global_step/sec: 311.013
INFO:tensorflow:loss = 0.71433717, step = 3800 (0.322 sec)
INFO:tensorflow:global_step/sec: 306.944
INFO:tensorflow:loss = 0.7150771, step = 3900 (0.326 sec)
INFO:tensorflow:global_step/sec: 311.085
INFO:tensorflow:loss = 0.7169112, step = 4000 (0.321 sec)
INFO:tensorflow:global_step/sec: 311.862
INFO:tensorflow:loss = 0.7134766, step = 4100 (0.320 sec)
INFO:tensorflow:global_step/sec: 302.129
INFO:tensorflow:loss = 0.69705623, step = 4200 (0.331 sec)
INFO:tensorflow:global_step/sec: 304.174
INFO:tensorflow:loss = 0.6862689, step = 4300 (0.329 sec)
INFO:tensorflow:global_step/sec: 310.592
INFO:tensorflow:loss = 0.6912059, step = 4400 (0.322 sec)
INFO:tensorflow:global_step/sec: 308.894
INFO:tensorflow:loss = 0.6822791, step = 4500 (0.324 sec)
INFO:tensorflow:global_step/sec: 313.145
INFO:tensorflow:loss = 0.69432884, step = 4600 (0.319 sec)
INFO:tensorflow:global_step/sec: 311.468
INFO:tensorflow:loss = 0.6866505, step = 4700 (0.321 sec)
INFO:tensorflow:global_step/sec: 309.626
INFO:tensorflow:loss = 0.6691384, step = 4800 (0.323 sec)
INFO:tensorflow:global_step/sec: 309.185
INFO:tensorflow:loss = 0.67639184, step = 4900 (0.323 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5000...
INFO:tensorflow:Saving checkpoints for 5000 into /tmpfs/tmp/tmp3udubuum/model.ckpt.
INFO:tensorflow:/tmpfs/tmp/tmp3udubuum/model.ckpt-5000.index
INFO:tensorflow:0
INFO:tensorflow:/tmpfs/tmp/tmp3udubuum/model.ckpt-5000.data-00000-of-00001
INFO:tensorflow:0
INFO:tensorflow:/tmpfs/tmp/tmp3udubuum/model.ckpt-5000.meta
INFO:tensorflow:100
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5000...
INFO:tensorflow:Loss for final step: 0.6570629.
<tensorflow_estimator.python.estimator.canned.dnn.DNNClassifierV2 at 0x7fb22822d700>

Note that you wrap up your input_fn call in a lambda to capture the arguments while providing an input function that takes no arguments, as expected by the Estimator. The steps argument tells the method to stop training after a number of training steps.
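
If you prefer not to use a lambda, functools.partial binds the arguments in an equivalent way (shown only as a sketch):

from functools import partial

# Equivalent to the lambda above: bind the arguments up front so the
# resulting callable takes no arguments, as the Estimator expects.
train_input_fn = partial(input_fn, train, train_y, training=True)
# classifier.train(input_fn=train_input_fn, steps=5000) would behave the same.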

Evaluate the trained model

Now that the model has been trained, you can get some statistics on its performance. The following code block evaluates the accuracy of the trained model on the test data:

eval_result = classifier.evaluate(
    input_fn=lambda: input_fn(test, test_y, training=False))

print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2022-06-03T20:12:28
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tmp3udubuum/model.ckpt-5000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.48784s
INFO:tensorflow:Finished evaluation at 2022-06-03-20:12:29
INFO:tensorflow:Saving dict for global step 5000: accuracy = 0.43333334, average_loss = 0.7213013, global_step = 5000, loss = 0.7213013
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 5000: /tmpfs/tmp/tmp3udubuum/model.ckpt-5000

Test set accuracy: 0.433

Unlike the call to the train method, you did not pass the steps argument to evaluate. The input_fn for evaluation only yields a single epoch of data.

The eval_result dictionary also contains the average_loss (mean loss per sample), the loss (mean loss per mini-batch), and the value of the Estimator's global_step (the number of training iterations it underwent).
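
To see every metric at once, you can simply iterate over the dictionary (a trivial sketch):

# eval_result is a plain Python dict mapping metric names to values.
for name, value in eval_result.items():
    print(f'{name}: {value}')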

Making predictions (inferring) from the trained model

You now have a trained model that produces good evaluation results. You can now use the trained model to predict the species of an Iris flower based on some unlabeled measurements. As with training and evaluation, you make predictions using a single function call:

# Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
    'SepalLength': [5.1, 5.9, 6.9],
    'SepalWidth': [3.3, 3.0, 3.1],
    'PetalLength': [1.7, 4.2, 5.4],
    'PetalWidth': [0.5, 1.5, 2.1],
}

def input_fn(features, batch_size=256):
    """An input function for prediction."""
    # Convert the inputs to a Dataset without labels.
    return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)

predictions = classifier.predict(
    input_fn=lambda: input_fn(predict_x))

The predict method returns a Python iterable, yielding a dictionary of prediction results for each example. The following code prints a few predictions and their probabilities:

for pred_dict, expec in zip(predictions, expected):
    class_id = pred_dict['class_ids'][0]
    probability = pred_dict['probabilities'][class_id]

    print('Prediction is "{}" ({:.1f}%), expected "{}"'.format(
        SPECIES[class_id], 100 * probability, expec))
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tmp3udubuum/model.ckpt-5000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
Prediction is "Setosa" (74.5%), expected "Setosa"
Prediction is "Virginica" (41.4%), expected "Versicolor"
Prediction is "Versicolor" (48.5%), expected "Virginica"