Load NumPy data with tf.data

This tutorial provides an example of loading data from NumPy arrays into a tf.data.Dataset. The example loads the MNIST dataset from a .npz file; however, the source of the NumPy arrays is not important.

Setup


from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np
import tensorflow as tf

Load from a .npz file

DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'

# Download the archive (cached under ~/.keras/datasets) and read out the arrays.
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
  train_examples = data['x_train']
  train_labels = data['y_train']
  test_examples = data['x_test']
  test_labels = data['y_test']
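
As a quick sanity check, you can inspect the shapes and dtypes of what was loaded; the values in the comments below are those of the standard MNIST split:

# MNIST images are 28x28 uint8 arrays; labels are the digits 0-9.
print(train_examples.shape, train_examples.dtype)  # (60000, 28, 28) uint8
print(train_labels.shape, train_labels.dtype)      # (60000,) uint8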

Load NumPy arrays with tf.data.Dataset

Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.

train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
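
To verify the element structure, you can print the dataset's element_spec or pull out a single example/label pair. This is just a sanity check, not part of the pipeline:

# Each element is an (example, label) pair holding one slice of the arrays.
print(train_dataset.element_spec)

for example, label in train_dataset.take(1):
  print(example.shape, label.numpy())  # (28, 28) and a digit in 0-9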

Use the datasets

Shuffle and batch the datasets

BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100

train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
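
For a dataset this small the defaults are fine, but if input processing ever becomes a bottleneck, tf.data can overlap data preparation with training. A minimal sketch using prefetch (the experimental AUTOTUNE symbol matches the TF 2.0-era API used in this tutorial; newer releases expose it as tf.data.AUTOTUNE):

# Let the runtime decide how many batches to prepare ahead of the training step.
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)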

Build and train a model

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
model.fit(train_dataset, epochs=10)
WARNING: Logging before flag parsing goes to stderr.
W0823 14:04:05.730727 140570150737664 deprecation.py:323] From /tmpfs/src/tf_docs_env/lib/python3.5/site-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where

Epoch 1/10
938/938 [==============================] - 5s 5ms/step - loss: 9.9995 - sparse_categorical_accuracy: 0.3773
Epoch 2/10
938/938 [==============================] - 4s 4ms/step - loss: 8.9917 - sparse_categorical_accuracy: 0.4408
Epoch 3/10
938/938 [==============================] - 3s 4ms/step - loss: 8.8517 - sparse_categorical_accuracy: 0.4501
Epoch 4/10
938/938 [==============================] - 4s 4ms/step - loss: 8.8329 - sparse_categorical_accuracy: 0.4512
Epoch 5/10
938/938 [==============================] - 4s 4ms/step - loss: 8.7632 - sparse_categorical_accuracy: 0.4554
Epoch 6/10
938/938 [==============================] - 4s 4ms/step - loss: 8.6959 - sparse_categorical_accuracy: 0.4597
Epoch 7/10
938/938 [==============================] - 4s 4ms/step - loss: 8.6639 - sparse_categorical_accuracy: 0.4619
Epoch 8/10
938/938 [==============================] - 4s 4ms/step - loss: 8.7060 - sparse_categorical_accuracy: 0.4593
Epoch 9/10
938/938 [==============================] - 4s 4ms/step - loss: 8.6347 - sparse_categorical_accuracy: 0.4637
Epoch 10/10
938/938 [==============================] - 4s 4ms/step - loss: 8.6381 - sparse_categorical_accuracy: 0.4635

<tensorflow.python.keras.callbacks.History at 0x7fd9005654e0>
model.evaluate(test_dataset)
157/157 [==============================] - 1s 3ms/step - loss: 8.5253 - sparse_categorical_accuracy: 0.4703

[8.52525661249829, 0.4703]
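
The loss stays high and accuracy plateaus near 47%, most likely because the raw uint8 pixel values (0 to 255) are fed to the model unscaled. A minimal sketch of scaling the images with Dataset.map before shuffling and batching; rebuilding and retraining the model after this change is left to the reader:

def normalize(image, label):
  # Cast uint8 pixels to float32 and scale them into [0, 1].
  return tf.cast(image, tf.float32) / 255.0, label

train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
train_dataset = (train_dataset.map(normalize)
                 .shuffle(SHUFFLE_BUFFER_SIZE)
                 .batch(BATCH_SIZE))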