Load NumPy data

This tutorial provides an example of loading data from NumPy arrays into a tf.data.Dataset.

This example loads the MNIST dataset from a .npz file. However, the source of the NumPy arrays is not important.

Setup

import numpy as np
import tensorflow as tf

Load from a .npz file

DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'

# Download the archive (cached locally by tf.keras.utils.get_file) and read the four arrays.
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
  train_examples = data['x_train']
  train_labels = data['y_train']
  test_examples = data['x_test']
  test_labels = data['y_test']
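As a quick sanity check, you can print the shapes and dtypes of the loaded arrays. For the standard MNIST archive you should see 60,000 training and 10,000 test images of 28x28 uint8 pixels:

print(train_examples.shape, train_examples.dtype)  # (60000, 28, 28) uint8
print(train_labels.shape, train_labels.dtype)      # (60000,) uint8
print(test_examples.shape, test_examples.dtype)    # (10000, 28, 28) uint8
print(test_labels.shape, test_labels.dtype)        # (10000,) uint8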

Load NumPy arrays with tf.data.Dataset

Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.

train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
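Each element of the resulting dataset is an (example, label) pair of tensors. A minimal sketch to confirm the element structure:

# Each element pairs one 28x28 image with its scalar label.
print(train_dataset.element_spec)

for image, label in train_dataset.take(1):
  print(image.shape, label.numpy())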

Use the datasets

Shuffle and batch the datasets

BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100

train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
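After batching, each dataset element is a batch of up to BATCH_SIZE examples (the final batch may be smaller, since drop_remainder defaults to False). A quick way to see the batched shapes:

# One batch: images have shape (BATCH_SIZE, 28, 28) and labels (BATCH_SIZE,).
for images, labels in train_dataset.take(1):
  print(images.shape, labels.shape)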

Build and train a model

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),  # flatten each 28x28 image to a 784-vector
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)  # raw logits, one per digit class
])

# from_logits=True because the final layer returns unnormalized logits,
# and the sparse loss matches the integer (non-one-hot) labels.
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset, epochs=10)
Epoch 1/10
938/938 [==============================] - 2s 2ms/step - loss: 3.2085 - sparse_categorical_accuracy: 0.8713
Epoch 2/10
938/938 [==============================] - 2s 2ms/step - loss: 0.5051 - sparse_categorical_accuracy: 0.9253
Epoch 3/10
938/938 [==============================] - 2s 2ms/step - loss: 0.3736 - sparse_categorical_accuracy: 0.9440
Epoch 4/10
938/938 [==============================] - 2s 2ms/step - loss: 0.3181 - sparse_categorical_accuracy: 0.9516
Epoch 5/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2931 - sparse_categorical_accuracy: 0.9577
Epoch 6/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2674 - sparse_categorical_accuracy: 0.9630
Epoch 7/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2480 - sparse_categorical_accuracy: 0.9669
Epoch 8/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2365 - sparse_categorical_accuracy: 0.9693
Epoch 9/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2131 - sparse_categorical_accuracy: 0.9723
Epoch 10/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2017 - sparse_categorical_accuracy: 0.9748
<tensorflow.python.keras.callbacks.History at 0x7feec7f89810>
model.evaluate(test_dataset)
157/157 [==============================] - 0s 2ms/step - loss: 0.7410 - sparse_categorical_accuracy: 0.9558
[0.7409695386886597, 0.9557999968528748]
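Since the model outputs raw logits, you can turn them into class predictions with tf.argmax (or into probabilities with tf.nn.softmax). A minimal inference sketch on the batched test dataset:

# model.predict iterates the batched dataset and ignores the labels.
logits = model.predict(test_dataset)
predicted_digits = tf.argmax(logits, axis=-1)
print(predicted_digits[:10].numpy())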