
tf.data.Dataset


Represents a potentially large set of elements.


The tf.data.Dataset API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern:

  1. Create a source dataset from your input data.
  2. Apply dataset transformations to preprocess the data.
  3. Iterate over the dataset and process the elements.

Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory.
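A minimal sketch of this three-step pattern (the values and transformations are only illustrative):

import tensorflow as tf

# 1. Create a source dataset from the input data.
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
# 2. Apply transformations to preprocess the data.
dataset = dataset.map(lambda x: x * 2).filter(lambda x: x > 2)
# 3. Iterate over the dataset; elements are produced one at a time, in a streaming fashion.
for element in dataset:
  print(element.numpy())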

Source Datasets:

The simplest way to create a dataset is to create it from a Python list:

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
  print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)

To process lines from files, use tf.data.TextLineDataset:

dataset = tf.data.TextLineDataset(["file1.txt", "file2.txt"])

To process records written in the TFRecord format, use tf.data.TFRecordDataset:

dataset = tf.data.TFRecordDataset(["file1.tfrecords", "file2.tfrecords"])
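TFRecord files usually contain serialized tf.train.Example protos that are decoded with a map step. A hedged sketch, assuming a hypothetical feature layout (the file names and feature_description below are placeholders):

import tensorflow as tf

# Placeholder feature layout; it must match how the records were written.
feature_description = {
    "label": tf.io.FixedLenFeature([], tf.int64),
    "image": tf.io.FixedLenFeature([], tf.string),
}

def parse_example(serialized):
  # Decode one serialized tf.train.Example into a dict of tensors.
  return tf.io.parse_single_example(serialized, feature_description)

dataset = tf.data.TFRecordDataset(["file1.tfrecords", "file2.tfrecords"])
dataset = dataset.map(parse_example)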

To create a dataset of all files matching a pattern, use tf.data.Dataset.list_files:

dataset = tf.data.Dataset.list_files("/path/*.txt")  # doctest: +SKIP
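The file-name dataset returned by list_files can then be combined with one of the readers above, for example by interleaving a TextLineDataset over the matched files (a sketch; the pattern is a placeholder):

dataset = tf.data.Dataset.list_files("/path/*.txt")  # doctest: +SKIP
lines = dataset.interleave(tf.data.TextLineDataset, cycle_length=4)  # doctest: +SKIP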

See tf.data.FixedLengthRecordDataset and tf.data.Dataset.from_generator for more ways to create datasets.
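For example, from_generator builds a dataset from a Python generator. The sketch below assumes TF 2.4+ for the output_signature argument (older releases use output_types/output_shapes instead):

import itertools
import tensorflow as tf

def gen():
  # Yields (count, vector-of-ones) pairs of growing length.
  for i in itertools.count(1):
    yield (i, [1] * i)

dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(), dtype=tf.int64),
        tf.TensorSpec(shape=(None,), dtype=tf.int64)))
for count, ones in dataset.take(3):
  print(count.numpy(), ones.numpy())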

Transformations:

Once you have a dataset, you can apply transformations to prepare the data for your model:

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.map(lambda x: x*2)
list(dataset.as_numpy_iterator())
[2, 4, 6]
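Transformations can be chained; filter, shuffle, and batch are other commonly used Dataset methods (a minimal sketch, output order varies because of the shuffle):

dataset = tf.data.Dataset.range(10)
dataset = dataset.filter(lambda x: x % 2 == 0)  # keep even numbers
dataset = dataset.shuffle(buffer_size=5)        # shuffle within a 5-element buffer
dataset = dataset.batch(2)                      # group consecutive elements into pairs
for batch in dataset:
  print(batch.numpy())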

Common Terms:

Element: A single output from calling next() on a dataset iterator. Elements may be nested structures containing multiple components. For example, the element (1, (3, "apple")) has one tuple nested in another tuple. The components are 1, 3, and "apple".

Component: The leaf in the nested structure of an element.

Supported types:

Elements can be nested structures of tuples, named tuples, and dictionaries. Note that Python lists are not treated as nested structures of components. Instead, lists are converted to tensors and treated as components. For example, the element (1, [1, 2, 3]) has only two components: the tensor 1 and the tensor [1, 2, 3]. Element components can be of any type representable by tf.TypeSpec, including tf.Tensor, tf.data.Dataset, tf.sparse.SparseTensor, tf.RaggedTensor, and tf.TensorArray.

import collections

a = 1                              # Integer element
b = 2.0                            # Float element
c = (1, 2)                         # Tuple element with 2 components
d = {"a": (2, 2), "b": 3}          # Dict element with 3 components
Point = collections.namedtuple("Point", ["x", "y"])
e = Point(1, 2)                    # Named tuple element with 2 components
f = tf.data.Dataset.range(10)      # Dataset element
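To make the list-versus-tuple distinction above concrete, element_spec (described below) shows that the element (1, [1, 2, 3]) has only two components:

dataset = tf.data.Dataset.from_tensors((1, [1, 2, 3]))
dataset.element_spec
(TensorSpec(shape=(), dtype=tf.int32, name=None), TensorSpec(shape=(3,), dtype=tf.int32, name=None))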

Args
variant_tensor A DT_VARIANT tensor that represents the dataset.

Attributes
element_spec The type specification of an element of this dataset.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
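For elements with a nested structure, element_spec mirrors that structure; a quick sketch with dict-valued elements:

dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3.0, 4.0]})
dataset.element_spec
{'a': TensorSpec(shape=(), dtype=tf.int32, name=None), 'b': TensorSpec(shape=(), dtype=tf.float32, name=None)}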

Methods

apply

Applies a transformation function to this dataset.

apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.

dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
  return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]

Args
transformation_func A function that takes one Dataset argument and returns a Dataset.

Returns
Dataset The Dataset returned by applying transformation_func to this dataset.
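Transformations packaged in tf.data.experimental follow this convention, so they can be passed to apply as well; for instance, assuming tf.data.experimental.unique is available:

dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
dataset = dataset.apply(tf.data.experimental.unique())
list(dataset.as_numpy_iterator())
[1, 37, 2]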

as_numpy_iterator

Returns an iterator which converts all elements of the dataset to numpy.

Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
  print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)

This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
  print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]

as_numpy_iterator() will preserve the nested structure of dataset elements.
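For example, a dataset of dict elements keeps its structure when converted (a sketch):

dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
                                              'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
                                      {'a': (2, 4), 'b': 6}]
True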