
tf.data.Dataset


Represents a potentially large set of elements.


The tf.data.Dataset API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern:

  1. Create a source dataset from your input data.
  2. Apply dataset transformations to preprocess the data.
  3. Iterate over the dataset and process the elements.

Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory.
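
For example, a minimal pipeline that follows these three steps (the in-memory list is used purely as an illustrative source):

dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])  # 1. source
dataset = dataset.map(lambda x: x + 1).batch(2)                   # 2. transformations
for batch in dataset:                                             # 3. iteration
  print(batch.numpy())
[9 4]
[1 9]
[3 2]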

Source Datasets:

The simplest way to create a dataset is to create it from a python list:

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
  print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)

To process lines from files, use tf.data.TextLineDataset:

dataset = tf.data.TextLineDataset(["file1.txt", "file2.txt"])

To process records written in the TFRecord format, use TFRecordDataset:

dataset = tf.data.TFRecordDataset(["file1.tfrecords", "file2.tfrecords"])

To create a dataset of all files matching a pattern, use tf.data.Dataset.list_files :

dataset = tf.data.Dataset.list_files("/path/*.txt")  # doctest: +SKIP

See tf.data.FixedLengthRecordDataset and tf.data.Dataset.from_generator for more ways to create datasets.
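
As a quick sketch of the from_generator route (the generator gen below is a hypothetical example, and the output_signature argument assumes TensorFlow 2.4 or later):

def gen():  # hypothetical generator yielding Python ints
  for i in range(3):
    yield i
dataset = tf.data.Dataset.from_generator(
    gen, output_signature=tf.TensorSpec(shape=(), dtype=tf.int32))
list(dataset.as_numpy_iterator())
[0, 1, 2]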

Transformations:

Once you have a dataset, you can apply transformations to prepare the data for your model:

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.map(lambda x: x*2)
list(dataset.as_numpy_iterator())
[2, 4, 6]

Common Terms:

Element : A single output from calling next() on a dataset iterator. Elements may be nested structures containing multiple components. For example, the element (1, (3, "apple")) has one tuple nested in another tuple. The components are 1 , 3 , and "apple" .

Component : The leaf in the nested structure of an element.

Supported types:

Elements can be nested structures of tuples, named tuples, and dictionaries. Note that Python lists are not treated as nested structures of components. Instead, lists are converted to tensors and treated as components. For example, the element (1, [1, 2, 3]) has only two components; the tensor 1 and the tensor [1, 2, 3] . Element components can be of any type representable by tf.TypeSpec , including tf.Tensor , tf.data.Dataset , tf.sparse.SparseTensor , tf.RaggedTensor , and tf.TensorArray .

a = 1 # Integer element
b = 2.0 # Float element
c = (1, 2) # Tuple element with 2 components
d = {"a": (2, 2), "b": 3} # Dict element with 3 components
Point = collections.namedtuple("Point", ["x", "y"]) # doctest: +SKIP
e = Point(1, 2) # Named tuple # doctest: +SKIP
f = tf.data.Dataset.range(10) # Dataset element
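
To see this concretely, element_spec (described below) reports two components for the element (1, [1, 2, 3]), because the Python list is converted into a single tensor component (a minimal sketch; the exact repr may vary slightly between versions):

dataset = tf.data.Dataset.from_tensors((1, [1, 2, 3]))
dataset.element_spec
(TensorSpec(shape=(), dtype=tf.int32, name=None), TensorSpec(shape=(3,), dtype=tf.int32, name=None))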

Args
variant_tensor A DT_VARIANT tensor that represents the dataset.

Attributes
element_spec The type specification of an element of this dataset.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
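
For structured elements, element_spec mirrors the nesting of the components; a small sketch with a dict-valued dataset (output shown approximately):

dataset = tf.data.Dataset.from_tensor_slices({"x": [1, 2], "y": [3.0, 4.0]})
dataset.element_spec
{'x': TensorSpec(shape=(), dtype=tf.int32, name=None), 'y': TensorSpec(shape=(), dtype=tf.float32, name=None)}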

Methods

apply

View source

Applies a transformation function to this dataset.

apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset .

dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
  return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]

Args
transformation_func A function that takes one Dataset argument and returns a Dataset .

Returns
Dataset The Dataset returned by applying transformation_func to this dataset.

as_numpy_iterator

View source

Returns an iterator which converts all elements of the dataset to numpy.

Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator .

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
  print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)

This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
  print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]

as_numpy_iterator() will preserve the nested structure of dataset elements.

dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),