tf.compat.v1.data.FixedLengthRecordDataset

View source on GitHub

A Dataset of fixed-length records from one or more binary files.

tf.compat.v1.data.FixedLengthRecordDataset(
    filenames, record_bytes, header_bytes=None, footer_bytes=None, buffer_size=None,
    compression_type=None, num_parallel_reads=None
)
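
For example, a minimal sketch of constructing a pipeline from binary files; the file paths and the 4-byte record layout are assumptions for illustration:

filenames = ["/path/to/data_0.bin", "/path/to/data_1.bin"]  # hypothetical files
dataset = tf.compat.v1.data.FixedLengthRecordDataset(filenames, record_bytes=4)
# Each element is a scalar tf.string tensor holding the raw 4 bytes of one record.
dataset = dataset.map(lambda record: tf.io.decode_raw(record, tf.int32))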

Args:

  • filenames: A tf.string tensor or tf.data.Dataset containing one or more filenames.
  • record_bytes: A tf.int64 scalar representing the number of bytes in each record.
  • header_bytes: (Optional.) A tf.int64 scalar representing the number of bytes to skip at the start of a file.
  • footer_bytes: (Optional.) A tf.int64 scalar representing the number of bytes to ignore at the end of a file.
  • buffer_size: (Optional.) A tf.int64 scalar representing the number of bytes to buffer when reading.
  • compression_type: (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP".
  • num_parallel_reads: (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially.

Attributes:

  • element_spec: The type specification of an element of this dataset.
  dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
  dataset.element_spec
  TensorSpec(shape=(), dtype=tf.int32, name=None)
     
  • output_classes: Returns the class of each component of an element of this dataset. (deprecated)

  • output_shapes: Returns the shape of each component of an element of this dataset. (deprecated)

  • output_types: Returns the type of each component of an element of this dataset. (deprecated)
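
A minimal sketch of reading the deprecated attributes; new code should prefer element_spec or the tf.compat.v1.data.get_output_types/get_output_shapes/get_output_classes helpers:

dataset = tf.compat.v1.data.Dataset.range(3)
print(dataset.output_types)    # <dtype: 'int64'> (emits a deprecation warning)
print(dataset.output_shapes)   # ()
print(dataset.output_classes)  # the tf.Tensor class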

Methods

__iter__

View source

__iter__()

Creates an Iterator for enumerating the elements of this dataset.

The returned iterator implements the Python iterator protocol and therefore can only be used in eager mode.
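
For example, in eager mode (the TF 2.x default) a dataset can be consumed directly with a for loop:

dataset = tf.data.Dataset.range(3)
for element in dataset:
  print(element)
tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(2, shape=(), dtype=int64)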

Returns:

An Iterator over the elements of this dataset.

Raises:

  • RuntimeError: If not inside of tf.function and not executing eagerly.

apply

View source

apply(
    transformation_func
)

Applies a transformation function to this dataset.

apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.

dataset = tf.data.Dataset.range(100) 
def dataset_fn(ds): 
  return ds.filter(lambda x: x < 5) 
dataset = dataset.apply(dataset_fn) 
list(dataset.as_numpy_iterator()) 
[0, 1, 2, 3, 4] 

Args:

  • transformation_func: A function that takes one Dataset argument and returns a Dataset.

Returns:

  • Dataset: The Dataset returned by applying transformation_func to this dataset.

as_numpy_iterator

View source

as_numpy_iterator()

Returns an iterator which converts all elements of the dataset to numpy.

Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) 
for element in dataset: 
  print(element) 
tf.Tensor(1, shape=(), dtype=int32) 
tf.Tensor(2, shape=(), dtype=int32) 
tf.Tensor(3, shape=(), dtype=int32) 

This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) 
for element in dataset.as_numpy_iterator(): 
  print(element) 
1 
2 
3 
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) 
print(list(dataset.as_numpy_iterator())) 
[1, 2, 3] 

as_numpy_iterator() will preserve the nested structure of dataset elements.

dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 
                                              'b': [5, 6]}) 
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, 
                                      {'a': (2, 4), 'b': 6}] 
True 

Returns:

An iterable over the elements of the dataset, with their tensors converted to numpy arrays.

Raises:

  • TypeError: if an element contains a non-Tensor value.
  • RuntimeError: if eager execution is not enabled.

batch

View source

batch(
    batch_size, drop_remainder=False
)

Combines consecutive elements of this dataset into batches.

dataset = tf.data.Dataset.range(8) 
dataset = dataset.batch(3) 
list(dataset.as_numpy_iterator()) 
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] 
dataset = tf.data.Dataset.range(8) 
dataset = dataset.batch(3, drop_remainder=True) 
list(dataset.as_numpy_iterator()) 
[array([0, 1, 2]), array([3, 4, 5])] 

The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.

Args:

  • batch_size: A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
  • drop_remainder: (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.

Returns:

  • Dataset: A Dataset.

cache

View source

cache(
    filename=''
)

Caches the elements in this dataset.

The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.

dataset = tf.data.Dataset.range(5) 
dataset = dataset.map(lambda x: x**2) 
dataset = dataset.cache() 
# The first time reading through the data will generate the data using 
# `range` and `map`. 
list(dataset.as_numpy_iterator()) 
[0, 1, 4, 9, 16] 
# Subsequent iterations read from the cache. 
list(dataset.as_numpy_iterator()) 
[0, 1, 4, 9, 16] 

When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.

dataset = tf.data.Dataset.range(5) 
dataset = dataset.cache("/path/to/file")  # doctest: +SKIP 
list(dataset.as_numpy_iterator())  # doctest: +SKIP 
[0, 1, 2, 3, 4] 
dataset = tf.data.Dataset.range(10) 
dataset = dataset.cache("/path/to/file")  # Same file! # doctest: +SKIP 
list(dataset.as_numpy_iterator())  # doctest: +SKIP 
[0, 1, 2, 3, 4] 

Args:

  • filename: A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.

Returns:

  • Dataset: A Dataset.

concatenate

View source

concatenate(
    dataset
)

Creates a Dataset by concatenating the given dataset with this dataset.

a = tf.data.Dataset.range(1, 4)  # ==> [ 1, 2, 3 ] 
b = tf.data.Dataset.range(4, 8)  # ==> [ 4, 5, 6, 7 ] 
ds = a.concatenate(b) 
list(ds.as_numpy_iterator()) 
[1, 2, 3, 4, 5, 6, 7] 
# The input dataset and dataset to be concatenated should have the same 
# nested structures and output types. 
c = tf.data.Dataset.zip((a, b)) 
a.concatenate(c) 
Traceback (most recent call last): 
TypeError: Two datasets to concatenate have different types 
<dtype: 'int64'> and (tf.int64, tf.int64) 
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) 
a.concatenate(d) 
Traceback (most recent call last): 
TypeError: Two datasets to concatenate have different types 
<dtype: 'int64'> and <dtype: 'string'> 

Args:

  • dataset: Dataset to be concatenated.

Returns:

  • Dataset: A Dataset.

enumerate

View source

enumerate(
    start=0
)

Enumerates the elements of this dataset.

It is similar to Python's enumerate.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) 
dataset = dataset.enumerate(start=5) 
for element in dataset.as_numpy_iterator(): 
  print(element) 
(5, 1) 
(6, 2) 
(7, 3) 
# The nested structure of the input dataset determines the structure of 
# elements in the resulting dataset. 
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) 
dataset = dataset.enumerate() 
for element in dataset.as_numpy_iterator(): 
  print(element) 
(0, array([7, 8], dtype=int32)) 
(1, array([ 9, 10], dtype=int32)) 

Args:

  • start: A tf.int64 scalar tf.Tensor, representing the start value for enumeration.

Returns:

  • Dataset: A Dataset.

filter

View source

filter(
    predicate
)

Filters this dataset according to predicate.

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) 
dataset = dataset.filter(lambda x: x < 3) 
list(dataset.as_numpy_iterator()) 
[1, 2] 
# `tf.math.equal(x, y)` is required for equality comparison 
def filter_fn(x): 
  return tf.math.equal(x, 1) 
dataset = dataset.filter(filter_fn) 
list(dataset.as_numpy_iterator()) 
[1] 

Args:

  • predicate: A function mapping a dataset element to a boolean.

Returns:

  • Dataset: The Dataset containing the elements of this dataset for which predicate is True.

filter_with_legacy_function

View source

filter_with_legacy_function(
    predicate
)

Filters this dataset according to predicate. (deprecated)

Args:

  • predicate: A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.

Returns:

  • Dataset: The Dataset containing the elements of this dataset for which predicate is True.

flat_map

View source

flat_map(
    map_func
)

Maps map_func across this dataset and flattens the result.

Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:

dataset = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: tf.data.Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator()) 
[1, 2, 3, 4, 5, 6, 7, 8, 9] 

tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1).
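
A small sketch of that equivalence, using toy data for illustration:

dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
a = dataset.flat_map(tf.data.Dataset.from_tensor_slices)
b = dataset.interleave(tf.data.Dataset.from_tensor_slices, cycle_length=1)
list(a.as_numpy_iterator()) == list(b.as_numpy_iterator())
True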

Args:

  • map_func: A function mapping a dataset element to a dataset.

Returns:

  • Dataset: A Dataset.

from_generator

View source

@staticmethod
from_generator(
    generator, output_types, output_shapes=None, args=None
)

Creates a Dataset whose elements are generated by generator.

The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with the given output_types and (optional) output_shapes arguments.

import itertools 
 
def gen(): 
  for i in itertools.count(1): 
    yield (i, [1] * i) 
 
dataset = tf.data.Dataset.from_generator( 
     gen, 
     (tf.int64, tf.int64), 
     (tf.TensorShape([]), tf.TensorShape([None]))) 
 
list(dataset.take(3).as_numpy_iterator()) 
[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))] 

Args:

  • generator: A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
  • output_types: A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
  • output_shapes: (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
  • args: (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.

Returns:

  • Dataset: A Dataset.

from_sparse_tensor_slices

View source

@staticmethod
from_sparse_tensor_slices(
    sparse_tensor
)

Splits each rank-N tf.SparseTensor in this dataset row-wise. (deprecated)

Args:

  • sparse_tensor: A tf.SparseTensor.

Returns:

  • Dataset: A Dataset of rank-(N-1) sparse tensors.

from_tensor_slices

View source

@staticmethod
from_tensor_slices(
    tensors
)

Creates a Dataset whose elements are slices of the given tensors.

The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.

# Slicing a 1D tensor produces scalar tensor elements. 
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) 
list(dataset.as_numpy_iterator()) 
[1, 2, 3] 
# Slicing a 2D tensor produces 1D tensor elements. 
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) 
list(dataset.as_numpy_iterator()) 
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)] 
# Slicing a tuple of 1D tensors produces tuple elements containing 
# scalar tensors. 
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) 
list(dataset.as_numpy_iterator()) 
[(1, 3, 5), (2, 4, 6)] 
# Dictionary structure is also preserved. 
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) 
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, 
                                      {'a': 2, 'b': 4}] 
True 
# Two tensors can be combined into one Dataset object. 
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor 
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor 
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted 
# to a Dataset object separately and combined after. 
features_dataset = tf.data.Dataset.from_tensor_slices(features)
labels_dataset = tf.data.Dataset.from_tensor_slices(labels)
dataset = tf.data.Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset 
# in similar fashion. 
batched_features = tf.constant([[[1, 3], [2, 3]], 
                                [[2, 1], [1, 2]], 
                                [[3, 3], [3, 2]]], shape=(3, 2, 2)) 
batched_labels = tf.constant([['A', 'A'], 
                              ['B', 'B'], 
                              ['A', 'B']], shape=(3, 2, 1)) 
dataset = tf.data.Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator(): 
  print(element) 
(array([[1, 3], 
       [2, 3]], dtype=int32), array([[b'A'], 
       [b'A']], dtype=object)) 
(array([[2, 1], 
       [1, 2]], dtype=int32), array([[b'B'], 
       [b'B']], dtype=object)) 
(array([[3, 3], 
       [3, 2]], dtype=int32), array([[b'A'], 
       [b'B']], dtype=object)) 

Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.

Args:

  • tensors: A dataset element, with each component having the same size in the first dimension.

Returns:

  • Dataset: A Dataset.

from_tensors

View source

@staticmethod
from_tensors(
    tensors
)

Creates a Dataset with a single element, comprising the given tensors.

from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.

dataset = tf.data.Dataset.from_tensors([1, 2, 3]) 
list(dataset.as_numpy_iterator()) 
[array([1, 2, 3], dtype=int32)] 
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) 
list(dataset.as_numpy_iterator()) 
[(array([1, 2, 3], dtype=int32), b'A')] 
# You can use `from_tensors` to produce a dataset which repeats 
# the same example many times. 
example = tf.constant([1,2,3]) 
dataset = tf.data.Dataset.from_tensors(example).repeat(2) 
list(dataset.as_numpy_iterator()) 
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] 

Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.

Args:

  • tensors: A dataset element.

Returns:

  • Dataset: A Dataset.

interleave

View source

interleave(
    map_func, cycle_length=AUTOTUNE, block_length=1, num_parallel_calls=None,
    deterministic=None
)

Maps map_func across this dataset, and interleaves the results.

For example, you can use Dataset.interleave() to process many input files concurrently:

# Preprocess 4 files concurrently, and interleave blocks of 16 records 
# from each file. 
filenames = ["/var/data/file1.txt", "/var/data/file2.txt", 
             "/var/data/file3.txt", "/var/data/file4.txt"] 
dataset = tf.data.Dataset.from_tensor_slices(filenames) 
def parse_fn(filename): 
  return tf.data.Dataset.range(10) 
dataset = dataset.interleave(lambda x: 
    tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), 
    cycle_length=4, block_length=16) 

The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.

For example:

dataset = tf.data.Dataset.range(1, 6)  # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
    lambda x: tf.data.Dataset.from_tensors(x).repeat(6),
    cycle_length=2, block_length=4) 
list(dataset.as_numpy_iterator()) 
[1, 1, 1, 1, 
 2, 2, 2, 2, 
 1, 1, 
 2, 2, 
 3, 3, 3, 3, 
 4, 4, 4, 4, 
 3, 3, 
 4, 4, 
 5, 5, 5, 5, 
 5, 5] 

Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.

filenames = ["/var/data/file1.txt", "/var/data/file2.txt", 
             "/var/data/file3.txt", "/var/data/file4.txt"] 
dataset = tf.data.Dataset.from_tensor_slices(filenames) 
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), 
    cycle_length=4, num_parallel_calls=tf.data.experimental.AUTOTUNE, 
    deterministic=False) 

Args:

  • map_func: A function mapping a dataset element to a dataset.
  • cycle_length: (Optional.) The number of input elements that will be processed concurrently. If not specified, the value will be derived from the number of available CPU cores. If the num_parallel_calls argument is set to tf.data.experimental.AUTOTUNE, the cycle_length argument also identifies the maximum degree of parallelism.
  • block_length: (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element.
  • num_parallel_calls: (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.experimental.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
  • deterministic: (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.

Returns:

  • Dataset: A Dataset.

list_files

View source

@staticmethod
list_files(
    file_pattern, shuffle=None, seed=None
)

A dataset of all files matching one or more glob patterns.

The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.

Example:

If we had the following files on our filesystem:

  • /path/to/dir/a.txt
  • /path/to/dir/b.py
  • /path/to/dir/c.py

If we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce:

  • /path/to/dir/b.py
  • /path/to/dir/c.py
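
A minimal usage sketch; the directory and files are the hypothetical ones listed above:

dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
print(sorted(f.numpy() for f in dataset))
[b'/path/to/dir/b.py', b'/path/to/dir/c.py']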

Args:

  • file_pattern: A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
  • shuffle: (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
  • seed: (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.

Returns:

  • Dataset: A Dataset of strings corresponding to file names.

make_initializable_iterator

View source

make_initializable_iterator(
    shared_name=None
)

Creates an Iterator for enumerating the elements of this dataset. (deprecated)
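
For reference, a minimal graph-mode sketch of the TF 1.x usage pattern; it assumes eager execution has been disabled:

tf.compat.v1.disable_eager_execution()  # initializable iterators require graph mode
dataset = tf.compat.v1.data.Dataset.range(3)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
with tf.compat.v1.Session() as sess:
  sess.run(iterator.initializer)  # run the initializer before fetching elements
  print(sess.run(next_element))   # 0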