tf.compat.v2.data.experimental.CsvDataset

View source on GitHub

Class CsvDataset

A Dataset comprising lines from one or more CSV files.

__init__

View source

__init__(
    filenames,
    record_defaults,
    compression_type=None,
    buffer_size=None,
    header=False,
    field_delim=',',
    use_quote_delim=True,
    na_value='',
    select_cols=None
)

Creates a CsvDataset by reading and decoding CSV files.

The elements of this dataset correspond to records from the file(s). CSV files are expected to follow the RFC 4180 format (https://tools.ietf.org/html/rfc4180). Note that leading and trailing spaces are allowed in int and float fields.

For example, suppose we have a file 'my_file0.csv' with four CSV columns of different data types:

abcdefg,4.28E10,5.55E6,12
hijklmn,-5.3E14,,2

We can construct a CsvDataset from it as follows:

tf.compat.v1.enable_eager_execution()

dataset = tf.data.experimental.CsvDataset(
    "my_file*.csv",
    [tf.float32,  # Required field, use dtype or empty tensor
     tf.constant([0.0], dtype=tf.float32),  # Optional field, default to 0.0
     tf.int32,  # Required field, use dtype or empty tensor
     ],
    select_cols=[1, 2, 3]  # Only parse last three columns
)

The expected output of iterating over this dataset is:

for element in dataset:
  print(element)

>> (4.28e10, 5.55e6, 12)
>> (-5.3e14, 0.0, 2)

Args:

  • filenames: A tf.string tensor containing one or more filenames.
  • record_defaults: A list of default values for the CSV fields. Each item in the list is either a valid CSV dtype (float32, float64, int32, int64, string) or a Tensor object with one of those types. Provide one item per selected column of CSV data: a scalar Tensor default value if the column is optional, or a DType or empty Tensor if the column is required. If both this and select_cols are specified, they must have the same length, and record_defaults is assumed to be sorted in order of increasing column index.
  • compression_type: (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP". Defaults to no compression.
  • buffer_size: (Optional.) A tf.int64 scalar denoting the number of bytes to buffer while reading files. Defaults to 4MB.
  • header: (Optional.) A tf.bool scalar indicating whether the CSV file(s) have header line(s) that should be skipped when parsing. Defaults to False.
  • field_delim: (Optional.) A tf.string scalar containing the delimiter character that separates fields in a record. Defaults to ",".
  • use_quote_delim: (Optional.) A tf.bool scalar. If False, treats double quotation marks as regular characters inside of string fields (ignoring RFC 4180, Section 2, Bullet 5). Defaults to True.
  • na_value: (Optional.) A tf.string scalar indicating a value that will be treated as NA/NaN.
  • select_cols: (Optional.) A sorted list of column indices to select from the input data. If specified, only this subset of columns will be parsed. Defaults to parsing all columns.
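
For example, the compression_type and header arguments described above can be combined as follows (a minimal sketch; the filename "logs.csv.gz" and its two columns, a string and an int32, are hypothetical):

dataset = tf.data.experimental.CsvDataset(
    "logs.csv.gz",
    record_defaults=[tf.string, tf.int32],
    compression_type="GZIP",
    header=True,
    field_delim=",")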

Properties

element_spec

The type specification of an element of this dataset.

Returns:

A nested structure of tf.TypeSpec objects matching the structure of an element of this dataset and specifying the type of individual components.
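
For instance, for the CsvDataset constructed in the example above (a sketch, assuming that dataset is still in scope), element_spec is a tuple of three scalar tf.TensorSpec objects:

print(dataset.element_spec)
# (TensorSpec(shape=(), dtype=tf.float32, name=None),
#  TensorSpec(shape=(), dtype=tf.float32, name=None),
#  TensorSpec(shape=(), dtype=tf.int32, name=None))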

Methods

__iter__

View source

__iter__()

Creates an Iterator for enumerating the elements of this dataset.

The returned iterator implements the Python iterator protocol and therefore can only be used in eager mode.

Returns:

An Iterator over the elements of this dataset.

Raises:

  • RuntimeError: If not inside of tf.function and not executing eagerly.
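
A minimal sketch of using the iterator explicitly in eager mode (any dataset works; a small range dataset is used here for brevity):

dataset = tf.data.Dataset.range(3)
iterator = iter(dataset)       # calls dataset.__iter__()
print(next(iterator).numpy())  # ==> 0
print(next(iterator).numpy())  # ==> 1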

apply

View source

apply(transformation_func)

Applies a transformation function to this dataset.

apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.

For example:

dataset = (dataset.map(lambda x: x ** 2)
           .apply(tf.data.experimental.group_by_window(key_func, reduce_func, window_size))
           .map(lambda x: x ** 3))

Args:

  • transformation_func: A function that takes one Dataset argument and returns a Dataset.

Returns:

  • Dataset: The Dataset returned by applying transformation_func to this dataset.
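
A runnable sketch with a custom transformation (keep_even below is a hypothetical user-defined helper, not part of tf.data):

def keep_even(ds):
  # Takes a Dataset and returns a transformed Dataset.
  return ds.filter(lambda x: tf.math.equal(x % 2, 0))

dataset = tf.data.Dataset.range(10).apply(keep_even)  # ==> [0, 2, 4, 6, 8]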

batch

View source

batch(
    batch_size,
    drop_remainder=False
)

Combines consecutive elements of this dataset into batches.

The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.

Args:

  • batch_size: A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
  • drop_remainder: (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.

Returns:

  • Dataset: A Dataset.
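
A small sketch of the drop_remainder behavior:

dataset = tf.data.Dataset.range(7).batch(3)
# ==> [0, 1, 2], [3, 4, 5], [6]

dataset = tf.data.Dataset.range(7).batch(3, drop_remainder=True)
# ==> [0, 1, 2], [3, 4, 5]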

cache

View source

cache(filename='')

Caches the elements in this dataset.

Args:

  • filename: A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.

Returns:

  • Dataset: A Dataset.
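
A sketch of both caching modes (the cache path below is hypothetical):

dataset = tf.data.Dataset.range(5)

in_memory = dataset.cache()               # cached in memory
on_disk = dataset.cache("/tmp/my_cache")  # cached in files under this path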

concatenate

View source

concatenate(dataset)

Creates a Dataset by concatenating the given dataset with this dataset.

a = Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
b = Dataset.range(4, 8)  # ==> [ 4, 5, 6, 7 ]

# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
# c = Dataset.range(8, 14).batch(2)  # ==> [ [8, 9], [10, 11], [12, 13] ]
# d = Dataset.from_tensor_slices([14.0, 15.0, 16.0])
# a.concatenate(c) and a.concatenate(d) would result in error.

a.concatenate(b)  # ==> [ 1, 2, 3, 4, 5, 6, 7 ]

Args:

  • dataset: Dataset to be concatenated.

Returns:

  • Dataset: A Dataset.

enumerate

View source

enumerate(start=0)

Enumerates the elements of this dataset.

It is similar to Python's enumerate.

For example:

# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { 1, 2, 3 }
b = { (7, 8), (9, 10) }

# Each element of the resulting dataset is an (index, element) pair.
a.enumerate(start=5) == { (5, 1), (6, 2), (7, 3) }
b.enumerate() == { (0, (7, 8)), (1, (9, 10)) }

Args:

  • start: A tf.int64 scalar tf.Tensor, representing the start value for enumeration.

Returns:

  • Dataset: A Dataset.

filter

View source

filter(predicate)

Filters this dataset according to predicate.

d = tf.data.Dataset.from_tensor_slices([1, 2, 3])

d = d.filter(lambda x: x < 3)  # ==> [1, 2]

# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
  return tf.math.equal(x, 1)

d = d.filter(filter_fn)  # ==> [1]

Args:

  • predicate: A function mapping a dataset element to a boolean.

Returns:

  • Dataset: The Dataset containing the elements of this dataset for which predicate is True.

flat_map

View source

flat_map(map_func)

Maps map_func across this dataset and flattens the result.

Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:

a = Dataset.from_tensor_slices([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ])

a.flat_map(lambda x: Dataset.from_tensor_slices(x + 1)) # ==>
#  [ 2, 3, 4, 5, 6, 7, 8, 9, 10 ]

tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1).

Args:

  • map_func: A function mapping a dataset element to a dataset.

Returns:

  • Dataset: A Dataset.

from_generator

View source

from_generator(
    generator,
    output_types,
    output_shapes=None,
    args=None
)

Creates a Dataset whose elements are generated by generator.

The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with the given output_types and (optional) output_shapes arguments.

For example:

import itertools
tf.compat.v1.enable_eager_execution()

def gen():
  for i in itertools.count(1):
    yield (i, [1] * i)

ds = tf.data.Dataset.from_generator(
    gen, (tf.int64, tf.int64), (tf.TensorShape([]), tf.TensorShape([None])))

for value in ds.take(2):
  print(value)
# (1, array([1]))
# (2, array([1, 1]))

NOTE: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the Dataset- and Iterator-related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.

NOTE: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().

Args:

  • generator: A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
  • output_types: A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
  • output_shapes: (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
  • args: (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.

Returns:

  • Dataset: A Dataset.

from_tensor_slices

View source

from_tensor_slices(tensors)

Creates a Dataset whose elements are slices of the given tensors.

Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.

Args:

  • tensors: A dataset element, with each component having the same size in the 0th dimension.

Returns:

  • Dataset: A Dataset.
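
A sketch of slicing along the 0th dimension:

# Each element is one slice of the outermost dimension.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4], [5, 6]])
# ==> [1, 2], [3, 4], [5, 6]

# Tuples are sliced component-wise; components must have equal 0th dimensions.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3], ["a", "b", "c"]))
# ==> (1, "a"), (2, "b"), (3, "c")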

from_tensors

View source

from_tensors(tensors)

Creates a Dataset with a single element, comprising the given tensors.

Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.

Args:

  • tensors: A dataset element.

Returns:

  • Dataset: A Dataset.
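
A sketch contrasting from_tensors with from_tensor_slices:

# The given tensor becomes a single element; nothing is sliced.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
# ==> one element: [1, 2, 3]

# from_tensor_slices([1, 2, 3]) would instead yield three elements: 1, 2, 3.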

interleave

View source

interleave(
    map_func,
    cycle_length=AUTOTUNE,
    block_length=1,
    num_parallel_calls=None
)

Maps map_func across this dataset, and interleaves the results.

For example, you can use Dataset.interleave() to process many input files concurrently:

# Preprocess 4 files concurrently, and interleave blocks of 16 records from
# each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt", ...]
dataset = (Dataset.from_tensor_slices(filenames)
           .interleave(lambda x:
               TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
               cycle_length=4, block_length=16))

The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator.

For example:

a = Dataset.range(1, 6)  # ==> [ 1, 2, 3, 4, 5 ]

# NOTE: New lines indicate "block" boundaries.
a.interleave(lambda x: Dataset.from_tensors(x).repeat(6),
            cycle_length=2, block_length=4)  # ==> [1, 1, 1, 1,
                                             #      2, 2, 2, 2,
                                             #      1, 1,
                                             #      2, 2,
                                             #      3, 3, 3, 3,
                                             #      4, 4, 4, 4,
                                             #      3, 3,
                                             #      4, 4,
                                             #      5, 5, 5, 5,
                                             #      5, 5]

NOTE: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function. If map_func contains any stateful operations, the order in which that state is accessed is undefined.

Args:

  • map_func: A function mapping a dataset element to a dataset.
  • cycle_length: (Optional.) The number of input elements that will be processed concurrently. If not specified, the value will be derived from the number of available CPU cores. If the num_parallel_calls argument is set to tf.data.experimental.AUTOTUNE, the cycle_length argument also identifies the maximum degree of parallelism.
  • block_length: (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element.
  • num_parallel_calls: (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.experimental.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.

Returns:

  • Dataset: A Dataset.

list_files

View source

list_files(
    file_pattern,
    shuffle=None,
    seed=None
)

A dataset of all files matching one or more glob patterns.

NOTE: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.

Example:

If we had the following files on our filesystem:

  • /path/to/dir/a.txt
  • /path/to/dir/b.py
  • /path/to/dir/c.py

and we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce:

  • /path/to/dir/b.py
  • /path/to/dir/c.py

Args:

  • file_pattern: A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
  • shuffle: (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
  • seed: (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.compat.v1.set_random_seed for behavior.

Returns:

  • Dataset: A Dataset of strings corresponding to file names.
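
A sketch, assuming the hypothetical directory layout above:

dataset = tf.data.Dataset.list_files("/path/to/dir/*.py", shuffle=False)
for f in dataset:
  print(f.numpy())
# Prints b'/path/to/dir/b.py' and b'/path/to/dir/c.py' in a deterministic order.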

map

View source

map(
    map_func,
    num_parallel_calls=None
)

Maps map_func across the elements of this dataset.

This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input.

For example:

a = Dataset.range(1, 6)  # ==> [ 1, 2, 3, 4, 5 ]

a.map(lambda x: x + 1)  # ==> [ 2, 3, 4, 5, 6 ]

The input signature of map_func is determined by the structure of each element in this dataset. For example:
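
# A sketch: the structure of each element determines how many arguments
# map_func receives.

# Each element is a single component, so map_func takes one argument.
a = tf.data.Dataset.from_tensor_slices([1, 2, 3])
a = a.map(lambda x: x + 1)  # ==> [2, 3, 4]

# Each element is a tuple of two components, so map_func takes two arguments.
b = tf.data.Dataset.from_tensor_slices(([1, 2, 3], [10, 20, 30]))
b = b.map(lambda x, y: x + y)  # ==> [11, 22, 33]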