Represents a potentially large set of elements.
Inherits From: CheckpointableBase
tf.compat.v2.data.Dataset(
variant_tensor
)
A Dataset can be used to represent an input pipeline as a collection of
elements and a "logical plan" of transformations that act on those elements.
A dataset contains elements that each have the same (nested) structure, and the
individual components of the structure can be of any type representable by
tf.TypeSpec, including tf.Tensor, tf.data.Dataset, tf.SparseTensor,
tf.RaggedTensor, or tf.TensorArray.
Example elements:
# Integer element
a = 1
# Float element
b = 2.0
# Tuple element with 2 components
c = (1, 2)
# Dict element with 3 components
d = {"a": (2, 2), "b": 3}
# Element containing a dataset
e = tf.data.Dataset.from_tensors(10)
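A minimal sketch (assuming eager execution) of building and consuming a simple
pipeline; the element_spec attribute describes the structure of the elements:
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]).map(lambda x: x * 2)
print(dataset.element_spec)  # TensorSpec(shape=(), dtype=tf.int32, name=None)
for elem in dataset:
  print(elem.numpy())  # 2, 4, 6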
Args | |
---|---|
variant_tensor | A DT_VARIANT tensor that represents the dataset. |
Attributes | |
---|---|
element_spec | The type specification of an element of this dataset. |
Methods
apply
apply(
transformation_func
)
Applies a transformation function to this dataset.
apply enables chaining of custom Dataset transformations, which are
represented as functions that take one Dataset argument and return a
transformed Dataset.
For example:
dataset = (dataset.map(lambda x: x ** 2)
           .apply(group_by_window(key_func, reduce_func, window_size))
           .map(lambda x: x ** 3))
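As a runnable sketch (the transformation below is hypothetical, for
illustration only), a custom transformation is simply a function that takes a
Dataset and returns a Dataset:
def double_elements(ds):
  # Hypothetical custom transformation: takes a Dataset, returns a Dataset.
  return ds.map(lambda x: x * 2)

dataset = tf.data.Dataset.range(5).apply(double_elements)  # ==> [0, 2, 4, 6, 8]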
Args | |
---|---|
transformation_func | A function that takes one Dataset argument and returns a Dataset. |

Returns | |
---|---|
Dataset | The Dataset returned by applying transformation_func to this dataset. |
batch
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
The components of the resulting element will have an additional outer
dimension, which will be batch_size (or N % batch_size for the last element if
batch_size does not divide the number of input elements N evenly and
drop_remainder is False). If your program depends on the batches having the
same outer dimension, you should set the drop_remainder argument to True to
prevent the smaller batch from being produced.
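For example (a minimal sketch, assuming eager execution), batching eight
elements into batches of three leaves a smaller final batch unless
drop_remainder is set:
dataset = tf.data.Dataset.range(8).batch(3)
for batch in dataset:
  print(batch.numpy())  # [0 1 2], [3 4 5], [6 7]

dataset = tf.data.Dataset.range(8).batch(3, drop_remainder=True)
for batch in dataset:
  print(batch.numpy())  # [0 1 2], [3 4 5]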
Args | |
---|---|
batch_size | A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. |
drop_remainder | (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. |

Returns | |
---|---|
Dataset | A Dataset. |
cache
cache(
filename=''
)
Caches the elements in this dataset.
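A minimal sketch (assuming eager execution): with no filename, the cache is
kept in memory, so elements are computed on the first pass over the dataset
and reused on later passes:
dataset = tf.data.Dataset.range(3).map(lambda x: x * 10).cache()
for elem in dataset:  # the first pass computes and caches the elements
  print(elem.numpy())  # 0, 10, 20
for elem in dataset:  # later passes read from the in-memory cache
  print(elem.numpy())  # 0, 10, 20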
Args | |
---|---|
filename | A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. |

Returns | |
---|---|
Dataset | A Dataset. |
concatenate
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
# c = Dataset.range(8, 14).batch(2) # ==> [ [8, 9], [10, 11], [12, 13] ]
# d = Dataset.from_tensor_slices([14.0, 15.0, 16.0])
# a.concatenate(c) and a.concatenate(d) would result in error.
a.concatenate(b) # ==> [ 1, 2, 3, 4, 5, 6, 7 ]
Args | |
---|---|
dataset | Dataset to be concatenated. |

Returns | |
---|---|
Dataset | A Dataset. |
enumerate
enumerate(
start=0
)
Enumerates the elements of this dataset.
It is similar to Python's enumerate.
For example:
# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { 1, 2, 3 }
b = { (7, 8), (9, 10) }
a.enumerate(start=5) == { (5, 1), (6, 2), (7, 3) }
b.enumerate() == { (0, (7, 8)), (1, (9, 10)) }
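The same behavior as a runnable sketch (assuming eager execution):
dataset = tf.data.Dataset.from_tensor_slices([10, 20, 30]).enumerate(start=5)
for index, value in dataset:
  print(index.numpy(), value.numpy())  # 5 10, 6 20, 7 30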
Args | |
---|---|
start | A tf.int64 scalar tf.Tensor, representing the start value for enumeration. |

Returns | |
---|---|
Dataset | A Dataset. |
filter
filter(
predicate
)
Filters this dataset according to predicate.
d = tf.data.Dataset.from_tensor_slices([1, 2, 3])
d = d.filter(lambda x: x < 3)  # ==> [1, 2]

# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
  return tf.math.equal(x, 1)

d = d.filter(filter_fn)  # ==> [1]
Args | |
---|---|
predicate | A function mapping a dataset element to a boolean. |

Returns | |
---|---|
Dataset | The Dataset containing the elements of this dataset for which predicate is True. |
flat_map
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result.
Use flat_map if you want to make sure that the order of your dataset stays the
same. For example, to flatten a dataset of batches into a dataset of their
elements:
a = Dataset.from_tensor_slices([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ])
a.flat_map(lambda x: Dataset.from_tensor_slices(x + 1))  # ==>
#  [ 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map
produces the same output as tf.data.Dataset.interleave(cycle_length=1).
Args | |
---|---|
map_func | A function mapping a dataset element to a dataset. |

Returns | |
---|---|
Dataset | A Dataset. |
from_generator
@staticmethod
from_generator( generator, output_types, output_shapes=None, args=None )
Creates a Dataset whose elements are generated by generator.
The generator argument must be a callable object that returns an object that
supports the iter() protocol (e.g. a generator function). The elements
generated by generator must be compatible with the given output_types and
(optional) output_shapes arguments.
For example:
import itertools
tf.compat.v1.enable_eager_execution()

def gen():
  for i in itertools.count(1):
    yield (i, [1] * i)

ds = tf.data.Dataset.from_generator(
    gen, (tf.int64, tf.int64), (tf.TensorShape([]), tf.TensorShape([None])))

for value in ds.take(2):
  print(value)
# (1, array([1]))
# (2, array([1, 1]))
Args | |
---|---|
generator | A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args. |
output_types | A nested structure of tf.DType objects corresponding to each component of an element yielded by generator. |
output_shapes | (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator. |
args | (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments. |

Returns | |
---|---|
Dataset | A Dataset. |
from_tensor_slices
@staticmethod
from_tensor_slices( tensors )
Creates a Dataset whose elements are slices of the given tensors.
Note that if tensors contains a NumPy array, and eager execution is not
enabled, the values will be embedded in the graph as one or more tf.constant
operations. For large datasets (> 1 GB), this can waste memory and run into
byte limits of graph serialization. If tensors contains one or more large
NumPy arrays, consider the alternative described in this guide.
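A minimal sketch (assuming eager execution): slicing along the 0th dimension
of the input yields one dataset element per slice:
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4], [5, 6]])
for elem in dataset:
  print(elem.numpy())  # [1 2], [3 4], [5 6]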
Args | |
---|---|
tensors | A dataset element, with each component having the same size in the 0th dimension. |

Returns | |
---|---|
Dataset | A Dataset. |
from_tensors
@staticmethod
from_tensors( tensors )
Creates a Dataset with a single element, comprising the given tensors.
Note that if tensors contains a NumPy array, and eager execution is not
enabled, the values will be embedded in the graph as one or more tf.constant
operations. For large datasets (> 1 GB), this can waste memory and run into
byte limits of graph serialization. If tensors contains one or more large
NumPy arrays, consider the alternative described in this guide.
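In contrast to from_tensor_slices, from_tensors produces a dataset containing
exactly one element (a minimal sketch, assuming eager execution):
dataset = tf.data.Dataset.from_tensors([[1, 2], [3, 4]])
for elem in dataset:
  print(elem.numpy())  # [[1 2] [3 4]], a single element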
Args | |
---|---|
tensors | A dataset element. |

Returns | |
---|---|
Dataset | A Dataset. |
interleave
interleave(
map_func, cycle_length=AUTOTUNE, block_length=1, num_parallel_calls=None
)
Maps map_func across this dataset, and interleaves the results.
For example, you can use Dataset.interleave() to process many input files
concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records from
# each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt", ...]
dataset = (Dataset.from_tensor_slices(filenames)
           .interleave(lambda x:
               TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
               cycle_length=4, block_length=16))
The cycle_length and block_length arguments control the order in which
elements are produced. cycle_length controls the number of input elements that
are processed concurrently. If you set cycle_length to 1, this transformation
will handle one input element at a time, and will produce identical results to
tf.data.Dataset.flat_map. In general, this transformation will apply map_func
to cycle_length input elements, open iterators on the returned Dataset
objects, and cycle through them producing block_length consecutive elements
from each iterator, and consuming the next input element each time it reaches
the end of an iterator.
For example:
a = Dataset.range(1, 6)  # ==> [ 1, 2, 3, 4, 5 ]

# NOTE: New lines indicate "block" boundaries.
a.interleave(lambda x: Dataset.from_tensors(x).repeat(6),
             cycle_length=2, block_length=4)  # ==> [1, 1, 1, 1,
                                               #      2, 2, 2, 2,
                                               #      1, 1,
                                               #      2, 2,
                                               #      3, 3, 3, 3,
                                               #      4, 4, 4, 4,
                                               #      3, 3,
                                               #      4, 4,
                                               #      5, 5, 5, 5,
                                               #      5, 5]
Args | |
---|---|
map_func | A function mapping a dataset element to a dataset. |
cycle_length | (Optional.) The number of input elements that will be processed concurrently. If not specified, the value will be derived from the number of available CPU cores. If the num_parallel_calls argument is set to tf.data.experimental.AUTOTUNE, the cycle_length argument also identifies the maximum degree of parallelism. |
block_length | (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. |
num_parallel_calls | (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. |