`tf.contrib.data.sloppy_interleave(map_func, cycle_length, block_length=1)`
A non-deterministic version of the `Dataset.interleave()` transformation. (deprecated)
THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use `tf.contrib.data.parallel_interleave(..., sloppy=True)`.

`sloppy_interleave()` maps `map_func` across its input and non-deterministically interleaves the results.
The resulting dataset is almost identical to `interleave`. The key difference is that if retrieving a value from a given output iterator would cause `get_next` to block, that iterator will be skipped and consumed when next available. If consuming from all iterators would cause the `get_next` call to block, the `get_next` call blocks until the first value is available.
If the underlying datasets produce elements as fast as they are consumed, the `sloppy_interleave` transformation behaves identically to `interleave`. However, if an underlying dataset would block the consumer, `sloppy_interleave` can violate the round-robin order (that `interleave` strictly obeys), producing an element from a different underlying dataset instead.
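The skip-and-retry scheduling described above can be sketched in plain Python. This is an illustrative simulation, not the TensorFlow implementation: the `sloppy_interleave_sim` helper and its `slow` parameter are hypothetical, with list membership in `slow` standing in for "fetching this element would block the first time it is tried".

```python
from collections import deque

def sloppy_interleave_sim(sources, slow):
    """Illustrative simulation of sloppy interleaving (not the TF code).

    `sources` is a list of element lists; `slow` is a set of elements whose
    first retrieval would block.  A blocked iterator is skipped and retried
    later, so output order can deviate from strict round-robin.
    """
    queue = deque((i, 0) for i in range(len(sources)))  # (source, position)
    blocked_once = set()
    out = []
    while queue:
        i, pos = queue.popleft()
        elem = sources[i][pos]
        if elem in slow and elem not in blocked_once:
            blocked_once.add(elem)  # would block: skip, consume when available
        else:
            out.append(elem)        # element is ready: emit it
            pos += 1
        if pos < len(sources[i]):
            queue.append((i, pos))  # source not exhausted: back of the queue
    return out

# Strict round-robin order would be a0, b0, a1, b1.  Because fetching b0
# would block once, the sloppy order pulls a1 ahead of b0.
print(sloppy_interleave_sim([["a0", "a1"], ["b0", "b1"]], slow={"b0"}))
# → ['a0', 'a1', 'b0', 'b1']
```

With an empty `slow` set, nothing ever blocks and the simulation reproduces the strict round-robin order of `interleave`, matching the "behaves identically" claim above.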
Example usage:

```python
# Preprocess 4 files concurrently.
filenames = tf.data.Dataset.list_files("/path/to/data/train*.tfrecords")
dataset = filenames.apply(
    tf.contrib.data.sloppy_interleave(
        lambda filename: tf.data.TFRecordDataset(filename),
        cycle_length=4))
```
WARNING: The order of elements in the resulting dataset is not deterministic. Use `Dataset.interleave()` if you want the elements to have a deterministic order.
Args:
  map_func: A function mapping a nested structure of tensors (having shapes and types defined by `self.output_types`) to a `Dataset`.
  cycle_length: The number of input `Dataset`s to interleave from in parallel.
  block_length: The number of consecutive elements to pull from an input `Dataset` before advancing to the next input `Dataset`. Note: `sloppy_interleave` will skip the remainder of elements in the `block_length` in order to avoid blocking.
Returns:
  A `Dataset` transformation function, which can be passed to `tf.data.Dataset.apply`.
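The effect of `block_length` on ordering can be illustrated with a small pure-Python sketch. This is not the TensorFlow implementation: the `block_interleave` helper is hypothetical, it models only the deterministic ordering (no sloppiness or blocking), and it assumes the number of sources does not exceed `cycle_length`.

```python
def block_interleave(sources, cycle_length, block_length):
    """Sketch of deterministic interleave ordering: pull `block_length`
    consecutive elements from each of up to `cycle_length` inputs in
    round-robin order (no blocking or sloppiness modeled)."""
    iters = [iter(s) for s in sources[:cycle_length]]
    live = list(range(len(iters)))
    out = []
    while live:
        for i in list(live):            # iterate a copy so removal is safe
            for _ in range(block_length):
                try:
                    out.append(next(iters[i]))
                except StopIteration:
                    live.remove(i)      # input exhausted: drop it
                    break
    return out

# With block_length=2, two elements come from each input per turn.
print(block_interleave([[1, 2, 3], [10, 20, 30]],
                       cycle_length=2, block_length=2))
# → [1, 2, 10, 20, 3, 30]
```

With `block_length=1` the same call yields the fully alternating order `[1, 10, 2, 20, 3, 30]`; under `sloppy_interleave`, any element whose retrieval would block is skipped rather than waited for, even mid-block.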