tfds.ReadConfig


Configures input reading pipeline.

Attributes

options tf.data.Options(), dataset options. These options are added to the default values defined in tfrecord_reader.py. Note that when shuffle_files is True and no seed is defined, experimental_deterministic will be set to False internally, unless it is set here.
try_autocache If True (default) and the dataset satisfies the right conditions (dataset small enough, files not shuffled, ...), the dataset will be cached during the first iteration (through ds = ds.cache()).
shuffle_seed tf.int64, seed forwarded to tf.data.Dataset.shuffle when shuffle_files=True.
shuffle_reshuffle_each_iteration bool, forwarded to tf.data.Dataset.shuffle when shuffle_files=True.
interleave_cycle_length int, forwarded to tf.data.Dataset.interleave. Defaults to 16.
interleave_block_length int, forwarded to tf.data.Dataset.interleave. Defaults to 16.
experimental_interleave_sort_fn Function with signature List[FileDict] -> List[FileDict], which takes the list of dict(file: str, take: int, skip: int) and returns the modified version to read. This can be used to sort/shuffle the shards to read in a custom order, instead of relying on shuffle_files=True.
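As a sketch of the signature above, the function below (the name sort_by_filename is illustrative, not part of the tfds API) orders the shard instruction dicts by file name, yielding a deterministic read order without relying on shuffle_files=True:

```python
from typing import Dict, List, Union

# Each instruction is a dict with keys "file" (str), "take" (int), "skip" (int),
# matching the List[FileDict] -> List[FileDict] signature described above.
FileDict = Dict[str, Union[str, int]]

def sort_by_filename(file_instructions: List[FileDict]) -> List[FileDict]:
    """Returns the shard instructions sorted by file name."""
    return sorted(file_instructions, key=lambda d: d["file"])

shards = [
    {"file": "data-00002", "take": -1, "skip": 0},
    {"file": "data-00000", "take": -1, "skip": 0},
    {"file": "data-00001", "take": -1, "skip": 0},
]
print([s["file"] for s in sort_by_filename(shards)])
# → ['data-00000', 'data-00001', 'data-00002']
```

A function with this shape would be passed as experimental_interleave_sort_fn; it may also shuffle, filter, or reorder shards in any custom way, as long as it returns the modified list.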
interleave_parallel_reads

Methods

__eq__

Return self==value.

__ge__

Automatically created by attrs.

__gt__

Automatically created by attrs.

__le__

Automatically created by attrs.

__lt__

Automatically created by attrs.

__ne__

Checks equality and either forwards NotImplemented or returns the negated result.