tfds.ReadConfig


Configures input reading pipeline.

tfds.ReadConfig(
    options=tf.data.Options(),
    shuffle_seed=None,
    shuffle_reshuffle_each_iteration=None,
    interleave_parallel_reads=16,
    interleave_block_length=16,
    experimental_interleave_sort_fn=None
)

Attributes:

  • options: tf.data.Options, dataset options to apply. These options are merged with the default values defined in tfrecord_reader.py. Note that when shuffle_files is True and no seed is defined, experimental_deterministic is set to False internally, unless it is explicitly set here.
  • shuffle_seed: tf.int64, seed forwarded to tf.data.Dataset.shuffle when shuffle_files=True.
  • shuffle_reshuffle_each_iteration: bool, forwarded to tf.data.Dataset.shuffle when shuffle_files=True.
  • interleave_parallel_reads: int, forwarded to tf.data.Dataset.interleave. Defaults to 16.
  • interleave_block_length: int, forwarded to tf.data.Dataset.interleave. Defaults to 16.
  • experimental_interleave_sort_fn: function with signature List[FileDict] -> List[FileDict], which takes the list of dict(file: str, take: int, skip: int) shard instructions and returns a modified version defining which shards to read, and in what order. This can be used to sort or shuffle the shards to read in a custom order, instead of relying on shuffle_files=True.
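The sort function operates on plain Python dicts, so it can be written and tested without touching TensorFlow. Below is a minimal sketch of a custom experimental_interleave_sort_fn; the function name, shard filenames, and instruction values are illustrative assumptions, not part of tfds itself.

```python
def reverse_shard_order(file_instructions):
    # Each instruction is a dict of the form {'file': str, 'take': int, 'skip': int}.
    # Return them sorted by filename in reverse, so shards are read back to front.
    return sorted(file_instructions, key=lambda d: d['file'], reverse=True)

# Example instructions shaped like those tfds passes in (hypothetical filenames):
shards = [
    {'file': 'train.tfrecord-00000-of-00002', 'take': -1, 'skip': 0},
    {'file': 'train.tfrecord-00001-of-00002', 'take': -1, 'skip': 0},
]

reordered = reverse_shard_order(shards)
print([s['file'] for s in reordered])
# ['train.tfrecord-00001-of-00002', 'train.tfrecord-00000-of-00002']

# With tfds available, the function would be wired in via the read config:
# read_config = tfds.ReadConfig(experimental_interleave_sort_fn=reverse_shard_order)
# ds = tfds.load('mnist', split='train', read_config=read_config)
```

Because the sort function fully determines shard order, a deterministic sort like this replaces the file-level randomness that shuffle_files=True would otherwise provide.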

Methods

__eq__

__eq__(
    other
)

Return self==value.

__ge__

__ge__(
    other
)

Automatically created by attrs.

__gt__

__gt__(
    other
)

Automatically created by attrs.

__le__

__le__(
    other
)

Automatically created by attrs.

__lt__

__lt__(
    other
)

Automatically created by attrs.

__ne__

__ne__(
    other
)

Check equality and either forward NotImplemented or return the negated result.