Run options for experimental_distribute_dataset(s_from_function).
tf.distribute.InputOptions(
    experimental_fetch_to_device=None,
    experimental_replication_mode=tf.distribute.InputReplicationMode.PER_WORKER,
    experimental_place_dataset_on_device=False,
    experimental_per_replica_buffer_size=1
)
This can be used to hold strategy-specific configuration when distributing input datasets.
import tensorflow as tf

# Set up TPUStrategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

dataset = tf.data.Dataset.range(16)
distributed_dataset_on_host = (
    strategy.experimental_distribute_dataset(
        dataset,
        tf.distribute.InputOptions(
            experimental_replication_mode=
                tf.distribute.InputReplicationMode.PER_WORKER,
            experimental_place_dataset_on_device=False,
            experimental_per_replica_buffer_size=1)))
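The same options object can also be passed to distribute_datasets_from_function, the other entry point named in the title. The sketch below uses MirroredStrategy instead of TPUStrategy (an assumption made so the example runs without TPU hardware); the InputOptions arguments shown are the documented defaults.

```python
import tensorflow as tf

# Assumption: MirroredStrategy stands in for TPUStrategy so this runs on CPU.
strategy = tf.distribute.MirroredStrategy()

def dataset_fn(input_context):
    # Shard the global batch of 4 across the replicas on this worker.
    batch_size = input_context.get_per_replica_batch_size(4)
    return tf.data.Dataset.range(16).batch(batch_size)

options = tf.distribute.InputOptions(
    experimental_fetch_to_device=None,
    experimental_replication_mode=tf.distribute.InputReplicationMode.PER_WORKER,
    experimental_per_replica_buffer_size=1)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn, options)

# With a single replica, each element is a plain batch tensor.
elems = [int(x) for batch in dist_dataset for x in batch]
print(elems)
```

On a single-replica setup this recovers the original 16 elements in order; with more replicas each step instead yields a per-replica value holding one shard of the batch per device.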