A state & compute distribution policy on a list of devices.

tf.compat.v1.distribute.Strategy(extended)
See the guide for overview and examples.
cluster_resolver

Returns the cluster resolver associated with this strategy.

In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy, there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set it. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. For more information, please see the API docstring of tf.distribute.cluster_resolver.ClusterResolver.
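As a minimal sketch of the single-worker case (the choice of tf.distribute.MirroredStrategy here is an assumption for illustration; it is a single-worker strategy and so has no cluster to resolve):

```python
import tensorflow as tf

# Single-worker strategy: there is no multi-worker cluster to resolve,
# so this property is expected to be None.
strategy = tf.distribute.MirroredStrategy()
resolver = strategy.cluster_resolver
print(resolver)  # expected: None for a single-worker strategy
```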
num_replicas_in_sync

Returns the number of replicas over which gradients are aggregated.
distribute_datasets_from_function(dataset_fn, options=None)

Distributes tf.data.Dataset instances created by calls to dataset_fn.

The dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the dataset returned from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function.

dataset_fn will be called on the CPU device of each of the workers, and each call generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step).
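A minimal sketch of a conforming dataset_fn following the contract above (the strategy choice and the synthetic data are assumptions for illustration; any tf.distribute strategy works):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 8

def dataset_fn(input_context):
    # Per-replica batch size = global batch size / number of replicas in sync.
    batch_size = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
    d = tf.data.Dataset.from_tensors([[1.0]]).repeat(64).batch(batch_size)
    # Shard by input pipeline, i.e. one input pipeline per worker.
    return d.shard(input_context.num_input_pipelines,
                   input_context.input_pipeline_id)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
```

Each element of dist_dataset then supplies one per-replica-sized batch to every replica on the worker.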
This method can be used for several purposes. First, it allows you to
specify your own batching and sharding logic. (In contrast,
tf.distribute.Strategy.experimental_distribute_dataset does batching and sharding
for you.) For example, where
experimental_distribute_dataset is unable to shard the input files, this
method might be used to manually shard the dataset (avoiding the slow
fallback behavior in
experimental_distribute_dataset). In cases where the
dataset is infinite, this sharding can be done by creating dataset replicas
that differ only in their random seed.
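The random-seed idea can be sketched without any TensorFlow machinery: derive each dataset replica's seed from the worker's input pipeline id, so the replicas run identical pipelines but draw different samples (the names below are illustrative, not part of the tf.distribute API):

```python
import random

def seeded_stream(pipeline_id, base_seed=42, num_draws=4):
    """Sketch of one worker's 'dataset replica' for an infinite random
    dataset: the replicas differ only in their derived seed."""
    rng = random.Random(base_seed + pipeline_id)
    return [rng.random() for _ in range(num_draws)]

# Workers 0 and 1 run the same pipeline but draw different samples,
# while re-running a given worker reproduces its own stream.
```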