A one-machine strategy that puts all variables on a single device.
```python
tf.distribute.experimental.CentralStorageStrategy(
    compute_devices=None, parameter_device=None
)
```
Variables are assigned to local CPU or the only GPU. If there is more than one GPU, compute operations (other than variable update operations) will be replicated across all GPUs.
For example:

```python
strategy = tf.distribute.experimental.CentralStorageStrategy()
# Create a dataset
ds = tf.data.Dataset.range(5).batch(2)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)

with strategy.scope():
    @tf.function
    def train_step(val):
        return val + 1

    # Iterate over the distributed dataset
    for x in dist_dataset:
        # process dataset elements
        strategy.run(train_step, args=(x,))
```
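The snippet above discards the per-replica results of `strategy.run`. As a hedged sketch (assuming a single-machine setup where the replica count is whatever the strategy detects), the results can be aggregated across replicas with `strategy.reduce`:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()
ds = tf.data.Dataset.range(5).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(ds)

with strategy.scope():
    @tf.function
    def train_step(val):
        return val + 1

sums = []
for x in dist_dataset:
    per_replica = strategy.run(train_step, args=(x,))
    # Sum the per-replica outputs (and the batch axis) into one scalar
    # per step.
    sums.append(strategy.reduce(tf.distribute.ReduceOp.SUM,
                                per_replica, axis=0))
```

On a CPU-only machine there is a single replica, so the reduction simply sums each batch.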
`cluster_resolver`

Returns the cluster resolver associated with this strategy.

In general, when using a multi-worker `tf.distribute` strategy such as `tf.distribute.experimental.MultiWorkerMirroredStrategy` or `tf.distribute.TPUStrategy()`, there is a `tf.distribute.cluster_resolver.ClusterResolver` associated with the strategy, and such an instance is returned by this property. Strategies that intend to have an associated `tf.distribute.cluster_resolver.ClusterResolver` must set it. Single-worker strategies usually do not have a `tf.distribute.cluster_resolver.ClusterResolver`, and in those cases this property returns `None`.

For more information, please see the API docstring of `tf.distribute.cluster_resolver.ClusterResolver`.
`num_replicas_in_sync`

Returns number of replicas over which gradients are aggregated.
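As a small illustration (the replica count depends on the machine; on a CPU-only host it is 1, and with multiple GPUs it equals the number of GPUs), this property is commonly used to derive a per-replica batch size from a global one. `GLOBAL_BATCH_SIZE` below is an assumed value for illustration:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()
GLOBAL_BATCH_SIZE = 64  # assumed value for illustration

# Each compute device is one replica; divide the global batch size
# evenly among them.
per_replica_batch_size = GLOBAL_BATCH_SIZE // strategy.num_replicas_in_sync
```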
`distribute_datasets_from_function(dataset_fn, options=None)`

Distributes `tf.data.Dataset` instances created by calls to `dataset_fn`.

The argument `dataset_fn` that users pass in is an input function that has a `tf.distribute.InputContext` argument and returns a `tf.data.Dataset` instance. It is expected that the dataset returned from `dataset_fn` is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. `distribute_datasets_from_function` does not batch or shard the `tf.data.Dataset` instance returned from the input function.

`dataset_fn` will be called on the CPU device of each of the workers, and each call generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step).

This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic.
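The contract above can be sketched as follows. This is a minimal, hedged example (the dataset contents, `GLOBAL_BATCH_SIZE`, and the trivial `step` function are assumptions for illustration): `dataset_fn` itself does the sharding and the per-replica batching, using the `tf.distribute.InputContext` it receives:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()
GLOBAL_BATCH_SIZE = 8  # assumed value for illustration

def dataset_fn(input_context):
    # Per-replica batch size: global batch size divided by the number
    # of replicas in sync.
    batch_size = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
    ds = tf.data.Dataset.range(32)
    # Shard by input pipeline, then batch by the per-replica size --
    # distribute_datasets_from_function does neither for us.
    ds = ds.shard(input_context.num_input_pipelines,
                  input_context.input_pipeline_id)
    return ds.batch(batch_size)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)

@tf.function
def step(batch):
    return tf.reduce_sum(batch)

total = 0
for batch in dist_dataset:
    per_replica = strategy.run(step, args=(batch,))
    total += strategy.reduce(tf.distribute.ReduceOp.SUM,
                             per_replica, axis=None)
```

Every element of the range is dequeued exactly once across replicas, so the accumulated `total` equals the sum of the full range.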