A CrossDeviceOps implementation that copies values to one device to reduce.

Inherits From: CrossDeviceOps

This implementation always copies values to one device to reduce them, then broadcasts the reduced values to the destinations. It doesn't support efficient batching.
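The reduce-then-broadcast pattern described above can be sketched in plain Python. This is a conceptual sketch only: dict keys stand in for devices, the "copy" step is a no-op, and none of these names come from the TensorFlow API.

```python
def reduce_to_one_device(per_device_values, reduce_to_device, destinations,
                         accumulation_fn=sum):
    """Sketch: copy every value to one device, reduce there, then broadcast.

    `reduce_to_device` is kept for illustration; in this sketch the copy to
    the reduction device is a no-op.
    """
    # Step 1: "copy" all per-device values to the reduction device.
    gathered = list(per_device_values.values())
    # Step 2: reduce on that single device.
    reduced = accumulation_fn(gathered)
    # Step 3: broadcast the reduced value back to every destination device.
    return {dest: reduced for dest in destinations}

per_device = {"GPU:0": 1.0, "GPU:1": 2.0, "GPU:2": 3.0}
result = reduce_to_one_device(per_device, "GPU:0", per_device.keys())
# Every destination now holds the same reduced value, 6.0.
```

Because every value moves through a single device, the reduction device can become a bandwidth bottleneck, which is why this implementation is simple but not the most efficient choice for many-GPU setups.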

Here is how you can use ReductionToOneDevice in tf.distribute.MirroredStrategy:

  strategy = tf.distribute.MirroredStrategy(
      cross_device_ops=tf.distribute.ReductionToOneDevice())

Args:

reduce_to_device: the intermediate device to reduce to. If None, reduces to the first device in the destinations of the reduce method.
accumulation_fn: a function that does accumulation. If None, tf.math.add_n is used.
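To illustrate what the default accumulation_fn does: tf.math.add_n sums a list of same-shaped tensors elementwise. The stand-in below mimics that behavior with plain Python lists instead of tensors; it is a sketch, not the TensorFlow implementation.

```python
def add_n(inputs):
    """Elementwise sum across a list of same-shaped 'tensors' (lists here)."""
    return [sum(elems) for elems in zip(*inputs)]

# One gradient vector per replica; accumulation sums them elementwise.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(add_n(grads))  # → [9.0, 12.0]
```

Any function with this signature (a list of tensors in, one tensor out) could be passed as accumulation_fn.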

Methods

batch_reduce

Reduce values to destinations in batches.

See tf.distribute.StrategyExtended.batch_reduce_to. This can only be called in the cross-replica context.

Args:

reduce_op: a tf.distribute.ReduceOp specifying how values should be combined.
value_destination_pairs: a sequence of (value, destinations) pairs. See tf.distribute.CrossDeviceOps.reduce for descriptions.
options: a tf.distribute.experimental.CommunicationOptions. See that class for details.

Returns:

A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs.
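The shape of this call can be sketched in plain Python. Consistent with the note above that this implementation does not batch efficiently, each (value, destinations) pair is simply reduced independently; the string reduce ops and dict "devices" are stand-ins, not the TensorFlow API.

```python
def batch_reduce(reduce_op, value_destination_pairs):
    """Sketch: reduce each (per-replica values, destinations) pair in turn."""
    results = []
    for per_replica_values, destinations in value_destination_pairs:
        if reduce_op == "SUM":
            reduced = sum(per_replica_values)
        elif reduce_op == "MEAN":
            reduced = sum(per_replica_values) / len(per_replica_values)
        else:
            raise ValueError(f"unsupported reduce_op: {reduce_op}")
        # Broadcast the reduced value to every destination device.
        results.append({dest: reduced for dest in destinations})
    return results

pairs = [([1.0, 3.0], ["GPU:0", "GPU:1"]),     # one pair per reduced value
         ([10.0, 20.0], ["GPU:0", "GPU:1"])]
out = batch_reduce("MEAN", pairs)
# out[0] maps each destination to 2.0; out[1] maps each to 15.0.
```

As in the real API, the result has one entry per pair in value_destination_pairs, each mirrored across that pair's destinations.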