tf.distribute.CrossDeviceOps

Base class for cross-device reduction and broadcasting algorithms.

The main purpose of this class is to be passed to tf.distribute.MirroredStrategy to choose among the different cross-device communication implementations. Prefer the methods of tf.distribute.Strategy over those of this class.
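For example, a concrete subclass can be selected when constructing the strategy (HierarchicalCopyAllReduce is one of the built-in CrossDeviceOps subclasses; any other subclass such as tf.distribute.NcclAllReduce can be passed the same way):

```python
import tensorflow as tf

# Choose a concrete cross-device implementation when building the strategy.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())

# Reductions then go through tf.distribute.Strategy methods, which delegate
# to the chosen CrossDeviceOps under the hood.
print(strategy.num_replicas_in_sync)
```

On a single-device machine this still works, with one replica in sync.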

Implementations:

tf.distribute.ReductionToOneDevice
tf.distribute.NcclAllReduce
tf.distribute.HierarchicalCopyAllReduce

Methods

batch_reduce

Reduce values to destinations in batches.

See tf.distribute.StrategyExtended.batch_reduce_to. This can only be called in the cross-replica context.

Args
reduce_op: a tf.distribute.ReduceOp specifying how values should be combined.
value_destination_pairs: a sequence of (value, destinations) pairs. See tf.distribute.CrossDeviceOps.reduce for descriptions.
options: a tf.distribute.experimental.CommunicationOptions; see that class for details.

Returns
A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs.

Raises
ValueError: if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations.
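To make the contract concrete, here is a framework-free sketch of what a SUM batch reduce over (value, destinations) pairs computes. The names (sum_reduce, batch_reduce_sketch) are illustrative, not part of the TensorFlow API; each "value" stands in for a per-replica collection of tensors, and destinations is ignored here because device placement is omitted:

```python
# Illustrative sketch only: plain-Python semantics of a SUM batch reduce.

def sum_reduce(per_replica_values):
    """Combine one value's per-replica components, as ReduceOp.SUM would."""
    return sum(per_replica_values)

def batch_reduce_sketch(reduce_fn, value_destination_pairs):
    """Return one reduced result per (value, destinations) pair."""
    return [reduce_fn(value) for value, _destinations in value_destination_pairs]

pairs = [([1.0, 2.0], "/gpu:0"),   # a value replicated on two devices
         ([3.0, 4.0], "/gpu:1")]
print(batch_reduce_sketch(sum_reduce, pairs))  # [3.0, 7.0]
```

The result list is the same length as value_destination_pairs, matching the Returns section above.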

batch_reduce_implementation

The implementation of batch_reduce.

Subclasses should override this method rather than batch_reduce itself.

Args
reduce_op: a tf.distribute.ReduceOp specifying how values should be combined.
value_destination_pairs: a sequence of (value, destinations) pairs. See reduce for descriptions.
options: a tf.distribute.experimental.CommunicationOptions; see that class for details.

Returns
A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs.

Raises
ValueError: if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations.
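The public batch_reduce performs shared validation and then dispatches to batch_reduce_implementation, so subclasses override only the latter. A minimal framework-free sketch of that split (all class and helper names besides the two documented method names are hypothetical):

```python
# Sketch of the batch_reduce / batch_reduce_implementation split; a real
# subclass would issue cross-device communication instead of plain sums.

class CrossDeviceOpsSketch:
    def batch_reduce(self, reduce_fn, value_destination_pairs):
        # Shared validation lives in the base method, mirroring the
        # ValueError documented above.
        if not all(isinstance(p, tuple) and len(p) == 2
                   for p in value_destination_pairs):
            raise ValueError(
                "value_destination_pairs must be (value, destinations) tuples")
        return self.batch_reduce_implementation(reduce_fn,
                                                value_destination_pairs)

    def batch_reduce_implementation(self, reduce_fn, value_destination_pairs):
        raise NotImplementedError  # subclasses override this hook

class NaiveOps(CrossDeviceOpsSketch):
    def batch_reduce_implementation(self, reduce_fn, value_destination_pairs):
        # Reduce each pair independently; a real implementation would
        # batch the communication for efficiency.
        return [reduce_fn(v) for v, _dest in value_destination_pairs]

ops = NaiveOps()
print(ops.batch_reduce(sum, [([1, 2], "cpu"), ([3, 4], "cpu")]))  # [3, 7]
```

Keeping validation in the public method means every subclass raises the documented ValueError consistently without repeating the check.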

broadcast
