tf.distribute.HierarchicalCopyAllReduce

Hierarchical copy all-reduce implementation of CrossDeviceOps.

Inherits From: CrossDeviceOps

It reduces tensors to one GPU along the edges of a hierarchy and broadcasts the result back to each GPU along the same path. For the batch API, tensors are repacked or aggregated for more efficient cross-device transport.

This is a reduction designed for the Nvidia DGX-1, and it assumes the GPUs are connected as they are on a DGX-1 machine. If your GPU interconnect topology differs, it is likely to be slower than tf.distribute.ReductionToOneDevice.

For reduces that are not all-reduce, it falls back to tf.distribute.ReductionToOneDevice.
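As a sketch of typical usage, the class is passed as the cross_device_ops of a tf.distribute.MirroredStrategy; the strategy then uses hierarchical copy for its all-reduces across replicas:

```python
import tensorflow as tf

# Use hierarchical-copy all-reduce for cross-device aggregation.
# On hardware without a DGX-1-like GPU topology, this may be slower
# than the default or than tf.distribute.ReductionToOneDevice.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())

print(strategy.num_replicas_in_sync)
```

With no GPUs available, MirroredStrategy falls back to a single CPU replica, so the snippet is runnable anywhere TensorFlow is installed; the choice of cross_device_ops only affects performance, not results.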