tf.distribute.MultiWorkerMirroredStrategy

A distribution strategy for synchronous training on multiple workers.

Inherits From: Strategy

This strategy implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it replicates all variables and computations to each local device. The difference is that it uses a distributed collective implementation (e.g. all-reduce), so that multiple workers can work together.
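A minimal usage sketch, assuming a small Keras model (the layer sizes, optimizer, and loss below are arbitrary placeholders): the strategy is created first, and the model is built and compiled inside strategy.scope() so that its variables are mirrored across every device of every worker. Running this meaningfully requires launching the same program on each worker with the cluster configured as described below.

```python
import tensorflow as tf

# Create the strategy; by default it reads cluster configuration from
# the TF_CONFIG environment variable via TFConfigClusterResolver.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created here are replicated on every device of every worker.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# During training, each replica computes gradients on its share of the data;
# the gradients are combined with a collective all-reduce before variables
# are updated, which keeps all replicas in sync.
```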

You need to launch your program on each worker and configure cluster_resolver correctly. For example, if you are using