A distribution strategy for synchronous training on multiple workers.
tf.distribute.MultiWorkerMirroredStrategy(cluster_resolver=None, communication_options=None)
This strategy implements synchronous distributed training across multiple
workers, each with potentially multiple GPUs. Similar to
tf.distribute.MirroredStrategy, it replicates all variables and computations
to each local device. The difference is that it uses a distributed collective
implementation (e.g. all-reduce), so that multiple workers can work together.
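A minimal sketch of typical usage, assuming each worker's cluster membership is supplied through the TF_CONFIG environment variable (the default TFConfigClusterResolver is used when cluster_resolver is None); the model and data below are placeholders for illustration, not part of the API.

import numpy as np
import tensorflow as tf

# Assumes TF_CONFIG is set on each worker; with it unset, the strategy
# falls back to running as a single worker, so this sketch still runs locally.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

# The model (and its variables) must be created inside the strategy's scope
# so that the variables are replicated on every device of every worker.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# Illustrative in-memory data only; a real job would feed per-worker shards.
x = np.random.random((64, 8)).astype("float32")
y = np.random.random((64, 1)).astype("float32")

# Training is synchronous: gradients from all replicas on all workers are
# combined with a collective all-reduce before each variable update.
model.fit(x, y, epochs=1, batch_size=16)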