tf.distribute.experimental.MultiWorkerMirroredStrategy

A distribution strategy for synchronous training on multiple workers.

Inherits From: Strategy

This strategy implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it creates copies of all variables in the model on each device across all workers.

It uses CollectiveOps's implementation of multi-worker all-reduce to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes.

By default it uses all local GPUs, or the CPU if no GPUs are available, for single-worker training.
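
For example, a minimal single-worker sketch might look like the following; the Keras model and in-memory dataset are hypothetical placeholders, not part of this API:

import tensorflow as tf

# With no 'TF_CONFIG' set, this uses all local GPUs (or the CPU) on a
# single worker.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

# Variables created under the scope are mirrored on every replica.
with strategy.scope():
  model = tf.keras.Sequential(
      [tf.keras.layers.Dense(1, input_shape=(10,))])
  model.compile(optimizer='sgd', loss='mse')

# Hypothetical in-memory dataset standing in for real training data.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 10]), tf.random.normal([64, 1]))).batch(8)
model.fit(dataset, epochs=1)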

When the 'TF_CONFIG' environment variable is set, it parses cluster_spec, task_type and task_id from 'TF_CONFIG' and turns into a multi-worker strategy that mirrors the model on the GPUs of all machines in the cluster. In the current implementation, it uses all GPUs in the cluster and assumes all workers have the same number of GPUs.
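
For illustration, the process running as the first of two workers might set 'TF_CONFIG' as follows before creating the strategy. The hostnames and ports are placeholders; each worker runs the same program with its own task index:

import json
import os

import tensorflow as tf

# Hypothetical two-worker cluster; 'TF_CONFIG' must be set before the
# strategy is created.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['host1:12345', 'host2:23456']},
    'task': {'type': 'worker', 'index': 0}
})

# cluster_spec, task_type and task_id are parsed from 'TF_CONFIG'.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()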

You can also pass a distribute.cluster_resolver.ClusterResolver instance when instantiating the strategy. The task_type, task_id etc. will be parsed from the resolver instance instead of from the TF_CONFIG env var.
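 
As a sketch, a resolver can also be constructed explicitly, for example with a SimpleClusterResolver built from a hypothetical cluster specification:

import tensorflow as tf

# Hypothetical cluster specification; addresses are placeholders.
cluster_spec = tf.train.ClusterSpec(
    {'worker': ['host1:12345', 'host2:23456']})
resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
    cluster_spec, task_type='worker', task_id=0)

# task_type, task_id etc. now come from the resolver, not TF_CONFIG.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    cluster_resolver=resolver)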

It supports both eager mode and graph mode. However, for eager mode, it has to set up the eager context in its constructor and therefore all ops in eager mode have to run after the strategy object is created.

Args

communication Optional Enum of type distribute.experimental.CollectiveCommunication. This provides a way for the user to override the choice of collective op communication (see the example below). Possible values include AUTO, RING, and NCCL.
cluster_resolver Optional distribute.cluster_resolver.ClusterResolver object. The default ClusterResolver used is a TFConfigClusterResolver, which is instantiated from the TF_CONFIG env var.
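
For instance, the automatic choice can be overridden at construction time; this is a hedged sketch, and NCCL requires GPUs on every worker:

import tensorflow as tf

# Override the runtime's automatic choice and force NCCL-based all-reduce.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)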

Attributes

cluster_resolver Returns the cluster resolver associated with this strategy.

As a multi-worker strategy, tf.distribute.experimental.MultiWorkerMirroredStrategy provides the associated tf.distribute.cluster_resolver.ClusterResolver. If the user provides one in __init__, that instance is returned; if the user does not, a default TFConfigClusterResolver is provided.

extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns the number of replicas over which gradients are aggregated (see the sketch below).
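
As an illustrative sketch, num_replicas_in_sync is commonly used to compute a global batch size from a per-replica batch size:

import tensorflow as tf

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

# Scale the per-replica batch size up to the global batch size used
# across all replicas that aggregate gradients together.
per_replica_batch_size = 32
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync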

Methods

experimental_assign_to_logical_device

Adds annotation that tensor will be assigned to a logical device.


# Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 1, 1, 2],
    num_replicas=4)
strategy = tf.distribute.TPUStrategy(
    resolver, experimental_device_assignment=device_assignment)
# `inputs` is assumed to be a dataset created elsewhere, for example via
# strategy.experimental_distribute_dataset.
iterator = iter(inputs)

@tf.function()
def step_fn(inputs):
  output = tf.add(inputs, inputs)

  # Add operation will be executed on logical device 0.
  output = strategy.experimental_assign_to_logical_device(output, 0)
  return output

strategy.run(step_fn, args=(next(iterator),))

Args
tensor Input tensor to annotate.
logical_device_id Id of the logical core to which the tensor will be assigned.

Raises
ValueError The logical device id presented is not consistent with the total number of partitions specified by the device assignment.

Returns
Annotated tensor with identical value as tensor.

experimental_distribute_dataset