tf.distribute.Strategy API when in a replica context. To be used inside your replicated step function, such as in a tf.distribute.Strategy.run call.
__init__( strategy, replica_id_in_sync_group )
Initialize self. See help(type(self)) for accurate signature.
devices
The devices this replica is to be executed on, as a tuple of strings.

num_replicas_in_sync
Returns the number of replicas over which gradients are aggregated.

replica_id_in_sync_group
Which replica is being defined, from 0 to num_replicas_in_sync - 1.
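For example, a minimal sketch of reading these properties inside a step function (assuming TF 2.x with tf.distribute.Strategy.run and a MirroredStrategy; the device names printed depend on the host):

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()

    def step_fn():
        # Inside strategy.run, get_replica_context() returns the
        # ReplicaContext for the current replica.
        ctx = tf.distribute.get_replica_context()
        tf.print("replica", ctx.replica_id_in_sync_group,
                 "of", ctx.num_replicas_in_sync,
                 "on", ctx.devices)
        return ctx.replica_id_in_sync_group

    # Runs step_fn once per replica; returns per-replica values.
    per_replica_ids = strategy.run(step_fn)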
__exit__( exception_type, exception_value, traceback )
merge_call( merge_fn, args=(), kwargs=None )
Merge args across replicas and run merge_fn in a cross-replica context.
This allows communication and coordination when there are multiple calls to a model function triggered by a call to tf.distribute.StrategyExtended.call_for_each_replica; see that method for an explanation.
If not inside a distributed scope, this is equivalent to:

    strategy = tf.distribute.get_strategy()
    with cross-replica-context(strategy):
        return merge_fn(strategy, *args, **kwargs)
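A hedged sketch of that equivalence (assuming eager TF 2.x and no strategy in scope, where get_replica_context() yields a default ReplicaContext):

    import tensorflow as tf

    # Outside any strategy scope, merge_call simply invokes merge_fn
    # with the default strategy as its first argument.
    ctx = tf.distribute.get_replica_context()
    result = ctx.merge_call(lambda strategy, x: x + 1.0,
                            args=(tf.constant(1.0),))
    print(result)  # tf.Tensor(2.0, shape=(), dtype=float32)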
Args:
merge_fn: Function that joins arguments from threads that are given as PerReplica values. It accepts a tf.distribute.Strategy object as the first argument.
args: List or tuple with positional per-thread arguments for merge_fn.
kwargs: Dict with keyword per-thread arguments for merge_fn.
Returns:
The return value of merge_fn, except for PerReplica values which are unpacked.
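Putting it together, a sketch of summing a per-replica value via merge_call (assuming TF 2.x and a MirroredStrategy; strategy.reduce is one way to combine the PerReplica argument, not the only one):

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()

    def merge_fn(strategy, per_replica_value):
        # In the cross-replica context, per_replica_value arrives as a
        # PerReplica holding one tensor per replica; reduce it to a sum.
        return strategy.reduce(tf.distribute.ReduceOp.SUM,
                               per_replica_value, axis=None)

    def step_fn():
        ctx = tf.distribute.get_replica_context()
        # Each replica contributes its (1-based) id; every replica gets
        # back the sum across all replicas.
        local = tf.cast(ctx.replica_id_in_sync_group + 1, tf.float32)
        return ctx.merge_call(merge_fn, args=(local,))

    print(strategy.run(step_fn))  # sum of 1..num_replicas on each replica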