Mirrors variables to distribute across multiple devices and machines.
*** contrib version ***
This strategy uses one replica per device and sync replication for its multi-GPU version.
When `cluster_spec` is given by the `configure` method, it turns into the multi-worker version that works on multiple workers with in-graph replication. Note: `configure` will be called by higher-level APIs if running in a distributed environment.
There are several important concepts for distributed TensorFlow, e.g. 'in-graph replication' and 'synchronous training', and they have already been defined in TensorFlow's documentation on distributed training. The distribution strategy inherits these concepts, and in addition we clarify several more concepts:
- In-graph replication: the `client` creates a single `tf.Graph` that specifies tasks for devices on all workers. The `client` then creates a client session which will talk to the `master` service of a `worker`. Then the `master` will partition the graph and distribute the work to all participating workers.
- Worker: A `worker` is a TensorFlow `task` that usually maps to one physical machine. We will have multiple `workers` with different `task` indices. They all do similar things except for one worker checkpointing model variables, writing summaries, etc. in addition to its ordinary work.
The multi-worker version of this class maps one replica to one device on a worker. It mirrors all model variables on all replicas. For example, if you have 2 workers and each worker has 4 GPUs, it will create 8 copies of the model variables on these 8 GPUs. Then, like in MirroredStrategy, each replica performs its computation with its own copy of the variables unless it is in cross-replica mode, where variable or tensor reduction happens.
devices: a list of device strings.
num_gpus: number of GPUs. For local training, either specify `devices` or `num_gpus`. In distributed training, this must be specified as the number of GPUs on each worker.
num_gpus_per_worker: number of GPUs per worker. This is the same as `num_gpus`, and only one of `num_gpus` and `num_gpus_per_worker` can be specified.
cross_device_ops: optional, a descendant of `CrossDeviceOps`. If this is not set, the `configure` method will try to find the best one.
auto_shard_dataset: whether to auto-shard the dataset when there are multiple workers.
cross_tower_ops: Deprecated alias for `cross_device_ops`.
__init__( devices=None, num_gpus=None, num_gpus_per_worker=None, cross_device_ops=None, auto_shard_dataset=False, cross_tower_ops=None )
Initialize self. See help(type(self)) for accurate signature.
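For illustration, a minimal sketch of constructing this strategy, assuming the contrib class documented here is available as `tf.contrib.distribute.MirroredStrategy` in a TF 1.x program; device strings and GPU counts are illustrative:

    import tensorflow as tf

    # Local training: mirror variables across 2 GPUs on this machine.
    strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=2)

    # Equivalently, list the devices explicitly; only one of `devices` or
    # `num_gpus` should be given.
    strategy = tf.contrib.distribute.MirroredStrategy(
        devices=["/device:GPU:0", "/device:GPU:1"])

The later sketches below reuse this `strategy` object.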
extended: `tf.distribute.StrategyExtended` with additional methods.
num_replicas_in_sync: Returns number of replicas over which gradients are aggregated.
finalize()
Any final actions to be done at the end of all computations.
In eager mode, it executes any finalize actions as a side effect. In graph mode, it creates the finalize ops and returns them. For example, TPU shutdown ops.
Returns a list of ops to execute.
initialize()
Any initialization to be done before running any computations.
In eager mode, it executes any initialization as a side effect. In graph mode, it creates the initialization ops and returns them. For example, TPU initialize_system ops.
Returns a list of ops to execute.
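A hedged sketch of wiring these ops into a graph-mode program, assuming the `strategy` from the constructor sketch above:

    init_ops = strategy.initialize()   # e.g. TPU initialize_system ops
    final_ops = strategy.finalize()    # e.g. TPU shutdown ops
    with tf.Session() as sess:
      sess.run(init_ops)
      # ... run the training ops here ...
      sess.run(final_ops)

In eager mode neither call needs a session; the actions happen as side effects.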
experimental_run( fn, input_iterator=None )
Runs ops in `fn` on each replica, with inputs from `input_iterator`.
When eager execution is enabled, executes ops specified by `fn` on each replica. Otherwise, builds a graph to execute the ops on each replica.
Each replica will take a single, different input from the inputs provided by one `get_next` call on the input iterator.
`fn` may call `tf.distribute.get_replica_context()` to access members such as `replica_id_in_sync_group`.
IMPORTANT: Depending on the `DistributionStrategy` being used, and whether eager execution is enabled, `fn` may be called one or more times (once for each replica).
fn: function to run. The inputs to the function must match the outputs of `input_iterator.get_next()`. The output must be a `tf.nest` of `Tensor`s.
input_iterator: (Optional) input iterator from which the inputs are taken.
Merged return value of `fn` across replicas. The structure of the return value is the same as the return value from `fn`. Each element in the structure can either be `PerReplica` (if the values are unsynchronized), `Mirrored` (if the values are kept in sync), or `Tensor` (if running on a single replica).
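A minimal sketch with eager execution enabled, assuming the `strategy` from above; the toy dataset and the `step_fn` name are illustrative:

    dataset = tf.data.Dataset.from_tensor_slices([1., 2., 3., 4.]).batch(2)
    iterator = strategy.make_dataset_iterator(dataset)

    def step_fn(inputs):
      # Called once per replica with that replica's part of the batch.
      return inputs * 2.

    per_replica_result = strategy.experimental_run(step_fn, iterator)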
make_dataset_iterator( dataset )
Makes an iterator for input provided via the given dataset.
Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers).
If this effort fails, an error will be thrown, and the user should instead use `make_input_fn_iterator`, which provides more control to the user and does not try to divide a batch across replicas. The user could also use `make_input_fn_iterator` if they want to customize which input is fed to which replica/worker, etc.
dataset: a `tf.data.Dataset` that will be distributed evenly across all replicas.
A `tf.distribute.InputIterator` which returns inputs for each step of the computation. The user should call `initialize` on the returned iterator.
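A sketch of the graph-mode usage, assuming the dataset is batched by the global batch size and `strategy` is built as above:

    global_batch_size = 8  # divided across all replicas
    dataset = tf.data.Dataset.from_tensor_slices(
        tf.range(64, dtype=tf.float32)).batch(global_batch_size)

    iterator = strategy.make_dataset_iterator(dataset)
    with tf.Session() as sess:
      # Initialize the distributed iterator before fetching any inputs.
      sess.run(iterator.initialize())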
make_input_fn_iterator( input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER )
Returns an iterator split across replicas created from an input function.
`input_fn` should take a `tf.distribute.InputContext` object where information about input sharding can be accessed:
    def input_fn(input_context):
      d = tf.data.Dataset.from_tensors([[1.]]).repeat()
      return d.shard(input_context.num_input_pipelines,
                     input_context.input_pipeline_id)

    with strategy.scope():
      iterator = strategy.make_input_fn_iterator(input_fn)
      replica_results = strategy.extended.call_for_each_replica(
          replica_fn, iterator.get_next())
input_fn: A function that returns a `tf.data.Dataset`. This function is expected to take a `tf.distribute.InputContext` object.
replication_mode: an enum value of `tf.distribute.InputReplicationMode`. Only `PER_WORKER` is supported currently.
An iterator object that can be initialized and used to fetch the next element.
reduce( reduce_op, value )
Reduce `value` across replicas.
reduce_op: a `tf.distribute.ReduceOp` value specifying how values should be combined.
value: A "per replica" value to be combined into a single tensor.
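A sketch of combining per-replica values into a single tensor, reusing the `step_fn` and `iterator` names from the experimental_run sketch above:

    per_replica_losses = strategy.experimental_run(step_fn, iterator)
    # Average the per-replica values into one tensor.
    mean_loss = strategy.reduce(tf.distribute.ReduceOp.MEAN,
                                per_replica_losses)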
scope()
Returns a context manager selecting this Strategy as current.
Inside a `with strategy.scope():` code block, this thread will use a variable creator set by `strategy`, and will enter its "cross-replica context".
A context manager.
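For example (a sketch; the Keras layer is illustrative), variables created under the scope are mirrored by the strategy:

    with strategy.scope():
      # The Dense layer's kernel and bias are created as mirrored variables
      # on every replica managed by `strategy`.
      model = tf.keras.Sequential([tf.keras.layers.Dense(1)])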
update_config_proto( config_proto )
Returns a copy of `config_proto` modified for use with this strategy.
The updated config contains what is needed to run the strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
The updated copy of the `config_proto`.
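A sketch of the typical graph-mode usage, assuming the `strategy` from above:

    config = tf.ConfigProto(allow_soft_placement=True)
    # Let the strategy add e.g. device filters or collective-op settings.
    config = strategy.update_config_proto(config)
    with tf.Session(config=config) as sess:
      pass  # build and run the distributed graph here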