A parameter server DistributionStrategy.
tf.contrib.distribute.ParameterServerStrategy( num_gpus_per_worker=0 )
*** contrib version ***
This strategy class works for both local training and between-graph replicated training for multiple workers. If `cluster_spec` is specified, either passed in to the `__init__()` method or parsed from the `TF_CONFIG` environment variable, variables and updates to those variables are assigned to parameter servers and other operations are assigned to workers. If `cluster_spec` is not set, it becomes local training, where variables are assigned to the local CPU or the only GPU. When each worker has more than one GPU, operations will be replicated on these GPUs. In both cases, operations are replicated but variables are not, and these workers share a common view of which parameter server a variable is assigned to.

This class assumes between-graph replication will be used and works on a graph for a particular worker. Note that each graph and worker is independent. This means that while each worker will synchronously compute a single gradient update across all GPUs, updates between workers proceed asynchronously. Operations that occur only on the first replica (such as incrementing the global step) will occur on the first replica of every worker.
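As an illustration, here is a minimal sketch of setting up the strategy for a multi-worker job. The host names, ports, and `TF_CONFIG` layout are hypothetical; in a real job `TF_CONFIG` is usually set by the cluster manager, and the exact plumbing of the cluster spec depends on how the strategy is launched (e.g. via Estimator):

```python
import json
import os

import tensorflow as tf

# Hypothetical cluster: two workers and one parameter server.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host1:2222", "host2:2222"],
        "ps": ["host3:2222"],
    },
    "task": {"type": "worker", "index": 0},
})

strategy = tf.contrib.distribute.ParameterServerStrategy(num_gpus_per_worker=1)

with strategy.scope():
  # Variables created under the scope are placed on parameter servers;
  # replicated operations run on this worker's GPU(s).
  v = tf.compat.v1.get_variable("v", initializer=1.0)
```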
It is expected to call `call_for_each_replica(fn, ...)` for any operations which potentially can be replicated across replicas (i.e. multiple GPUs), even if there is only CPU or one GPU. When defining the `fn`, extra caution needs to be taken:

1) Always use `tf.compat.v1.get_variable` instead of `tf.Variable`, which is not able to refer to the same variable on different replicas.

2) It is generally not recommended to open a device scope under the strategy's scope. A device scope (i.e. calling `tf.device`) will be merged with or override the device for operations but will not change the device for variables.

3) It is also not recommended to open a colocation scope (i.e. calling `tf.compat.v1.colocate_with`) under the strategy's scope. For colocating variables, use `strategy.extended.colocate_vars_with` instead (see the sketch after this list). Colocation of ops will possibly create conflicts of device assignment.
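For point 3), a minimal sketch of colocating one variable with another through the strategy rather than through `tf.compat.v1.colocate_with` (the variable names here are illustrative):

```python
with strategy.scope():
  v1 = tf.compat.v1.get_variable("v1", initializer=[1.0])
  # Ask the strategy to place v2 on the same device (parameter server)
  # as v1, instead of opening a tf.colocate_with scope.
  with strategy.extended.colocate_vars_with(v1):
    v2 = tf.compat.v1.get_variable("v2", initializer=[2.0])
```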
| num_gpus_per_worker | Number of local GPUs or GPUs per worker; the default is 0, meaning CPU only. |
| num_replicas_in_sync | Returns number of replicas over which gradients are aggregated. |
experimental_distribute_dataset( dataset )
Distributes a `tf.data.Dataset` instance provided via `dataset`.

The returned distributed dataset can be iterated over similarly to regular datasets. NOTE: Currently, the user cannot add any further transformations to a distributed dataset.
The following is an example:
```python
strategy = tf.distribute.MirroredStrategy()

# Create a dataset.
dataset = tf.data.TFRecordDataset([
    "/a/1.tfr", "/a/2.tfr", "/a/3.tfr", "/a/4.tfr"])

# Distribute that dataset.
dist_dataset = strategy.experimental_distribute_dataset(dataset)

# Iterate over the distributed dataset.
for x in dist_dataset:
  # Process dataset elements.
  strategy.experimental_run_v2(train_step, args=(x,))
```
We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers).
In a multi-worker setting, we will first attempt to distribute the dataset by detecting whether the dataset is being created out of reader datasets (e.g. `TFRecordDataset`, `TextLineDataset`, etc.) and, if so, attempting to shard the input files. Note that there has to be at least one input file per worker. If you have fewer input files than workers, we suggest disabling dataset sharding across workers using the method below.
If that attempt is unsuccessful (e.g. the dataset is created from a `Dataset.range`), we will shard the dataset evenly at the end by appending a `.shard` operation to the end of the processing pipeline. This causes the entire preprocessing pipeline for all the data to be run on every worker, and each worker will do redundant work. We will print a warning if this method of sharding is selected. In this case, consider using `experimental_distribute_datasets_from_function` instead.

You can disable dataset sharding across workers using the `auto_shard` option in `tf.data.experimental.DistributeOptions`.

Within each worker, we will also split the data among all the worker devices (if more than one is present); this will happen even if multi-worker sharding is disabled using the method above.
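As a sketch of disabling the multi-worker sharding mentioned above, assuming the TF 2.0-era boolean `auto_shard` flag on `tf.data.Options` (later releases renamed this to `auto_shard_policy`):

```python
options = tf.data.Options()
# Assumed flag name for this TF version; keep the whole dataset on
# every worker instead of sharding it automatically.
options.experimental_distribute.auto_shard = False
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
```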
If the above batch splitting and dataset sharding logic is undesirable, use `experimental_distribute_datasets_from_function` instead, which does not do any automatic splitting or sharding.
experimental_distribute_datasets_from_function( dataset_fn )
Distributes `tf.data.Dataset` instances created by calls to `dataset_fn`.

`dataset_fn` will be called once for each worker in the strategy. Each replica on that worker will dequeue one batch of inputs from the local `Dataset` (i.e. if a worker has two replicas, two batches will be dequeued from the `Dataset` every step).
This method can be used for several purposes. For example, where
experimental_distribute_dataset is unable to shard the input files, this
method might be used to manually shard the dataset (avoiding the slow
fallback behavior in
experimental_distribute_dataset). In cases where the
dataset is infinite, this sharding can be done by creating dataset replicas
that differ only in their random seed.
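For the infinite-dataset case, one possible sketch (the pipeline and the choice of seed are illustrative) is to give each input pipeline its own shuffle seed instead of sharding files:

```python
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  # Each input pipeline shuffles with a different seed, so every worker
  # draws a different (but identically distributed) stream of examples.
  d = tf.data.Dataset.range(1000).repeat()
  d = d.shuffle(1000, seed=input_context.input_pipeline_id)
  return d.batch(batch_size)
```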
`experimental_distribute_dataset` may also sometimes fail to split the batch across replicas on a worker. In that case, this method can be used instead, as it does not have that limitation.
The `dataset_fn` should take a `tf.distribute.InputContext` instance where information about batching and input replication can be accessed:
```python
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
  return d.shard(
      input_context.num_input_pipelines, input_context.input_pipeline_id)

inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn)

for batch in inputs:
  replica_results = strategy.experimental_run_v2(replica_fn, args=(batch,))
```
| dataset_fn | A function taking a `tf.distribute.InputContext` instance and returning a `tf.data.Dataset`. |
experimental_local_results( value )
Returns the list of all local per-replica values contained in `value`.

| value | A value returned by `experimental_run()`, `experimental_run_v2()`, `extended.call_for_each_replica()`, or a variable created in `scope`. |
| Returns | A tuple of values contained in `value`. If `value` represents a single value, this returns `(value,)`. |
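A small sketch of unpacking per-replica results (`replica_fn` and `x` are placeholders for a replica function and its input):

```python
per_replica = strategy.experimental_run_v2(replica_fn, args=(x,))
# One entry per local replica; a single value comes back as a 1-tuple.
local_values = strategy.experimental_local_results(per_replica)
```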
experimental_make_numpy_dataset( numpy_input, session=None )
Makes a tf.data.Dataset for input provided via a numpy array.
This avoids adding `numpy_input` as a large constant in the graph, and copies the data to the machine or machines that will be processing the input.
Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy.
```python
numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
```
| numpy_input | A nest of NumPy input arrays that will be converted into a dataset. Note that lists of NumPy arrays are stacked, as that is normal `np.array` behavior. |
| session | (TensorFlow v1.x graph execution only) A session used for initialization. |
experimental_run( fn, input_iterator=None )
Runs ops in `fn` on each replica, with inputs from `input_iterator`.

DEPRECATED: This method is not available in TF 2.x. Please switch to `experimental_run_v2`.

When eager execution is enabled, executes ops specified by `fn` on each replica. Otherwise, builds a graph to execute the ops on each replica.
Each replica will take a single, different input from the inputs provided by one `get_next` call on the input iterator.

`fn` may call `tf.distribute.get_replica_context()` to access members such as `replica_id_in_sync_group`.

| fn | The function to run. The inputs to the function must match the outputs of `input_iterator.get_next()`. The output must be a `tf.nest` of Tensors. |
| input_iterator | (Optional) input iterator from which the inputs are taken. |

Merged return value of `fn` across all replicas.
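A minimal sketch of this deprecated flow, assuming a `step_fn` whose inputs match the iterator's `get_next()` outputs and a `dataset` batched by the per-replica batch size (both placeholders):

```python
with strategy.scope():
  iterator = strategy.make_dataset_iterator(dataset)
  # Each replica runs step_fn on one batch drawn from the iterator.
  per_replica_results = strategy.experimental_run(step_fn, iterator)
```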
experimental_run_v2( fn, args=(), kwargs=None )
Runs `fn` on each replica, with the given arguments.

Executes ops specified by `fn` on each replica. If `args` or `kwargs` have "per-replica" values, such as those produced by a "distributed `Dataset`", when `fn` is executed on a particular replica, it will be executed with the component of those "per-replica" values that corresponds to that replica.

`fn` may call `tf.distribute.get_replica_context()` to access members such as `all_reduce`.

All arguments in `args` or `kwargs` should either be nests of tensors or per-replica objects containing tensors or composite tensors.

| fn | The function to run. The output must be a `tf.nest` of Tensors. |
| args | (Optional) Positional arguments to `fn`. |
| kwargs | (Optional) Keyword arguments to `fn`. |

Merged return value of `fn` across replicas.
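A minimal sketch combining this with a distributed dataset; `train_step` is a placeholder and `dist_dataset` is assumed to come from `experimental_distribute_dataset` as shown earlier:

```python
def train_step(inputs):
  # Runs once per replica, with that replica's component of `inputs`.
  return tf.reduce_sum(inputs)

for x in dist_dataset:
  per_replica_sums = strategy.experimental_run_v2(train_step, args=(x,))
```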
make_dataset_iterator( dataset )
Makes an iterator for input provided via `dataset`.
Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the per-replica batch size.
The user could also use
make_input_fn_iterator if they want to
customize which input is fed to which replica/worker etc.
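A sketch, assuming a dataset already batched by the per-replica batch size; the `initialize` call follows the TF 1.x input-iterator pattern of initializing before stepping:

```python
dataset = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(
    per_replica_batch_size)  # per_replica_batch_size is a placeholder
iterator = strategy.make_dataset_iterator(dataset)
init_op = iterator.initialize()  # run in a session before the first step
```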
make_input_fn_iterator( input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER )
Returns an iterator split across replicas created from an input function.
DEPRECATED: This method is not available in TF 2.x.
The `input_fn` should take a `tf.distribute.InputContext` object where information about batching and input sharding can be accessed:
```python
def input_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
  return d.shard(input_context.num_input_pipelines,
                 input_context.input_pipeline_id)

with strategy.scope():
  iterator = strategy.make_input_fn_iterator(input_fn)
  replica_results = strategy.experimental_run(replica_fn, iterator)
```
The `tf.data.Dataset` returned by `input_fn` should have a per-replica batch size, which may be computed using `input_context.get_per_replica_batch_size`.