Class to synchronize, aggregate gradients and pass them to the optimizer.
```python
tf.compat.v1.train.SyncReplicasOptimizer(
    opt,
    replicas_to_aggregate,
    total_num_replicas=None,
    variable_averages=None,
    variables_to_average=None,
    use_locking=False,
    name='sync_replicas'
)
```
This class is deprecated. For synchronous training, please use Distribution Strategies.
In a typical asynchronous training environment, it's common to have some stale gradients. For example, with N-replica asynchronous training, gradients will be applied to the variables N times independently. Depending on each replica's training speed, some gradients might be calculated from copies of the variables that are several steps old (N-1 steps on average). This optimizer avoids stale gradients by collecting gradients from all replicas, averaging them, and then applying them to the variables in one shot, after which replicas can fetch the new variables and continue.
The following accumulators/queue are created:
- Gradient accumulators, one per variable to train. Gradients are pushed to them and the chief worker will wait until enough gradients are collected and then average them before applying them to the variables. The accumulator will drop all stale gradients (more details in the accumulator op).
- A token queue where the optimizer pushes the new global_step value after all variables are updated.
The following local variable is created:
- sync_rep_local_step, one per replica. Compared against the global_step in each accumulator to check for staleness of the gradients.
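The gradient accumulators behave like TensorFlow's conditional accumulators. The toy sketch below is only an illustration of that mechanism, not this class's internal code; the shape, name, and step values are made up:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # accumulator ops are graph-mode only

# One accumulator per trainable variable; the shared_name here is illustrative.
acc = tf.ConditionalAccumulator(dtype=tf.float32, shape=[2],
                                shared_name="grads_for_some_var")

# Each replica pushes its gradient, stamped with the local step it was
# computed at. Gradients stamped with a step older than the accumulator's
# global step are dropped, which is how stale gradients are discarded.
push = acc.apply_grad(tf.constant([1.0, 2.0]), local_step=0)

# The chief waits until `replicas_to_aggregate` gradients have arrived and
# then takes their average (only one is required in this toy example).
avg_grad = acc.take_grad(num_required=1)

with tf.Session() as sess:
  sess.run(push)
  print(sess.run(avg_grad))  # [1. 2.]
```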
The optimizer adds nodes to the graph to collect gradients and pause the trainers until variables are updated. For the Parameter Server job:
- An accumulator is created for each variable, and each replica pushes the gradients into the accumulators instead of directly applying them to the variables.
- Each accumulator averages once enough gradients (replicas_to_aggregate) have been accumulated.
- Apply the averaged gradients to the variables.
- Only after all variables have been updated, increment the global step.
- Only after the global step has been incremented, push global_step into the token_queue, once for each worker replica. The workers can now fetch the global step, use it to update their local_step variable, and start the next batch. Please note that some workers can consume multiple minibatches, while others may not consume even one. This is because each worker fetches minibatches as long as a token exists. If one worker is stuck for some reason and does not consume a token, another worker can use it. A toy sketch of this token-queue barrier follows this list.
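The token queue acts as a simple barrier. The sketch below is a rough, standalone illustration of that idea, not the optimizer's internal implementation; the worker count, queue name, and step value are placeholders:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # queue ops are graph-mode only

num_workers = 3  # placeholder value

# The chief pushes the updated global step once per worker replica.
token_queue = tf.FIFOQueue(capacity=num_workers, dtypes=[tf.int64],
                           shapes=[[]], shared_name="sync_token_q")
global_step = tf.constant(42, dtype=tf.int64)  # pretend it was just updated
fill_tokens = token_queue.enqueue_many(tf.fill([num_workers], global_step))

# Each replica blocks here until a token is available, then records the step
# in its local_step variable and starts the next batch.
fetched_step = token_queue.dequeue()

with tf.Session() as sess:
  sess.run(fill_tokens)
  print(sess.run(fetched_step))  # 42
```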
For the replicas:
- Start a step: fetch variables and compute gradients.
- Once the gradients have been computed, push them into the gradient accumulators. Each accumulator will check the staleness and drop the stale ones.
- After pushing all the gradients, dequeue an updated value of global_step from the token queue and record that step to its local_step variable. Note that this is effectively a barrier.
- Start the next batch.
```python
# Create any optimizer to update the variables, say a simple SGD:
opt = GradientDescentOptimizer(learning_rate=0.1)

# Wrap the optimizer with sync_replicas_optimizer with 50 replicas: at each
# step the optimizer collects 50 gradients before applying to variables.
# Note that if you want to have 2 backup replicas, you can change
# total_num_replicas=52 and make sure this number matches how many physical
# replicas you started in your job.
opt = tf.compat.v1.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=50,
                                               total_num_replicas=50)

# Some models have startup_delays to help stabilize the model but when using
# sync_replicas training, set it to 0.

# Now you can call `minimize()` or `compute_gradients()` and
# `apply_gradients()` normally
training_op = opt.minimize(total_loss, global_step=self.global_step)

# You can create the hook which handles initialization and queues.
sync_replicas_hook = opt.make_session_run_hook(is_chief)
```
In the training program, every worker will run the train_op as if not synchronized.
```python
with training.MonitoredTrainingSession(
    master=workers[worker_id].target, is_chief=is_chief,
    hooks=[sync_replicas_hook]) as mon_sess:
  while not mon_sess.should_stop():
    mon_sess.run(training_op)
```
To use SyncReplicasOptimizer with an Estimator, you need to pass sync_replicas_hook when calling fit.
```python
my_estimator = DNNClassifier(..., optimizer=opt)
my_estimator.fit(..., hooks=[sync_replicas_hook])
```
| Args | |
|---|---|
| `opt` | The actual optimizer that will be used to compute and apply the gradients. Must be one of the `Optimizer` classes. |
| `replicas_to_aggregate` | Number of replicas to aggregate for each variable update. |
| `total_num_replicas` | Total number of tasks/workers/replicas, which can differ from `replicas_to_aggregate`. If `total_num_replicas > replicas_to_aggregate`: it is `backup_replicas + replicas_to_aggregate`. If `total_num_replicas < replicas_to_aggregate`: replicas compute multiple batches per update to variables. |
| `variable_averages` | Optional `ExponentialMovingAverage` object, used to maintain a moving average of the variables passed in `variables_to_average`. |
| `variables_to_average` | A list of variables that need to be averaged. Only needed if `variable_averages` is passed in. |
| `use_locking` | If `True`, use locks for the update operation. |
| `name` | String. Optional name of the returned operation. |
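As an illustration of the constructor arguments above, the snippet below wraps a plain SGD optimizer for a job with two backup replicas and optional moving averages; the learning rate, decay, and replica counts are placeholder values, not recommendations:

```python
import tensorflow.compat.v1 as tf

base_opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)

# Optionally maintain a moving average of the trainable variables
# (collected after the model variables have been created).
ema = tf.train.ExponentialMovingAverage(decay=0.999)
vars_to_average = tf.trainable_variables()

# 52 physical workers, but each update waits only for the first 50 gradients;
# the 2 slowest replicas act as backups and their late gradients are dropped
# as stale.
opt = tf.train.SyncReplicasOptimizer(
    base_opt,
    replicas_to_aggregate=50,
    total_num_replicas=52,
    variable_averages=ema,
    variables_to_average=vars_to_average)
```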
```python
apply_gradients(
    grads_and_vars, global_step=None, name=None
)
```
Apply gradients to variables.
This contains most of the synchronization implementation and also wraps the apply_gradients() from the real optimizer.
| Args | |
|---|---|
| `grads_and_vars` | List of (gradient, variable) pairs as returned by `compute_gradients()`. |
| `global_step` | Optional `Variable` to increment by one after the variables have been updated. |
| `name` | Optional name for the returned operation. Defaults to the name passed to the `Optimizer` constructor. |
| Returns |
|---|
| `train_op`: The op to dequeue a token so the replicas can exit this batch and start the next one. This is executed by each replica. |
| Raises | |
|---|---|
| `ValueError` | If `grads_and_vars` is empty. |
| `ValueError` | If `global_step` is not provided, the staleness cannot be checked. |
```python
compute_gradients(
    *args, **kwargs
)
```
Compute gradients of "loss" for the variables in "var_list".
This simply wraps compute_gradients() from the real optimizer. The gradients are aggregated later in apply_gradients(), so users can still modify them here, for example by clipping with a per-replica global norm. Clipping by a global norm computed over the aggregated gradients can be harmful, since one replica's huge gradients can hurt the gradients from all the other replicas. A minimal sketch of this manual path follows the tables below.
| Args | |
|---|---|
| `*args` | Arguments for `compute_gradients()`. |
| `**kwargs` | Keyword arguments for `compute_gradients()`. |
| Returns |
|---|
| A list of (gradient, variable) pairs. |
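Below is a minimal graph-construction sketch of this manual path, using a toy variable and loss as stand-ins for a real model: the per-replica gradients are clipped with a global norm and then passed to apply_gradients() together with global_step so staleness can be checked.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # SyncReplicasOptimizer is graph-mode only

# Toy stand-ins; in a real job these come from the model and cluster setup.
x = tf.get_variable("x", initializer=[1.0, 2.0])
loss = tf.reduce_sum(tf.square(x))
global_step = tf.train.get_or_create_global_step()

opt = tf.train.SyncReplicasOptimizer(
    tf.train.GradientDescentOptimizer(0.1), replicas_to_aggregate=1)

# Per-replica gradients, clipped with a per-replica global norm.
grads_and_vars = opt.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)

# Aggregation happens here; global_step is required for the staleness check.
training_op = opt.apply_gradients(list(zip(clipped, variables)),
                                  global_step=global_step)
```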