An optimizer that applies loss scaling in backprop.
Inherits From: Optimizer
tf.contrib.mixed_precision.LossScaleOptimizer(
opt, loss_scale_manager
)
This class is useful for "mixed precision training" on GPUs (or other potential accelerators), an approach to improve compute throughput without compromising model quality.
The canonical way to perform mixed precision training is the following:
- Model variables are kept in high precision (e.g. float32).
- Computations are done in lower precision (e.g. float16), which enjoys a performance speedup by virtue of hardware support. Variables are cast to lower precision before they are used.
- Final gradients are cast back to the high-precision dtype, then used to update the variables.
The side effect of performing computation in lower precision is a smaller numerical range. During backpropagation, small gradients might underflow in the reduced numerical range, causing a model to converge at a suboptimal level.
To prevent underflow, this optimizer multiplies the loss by a factor before backprop starts. Consequently, the gradients are linearly scaled up by the same factor, keeping them out of the underflow zone. After that, to preserve the correctness of backprop, the gradients are down-scaled by the same factor, cast to the (higher) variable precision, and then applied to the variables.
See Nvidia's manual on mixed precision training for more details.
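As an illustration of the mechanics only (not part of this API), loss scaling written out by hand looks roughly like the sketch below. The names loss, var_list, and opt, and the constant scale factor, are illustrative assumptions.
# Hedged sketch of manual loss scaling; LossScaleOptimizer performs the
# equivalent steps internally.
loss_scale = 128.0                                  # illustrative fixed scale factor
scaled_loss = loss * loss_scale                     # scale the loss up before backprop
scaled_grads = tf.gradients(scaled_loss, var_list)  # gradients come out scaled by the same factor
unscaled_grads = [tf.cast(g, tf.float32) / loss_scale
                  for g in scaled_grads]            # unscale and cast back to variable precision
train_op = opt.apply_gradients(list(zip(unscaled_grads, var_list)))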
To use the loss scale optimizer, one only needs to choose a loss scale strategy and wrap a regular optimizer. See the example below.
loss = loss_fn()
opt = tf.train.AdamOptimizer(learning_rate=...)
# Choose a loss scale manager which decides how to pick the right loss scale
# throughout the training process.
loss_scale_manager = tf.contrib.mixed_precision.FixedLossScaleManager(5000)
# Wraps the original optimizer in a LossScaleOptimizer.
loss_scale_optimizer = tf.contrib.mixed_precision.LossScaleOptimizer(opt, loss_scale_manager)
# Call minimize() on the loss scale optimizer.
train_op = loss_scale_optimizer.minimize(loss)
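Besides a fixed loss scale, the contrib package also ships an automatically adjusting manager. The snippet below assumes tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager with init_loss_scale and incr_every_n_steps arguments; treat the exact name and signature as an assumption to verify against your TensorFlow version.
# Hedged alternative: an adaptive manager that raises the loss scale when
# training is stable and lowers it when inf/nan gradients appear.
# (Name and arguments are assumptions; check your TF version's contrib docs.)
loss_scale_manager = tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager(
    init_loss_scale=2 ** 15, incr_every_n_steps=2000)
loss_scale_optimizer = tf.contrib.mixed_precision.LossScaleOptimizer(opt, loss_scale_manager)
train_op = loss_scale_optimizer.minimize(loss)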
If gradient clipping is applied, one can call optimizer.compute_gradients() and optimizer.apply_gradients() separately, as in the sketch below.
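A minimal sketch of that pattern, assuming the loss tensor and loss_scale_optimizer from the example above, and using tf.clip_by_global_norm purely as an illustrative clipping choice:
# Compute gradients through the wrapper so the loss scale is applied and
# removed correctly.
grads_and_vars = loss_scale_optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
# Apply custom gradient clipping (global-norm clipping is just one option).
clipped_grads, _ = tf.clip_by_global_norm(list(grads), clip_norm=1.0)
# Apply the processed gradients to the variables.
train_op = loss_scale_optimizer.apply_gradients(list(zip(clipped_grads, variables)))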
Note that the following way of using LossScaleOptimizer is not intended. Always use loss_scale_optimizer.compute_gradients() to compute gradients instead of tf.gradients() when doing mixed precision training.
# The following is a wrong way to use LossScaleOptimizer along with
# tf.gradients().
# Always use loss_scale_optimizer.compute_gradients() to compute grads;
# otherwise the loss scale is not correctly applied.
grads = tf.gradients(loss, ...)
# Do some custom grad clipping.
grads = clip_grads(grads, ...)
loss_scale_optimizer.apply_gradients(grads_and_vars)
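For contrast, a hedged sketch of the intended pattern, reusing the hypothetical clip_grads helper from the snippet above:
# Correct: let the wrapper compute (and internally unscale) the gradients.
grads_and_vars = loss_scale_optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
# clip_grads is the same hypothetical clipping helper as above.
grads = clip_grads(grads, ...)
loss_scale_optimizer.apply_gradients(list(zip(grads, variables)))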
Args | |
---|---|
opt | The actual optimizer that will be used to compute and apply the gradients. Must be an implementation of the tf.compat.v1.train.Optimizer interface. |
loss_scale_manager | A LossScaleManager object. |
Methods
apply_gradients
apply_gradients(
grads_and_vars, global_step=None, name=None
)
Apply gradients. See base class tf.compat.v1.train.Optimizer.
compute_gradients
compute_gradients(
loss, var_list=None, gate_gradients=optimizer.Optimizer.GATE_OP,
aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None
)
Compute gradients. See base class tf.compat.v1.train.Optimizer.
get_name
get_name()
get_slot
get_slot(
var, name
)
Return a slot named name created for var by the Optimizer.
Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them.
Use get_slot_names() to get the list of slot names created by the Optimizer.
Args | |
---|---|
var | A variable passed to minimize() or apply_gradients(). |
name | A string. |
Returns | |
---|---|
The Variable for the slot if it was created, None otherwise. |
get_slot_names
get_slot_names()
Return a list of the names of slots created by the Optimizer. See get_slot().
Returns | |
---|---|
A list of strings. |
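As an illustrative sketch, assuming the wrapped optimizer from the earlier example is an Adam-style optimizer (whose slot names include "m" and "v") and that some_var is a hypothetical variable previously passed to minimize() or apply_gradients():
# Discover which slots the wrapped optimizer created, then fetch one of them.
slot_names = loss_scale_optimizer.get_slot_names()        # e.g. ['m', 'v'] for Adam
first_moment = loss_scale_optimizer.get_slot(some_var, 'm')  # None if no such slot exists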
minimize
minimize(
loss, global_step=None, var_list=None, gate_gradients=GATE_OP,
aggregation_method=None, colocate_gradients_with_ops=False, name=None,
grad_loss=None
)
Add operations to minimize loss by updating var_list.
This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function.
Args | |
---|---|
loss | A Tensor containing the value to minimize. |
global_step | Optional Variable to increment by one after the variables have been updated. |
var_list | Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. |
gate_gradients | How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH. |
aggregation_method | Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod. |
colocate_gradients_with_ops | If True, try colocating gradients with the corresponding op. |
name | Optional name for the returned operation. |
grad_loss | Optional. A Tensor holding the gradient computed for loss. |
Returns | |
---|---|
An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step. |
Raises | |
---|---|
ValueError | If some of the variables are not Variable objects. |
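A brief hedged sketch of passing global_step, assuming the loss tensor and loss_scale_optimizer from the earlier example:
# Create (or fetch) the graph's global step and let minimize() increment it.
global_step = tf.compat.v1.train.get_or_create_global_step()
train_op = loss_scale_optimizer.minimize(loss, global_step=global_step)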
Eager Compatibility
When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled.
variables
variables()
A list of variables which encode the current state of the Optimizer.
Includes slot variables and additional global variables created by the optimizer in the current default graph.
Returns | |
---|---|
A list of variables. |