Base class for optimizers.
```python
tf.compat.v1.train.Optimizer(
    use_locking, name
)
```
This class defines the API to add Ops to train a model. You never use this
class directly, but instead instantiate one of its subclasses such as
GradientDescentOptimizer, AdagradOptimizer, or MomentumOptimizer.
```python
# Create an optimizer with the desired parameters.
opt = GradientDescentOptimizer(learning_rate=0.1)
# Add Ops to the graph to minimize a cost by updating a list of variables.
# "cost" is a Tensor, and the list of variables contains tf.Variable
# objects.
opt_op = opt.minimize(cost, var_list=<list of variables>)
```
In the training program you will just have to run the returned Op.
```python
# Execute opt_op to do one step of training:
opt_op.run()
```
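As a minimal, self-contained sketch of this pattern, assuming TF1-style graph execution through tf.compat.v1 (the toy model and the names x, y, w, and cost are hypothetical):

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# Hypothetical toy model: fit a single weight w so that w * x ~= y.
x = tf.constant(3.0)
y = tf.constant(6.0)
w = tf.Variable(1.0)
cost = tf.square(w * x - y)

# Create the optimizer and the training Op.
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
opt_op = opt.minimize(cost, var_list=[w])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        opt_op.run()  # runs one training step in the default session
    print(sess.run(w))  # converges toward 2.0
```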
Processing gradients before applying them.
minimize() takes care of both computing the gradients and
applying them to the variables. If you want to process the gradients
before applying them you can instead use the optimizer in three steps:
- Compute the gradients with compute_gradients().
- Process the gradients as you wish.
- Apply the processed gradients with apply_gradients().
```python
# Create an optimizer.
opt = GradientDescentOptimizer(learning_rate=0.1)

# Compute the gradients for a list of variables.
grads_and_vars = opt.compute_gradients(loss, <list of variables>)

# grads_and_vars is a list of tuples (gradient, variable).  Do whatever you
# need to the 'gradient' part, for example cap them, etc.
capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]

# Ask the optimizer to apply the capped gradients.
opt.apply_gradients(capped_grads_and_vars)
```
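One plausible concrete choice for the hypothetical MyCapper above is tf.clip_by_norm; a sketch, where the 1.0 threshold is an arbitrary example value:

```python
# Clip each gradient to an L2 norm of at most 1.0 before applying it.
capped_grads_and_vars = [
    (tf.clip_by_norm(grad, 1.0), var)
    for grad, var in grads_and_vars
    if grad is not None  # compute_gradients() can return None gradients
]
opt.apply_gradients(capped_grads_and_vars)
```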
Both minimize() and compute_gradients() accept a gate_gradients
argument that controls the degree of parallelism during the application of
the gradients.

The possible values are: GATE_NONE, GATE_OP, and GATE_GRAPH.

- GATE_NONE: Compute and apply gradients in parallel. This provides
the maximum parallelism in execution, at the cost of some non-reproducibility
in the results. For example, the two gradients of matmul depend on the input
values: with GATE_NONE one of the gradients could be applied to one of the
inputs before the other gradient is computed, resulting in non-reproducible
results.
- GATE_OP: For each Op, make sure all gradients are computed before
they are used. This prevents race conditions for Ops that generate gradients
for multiple inputs where the gradients depend on the inputs.
- GATE_GRAPH: Make sure all gradients for all variables are computed
before any one of them is used. This provides the least parallelism, but can
be useful if you want to process all gradients before applying any of them
(see the sketch after this list).
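A sketch of how the gating argument is passed, reusing the hypothetical opt and loss from the snippets above:

```python
# Request the most conservative gating: all gradients in the graph are
# computed before any of them is used.
grads_and_vars = opt.compute_gradients(
    loss, gate_gradients=tf.train.Optimizer.GATE_GRAPH)
opt.apply_gradients(grads_and_vars)
```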
Some optimizer subclasses, such as MomentumOptimizer and AdagradOptimizer,
allocate and manage additional variables associated with the variables to
train. These are called Slots. Slots have names and you can ask the
optimizer for the names of the slots that it uses. Once you have a slot name
you can ask the optimizer for the variable it created to hold the slot value.
This can be useful if you want to debug a training algorithm, report stats
about the slots, etc.
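For example, MomentumOptimizer keeps one "momentum" slot per trained variable; a sketch reusing the hypothetical cost and w from the first example:

```python
# MomentumOptimizer allocates a "momentum" slot for each trained variable.
opt = tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9)
train_op = opt.minimize(cost, var_list=[w])

print(opt.get_slot_names())                  # ['momentum']
momentum_slot = opt.get_slot(w, 'momentum')  # slot variable paired with w
```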
| Args | |
|---|---|
| `use_locking` | Bool. If True, use locks to prevent concurrent updates to variables. |
| `name` | A non-empty string. The name to use for accumulators created for the optimizer. |

| Raises | |
|---|---|
| `ValueError` | If `name` is malformed. |
```python
apply_gradients(
    grads_and_vars, global_step=None, name=None
)
```
Apply gradients to variables.
This is the second part of minimize(). It returns an Operation that
applies the gradients.

| Args | |
|---|---|
| `grads_and_vars` | List of (gradient, variable) pairs as returned by compute_gradients(). |
| `global_step` | Optional Variable to increment by one after the variables have been updated. |
| `name` | Optional name for the returned operation. Default to the name passed to the Optimizer constructor. |

| Raises | |
|---|---|
| `TypeError` | If grads_and_vars is malformed. |
| `ValueError` | If none of the variables have gradients. |
| `RuntimeError` | If you should use `_distributed_apply()` instead. |
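A sketch of the call, reusing the hypothetical opt and cost from earlier; because global_step is passed, the returned Op also increments it (the operation name used here is an arbitrary example):

```python
# Track how many training steps have been applied.
global_step = tf.train.get_or_create_global_step()
grads_and_vars = opt.compute_gradients(cost)
# The returned Op applies the gradients and, since global_step is given,
# increments it by one each time it runs.
apply_op = opt.apply_gradients(grads_and_vars,
                               global_step=global_step,
                               name='apply_capped_grads')
```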
```python
compute_gradients(
    loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None,
    colocate_gradients_with_ops=False, grad_loss=None
)
```
Compute gradients of loss for the variables in var_list.

This is the first part of minimize(). It returns a list
of (gradient, variable) pairs where "gradient" is the gradient
for "variable". Note that "gradient" can be a Tensor, an
IndexedSlices, or None if there is no gradient for the
given variable.
| Args | |
|---|---|
| `loss` | A Tensor containing the value to minimize, or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable. |
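A sketch of why a returned "gradient" can be an IndexedSlices rather than a dense Tensor: the gradient of a sparse read such as tf.nn.embedding_lookup arrives in that form (the embedding variable here is hypothetical, reusing the opt from earlier):

```python
# The gradient of an embedding lookup is a tf.IndexedSlices; code that
# post-processes gradients should allow for both forms (and for None).
emb = tf.Variable(tf.zeros([100, 8]))
lookup_loss = tf.reduce_sum(tf.nn.embedding_lookup(emb, [0, 3]))
(grad, var), = opt.compute_gradients(lookup_loss, var_list=[emb])
print(isinstance(grad, tf.IndexedSlices))  # True
```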