Base class for Keras optimizers.
tf.keras.optimizers.Optimizer(
    name, gradient_aggregator=None, gradient_transformers=None, **kwargs
)
# Create an optimizer with the desired parameters.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
# `loss` is a callable that takes no argument and returns the value
# to minimize.
loss = lambda: 3 * var1 * var1 + 2 * var2 * var2
# In graph mode, returns op that minimizes the loss by updating the listed
# variables.
opt_op = opt.minimize(loss, var_list=[var1, var2])
opt_op.run()
# In eager mode, simply call minimize to update the list of variables.
opt.minimize(loss, var_list=[var1, var2])
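The gradient_aggregator and gradient_transformers arguments in the signature above let you customize how gradients are combined across replicas and transformed before they are applied. A minimal sketch of a gradient transformer (the transformer function, variable, and clip value are illustrative assumptions, not part of the original example):

import tensorflow as tf

# A gradient transformer takes and returns a list of (gradient, variable)
# pairs; this one clips each gradient by norm before it is applied.
def clip_gradients_by_norm(grads_and_vars):
  return [(tf.clip_by_norm(g, 1.0), v) for g, v in grads_and_vars]

var = tf.Variable(2.0)
opt = tf.keras.optimizers.SGD(
    learning_rate=0.1, gradient_transformers=[clip_gradients_by_norm])
opt.minimize(lambda: var * var, var_list=[var])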
Usage in custom training loops
In Keras models, variables are sometimes created when the model is first called instead of at construction time. Examples include 1) sequential models without a pre-defined input shape, or 2) subclassed models. In these cases, pass var_list as a callable.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(num_hidden, activation='relu'))
model.add(tf.keras.layers.Dense(num_classes, activation='sigmoid'))
loss_fn = lambda: tf.keras.losses.mse(model(input), output)
var_list_fn = lambda: model.trainable_weights
for input, output in data:
  opt.minimize(loss_fn, var_list_fn)
Processing gradients before applying them
minimize() takes care of both computing the gradients and
applying them to the variables. If you want to process the gradients
before applying them, you can instead use the optimizer in three steps:
- Compute the gradients with tf.GradientTape.
- Process the gradients as you wish.
- Apply the processed gradients with apply_gradients().
# Create an optimizer.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

# Compute the gradients for a list of variables.
with tf.GradientTape() as tape:
  loss = <call_loss_function>
vars = <list_of_variables>
grads = tape.gradient(loss, vars)

# Process the gradients, for example cap them, etc.
# capped_grads = [MyCapper(g) for g in grads]
processed_grads = [process_gradient(g) for g in grads]

# Ask the optimizer to apply the processed gradients.
opt.apply_gradients(zip(processed_grads, vars))
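For reference, a runnable version of the sketch above, with a toy quadratic loss and norm clipping standing in for the placeholders (the variables and clip value are illustrative assumptions):

import tensorflow as tf

var1 = tf.Variable(2.0)
var2 = tf.Variable(3.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

# Compute the gradients for a list of variables.
with tf.GradientTape() as tape:
  loss = 3 * var1 * var1 + 2 * var2 * var2
grads = tape.gradient(loss, [var1, var2])

# Process the gradients: here, clip each one by its norm.
processed_grads = [tf.clip_by_norm(g, 1.0) for g in grads]

# Ask the optimizer to apply the processed gradients.
opt.apply_gradients(zip(processed_grads, [var1, var2]))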
This optimizer class is
tf.distribute.Strategy aware, which means it
automatically sums gradients across all replicas. To average gradients,
you divide your loss by the global batch size, which is done
automatically if you use
tf.keras built-in training or evaluation loops.
See the reduction argument of your loss, which should be set to
tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE for averaging or
tf.keras.losses.Reduction.SUM for not.
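For instance, with a built-in Keras loss the reduction can be set explicitly (the particular loss class here is just an illustrative choice):

import tensorflow as tf

# Summed per-example losses divided by the global batch size (averaging).
loss_avg = tf.keras.losses.BinaryCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)

# Plain sum, leaving any division by the global batch size to you.
loss_sum = tf.keras.losses.BinaryCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM)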
To aggregate gradients yourself, call apply_gradients with
experimental_aggregate_gradients set to False. This is useful if you need to
process aggregated gradients.
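A sketch of manual aggregation inside a tf.distribute.Strategy replica function, assuming a single toy variable and loss:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  var = tf.Variable(1.0)
  opt = tf.keras.optimizers.SGD(learning_rate=0.1)

@tf.function
def train_step():
  def replica_fn():
    with tf.GradientTape() as tape:
      loss = var * var
    grads = tape.gradient(loss, [var])
    # Sum the gradients across replicas ourselves...
    ctx = tf.distribute.get_replica_context()
    grads = [ctx.all_reduce(tf.distribute.ReduceOp.SUM, g) for g in grads]
    # ...and tell apply_gradients not to aggregate them again.
    opt.apply_gradients(zip(grads, [var]),
                        experimental_aggregate_gradients=False)
  strategy.run(replica_fn)

train_step()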
If you are not using these loops and you want to average gradients, you should use
tf.math.reduce_sum to add up your per-example losses and then divide by the
global batch size. Note that when using
tf.distribute.Strategy, the first
component of a tensor's shape is the replica-local batch size, which is off
by a factor equal to the number of replicas being used to compute a single
step. As a result, using
tf.math.reduce_mean will give the wrong answer,
resulting in gradients that can be many times too big.
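As a sketch of this in a custom loss function (the global batch size value and loss are assumptions for illustration):

import tensorflow as tf

GLOBAL_BATCH_SIZE = 64  # assumed; number of replicas * per-replica batch size

def compute_loss(labels, predictions):
  # Unreduced per-example losses for the replica-local batch.
  per_example_loss = tf.keras.losses.mse(labels, predictions)
  # Sum them and divide by the global batch size; tf.math.reduce_mean here
  # would divide by the replica-local batch size and inflate the gradients.
  return tf.math.reduce_sum(per_example_loss) / GLOBAL_BATCH_SIZE

tf.nn.compute_average_loss offers the same sum-then-divide computation as a helper.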
All Keras optimizers respect variable constraints. If a constraint function is passed to any variable, the constraint will be applied to the variable after its gradient has been applied. Important: If the gradient is a sparse tensor, variable constraints are not supported.
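A small sketch of a constrained variable (the clip range and loss are illustrative):

import tensorflow as tf

# After each update, the constraint clips the variable back into [0, 1].
var = tf.Variable(0.8, constraint=lambda v: tf.clip_by_value(v, 0.0, 1.0))

opt = tf.keras.optimizers.SGD(learning_rate=1.0)
# The raw SGD step would move var to 1.8; the constraint caps it at 1.0.
opt.minimize(lambda: -var, var_list=[var])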
The entire optimizer is currently thread-compatible, but not thread-safe. The user needs to perform synchronization if necessary.
Many optimizer subclasses, such as Adagrad, allocate and manage
additional variables associated with the variables to train. These are called
Slots. Slots have names and you can ask the optimizer for the names of
the slots that it uses. Once you have a slot name you can ask the optimizer
for the variable it created to hold the slot value.
This can be useful if you want to log or debug a training algorithm, report stats about the slots, etc.
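For example, SGD with momentum keeps a 'momentum' slot per trainable variable, which can be retrieved like this (a sketch with a toy variable):

import tensorflow as tf

var = tf.Variable(1.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
opt.minimize(lambda: var * var, var_list=[var])

print(opt.get_slot_names())           # ['momentum']
print(opt.get_slot(var, 'momentum'))  # the slot variable created for `var`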
These are arguments passed to the optimizer subclass constructor (the
__init__ method), and then passed to self._set_hyper().
They can be either regular Python values (like 1.0), tensors, or
callables. If they are callable, the callable will be called during
apply_gradients() to get the value for the hyperparameter.
Hyperparameters can be overwritten through user code:
# Create an optimizer with the desired parameters.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
# `loss` is a callable that takes no argument and returns the value
# to minimize.
loss = lambda: 3 * var1 + 2 * var2
# In eager mode, simply call minimize to update the list of variables.
opt.minimize(loss, var_list=[var1, var2])
# update learning rate
opt.learning_rate = 0.05
opt.minimize(loss, var_list=[var1, var2])
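A callable hyperparameter, as described above, could look like the following sketch (the step variable and decay schedule are illustrative assumptions):

import tensorflow as tf

var1 = tf.Variable(1.0)
step = tf.Variable(0, trainable=False)

# The callable is invoked during apply_gradients(), so the learning rate
# decays as `step` grows.
lr = lambda: 0.1 / (1.0 + tf.cast(step, tf.float32))

opt = tf.keras.optimizers.SGD(learning_rate=lr)
opt.minimize(lambda: 3 * var1 * var1, var_list=[var1])
step.assign_add(1)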