Factory function returning an optimizer class with decoupled weight decay.
Returns an optimizer class. An instance of the returned class computes the
update step of `base_optimizer` and additionally decays the weights.
E.g., the class returned by
`extend_with_decoupled_weight_decay(tf.train.AdamOptimizer)` is equivalent to
`tf.contrib.opt.AdamWOptimizer`.
The API of the new optimizer class slightly differs from the API of the
base optimizer:
- The first argument to the constructor is the weight decay rate.
- `minimize` and `apply_gradients` accept the optional keyword argument
  `decay_var_list`, which specifies the variables that should be decayed.
  If `None`, all variables that are optimized are decayed.
Usage example:

```python
# MyAdamW is a new class
MyAdamW = extend_with_decoupled_weight_decay(tf.train.AdamOptimizer)
# Create a MyAdamW object
optimizer = MyAdamW(weight_decay=0.001, learning_rate=0.001)
sess.run(optimizer.minimize(loss, decay_var_list=[var1, var2]))
```

Note that this extension decays weights BEFORE applying the update based on
the gradient, i.e. this extension only has the desired behaviour for
optimizers which do not depend on the value of 'var' in the update step!
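To make that ordering concrete, here is a minimal sketch (not the library
implementation) of one decoupled update step with a plain SGD-style base
update; the function name and the `lr`/`wd` parameters are illustrative only:

```python
def decoupled_sgd_step(var, grad, lr=0.001, wd=0.001):
    """Illustrative decoupled weight decay: decay first, then the base update."""
    var = var - wd * var   # 1) decay the weight, independent of the gradient
    var = var - lr * grad  # 2) apply the base optimizer's gradient step
    return var
```

Because step 2 in this sketch does not read the pre-decay value of `var`, the
decay-before-update ordering is harmless for updates like SGD or Adam; an
optimizer whose update rule depends on the current value of `var` would see
the already-decayed value instead.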
Args:
  base_optimizer: An optimizer class that inherits from tf.train.Optimizer.

Returns:
  A new optimizer class that inherits from DecoupledWeightDecayExtension and
  base_optimizer.
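As a quick illustration of the return type (assuming a TF 1.x environment
where `tf.contrib.opt` exposes both `extend_with_decoupled_weight_decay` and
`DecoupledWeightDecayExtension`), the returned class is a subclass of both
the extension and the base optimizer:

```python
import tensorflow as tf  # TF 1.x with tf.contrib available (assumption)

MyAdamW = tf.contrib.opt.extend_with_decoupled_weight_decay(
    tf.train.AdamOptimizer)

# The factory builds the new class via multiple inheritance, so both
# issubclass checks hold.
assert issubclass(MyAdamW, tf.contrib.opt.DecoupledWeightDecayExtension)
assert issubclass(MyAdamW, tf.train.AdamOptimizer)
```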