Applies linear cosine decay to the learning rate.
```python
tf.train.linear_cosine_decay(
    learning_rate,
    global_step,
    decay_steps,
    num_periods=0.5,
    alpha=0.0,
    beta=0.001,
    name=None
)
```
See [Bello et al., ICML2017] Neural Optimizer Search with RL: https://arxiv.org/abs/1709.07417

For the idea of warm starts, here controlled by `num_periods`, see [Loshchilov & Hutter, ICLR2016] SGDR: Stochastic Gradient Descent with Warm Restarts: https://arxiv.org/abs/1608.03983
Note that linear cosine decay is more aggressive than cosine decay and larger initial learning rates can typically be used.
When training a model, it is often recommended to lower the learning rate as the training progresses. This function applies a linear cosine decay function to a provided initial learning rate. It requires a `global_step` value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step.
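For instance, a minimal sketch of creating such a step counter, using the standard `tf.train.get_or_create_global_step` helper:

```python
import tensorflow as tf

# Standard step counter; optimizers increment it automatically when it
# is passed to `minimize(..., global_step=global_step)`.
global_step = tf.train.get_or_create_global_step()
```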
The function returns the decayed learning rate. It is computed as:
```python
global_step = min(global_step, decay_steps)
linear_decay = (decay_steps - global_step) / decay_steps
cosine_decay = 0.5 * (
    1 + cos(pi * 2 * num_periods * global_step / decay_steps))
decayed = (alpha + linear_decay) * cosine_decay + beta
decayed_learning_rate = learning_rate * decayed
```
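As a sanity check, the same computation can be written in plain NumPy (an illustrative sketch, not part of the TensorFlow API):

```python
import numpy as np

def linear_cosine_decay_value(learning_rate, global_step, decay_steps,
                              num_periods=0.5, alpha=0.0, beta=0.001):
    """Pure-NumPy version of the decay formula above, for illustration."""
    step = min(global_step, decay_steps)
    linear_decay = (decay_steps - step) / decay_steps
    cosine_decay = 0.5 * (
        1 + np.cos(np.pi * 2 * num_periods * step / decay_steps))
    decayed = (alpha + linear_decay) * cosine_decay + beta
    return learning_rate * decayed

# At step 0 the decay factor is (0.0 + 1.0) * 1.0 + 0.001, i.e. slightly
# above the initial rate; at step == decay_steps it bottoms out at
# learning_rate * beta.
print(linear_cosine_decay_value(0.1, 0, 1000))     # 0.1001
print(linear_cosine_decay_value(0.1, 1000, 1000))  # 0.0001
```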
Example usage:

```python
decay_steps = 1000
lr_decayed = linear_cosine_decay(learning_rate, global_step, decay_steps)
```
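Putting it together, a typical graph-mode training setup might look like the following sketch, which assumes an existing scalar `loss` tensor:

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
lr_decayed = tf.train.linear_cosine_decay(
    learning_rate=0.1, global_step=global_step, decay_steps=1000)

# Passing `global_step` to `minimize` increments it on each training
# step, which advances the decay schedule.
optimizer = tf.train.GradientDescentOptimizer(lr_decayed)
train_op = optimizer.minimize(loss, global_step=global_step)
```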
Args:
  learning_rate: A scalar `float32` or `float64` `Tensor` or a Python number. The initial learning rate.
  global_step: A scalar `int32` or `int64` `Tensor` or a Python number. Global step to use for the decay computation.
  decay_steps: A scalar `int32` or `int64` `Tensor` or a Python number. Number of steps to decay over.
  num_periods: Number of periods in the cosine part of the decay. See computation above.
  alpha: See computation above.
  beta: See computation above.
  name: String. Optional name of the operation. Defaults to 'LinearCosineDecay'.
Returns:
  A scalar `Tensor` of the same type as `learning_rate`. The decayed learning rate.

Raises:
  ValueError: if `global_step` is not supplied.
Eager Compatibility:
When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.
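For example, a sketch under eager execution (TF 1.x optimizers evaluate a callable learning rate on each use; this setup is illustrative, not the only supported pattern):

```python
import tensorflow as tf
tf.enable_eager_execution()

global_step = tf.Variable(0, trainable=False)
# Under eager execution this returns a zero-argument callable rather than
# a Tensor; each call recomputes the rate from the current `global_step`.
lr_fn = tf.train.linear_cosine_decay(
    learning_rate=0.1, global_step=global_step, decay_steps=1000)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr_fn)
```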