tfm.optimization.PowerAndLinearDecay

Learning rate schedule with a power decay multiplied by a linear decay at the end.

The schedule has the following behavior. Let offset_step = step - offset, and let lr denote initial_learning_rate.

1) If offset_step < 0, the actual learning rate equals initial_learning_rate.
2) If offset_step <= total_decay_steps * (1 - linear_decay_fraction), the actual learning rate equals lr * offset_step^power.
3) If total_decay_steps * (1 - linear_decay_fraction) <= offset_step < total_decay_steps, the actual learning rate equals lr * offset_step^power * (total_decay_steps - offset_step) / (total_decay_steps * linear_decay_fraction).
4) If offset_step >= total_decay_steps, the actual learning rate equals zero.
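
A minimal Python sketch of the piecewise rule above; the function name and the edge-case handling are illustrative, not the library's implementation:

def power_and_linear_decay(step, initial_learning_rate, total_decay_steps,
                           power, linear_decay_fraction, offset=0):
    # Hypothetical helper mirroring the four cases above.
    offset_step = step - offset
    if offset_step < 0:
        # Case 1: before the schedule starts, hold the initial rate.
        return initial_learning_rate
    if offset_step >= total_decay_steps:
        # Case 4: after the decay window, the rate is zero.
        return 0.0
    # Cases 2 and 3 share the power-decay term lr * offset_step^power.
    lr = initial_learning_rate * offset_step ** power
    linear_start = total_decay_steps * (1 - linear_decay_fraction)
    if linear_decay_fraction > 0 and offset_step > linear_start:
        # Case 3: additionally ramp linearly down to zero.
        lr *= (total_decay_steps - offset_step) / (
            total_decay_steps * linear_decay_fraction)
    return lr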

Args

initial_learning_rate The initial learning rate.
total_decay_steps The total number of steps for power + linear decay.
power The order of the polynomial.
linear_decay_fraction The fraction of total_decay_steps at the end of the schedule during which the learning rate is additionally multiplied by a linear decay.
offset The offset applied to steps.
name Optional name of the learning rate schedule.
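
A usage sketch; the hyperparameter values here are illustrative, not recommendations:

import tensorflow as tf
import tensorflow_models as tfm

# Illustrative hyperparameters only.
schedule = tfm.optimization.PowerAndLinearDecay(
    initial_learning_rate=1.0,
    total_decay_steps=10000,
    power=-0.5,
    linear_decay_fraction=0.1)

# Like any tf.keras LearningRateSchedule, it can be passed to an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)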

Methods

from_config

Instantiates a LearningRateSchedule from its config.

Args
config Output of get_config().

Returns
A LearningRateSchedule instance.

get_config

Get the configuration of the learning rate schedule.
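
get_config and from_config form the usual Keras serialization round trip. Continuing the sketch above:

config = schedule.get_config()  # a plain dict of constructor arguments
restored = tfm.optimization.PowerAndLinearDecay.from_config(config)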

__call__

Call self as a function.
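
Calling the schedule object with a step returns the learning rate for that step. Continuing the sketch above (the printed values depend on the illustrative parameters chosen there):

# Learning rates at a few points of the schedule.
for step in [100, 5000, 9500, 10000]:
    print(step, float(schedule(step)))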