tf.train.piecewise_constant(x, boundaries, values, name=None)
See the guide: Training > Decaying the learning rate
Piecewise constant from boundaries and interval values.
Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5 for the next 10000 steps, and 0.1 for any additional steps.
```python
global_step = tf.Variable(0, trainable=False)
boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)

# Later, whenever we perform an optimization step, we increment global_step.
```
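The interval rule can be sketched in plain Python (without TensorFlow) to show which value each step receives; this is an illustrative reimplementation, not the library's code. `bisect_left` counts the boundaries strictly below x, which matches the inclusive left intervals:

```python
import bisect

def piecewise(x, boundaries, values):
    # values[i] applies while x is still <= boundaries[i];
    # bisect_left counts the boundaries strictly less than x.
    return values[bisect.bisect_left(boundaries, x)]

boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]

print(piecewise(0, boundaries, values))       # 1.0
print(piecewise(100000, boundaries, values))  # 1.0  (the boundary itself is inclusive)
print(piecewise(100001, boundaries, values))  # 0.5
print(piecewise(110001, boundaries, values))  # 0.1
```

Note that step 100000 still receives 1.0, which is why the example speaks of the "first 100001 steps" (steps 0 through 100000 inclusive).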
Args:
  x: A 0-D scalar Tensor. Must be one of the following types: float32, float64, uint8, int8, int16, int32, int64.
  boundaries: A list of Tensors or ints or floats with strictly increasing entries, and with all elements having the same type as x.
  values: A list of Tensors or floats or ints that specifies the values for the intervals defined by boundaries. It should have one more element than boundaries, and all elements should have the same type.
  name: A string. Optional name of the operation. Defaults to 'PiecewiseConstant'.
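The shape and ordering constraints on the arguments can be checked up front; below is a minimal illustrative validator for those two conditions (the function name is hypothetical, not part of the TensorFlow API):

```python
def validate(boundaries, values):
    # The schedule expects exactly one more value than boundaries:
    # n boundaries split the number line into n + 1 intervals.
    if len(values) != len(boundaries) + 1:
        raise ValueError("values must have one more element than boundaries")
    # Boundaries must be strictly increasing for the intervals to be well defined.
    if any(b >= c for b, c in zip(boundaries, boundaries[1:])):
        raise ValueError("boundaries must be strictly increasing")

validate([100000, 110000], [1.0, 0.5, 0.1])  # passes silently
```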
Returns:
A 0-D Tensor. Its value is values[0] when x <= boundaries[0], values[1] when x > boundaries[0] and x <= boundaries[1], ..., and values[-1] when x > boundaries[-1].
Raises:
ValueError: if the types of x and boundaries do not match, or if the types of all values do not match, or if the number of elements in the lists does not match.
Eager Compatibility:
When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions.