
tf.keras.mixed_precision.experimental.LossScaleOptimizer

A deprecated optimizer that applies loss scaling.

Inherits From: LossScaleOptimizer, Optimizer

This class is identical to the non-experimental keras.mixed_precision.LossScaleOptimizer except that its constructor takes different arguments. For this class (the experimental version), the constructor takes a loss_scale argument. For the non-experimental class, the constructor encodes the loss-scaling information in multiple arguments. Note that unlike this class, the non-experimental class does not accept a tf.compat.v1.mixed_precision.LossScale, which is deprecated.

If you currently use this class, you should switch to the non-experimental tf.keras.mixed_precision.LossScaleOptimizer instead. Below are several examples of converting uses of the experimental class to the equivalent non-experimental class.

# In all of the examples below, `opt1` and `opt2` are identical
opt1 = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), loss_scale='dynamic')
opt2 = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD())
assert opt1.get_config() == opt2.get_config()

opt1 = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), loss_scale=123)
# dynamic=False indicates to use fixed loss scaling. initial_scale=123
# refers to the initial loss scale, which is the single fixed loss scale
# when dynamic=False.
opt2 = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), dynamic=False, initial_scale=123)
assert opt1.get_config() == opt2.get_config()

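# DynamicLossScale's initial_loss_scale and increment_period map to the
# non-experimental initial_scale and dynamic_growth_steps arguments.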
loss_scale = tf.compat.v1.mixed_precision.experimental.DynamicLossScale(
    initial_loss_scale=2048, increment_period=500)
opt1 = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), loss_scale=loss_scale)
opt2 = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), initial_scale=2048,
    dynamic_growth_steps=500)
assert opt1.get_config() == opt2.get_config()

Make sure to also switch from this class to the non-experimental class in isinstance checks, if you have any. If you do not do this, your model may run into hard-to-debug issues, as the experimental LossScaleOptimizer subclasses the non-experimental LossScaleOptimizer, but not vice versa. It is safe to switch isinstance checks to the non-experimental LossScaleOptimizer even before using the non-experimental LossScaleOptimizer.

opt1 = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), loss_scale='dynamic')
# The experimental class subclasses the non-experimental class.
isinstance(opt1, tf.keras.mixed_precision.LossScaleOptimizer)  # True
opt2 = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD())
# The non-experimental class does NOT subclass the experimental class.
isinstance(opt2, tf.keras.mixed_precision.experimental.LossScaleOptimizer)  # False
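
Once migrated, the wrapper is used like any other optimizer. In a custom training loop you must scale the loss and unscale the gradients yourself, via the wrapper's get_scaled_loss and get_unscaled_gradients methods. Below is a minimal sketch; the model, loss function, and input shapes are hypothetical stand-ins:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()
opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())

@tf.function
def train_step(x, y):
  with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x))
    # Scale the loss so small gradients do not underflow in float16.
    scaled_loss = opt.get_scaled_loss(loss)
  scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
  # Unscale the gradients before applying them.
  grads = opt.get_unscaled_gradients(scaled_grads)
  opt.apply_gradients(zip(grads, model.trainable_variables))
  return loss

train_step(tf.ones((2, 4)), tf.ones((2, 1)))

When training with Model.fit, this scaling and unscaling is handled automatically and no manual calls are needed.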

Args
optimizer: The Optimizer instance to wrap.
loss_scale: The loss scale to scale the loss and gradients. This can either be an int/float to use a fixed loss scale, the string "dynamic" to use dynamic loss scaling, or an instance of a LossScale. The string "dynamic" is equivalent to passing DynamicLossScale(), and passing an int/float is equivalent to passing a FixedLossScale with the given loss scale. If a DynamicLossScale is passed, DynamicLossScale.multiplier must be 2 (the default).

Raises
ValueError: In case of any invalid argument.

Attributes
dynamic: Bool indicating whether dynamic loss scaling is used.
dynamic_counter: The number of steps since the loss scale was last increased or decreased.

This is None if LossScaleOptimizer.dynamic is False.

The counter is incremented every step. Once it reaches LossScaleOptimizer.dynamic_growth_steps, the loss scale will be doubled and the counter will be reset back to zero. If nonfinite gradients are encountered, the loss scale will be halved and the counter will be reset back to zero.
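
As a small illustrative sketch (the variable and loss below are hypothetical; eager execution is assumed), two consecutive finite steps with dynamic_growth_steps=2 double the scale and reset the counter:

var = tf.Variable(1.0)
opt = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), initial_scale=16., dynamic_growth_steps=2)
for _ in range(2):  # two consecutive steps with finite gradients
  with tf.GradientTape() as tape:
    scaled_loss = opt.get_scaled_loss(var * var)
  grads = opt.get_unscaled_gradients(tape.gradient(scaled_loss, [var]))
  opt.apply_gradients(zip(grads, [var]))
print(opt.loss_scale.numpy())       # 32.0: doubled after 2 finite steps
print(opt.dynamic_counter.numpy())  # 0: the counter was reset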

dynamic_growth_steps: The number of steps it takes to increase the loss scale.

This is None if LossScaleOptimizer.dynamic is False.

Every dynamic_growth_steps consecutive steps with finite gradients, the loss scale is increased.

initial_scale: The initial loss scale.

If LossScaleOptimizer.dynamic is False, this is the same number as LossScaleOptimizer.loss_scale, as the loss scale never changes.
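
As a small illustration (the numbers are arbitrary), with fixed loss scaling the attributes relate as described above:

opt = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), dynamic=False, initial_scale=128.)
print(opt.initial_scale)         # 128.0
print(opt.loss_scale.numpy())    # 128.0: equal to initial_scale and never changes
print(opt.dynamic_counter)       # None: dynamic is False
print(opt.dynamic_growth_steps)  # None: dynamic is False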