
Loss scale manager with a fixed loss scale.

Inherits From: LossScaleManager

The loss scale is not updated for the lifetime of the class.

Args:
  loss_scale: A Python float. Its ideal value varies depending on the model. Choosing a loss_scale that is too small might affect model quality; one that is too big might cause gradients to overflow to inf or nan. There is no single right loss_scale for all cases. Choosing a relatively big value is harmless as long as no inf or nan is encountered during training.

Raises:
  ValueError: If loss_scale is less than 1.
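To make the documented contract concrete, here is a minimal, framework-free sketch of a fixed loss-scale manager. The class name is hypothetical and this is not the TensorFlow implementation; it only mirrors the behavior described above (constant scale, ValueError below 1, updates ignored).

```python
# Hypothetical illustration of the documented contract; the real
# TensorFlow class operates on tensors and graph ops, not Python floats.

class FixedLossScaleManagerSketch:
    """Keeps a constant loss scale for its whole lifetime."""

    def __init__(self, loss_scale):
        # Documented precondition: loss_scale must be at least 1.
        if loss_scale < 1:
            raise ValueError("loss_scale must be at least 1, got %r" % loss_scale)
        self._loss_scale = float(loss_scale)

    def get_loss_scale(self):
        # Always the same value; the scale is never adjusted.
        return self._loss_scale

    def update_loss_scale(self, finite_grads):
        # Fixed manager: ignore finite_grads and change nothing.
        return None
```

Because the scale never changes, `update_loss_scale` can safely discard its argument; dynamic managers would instead grow or shrink the scale based on it.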



get_loss_scale()

Returns the loss scale as a scalar float32 tensor.


update_loss_scale(finite_grads)

Updates the loss scale based on whether the gradients are finite in the current step. Since this manager keeps a fixed loss scale, the scale is never actually changed.

Args:
  finite_grads: A boolean scalar tensor indicating whether all gradients are finite (i.e., not inf or nan).

Returns:
  An op that, when executed, updates the loss scale. If eager execution is enabled, does not return anything.
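To show how these two methods fit into a training step, here is a hedged, framework-free sketch: the gradients of the scaled loss are checked for finiteness, `update_loss_scale` is informed (a no-op for a fixed manager), and the gradients are divided by the scale before being applied. All names are illustrative; real code would use TensorFlow tensors and ops.

```python
import math

def training_step(manager, raw_grads):
    # raw_grads stands in for the true (unscaled) gradients; with loss
    # scaling, the computed gradients are those of loss * scale.
    scale = manager.get_loss_scale()
    scaled_grads = [g * scale for g in raw_grads]
    finite = all(math.isfinite(g) for g in scaled_grads)
    manager.update_loss_scale(finite)          # no-op for a fixed manager
    if not finite:
        return None                            # skip the step on inf/nan
    return [g / scale for g in scaled_grads]   # unscale before applying

class FixedManager:
    # Minimal fixed manager used only for this illustration.
    def __init__(self, loss_scale):
        self._loss_scale = float(loss_scale)
    def get_loss_scale(self):
        return self._loss_scale
    def update_loss_scale(self, finite_grads):
        return None
```

Skipping the step when any gradient is non-finite is what makes a relatively large fixed scale safe: an occasional overflow costs one update rather than corrupting the weights.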