Loss scale manager with a fixed loss scale.
tf.contrib.mixed_precision.FixedLossScaleManager(loss_scale)
The loss scale is not updated for the lifetime of the class.
Args:
  loss_scale: A Python float. Its ideal value varies from model to model. Choosing a too-small loss_scale can degrade model quality; a too-large loss_scale can cause gradients to overflow to inf or nan. There is no single right loss_scale for all models, but there is no harm in choosing a relatively large value as long as no nan or inf is encountered during training.
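To see why a fixed loss scale helps, consider a small float16 gradient: cast directly, it underflows to zero, but scaling the loss (and hence the gradients) before the cast preserves it, and dividing by the same constant afterwards recovers the true value. The sketch below is illustrative only and uses Python's struct half-precision format in place of TensorFlow's float16 tensors:

```python
import struct

def to_half(x):
    """Round a Python float to the nearest IEEE float16 value."""
    return struct.unpack('e', struct.pack('e', x))[0]

scale = 1024.0          # a fixed loss scale, as FixedLossScaleManager would hold
true_grad = 1e-8        # a gradient too small for float16

# Cast directly: the value underflows to 0.0 and the update is lost.
direct = to_half(true_grad)

# Scale first, cast to float16, then unscale in higher precision:
scaled = to_half(true_grad * scale)
recovered = scaled / scale   # close to the original 1e-8
```

Here `direct` is exactly 0.0 while `recovered` is a nonzero value within rounding error of 1e-8, which is the effect the fixed loss scale buys during mixed-precision training.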
Raises:
  ValueError: If loss_scale is less than 1.
get_loss_scale()

Returns the loss scale as a scalar.
update_loss_scale(finite_grads)
Updates loss scale based on if gradients are finite in current step.
Args:
  finite_grads: A boolean scalar tensor indicating whether all gradients are finite (i.e., not inf or nan).
Returns:
  An op that, when executed, updates the loss scale. If eager execution is enabled, does not return anything.
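The contract above can be summarized in a minimal pure-Python sketch. This is not the TensorFlow implementation (which returns a no-op op in graph mode); the class name and plain-float state are hypothetical stand-ins used only to show that a fixed manager never changes its scale, regardless of what finite_grads reports:

```python
class FixedLossScaleSketch:
    """Illustrative mimic of the FixedLossScaleManager contract."""

    def __init__(self, loss_scale):
        # Mirrors the documented ValueError for loss_scale < 1.
        if loss_scale < 1:
            raise ValueError("loss_scale must be at least 1, got %r" % (loss_scale,))
        self._loss_scale = float(loss_scale)

    def get_loss_scale(self):
        # Fixed manager: always the value passed to the constructor.
        return self._loss_scale

    def update_loss_scale(self, finite_grads):
        # The scale is never updated for the lifetime of the object,
        # whether or not the gradients were finite.
        return None
```

Compare this with tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager, where update_loss_scale does adjust the scale based on finite_grads.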