Reduce learning rate when a metric has stopped improving.
Compat aliases for migration
See Migration guide for more details.
`tf.compat.v1.keras.callbacks.ReduceLROnPlateau`
```python
tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto',
    min_delta=0.0001, cooldown=0, min_lr=0, **kwargs
)
```
Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity, and if no improvement is seen for a `patience` number of epochs, the learning rate is reduced.
```python
# Assumes `model`, `X_train`, and `Y_train` are already defined and compiled.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                                 patience=5, min_lr=0.001)
model.fit(X_train, Y_train, callbacks=[reduce_lr])
```
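Each time a plateau is detected, the new rate is the current rate times `factor`, clamped from below at `min_lr`. The tiny sketch below restates that rule for clarity; it is a simplified restatement, not the library's internal code:

```python
def reduced_lr(old_lr, factor=0.2, min_lr=0.001):
    """Learning rate after one plateau-triggered reduction."""
    return max(old_lr * factor, min_lr)

# e.g. starting from 0.01: 0.01 -> 0.002 -> 0.001 (clamped at min_lr)
```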
| Arguments | |
|---|---|
| `monitor` | quantity to be monitored. |
| `factor` | factor by which the learning rate will be reduced: `new_lr = lr * factor`. |
| `patience` | number of epochs with no improvement after which the learning rate will be reduced. |
| `verbose` | int. 0: quiet, 1: update messages. |
| `mode` | one of `{'auto', 'min', 'max'}`. In `'min'` mode, the lr is reduced when the monitored quantity has stopped decreasing; in `'max'` mode, when it has stopped increasing; in `'auto'` mode, the direction is inferred automatically from the name of the monitored quantity. |
| `min_delta` | threshold for measuring the new optimum, to only focus on significant changes. |
| `cooldown` | number of epochs to wait before resuming normal operation after the lr has been reduced. |
| `min_lr` | lower bound on the learning rate. |
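A self-contained sketch showing the callback in a full training run; the model, synthetic data, and hyperparameter values are illustrative assumptions, not taken from this page:

```python
import numpy as np
import tensorflow as tf

# Synthetic regression data, purely for demonstration.
x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="mse")

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",  # watch validation loss
    factor=0.5,          # halve the lr on each plateau
    patience=3,          # wait 3 stagnant epochs before reducing
    min_delta=1e-4,      # improvements smaller than this don't count
    cooldown=2,          # pause plateau-counting for 2 epochs after a drop
    min_lr=1e-5,         # never go below this lr
    verbose=1,           # print a message whenever the lr is reduced
)

model.fit(x, y, validation_split=0.2, epochs=30, callbacks=[reduce_lr])
```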
Methods

`set_model(model)`
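`set_model` is inherited from the base `Callback` class and is normally invoked by Keras itself at the start of `fit`; calling it by hand only comes up when driving the callback from a custom loop. A minimal sketch of that manual wiring, assuming a compiled model and using a flat loss as a stand-in for real training:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")

cb = tf.keras.callbacks.ReduceLROnPlateau(monitor="loss", factor=0.5,
                                          patience=1, verbose=1)
cb.set_model(model)   # normally done by Keras itself inside `fit`
cb.on_train_begin()
for epoch in range(4):
    # A constant loss never improves, so with patience=1 the lr is
    # halved on every epoch after the first.
    cb.on_epoch_end(epoch, logs={"loss": 1.0})
```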