tf.keras.callbacks.ReduceLROnPlateau

Reduce learning rate when a metric has stopped improving.

Inherits From: Callback

Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.

Example:

# Multiply the learning rate by 0.2 after 5 epochs without val_loss improvement,
# but never go below min_lr.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                                 patience=5, min_lr=0.001)
model.fit(x_train, y_train, callbacks=[reduce_lr])
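For context, a minimal end-to-end sketch follows. The model architecture, the synthetic x_train/y_train arrays, and the validation_split value are illustrative assumptions, not part of the API reference; note that monitoring 'val_loss' requires validation data.

import numpy as np
import tensorflow as tf

# Illustrative synthetic data; any (features, labels) pair works here.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,)).astype("float32")

# A small example model; the callback is independent of the architecture.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy")

# validation_split provides the 'val_loss' quantity that the callback monitors.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2,
                                                 patience=5, min_lr=0.001)
model.fit(x_train, y_train, epochs=30, validation_split=0.2,
          callbacks=[reduce_lr], verbose=0)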

Arguments:

monitor: quantity to be monitored.
factor: factor by which the learning rate will be reduced. new_lr = lr * factor.
patience: number of epochs with no improvement after which the learning rate will be reduced.
verbose: int. 0: quiet, 1: update messages.
mode: one of {'auto', 'min', 'max'}. In 'min' mode, the learning rate is reduced when the monitored quantity has stopped decreasing; in 'max' mode, when it has stopped increasing; in 'auto' mode, the direction is inferred from the name of the monitored quantity.
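To make the interplay of patience, factor and min_lr concrete, here is a simplified, standalone sketch of the plateau bookkeeping in 'min' mode. The function name and the hard-coded loss values are illustrative; this is not the library's implementation.

def plateau_schedule(val_losses, lr=0.01, factor=0.2, patience=5, min_lr=0.001):
    """Return the learning rate used at each epoch for a given val_loss history."""
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        if loss < best:           # improvement resets the patience counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # no improvement for `patience` epochs
                lr = max(lr * factor, min_lr)  # new_lr = lr * factor, floored at min_lr
                wait = 0
        lrs.append(lr)
    return lrs

# Example: val_loss stalls after epoch 3, so the rate drops once patience is exhausted.
print(plateau_schedule([0.9, 0.7, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65], patience=3))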