Optimizer that implements the FTRL algorithm.
tf.keras.optimizers.Ftrl(
    learning_rate=0.001,
    learning_rate_power=-0.5,
    initial_accumulator_value=0.1,
    l1_regularization_strength=0.0,
    l2_regularization_strength=0.0,
    name='Ftrl',
    l2_shrinkage_regularization_strength=0.0,
    beta=0.0,
    **kwargs
)
"Follow The Regularized Leader" (FTRL) is an optimization algorithm developed at Google for click-through rate prediction in the early 2010s. It is most suitable for shallow models with large and sparse feature spaces. The algorithm is described in this paper. The Keras version has support for both online L2 regularization (the L2 regularization described in the paper above) and shrinkage-type L2 regularization (which is the addition of an L2 penalty to the loss function).
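The difference between the two L2 variants can be sketched in a few lines of NumPy. Shrinkage-type L2 adds an L2 penalty to the loss itself, so it surfaces as an extra 2 * l2_shrinkage * w term in the gradient handed to the optimizer, whereas online L2 enters the FTRL closed-form update directly. A minimal sketch (the helper name shrinkage_gradient is illustrative, not part of the Keras API):

```python
import numpy as np

def shrinkage_gradient(g, w, l2_shrinkage):
    """Gradient of loss + l2_shrinkage * ||w||^2 with respect to w.

    Illustrative only: shows how a shrinkage-type L2 penalty folds into
    the gradient before the FTRL update is applied.
    """
    return g + 2.0 * l2_shrinkage * w

g = np.array([0.5, -0.2])
w = np.array([1.0, 2.0])
print(shrinkage_gradient(g, w, l2_shrinkage=0.1))  # [0.7, 0.2]
```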
Initialization:

n = 0
sigma = 0
z = 0
Update rule for one variable w (lr is the learning rate, g is the gradient for the variable, lambda_1 and lambda_2 are the L1 and L2 regularization strengths):

prev_n = n
n = n + g ** 2
sigma = (sqrt(n) - sqrt(prev_n)) / lr
z = z + g - sigma * w
if abs(z) > lambda_1:
  w = (sgn(z) * lambda_1 - z) / ((beta + sqrt(n)) / lr + lambda_2)
else:
  w = 0
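The per-coordinate FTRL-Proximal step can be exercised with a small NumPy sketch. The names ftrl_update, lambda_1, and lambda_2 are illustrative; this follows the closed-form update, not the library's actual implementation:

```python
import numpy as np

def ftrl_update(w, g, n, z, lr, lambda_1=0.0, lambda_2=0.0, beta=0.0):
    """One FTRL-Proximal step over a vector of coordinates (sketch)."""
    # Accumulate squared gradients per coordinate.
    prev_n = n
    n = n + g ** 2
    # Per-coordinate learning-rate schedule term.
    sigma = (np.sqrt(n) - np.sqrt(prev_n)) / lr
    # z aggregates gradients minus a correction that keeps w stable.
    z = z + g - sigma * w
    # Closed-form solution of the proximal problem: coordinates with
    # |z| <= lambda_1 are set to exactly zero, which yields sparsity.
    w = np.where(
        np.abs(z) > lambda_1,
        (np.sign(z) * lambda_1 - z) / ((beta + np.sqrt(n)) / lr + lambda_2),
        0.0,
    )
    return w, n, z

# Minimize f(w) = w**2 / 2 (gradient g = w) starting from w = 1.
w, n, z = np.array([1.0]), np.zeros(1), np.zeros(1)
for _ in range(100):
    w, n, z = ftrl_update(w, w.copy(), n, z, lr=0.5)
print(w)  # approaches 0
```

With lambda_1 > 0, any coordinate whose accumulated z stays within [-lambda_1, lambda_1] is held at exactly zero, which is what makes FTRL attractive for large, sparse feature spaces.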