tff.learning.optimizers.build_yogi

Returns a tff.learning.optimizers.Optimizer for Yogi.

The Yogi optimizer is based on the paper "Adaptive Methods for Nonconvex Optimization" (Zaheer et al., NeurIPS 2018).

The update rule, given learning rate lr, epsilon eps, accumulator acc, preconditioner s, iteration t, weights w, and gradients g, is:

acc = beta_1 * acc + (1 - beta_1) * g
s = s + (1 - beta_2) * sign(g ** 2 - s) * (g ** 2)
normalized_lr = lr * sqrt(1 - beta_2**t) / (1 - beta_1**t)
w = w - normalized_lr * acc / (sqrt(s) + eps)
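
The following is a minimal NumPy sketch of a single Yogi step that mirrors the pseudocode above. It uses the same variable names and is illustrative only; it is not TFF's implementation, and the hyperparameter values in the toy usage are made up.

import numpy as np

def yogi_step(w, acc, s, g, t, lr, beta_1, beta_2, eps):
  """One Yogi update, mirroring the pseudocode above (illustrative only)."""
  # First-moment accumulator (same as Adam).
  acc = beta_1 * acc + (1 - beta_1) * g
  # Additive, sign-controlled update of the second-moment preconditioner.
  s = s + (1 - beta_2) * np.sign(g**2 - s) * g**2
  # Bias-corrected learning rate, as in Adam.
  normalized_lr = lr * np.sqrt(1 - beta_2**t) / (1 - beta_1**t)
  # Weight update.
  w = w - normalized_lr * acc / (np.sqrt(s) + eps)
  return w, acc, s

# Toy usage on the quadratic loss sum(w**2); hyperparameters are illustrative.
w, acc, s = np.array([1.0, -2.0]), np.zeros(2), np.full(2, 1e-6)
for t in range(1, 4):
  g = 2.0 * w  # gradient of sum(w**2)
  w, acc, s = yogi_step(w, acc, s, g, t, lr=0.1, beta_1=0.9, beta_2=0.999, eps=1e-3)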

The implementation of Yogi is based on additive updates to the preconditioner, as opposed to the multiplicative (exponential moving average) updates used in Adam. Experiments show better performance across NLP and vision tasks in both centralized and federated settings.
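
As a concrete illustration of that difference, the snippet below contrasts Adam's multiplicative second-moment update with Yogi's additive one; the tensor values and beta_2 are made up for demonstration and the code belongs to neither library.

import numpy as np

beta_2 = 0.999
s = np.array([1e-6, 1e-6])   # current preconditioner (illustrative values)
g = np.array([0.5, -0.2])    # current gradient (illustrative values)

# Adam: exponential moving average. The change in s is (1 - beta_2) * (g**2 - s),
# so when gradients suddenly become small, s can shrink quickly and the
# effective learning rate can spike.
s_adam = beta_2 * s + (1 - beta_2) * g**2

# Yogi: additive update. The change in s has magnitude (1 - beta_2) * g**2
# regardless of how far s is from g**2, so the effective learning rate
# changes more gradually.
s_yogi = s + (1 - beta_2) * np.sign(g**2 - s) * g**2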

Typically, use a learning rate about 10x larger than the one you would use for Adam.

Args
learning_rate: A positive float for the learning rate.
beta_1: A float between 0.0 and 1.0 for the decay used to track the previous gradients (first moment).
beta_2: A float between 0.0 and 1.0 for the decay used to track the magnitude (second moment) of previous gradients.
epsilon: A constant trading off adaptivity and noise.
initial_preconditioner_value: The starting value for the preconditioner. Only positive values are allowed.
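
A minimal usage sketch, assuming the initialize/next interface of tff.learning.optimizers.Optimizer; the tensor values and the 0.1 learning rate are illustrative only, not recommended settings.

import tensorflow as tf
import tensorflow_federated as tff

# Per the note above, a learning rate roughly 10x the one you would use for
# Adam is a reasonable starting point (0.1 here is illustrative).
optimizer = tff.learning.optimizers.build_yogi(learning_rate=0.1)

# Standalone use of the returned tff.learning.optimizers.Optimizer:
# `initialize` builds the optimizer state from tensor specs, and `next`
# returns the updated state and weights.
weights = (tf.constant([1.0, 2.0]),)
gradients = (tf.constant([0.1, -0.3]),)
specs = tf.nest.map_structure(tf.TensorSpec.from_tensor, weights)

state = optimizer.initialize(specs)
state, weights = optimizer.next(state, weights, gradients)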