
tf.keras.optimizers.Nadam

Optimizer that implements the NAdam algorithm.

Inherits From: Optimizer

Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.
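To make the "Adam with Nesterov momentum" relationship concrete, here is a simplified one-step Nadam update sketched in plain NumPy. This is an illustrative sketch, not the Keras implementation: Keras additionally applies a decaying momentum schedule, which is omitted here for clarity.

```python
import numpy as np

def nadam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    """One simplified Nadam update: bias-corrected Adam moments plus a
    Nesterov look-ahead on the first moment (sketch; Keras also uses a
    momentum schedule, omitted here)."""
    m = beta1 * m + (1 - beta1) * grad        # 1st moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # 2nd moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected 1st moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected 2nd moment
    # Nesterov look-ahead: blend corrected momentum with the current gradient
    m_bar = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1 ** t)
    theta = theta - lr * m_bar / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 / 2 (gradient is x) starting from x = 10
x, m, v = 10.0, 0.0, 0.0
for t in range(1, 101):
    x, m, v = nadam_step(x, x, m, v, t, lr=0.2)
print(x)  # x has moved toward the minimum at 0
```

The look-ahead term `m_bar` is what distinguishes Nadam from plain Adam, which would use `m_hat` directly in the parameter update.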

Args:

learning_rate: A Tensor or a floating point value. The learning rate.
beta_1: A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
beta_2: A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.
epsilon: A small constant for numerical stability.
name: Optional name for the operations created when applying gradients. Defaults to "Nadam".
**kwargs: Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value.
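The two clipping keywords can be sketched in plain NumPy to show what they do to a gradient before the update is applied. These helper functions are illustrative stand-ins, not the library's internals:

```python
import numpy as np

def clip_by_norm(grads, clipnorm):
    """Rescale each gradient tensor so its L2 norm is at most `clipnorm`
    (sketch of the `clipnorm` keyword, applied per tensor)."""
    return [g * min(1.0, clipnorm / (np.linalg.norm(g) + 1e-12)) for g in grads]

def clip_by_value(grads, clipvalue):
    """Clamp every gradient element into [-clipvalue, clipvalue]
    (sketch of the `clipvalue` keyword)."""
    return [np.clip(g, -clipvalue, clipvalue) for g in grads]

g = [np.array([3.0, 4.0])]            # L2 norm is 5.0
by_norm = clip_by_norm(g, 1.0)[0]     # rescaled to norm 1: [0.6, 0.8]
by_value = clip_by_value(g, 2.0)[0]   # elementwise clamp: [2.0, 2.0]
```

Note that norm clipping preserves the gradient's direction, while value clipping can change it.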

Usage Example:

opt = tf.keras.optimizers.Nadam(learning_rate=0.2)
var1 = tf.Variable(10.0)
loss = lambda: (var1 ** 2) / 2.0  # d(loss)/d(var1) == var1
step_count = opt.minimize(loss, [var1]).numpy()
"{:.1f}".format(var1.numpy())  # '9.8'

Reference:

Dozat, 2015. "Incorporating Nesterov Momentum into Adam."

Raises:

ValueError: in case of any invalid argument.