Returns a `tff.learning.optimizers.Optimizer` for momentum SGD.

tff.learning.optimizers.build_sgdm(
    learning_rate: optimizer.Float = 0.01,
    momentum: Optional[optimizer.Float] = None
) -> optimizer.Optimizer
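The returned optimizer can be used directly via its `initialize` and `next` methods. A minimal standalone sketch (assumes a recent TensorFlow Federated release; the tensor values are purely illustrative):

```python
import tensorflow as tf
import tensorflow_federated as tff

# Build a momentum SGD optimizer.
optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.01, momentum=0.9)

# Illustrative weights and gradients.
weights = tf.constant([1.0, 2.0])
gradients = tf.constant([0.1, -0.2])

# Optimizer state is created from the shape/dtype structure of the weights.
state = optimizer.initialize(tf.TensorSpec(weights.shape, weights.dtype))

# One optimization step returns the updated state and the updated weights.
state, weights = optimizer.next(state, weights, gradients)
```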
This optimizer supports simple gradient descent and its variant with momentum.
If momentum is not used, the update rule given learning rate `lr`, weights `w`, and gradients `g` is:

w = w - lr * g
If momentum `m` (a float between 0.0 and 1.0) is used, the update rule is:

v = m * v + g
w = w - lr * v

where `v` is the velocity carried over from previous steps of the optimizer.
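For intuition, here is a tiny pure-Python sketch of the two update rules above (plain floats only; it mirrors the formulas, not TFF's actual implementation):

```python
def sgd_step(w, g, lr):
    # Vanilla SGD: w = w - lr * g
    return w - lr * g

def sgdm_step(w, v, g, lr, m):
    # Momentum SGD: accumulate velocity, then step along it.
    v = m * v + g
    return w - lr * v, v

# Two steps with momentum; the velocity carries over between steps.
w, v = 1.0, 0.0
w, v = sgdm_step(w, v, g=0.5, lr=0.1, m=0.9)  # v = 0.5,  w = 0.95
w, v = sgdm_step(w, v, g=0.5, lr=0.1, m=0.9)  # v = 0.95, w = 0.855
```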
Args

| `learning_rate` | A positive float for the learning rate; defaults to 0.01. |
| `momentum` | An optional float between 0.0 and 1.0. If `None`, momentum is not used. |
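In federated training, the built optimizer is more commonly passed to an algorithm builder than stepped by hand. A sketch using TFF's `tff.learning.algorithms.build_weighted_fed_avg` (here `model_fn` is a hypothetical placeholder for your model constructor, and optimizer-instance arguments assume a recent TFF release):

```python
import tensorflow_federated as tff

# `model_fn` is a hypothetical placeholder: a no-arg callable returning a
# tff.learning.models.VariableModel.
training_process = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn,
    # Clients take small SGD steps on their local data.
    client_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=0.02),
    # The server applies aggregated updates with momentum.
    server_optimizer_fn=tff.learning.optimizers.build_sgdm(
        learning_rate=1.0, momentum=0.9
    ),
)
```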