

public class RMSProp<Model: Differentiable>: Optimizer
    where Model.TangentVector: VectorProtocol & PointwiseMultiplicative & ElementaryFunctions,
          Model.TangentVector.VectorSpaceScalar == Float

RMSProp optimizer.

It is recommended to leave the parameters of this optimizer at their default values (except for the learning rate, which can be freely tuned). This optimizer is usually a good choice for recurrent neural networks.

Reference: "rmsprop: Divide the gradient by a running average of its recent magnitude" (Tieleman & Hinton, 2012)
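The referenced rule keeps a running average of squared gradients (stored in the `alpha` property below) and divides each gradient by its root. A minimal scalar sketch of that update, for illustration only (this type is hypothetical and not part of the library; the real optimizer operates on `Model.TangentVector`, not a single `Float`):

```swift
// Scalar sketch of the RMSProp update rule, assuming the standard
// formulation from the referenced lecture slides.
struct ScalarRMSProp {
    var learningRate: Float = 0.001
    var rho: Float = 0.9
    var epsilon: Float = 1e-8
    var alpha: Float = 0  // running average of squared gradients

    mutating func update(_ parameter: inout Float, along gradient: Float) {
        // Decay the running average toward the new squared gradient.
        alpha = rho * alpha + (1 - rho) * gradient * gradient
        // Scale the step by the inverse root of that average.
        parameter -= learningRate * gradient / (alpha.squareRoot() + epsilon)
    }
}
```

Because the step size adapts per parameter, large recent gradients shrink the effective learning rate, which is part of why the defaults transfer well across problems.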

  • Declaration

    public typealias Model = Model
  • The learning rate.


    public var learningRate: Float
  • The gradient moving average decay factor.


    public var rho: Float
  • A small scalar added to the denominator to improve numerical stability.


    public var epsilon: Float
  • The weight decay.


    public var decay: Float
  • The step count.


    public var step: Float
  • The alpha values for all model differentiable variables.


    public var alpha: Model.TangentVector
  • Declaration

    public init(
        for model: __shared Model,
        learningRate: Float = 0.001,
        rho: Float = 0.9,
        epsilon: Float = 1e-8,
        decay: Float = 0
    )
  • Declaration

    public func update(_ model: inout Model, along direction: Model.TangentVector)
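A hedged usage sketch of the `init` and `update(_:along:)` declarations above, assuming the `TensorFlow` module is available; `LinearModel` and its dimensions are made up for illustration:

```swift
import TensorFlow

// Hypothetical model used only to demonstrate the optimizer API.
struct LinearModel: Layer {
    var dense = Dense<Float>(inputSize: 4, outputSize: 1)

    @differentiable
    func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
        return dense(input)
    }
}

var model = LinearModel()
// Matches the initializer above; unspecified parameters keep their defaults.
let optimizer = RMSProp(for: model, learningRate: 0.001, rho: 0.9)

let x = Tensor<Float>(randomNormal: [8, 4])
let y = Tensor<Float>(zeros: [8, 1])

for _ in 0..<10 {
    // Differentiate the loss with respect to the model's parameters…
    let (loss, grad) = valueWithGradient(at: model) { model -> Tensor<Float> in
        meanSquaredError(predicted: model(x), expected: y)
    }
    // …then move the model along that direction with the RMSProp rule.
    optimizer.update(&model, along: grad)
    print(loss)
}
```

Note that `update(_:along:)` takes the model `inout`, so the optimizer mutates the parameters in place while keeping its own running state (`alpha`, `step`) internally.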