
public class RMSProp<Model: Differentiable>: Optimizer
where
  Model.TangentVector: VectorProtocol & PointwiseMultiplicative
    & ElementaryFunctions & KeyPathIterable,
  Model.TangentVector.VectorSpaceScalar == Float

An RMSProp optimizer. RMSProp is a form of stochastic gradient descent in which the gradients are divided by a running average of their recent magnitude.

It is recommended to leave the parameters of this optimizer at their default values (except for the learning rate, which can be freely tuned). This optimizer is usually a good choice for recurrent neural networks.

Reference: Tieleman & Hinton, “Lecture 6.5: rmsprop: Divide the gradient by a running average of its recent magnitude” (Coursera: Neural Networks for Machine Learning, 2012)
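
For intuition, the per-parameter step RMSProp performs can be sketched as follows. This is a minimal scalar sketch for illustration, not the library implementation; the names mirror the stored properties documented below:

    // One RMSProp step for a single scalar parameter (illustrative sketch).
    // `alpha` is the running average of squared gradients; `rho`, `epsilon`,
    // `learningRate`, `decay`, and `step` mirror the properties below.
    func rmspropStep(
      parameter: inout Float, gradient: Float, alpha: inout Float,
      learningRate: Float, rho: Float, epsilon: Float,
      decay: Float, step: Float
    ) {
      // Apply learning rate decay as training progresses.
      let lr = learningRate / (1 + decay * step)
      // Update the running average of squared gradient magnitudes.
      alpha = rho * alpha + (1 - rho) * gradient * gradient
      // Divide the gradient by the root of the running average.
      parameter -= lr * gradient / (alpha.squareRoot() + epsilon)
    }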

  • Declaration

    public typealias Model = Model
  • The learning rate.


    public var learningRate: Float
  • The gradient moving average decay factor.


    public var rho: Float
  • A small scalar added to the denominator to improve numerical stability.


    public var epsilon: Float
  • The learning rate decay.


    public var decay: Float
  • The step count.


    public var step: Float
  • The alpha values for all model differentiable variables.


    public var alpha: Model.TangentVector
  • Declaration

    public init(
      for model: __shared Model,
      learningRate: Float = 0.001,
      rho: Float = 0.9,
      epsilon: Float = 1e-8,
      decay: Float = 0
    )
  • Declaration

    public func update(_ model: inout Model, along direction: Model.TangentVector)
  • Declaration

    public required init(copying other: RMSProp, to device: Device)
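
Putting the pieces together, a typical use looks like the sketch below. `MyModel` and the training batch are hypothetical stand-ins; any model satisfying the generic constraints above works the same way:

    import TensorFlow

    // Hypothetical model conforming to the constraints required by RMSProp.
    struct MyModel: Layer {
      var dense = Dense<Float>(inputSize: 4, outputSize: 1)
      @differentiable
      func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
        dense(input)
      }
    }

    var model = MyModel()
    let optimizer = RMSProp(for: model, learningRate: 0.001, rho: 0.9)

    // Hypothetical training batch.
    let x = Tensor<Float>(randomNormal: [8, 4])
    let y = Tensor<Float>(zeros: [8, 1])

    // Compute gradients of the loss and take one optimizer step.
    let grads = gradient(at: model) { model -> Tensor<Float> in
      meanSquaredError(predicted: model(x), expected: y)
    }
    optimizer.update(&model, along: grads)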