public class RMSProp<Model: Differentiable>: Optimizer
    where Model.TangentVector: VectorProtocol & PointwiseMultiplicative & ElementaryFunctions,
          Model.TangentVector.VectorSpaceScalar == Float

RMSProp optimizer.

It is recommended to leave the parameters of this optimizer at their default values (except for the learning rate, which can be freely tuned). This optimizer is usually a good choice for recurrent neural networks.

Reference: "rmsprop: Divide the gradient by a running average of its recent magnitude" (Tieleman and Hinton, 2012)
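Below is a minimal usage sketch. The DenseModel type, tensor shapes, and data are illustrative assumptions for this example, not part of the API.

    import TensorFlow

    // A toy model; the layer sizes are arbitrary choices for this sketch.
    struct DenseModel: Layer {
        var dense = Dense<Float>(inputSize: 4, outputSize: 1)

        @differentiable
        func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
            dense(input)
        }
    }

    var model = DenseModel()
    // Keep the recommended defaults; tune only the learning rate.
    let optimizer = RMSProp(for: model, learningRate: 0.001)

    let x = Tensor<Float>(randomNormal: [8, 4])
    let y = Tensor<Float>(zeros: [8, 1])

    // One training step: differentiate the loss, then apply the RMSProp update.
    let grad = gradient(at: model) { model -> Tensor<Float> in
        meanSquaredError(predicted: model(x), expected: y)
    }
    optimizer.update(&model, along: grad)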

  • The model type.

    Declaration

    public typealias Model = Model
  • The learning rate.

    Declaration

    public var learningRate: Float
  • The gradient moving average decay factor.

    Declaration

    public var rho: Float
  • A small scalar added to the denominator to improve numerical stability.

    Declaration

    public var epsilon: Float
  • The learning rate decay.

    Declaration

    public var decay: Float
  • The step count.

    Declaration

    public var step: Float
  • The alpha values (running averages of the squared gradients) for all model differentiable variables.

    Declaration

    public var alpha: Model.TangentVector
  • Creates an instance for the given model.

    Declaration

    public init(
        for model: __shared Model,
        learningRate: Float = 0.001,
        rho: Float = 0.9,
        epsilon: Float = 1e-8,
        decay: Float = 0
    )
  • Updates the model along the given direction (see the sketch after this list).

    Declaration

    public func update(_ model: inout Model, along direction: Model.TangentVector)
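The update step follows the standard RMSProp rule from the reference above, combined with a time-based learning rate decay. The body below is a sketch of that rule written against this class's stored properties; it is illustrative, not necessarily the verbatim implementation.

    public func update(_ model: inout Model, along direction: Model.TangentVector) {
        step += 1
        // Time-based decay of the learning rate.
        let learningRate = self.learningRate / (1 + decay * step)
        // Running average of the squared gradient.
        alpha = alpha.scaled(by: rho) + (direction .* direction).scaled(by: 1 - rho)
        // Divide the gradient by the root of that average (plus epsilon), then step.
        let denominator = Model.TangentVector.sqrt(alpha).adding(epsilon)
        model.move(along: (direction .* denominator.reciprocal).scaled(by: -learningRate))
    }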