public class RMSProp<Model: Layer>: Optimizer
    where Model.AllDifferentiableVariables == Model.CotangentVector

RMSProp optimizer.

It is recommended to leave the parameters of this optimizer at their default values (except the learning rate, which can be freely tuned). This optimizer is usually a good choice for recurrent neural networks.

Reference: "rmsprop: Divide the gradient by a running average of its recent magnitude" (Tieleman & Hinton, Coursera: Neural Networks for Machine Learning, 2012)
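
For example, a minimal training-step sketch. The model type `MyModel`, the tensors `x` and `y`, and the mean-squared-error loss are illustrative assumptions; only the `RMSProp` API itself comes from this page, and the `applied(to:in:)` and `gradient(at:)` calls assume the same early Swift for TensorFlow release that this page documents.

    import TensorFlow

    var model = MyModel()
    // Defaults are usually fine; the learning rate is the main knob to tune.
    let optimizer = RMSProp(for: model, learningRate: 0.001)

    // One training step: differentiate a loss with respect to the model,
    // then move the model's variables along the resulting gradients.
    let grads = gradient(at: model) { model -> Tensor<Float> in
        let preds = model.applied(to: x, in: Context(learningPhase: .training))
        return (preds - y).squared().mean()
    }
    optimizer.update(&model.allDifferentiableVariables, along: grads)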

  • The learning rate.

    Declaration

    public var learningRate: Float
  • The gradient moving average decay factor.

    Declaration

    public var rho: Float
  • A small scalar added to the denominator to improve numerical stability.

    Declaration

    public var epsilon: Float
  • The learning rate decay.

    Declaration

    public var decay: Float
  • The step count.

    Declaration

    public var step: Float
  • The alpha values (running averages of squared gradients) for all model differentiable variables.

    Declaration

    public var alpha: Model.AllDifferentiableVariables
  • Creates an instance for the given model.

    Declaration

    public init(
        for model: __shared Model,
        learningRate: Float = 0.001,
        rho: Float = 0.9,
        epsilon: Float = 1e-8,
        decay: Float = 0
    )
  • Updates the given model variables along the given direction (see the per-parameter sketch after this list).

    Declaration

    public func update(_ model: inout Model.AllDifferentiableVariables,
                       along direction: Model.CotangentVector)
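
For reference, here is a per-parameter sketch of the step that `update(_:along:)` performs conceptually. It is not the library's literal implementation (the real method walks every differentiable variable of the model and stores the running averages in `alpha`), and the learning-rate decay term is an assumption based on the `decay` parameter above.

    import TensorFlow

    /// Applies one RMSProp step to a single parameter tensor.
    /// `alpha` is the running average of squared gradients kept between steps.
    func rmspropStep(
        parameter: inout Tensor<Float>,
        alpha: inout Tensor<Float>,
        gradient: Tensor<Float>,
        learningRate: Float,
        rho: Float = 0.9,
        epsilon: Float = 1e-8,
        decay: Float = 0,
        step: Float
    ) {
        // Assumed interpretation of `decay`: shrink the learning rate over time.
        let lr = learningRate / (1 + decay * step)
        // Accumulate an exponentially decaying average of squared gradients;
        // `rho` controls how quickly old gradients are forgotten.
        alpha = rho * alpha + (1 - rho) * gradient.squared()
        // Divide the gradient by the root of that running average, with
        // `epsilon` guarding against division by zero.
        parameter -= lr * gradient / (sqrt(alpha) + epsilon)
    }

Because the divisor adapts per parameter, dimensions with consistently large gradients take smaller effective steps, which is why the default hyperparameters transfer well across models.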