public class AdaDelta<Model: Differentiable>: Optimizer
    where Model.TangentVector: VectorProtocol & PointwiseMultiplicative & ElementaryFunctions,
          Model.TangentVector.VectorSpaceScalar == Float

ADADELTA optimizer.

ADADELTA is a more robust extension of AdaGrad. Instead of accumulating all past squared gradients, ADADELTA adapts learning rates based on a moving window of gradient updates, so it can keep adapting to the changing dynamics of the optimization problem even after many updates have been made.

Reference: "ADADELTA: An Adaptive Learning Rate Method" (Zeiler, 2012, arXiv:1212.5701)
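
As a minimal sketch of the rule, using scalar Float stand-ins for a single model coordinate (the actual optimizer applies this element-wise to Model.TangentVector and additionally decays the learning rate over steps; minor details such as exactly where epsilon enters may differ):

    // One AdaDelta step for a single coordinate. `gradient` stands in for one
    // component of the update direction and `parameter` for the matching weight;
    // the variable names mirror the properties documented below.
    func adaDeltaStep(
        parameter: inout Float, gradient: Float,
        averageSquared: inout Float, accumulatedDelta: inout Float,
        learningRate: Float = 1, rho: Float = 0.95, epsilon: Float = 1e-6
    ) {
        // Decaying average of squared gradients (the "moving window").
        averageSquared = rho * averageSquared + (1 - rho) * gradient * gradient
        // Scale the gradient by the ratio of the two running RMS values.
        let delta = gradient
            * (accumulatedDelta + epsilon).squareRoot()
            / (averageSquared + epsilon).squareRoot()
        // Decaying average of squared parameter updates.
        accumulatedDelta = rho * accumulatedDelta + (1 - rho) * delta * delta
        parameter -= learningRate * delta
    }

Because the ratio of running RMS values already provides a per-parameter step size, the default learningRate of 1 is usually a reasonable starting point.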

  • Declaration

    public typealias Model = Model
  • The learning rate.

    Declaration

    public var learningRate: Float
  • The decay factor, corresponding to the fraction of the gradient to keep at each time step.

    Declaration

    public var rho: Float
  • A small scalar added to the denominator to improve numerical stability.

    Declaration

    public var epsilon: Float
  • The learning rate decay.

    Declaration

    public var decay: Float
  • The current step.

    Declaration

    public var step: Int
  • The accumulated, exponentially decaying average of squared gradients.

    Declaration

    public var averageSquared: Model.TangentVector
  • The accumulated, exponentially decaying average of squared parameter updates.

    Declaration

    public var accumulatedDelta: Model.TangentVector
  • Declaration

    public init(
        for model: __shared Model,
        learningRate: Float = 1,
        rho: Float = 0.95,
        epsilon: Float = 1e-6,
        decay: Float = 0
    )
  • Declaration

    public func update(_ model: inout Model, along direction: Model.TangentVector)
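
For context, a hedged usage sketch: the model, loss function, and data below are hypothetical, and gradient(at:) and meanSquaredError come from Swift for TensorFlow's differentiation and loss APIs; only the AdaDelta initializer and update(_:along:) calls correspond to the declarations above.

    import TensorFlow

    // Hypothetical single-layer regression model; any Layer-conforming model
    // with a suitable TangentVector works the same way.
    struct LinearModel: Layer {
        var dense = Dense<Float>(inputSize: 4, outputSize: 1)

        @differentiable
        func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {
            dense(input)
        }
    }

    var model = LinearModel()
    let optimizer = AdaDelta(for: model, rho: 0.95, epsilon: 1e-6)

    let inputs = Tensor<Float>(randomNormal: [8, 4])    // hypothetical data
    let targets = Tensor<Float>(randomNormal: [8, 1])

    for _ in 0..<100 {
        // Differentiate the loss with respect to the model's parameters.
        let grads = gradient(at: model) { model -> Tensor<Float> in
            meanSquaredError(predicted: model(inputs), expected: targets)
        }
        // Apply the AdaDelta-scaled update to the model in place.
        optimizer.update(&model, along: grads)
    }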