tf.compat.v1.train.AdagradOptimizer

Optimizer that implements the Adagrad algorithm.

Inherits From: Optimizer

References:

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization: Duchi et al., 2011 (pdf)
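
For orientation, the per-parameter update Adagrad performs can be sketched as follows. This is a plain-NumPy illustration of the accumulator-based scaling described in the reference above, not the exact TensorFlow kernel; the names `var`, `accum`, `grad`, and `lr` are illustrative.

```python
import numpy as np

# Illustrative Adagrad step (assumption: follows the accumulator-based
# update in Duchi et al., 2011; the TF kernel may differ in detail).
def adagrad_step(var, accum, grad, lr):
    accum = accum + grad ** 2                # accumulate squared gradients
    var = var - lr * grad / np.sqrt(accum)   # step scaled by 1/sqrt(accumulator)
    return var, accum

# The effective step size shrinks as the accumulator grows.
var = np.array([1.0])
accum = np.array([0.1])      # analogous to initial_accumulator_value
for _ in range(3):
    grad = 2.0 * var         # gradient of var**2
    var, accum = adagrad_step(var, accum, grad, lr=0.1)
```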

Args

learning_rate: A Tensor or a floating point value. The learning rate.
initial_accumulator_value: A floating point value. Starting value for the accumulators; must be positive.
use_locking: If True, use locks for update operations.
name: Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad".

Raises

ValueError: If the initial_accumulator_value is invalid.
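
A minimal usage sketch with the constructor arguments above. It assumes TF 2.x with v1 behavior disabled for eager execution (the standard graph-and-Session workflow); the toy variable and loss are illustrative.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # v1 graph-mode workflow

# Toy quadratic objective; the variable and loss are illustrative.
x = tf.compat.v1.get_variable("x", initializer=3.0)
loss = tf.square(x)

opt = tf.compat.v1.train.AdagradOptimizer(
    learning_rate=0.1,
    initial_accumulator_value=0.1)  # must be positive
train_op = opt.minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(5):
        _, loss_val = sess.run([train_op, loss])
```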

Methods

apply_gradients
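
apply_gradients is typically paired with compute_gradients when gradients should be inspected or transformed before the update. A sketch under the same graph-mode assumptions as above; the gradient clipping is only an illustrative transformation, not part of this API.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.get_variable("x2", initializer=3.0)  # illustrative variable
loss = tf.square(x)
opt = tf.compat.v1.train.AdagradOptimizer(learning_rate=0.1)

# Two-step pattern: compute gradients, optionally transform them, then apply.
grads_and_vars = opt.compute_gradients(loss)
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars]
global_step = tf.compat.v1.train.get_or_create_global_step()
train_op = opt.apply_gradients(clipped, global_step=global_step)
```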