tf.compat.v1.train.AdagradOptimizer

Optimizer that implements the Adagrad algorithm.

Inherits From: Optimizer

References:

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization: Duchi et al., 2011 (pdf)
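
The per-parameter update Adagrad performs can be summarized as below. This is a minimal sketch of the textbook rule from Duchi et al., not the exact kernel TensorFlow executes (which may differ in details such as sparse-gradient handling):

```python
def adagrad_step(var, accum, grad, learning_rate):
    """One illustrative Adagrad update for a single scalar parameter."""
    accum = accum + grad ** 2                          # accumulate squared gradients
    var = var - learning_rate * grad / accum ** 0.5    # scale the step by 1/sqrt(accum)
    return var, accum

# Example: repeated steps shrink the effective learning rate per parameter.
v, a = 5.0, 0.1   # a mirrors initial_accumulator_value seeding the accumulator
for _ in range(3):
    grad = 2.0 * v                                     # gradient of f(v) = v**2
    v, a = adagrad_step(v, a, grad, learning_rate=0.5)
print(v, a)
```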

Args

learning_rate: A Tensor or a floating point value. The learning rate.
initial_accumulator_value: A floating point value. Starting value for the accumulators; must be positive.
use_locking: If True, use locks for update operations.
name: Optional name prefix for the operations created when applying gradients. Defaults to "Adagrad".

Raises

ValueError: If the initial_accumulator_value is invalid.
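
A minimal graph-mode usage sketch; the variable, loss, and hyperparameter values below are illustrative choices, not part of the API:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy quadratic objective; the variable and target values are arbitrary.
w = tf.compat.v1.get_variable(
    "w", shape=[2], initializer=tf.compat.v1.zeros_initializer())
loss = tf.reduce_sum(tf.square(w - [1.0, -1.0]))

optimizer = tf.compat.v1.train.AdagradOptimizer(
    learning_rate=0.1, initial_accumulator_value=0.1)
train_op = optimizer.minimize(loss)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op)
    print(sess.run(w))  # Should approach [1.0, -1.0].
```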

Methods

apply_gradients

Apply gradients to variables.

This is the second part of minimize(). It returns an Operation that applies gradients.

Args
grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().
global_step: Optional Variable to increment by one after the variables have been updated.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.

Returns

An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step.
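
An illustrative sketch of calling apply_gradients directly; the graph setup is hypothetical, and only the optimizer calls mirror the documented API:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable(
    "w", shape=[], initializer=tf.compat.v1.constant_initializer(5.0))
loss = tf.square(w)

global_step = tf.compat.v1.train.get_or_create_global_step()
optimizer = tf.compat.v1.train.AdagradOptimizer(learning_rate=0.5)

# The two halves of minimize(): compute the gradients, then apply them.
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_op)
    print(sess.run(global_step))  # 1 -- apply_gradients incremented the step.
```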