Update relevant entries in 'var' and 'accum' according to the adagrad scheme.
```
tf.raw_ops.SparseApplyAdagradV2(
    var,
    accum,
    lr,
    epsilon,
    grad,
    indices,
    use_locking=False,
    update_slots=True,
    name=None
)
```
That is, for the rows for which we have `grad`, we update `var` and `accum` as follows:
\[accum += grad * grad\]
\[var -= lr * grad * (1 / (sqrt(accum) + epsilon))\]
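The per-row update above can be sketched in plain NumPy. This is a reference sketch of the math, not the TensorFlow kernel; the function name and looping strategy are illustrative assumptions:

```python
import numpy as np

def sparse_apply_adagrad_v2(var, accum, lr, epsilon, grad, indices):
    # Reference sketch (not the TF kernel): apply the Adagrad update only
    # to the rows of `var`/`accum` named by `indices`.
    for g_row, i in zip(grad, indices):
        accum[i] += g_row * g_row                          # accum += grad * grad
        var[i] -= lr * g_row / (np.sqrt(accum[i]) + epsilon)  # var -= lr*grad/(sqrt(accum)+eps)
    return var, accum

# Example: update only row 1 of a 3-row variable.
var = np.ones((3, 2))
accum = np.full((3, 2), 0.1)
grad = np.array([[1.0, 1.0]])
var, accum = sparse_apply_adagrad_v2(var, accum, lr=0.1,
                                     epsilon=1e-7, grad=grad, indices=[1])
```

Rows not listed in `indices` are left untouched, which is the point of the sparse variant: only the slices that actually received gradients pay the update cost.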
Returns |
---|---
A mutable `Tensor`. Has the same type as `var`. |