Update 'var' and 'accum' according to FOBOS with Adagrad learning rate.
tf.raw_ops.ApplyProximalAdagrad(
var, accum, lr, l1, l2, grad, use_locking=False, name=None
)
accum += grad * grad
prox_v = var - lr * grad * (1 / sqrt(accum))
var = sign(prox_v) / (1 + lr * l2) * max{|prox_v| - lr * l1, 0}
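For reference, a minimal NumPy sketch (not part of the op's documentation) that follows the pseudocode above literally; the function name `proximal_adagrad_step` and the assumption of dense float arrays (scalars for `lr`, `l1`, `l2`) are illustrative only:

```python
import numpy as np

def proximal_adagrad_step(var, accum, lr, l1, l2, grad):
    # Hypothetical helper mirroring the documented pseudocode, not the TF kernel.
    # Accumulate squared gradients (Adagrad accumulator).
    accum = accum + grad * grad
    # Take a gradient step scaled by the Adagrad learning rate lr / sqrt(accum).
    prox_v = var - lr * grad / np.sqrt(accum)
    # Proximal step: soft-threshold by lr * l1, then shrink by 1 / (1 + lr * l2).
    var = np.sign(prox_v) / (1.0 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0.0)
    return var, accum
```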
| Returns |
|---|
| A mutable `Tensor`. Has the same type as `var`. |
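A hedged usage sketch, assuming TF2 eager execution: the ref-based raw op shown above expects reference-type variables, so this example calls the resource variant `tf.raw_ops.ResourceApplyProximalAdagrad` on variable handles, which applies the same in-place update; the variable values and hyperparameters are arbitrary illustration data.

```python
import tensorflow as tf

# Parameters and Adagrad accumulator to be updated in place.
var = tf.Variable([1.0, 2.0], dtype=tf.float32)
accum = tf.Variable([0.1, 0.1], dtype=tf.float32)

# One proximal Adagrad step via the resource variant of the op.
tf.raw_ops.ResourceApplyProximalAdagrad(
    var=var.handle,
    accum=accum.handle,
    lr=tf.constant(0.01),
    l1=tf.constant(0.001),
    l2=tf.constant(0.001),
    grad=tf.constant([0.5, -0.5]),
    use_locking=False,
)

print(var.numpy())    # parameters after one update
print(accum.numpy())  # accumulated squared gradients
```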