Sparsely update '*var' using the FOBOS algorithm with a fixed learning rate.
```python
tf.raw_ops.SparseApplyProximalGradientDescent(
    var, alpha, l1, l2, grad, indices, use_locking=False, name=None
)
```
That is, for each row for which we have a gradient, var is updated as follows:
\[prox_v = var - alpha * grad\]
\[var = sign(prox_v) / (1 + alpha * l2) * \max\{|prox_v| - alpha * l1, 0\}\]
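The update above can be sketched in NumPy to make the per-row proximal step concrete. The function name `sparse_proximal_sgd` is illustrative (it is not part of TensorFlow); the arithmetic follows the two formulas from this page:

```python
import numpy as np

def sparse_proximal_sgd(var, alpha, l1, l2, grad, indices):
    """Illustrative NumPy sketch of the sparse FOBOS update.

    For each row index i in `indices` (with matching gradient row in `grad`):
        prox_v = var[i] - alpha * grad_row
        var[i] = sign(prox_v) / (1 + alpha * l2) * max(|prox_v| - alpha * l1, 0)
    Rows not listed in `indices` are left unchanged.
    """
    for row, g in zip(indices, grad):
        prox_v = var[row] - alpha * g
        var[row] = (np.sign(prox_v) / (1.0 + alpha * l2)
                    * np.maximum(np.abs(prox_v) - alpha * l1, 0.0))
    return var

# With l1 = l2 = 0 the update reduces to plain SGD on the selected rows.
var = np.array([[1.0, 2.0], [3.0, 4.0]])
sparse_proximal_sgd(var, alpha=0.1, l1=0.0, l2=0.0,
                    grad=np.array([[1.0, 1.0]]), indices=[0])
```

Note that a nonzero `l1` shrinks small entries of `prox_v` all the way to zero (soft-thresholding), which is what makes the proximal step produce sparse weights.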
Returns: A mutable `Tensor`. Has the same type as `var`.