Update '*var' according to the RMSProp algorithm.
tf.raw_ops.ApplyRMSProp(
var,
ms,
mom,
lr,
rho,
momentum,
epsilon,
grad,
use_locking=False,
name=None
)
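A minimal usage sketch (not from this page): ApplyRMSProp itself takes TF1-style reference variables, so under TF2 eager execution the resource-variable counterpart tf.raw_ops.ResourceApplyRMSProp is the usual way to invoke the same kernel, called with variable handles. The shapes and hyperparameter values below are illustrative.

import tensorflow as tf

# Illustrative values; any matching float32 shapes will do.
var = tf.Variable([1.0, 2.0])      # parameters to update
ms = tf.Variable(tf.zeros([2]))    # running mean of squared gradients
mom = tf.Variable(tf.zeros([2]))   # momentum accumulator
grad = tf.constant([0.1, 0.2])     # gradient for this step

# Resource-variable variant of ApplyRMSProp; updates var, ms, mom in place.
tf.raw_ops.ResourceApplyRMSProp(
    var=var.handle, ms=ms.handle, mom=mom.handle,
    lr=0.01, rho=0.9, momentum=0.0, epsilon=1e-7,
    grad=grad, use_locking=False)

print(var.numpy())  # var has been updated in place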
Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in the sparse implementation, ms and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
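As a cross-check, here is a plain-NumPy restatement of the update rule above. This is a sketch of the math, not the op's actual kernel; the function name and default hyperparameters are illustrative.

import numpy as np

def rmsprop_step(var, ms, mom, grad,
                 lr=0.01, rho=0.9, momentum=0.0, epsilon=1e-7):
    """One dense RMSProp step, mirroring the formulas above."""
    ms = rho * ms + (1.0 - rho) * grad * grad        # ms <- rho*ms + (1-rho)*grad^2
    mom = momentum * mom + lr * grad / np.sqrt(ms + epsilon)
    var = var - mom                                  # var <- var - mom
    return var, ms, mom

# Example step on illustrative values:
var, ms, mom = rmsprop_step(np.array([1.0, 2.0]), np.zeros(2),
                            np.zeros(2), np.array([0.1, 0.2]))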
Returns
A mutable Tensor. Has the same type as var.