tf.raw_ops.SparseApplyRMSProp

Update '*var' according to the RMSProp algorithm.

Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero.

mean_square = decay * mean_square + (1 - decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

$$ms \leftarrow \rho \cdot ms_{t-1} + (1 - \rho) \cdot grad \cdot grad$$
$$mom \leftarrow momentum \cdot mom_{t-1} + lr \cdot grad / \sqrt{ms + \epsilon}$$
$$var \leftarrow var - mom$$
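
For illustration, here is a minimal NumPy sketch of the per-row update described by the equations and the sparsity note above; the helper name `sparse_apply_rms_prop` and the sample shapes are illustrative only and are not part of the TensorFlow API.

```python
import numpy as np

def sparse_apply_rms_prop(var, ms, mom, lr, rho, momentum, epsilon, grad, indices):
    """Sketch of the sparse RMSProp update: only the rows listed in
    `indices` are touched; ms and mom for all other rows stay unchanged."""
    for g, i in zip(grad, indices):
        ms[i] = rho * ms[i] + (1 - rho) * g * g
        mom[i] = momentum * mom[i] + lr * g / np.sqrt(ms[i] + epsilon)
        var[i] -= mom[i]
    return var, ms, mom

# Example: a 4-row variable where only rows 0 and 2 receive gradients.
var = np.ones((4, 3), dtype=np.float32)
ms = np.zeros_like(var)
mom = np.zeros_like(var)
grad = np.full((2, 3), 0.1, dtype=np.float32)  # gradients for the selected rows
indices = np.array([0, 2])                     # rows of var to update
sparse_apply_rms_prop(var, ms, mom, lr=0.01, rho=0.9, momentum=0.9,
                      epsilon=1e-7, grad=grad, indices=indices)
# Rows 1 and 3 of var, ms, and mom are left untouched, matching the note above.
```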

var: A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable().
ms: A mutable Tensor. Must have the same type as var. Should be from a Variable().
mom: A mutable Tensor. Must have the same type as var. Should be from a Variable().
lr: A Tensor. Must have the same type as var. Scaling factor. Must be a scalar.
rho: A