Update relevant entries in '*var' and '*accum' according to the momentum scheme. Set use_nesterov = True (useNesterov(true) in this API) if you want to use Nesterov momentum. That is, for the rows we have grad for, var and accum are updated as follows:

accum = accum * momentum - lr * grad
var += accum
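To make the scheme concrete, here is a minimal plain-Java sketch of the per-row update. The class, method, names, and array layout are illustrative assumptions, not part of this op's API; the Nesterov branch follows the useNesterov description further below.

```java
// Illustrative sketch only -- not the TensorFlow kernel.
public class SparseKerasMomentumSketch {
  /**
   * Applies one sparse Keras-momentum step in place.
   * grad[i] is the gradient row for var[indices[i]] and accum[indices[i]].
   */
  static void step(float[][] var, float[][] accum,
                   float lr, float momentum,
                   float[][] grad, int[] indices, boolean useNesterov) {
    for (int i = 0; i < indices.length; i++) {
      int row = indices[i];
      for (int j = 0; j < var[row].length; j++) {
        // accum = accum * momentum - lr * grad
        accum[row][j] = accum[row][j] * momentum - lr * grad[i][j];
        if (useNesterov) {
          // Nesterov variant: step along the lookahead direction,
          // so the stored var tracks var + momentum * accum.
          var[row][j] += momentum * accum[row][j] - lr * grad[i][j];
        } else {
          // var += accum
          var[row][j] += accum[row][j];
        }
      }
    }
  }
}
```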
Nested Classes
| class | ResourceSparseApplyKerasMomentum.Options | Optional attributes for ResourceSparseApplyKerasMomentum |
|---|---|---|
Public Methods
| static <T, U extends Number> ResourceSparseApplyKerasMomentum | create(Scope scope, Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> grad, Operand<U> indices, Operand<T> momentum, Options... options) |
|---|---|
| static ResourceSparseApplyKerasMomentum.Options | useLocking(Boolean useLocking) |
| static ResourceSparseApplyKerasMomentum.Options | useNesterov(Boolean useNesterov) |
Inherited Methods
Public Methods
public static ResourceSparseApplyKerasMomentum create(Scope scope, Operand<?> var, Operand<?> accum, Operand<T> lr, Operand<T> grad, Operand<U> indices, Operand<T> momentum, Options... options)
Factory method to create a class wrapping a new ResourceSparseApplyKerasMomentum operation.
Parameters
| scope | current scope |
|---|---|
| var | Should be from a Variable(). |
| accum | Should be from a Variable(). |
| lr | Learning rate. Must be a scalar. |
| grad | The gradient. |
| indices | A vector of indices into the first dimension of var and accum. |
| momentum | Momentum. Must be a scalar. |
| options | carries optional attribute values |
Returns
- a new instance of ResourceSparseApplyKerasMomentum
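A hedged usage fragment, not a complete program: only the create call and the two option factories are documented on this page; scope, varHandle, accumHandle, lr, grad, indices, and momentum are illustrative names assumed to have been built elsewhere in the graph (e.g. from variable handles and constants).

```java
import org.tensorflow.Operand;
import org.tensorflow.op.Scope;

// Assumed to exist already (illustrative names):
//   Scope scope; Operand<?> varHandle, accumHandle;
//   Operand<Float> lr, grad, momentum; Operand<Integer> indices;
ResourceSparseApplyKerasMomentum op = ResourceSparseApplyKerasMomentum.create(
    scope,
    varHandle,    // resource handle of the variable to update
    accumHandle,  // resource handle of the momentum accumulator
    lr,           // scalar learning rate
    grad,         // gradient rows, one per entry of indices
    indices,      // rows of var/accum to update
    momentum,     // scalar momentum
    ResourceSparseApplyKerasMomentum.useLocking(true),
    ResourceSparseApplyKerasMomentum.useNesterov(true));
```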
public static ResourceSparseApplyKerasMomentum.Options useLocking (Boolean useLocking)
Parameters
| useLocking | If `True`, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
|---|
public static ResourceSparseApplyKerasMomentum.Options useNesterov (Boolean useNesterov)
Parameters
| useNesterov | If `True`, the tensor passed to compute grad will be var + momentum * accum, so in the end the var you get back is actually var + momentum * accum. |
|---|
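To illustrate the effect on a single weight, a tiny numeric sketch, under the assumption that the Nesterov branch updates var with momentum * accum - lr * grad (as implied by the description above; all values are illustrative):

```java
// One step on a single weight (illustrative values).
float var = 1.0f, accum = 0.5f, lr = 0.1f, momentum = 0.9f, grad = 2.0f;

accum = accum * momentum - lr * grad; // 0.5*0.9 - 0.1*2.0 = 0.25
// useNesterov(true): step along the lookahead direction.
var += momentum * accum - lr * grad;  // 1.0 + 0.9*0.25 - 0.2 = 1.025
// useNesterov(false) would instead do: var += accum;  -> 1.25
```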