Training Ops
Summary
Classes | Description |
---|---|
tensorflow::ops::ApplyAdadelta | Update '*var' according to the adadelta scheme. |
tensorflow::ops::ApplyAdagrad | Update '*var' according to the adagrad scheme. |
tensorflow::ops::ApplyAdagradDA | Update '*var' according to the proximal adagrad scheme. |
tensorflow::ops::ApplyAdam | Update '*var' according to the Adam algorithm. |
tensorflow::ops::ApplyAddSign | Update '*var' according to the AddSign update. |
tensorflow::ops::ApplyCenteredRMSProp | Update '*var' according to the centered RMSProp algorithm. |
tensorflow::ops::ApplyFtrl | Update '*var' according to the Ftrl-proximal scheme. |
tensorflow::ops::ApplyFtrlV2 | Update '*var' according to the Ftrl-proximal scheme, with a separate L2 shrinkage term. |
tensorflow::ops::ApplyGradientDescent | Update '*var' by subtracting 'alpha' * 'delta' from it. |
tensorflow::ops::ApplyMomentum | Update '*var' according to the momentum scheme. |
tensorflow::ops::ApplyPowerSign | Update '*var' according to the PowerSign update. |
tensorflow::ops::ApplyProximalAdagrad | Update '*var' and '*accum' according to FOBOS with the Adagrad learning rate. |
tensorflow::ops::ApplyProximalGradientDescent | Update '*var' using the FOBOS algorithm with a fixed learning rate. |
tensorflow::ops::ApplyRMSProp | Update '*var' according to the RMSProp algorithm. |
tensorflow::ops::ResourceApplyAdadelta | Update '*var' according to the adadelta scheme. |
tensorflow::ops::ResourceApplyAdagrad | Update '*var' according to the adagrad scheme. |
tensorflow::ops::ResourceApplyAdagradDA | Update '*var' according to the proximal adagrad scheme. |
tensorflow::ops::ResourceApplyAdam | Update '*var' according to the Adam algorithm. |
tensorflow::ops::ResourceApplyAdamWithAmsgrad | Update '*var' according to the Adam algorithm (AMSGrad variant). |
tensorflow::ops::ResourceApplyAddSign | Update '*var' according to the AddSign update. |
tensorflow::ops::ResourceApplyCenteredRMSProp | Update '*var' according to the centered RMSProp algorithm. |
tensorflow::ops::ResourceApplyFtrl | Update '*var' according to the Ftrl-proximal scheme. |
tensorflow::ops::ResourceApplyFtrlV2 | Update '*var' according to the Ftrl-proximal scheme, with a separate L2 shrinkage term. |
tensorflow::ops::ResourceApplyGradientDescent | Update '*var' by subtracting 'alpha' * 'delta' from it. |
tensorflow::ops::ResourceApplyKerasMomentum | Update '*var' according to the momentum scheme. |
tensorflow::ops::ResourceApplyMomentum | Update '*var' according to the momentum scheme. |
tensorflow::ops::ResourceApplyPowerSign | Update '*var' according to the PowerSign update. |
tensorflow::ops::ResourceApplyProximalAdagrad | Update '*var' and '*accum' according to FOBOS with the Adagrad learning rate. |
tensorflow::ops::ResourceApplyProximalGradientDescent | Update '*var' using the FOBOS algorithm with a fixed learning rate. |
tensorflow::ops::ResourceApplyRMSProp | Update '*var' according to the RMSProp algorithm. |
tensorflow::ops::ResourceSparseApplyAdadelta | Update relevant entries in '*var' according to the adadelta scheme ('var' should come from a Variable()). |
tensorflow::ops::ResourceSparseApplyAdagrad | Update relevant entries in '*var' and '*accum' according to the adagrad scheme. |
tensorflow::ops::ResourceSparseApplyAdagradDA | Update entries in '*var' and '*accum' according to the proximal adagrad scheme. |
tensorflow::ops::ResourceSparseApplyCenteredRMSProp | Update '*var' according to the centered RMSProp algorithm. |
tensorflow::ops::ResourceSparseApplyFtrl | Update relevant entries in '*var' according to the Ftrl-proximal scheme. |
tensorflow::ops::ResourceSparseApplyFtrlV2 | Update relevant entries in '*var' according to the Ftrl-proximal scheme, with a separate L2 shrinkage term. |
tensorflow::ops::ResourceSparseApplyKerasMomentum | Update relevant entries in '*var' and '*accum' according to the momentum scheme. |
tensorflow::ops::ResourceSparseApplyMomentum | Update relevant entries in '*var' and '*accum' according to the momentum scheme. |
tensorflow::ops::ResourceSparseApplyProximalAdagrad | Sparse update of entries in '*var' and '*accum' according to the FOBOS algorithm. |
tensorflow::ops::ResourceSparseApplyProximalGradientDescent | Sparse update of '*var' using the FOBOS algorithm with a fixed learning rate. |
tensorflow::ops::ResourceSparseApplyRMSProp | Update '*var' according to the RMSProp algorithm. |
tensorflow::ops::SparseApplyAdadelta | Update relevant entries in '*var' according to the adadelta scheme ('var' should come from a Variable()). |
tensorflow::ops::SparseApplyAdagrad | Update relevant entries in '*var' and '*accum' according to the adagrad scheme. |
tensorflow::ops::SparseApplyAdagradDA | Update entries in '*var' and '*accum' according to the proximal adagrad scheme. |
tensorflow::ops::SparseApplyCenteredRMSProp | Update '*var' according to the centered RMSProp algorithm. |
tensorflow::ops::SparseApplyFtrl | Update relevant entries in '*var' according to the Ftrl-proximal scheme. |
tensorflow::ops::SparseApplyFtrlV2 | Update relevant entries in '*var' according to the Ftrl-proximal scheme, with a separate L2 shrinkage term. |
tensorflow::ops::SparseApplyMomentum | Update relevant entries in '*var' and '*accum' according to the momentum scheme. |
tensorflow::ops::SparseApplyProximalAdagrad | Sparse update of entries in '*var' and '*accum' according to the FOBOS algorithm. |
tensorflow::ops::SparseApplyProximalGradientDescent | Sparse update of '*var' using the FOBOS algorithm with a fixed learning rate. |
tensorflow::ops::SparseApplyRMSProp | Update '*var' according to the RMSProp algorithm. |
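
The classes above are graph ops, not optimizers: each one is added to a graph and then run against an existing variable. The following is a minimal sketch, not part of the original reference, showing how one of them, tensorflow::ops::ApplyGradientDescent, is typically wired up and executed with a ClientSession; the variable shape, learning rate, and gradient values are illustrative placeholders.

```cpp
// Minimal sketch: apply one gradient-descent step to a variable.
// var <- var - alpha * delta, matching the ApplyGradientDescent summary above.
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"  // Variable, Assign, Const, ...
#include "tensorflow/cc/ops/training_ops.h"  // ApplyGradientDescent and friends
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;       // brevity for the sketch
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // 'var' should come from a Variable(), as the summaries above require.
  auto var = Variable(root, {2, 2}, DT_FLOAT);
  auto init = Assign(root, var, Const(root, {{1.f, 2.f}, {3.f, 4.f}}));

  // Learning rate and gradient; the values here are placeholders.
  auto alpha = Const(root, 0.1f);
  auto delta = Const(root, {{0.5f, 0.5f}, {0.5f, 0.5f}});

  // The training op returns the updated ref, so fetching it runs the update.
  auto train = ApplyGradientDescent(root, var, alpha, delta);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({init}, &outputs));   // initialize the variable
  TF_CHECK_OK(session.Run({train}, &outputs));  // one descent step
  // outputs[0] now holds var - 0.1 * delta.
  return 0;
}
```

The Resource* variants take a resource handle instead of a ref variable, and the Sparse* variants additionally take an 'indices' input so that only the selected rows of '*var' (and its accumulators) are updated.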