Module: tfa.optimizers.weight_decay_optimizers


Base class to make optimizers weight decay ready.


class AdamW: Optimizer that implements the Adam algorithm with weight decay.

class DecoupledWeightDecayExtension: Mixin class that extends optimizers with decoupled weight decay.

class SGDW: Optimizer that implements the SGD algorithm with momentum and decoupled weight decay.
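For intuition, "decoupled" means the weight decay is applied directly to the weights, separately from the gradient-based update, rather than being folded into the loss as an L2 penalty. A minimal pure-Python sketch of one SGDW-style step (illustrative only, not the tfa implementation; all names here are hypothetical):

```python
def sgdw_step(w, grad, velocity, lr=0.1, momentum=0.9, weight_decay=0.01):
    """One SGDW-style update on a scalar weight (hypothetical sketch).

    The momentum update uses only the gradient; the decay term
    `lr * weight_decay * w` is applied to the weight directly,
    decoupled from the gradient path.
    """
    velocity = momentum * velocity + grad            # accumulate momentum from the gradient only
    w = w - lr * velocity - lr * weight_decay * w    # gradient step + decoupled decay
    return w, velocity

w, v = 1.0, 0.0
w, v = sgdw_step(w, grad=0.5, velocity=v)
# w = 1.0 - 0.1*0.5 - 0.1*0.01*1.0 = 0.949
```

With a coupled L2 penalty, the decay term would instead pass through the momentum accumulator; keeping it out of `velocity` is what distinguishes SGDW and AdamW.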


extend_with_decoupled_weight_decay(...): Factory function returning an optimizer class with decoupled weight decay.
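The factory pattern behind this function can be sketched in plain Python: a mixin supplies the decoupled decay step, and the factory builds a new class from the mixin plus any base optimizer. This is a hypothetical sketch of the pattern, not the tfa source, and the names (`DecayMixin`, `extend_with_decay`, `SGD.step`) are invented for illustration:

```python
class DecayMixin:
    """Hypothetical mixin adding decoupled weight decay to an optimizer."""

    def __init__(self, *args, weight_decay=0.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.weight_decay = weight_decay

    def step(self, w, grad):
        w = super().step(w, grad)               # base optimizer's update
        return w - self.lr * self.weight_decay * w  # then decoupled decay

class SGD:
    """Toy base optimizer: plain gradient descent on a scalar weight."""

    def __init__(self, lr=0.1):
        self.lr = lr

    def step(self, w, grad):
        return w - self.lr * grad

def extend_with_decay(base):
    """Factory: build a new optimizer class with decoupled weight decay."""
    return type(base.__name__ + "W", (DecayMixin, base), {})

SGDW = extend_with_decay(SGD)
opt = SGDW(lr=0.1, weight_decay=0.01)
w = opt.step(1.0, 0.5)   # 0.95 after the gradient step, then decayed
```

The real factory works on `tf.keras` optimizer classes and handles variable slots and serialization; the sketch only shows why a mixin-based factory lets one decay implementation extend many optimizers.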

Type Aliases

FloatTensorLike: Union of types that can be treated as a float tensor (e.g. `tf.Tensor`, `float`, NumPy float types).