Module: tfa.optimizers

Additional optimizers that conform to Keras API.

Modules

conditional_gradient module: Conditional Gradient method for TensorFlow.

lazy_adam module: Variant of the Adam optimizer that handles sparse updates more efficiently.

lookahead module: Lookahead mechanism for extending existing optimizers.

moving_average module: Optimizer that computes a moving average of the variables.

rectified_adam module: Rectified Adam (RAdam) optimizer.

weight_decay_optimizers module: Base class to make optimizers weight decay ready.
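
The optimizer classes listed in the next section are exported at the top level of tfa.optimizers, and each submodule above can also be imported directly. A minimal sketch, assuming TensorFlow Addons is installed and importable as tensorflow_addons:

import tensorflow_addons as tfa
from tensorflow_addons.optimizers import lazy_adam

# Both access paths refer to the same optimizer class.
opt = tfa.optimizers.LazyAdam(learning_rate=1e-3)
opt_from_submodule = lazy_adam.LazyAdam(learning_rate=1e-3)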

Classes

class AdamW: Optimizer that implements the Adam algorithm with weight decay.

class ConditionalGradient: Optimizer that implements the Conditional Gradient method.

class LazyAdam: Variant of the Adam optimizer that handles sparse updates more efficiently.

class Lookahead: Extends existing optimizers with the Lookahead mechanism (see the usage sketch after this list).

class MovingAverage: Optimizer that computes a moving average of the variables.

class RectifiedAdam: Variant of the Adam optimizer whose adaptive learning rate is rectified to have a consistent variance.

class SGDW: Optimizer that implements the Momentum algorithm with weight decay.
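
These optimizers plug into the standard Keras training API. As a usage sketch, Lookahead can wrap RectifiedAdam; the model below is illustrative only:

import tensorflow as tf
import tensorflow_addons as tfa

# Wrap RectifiedAdam with the Lookahead mechanism.
radam = tfa.optimizers.RectifiedAdam(learning_rate=1e-3)
optimizer = tfa.optimizers.Lookahead(radam, sync_period=6, slow_step_size=0.5)

# Any Keras model can then be compiled with the wrapped optimizer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)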

Functions

extend_with_decoupled_weight_decay(...): Factory function returning an optimizer class with decoupled weight decay.
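
A minimal sketch of the factory, assuming any tf.keras optimizer class can serve as the base (Adagrad is chosen here only for illustration):

import tensorflow as tf
import tensorflow_addons as tfa

# Build an Adagrad variant with decoupled weight decay.
AdagradW = tfa.optimizers.extend_with_decoupled_weight_decay(
    tf.keras.optimizers.Adagrad)

# The returned class takes weight_decay in addition to the base arguments.
optimizer = AdagradW(weight_decay=1e-4, learning_rate=0.01)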