TensorFlow Addons Optimizers: LazyAdam

Overview

This notebook demonstrates how to use the LazyAdam optimizer from the Addons package.

LazyAdam

LazyAdam is a variant of the Adam optimizer that handles sparse updates more efficiently. The original Adam algorithm maintains two moving-average accumulators for each trainable variable, and those accumulators are updated at every step. LazyAdam handles gradient updates for sparse variables more lazily: it only updates the moving-average accumulators for the sparse variable indices that appear in the current batch, rather than updating the accumulators for all indices. Compared with the original Adam optimizer, this can give a large improvement in model training throughput for some applications. However, its semantics differ slightly from the original Adam algorithm, and it may lead to different empirical results.
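The sparse updates described above typically come from lookup-style layers such as tf.keras.layers.Embedding, where each batch touches only a few rows of the embedding table. The following is a minimal, self-contained sketch (not part of the original tutorial; the vocabulary size, sequence length, and dummy data are illustrative assumptions) showing the kind of model where LazyAdam pays off:

# Illustrative sketch: an Embedding layer produces sparse gradients, so LazyAdam
# only updates the Adam accumulators for the embedding rows used in each batch.
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

vocab_size = 10000   # assumed vocabulary size, for illustration only
embed_dim = 16
seq_len = 20

sparse_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim, input_length=seq_len),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

sparse_model.compile(
    optimizer=tfa.optimizers.LazyAdam(learning_rate=0.001),
    loss='binary_crossentropy',
    metrics=['accuracy'])

# Dummy integer sequences and binary labels, just to show the API shape
x_dummy = np.random.randint(0, vocab_size, size=(256, seq_len))
y_dummy = np.random.randint(0, 2, size=(256, 1))
sparse_model.fit(x_dummy, y_dummy, epochs=1, batch_size=32, verbose=0)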

Setup

pip install -q -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
# Hyperparameters
batch_size=64
epochs=10

Build the Model

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
    tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
    tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])

Prepare the Data

# Load MNIST dataset as NumPy arrays
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
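
The arrays above are passed to model.fit directly; equivalently, you could wrap them in a tf.data.Dataset if you prefer a streaming input pipeline. This is a small sketch, not part of the original notebook:

# Optional: an equivalent tf.data input pipeline for the same arrays
train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(buffer_size=1024)
            .batch(batch_size))
# model.fit(train_ds, epochs=epochs) could then replace the array-based call below.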

Train and Evaluate

Simply replace a typical Keras optimizer with the TFA optimizer when compiling the model:

# Compile the model
model.compile(
    optimizer=tfa.optimizers.LazyAdam(0.001),  # Utilize TFA optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'])
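
LazyAdam is built as a drop-in replacement for Adam and accepts the same constructor hyperparameters. Spelling them out makes the configuration explicit; the values below are the usual Keras Adam defaults:

# Equivalent construction with the Adam hyperparameters written out
optimizer = tfa.optimizers.LazyAdam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7)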

# Train the network
history = model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=epochs)

Epoch 1/10
938/938 [==============================] - 2s 2ms/step - loss: 0.3284 - accuracy: 0.9047
Epoch 2/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1462 - accuracy: 0.9559
Epoch 3/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1060 - accuracy: 0.9680
Epoch 4/10
938/938 [==============================] - 2s 2ms/step - loss: 0.0825 - accuracy: 0.9741
Epoch 5/10
938/938 [==============================] - 2s 2ms/step - loss: 0.0670 - accuracy: 0.9792
Epoch 6/10
938/938 [==============================] - 2s 2ms/step - loss: 0.0574 - accuracy: 0.9822
Epoch 7/10
938/938 [==============================] - 2s 2ms/step - loss: 0.0493 - accuracy: 0.9849
Epoch 8/10
938/938 [==============================] - 2s 2ms/step - loss: 0.0422 - accuracy: 0.9866
Epoch 9/10
938/938 [==============================] - 2s 2ms/step - loss: 0.0364 - accuracy: 0.9885
Epoch 10/10
938/938 [==============================] - 2s 2ms/step - loss: 0.0324 - accuracy: 0.9894

# Evaluate the network
print('Evaluate on test data:')
results = model.evaluate(x_test, y_test, batch_size=128, verbose=2)
print('Test loss = {0}, Test acc: {1}'.format(results[0], results[1]))
Evaluate on test data:
79/79 - 0s - loss: 0.0861 - accuracy: 0.9752
Test loss = 0.08609830588102341, Test acc: 0.9751999974250793
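
If you prefer the metrics keyed by name rather than by position, recent versions of Keras let evaluate return a dictionary. A small sketch, assuming a TensorFlow version where the return_dict argument is available:

# Optional: get the metrics as a {name: value} dict instead of a list
results_dict = model.evaluate(x_test, y_test, batch_size=128,
                              verbose=2, return_dict=True)
print('Test loss = {loss}, Test acc: {accuracy}'.format(**results_dict))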