Overview
This is an end-to-end example showing the usage of the sparsity preserving clustering API, part of the TensorFlow Model Optimization Toolkit's collaborative optimization pipeline.
Other pages
For an introduction to the pipeline and other available techniques, see the collaborative optimization overview page.
Contents
In the tutorial, you will:
- Train a tf.keras model for the MNIST dataset from scratch.
- Fine-tune the model with pruning, check the accuracy, and observe that the model was successfully pruned.
- Apply weight clustering to the pruned model and observe the loss of sparsity.
- Apply sparsity preserving clustering on the pruned model and observe that the sparsity applied earlier has been preserved.
- Generate a TFLite model and check that the accuracy has been preserved in the pruned clustered model.
- Compare the sizes of the different models to observe the compression benefits of applying sparsity followed by the collaborative optimization technique of sparsity preserving clustering.
Setup
You can run this Jupyter notebook in a local virtualenv or in Colab. For details on setting up dependencies, please refer to the installation guide.
pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import zipfile
import os
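As a quick sanity check (an addition to the original notebook), you can print the installed package versions to confirm the environment is set up:
import tensorflow as tf
import tensorflow_model_optimization as tfmot
# Both packages expose a __version__ attribute.
print('TensorFlow version:', tf.__version__)
print('TF Model Optimization version:', tfmot.__version__)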
Train a tf.keras model for MNIST to be pruned and clustered
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3),
activation=tf.nn.relu),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
Epoch 1/10
1688/1688 [==============================] - 8s 4ms/step - loss: 0.3112 - accuracy: 0.9121 - val_loss: 0.1266 - val_accuracy: 0.9673
Epoch 2/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1215 - accuracy: 0.9662 - val_loss: 0.0795 - val_accuracy: 0.9783
Epoch 3/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0845 - accuracy: 0.9756 - val_loss: 0.0654 - val_accuracy: 0.9820
Epoch 4/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0685 - accuracy: 0.9802 - val_loss: 0.0601 - val_accuracy: 0.9825
Epoch 5/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0586 - accuracy: 0.9823 - val_loss: 0.0592 - val_accuracy: 0.9833
Epoch 6/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0522 - accuracy: 0.9845 - val_loss: 0.0532 - val_accuracy: 0.9858
Epoch 7/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0468 - accuracy: 0.9860 - val_loss: 0.0571 - val_accuracy: 0.9847
Epoch 8/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0423 - accuracy: 0.9873 - val_loss: 0.0543 - val_accuracy: 0.9842
Epoch 9/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0383 - accuracy: 0.9883 - val_loss: 0.0535 - val_accuracy: 0.9860
Epoch 10/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0346 - accuracy: 0.9895 - val_loss: 0.0574 - val_accuracy: 0.9848
<tensorflow.python.keras.callbacks.History at 0x7f8e9c4170d0>
Evaluate the baseline model and save it for later usage
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Baseline test accuracy: 0.9812999963760376
Saving model to:  /tmp/tmpfua4ka97.h5
Prune and fine-tune the model to 50% sparsity
Apply the prune_low_magnitude() API to prune the whole pre-trained model, producing the model that will be clustered in the next step. For how best to use the API to achieve the best compression rate while maintaining your target accuracy, refer to the pruning comprehensive guide.
Define the model and apply the sparsity API
Note that the pre-trained model is used.
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
}
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep()
]
pruned_model = prune_low_magnitude(model, **pruning_params)
# Use smaller learning rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
pruned_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
pruned_model.summary()
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:2191: UserWarning: `layer.add_variable` is deprecated and will be removed in a future version. Please use `layer.add_weight` method instead.
  warnings.warn('`layer.add_variable` is deprecated and '
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
prune_low_magnitude_reshape  (None, 28, 28, 1)         1
_________________________________________________________________
prune_low_magnitude_conv2d ( (None, 26, 26, 12)        230
_________________________________________________________________
prune_low_magnitude_max_pool (None, 13, 13, 12)        1
_________________________________________________________________
prune_low_magnitude_flatten  (None, 2028)              1
_________________________________________________________________
prune_low_magnitude_dense (P (None, 10)                40572
=================================================================
Total params: 40,805
Trainable params: 20,410
Non-trainable params: 20,395
_________________________________________________________________
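Note that the reported parameter count roughly doubles: for each weight, the pruning wrapper adds non-trainable variables (such as the pruning mask and threshold) that are used during training and removed later by strip_pruning.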
Fine-tune the model, check sparsity, and evaluate the accuracy against baseline
Fine-tune the model with pruning for 3 epochs.
# Fine-tune model
pruned_model.fit(
train_images,
train_labels,
epochs=3,
validation_split=0.1,
callbacks=callbacks)
Epoch 1/3
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:5049: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating:
The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.
1688/1688 [==============================] - 9s 5ms/step - loss: 0.0931 - accuracy: 0.9696 - val_loss: 0.0878 - val_accuracy: 0.9755
Epoch 2/3
1688/1688 [==============================] - 8s 4ms/step - loss: 0.0597 - accuracy: 0.9813 - val_loss: 0.0732 - val_accuracy: 0.9802
Epoch 3/3
1688/1688 [==============================] - 8s 4ms/step - loss: 0.0499 - accuracy: 0.9849 - val_loss: 0.0689 - val_accuracy: 0.9822
<tensorflow.python.keras.callbacks.History at 0x7f8e9c3566d0>
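The heading above also calls for comparing accuracy against the baseline; a minimal evaluation sketch (an addition, not part of the original notebook) follows:
# Evaluate the fine-tuned pruned model on the test set and compare it
# with the baseline accuracy recorded earlier.
_, pruned_model_accuracy = pruned_model.evaluate(test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Pruned test accuracy:', pruned_model_accuracy)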
Define a helper function to calculate and print the sparsity of the model.
def print_model_weights_sparsity(model):
for layer in model.layers:
if isinstance(layer, tf.keras.layers.Wrapper):
weights = layer.trainable_weights
else:
weights = layer.weights
for weight in weights:
if "kernel" not in weight.name or "centroid" in weight.name:
continue
weight_size = weight.numpy().size
zero_num = np.count_nonzero(weight == 0)
print(
f"{weight.name}: {zero_num/weight_size:.2%} sparsity ",
f"({zero_num}/{weight_size})",
)
Check that the model kernels were correctly pruned. We need to strip the pruning wrapper first. We also create a deep copy of the model, so that clustering and sparsity preserving clustering can each be applied to an identical pruned model in the next step.
stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
print_model_weights_sparsity(stripped_pruned_model)
stripped_pruned_model_copy = tf.keras.models.clone_model(stripped_pruned_model)
stripped_pruned_model_copy.set_weights(stripped_pruned_model.get_weights())
conv2d/kernel:0: 50.00% sparsity (54/108)
dense/kernel:0: 50.00% sparsity (10140/20280)
Apply clustering and sparsity preserving clustering and check the effect on model sparsity in both cases
Next, we apply both clustering and sparsity preserving clustering to the pruned model and observe that only the latter preserves the sparsity of the pruned model. Note that we stripped the pruning wrappers from the pruned model with tfmot.sparsity.keras.strip_pruning before applying the clustering API.
# Clustering
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 8,
'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS
}
clustered_model = cluster_weights(stripped_pruned_model, **clustering_params)
clustered_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train clustering model:')
clustered_model.fit(train_images, train_labels, epochs=3, validation_split=0.1)
stripped_pruned_model.save("stripped_pruned_model_clustered.h5")
# Sparsity preserving clustering
from tensorflow_model_optimization.python.core.clustering.keras.experimental import (
cluster,
)
cluster_weights = cluster.cluster_weights
clustering_params = {
'number_of_clusters': 8,
'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS,
'preserve_sparsity': True
}
sparsity_clustered_model = cluster_weights(stripped_pruned_model_copy, **clustering_params)
sparsity_clustered_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train sparsity preserving clustering model:')
sparsity_clustered_model.fit(train_images, train_labels, epochs=3, validation_split=0.1)
Train clustering model:
Epoch 1/3
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0404 - accuracy: 0.9873 - val_loss: 0.0608 - val_accuracy: 0.9845
Epoch 2/3
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0386 - accuracy: 0.9879 - val_loss: 0.0599 - val_accuracy: 0.9842
Epoch 3/3
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0394 - accuracy: 0.9873 - val_loss: 0.0599 - val_accuracy: 0.9842
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
Train sparsity preserving clustering model:
Epoch 1/3
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0411 - accuracy: 0.9873 - val_loss: 0.0570 - val_accuracy: 0.9840
Epoch 2/3
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0409 - accuracy: 0.9865 - val_loss: 0.0559 - val_accuracy: 0.9835
Epoch 3/3
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0397 - accuracy: 0.9872 - val_loss: 0.0582 - val_accuracy: 0.9853
<tensorflow.python.keras.callbacks.History at 0x7f8e2c447bd0>
Check sparsity for both models.
print("Clustered Model sparsity:\n")
print_model_weights_sparsity(clustered_model)
print("\nSparsity preserved clustered Model sparsity:\n")
print_model_weights_sparsity(sparsity_clustered_model)
Clustered Model sparsity:

conv2d/kernel:0: 0.00% sparsity (0/108)
dense/kernel:0: 0.98% sparsity (198/20280)

Sparsity preserved clustered Model sparsity:

conv2d/kernel:0: 50.00% sparsity (54/108)
dense/kernel:0: 50.00% sparsity (10140/20280)
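As a complementary check (an addition to the original notebook), you can also count the unique kernel values to confirm that each kernel was clustered down to at most the 8 clusters requested; note that with preserve_sparsity=True one of those clusters is pinned at zero:
# Strip the clustering wrappers into temporary copies and count unique
# kernel values per layer; expect at most 8 (the number_of_clusters used).
for name, m in [('clustered', clustered_model),
                ('sparsity preserving clustered', sparsity_clustered_model)]:
    stripped = tfmot.clustering.keras.strip_clustering(m)
    for layer in stripped.layers:
        for weight in layer.weights:
            if 'kernel' in weight.name:
                print(f"{name} {weight.name}: {np.unique(weight.numpy()).size} unique values")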
Create 1.6x smaller models from clustering
Define a helper function to get the size of the zipped model file.
def get_gzipped_model_size(file):
# Returns the size of the zipped model file in kilobytes.
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)/1000
# Clustered model
clustered_model_file = 'clustered_model.h5'
# Save the model.
clustered_model.save(clustered_model_file)
# Sparsity preserving clustered model
sparsity_clustered_model_file = 'sparsity_clustered_model.h5'
# Save the model.
sparsity_clustered_model.save(sparsity_clustered_model_file)
print("Clustered Model size: ", get_gzipped_model_size(clustered_model_file), ' KB')
print("Sparsity preserved clustered Model size: ", get_gzipped_model_size(sparsity_clustered_model_file), ' KB')
Clustered Model size:  245.456  KB
Sparsity preserved clustered Model size:  154.102  KB
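The roughly 1.6x figure in the section heading follows directly from these two sizes; a one-line check (an addition to the notebook) using the helper defined above:
# Compression ratio between the zipped clustered and sparsity preserving
# clustered models (~245 KB / ~154 KB, approximately 1.6x).
ratio = get_gzipped_model_size(clustered_model_file) / get_gzipped_model_size(sparsity_clustered_model_file)
print(f"Sparsity preserving clustering yields a {ratio:.1f}x smaller zipped model")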
Create a TFLite model from combining sparsity preserving weight clustering and post-training quantization
Strip clustering wrappers and convert to TFLite.
stripped_sparsity_clustered_model = tfmot.clustering.keras.strip_clustering(sparsity_clustered_model)
converter = tf.lite.TFLiteConverter.from_keras_model(stripped_sparsity_clustered_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
sparsity_clustered_quant_model = converter.convert()
_, pruned_and_clustered_tflite_file = tempfile.mkstemp('.tflite')
with open(pruned_and_clustered_tflite_file, 'wb') as f:
f.write(sparsity_clustered_quant_model)
print("Sparsity preserved clustered Model size: ", get_gzipped_model_size(sparsity_clustered_model_file), ' KB')
print("Sparsity preserved clustered and quantized TFLite model size:",
get_gzipped_model_size(pruned_and_clustered_tflite_file), ' KB')
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
INFO:tensorflow:Assets written to: /tmp/tmp8wy5vqee/assets
Sparsity preserved clustered Model size:  154.102  KB
Sparsity preserved clustered and quantized TFLite model size: 7.6  KB
See the persistence of accuracy from TF to TFLite
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print(f"Evaluated on {i} results so far.")
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
You evaluate the model, which has been pruned, clustered, and quantized, and verify that the accuracy from TensorFlow persists in the TFLite backend.
# Keras model evaluation
stripped_sparsity_clustered_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
_, sparsity_clustered_keras_accuracy = stripped_sparsity_clustered_model.evaluate(
test_images, test_labels, verbose=0)
# TFLite model evaluation
interpreter = tf.lite.Interpreter(pruned_and_clustered_tflite_file)
interpreter.allocate_tensors()
sparsity_clustered_tflite_accuracy = eval_model(interpreter)
print('Pruned, clustered and quantized Keras model accuracy:', sparsity_clustered_keras_accuracy)
print('Pruned, clustered and quantized TFLite model accuracy:', sparsity_clustered_tflite_accuracy)
Evaluated on 0 results so far.
Evaluated on 1000 results so far.
Evaluated on 2000 results so far.
Evaluated on 3000 results so far.
Evaluated on 4000 results so far.
Evaluated on 5000 results so far.
Evaluated on 6000 results so far.
Evaluated on 7000 results so far.
Evaluated on 8000 results so far.
Evaluated on 9000 results so far.

Pruned, clustered and quantized Keras model accuracy: 0.978600025177002
Pruned, clustered and quantized TFLite model accuracy: 0.9785
Conclusion
In this tutorial, you learned how to create a model, prune it using the prune_low_magnitude() API, and apply sparsity preserving clustering to preserve sparsity while clustering the weights. The sparsity preserving clustered model was compared to a plainly clustered one to show that sparsity is preserved in the former and lost in the latter. The pruned and clustered model was then converted to TFLite to show the compression benefits of chaining pruning with sparsity preserving clustering, and finally the TFLite model was evaluated to ensure that the accuracy persists in the TFLite backend.