Overview
This is an end-to-end example showing the usage of the pruning-preserving quantization aware training (PQAT) API, part of the TensorFlow Model Optimization Toolkit's collaborative optimization pipeline.
Other pages
For an introduction to the pipeline and other available techniques, see the collaborative optimization overview page.
Contents
In this tutorial, you will:
- Train a tf.keras model for the MNIST dataset from scratch.
- Fine-tune the model with pruning, using the sparsity API, and see the accuracy.
- Apply QAT and observe the loss of sparsity.
- Apply PQAT and observe that the sparsity applied earlier has been preserved.
- Generate a TFLite model and observe the effects of applying PQAT on it.
- Compare the achieved PQAT model accuracy with a model quantized using post-training quantization.
Setup
You can run this Jupyter Notebook in your local virtualenv or Colab. For details on setting up dependencies, please refer to the installation guide.
pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import zipfile
import os
2022-12-14 12:24:30.518309: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-12-14 12:24:30.518430: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-12-14 12:24:30.518440: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
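Optionally, you can confirm the installation by printing the package versions; the exact version strings will vary by environment.
# Optional sanity check: confirm the packages installed above import cleanly.
# Exact version strings depend on your environment.
import tensorflow_model_optimization as tfmot
print('TensorFlow version:', tf.__version__)
print('TFMOT version:', tfmot.__version__)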
Train a tf.keras model for MNIST without pruning
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
model = tf.keras.Sequential([
  tf.keras.layers.InputLayer(input_shape=(28, 28)),
  tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
  tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3),
                         activation=tf.nn.relu),
  tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
    train_images,
    train_labels,
    validation_split=0.1,
    epochs=10
)
2022-12-14 12:24:32.114487: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:267] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Epoch 1/10
1688/1688 [==============================] - 8s 5ms/step - loss: 0.3346 - accuracy: 0.9047 - val_loss: 0.1567 - val_accuracy: 0.9558
Epoch 2/10
1688/1688 [==============================] - 8s 4ms/step - loss: 0.1489 - accuracy: 0.9565 - val_loss: 0.1002 - val_accuracy: 0.9737
Epoch 3/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0982 - accuracy: 0.9727 - val_loss: 0.0715 - val_accuracy: 0.9810
Epoch 4/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0763 - accuracy: 0.9776 - val_loss: 0.0684 - val_accuracy: 0.9813
Epoch 5/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0641 - accuracy: 0.9808 - val_loss: 0.0589 - val_accuracy: 0.9852
Epoch 6/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0550 - accuracy: 0.9835 - val_loss: 0.0576 - val_accuracy: 0.9843
Epoch 7/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0489 - accuracy: 0.9856 - val_loss: 0.0570 - val_accuracy: 0.9847
Epoch 8/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0439 - accuracy: 0.9869 - val_loss: 0.0567 - val_accuracy: 0.9833
Epoch 9/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0395 - accuracy: 0.9883 - val_loss: 0.0557 - val_accuracy: 0.9845
Epoch 10/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0361 - accuracy: 0.9894 - val_loss: 0.0545 - val_accuracy: 0.9862
<keras.callbacks.History at 0x7fe801d579a0>
Evaluate the baseline model and save it for later usage
_, baseline_model_accuracy = model.evaluate(
    test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Baseline test accuracy: 0.9818000197410583
Saving model to:  /tmpfs/tmp/tmpae524eu3.h5
Prune and fine-tune the model to 50% sparsity
Apply the prune_low_magnitude() API to prune the whole pre-trained model, and observe its effectiveness in reducing the model size when zip compression is applied, while maintaining accuracy. For how to best use the API to achieve the best compression rate while maintaining your target accuracy, refer to the pruning comprehensive guide.
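The example below uses a ConstantSparsity schedule. If you want sparsity to ramp up gradually during fine-tuning instead, tfmot also provides a PolynomialDecay schedule; a minimal sketch follows, in which the variable name pruning_params_alternative and the step values are purely illustrative.
import tensorflow_model_optimization as tfmot

# Illustrative alternative: ramp sparsity from 0% to 50% between steps 0 and
# 1000 of fine-tuning. The step values here are examples, not a recommendation.
pruning_params_alternative = {
  'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(
      initial_sparsity=0.0, final_sparsity=0.5,
      begin_step=0, end_step=1000)
}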
Define the model and apply the sparsity API
The model needs to be pre-trained before using the sparsity API.
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
pruning_params = {
  'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
}

# UpdatePruningStep is required during fine-tuning: it advances the pruning
# step counter so the schedule can apply and maintain the sparsity mask.
callbacks = [
  tfmot.sparsity.keras.UpdatePruningStep()
]
pruned_model = prune_low_magnitude(model, **pruning_params)
# Use smaller learning rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
pruned_model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=opt,
    metrics=['accuracy'])
pruned_model.summary()
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089
Model: "sequential"
_________________________________________________________________
 Layer (type)                        Output Shape             Param #
=================================================================
 prune_low_magnitude_reshape         (None, 28, 28, 1)        1
 (PruneLowMagnitude)

 prune_low_magnitude_conv2d          (None, 26, 26, 12)       230
 (PruneLowMagnitude)

 prune_low_magnitude_max_pooling2d   (None, 13, 13, 12)       1
 (PruneLowMagnitude)

 prune_low_magnitude_flatten         (None, 2028)             1
 (PruneLowMagnitude)

 prune_low_magnitude_dense           (None, 10)               40572
 (PruneLowMagnitude)

=================================================================
Total params: 40,805
Trainable params: 20,410
Non-trainable params: 20,395
_________________________________________________________________
Fine-tune the model and evaluate the accuracy against baseline
Fine-tune the model with pruning for 3 epochs.
# Fine-tune model
pruned_model.fit(
    train_images,
    train_labels,
    epochs=3,
    validation_split=0.1,
    callbacks=callbacks)
Epoch 1/3
1688/1688 [==============================] - 10s 5ms/step - loss: 0.1077 - accuracy: 0.9636 - val_loss: 0.0971 - val_accuracy: 0.9718
Epoch 2/3
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0713 - accuracy: 0.9781 - val_loss: 0.0826 - val_accuracy: 0.9780
Epoch 3/3
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0614 - accuracy: 0.9816 - val_loss: 0.0782 - val_accuracy: 0.9785
<keras.callbacks.History at 0x7fe7d42895e0>
Define a helper function to calculate and print the sparsity of the model.
def print_model_weights_sparsity(model):
  for layer in model.layers:
    if isinstance(layer, tf.keras.layers.Wrapper):
      weights = layer.trainable_weights
    else:
      weights = layer.weights
    for weight in weights:
      # Ignore auxiliary quantization weights.
      if "quantize_layer" in weight.name:
        continue
      weight_size = weight.numpy().size
      zero_num = np.count_nonzero(weight == 0)
      print(
          f"{weight.name}: {zero_num/weight_size:.2%} sparsity ",
          f"({zero_num}/{weight_size})",
      )
Check that the model was correctly pruned. You need to strip the pruning wrappers first.
stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
print_model_weights_sparsity(stripped_pruned_model)
conv2d/kernel:0: 50.00% sparsity  (54/108)
conv2d/bias:0: 0.00% sparsity  (0/12)
dense/kernel:0: 50.00% sparsity  (10140/20280)
dense/bias:0: 0.00% sparsity  (0/10)
For this example, there is minimal loss in test accuracy after pruning, compared to the baseline.
_, pruned_model_accuracy = pruned_model.evaluate(
    test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Pruned test accuracy:', pruned_model_accuracy)
Baseline test accuracy: 0.9818000197410583
Pruned test accuracy: 0.9764000177383423
Apply QAT and PQAT and check effect on model sparsity in both cases
Next, you apply both QAT and pruning-preserving QAT (PQAT) to the pruned model and observe that PQAT preserves sparsity while plain QAT does not. Note that the pruning wrappers were stripped from the pruned model with tfmot.sparsity.keras.strip_pruning before applying the PQAT API.
# QAT
qat_model = tfmot.quantization.keras.quantize_model(stripped_pruned_model)
qat_model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
print('Train qat model:')
qat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)

# PQAT
quant_aware_annotate_model = tfmot.quantization.keras.quantize_annotate_model(
    stripped_pruned_model)
pqat_model = tfmot.quantization.keras.quantize_apply(
    quant_aware_annotate_model,
    tfmot.experimental.combine.Default8BitPrunePreserveQuantizeScheme())
pqat_model.compile(optimizer='adam',
                   loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                   metrics=['accuracy'])
print('Train pqat model:')
pqat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)
Train qat model:
422/422 [==============================] - 4s 8ms/step - loss: 0.0400 - accuracy: 0.9885 - val_loss: 0.0585 - val_accuracy: 0.9848
Train pqat model:
422/422 [==============================] - 4s 8ms/step - loss: 0.0427 - accuracy: 0.9877 - val_loss: 0.0574 - val_accuracy: 0.9843
<keras.callbacks.History at 0x7fe7e73c8bb0>
print("QAT Model sparsity:")
print_model_weights_sparsity(qat_model)
print("PQAT Model sparsity:")
print_model_weights_sparsity(pqat_model)
QAT Model sparsity:
conv2d/kernel:0: 7.41% sparsity  (8/108)
conv2d/bias:0: 0.00% sparsity  (0/12)
dense/kernel:0: 6.66% sparsity  (1351/20280)
dense/bias:0: 0.00% sparsity  (0/10)
PQAT Model sparsity:
conv2d/kernel:0: 50.00% sparsity  (54/108)
conv2d/bias:0: 0.00% sparsity  (0/12)
dense/kernel:0: 50.00% sparsity  (10140/20280)
dense/bias:0: 0.00% sparsity  (0/10)
See compression benefits of PQAT model
Define a helper function to get the size of the gzipped model file.
def get_gzipped_model_size(file):
  # Returns the size of the gzipped model, in kilobytes.
  _, zipped_file = tempfile.mkstemp('.zip')
  with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
    f.write(file)
  return os.path.getsize(zipped_file)/1000
Since this is a small model, the difference between the two models isn't very noticeable. Applying pruning and PQAT to a bigger production model would yield more significant compression.
# QAT model
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
qat_tflite_model = converter.convert()
qat_model_file = 'qat_model.tflite'
# Save the model.
with open(qat_model_file, 'wb') as f:
  f.write(qat_tflite_model)

# PQAT model
converter = tf.lite.TFLiteConverter.from_keras_model(pqat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
pqat_tflite_model = converter.convert()
pqat_model_file = 'pqat_model.tflite'
# Save the model.
with open(pqat_model_file, 'wb') as f:
  f.write(pqat_tflite_model)
print("QAT model size: ", get_gzipped_model_size(qat_model_file), ' KB')
print("PQAT model size: ", get_gzipped_model_size(pqat_model_file), ' KB')
WARNING:absl:Found untraced functions such as _update_step_xla, reshape_layer_call_fn, reshape_layer_call_and_return_conditional_losses, conv2d_layer_call_fn, conv2d_layer_call_and_return_conditional_losses while saving (showing 5 of 10). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpm3ywaky2/assets
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpm3ywaky2/assets
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/lite/python/convert.py:765: UserWarning: Statistics for quantized inputs were expected, but not specified; continuing anyway.
  warnings.warn("Statistics for quantized inputs were expected, but not "
2022-12-14 12:26:27.601802: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2022-12-14 12:26:27.601845: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
WARNING:absl:Found untraced functions such as _update_step_xla, reshape_layer_call_fn, reshape_layer_call_and_return_conditional_losses, conv2d_layer_call_fn, conv2d_layer_call_and_return_conditional_losses while saving (showing 5 of 10). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpwioyph_e/assets
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpwioyph_e/assets
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/lite/python/convert.py:765: UserWarning: Statistics for quantized inputs were expected, but not specified; continuing anyway.
  warnings.warn("Statistics for quantized inputs were expected, but not "
2022-12-14 12:26:30.061820: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2022-12-14 12:26:30.061864: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
QAT model size: 17.32 KB
PQAT model size: 14.597 KB
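For reference, you can also gzip the baseline Keras model saved earlier and compare; this uses the keras_file path and the helper defined above, and the exact number will depend on your run.
# Optional: compare against the gzipped baseline Keras model saved earlier.
print("Baseline Keras model size: ", get_gzipped_model_size(keras_file), ' KB')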
See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
def eval_model(interpreter):
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on every image in the "test" dataset.
  prediction_digits = []
  for i, test_image in enumerate(test_images):
    if i % 1000 == 0:
      print(f"Evaluated on {i} results so far.")
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the digit with highest
    # probability.
    output = interpreter.tensor(output_index)
    digit = np.argmax(output()[0])
    prediction_digits.append(digit)

  print('\n')
  # Compare prediction results with ground truth labels to calculate accuracy.
  prediction_digits = np.array(prediction_digits)
  accuracy = (prediction_digits == test_labels).mean()
  return accuracy
Evaluate the pruned and quantized model, and see that the accuracy from TensorFlow persists in the TFLite backend.
interpreter = tf.lite.Interpreter(pqat_model_file)
interpreter.allocate_tensors()
pqat_test_accuracy = eval_model(interpreter)
print('Pruned and quantized TFLite test_accuracy:', pqat_test_accuracy)
print('Pruned TF test accuracy:', pruned_model_accuracy)
Evaluated on 0 results so far.
Evaluated on 1000 results so far.
Evaluated on 2000 results so far.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Evaluated on 3000 results so far.
Evaluated on 4000 results so far.
Evaluated on 5000 results so far.
Evaluated on 6000 results so far.
Evaluated on 7000 results so far.
Evaluated on 8000 results so far.
Evaluated on 9000 results so far.
Pruned and quantized TFLite test_accuracy: 0.9821
Pruned TF test accuracy: 0.9764000177383423
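If you want to compare all three models, the QAT TFLite model saved earlier can be evaluated the same way; a minimal sketch reusing eval_model and qat_model_file (the resulting accuracy is environment-dependent and not part of the recorded output above):
# Optional, illustrative: evaluate the QAT TFLite model with the same helper.
interpreter = tf.lite.Interpreter(qat_model_file)
interpreter.allocate_tensors()
qat_test_accuracy = eval_model(interpreter)
print('QAT TFLite test_accuracy:', qat_test_accuracy)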
Apply post-training quantization and compare to PQAT model
Next, apply normal post-training quantization (no fine-tuning) to the pruned model and check its accuracy against the PQAT model. This demonstrates why you need PQAT to improve the quantized model's accuracy.
First, define a generator for the calibration dataset from the first 1000 training images.
def mnist_representative_data_gen():
  for image in train_images[:1000]:
    image = np.expand_dims(image, axis=0).astype(np.float32)
    yield [image]
Quantize the model and compare accuracy to the previously acquired PQAT model. Note that the model quantized with fine-tuning achieves higher accuracy.
converter = tf.lite.TFLiteConverter.from_keras_model(stripped_pruned_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = mnist_representative_data_gen
post_training_tflite_model = converter.convert()
post_training_model_file = 'post_training_model.tflite'
# Save the model.
with open(post_training_model_file, 'wb') as f:
  f.write(post_training_tflite_model)
# Compare accuracy
interpreter = tf.lite.Interpreter(post_training_model_file)
interpreter.allocate_tensors()
post_training_test_accuracy = eval_model(interpreter)
print('PQAT TFLite test_accuracy:', pqat_test_accuracy)
print('Post-training (no fine-tuning) TFLite test accuracy:', post_training_test_accuracy)
WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op while saving (showing 1 of 1). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp8lko_blg/assets
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp8lko_blg/assets
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/lite/python/convert.py:765: UserWarning: Statistics for quantized inputs were expected, but not specified; continuing anyway.
  warnings.warn("Statistics for quantized inputs were expected, but not "
2022-12-14 12:26:31.690126: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2022-12-14 12:26:31.690161: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
fully_quantize: 0, inference_type: 6, input_inference_type: FLOAT32, output_inference_type: FLOAT32
Evaluated on 0 results so far.
Evaluated on 1000 results so far.
Evaluated on 2000 results so far.
Evaluated on 3000 results so far.
Evaluated on 4000 results so far.
Evaluated on 5000 results so far.
Evaluated on 6000 results so far.
Evaluated on 7000 results so far.
Evaluated on 8000 results so far.
Evaluated on 9000 results so far.
PQAT TFLite test_accuracy: 0.9821
Post-training (no fine-tuning) TFLite test accuracy: 0.9762
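Optionally, you can reuse get_gzipped_model_size to check the post-training quantized model's compressed size; it should be roughly comparable to the PQAT model's, since both quantize the same pruned weights (exact numbers vary per run).
# Optional: the post-training quantized model's gzipped size should be roughly
# comparable to the PQAT model's, since both quantize the same pruned weights.
print("Post-training model size: ", get_gzipped_model_size(post_training_model_file), ' KB')
print("PQAT model size: ", get_gzipped_model_size(pqat_model_file), ' KB')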
Conclusion
In this tutorial, you learned how to create a model, prune it using the sparsity API, and apply pruning-preserving quantization aware training (PQAT) to preserve sparsity while using QAT. The final PQAT model was compared to the QAT one to show that sparsity is preserved in the former and lost in the latter. Next, the models were converted to TFLite to show the compression benefits of chaining pruning and PQAT, and the TFLite model was evaluated to ensure that the accuracy persists in the TFLite backend. Finally, the PQAT model was compared to a quantized pruned model obtained with the post-training quantization API, to demonstrate the advantage of PQAT in recovering the accuracy loss from normal quantization.