Welcome to the guide on Keras weight pruning for improving the latency of on-device inference via XNNPACK.
This guide presents the usage of the newly introduced tfmot.sparsity.keras.PruningPolicy
API and demonstrates how it can be used to accelerate mostly convolutional models on modern CPUs using XNNPACK sparse inference.
The guide covers the following steps of the model creation process:
- Build and train the dense baseline
- Fine-tune model with pruning
- Convert to TFLite
- On-device benchmark
The guide doesn't cover best practices for fine-tuning with pruning. For more detailed information on this topic, please check out our comprehensive guide.
Setup
pip install -q tensorflow
pip install -q tensorflow-model-optimization
import tempfile
import tensorflow as tf
import numpy as np
from tensorflow import keras
import tensorflow_datasets as tfds
import tensorflow_model_optimization as tfmot
%load_ext tensorboard
2023-10-03 11:17:19.531296: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-10-03 11:17:19.531342: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-10-03 11:17:19.531379: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
Build and train the dense model
We build and train a simple baseline CNN for a classification task on the CIFAR10 dataset.
# Load CIFAR10 dataset.
(ds_train, ds_val, ds_test), ds_info = tfds.load(
'cifar10',
split=['train[:90%]', 'train[90%:]', 'test'],
as_supervised=True,
with_info=True,
)
# Normalize the input image so that each pixel value is between 0 and 1.
def normalize_img(image, label):
"""Normalizes images: `uint8` -> `float32`."""
return tf.image.convert_image_dtype(image, tf.float32), label
# Load the data in batches of 128 images.
batch_size = 128
def prepare_dataset(ds, buffer_size=None):
ds = ds.map(normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.cache()
if buffer_size:
ds = ds.shuffle(buffer_size)
ds = ds.batch(batch_size)
ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
return ds
ds_train = prepare_dataset(ds_train,
buffer_size=ds_info.splits['train'].num_examples)
ds_val = prepare_dataset(ds_val)
ds_test = prepare_dataset(ds_test)
# Build the dense baseline model.
dense_model = keras.Sequential([
keras.layers.InputLayer(input_shape=(32, 32, 3)),
keras.layers.ZeroPadding2D(padding=1),
keras.layers.Conv2D(
filters=8,
kernel_size=(3, 3),
strides=(2, 2),
padding='valid'),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.DepthwiseConv2D(kernel_size=(3, 3), padding='same'),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.Conv2D(filters=16, kernel_size=(1, 1)),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.ZeroPadding2D(padding=1),
keras.layers.DepthwiseConv2D(
kernel_size=(3, 3), strides=(2, 2), padding='valid'),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.Conv2D(filters=32, kernel_size=(1, 1)),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.GlobalAveragePooling2D(),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Compile and train the dense model for 10 epochs.
dense_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
dense_model.fit(
ds_train,
epochs=10,
validation_data=ds_val)
# Evaluate the dense model.
_, dense_model_accuracy = dense_model.evaluate(ds_test, verbose=0)
2023-10-03 11:17:23.198789: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:268] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected Epoch 1/10 352/352 [==============================] - 14s 23ms/step - loss: 1.9730 - accuracy: 0.2703 - val_loss: 2.3706 - val_accuracy: 0.1740 Epoch 2/10 352/352 [==============================] - 5s 14ms/step - loss: 1.6978 - accuracy: 0.3736 - val_loss: 2.1230 - val_accuracy: 0.2326 Epoch 3/10 352/352 [==============================] - 5s 14ms/step - loss: 1.6037 - accuracy: 0.4090 - val_loss: 1.7919 - val_accuracy: 0.3416 Epoch 4/10 352/352 [==============================] - 5s 14ms/step - loss: 1.5405 - accuracy: 0.4409 - val_loss: 1.5667 - val_accuracy: 0.4308 Epoch 5/10 352/352 [==============================] - 5s 14ms/step - loss: 1.4906 - accuracy: 0.4628 - val_loss: 1.4805 - val_accuracy: 0.4598 Epoch 6/10 352/352 [==============================] - 5s 14ms/step - loss: 1.4596 - accuracy: 0.4714 - val_loss: 1.5235 - val_accuracy: 0.4382 Epoch 7/10 352/352 [==============================] - 5s 14ms/step - loss: 1.4361 - accuracy: 0.4800 - val_loss: 1.4907 - val_accuracy: 0.4414 Epoch 8/10 352/352 [==============================] - 5s 14ms/step - loss: 1.4145 - accuracy: 0.4907 - val_loss: 1.4981 - val_accuracy: 0.4614 Epoch 9/10 352/352 [==============================] - 5s 14ms/step - loss: 1.3994 - accuracy: 0.4932 - val_loss: 1.4911 - val_accuracy: 0.4420 Epoch 10/10 352/352 [==============================] - 5s 14ms/step - loss: 1.3867 - accuracy: 0.4970 - val_loss: 1.4354 - val_accuracy: 0.4772
Build the sparse model
Following the instructions from the comprehensive guide, we apply the tfmot.sparsity.keras.prune_low_magnitude
function with parameters that target on-device acceleration via pruning, i.e. the tfmot.sparsity.keras.PruneForLatencyOnXNNPack
policy.
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
# Compute the end step to finish pruning after 5 epochs.
end_epoch = 5
num_iterations_per_epoch = len(ds_train)
end_step = num_iterations_per_epoch * end_epoch
# Define parameters for pruning.
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.25,
final_sparsity=0.75,
begin_step=0,
end_step=end_step),
'pruning_policy': tfmot.sparsity.keras.PruneForLatencyOnXNNPack()
}
# Try to apply pruning wrapper with pruning policy parameter.
try:
model_for_pruning = prune_low_magnitude(dense_model, **pruning_params)
except ValueError as e:
print(e)
The call to prune_low_magnitude results in a ValueError with the message "Could not find a GlobalAveragePooling2D layer with keepdims = True in all output branches". The message indicates that the model isn't supported for pruning with the tfmot.sparsity.keras.PruneForLatencyOnXNNPack policy; specifically, the GlobalAveragePooling2D layer requires the parameter keepdims = True. Let's fix that and reapply the prune_low_magnitude function.
fixed_dense_model = keras.Sequential([
keras.layers.InputLayer(input_shape=(32, 32, 3)),
keras.layers.ZeroPadding2D(padding=1),
keras.layers.Conv2D(
filters=8,
kernel_size=(3, 3),
strides=(2, 2),
padding='valid'),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.DepthwiseConv2D(kernel_size=(3, 3), padding='same'),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.Conv2D(filters=16, kernel_size=(1, 1)),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.ZeroPadding2D(padding=1),
keras.layers.DepthwiseConv2D(
kernel_size=(3, 3), strides=(2, 2), padding='valid'),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.Conv2D(filters=32, kernel_size=(1, 1)),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.GlobalAveragePooling2D(keepdims=True),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Use the pretrained model for pruning instead of training from scratch.
fixed_dense_model.set_weights(dense_model.get_weights())
# Try to reapply pruning wrapper.
model_for_pruning = prune_low_magnitude(fixed_dense_model, **pruning_params)
The invocation of prune_low_magnitude finished without any errors, meaning that the model is fully supported by the tfmot.sparsity.keras.PruneForLatencyOnXNNPack policy and can be accelerated using XNNPACK sparse inference.
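As a quick sanity check (an illustrative addition, not part of the original guide), we can print the layer classes of the wrapped model; layers that take part in pruning should now appear wrapped by PruneLowMagnitude:
# Print the layer classes of the wrapped model. Layers supported for
# pruning are now wrapped by PruneLowMagnitude wrappers.
for layer in model_for_pruning.layers:
  print(layer.__class__.__name__, '->', layer.name)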
Fine-tune the sparse model
Following the pruning example, we fine-tune the sparse model using the weights of the dense model. We start fine-tuning the model at 25% sparsity (25% of the weights are zeroed out) and end at 75% sparsity.
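For intuition, the polynomial schedule defined above interpolates the target sparsity between begin_step and end_step. The small sketch below is an illustration added here (it assumes the default exponent of 3 used by PolynomialDecay) and simply evaluates that polynomial formula at a few steps:
# Illustration only: approximate the PolynomialDecay schedule by hand,
# assuming its default exponent of 3. Sparsity ramps from 25% at step 0
# up to 75% at end_step and stays constant afterwards.
def approx_sparsity(step, initial=0.25, final=0.75, begin=0, end=end_step, power=3):
  progress = min(max((step - begin) / (end - begin), 0.0), 1.0)
  return final + (initial - final) * (1.0 - progress) ** power

for step in [0, end_step // 2, end_step, 2 * end_step]:
  print('step %d: target sparsity %.2f' % (step, approx_sparsity(step)))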
logdir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
]
model_for_pruning.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
model_for_pruning.fit(
ds_train,
epochs=15,
validation_data=ds_val,
callbacks=callbacks)
# Evaluate the pruned model.
_, pruned_model_accuracy = model_for_pruning.evaluate(ds_test, verbose=0)
print('Dense model test accuracy:', dense_model_accuracy)
print('Pruned model test accuracy:', pruned_model_accuracy)
Epoch 1/15 352/352 [==============================] - 8s 15ms/step - loss: 1.3953 - accuracy: 0.4936 - val_loss: 1.4770 - val_accuracy: 0.4558 Epoch 2/15 352/352 [==============================] - 5s 14ms/step - loss: 1.4170 - accuracy: 0.4876 - val_loss: 2.0113 - val_accuracy: 0.3190 Epoch 3/15 352/352 [==============================] - 5s 14ms/step - loss: 1.4470 - accuracy: 0.4751 - val_loss: 2.7967 - val_accuracy: 0.1966 Epoch 4/15 352/352 [==============================] - 5s 14ms/step - loss: 1.4432 - accuracy: 0.4753 - val_loss: 1.7363 - val_accuracy: 0.3802 Epoch 5/15 352/352 [==============================] - 5s 14ms/step - loss: 1.4248 - accuracy: 0.4823 - val_loss: 1.4734 - val_accuracy: 0.4506 Epoch 6/15 352/352 [==============================] - 5s 14ms/step - loss: 1.4090 - accuracy: 0.4882 - val_loss: 1.4609 - val_accuracy: 0.4572 Epoch 7/15 352/352 [==============================] - 5s 14ms/step - loss: 1.4009 - accuracy: 0.4938 - val_loss: 1.6531 - val_accuracy: 0.4000 Epoch 8/15 352/352 [==============================] - 5s 14ms/step - loss: 1.3940 - accuracy: 0.4969 - val_loss: 1.4390 - val_accuracy: 0.4808 Epoch 9/15 352/352 [==============================] - 5s 14ms/step - loss: 1.3857 - accuracy: 0.4990 - val_loss: 1.3984 - val_accuracy: 0.4838 Epoch 10/15 352/352 [==============================] - 5s 14ms/step - loss: 1.3824 - accuracy: 0.4992 - val_loss: 1.4029 - val_accuracy: 0.4852 Epoch 11/15 352/352 [==============================] - 5s 14ms/step - loss: 1.3741 - accuracy: 0.5034 - val_loss: 1.5039 - val_accuracy: 0.4488 Epoch 12/15 352/352 [==============================] - 5s 14ms/step - loss: 1.3704 - accuracy: 0.5038 - val_loss: 1.5050 - val_accuracy: 0.4554 Epoch 13/15 352/352 [==============================] - 5s 14ms/step - loss: 1.3639 - accuracy: 0.5053 - val_loss: 1.4128 - val_accuracy: 0.4816 Epoch 14/15 352/352 [==============================] - 5s 14ms/step - loss: 1.3616 - accuracy: 0.5076 - val_loss: 1.8371 - val_accuracy: 0.3952 Epoch 15/15 352/352 [==============================] - 5s 14ms/step - loss: 1.3562 - accuracy: 0.5104 - val_loss: 1.4272 - val_accuracy: 0.4808 Dense model test accuracy: 0.4767000079154968 Pruned model test accuracy: 0.4666999876499176
The logs show the progression of sparsity on a per-layer basis.
#docs_infra: no_execute
%tensorboard --logdir={logdir}
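As a complementary check to the TensorBoard summaries (an addition for illustration, not part of the original guide), we can also measure the achieved sparsity directly by counting zero entries in each kernel of the pruned model:
# Rough per-layer sparsity check: fraction of zero-valued entries in each
# kernel of the pruned model. Pruned kernels should be close to 75% zeros.
for weight in model_for_pruning.weights:
  if 'kernel' in weight.name:
    values = weight.numpy()
    print('%s: %.2f%% zeros' % (weight.name, 100.0 * np.mean(values == 0)))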
After fine-tuning with pruning, the test accuracy of the pruned model (about 47%) remains close to that of the dense baseline. Let's compare on-device latency using the TFLite benchmark tool.
Model conversion and benchmarking
To convert the pruned model into TFLite, we need to replace the PruneLowMagnitude wrappers with the original layers via the strip_pruning function. Also, since the weights of the pruned model (model_for_pruning) are mostly zeros, we may apply the tf.lite.Optimize.EXPERIMENTAL_SPARSITY optimization to store the resulting TFLite model efficiently. This optimization flag is not required for the dense model.
converter = tf.lite.TFLiteConverter.from_keras_model(dense_model)
dense_tflite_model = converter.convert()
_, dense_tflite_file = tempfile.mkstemp('.tflite')
with open(dense_tflite_file, 'wb') as f:
f.write(dense_tflite_model)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
converter.optimizations = [tf.lite.Optimize.EXPERIMENTAL_SPARSITY]
pruned_tflite_model = converter.convert()
_, pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(pruned_tflite_file, 'wb') as f:
f.write(pruned_tflite_model)
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpxk93n9_b/assets INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpxk93n9_b/assets 2023-10-03 11:19:44.944330: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format. 2023-10-03 11:19:44.944365: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency. INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpdu_sots1/assets INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpdu_sots1/assets 2023-10-03 11:19:47.911675: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format. 2023-10-03 11:19:47.911713: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
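Because most weights of the pruned model are zeros, the sparsity optimization should also show up in the serialized model size. As an optional check (this snippet is an addition that mirrors the size comparison used in other TF MOT guides), we can compare the compressed sizes of the two TFLite files written above:
import os
import zipfile

def get_zipped_model_size(file):
  # Returns the size of the zip-compressed model file in bytes.
  _, zipped_file = tempfile.mkstemp('.zip')
  with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
    f.write(file)
  return os.path.getsize(zipped_file)

print('Compressed dense TFLite model: %.2f KB' % (get_zipped_model_size(dense_tflite_file) / 1024.0))
print('Compressed pruned TFLite model: %.2f KB' % (get_zipped_model_size(pruned_tflite_file) / 1024.0))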
Following the instructions of the TFLite Model Benchmarking Tool, we build the tool, upload it to the Android device together with the dense and pruned TFLite models, and benchmark both models on the device.
! adb shell /data/local/tmp/benchmark_model \
--graph=/data/local/tmp/dense_model.tflite \
--use_xnnpack=true \
--num_runs=100 \
--num_threads=1
/bin/bash: adb: command not found
! adb shell /data/local/tmp/benchmark_model \
--graph=/data/local/tmp/pruned_model.tflite \
--use_xnnpack=true \
--num_runs=100 \
--num_threads=1
/bin/bash: adb: command not found
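Note that adb is not available in this notebook environment, so the benchmark commands above only run when executed with a connected Android device. As a host-side sanity check before benchmarking (an optional addition to the original flow), the pruned TFLite model can be run with tf.lite.Interpreter on a few test images:
# Optional host-side check: run the pruned TFLite model with the TFLite
# interpreter on one batch of test images and report a rough accuracy.
interpreter = tf.lite.Interpreter(model_path=pruned_tflite_file)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

correct, total = 0, 0
for images, labels in ds_test.take(1):
  for image, label in zip(images.numpy(), labels.numpy()):
    interpreter.set_tensor(input_index, np.expand_dims(image, axis=0))
    interpreter.invoke()
    prediction = np.argmax(interpreter.get_tensor(output_index)[0])
    correct += int(prediction == label)
    total += 1
print('Pruned TFLite accuracy on %d test images: %.2f' % (total, correct / total))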
Benchmarks on a Pixel 4 resulted in an average inference time of 17us for the dense model and 12us for the pruned model. The on-device benchmarks demonstrate a clear 5us, or roughly 30%, improvement in latency even for such a small model. In our experience, larger models based on MobileNetV3 or EfficientNet-lite show similar performance improvements. The speed-up varies based on the relative contribution of 1x1 convolutions to the overall model.
Conclusion
In this tutorial, we showed how to create sparse models for faster on-device performance using the new functionality introduced by the TF MOT API and XNNPACK. These sparse models are smaller and faster than their dense counterparts while retaining or even surpassing their quality.
We encourage you to try this new capability, which can be particularly important for deploying your models on device.