Trim insignificant weights

This document provides an overview of model pruning to help you determine how it fits with your use case. To dive right into the code, see the Pruning with Keras tutorial and the API docs. For additional details on how to use the Keras API, a deep dive into pruning, and documentation on more advanced usage patterns, see the Train sparse models guide.

Overview

Magnitude-based weight pruning gradually zeroes out model weights during the training process to achieve model sparsity. Sparse models are easier to compress, and we can skip the zeroes during inference for latency improvements.
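To make the mechanism concrete, here is a standalone NumPy sketch (not the library's implementation) that zeroes out the smallest-magnitude entries of a weight matrix until a target sparsity is reached; `magnitude_prune` and `target_sparsity` are illustrative names. In the Keras API this happens gradually over training steps according to a pruning schedule rather than in one shot.

```python
import numpy as np

def magnitude_prune(weights, target_sparsity):
    """Zero out the smallest-magnitude entries so that `target_sparsity`
    of the values are zero. Illustrative only."""
    flat = np.abs(weights).flatten()
    k = int(target_sparsity * flat.size)          # number of weights to zero
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

weights = np.random.randn(256, 256).astype(np.float32)
pruned = magnitude_prune(weights, target_sparsity=0.5)
print("sparsity:", 1.0 - np.count_nonzero(pruned) / pruned.size)
```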

This technique brings improvements via model compression. In the future, framework support for this technique will provide latency improvements. We've seen up to 6x improvements in model compression with minimal loss of accuracy.
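As a rough illustration of why sparse models compress well (a sketch, not a benchmark), the snippet below gzips a random weight matrix before and after zeroing half of its entries; the printed sizes will vary, and real gains depend on the model and the compression tool used.

```python
import gzip
import numpy as np

# Illustrative only: compare gzip-compressed sizes of dense vs. 50%-sparse weights.
dense = np.random.randn(1024, 1024).astype(np.float32)
threshold = np.quantile(np.abs(dense), 0.5)    # median magnitude
sparse = dense * (np.abs(dense) > threshold)   # zero out the smaller half

dense_size = len(gzip.compress(dense.tobytes()))
sparse_size = len(gzip.compress(sparse.tobytes()))
print(f"dense:  {dense_size / 1e6:.2f} MB after gzip")
print(f"sparse: {sparse_size / 1e6:.2f} MB after gzip")
```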

The technique is being evaluated in various speech applications, such as speech recognition and text-to-speech, and has been experimented with across various vision and translation models.

Users can apply this technique using the Keras APIs on TensorFlow 1.x (versions 1.14+) and the nightly builds, in both graph and eager execution.
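The following is a minimal sketch of how the Keras pruning API is typically wired up, assuming the tensorflow_model_optimization (tfmot) package; the layer sizes and schedule values are placeholders, and the tutorial and API docs remain the authoritative reference.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Schedule values here are illustrative, not recommendations.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0,
    final_sparsity=0.5,
    begin_step=0,
    end_step=10000)

base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Wrap the model so its weights are pruned during training.
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=pruning_schedule)

pruned_model.compile(optimizer='adam',
                     loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])

# The UpdatePruningStep callback keeps the pruning step counter in sync with
# training; x_train and y_train are assumed to exist in your setup.
# pruned_model.fit(x_train, y_train, epochs=2,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# strip_pruning removes the pruning wrappers before export, leaving only the
# (now sparse) weights for downstream compression.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```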

Results

Image Classification

Model             Non-sparse Top-1 Accuracy   Sparse Accuracy   Sparsity
InceptionV3       78.1%                       78.0%             50%
                                              76.1%             75%
                                              74.6%             87.5%
MobilenetV1 224   70.9%                       69.5%             50%

The models were tested on ImageNet.

Translation

Model        Non-sparse BLEU   Sparse BLEU   Sparsity
GNMT EN-DE   26.77             26.86         80%
                               26.52         85%
                               26.19         90%
GNMT DE-EN   29.47             29.50         80%
                               29.24         85%
                               28.81         90%

The models use the WMT16 German-English dataset, with news-test2013 as the dev set and news-test2015 as the test set.

Examples

In addition to the Pruning with Keras tutorial, see the following examples:

  • Train a CNN model on the MNIST handwritten digit classification task with pruning: code
  • Train an LSTM on the IMDB sentiment classification task with pruning: code

Tips

  1. Start with a pre-trained model or weights if possible. If not, train a model without pruning first and prune it afterwards.
  2. Do not prune too frequently, so the model has time to recover between pruning steps. The toolkit provides a reasonable default frequency.
  3. Try running an experiment in which you prune a pre-trained model to the final sparsity with begin step 0, as in the sketch after this list.
  4. Use a learning rate that is neither too high nor too low while the model is being pruned. Treat the pruning schedule as a hyperparameter.
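As a sketch of tip 3 (one plausible setup, not a prescription), a constant-sparsity schedule that starts pruning a pre-trained model at step 0 might look like the following; `pretrained_model` and the 75% target are placeholders.

```python
import tensorflow_model_optimization as tfmot

# Prune a pre-trained model to the final sparsity from the very first step.
# The frequency is left at the library default so the model has time to
# recover between pruning updates (tip 2).
schedule = tfmot.sparsity.keras.ConstantSparsity(
    target_sparsity=0.75, begin_step=0)

# `pretrained_model` is assumed to be an existing Keras model.
# pruned = tfmot.sparsity.keras.prune_low_magnitude(
#     pretrained_model, pruning_schedule=schedule)
```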

For background, see To prune, or not to prune: exploring the efficacy of pruning for model compression [paper].