This document provides an overview on model pruning to help you determine how it fits with your use case.
- To dive right into an end-to-end example, see the Pruning with Keras example.
- To quickly find the APIs you need for your use case, see the pruning comprehensive guide.
Magnitude-based weight pruning gradually zeroes out model weights during the training process to achieve model sparsity. Sparse models are easier to compress, and we can skip the zeroes during inference for latency improvements.
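For concreteness, here is a minimal sketch of magnitude-based pruning with the `tfmot.sparsity.keras` API, assuming the `tensorflow_model_optimization` package is installed; the model architecture, schedule parameters, and random training data are illustrative placeholders, not recommendations.

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# A small placeholder Sequential model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(10),
])

# Gradually raise sparsity from 0% to 50% over the first 1000 training steps.
pruning_schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0,
    final_sparsity=0.5,
    begin_step=0,
    end_step=1000)

pruned_model = tfmot.sparsity.keras.prune_low_magnitude(
    model, pruning_schedule=pruning_schedule)

pruned_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

# UpdatePruningStep advances the schedule each step; training fails without it.
pruned_model.fit(
    np.random.rand(256, 20),
    np.random.randint(0, 10, size=256),
    epochs=2,
    callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
```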
This technique brings improvements via model compression. In the future, framework support for this technique will provide latency improvements. We've seen up to 6x improvements in model compression with minimal loss of accuracy.
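One way to observe the compression gain (the approach used in the Keras tutorial) is to strip the pruning wrappers and compare file sizes under an ordinary compressor such as zip. The sketch below assumes the `pruned_model` from the previous snippet.

```python
import os
import tempfile
import zipfile

import tensorflow_model_optimization as tfmot

# strip_pruning removes the training-time wrappers, leaving a standard
# Keras model whose weight tensors contain the final zeros.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)

_, keras_file = tempfile.mkstemp('.h5')
final_model.save(keras_file, include_optimizer=False)

# Zeroed weights compress well; compare this size against an unpruned model.
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
    f.write(keras_file)
print('Zipped pruned model: %.2f KB' % (os.path.getsize(zipped_file) / 1024.))
```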
The technique is being evaluated in speech applications such as speech recognition and text-to-speech, and has been tested across a range of vision and translation models.
API Compatibility Matrix
Users can apply pruning with the following APIs:
- Model building: `tf.keras` with only Sequential and Functional models (see the sketch after this list)
- TensorFlow versions: TF 1.x for versions 1.14+ and 2.x.
  - `tf.compat.v1` with a TF 2.X package and `tf.compat.v2` with a TF 1.X package are not supported.
- TensorFlow execution mode: both graph and eager
- Distributed training: `tf.distribute` with only graph execution
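As mentioned in the list above, pruning can also be applied to individual layers while building a Sequential or Functional model. This sketch wraps only one layer of a Functional model; the layer shapes are placeholders, and training would still require the `UpdatePruningStep` callback.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

prune = tfmot.sparsity.keras.prune_low_magnitude

# Only the first Dense layer is wrapped for pruning; the output layer
# is left unpruned.
inputs = tf.keras.Input(shape=(20,))
x = prune(tf.keras.layers.Dense(64, activation='relu'))(inputs)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)
```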
It is on our roadmap to add support in further areas, such as model building beyond Sequential and Functional models and framework-level support for latency improvements.
Results

Image Classification

| Model | Non-sparse Top-1 Accuracy | Sparse Accuracy | Sparsity |
|---|---|---|---|
| InceptionV3 | 78.1% | 78.0% | 50% |
| | | 76.1% | 75% |
| | | 74.6% | 87.5% |
| MobilenetV1 224 | 71.04% | 70.84% | 50% |

The models were tested on ImageNet.
Translation

| Model | Non-sparse BLEU | Sparse BLEU | Sparsity |
|---|---|---|---|
| GNMT EN-DE | 26.77 | 26.86 | 80% |
| | | 26.52 | 85% |
| | | 26.19 | 90% |
| GNMT DE-EN | 29.47 | 29.50 | 80% |
| | | 29.24 | 85% |
| | | 28.81 | 90% |

The models use the WMT16 German and English dataset with news-test2013 as the dev set and news-test2015 as the test set.
Examples

In addition to the Pruning with Keras tutorial, see the following examples:
- Train a CNN model on the MNIST handwritten digit classification task with pruning: code
- Train an LSTM on the IMDB sentiment classification task with pruning: code
For background, see To prune, or not to prune: exploring the efficacy of pruning for model compression [paper].