Weight clustering in Keras example

Overview

Welcome to the end-to-end example of weight clustering, part of the TensorFlow Model Optimization Toolkit.

Other pages

For an introduction to what weight clustering is and to determine whether you should use it (including what is supported), see the overview page.

To quickly find the APIs you need for your use case (beyond fully clustering a model with 16 clusters), see the comprehensive guide.

Contents

In this tutorial, you will:

  1. Train a tf.keras model for the MNIST dataset from scratch.
  2. Fine-tune the model by applying the weight clustering API and see the accuracy.
  3. Create 6x smaller TF and TFLite models from clustering.
  4. Create an 8x smaller TFLite model from combining weight clustering and post-training quantization.
  5. See the persistence of accuracy from TF to TFLite.

Setup

You can run this Jupyter Notebook in a local virtualenv or in Colab. For details on setting up dependencies, please refer to the installation guide.

 pip install -q tensorflow-model-optimization
 import tensorflow as tf
from tensorflow import keras

import numpy as np
import tempfile
import zipfile
import os
 

Train a tf.keras model for MNIST without clustering

 # Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images  = test_images / 255.0

# Define the model architecture.
model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(28, 28)),
    keras.layers.Reshape(target_shape=(28, 28, 1)),
    keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
    train_images,
    train_labels,
    validation_split=0.1,
    epochs=10
)
 
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
Epoch 1/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.3352 - accuracy: 0.9039 - val_loss: 0.1543 - val_accuracy: 0.9575
Epoch 2/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1535 - accuracy: 0.9559 - val_loss: 0.0948 - val_accuracy: 0.9745
Epoch 3/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1003 - accuracy: 0.9715 - val_loss: 0.0750 - val_accuracy: 0.9788
Epoch 4/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0791 - accuracy: 0.9768 - val_loss: 0.0652 - val_accuracy: 0.9828
Epoch 5/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0669 - accuracy: 0.9803 - val_loss: 0.0663 - val_accuracy: 0.9807
Epoch 6/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0589 - accuracy: 0.9820 - val_loss: 0.0581 - val_accuracy: 0.9833
Epoch 7/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0528 - accuracy: 0.9840 - val_loss: 0.0584 - val_accuracy: 0.9832
Epoch 8/10
1688/1688 [==============================] - 8s 5ms/step - loss: 0.0479 - accuracy: 0.9854 - val_loss: 0.0560 - val_accuracy: 0.9838
Epoch 9/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0434 - accuracy: 0.9867 - val_loss: 0.0550 - val_accuracy: 0.9853
Epoch 10/10
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0393 - accuracy: 0.9880 - val_loss: 0.0571 - val_accuracy: 0.9845

<tensorflow.python.keras.callbacks.History at 0x7fd1a1e18668>

Evaluate the baseline model and save it for later usage

 _, baseline_model_accuracy = model.evaluate(
    test_images, test_labels, verbose=0)

print('Baseline test accuracy:', baseline_model_accuracy)

_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
 
Baseline test accuracy: 0.9805999994277954
Saving model to:  /tmp/tmpphs68ctq.h5

Fine-tune the pre-trained model with clustering

Apply the cluster_weights() API to a whole pre-trained model to demonstrate its effectiveness in reducing the model size after applying zip, while keeping decent accuracy. For how best to balance the accuracy and compression rate for your use case, please refer to the per-layer example in the comprehensive guide (a brief sketch of that approach follows below).

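If you only want to cluster some layers, the comprehensive guide uses a clone_model pattern. The snippet below is a minimal sketch of that approach, not part of this notebook's code; the helper name apply_clustering_to_dense is just an illustrative choice. It would cluster only the Dense layer of the baseline model trained above:

 import tensorflow as tf
import tensorflow_model_optimization as tfmot

cluster_weights = tfmot.clustering.keras.cluster_weights
clustering_params = {
  'number_of_clusters': 16,
  'cluster_centroids_init': tfmot.clustering.keras.CentroidInitialization.LINEAR
}

def apply_clustering_to_dense(layer):
  # Wrap only Dense layers with clustering; return every other layer unchanged.
  if isinstance(layer, tf.keras.layers.Dense):
    return cluster_weights(layer, **clustering_params)
  return layer

# clone_model rebuilds the model layer by layer, letting clone_function decide
# which layers get wrapped for clustering.
clustered_dense_only = tf.keras.models.clone_model(
    model,
    clone_function=apply_clustering_to_dense,
)
 
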
Define the model and apply the clustering API

Before you pass the model to the clustering API, make sure it is already trained and shows acceptable accuracy.

 import tensorflow_model_optimization as tfmot

cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization

clustering_params = {
  'number_of_clusters': 16,
  'cluster_centroids_init': CentroidInitialization.LINEAR
}

# Cluster a whole model
clustered_model = cluster_weights(model, **clustering_params)

# Use smaller learning rate for fine-tuning clustered model
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)

clustered_model.compile(
  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
  optimizer=opt,
  metrics=['accuracy'])

clustered_model.summary()
 
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
cluster_reshape (ClusterWeig (None, 28, 28, 1)         0         
_________________________________________________________________
cluster_conv2d (ClusterWeigh (None, 26, 26, 12)        136       
_________________________________________________________________
cluster_max_pooling2d (Clust (None, 13, 13, 12)        0         
_________________________________________________________________
cluster_flatten (ClusterWeig (None, 2028)              0         
_________________________________________________________________
cluster_dense (ClusterWeight (None, 10)                20306     
=================================================================
Total params: 20,442
Trainable params: 54
Non-trainable params: 20,388
_________________________________________________________________

Fine-tune the model and evaluate the accuracy against the baseline

Fine-tune the model with clustering for 1 epoch.

 # Fine-tune model
clustered_model.fit(
  train_images,
  train_labels,
  batch_size=500,
  epochs=1,
  validation_split=0.1)
 
108/108 [==============================] - 2s 16ms/step - loss: 0.0535 - accuracy: 0.9821 - val_loss: 0.0692 - val_accuracy: 0.9803

<tensorflow.python.keras.callbacks.History at 0x7fd18437ee10>

For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.

 _, clustered_model_accuracy = clustered_model.evaluate(
  test_images, test_labels, verbose=0)

print('Baseline test accuracy:', baseline_model_accuracy)
print('Clustered test accuracy:', clustered_model_accuracy)
 
Baseline test accuracy: 0.9805999994277954
Clustered test accuracy: 0.9753000140190125

Create 6x smaller models from clustering

Both strip_clustering and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering.

First, create a compressible model for TensorFlow. Here, strip_clustering removes all variables (e.g. the tf.Variable objects for storing the cluster centroids and indices) that clustering only needs during training and that would otherwise add to the model size during inference.

 final_model = tfmot.clustering.keras.strip_clustering(clustered_model)

_, clustered_keras_file = tempfile.mkstemp('.h5')
print('Saving clustered model to: ', clustered_keras_file)
tf.keras.models.save_model(final_model, clustered_keras_file, 
                           include_optimizer=False)
 
Saving clustered model to:  /tmp/tmpfnmtfvf8.h5
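
As a quick sanity check (an extra step not in the original notebook, using only standard Keras/NumPy calls), you can count the unique kernel values in the stripped model; each clustered kernel should contain at most 16 unique values, one per cluster centroid:

 import numpy as np

for layer in final_model.layers:
  for weights in layer.get_weights():
    if weights.ndim > 1:  # inspect kernels only, skip 1-D bias vectors
      print(layer.name, '- unique kernel values:', len(np.unique(weights)))
 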

Then, create a compressible model for TFLite. You can convert the clustered model to a format that is runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.

 clustered_tflite_file = '/tmp/clustered_mnist.tflite'
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
tflite_clustered_model = converter.convert()
with open(clustered_tflite_file, 'wb') as f:
  f.write(tflite_clustered_model)
print('Saved clustered TFLite model to:', clustered_tflite_file)
 
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: /tmp/tmpe966h_56/assets
Saved clustered TFLite model to: /tmp/clustered_mnist.tflite

Define a helper function to actually compress the models via gzip and measure the zipped size.

 def get_gzipped_model_size(file):
  # It returns the size of the gzipped model in bytes.
  import os
  import zipfile

  _, zipped_file = tempfile.mkstemp('.zip')
  with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
    f.write(file)

  return os.path.getsize(zipped_file)
 

Compare and see that the models are 6x smaller from clustering

 print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered Keras model: %.2f bytes" % (get_gzipped_model_size(clustered_keras_file)))
print("Size of gzipped clustered TFlite model: %.2f bytes" % (get_gzipped_model_size(clustered_tflite_file)))
 
Size of gzipped baseline Keras model: 78076.00 bytes
Size of gzipped clustered Keras model: 13362.00 bytes
Size of gzipped clustered TFlite model: 12982.00 bytes

Create an 8x smaller TFLite model from combining weight clustering and post-training quantization

You can apply post-training quantization to the clustered model for additional benefits.

 converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

_, quantized_and_clustered_tflite_file = tempfile.mkstemp('.tflite')

with open(quantized_and_clustered_tflite_file, 'wb') as f:
  f.write(tflite_quant_model)

print('Saved quantized and clustered TFLite model to:', quantized_and_clustered_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_clustered_tflite_file)))
 
INFO:tensorflow:Assets written to: /tmp/tmpg0gw8r5x/assets

Saved quantized and clustered TFLite model to: /tmp/tmp43crqft1.tflite
Size of gzipped baseline Keras model: 78076.00 bytes
Size of gzipped clustered and quantized TFlite model: 9830.00 bytes

See the persistence of accuracy from TF to TFLite

Define a helper function to evaluate the TFLite model on the test dataset.

 def eval_model(interpreter):
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on every image in the "test" dataset.
  prediction_digits = []
  for i, test_image in enumerate(test_images):
    if i % 1000 == 0:
      print('Evaluated on {n} results so far.'.format(n=i))
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the digit with highest
    # probability.
    output = interpreter.tensor(output_index)
    digit = np.argmax(output()[0])
    prediction_digits.append(digit)

  print('\n')
  # Compare prediction results with ground truth labels to calculate accuracy.
  prediction_digits = np.array(prediction_digits)
  accuracy = (prediction_digits == test_labels).mean()
  return accuracy
 

Evaluate the clustered and quantized model, and then verify that the accuracy from TensorFlow persists in the TFLite backend.

 interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()

test_accuracy = eval_model(interpreter)

print('Clustered and quantized TFLite test_accuracy:', test_accuracy)
print('Clustered TF test accuracy:', clustered_model_accuracy)
 
Evaluated on 0 results so far.
Evaluated on 1000 results so far.
Evaluated on 2000 results so far.
Evaluated on 3000 results so far.
Evaluated on 4000 results so far.
Evaluated on 5000 results so far.
Evaluated on 6000 results so far.
Evaluated on 7000 results so far.
Evaluated on 8000 results so far.
Evaluated on 9000 results so far.


Clustered and quantized TFLite test_accuracy: 0.975
Clustered TF test accuracy: 0.9753000140190125
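
Optionally (an extra check, not part of the original notebook), the same helper can evaluate the clustered-only TFLite model saved earlier:

 interpreter = tf.lite.Interpreter(model_path=clustered_tflite_file)
interpreter.allocate_tensors()

clustered_tflite_accuracy = eval_model(interpreter)
print('Clustered TFLite test accuracy:', clustered_tflite_accuracy)
 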

Conclusion

In this tutorial, you saw how to create clustered models with the TensorFlow Model Optimization Toolkit API. More specifically, you worked through an end-to-end example of creating an 8x smaller model for MNIST with minimal accuracy difference. We encourage you to try this capability, which can be particularly important for deployment in resource-constrained environments.