Pruning in Keras example

Overview

Welcome to an end-to-end example for magnitude-based weight pruning.

Other pages

For an introduction to what pruning is and to determine if you should use it (including what's supported), see the overview page.

To quickly find the APIs you need for your use case (beyond fully pruning a model with 80% sparsity), see the comprehensive guide.

Summary

In this tutorial, you will:

  1. Train a tf.keras model for MNIST from scratch.
  2. Fine-tune the model by applying the pruning API and check the accuracy.
  3. Create 3x smaller TF and TFLite models from pruning.
  4. Create a 10x smaller TFLite model by combining pruning and post-training quantization.
  5. See the persistence of accuracy from TF to TFLite.

Setup

 pip install -q tensorflow-model-optimization
import tempfile
import os

import tensorflow as tf
import numpy as np

from tensorflow import keras

%load_ext tensorboard

Train a model for MNIST without pruning

# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture.
model = keras.Sequential([
  keras.layers.InputLayer(input_shape=(28, 28)),
  keras.layers.Reshape(target_shape=(28, 28, 1)),
  keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
  keras.layers.MaxPooling2D(pool_size=(2, 2)),
  keras.layers.Flatten(),
  keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
  train_images,
  train_labels,
  epochs=4,
  validation_split=0.1,
)
Epoch 1/4
1688/1688 [==============================] - 7s 4ms/step - loss: 0.3422 - accuracy: 0.9004 - val_loss: 0.1760 - val_accuracy: 0.9498
Epoch 2/4
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1813 - accuracy: 0.9457 - val_loss: 0.1176 - val_accuracy: 0.9698
Epoch 3/4
1688/1688 [==============================] - 7s 4ms/step - loss: 0.1220 - accuracy: 0.9648 - val_loss: 0.0864 - val_accuracy: 0.9770
Epoch 4/4
1688/1688 [==============================] - 7s 4ms/step - loss: 0.0874 - accuracy: 0.9740 - val_loss: 0.0763 - val_accuracy: 0.9787

<tensorflow.python.keras.callbacks.History at 0x7f32cbeb9550>

Evaluate the baseline test accuracy and save the model for later usage.

_, baseline_model_accuracy = model.evaluate(
    test_images, test_labels, verbose=0)

print('Baseline test accuracy:', baseline_model_accuracy)

_, keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
print('Saved baseline model to:', keras_file)
Baseline test accuracy: 0.972599983215332
Saved baseline model to: /tmp/tmp6quew9ig.h5

Fine-tune the pre-trained model with pruning

Define the model

You will apply pruning to the whole model and see this in the model summary.

In this example, you start the model with 50% sparsity (50% zeros in the weights) and end with 80% sparsity.

In the comprehensive guide, you can see how to prune only some layers for model accuracy improvements.
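
As an illustration of that per-layer approach, below is a minimal sketch adapted from the clone_model pattern described in the comprehensive guide. It is not part of this tutorial's code path; the names apply_pruning_to_dense and model_dense_pruned are introduced here purely for illustration, and the sketch assumes the baseline model trained above.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Sketch: wrap only Dense layers with pruning, leaving Conv2D untouched.
def apply_pruning_to_dense(layer):
  if isinstance(layer, tf.keras.layers.Dense):
    return tfmot.sparsity.keras.prune_low_magnitude(layer)
  return layer

# `model` is assumed to be the baseline Keras model trained above.
# Note that clone_model rebuilds the architecture; the cloned model still
# needs to be compiled and fine-tuned as in the rest of this tutorial.
model_dense_pruned = tf.keras.models.clone_model(
    model,
    clone_function=apply_pruning_to_dense,
)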

import tensorflow_model_optimization as tfmot

prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude

# Compute end step to finish pruning after 2 epochs.
batch_size = 128
epochs = 2
validation_split = 0.1 # 10% of training set will be used for validation set. 

num_images = train_images.shape[0] * (1 - validation_split)
end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs

# Define model for pruning.
pruning_params = {
      'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
                                                               final_sparsity=0.80,
                                                               begin_step=0,
                                                               end_step=end_step)
}

model_for_pruning = prune_low_magnitude(model, **pruning_params)

# `prune_low_magnitude` requires a recompile.
model_for_pruning.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model_for_pruning.summary()
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/sparsity/keras/pruning_wrapper.py:220: Layer.add_variable (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.add_weight` method instead.
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
prune_low_magnitude_reshape  (None, 28, 28, 1)         1         
_________________________________________________________________
prune_low_magnitude_conv2d ( (None, 26, 26, 12)        230       
_________________________________________________________________
prune_low_magnitude_max_pool (None, 13, 13, 12)        1         
_________________________________________________________________
prune_low_magnitude_flatten  (None, 2028)              1         
_________________________________________________________________
prune_low_magnitude_dense (P (None, 10)                40572     
=================================================================
Total params: 40,805
Trainable params: 20,410
Non-trainable params: 20,395
_________________________________________________________________

Train and evaluate the model against the baseline

Fine-tune with pruning for two epochs.

tfmot.sparsity.keras.UpdatePruningStep is required during training, and tfmot.sparsity.keras.PruningSummaries provides logs for tracking progress and debugging.

logdir = tempfile.mkdtemp()

callbacks = [
  tfmot.sparsity.keras.UpdatePruningStep(),
  tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
]

model_for_pruning.fit(train_images, train_labels,
                  batch_size=batch_size, epochs=epochs, validation_split=validation_split,
                  callbacks=callbacks)
Epoch 1/2
  1/422 [..............................] - ETA: 0s - loss: 0.2689 - accuracy: 0.8984WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/ops/summary_ops_v2.py:1277: stop (from tensorflow.python.eager.profiler) is deprecated and will be removed after 2020-07-01.
Instructions for updating:
use `tf.profiler.experimental.stop` instead.
422/422 [==============================] - 3s 7ms/step - loss: 0.1105 - accuracy: 0.9691 - val_loss: 0.1247 - val_accuracy: 0.9682
Epoch 2/2
422/422 [==============================] - 3s 6ms/step - loss: 0.1197 - accuracy: 0.9667 - val_loss: 0.0969 - val_accuracy: 0.9763

<tensorflow.python.keras.callbacks.History at 0x7f32422a9550>

For this example, there is minimal loss in test accuracy after pruning, compared to the baseline.

_, model_for_pruning_accuracy = model_for_pruning.evaluate(
   test_images, test_labels, verbose=0)

print('Baseline test accuracy:', baseline_model_accuracy) 
print('Pruned test accuracy:', model_for_pruning_accuracy)
Baseline test accuracy: 0.972599983215332
Pruned test accuracy: 0.9689000248908997

The logs show the progression of sparsity on a per-layer basis.

%tensorboard --logdir={logdir}

For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.

Create 3x smaller models from pruning

Both tfmot.sparsity.keras.strip_pruning and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of pruning.

  • strip_pruning is necessary since it removes every tf.Variable that pruning only needs during training, which would otherwise add to model size during inference
  • Applying a standard compression algorithm is necessary since the serialized weight matrices are the same size as they were before pruning. However, pruning makes most of the weights zeros, which is added redundancy that algorithms can utilize to further compress the model.

First, create a compressible model for TensorFlow.

model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)

_, pruned_keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model_for_export, pruned_keras_file, include_optimizer=False)
print('Saved pruned Keras model to:', pruned_keras_file)
Saved pruned Keras model to: /tmp/tmpu92n0irx.h5
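
As a quick sanity check (not part of the original tutorial), you can inspect the stripped model's weights directly and confirm that most values are now exactly zero, close to the 80% final sparsity:

import numpy as np

# Print the fraction of exactly-zero values in each weight tensor of the
# stripped model; the prunable kernels should be roughly 80% zeros.
for layer in model_for_export.layers:
  for weight, value in zip(layer.weights, layer.get_weights()):
    print(weight.name, '-- zero fraction:', np.mean(value == 0))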

Then, create a compressible model for TFLite.

converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
pruned_tflite_model = converter.convert()

_, pruned_tflite_file = tempfile.mkstemp('.tflite')

with open(pruned_tflite_file, 'wb') as f:
  f.write(pruned_tflite_model)

print('Saved pruned TFLite model to:', pruned_tflite_file)
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: /tmp/tmpunez1uhy/assets
Saved pruned TFLite model to: /tmp/tmp9oa2swr6.tflite

Define a helper function to actually compress the models via gzip and measure the zipped size.

def get_gzipped_model_size(file):
  # Returns size of gzipped model, in bytes.
  import os
  import zipfile

  _, zipped_file = tempfile.mkstemp('.zip')
  with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
    f.write(file)

  return os.path.getsize(zipped_file)

Compare and see that the models are 3x smaller from pruning.

print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned Keras model: %.2f bytes" % (get_gzipped_model_size(pruned_keras_file)))
print("Size of gzipped pruned TFlite model: %.2f bytes" % (get_gzipped_model_size(pruned_tflite_file)))
Size of gzipped baseline Keras model: 78048.00 bytes
Size of gzipped pruned Keras model: 25680.00 bytes
Size of gzipped pruned TFlite model: 24946.00 bytes

Create a 10x smaller model from combining pruning and quantization

You can apply post-training quantization to the pruned model for additional benefits.

converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_and_pruned_tflite_model = converter.convert()

_, quantized_and_pruned_tflite_file = tempfile.mkstemp('.tflite')

with open(quantized_and_pruned_tflite_file, 'wb') as f:
  f.write(quantized_and_pruned_tflite_model)

print('Saved quantized and pruned TFLite model to:', quantized_and_pruned_tflite_file)

print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_pruned_tflite_file)))
INFO:tensorflow:Assets written to: /tmp/tmpf68nyuwr/assets

Saved quantized and pruned TFLite model to: /tmp/tmp85dhxupl.tflite
Size of gzipped baseline Keras model: 78048.00 bytes
Size of gzipped pruned and quantized TFlite model: 7663.00 bytes
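
The conversion above uses dynamic-range quantization. If your deployment target supports it, you could go further with full integer post-training quantization by also providing a representative dataset. The following is a hedged sketch beyond the original tutorial; representative_data_gen and int8_pruned_tflite_model are names introduced here for illustration, and the sketch reuses model_for_export and train_images from the steps above.

# Sketch: full integer post-training quantization of the pruned model.
def representative_data_gen():
  for image in train_images[:100]:
    # The converter expects a list of input arrays with a batch dimension.
    yield [np.expand_dims(image, axis=0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
int8_pruned_tflite_model = converter.convert()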

See persistence of accuracy from TF to TFLite

Define a helper function to evaluate the TFLite model on the test dataset.

import numpy as np

def evaluate_model(interpreter):
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on every image in the "test" dataset.
  prediction_digits = []
  for i, test_image in enumerate(test_images):
    if i % 1000 == 0:
      print('Evaluated on {n} results so far.'.format(n=i))
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the digit with highest
    # probability.
    output = interpreter.tensor(output_index)
    digit = np.argmax(output()[0])
    prediction_digits.append(digit)

  print('\n')
  # Compare prediction results with ground truth labels to calculate accuracy.
  prediction_digits = np.array(prediction_digits)
  accuracy = (prediction_digits == test_labels).mean()
  return accuracy

You evaluate the pruned and quantized model and see that the accuracy from TensorFlow persists to the TFLite backend.

interpreter = tf.lite.Interpreter(model_content=quantized_and_pruned_tflite_model)
interpreter.allocate_tensors()

test_accuracy = evaluate_model(interpreter)

print('Pruned and quantized TFLite test_accuracy:', test_accuracy)
print('Pruned TF test accuracy:', model_for_pruning_accuracy)
Evaluated on 0 results so far.
Evaluated on 1000 results so far.
Evaluated on 2000 results so far.
Evaluated on 3000 results so far.
Evaluated on 4000 results so far.
Evaluated on 5000 results so far.
Evaluated on 6000 results so far.
Evaluated on 7000 results so far.
Evaluated on 8000 results so far.
Evaluated on 9000 results so far.


Pruned and quantized TFLite test_accuracy: 0.9692
Pruned TF test accuracy: 0.9689000248908997

Conclusion

In this tutorial, you saw how to create sparse models with the TensorFlow Model Optimization Toolkit API for both TensorFlow and TFLite. You then combined pruning with post-training quantization for additional benefits.

You created a 10x smaller model for MNIST, with minimal accuracy difference.

We encourage you to try this new capability, which can be particularly important for deployment in resource-constrained environments.