
Post-training float16 quantization


Overview

TensorFlow Lite now supports converting weights to 16-bit floating point values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 2x reduction in model size. Some hardware, like GPUs, can compute natively in this reduced-precision arithmetic, realizing a speedup over traditional floating point execution. The TensorFlow Lite GPU delegate can be configured to run this way. However, a model converted to float16 weights can still run on the CPU without additional modification: the float16 weights are upsampled to float32 prior to the first inference. This permits a significant reduction in model size in exchange for a minimal impact on latency and accuracy.

In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a TensorFlow Lite flatbuffer with float16 quantization. Finally, you check the accuracy of the converted model and compare it to the original float32 model.

Build an MNIST model

Setup

 import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)

import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
 
 tf.float16
 
tf.float16

Train and export the model

 # Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture
model = keras.Sequential([
  keras.layers.InputLayer(input_shape=(28, 28)),
  keras.layers.Reshape(target_shape=(28, 28, 1)),
  keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
  keras.layers.MaxPooling2D(pool_size=(2, 2)),
  keras.layers.Flatten(),
  keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(
  train_images,
  train_labels,
  epochs=1,
  validation_data=(test_images, test_labels)
)
 
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
1875/1875 [==============================] - 12s 6ms/step - loss: 0.2864 - accuracy: 0.9207 - val_loss: 0.1467 - val_accuracy: 0.9560

<tensorflow.python.keras.callbacks.History at 0x7fcd75df46a0>

For this example, you trained the model for just a single epoch, so it only trains to roughly 96% accuracy.

Convert to a TensorFlow Lite model

Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.

Now load the model using the TFLiteConverter:

 converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
 

Write it out to a .tflite file:

 tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
 
 tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
 
84452

To instead quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is a supported type on the target platform:

 converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
 

Finally, convert the model as usual. Note that, by default, the converted model still uses float input and outputs for invocation convenience.

 tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir/"mnist_model_quant_f16.tflite"
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
 
44272

Note how the resulting file is approximately 1/2 the size.

ls -lh {tflite_models_dir}
total 128K
-rw-rw-r-- 1 colaboratory-playground 50844828 44K Jun 23 06:04 mnist_model_quant_f16.tflite
-rw-rw-r-- 1 colaboratory-playground 50844828 83K Jun 23 06:04 mnist_model.tflite
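As a quick, optional sanity check on that ~1/2 figure, you can compare the on-disk sizes of the two files written above. This only uses the tflite_model_file and tflite_model_fp16_file paths already defined:

 # Optional: compare the on-disk sizes of the float32 and float16 models.
fp32_size = tflite_model_file.stat().st_size
fp16_size = tflite_model_fp16_file.stat().st_size
print(fp16_size / fp32_size)  # roughly 0.5, i.e. about half the size
 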

Run the TensorFlow Lite models

Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.

Load the models into the interpreters

 interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
 
 interpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file))
interpreter_fp16.allocate_tensors()
 
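As mentioned earlier, the float16 quantized model still uses float32 inputs and outputs by default; only the weights were stored as float16. If you want, you can confirm this by inspecting the tensor details reported by both interpreters:

 # Optional: both models should report float32 input and output tensors.
print(interpreter.get_input_details()[0]['dtype'],
      interpreter.get_output_details()[0]['dtype'])
print(interpreter_fp16.get_input_details()[0]['dtype'],
      interpreter_fp16.get_output_details()[0]['dtype'])
 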

Test the models on one image

 test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)

input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
 
 import matplotlib.pylab as plt

plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
                              predict=str(np.argmax(predictions[0]))))
plt.grid(False)
 

(Output image: the test digit with its true and predicted labels.)

 test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)

input_index = interpreter_fp16.get_input_details()[0]["index"]
output_index = interpreter_fp16.get_output_details()[0]["index"]

interpreter_fp16.set_tensor(input_index, test_image)
interpreter_fp16.invoke()
predictions = interpreter_fp16.get_tensor(output_index)
 
 plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
                              predict=str(np.argmax(predictions[0]))))
plt.grid(False)
 

(Output image: the test digit with its true and predicted labels.)

Evaluate the models

 # A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on every image in the "test" dataset.
  prediction_digits = []
  for test_image in test_images:
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the digit with highest
    # probability.
    output = interpreter.tensor(output_index)
    digit = np.argmax(output()[0])
    prediction_digits.append(digit)

  # Compare prediction results with ground truth labels to calculate accuracy.
  accurate_count = 0
  for index in range(len(prediction_digits)):
    if prediction_digits[index] == test_labels[index]:
      accurate_count += 1
  accuracy = accurate_count * 1.0 / len(prediction_digits)

  return accuracy
 
 print(evaluate_model(interpreter))
 
0.956

Repeat the evaluation on the float16 quantized model to obtain:

 # NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite
# doesn't have super optimized server CPU kernels. For this reason this may be
# slower than the above float interpreter. But for mobile CPUs, considerable
# speedup can be observed.
print(evaluate_model(interpreter_fp16))
 
0.956

In this example, you have quantized the model to float16 with no difference in accuracy.

It's also possible to evaluate the fp16 quantized model on the GPU. To perform all arithmetic with the reduced precision values, be sure to create the TfLiteGPUDelegateOptions struct in your app and set precision_loss_allowed to 1, like this:

 //Prepare GPU delegate.
const TfLiteGpuDelegateOptions options = {
  .metadata = NULL,
  .compile_options = {
    .precision_loss_allowed = 1,  // FP16
    .preferred_gl_object_type = TFLITE_GL_OBJECT_TYPE_FASTEST,
    .dynamic_batch_enabled = 0,   // Not fully functional yet
  },
};
 

Detailed documentation on the TFLite GPU delegate and how to use it in your application can be found here.
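For a rough idea of what this looks like from Python, the sketch below loads a prebuilt GPU delegate library with tf.lite.experimental.load_delegate and attaches it to the float16 model. The library filename and its availability are assumptions; they depend on your platform and on how the delegate was built:

 # Sketch only: the delegate library name below is a placeholder and must be
# replaced with the GPU delegate binary built for your platform.
gpu_delegate = tf.lite.experimental.load_delegate('libtensorflowlite_gpu_delegate.so')

interpreter_gpu = tf.lite.Interpreter(
    model_path=str(tflite_model_fp16_file),
    experimental_delegates=[gpu_delegate])
interpreter_gpu.allocate_tensors()
 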