
Quantization aware training comprehensive guide


Welcome to the comprehensive guide for Keras quantization aware training.

This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the API docs.

The following use cases are covered:

  • Deploy a model with 8-bit quantization with these steps.
    • Define a quantization aware model.
    • For Keras HDF5 models only, use special checkpointing and deserialization logic. Training is otherwise standard.
    • Create a quantized model from the quantization aware one.
  • Experiment with quantization.
    • Anything for experimentation has no supported path to deployment.
    • Custom Keras layers fall under experimentation.

Setup

Run this section to set everything up for the examples below; you can skip reading it in detail.

! pip uninstall -y tensorflow
! pip install -q tf-nightly
! pip install -q tensorflow-model-optimization

import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot

import tempfile

input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)

def setup_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(20, input_shape=input_shape),
      tf.keras.layers.Flatten()
  ])
  return model

def setup_pretrained_weights():
  model = setup_model()

  model.compile(
      loss=tf.keras.losses.categorical_crossentropy,
      optimizer='adam',
      metrics=['accuracy']
  )

  model.fit(x_train, y_train)

  _, pretrained_weights = tempfile.mkstemp('.tf')

  model.save_weights(pretrained_weights)

  return pretrained_weights

def setup_pretrained_model():
  model = setup_model()
  pretrained_weights = setup_pretrained_weights()
  model.load_weights(pretrained_weights)
  return model

setup_model()
pretrained_weights = setup_pretrained_weights()

Define quantization aware model

Defining a model in the following ways makes it deployable to the backends listed on the overview page. By default, 8-bit quantization is used.

Quantize whole model

Your use case:

  • Subclassed models are not supported.

Tips for better model accuracy:

  • Try "Quantize some layers" to skip quantizing the layers that reduce accuracy the most.
  • It's generally better to fine-tune with quantization aware training, as opposed to training from scratch.

To make the whole model quantization aware, apply tfmot.quantization.keras.quantize_model to the model.

base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy

quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
quant_aware_model.summary()
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 quantize_layer (QuantizeLay  (None, 20)               3         
 er)                                                             
                                                                 
 quant_dense_2 (QuantizeWrap  (None, 20)               425       
 perV2)                                                          
                                                                 
 quant_flatten_2 (QuantizeWr  (None, 20)               1         
 apperV2)                                                        
                                                                 
=================================================================
Total params: 429
Trainable params: 420
Non-trainable params: 9
_________________________________________________________________
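
The quantization aware model can then be fine-tuned like any Keras model. A minimal sketch using the toy data from the Setup section (in practice, you would fine-tune on your real training set for a few epochs):

quant_aware_model.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer='adam',
    metrics=['accuracy']
)

# Quantization aware training is typically a short fine-tuning pass
# on top of pretrained float weights.
quant_aware_model.fit(x_train, y_train, epochs=1)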

Quantize some layers

Quantizing a model can have a negative effect on accuracy. You can selectively quantize layers of a model to explore the trade-off between accuracy, speed, and model size.

Your use case:

  • To deploy to a backend that only works well with fully quantized models (e.g. EdgeTPU v1, most DSPs), try "Quantize whole model".

Tips for better model accuracy:

  • It's generally better to fine-tune with quantization aware training, as opposed to training from scratch.
  • Try quantizing the later layers instead of the first layers.
  • Avoid quantizing critical layers (e.g. attention mechanism).

In the example below, quantize only the Dense layers.

# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy

# Helper function uses `quantize_annotate_layer` to annotate that only the 
# Dense layers should be quantized.
def apply_quantization_to_dense(layer):
  if isinstance(layer, tf.keras.layers.Dense):
    return tfmot.quantization.keras.quantize_annotate_layer(layer)
  return layer

# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense` 
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_quantization_to_dense,
)

# Now that the Dense layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 quantize_layer_1 (QuantizeL  (None, 20)               3         
 ayer)                                                           
                                                                 
 quant_dense_3 (QuantizeWrap  (None, 20)               425       
 perV2)                                                          
                                                                 
 flatten_3 (Flatten)         (None, 20)                0         
                                                                 
=================================================================
Total params: 428
Trainable params: 420
Non-trainable params: 8
_________________________________________________________________

While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its name property, and look for that name in the clone_function.

print(base_model.layers[0].name)
dense_3
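
For example, here is a minimal sketch that annotates only the layer named 'dense_3' (the name printed above; layer names vary between runs):

# Hypothetical helper: annotate a single layer, selected by its `name` property.
def apply_quantization_by_name(layer):
  if layer.name == 'dense_3':
    return tfmot.quantization.keras.quantize_annotate_layer(layer)
  return layer

annotated_model = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_quantization_by_name,
)
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)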

More readable but potentially lower model accuracy

This is not compatible with fine-tuning with quantization aware training, which is why it may be less accurate than the above examples.

Functional example

# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
i = tf.keras.Input(shape=(20,))
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
annotated_model = tf.keras.Model(inputs=i, outputs=o)

# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)

# For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the
# quantized model can take in float inputs instead of only uint8.
quant_aware_model.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 20)]              0         
                                                                 
 quantize_layer_2 (QuantizeL  (None, 20)               3         
 ayer)                                                           
                                                                 
 quant_dense_4 (QuantizeWrap  (None, 10)               215       
 perV2)                                                          
                                                                 
 flatten_4 (Flatten)         (None, 10)                0         
                                                                 
=================================================================
Total params: 218
Trainable params: 210
Non-trainable params: 8
_________________________________________________________________

Sequential example

# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
annotated_model = tf.keras.Sequential([
  tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)),
  tf.keras.layers.Flatten()
])

# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)

quant_aware_model.summary()
Model: "sequential_4"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 quantize_layer_3 (QuantizeL  (None, 20)               3         
 ayer)                                                           
                                                                 
 quant_dense_5 (QuantizeWrap  (None, 20)               425       
 perV2)                                                          
                                                                 
 flatten_5 (Flatten)         (None, 20)                0         
                                                                 
=================================================================
Total params: 428
Trainable params: 420
Non-trainable params: 8
_________________________________________________________________

Checkpoint and deserialize

Your use case: this code is only needed for the HDF5 model format (not HDF5 weights or other formats).

# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)

# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
quant_aware_model.save(keras_model_file)

# `quantize_scope` is needed for deserializing HDF5 models.
with tfmot.quantization.keras.quantize_scope():
  loaded_model = tf.keras.models.load_model(keras_model_file)

loaded_model.summary()
Model: "sequential_5"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 quantize_layer_4 (QuantizeL  (None, 20)               3         
 ayer)                                                           
                                                                 
 quant_dense_6 (QuantizeWrap  (None, 20)               425       
 perV2)                                                          
                                                                 
 quant_flatten_6 (QuantizeWr  (None, 20)               1         
 apperV2)                                                        
                                                                 
=================================================================
Total params: 429
Trainable params: 420
Non-trainable params: 9
_________________________________________________________________
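
For HDF5 weights (as opposed to the full HDF5 model), no quantize_scope is needed. A minimal sketch, assuming the weights are restored into an identically constructed quantization aware model:

# Save only the weights (HDF5 weights format, not the full model).
_, keras_weights_file = tempfile.mkstemp('.h5')
quant_aware_model.save_weights(keras_weights_file)

# Rebuild the same quantization aware architecture, then load the weights.
same_arch_model = tfmot.quantization.keras.quantize_model(setup_model())
same_arch_model.load_weights(keras_weights_file)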

Create and deploy quantized model

In general, reference the documentation for the deployment backend that you will use.

This is an example for the TFLite backend.

base_model = setup_pretrained_model()
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)

# Typically you train the model here.

converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

quantized_tflite_model = converter.convert()
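
You can then run the quantized model with the TFLite interpreter. A minimal sketch, assuming the in-memory flatbuffer produced by the conversion above:

interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# The converter keeps float inputs and outputs by default,
# so a float32 batch can be fed directly.
interpreter.set_tensor(input_details['index'], x_train)
interpreter.invoke()
print(interpreter.get_tensor(output_details['index']))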

Experiment with quantization

Your use case: using the following APIs means that there is no supported path to deployment. For instance, TFLite conversion and kernel implementations only support 8-bit quantization. The features are also experimental and not subject to backward compatibility.

Setup: DefaultDenseQuantizeConfig

Experimenting requires using tfmot.quantization.keras.QuantizeConfig, which describes how to quantize the weights, activations, and outputs of a layer.

Below is an example that defines the same QuantizeConfig used for the Dense layer in the API defaults.

During forward propagation in this example, the LastValueQuantizer returned in get_weights_and_quantizers is called with layer.kernel as the input, producing an output. That output replaces layer.kernel in the original forward propagation of the Dense layer, via the logic defined in set_quantize_weights. The same idea applies to the activations and outputs.

LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer

class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # Configure how to quantize weights.
    def get_weights_and_quantizers(self, layer):
      return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]

    # Configure how to quantize activations.
    def get_activations_and_quantizers(self, layer):
      return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]

    def set_quantize_weights(self, layer, quantize_weights):
      # Add this line for each item returned in `get_weights_and_quantizers`,
      # in the same order.
      layer.kernel = quantize_weights[0]

    def set_quantize_activations(self, layer, quantize_activations):
      # Add this line for each item returned in `get_activations_and_quantizers`,
      # in the same order.
      layer.activation = quantize_activations[0]

    # Configure how to quantize outputs (may be equivalent to activations).
    def get_output_quantizers(self, layer):
      return []

    def get_config(self):
      return {}

Quantize custom Keras layer

This example uses the DefaultDenseQuantizeConfig to quantize the CustomLayer.

Applying the configuration is the same across the "Experiment with quantization" use cases.

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

class CustomLayer(tf.keras.layers.Dense):
  pass

model = quantize_annotate_model(tf.keras.Sequential([
   quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
   tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
  {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
   'CustomLayer': CustomLayer}):
  # Use `quantize_apply` to actually make the model quantization aware.
  quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
Model: "sequential_8"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 quantize_layer_6 (QuantizeL  (None, 20)               3         
 ayer)                                                           
                                                                 
 quant_custom_layer (Quantiz  (None, 20)               425       
 eWrapperV2)                                                     
                                                                 
 quant_flatten_9 (QuantizeWr  (None, 20)               1         
 apperV2)                                                        
                                                                 
=================================================================
Total params: 429
Trainable params: 420
Non-trainable params: 9
_________________________________________________________________

Modify quantization parameters

Common mistake: quantizing the bias to fewer than 32 bits usually harms model accuracy too much.

This example modifies the Dense layer to use 4 bits for its weights instead of the default 8 bits. The rest of the model continues to use API defaults.

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
    # Configure weights to quantize with 4-bit instead of 8-bits.
    def get_weights_and_quantizers(self, layer):
      return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]

Applying the configuration is the same across the "Experiment with quantization" use cases.

model = quantize_annotate_model(tf.keras.Sequential([
   # Pass in modified `QuantizeConfig` to modify this Dense layer.
   quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
   tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
  {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
  # Use `quantize_apply` to actually make the model quantization aware.
  quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
Model: "sequential_9"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 quantize_layer_7 (QuantizeL  (None, 20)               3         
 ayer)                                                           
                                                                 
 quant_dense_9 (QuantizeWrap  (None, 20)               425       
 perV2)                                                          
                                                                 
 quant_flatten_10 (QuantizeW  (None, 20)               1         
 rapperV2)                                                       
                                                                 
=================================================================
Total params: 429
Trainable params: 420
Non-trainable params: 9
_________________________________________________________________

Modify parts of the layer to quantize

This example modifies the Dense layer to skip quantizing the activation. The rest of the model continues to use API defaults.

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
    def get_activations_and_quantizers(self, layer):
      # Skip quantizing activations.
      return []

    def set_quantize_activations(self, layer, quantize_activations):
      # Empty since `get_activations_and_quantizers` returns
      # an empty list.
      return

Applying the configuration is the same across the "Experiment with quantization" use cases.

model = quantize_annotate_model(tf.keras.Sequential([
   # Pass in modified `QuantizeConfig` to modify this Dense layer.
   quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
   tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
  {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
  # Use `quantize_apply` to actually make the model quantization aware.
  quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
Model: "sequential_10"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 quantize_layer_8 (QuantizeL  (None, 20)               3         
 ayer)                                                           
                                                                 
 quant_dense_10 (QuantizeWra  (None, 20)               423       
 pperV2)                                                         
                                                                 
 quant_flatten_11 (QuantizeW  (None, 20)               1         
 rapperV2)                                                       
                                                                 
=================================================================
Total params: 427
Trainable params: 420
Non-trainable params: 7
_________________________________________________________________

Use custom quantization algorithm

The tfmot.quantization.keras.quantizers.Quantizer class is a callable that can apply any algorithm to its inputs.

In this example, the inputs are the weights, and the math in the FixedRangeQuantizer __call__ function is applied to the weights. Instead of the original weight values, the output of the FixedRangeQuantizer is now passed to whatever would have used the weights.

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer):
  """Quantizer which forces outputs to be between -1 and 1."""

  def build(self, tensor_shape, name, layer):
    # Not needed. No new TensorFlow variables needed.
    return {}

  def __call__(self, inputs, training, weights, **kwargs):
    return tf.keras.backend.clip(inputs, -1.0, 1.0)

  def get_config(self):
    # Not needed. No __init__ parameters to serialize.
    return {}


class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
    # Configure weights to quantize with the custom `FixedRangeQuantizer`.
    def get_weights_and_quantizers(self, layer):
      # Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer.
      return [(layer.kernel, FixedRangeQuantizer())]

Applying the configuration is the same across the "Experiment with quantization" use cases.

model = quantize_annotate_model(tf.keras.Sequential([
   # Pass in modified `QuantizeConfig` to modify this `Dense` layer.
   quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
   tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
  {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
  # Use `quantize_apply` to actually make the model quantization aware.
  quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
Model: "sequential_11"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 quantize_layer_9 (QuantizeL  (None, 20)               3         
 ayer)                                                           
                                                                 
 quant_dense_11 (QuantizeWra  (None, 20)               423       
 pperV2)                                                         
                                                                 
 quant_flatten_12 (QuantizeW  (None, 20)               1         
 rapperV2)                                                       
                                                                 
=================================================================
Total params: 427
Trainable params: 420
Non-trainable params: 7
_________________________________________________________________
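
As a quick sanity check, the quantizer can also be called directly; a minimal sketch showing that values outside [-1, 1] are clipped:

quantizer = FixedRangeQuantizer()
# `training` and `weights` are part of the Quantizer `__call__` signature;
# this particular quantizer ignores both.
print(quantizer(tf.constant([-2.0, 0.5, 3.0]), training=True, weights={}))
# Expected output: approximately [-1.0, 0.5, 1.0]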