Text classification with TensorFlow Lite Model Maker


The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.

This notebook shows an end-to-end example that uses the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews.

Prerequisites

Install the required packages

To run this example, install the required packages, including the Model Maker package from the GitHub repo.

pip install -q tflite-model-maker

Import the required packages.

import numpy as np
import os

from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.text_classifier import AverageWordVecSpec
from tflite_model_maker.text_classifier import DataLoader

import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_addons/utils/ensure_tf_install.py:67: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.3.0 and strictly below 2.6.0 (nightly versions are not supported). 
 The versions of TensorFlow you are currently using is 2.6.0 and is not supported. 
Some things might work, some things might not.
If you were to encounter a bug, do not file an issue.
If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version. 
You can find the compatibility matrix in TensorFlow Addon's readme:
https://github.com/tensorflow/addons
  UserWarning,
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/numba/core/errors.py:154: UserWarning: Insufficiently recent colorama version found. Numba requires colorama >= 0.3.9
  warnings.warn(msg)

Download the sample training data.

In this tutorial, we will use SST-2 (Stanford Sentiment Treebank), which is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes: positive and negative movie reviews.

data_dir = tf.keras.utils.get_file(
      fname='SST-2.zip',
      origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',
      extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Downloading data from https://dl.fbaipublicfiles.com/glue/data/SST-2.zip
7446528/7439277 [==============================] - 2s 0us/step
7454720/7439277 [==============================] - 2s 0us/step

The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab character (\t) as its delimiter, instead of the comma (,) used in the CSV format.

Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.

sentence | label
hide new secretions from the parental units | 0
contains no wit, only labored gags | 0
that loves its characters and communicates something rather beautiful about human nature | 1
remains utterly satisfied to remain the same throughout | 0
on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0

Next, we will load the dataset into a pandas DataFrame, change the current label names (0 and 1) to more human-readable ones (negative and positive), and use them for model training.

import pandas as pd

def replace_label(original_file, new_file):
  # Load the original file to pandas. We need to specify the separator as
  # '\t' as the training data is stored in TSV format
  df = pd.read_csv(original_file, sep='\t')

  # Define how we want to change the label name
  label_map = {0: 'negative', 1: 'positive'}

  # Execute the label change
  df.replace({'label': label_map}, inplace=True)

  # Write the updated dataset to a new file
  df.to_csv(new_file)

# Replace the label name for both the training and test dataset. Then write the
# updated CSV dataset to the current folder.
replace_label(os.path.join(data_dir, 'train.tsv'), 'train.csv')
replace_label(os.path.join(data_dir, 'dev.tsv'), 'dev.csv')

Quickstart

There are five steps to train a text classification model:

Step 1. Choose a text classification model architecture.

Here we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.

spec = model_spec.get('average_word_vec')

Model Maker also supports other model architectures such as the BERT model. If you are interested in learning about other architectures, see the Choose a model architecture for Text Classifier section below.

Step 2. Load the training and test data, then preprocess them according to a specific model_spec.

Model Maker can take input data in CSV format. We will load the training and test datasets with the human-readable label names that were created earlier.

Each model architecture requires input data to be processed in a particular way. DataLoader reads the requirement from model_spec and automatically runs the necessary preprocessing.

train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=True)
test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=False)
2021-08-12 12:42:10.766466: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-08-12 12:42:10.778072: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-12 12:42:11.374939: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14648 MB memory:  -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0

Step 3. Train the TensorFlow model with the training data.

The average word embedding model uses batch_size = 32 by default. Therefore, you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.

model = text_classifier.create(train_data, model_spec=spec, epochs=10)
Epoch 1/10
2104/2104 [==============================] - 7s 3ms/step - loss: 0.6791 - accuracy: 0.5674
Epoch 2/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.5622 - accuracy: 0.7169
Epoch 3/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4407 - accuracy: 0.7983
Epoch 4/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3911 - accuracy: 0.8284
Epoch 5/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3655 - accuracy: 0.8427
Epoch 6/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3520 - accuracy: 0.8516
Epoch 7/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3398 - accuracy: 0.8584
Epoch 8/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3339 - accuracy: 0.8631
Epoch 9/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3276 - accuracy: 0.8649
Epoch 10/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3224 - accuracy: 0.8673

Step 4. Evaluate the model with the test data.

After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.

As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.

loss, acc = model.evaluate(test_data)
28/28 [==============================] - 0s 2ms/step - loss: 0.5172 - accuracy: 0.8337

Step 5. Export as a TensorFlow Lite model.

Let's export the text classification model that we have trained into the TensorFlow Lite format. We will specify which folder to export the model to. By default, the float TFLite model is exported for the average word embedding model architecture.

model.export(export_dir='average_word_vec')
2021-08-12 12:43:10.533295: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2021-08-12 12:43:10.973483: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-08-12 12:43:10.973851: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2021-08-12 12:43:10.973955: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2021-08-12 12:43:10.976253: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14648 MB memory:  -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0
2021-08-12 12:43:10.977511: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1137] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.007ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.

2021-08-12 12:43:11.008758: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-08-12 12:43:11.008802: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
2021-08-12 12:43:11.012064: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:210] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2021-08-12 12:43:11.027591: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1899] Estimated count of arithmetic ops: 722  ops, equivalently 361  MACs

You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder that we specified in the export_dir parameter above, right-click the model.tflite file and choose Download to download it to your local computer.
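You can also trigger the download programmatically. A minimal sketch, assuming a Colab runtime (the google.colab module is only available there):

from google.colab import files

# Prompts the browser to download the exported TFLite model.
files.download('average_word_vec/model.tflite')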

This model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.

See the TFLite Text Classification sample app for more details on how the model is used in a working app.
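Before wiring the model into a mobile app, you can sanity-check it from Python with the Task Library's NLClassifier. This is a hedged sketch, assuming the tflite-support package is installed (pip install tflite-support); the attribute names follow its 0.4.x task API and may differ in other versions.

from tflite_support.task import text

# The metadata bundled in the model tells NLClassifier how to tokenize input.
classifier = text.NLClassifier.create_from_file('average_word_vec/model.tflite')

# Classify a sample review and print each label with its score.
result = classifier.classify('a deliciously charming movie')
for category in result.classifications[0].categories:
    print(category.category_name, category.score)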

Note 1: Android Studio Model Binding does not support text classification yet, so please use the TensorFlow Lite Task Library.

Note 2: There is a model.json file in the same folder as the TFLite model. It contains the JSON representation of the metadata bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to preprocess/postprocess data for the model. You don't need to download the model.json file, as it is only for informational purposes and its content is already inside the TFLite file.
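Since the same metadata is packed inside the TFLite file, you can inspect it directly from Python. A minimal sketch, again assuming the tflite-support package:

from tflite_support import metadata

# Read back the metadata and associated files packed into the model.
displayer = metadata.MetadataDisplayer.with_model_file('average_word_vec/model.tflite')
print(displayer.get_metadata_json())                # same content as model.json
print(displayer.get_packed_associated_file_list())  # e.g. label and vocab files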

Note 3: If you train a text classification model using the MobileBERT or BERT-Base architecture, you will need to use the BertNLClassifier API instead to integrate the trained model into a mobile app.

The following sections walk through the example step by step to show more detail.

Choose a model architecture for Text Classifier

Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings, and BERT-Base models.

Supported Model | model_spec name | Model description | Model size
Averaging Word Embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. | <1 MB
MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25 MB with quantization, 100 MB without quantization
BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300 MB

In the quickstart, we used the average word embedding model. Let's switch to MobileBERT to train a model with higher accuracy.

mb_spec = model_spec.get('mobilebert_classifier')

Load training data

You can load your own dataset to work through this tutorial. Upload your dataset using the left sidebar in Colab.


If you prefer not to upload your dataset to the cloud, you can also run the library locally by following the guide.
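For reference, a minimal sketch of uploading a file programmatically in Colab instead of using the sidebar (google.colab is only available in a Colab runtime):

from google.colab import files

# Opens a browser file picker; returns a dict of filename -> file contents.
uploaded = files.upload()
for name in uploaded:
    print('Uploaded:', name)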

To keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the DataLoader.from_csv method to load the data.

Please be aware that since we have changed the model architecture, we will need to reload the training and test datasets to apply the new preprocessing logic.

train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=mb_spec,
      is_training=True)
test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=mb_spec,
      is_training=False)

The Model Maker library also supports the from_folder() method to load data. It assumes that text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which subfolders to use, as in the sketch below.
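A minimal sketch of from_folder(); the folder layout and the 'reviews', 'pos', and 'neg' names are hypothetical:

# Assumed layout:
#   reviews/pos/*.txt   (positive reviews, one per file)
#   reviews/neg/*.txt   (negative reviews, one per file)
folder_data = DataLoader.from_folder(
      'reviews',
      model_spec=mb_spec,
      class_labels=['pos', 'neg'],  # subfolder names used as class labels
      is_training=True)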

Train a TensorFlow model

Train a text classification model using the training data.

model = text_classifier.create(train_data, model_spec=mb_spec, epochs=3)
Epoch 1/3
1403/1403 [==============================] - 326s 195ms/step - loss: 0.3642 - test_accuracy: 0.8503
Epoch 2/3
1403/1403 [==============================] - 265s 189ms/step - loss: 0.1269 - test_accuracy: 0.9546
Epoch 3/3
1403/1403 [==============================] - 262s 187ms/step - loss: 0.0746 - test_accuracy: 0.9767

Examine the detailed model structure.

model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_word_ids (InputLayer)     [(None, 128)]        0                                            
__________________________________________________________________________________________________
input_mask (InputLayer)         [(None, 128)]        0                                            
__________________________________________________________________________________________________
input_type_ids (InputLayer)     [(None, 128)]        0                                            
__________________________________________________________________________________________________
hub_keras_layer_v1v2 (HubKerasL (None, 512)          24581888    input_word_ids[0][0]             
                                                                 input_mask[0][0]                 
                                                                 input_type_ids[0][0]             
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 512)          0           hub_keras_layer_v1v2[0][0]       
__________________________________________________________________________________________________
output (Dense)                  (None, 2)            1026        dropout_1[0][0]                  
==================================================================================================
Total params: 24,582,914
Trainable params: 24,582,914
Non-trainable params: 0
__________________________________________________________________________________________________

Evaluate the model

Evaluate the model we just trained using the test data and measure the loss and accuracy value.

loss, acc = model.evaluate(test_data)
28/28 [==============================] - 8s 50ms/step - loss: 0.3570 - test_accuracy: 0.9060

Export as a TensorFlow Lite model

Convert the trained model into the TensorFlow Lite model format with metadata so that you can later use it in an on-device ML application. The label file and the vocab file are embedded in the metadata. The default TFLite filename is model.tflite.

In many on-device ML applications, model size is an important factor. Therefore, it is recommended that you apply quantization on the model to make it smaller and potentially run faster. The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
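If you need a different quantization behavior, export also accepts a quantization_config argument. This is a hedged sketch, assuming the QuantizationConfig class from tflite_model_maker.config (not imported above) and a hypothetical output folder; the plain export call below uses the defaults.

from tflite_model_maker.config import QuantizationConfig

# Hypothetical example: float16 post-training quantization instead of the
# default dynamic range quantization.
fp16_config = QuantizationConfig.for_float16()
model.export(export_dir='mobilebert_fp16/', quantization_config=fp16_config)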

model.export(export_dir='mobilebert/')
2021-08-12 12:58:59.645438: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-08-12 12:58:59.645491: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
2021-08-12 12:58:59.645498: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:360] Ignored change_concat_input_ranges.
2021-08-12 12:58:59.645836: I tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /tmp/tmputtjyezz/saved_model
2021-08-12 12:58:59.719952: I tensorflow/cc/saved_model/reader.cc:90] Reading meta graph with tags { serve }
2021-08-12 12:58:59.720017: I tensorflow/cc/saved_model/reader.cc:132] Reading SavedModel debug info (if present) from: /tmp/tmputtjyezz/saved_model
2021-08-12 12:59:00.055674: I tensorflow/cc/saved_model/loader.cc:211] Restoring SavedModel bundle.
2021-08-12 12:59:01.918508: I tensorflow/cc/saved_model/loader.cc:195] Running initialization op on SavedModel bundle at path: /tmp/tmputtjyezz/saved_model
2021-08-12 12:59:02.940575: I tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 3294762 microseconds.
2021-08-12 12:59:08.166940: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1899] Estimated count of arithmetic ops: 5.511 G  ops, equivalently 2.755 G  MACs
2021-08-12 12:59:08.346145: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul15 because it has no allocated buffer.
2021-08-12 12:59:08.347216: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul19 because it has no allocated buffer.
2021-08-12 12:59:08.347221: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.347227: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.347232: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.347237: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.347242: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul_120 because it has no allocated buffer.
2021-08-12 12:59:08.347257: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul15 because it has no allocated buffer.
2021-08-12 12:59:08.347262: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul17 because it has no allocated buffer.
2021-08-12 12:59:08.347266: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul19 because it has no allocated buffer.
2021-08-12 12:59:08.347271: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.347277: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.347281: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.347286: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.347291: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul_120 because it has no allocated buffer.
2021-08-12 12:59:08.347305: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul15 because it has no allocated buffer.
2021-08-12 12:59:08.347310: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul17 because it has no allocated buffer.
2021-08-12 12:59:08.347315: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul19 because it has no allocated buffer.
2021-08-12 12:59:08.347319: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.347325: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.347330: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.347335: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.347340: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul_120 because it has no allocated buffer.
2021-08-12 12:59:08.347354: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul15 because it has no allocated buffer.
2021-08-12 12:59:08.347359: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul17 because it has no allocated buffer.
2021-08-12 12:59:08.347364: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul19 because it has no allocated buffer.
2021-08-12 12:59:08.347369: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.347375: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.347380: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.347384: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.347389: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul_120 because it has no allocated buffer.
2021-08-12 12:59:08.347403: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul20 because it has no allocated buffer.
2021-08-12 12:59:08.347408: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul22 because it has no allocated buffer.
2021-08-12 12:59:08.347412: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul24 because it has no allocated buffer.
2021-08-12 12:59:08.347417: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul26 because it has no allocated buffer.
2021-08-12 12:59:08.347423: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_125 because it has no allocated buffer.
2021-08-12 12:59:08.347427: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_127 because it has no allocated buffer.
2021-08-12 12:59:08.347432: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_129 because it has no allocated buffer.
2021-08-12 12:59:08.347437: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_131 because it has no allocated buffer.

The TensorFlow Lite model file can be integrated in a mobile app using the BertNLClassifier API in the TensorFlow Lite Task Library. Note that this is different from the NLClassifier API used to integrate text classifiers trained with the average word vector model architecture.
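As a rough illustration, the sketch below loads an exported model with the Task Library's Python bindings; the tflite_support package and its BertNLClassifier API are assumptions here (on-device apps would use the equivalent Java or Swift APIs), so treat this as a sketch rather than part of this tutorial's pipeline.

# Minimal sketch, assuming the tflite-support package (pip install tflite-support).
from tflite_support.task import text

# Load the exported model file; the path reuses the mobilebert/ export
# directory used in this tutorial.
classifier = text.BertNLClassifier.create_from_file('mobilebert/model.tflite')

# Classify a raw string; the Task Library runs the BERT tokenizer internally.
result = classifier.classify('A touching and wonderfully acted movie.')
for category in result.classifications[0].categories:
    print(category.category_name, category.score)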

The export formats can be one or a list of the following:

  • ExportFormat.TFLITE
  • ExportFormat.LABEL
  • ExportFormat.VOCAB

By default, it exports only the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for closer examination. For instance, export only the label file and the vocab file as follows:

model.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])

You can evaluate the TFLite model with the evaluate_tflite method to measure its accuracy. Converting the trained TensorFlow model to TFLite format and applying quantization can affect its accuracy, so it is recommended to evaluate the TFLite model's accuracy before deployment.

accuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)
print('TFLite model accuracy: ', accuracy)
TFLite model accuracy:  {'accuracy': 0.911697247706422}

Advanced usage

The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecSpec and BertClassifierSpec classes are currently supported. The create function consists of the following steps:

  1. Creates the model for the text classifier according to model_spec.
  2. Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.
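Before tuning anything, you can peek at those defaults on a spec object; this minimal sketch assumes the default_training_epochs and default_batch_size names described above are exposed as attributes on the returned spec.

# Sketch: inspect the training defaults that create() falls back on.
spec = model_spec.get('average_word_vec')
print('default epochs:', spec.default_training_epochs)   # assumed attribute
print('default batch size:', spec.default_batch_size)    # assumed attribute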

This section covers advanced usage topics like adjusting the model and tuning the training hyperparameters.

Customize the MobileBERT model hyperparameters

The model parameters you can adjust are:

  • seq_len: length of the sequence to feed into the model.
  • initializer_range: the standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • trainable: boolean that specifies whether the pre-trained layer is trainable.

The training pipeline parameters you can adjust are:

  • model_dir: the location of the model checkpoint files. If not set, a temporary directory will be used.
  • dropout_rate: the dropout rate.
  • learning_rate: the initial learning rate for the Adam optimizer.
  • tpu: the TPU address to connect to.
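As a hedged sketch, these pipeline parameters can be adjusted the same way seq_len is set in the next example, assuming they are plain attributes on the spec object; the values below are purely illustrative.

# Sketch: tweak training-pipeline parameters on a MobileBERT spec.
tuned_spec = model_spec.get('mobilebert_classifier')
tuned_spec.model_dir = 'mobilebert_ckpt/'   # hypothetical checkpoint directory
tuned_spec.dropout_rate = 0.2               # illustrative value
tuned_spec.learning_rate = 3e-5             # illustrative value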

For example, you can set seq_len=256 (the default is 128). This allows the model to classify longer text.

new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
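Because the spec drives preprocessing, data loaded with the old spec no longer matches after this change; as a sketch, you would reload the data with the updated spec and retrain, reusing the train.csv file prepared earlier in this tutorial.

# Sketch: re-preprocess the data and retrain with the longer sequence length.
longer_train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=True)
model = text_classifier.create(longer_train_data, model_spec=new_model_spec)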

Customize the average word embedding model hyperparameters

You can adjust the model infrastructure, such as the wordvec_dim and seq_len variables in the AverageWordVecSpec class.

For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.

new_model_spec = AverageWordVecSpec(wordvec_dim=32)

Get the preprocessed data.

new_train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=True)

Train the new model.

model = text_classifier.create(new_train_data, model_spec=new_model_spec)
2021-08-12 13:04:08.907763: I tensorflow/core/profiler/lib/profiler_session.cc:131] Profiler session initializing.
2021-08-12 13:04:08.907807: I tensorflow/core/profiler/lib/profiler_session.cc:146] Profiler session started.
2021-08-12 13:04:09.074585: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session tear down.
2021-08-12 13:04:09.086334: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1748] CUPTI activity buffer flushed
Epoch 1/3
   2/2104 [..............................] - ETA: 5:58 - loss: 0.6948 - accuracy: 0.4688
2021-08-12 13:04:09.720736: I tensorflow/core/profiler/lib/profiler_session.cc:131] Profiler session initializing.
2021-08-12 13:04:09.720777: I tensorflow/core/profiler/lib/profiler_session.cc:146] Profiler session started.
21/2104 [..............................] - ETA: 2:30 - loss: 0.6940 - accuracy: 0.4702
2021-08-12 13:04:10.973207: I tensorflow/core/profiler/lib/profiler_session.cc:66] Profiler session collecting data.
2021-08-12 13:04:10.980573: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1748] CUPTI activity buffer flushed
2021-08-12 13:04:11.045547: I tensorflow/core/profiler/internal/gpu/cupti_collector.cc:673]  GpuTracer has collected 155 callback api events and 152 activity events. 
2021-08-12 13:04:11.052796: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session tear down.
2021-08-12 13:04:11.063746: I tensorflow/core/profiler/rpc/client/save_profile.cc:136] Creating directory: /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11

2021-08-12 13:04:11.068200: I tensorflow/core/profiler/rpc/client/save_profile.cc:142] Dumped gzipped tool data for trace.json.gz to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11/kokoro-gcp-ubuntu-prod-762150866.trace.json.gz
2021-08-12 13:04:11.084769: I tensorflow/core/profiler/rpc/client/save_profile.cc:136] Creating directory: /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11

2021-08-12 13:04:11.087101: I tensorflow/core/profiler/rpc/client/save_profile.cc:142] Dumped gzipped tool data for memory_profile.json.gz to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11/kokoro-gcp-ubuntu-prod-762150866.memory_profile.json.gz
2021-08-12 13:04:11.087939: I tensorflow/core/profiler/rpc/client/capture_profile.cc:251] Creating directory: /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11
Dumped tool data for xplane.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11/kokoro-gcp-ubuntu-prod-762150866.xplane.pb
Dumped tool data for overview_page.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11/kokoro-gcp-ubuntu-prod-762150866.overview_page.pb
Dumped tool data for input_pipeline.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11/kokoro-gcp-ubuntu-prod-762150866.input_pipeline.pb
Dumped tool data for tensorflow_stats.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11/kokoro-gcp-ubuntu-prod-762150866.tensorflow_stats.pb
Dumped tool data for kernel_stats.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_11/kokoro-gcp-ubuntu-prod-762150866.kernel_stats.pb
2104/2104 [==============================] - 8s 4ms/step - loss: 0.6526 - accuracy: 0.6062
Epoch 2/3
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4705 - accuracy: 0.7775
Epoch 3/3
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3944 - accuracy: 0.8228

Tune the training hyperparameters

You can also tune training hyperparameters like epochs and batch_size, which affect the model accuracy. For instance,

  • epochs: more epochs could achieve better accuracy, but may lead to overfitting.
  • batch_size: the number of samples to use in one training step.
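For instance, a larger batch size means fewer steps per epoch; the sketch below passes batch_size directly to create, with a value chosen purely for illustration.

# Sketch: train with a larger batch size (illustrative value).
model = text_classifier.create(
      new_train_data, model_spec=new_model_spec, batch_size=64)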

You can also train with more epochs, for example:

model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)
2021-08-12 13:04:29.741606: I tensorflow/core/profiler/lib/profiler_session.cc:131] Profiler session initializing.
2021-08-12 13:04:29.741645: I tensorflow/core/profiler/lib/profiler_session.cc:146] Profiler session started.
2021-08-12 13:04:29.923763: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session tear down.
2021-08-12 13:04:29.937026: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1748] CUPTI activity buffer flushed
Epoch 1/20
   2/2104 [..............................] - ETA: 6:22 - loss: 0.6923 - accuracy: 0.5781
2021-08-12 13:04:30.617172: I tensorflow/core/profiler/lib/profiler_session.cc:131] Profiler session initializing.
2021-08-12 13:04:30.617216: I tensorflow/core/profiler/lib/profiler_session.cc:146] Profiler session started.
2021-08-12 13:04:30.818046: I tensorflow/core/profiler/lib/profiler_session.cc:66] Profiler session collecting data.
21/2104 [..............................] - ETA: 40s - loss: 0.6939 - accuracy: 0.4866
2021-08-12 13:04:30.819829: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1748] CUPTI activity buffer flushed
2021-08-12 13:04:30.896524: I tensorflow/core/profiler/internal/gpu/cupti_collector.cc:673]  GpuTracer has collected 155 callback api events and 152 activity events. 
2021-08-12 13:04:30.902312: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session tear down.
2021-08-12 13:04:30.911299: I tensorflow/core/profiler/rpc/client/save_profile.cc:136] Creating directory: /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30

2021-08-12 13:04:30.915427: I tensorflow/core/profiler/rpc/client/save_profile.cc:142] Dumped gzipped tool data for trace.json.gz to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30/kokoro-gcp-ubuntu-prod-762150866.trace.json.gz
2021-08-12 13:04:30.928110: I tensorflow/core/profiler/rpc/client/save_profile.cc:136] Creating directory: /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30

2021-08-12 13:04:30.929821: I tensorflow/core/profiler/rpc/client/save_profile.cc:142] Dumped gzipped tool data for memory_profile.json.gz to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30/kokoro-gcp-ubuntu-prod-762150866.memory_profile.json.gz
2021-08-12 13:04:30.930444: I tensorflow/core/profiler/rpc/client/capture_profile.cc:251] Creating directory: /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30
Dumped tool data for xplane.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30/kokoro-gcp-ubuntu-prod-762150866.xplane.pb
Dumped tool data for overview_page.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30/kokoro-gcp-ubuntu-prod-762150866.overview_page.pb
Dumped tool data for input_pipeline.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30/kokoro-gcp-ubuntu-prod-762150866.input_pipeline.pb
Dumped tool data for tensorflow_stats.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30/kokoro-gcp-ubuntu-prod-762150866.tensorflow_stats.pb
Dumped tool data for kernel_stats.pb to /tmp/tmphsi7rhs4/summaries/train/plugins/profile/2021_08_12_13_04_30/kokoro-gcp-ubuntu-prod-762150866.kernel_stats.pb
2104/2104 [==============================] - 7s 3ms/step - loss: 0.6602 - accuracy: 0.5985
Epoch 2/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4865 - accuracy: 0.7690
Epoch 3/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4005 - accuracy: 0.8199
Epoch 4/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.3676 - accuracy: 0.8400
Epoch 5/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.3498 - accuracy: 0.8512
Epoch 6/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3380 - accuracy: 0.8567
Epoch 7/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3280 - accuracy: 0.8624
Epoch 8/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3215 - accuracy: 0.8664
Epoch 9/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3164 - accuracy: 0.8691
Epoch 10/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3105 - accuracy: 0.8699
Epoch 11/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3072 - accuracy: 0.8733
Epoch 12/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3045 - accuracy: 0.8739
Epoch 13/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3028 - accuracy: 0.8742
Epoch 14/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.2993 - accuracy: 0.8773
Epoch 15/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2973 - accuracy: 0.8779
Epoch 16/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2957 - accuracy: 0.8791
Epoch 17/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2940 - accuracy: 0.8802
Epoch 18/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.2919 - accuracy: 0.8807
Epoch 19/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2904 - accuracy: 0.8815
Epoch 20/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2895 - accuracy: 0.8825

Evaluate the newly retrained model with 20 training epochs.

new_test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=False)

loss, accuracy = model.evaluate(new_test_data)
28/28 [==============================] - 0s 2ms/step - loss: 0.4997 - accuracy: 0.8349

Change the model architecture

You can change the model by changing the model_spec. The following shows how to change to the BERT-Base model.

Change the model_spec to the BERT-Base model for the text classifier.

spec = model_spec.get('bert_classifier')

The remaining steps are the same, as sketched below.
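For reference, here is a sketch of that same flow with the new spec, reusing the train.csv and dev.csv files prepared earlier in this tutorial; the bert/ export directory is hypothetical.

# Sketch: load, train, evaluate, and export with the BERT-Base spec.
train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=True)
test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=False)

model = text_classifier.create(train_data, model_spec=spec)
model.evaluate(test_data)
model.export(export_dir='bert/')   # hypothetical output directory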

Customize post-training quantization on the TensorFlow Lite model

Post-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with little degradation in model accuracy. Thus, it's widely used to optimize the model.

The Model Maker library applies a default post-training quantization technique when exporting the model. If you want to customize post-training quantization, Model Maker supports multiple post-training quantization options using QuantizationConfig as well. Let's take float16 quantization as an example. First, define the quantization config.

from tflite_model_maker.config import QuantizationConfig

config = QuantizationConfig.for_float16()

Then we export the TensorFlow Lite model with this configuration.

model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
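To verify the effect, you can compare the file sizes on disk; a minimal sketch, assuming both a default-quantized model.tflite and the model_fp16.tflite above were exported to the current directory:

import os

# Sketch: compare exported model sizes (skips files that were not exported).
for fname in ['model.tflite', 'model_fp16.tflite']:
    if os.path.exists(fname):
        print(fname, os.path.getsize(fname) // 1024, 'KB')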

Read more

You can read our text classification example to learn the technical details. For more information, please refer to: