Text classification with TensorFlow Lite Model Maker

The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.

This notebook shows an end-to-end example that uses the Model Maker library to illustrate adapting and converting a commonly used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews.

Prerequisites

Install the required packages

To run this example, install the required packages, including the Model Maker package from the GitHub repo.

pip install -q tflite-model-maker

Import the required packages.

import numpy as np
import os

from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.text_classifier import AverageWordVecSpec
from tflite_model_maker.text_classifier import DataLoader

import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')

Download the sample training data.

In this tutorial, we will use SST-2 (Stanford Sentiment Treebank), which is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes: positive and negative movie reviews.

data_dir = tf.keras.utils.get_file(
      fname='SST-2.zip',
      origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',
      extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Downloading data from https://dl.fbaipublicfiles.com/glue/data/SST-2.zip
7446528/7439277 [==============================] - 2s 0us/step
7454720/7439277 [==============================] - 2s 0us/step

The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab character \t as its delimiter, instead of the comma , used in the CSV format.

Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.

sentence | label
hide new secretions from the parental units | 0
contains no wit , only labored gags | 0
that loves its characters and communicates something rather beautiful about human nature | 1
remains utterly satisfied to remain the same throughout | 0
on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0
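
You can reproduce this preview yourself with pandas, assuming the download cell above has already populated data_dir:

import pandas as pd

# SST-2 is tab-separated, so pass sep='\t'.
train_df = pd.read_csv(os.path.join(data_dir, 'train.tsv'), sep='\t')
print(train_df.head())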

Next, we will load the dataset into a pandas dataframe, change the current label names (0 and 1) to more human-readable ones (negative and positive), and use them for model training.

import pandas as pd

def replace_label(original_file, new_file):
  # Load the original file to pandas. We need to specify the separator as
  # '\t' as the training data is stored in TSV format
  df = pd.read_csv(original_file, sep='\t')

  # Define how we want to change the label name
  label_map = {0: 'negative', 1: 'positive'}

  # Execute the label change
  df.replace({'label': label_map}, inplace=True)

  # Write the updated dataset to a new file
  df.to_csv(new_file)

# Replace the label name for both the training and test dataset. Then write the
# updated CSV dataset to the current folder.
replace_label(os.path.join(data_dir, 'train.tsv'), 'train.csv')
replace_label(os.path.join(data_dir, 'dev.tsv'), 'dev.csv')

Quickstart

There are five steps to train a text classification model:

Step 1. Choose a text classification model architecture.

Here we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.

spec = model_spec.get('average_word_vec')

Model Maker also supports other model architectures such as BERT. If you are interested in learning about other architectures, see the Choose a model architecture for Text Classifier section below.

Step 2. Load the training and test data, then preprocess them according to a specific model_spec.

Model Maker can take input data in CSV format. We will load the training and test datasets with the human-readable label names that were created earlier.

Each model architecture requires input data to be processed in a particular way. DataLoader reads the requirement from model_spec and automatically runs the necessary preprocessing.

train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=True)
test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=False)
2021-08-12 12:42:11.374939: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14648 MB memory:  -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0

Step 3. Train the TensorFlow model with the training data.

The average word embedding model uses batch_size = 32 by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.
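
You can sanity-check that step count: 67,349 / 32 ≈ 2104.7, and the 2104 steps reported below suggest the final partial batch is dropped during training:

# Full batches per epoch at batch_size=32 (remainder dropped).
print(67349 // 32)  # 2104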

model = text_classifier.create(train_data, model_spec=spec, epochs=10)
Epoch 1/10
2104/2104 [==============================] - 7s 3ms/step - loss: 0.6791 - accuracy: 0.5674
Epoch 2/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.5622 - accuracy: 0.7169
Epoch 3/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4407 - accuracy: 0.7983
Epoch 4/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3911 - accuracy: 0.8284
Epoch 5/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3655 - accuracy: 0.8427
Epoch 6/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3520 - accuracy: 0.8516
Epoch 7/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3398 - accuracy: 0.8584
Epoch 8/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3339 - accuracy: 0.8631
Epoch 9/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3276 - accuracy: 0.8649
Epoch 10/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3224 - accuracy: 0.8673

Step 4. Evaluate the model with the test data.

After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.

As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.
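
Again this is easy to verify; during evaluation the final partial batch is kept, so the count rounds up:

import math

# 872 test sentences at batch_size=32, last partial batch included.
print(math.ceil(872 / 32))  # 28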

loss, acc = model.evaluate(test_data)
28/28 [==============================] - 0s 2ms/step - loss: 0.5172 - accuracy: 0.8337

Step 5. Export as a TensorFlow Lite model.

Let's export the text classifier we have trained into the TensorFlow Lite format. We specify which folder to export the model to. By default, the float TFLite model is exported for the average word embedding model architecture.

model.export(export_dir='average_word_vec')
2021-08-12 12:43:11.008758: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-08-12 12:43:11.008802: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
2021-08-12 12:43:11.012064: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:210] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2021-08-12 12:43:11.027591: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1899] Estimated count of arithmetic ops: 722  ops, equivalently 361  MACs

You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder as we specified in the export_dir parameter above, right-click on the model.tflite file and choose Download to download it to your local computer.

This model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.

See the TFLite Text Classification sample app for more details on how the model is used in a working app.
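
If you want to smoke-test the exported model from Python before integrating it into an app, the Task Library also ships Python bindings in the tflite-support package. A minimal sketch, assuming tflite-support is installed (the exact API may differ between versions):

from tflite_support.task import text

# Load the exported TFLite model and classify one example sentence.
classifier = text.NLClassifier.create_from_file('average_word_vec/model.tflite')
print(classifier.classify('This movie was surprisingly good!'))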

Note 1: Android Studio Model Binding does not support text classification yet, so please use the TensorFlow Lite Task Library.

Note 2: There is a model.json file in the same folder as the TFLite model. It contains the JSON representation of the metadata bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to pre-process/post-process data for the model. You don't need to download the model.json file, as it is only for informational purposes and its content is already inside the TFLite file.
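
To inspect that metadata yourself, a minimal sketch using the metadata tools in the tflite-support package (assuming it is installed):

from tflite_support import metadata

# Read back the metadata bundled inside the exported model; the output
# is the same JSON that model.json contains.
displayer = metadata.MetadataDisplayer.with_model_file('average_word_vec/model.tflite')
print(displayer.get_metadata_json())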

Note 3: If you train a text classification model using the MobileBERT or BERT-Base architecture, you need to use the BertNLClassifier API instead to integrate the trained model into a mobile app.

The following sections walk through the example step by step to show more detail.

Choose a model architecture for Text Classifier

Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings, and BERT-Base models.

Supported Model | Name of model_spec | Model Description | Model size
Averaging Word Embedding | 'average_word_vec' | Averages text word embeddings with RELU activation. | <1MB
MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25MB with quantization / 100MB without quantization
BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300MB

In the quickstart, we used the average word embedding model. Let's switch to MobileBERT to train a model with higher accuracy.

mb_spec = model_spec.get('mobilebert_classifier')

Load training data

You can upload your own dataset to work through this tutorial. Upload your dataset using the left sidebar in Colab.

Upload file

If you prefer not to upload your dataset to the cloud, you can also run the library locally by following the guide.

To keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the DataLoader.from_csv method to load the data.

Please be aware that as we have changed the model architecture, we will need to reload the training and test datasets to apply the new preprocessing logic.

train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=mb_spec,
      is_training=True)
test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=mb_spec,
      is_training=False)

The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which subfolders to use.
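
A minimal sketch of that layout and call, assuming a hypothetical movie_reviews directory (mb_spec is the MobileBERT spec defined above; keyword arguments may vary slightly by library version):

# Hypothetical layout:
#   movie_reviews/
#     positive/  one .txt file per positive review
#     negative/  one .txt file per negative review
folder_data = DataLoader.from_folder(
      'movie_reviews',
      model_spec=mb_spec,
      class_labels=['positive', 'negative'],
      is_training=True)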

Train a TensorFlow model

Train a text classification model using the training data.

model = text_classifier.create(train_data, model_spec=mb_spec, epochs=3)
Epoch 1/3
1403/1403 [==============================] - 326s 195ms/step - loss: 0.3642 - test_accuracy: 0.8503
Epoch 2/3
1403/1403 [==============================] - 265s 189ms/step - loss: 0.1269 - test_accuracy: 0.9546
Epoch 3/3
1403/1403 [==============================] - 262s 187ms/step - loss: 0.0746 - test_accuracy: 0.9767

Examine the detailed model structure.

model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_word_ids (InputLayer)     [(None, 128)]        0                                            
__________________________________________________________________________________________________
input_mask (InputLayer)         [(None, 128)]        0                                            
__________________________________________________________________________________________________
input_type_ids (InputLayer)     [(None, 128)]        0                                            
__________________________________________________________________________________________________
hub_keras_layer_v1v2 (HubKerasL (None, 512)          24581888    input_word_ids[0][0]             
                                                                 input_mask[0][0]                 
                                                                 input_type_ids[0][0]             
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 512)          0           hub_keras_layer_v1v2[0][0]       
__________________________________________________________________________________________________
output (Dense)                  (None, 2)            1026        dropout_1[0][0]                  
==================================================================================================
Total params: 24,582,914
Trainable params: 24,582,914
Non-trainable params: 0
__________________________________________________________________________________________________

Evaluate the model

Evaluate the model we have just trained using the test data and measure the loss and accuracy values.

loss, acc = model.evaluate(test_data)
28/28 [==============================] - 8s 50ms/step - loss: 0.3570 - test_accuracy: 0.9060

Export as a TensorFlow Lite model

Convert the trained model into the TensorFlow Lite model format with metadata so that you can later use it in an on-device ML application. The label file and the vocab file are embedded in the metadata. The default TFLite filename is model.tflite.
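
If you also want the label file and vocab file as separate files rather than only embedded in the metadata, export accepts an export_format argument using the ExportFormat enum imported at the top of this notebook:

# Export the label file and vocab file alongside the model.
model.export(export_dir='mobilebert/',
             export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])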

In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you apply quantization on the model to make it smaller and potentially run faster. The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
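
If you want a different post-training quantization scheme, you can pass a QuantizationConfig to export. A minimal sketch using float16 quantization as an example ('mobilebert_fp16/' is a hypothetical output folder):

from tflite_model_maker.config import QuantizationConfig

# Float16 post-training quantization instead of the default
# dynamic-range quantization.
fp16_config = QuantizationConfig.for_float16()
model.export(export_dir='mobilebert_fp16/', quantization_config=fp16_config)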

model.export(export_dir='mobilebert/')
2021-08-12 12:58:59.645438: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-08-12 12:58:59.645491: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
2021-08-12 12:58:59.645498: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:360] Ignored change_concat_input_ranges.
2021-08-12 12:58:59.645836: I tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /tmp/tmputtjyezz/saved_model
2021-08-12 12:58:59.719952: I tensorflow/cc/saved_model/reader.cc:90] Reading meta graph with tags { serve }
2021-08-12 12:58:59.720017: I tensorflow/cc/saved_model/reader.cc:132] Reading SavedModel debug info (if present) from: /tmp/tmputtjyezz/saved_model
2021-08-12 12:59:00.055674: I tensorflow/cc/saved_model/loader.cc:211] Restoring SavedModel bundle.
2021-08-12 12:59:01.918508: I tensorflow/cc/saved_model/loader.cc:195] Running initialization op on SavedModel bundle at path: /tmp/tmputtjyezz/saved_model
2021-08-12 12:59:02.940575: I tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 3294762 microseconds.
2021-08-12 12:59:08.166940: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1899] Estimated count of arithmetic ops: 5.511 G  ops, equivalently 2.755 G  MACs
2021-08-12 12:59:08.346145: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul15 because it has no allocated buffer.
...
2021-08-12 12:59:08.347221: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.347227: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.347232: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.347237: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.347242: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_19/attention/self/MatMul_120 because it has no allocated buffer.
2021-08-12 12:59:08.347257: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul15 because it has no allocated buffer.
2021-08-12 12:59:08.347262: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul17 because it has no allocated buffer.
2021-08-12 12:59:08.347266: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul19 because it has no allocated buffer.
2021-08-12 12:59:08.347271: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.347277: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.347281: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.347286: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.347291: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_20/attention/self/MatMul_120 because it has no allocated buffer.
2021-08-12 12:59:08.347305: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul15 because it has no allocated buffer.
2021-08-12 12:59:08.347310: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul17 because it has no allocated buffer.
2021-08-12 12:59:08.347315: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul19 because it has no allocated buffer.
2021-08-12 12:59:08.347319: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.347325: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.347330: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.347335: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.347340: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_21/attention/self/MatMul_120 because it has no allocated buffer.
2021-08-12 12:59:08.347354: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul15 because it has no allocated buffer.
2021-08-12 12:59:08.347359: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul17 because it has no allocated buffer.
2021-08-12 12:59:08.347364: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul19 because it has no allocated buffer.
2021-08-12 12:59:08.347369: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.347375: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.347380: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.347384: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.347389: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_22/attention/self/MatMul_120 because it has no allocated buffer.
2021-08-12 12:59:08.347403: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul20 because it has no allocated buffer.
2021-08-12 12:59:08.347408: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul22 because it has no allocated buffer.
2021-08-12 12:59:08.347412: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul24 because it has no allocated buffer.
2021-08-12 12:59:08.347417: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul26 because it has no allocated buffer.
2021-08-12 12:59:08.347423: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_125 because it has no allocated buffer.
2021-08-12 12:59:08.347427: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_127 because it has no allocated buffer.
2021-08-12 12:59:08.347432: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_129 because it has no allocated buffer.
2021-08-12 12:59:08.347437: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_131 because it has no allocated buffer.

The TensorFlow Lite model file can be integrated into a mobile app using the BertNLClassifier API in the TensorFlow Lite Task Library. Note that this is different from the NLClassifier API used to integrate text classifiers trained with the average word vector model architecture.
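
Recent versions of the tflite-support package also expose this API from Python, which can be handy for a quick smoke test of the exported model before wiring it into an app. A minimal sketch, assuming pip install tflite-support and the mobilebert/model.tflite file exported earlier:

from tflite_support.task import text

# Load the exported TFLite model; tokenization is driven by the bundled metadata.
classifier = text.BertNLClassifier.create_from_file('mobilebert/model.tflite')

# Classify a raw string and print the per-category scores.
result = classifier.classify('A deeply moving and beautifully shot film.')
print(result.classifications[0].categories)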

The export formats can be one or a list of the following:

  • ExportFormat.TFLITE
  • ExportFormat.LABEL
  • ExportFormat.VOCAB

By default, it exports only the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for closer inspection. For example, you can export only the label file and the vocab file as follows:

model.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])
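
For comparison, exporting with the defaults (only the TFLite model file, with metadata bundled) is simply:

model.export(export_dir='mobilebert/')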

You can evaluate the TFLite model with the evaluate_tflite method to measure its accuracy. Converting the trained TensorFlow model to the TFLite format and applying quantization can affect its accuracy, so it is recommended to evaluate the accuracy of the TFLite model before deployment.

accuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)
print('TFLite model accuracy: ', accuracy)
TFLite model accuracy:  {'accuracy': 0.911697247706422}

Advanced usage

The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification; the AverageWordVecSpec and BertClassifierSpec classes are supported. The create function comprises the following steps (a minimal sketch follows the list):

  1. Creates the model for the text classifier according to model_spec.
  2. Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.
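
As a sketch of this driver flow, reusing the train_data loader from the earlier sections:

# Build the spec, then let create() construct and train the classifier
# using the spec's default epochs and batch size.
spec = model_spec.get('average_word_vec')
model = text_classifier.create(train_data, model_spec=spec)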

This section covers advanced usage topics such as adjusting the model and tuning the training hyperparameters.

Customize the MobileBERT model hyperparameters

The model parameters you can adjust are:

  • seq_len: length of the sequence to feed into the model.
  • initializer_range: the standard deviation of the truncated_normal_initializer used to initialize all weight matrices.
  • trainable: a boolean that specifies whether the pre-trained layer is trainable.

The training pipeline parameters you can adjust are:

  • model_dir: the location of the model checkpoint files. If not set, a temporary directory is used.
  • dropout_rate: the dropout rate.
  • learning_rate: the initial learning rate for the Adam optimizer.
  • tpu: the TPU address to connect to.

For example, you can set seq_len=256 (the default is 128). This allows the model to classify longer text.

new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
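
The training pipeline parameters listed above can be set on the same spec object. A sketch with illustrative values (not tuned recommendations):

new_model_spec.dropout_rate = 0.2                  # illustrative value
new_model_spec.learning_rate = 3e-5                # initial Adam learning rate
new_model_spec.model_dir = '/tmp/mobilebert_ckpt'  # illustrative checkpoint path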

Customize the average word embedding model hyperparameters

You can adjust the model infrastructure, such as the wordvec_dim and seq_len variables in the AverageWordVecSpec class.

For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.

new_model_spec = AverageWordVecSpec(wordvec_dim=32)

Get the preprocessed data.

new_train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=True)

Train the new model.

model = text_classifier.create(new_train_data, model_spec=new_model_spec)
(TensorFlow profiler and progress-bar log messages omitted)
Epoch 1/3
2104/2104 [==============================] - 8s 4ms/step - loss: 0.6526 - accuracy: 0.6062
Epoch 2/3
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4705 - accuracy: 0.7775
Epoch 3/3
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3944 - accuracy: 0.8228

Tune the training hyperparameters

You can also tune training hyperparameters such as epochs and batch_size, which affect model accuracy; see the sketch after this list. For instance:

  • epochs: more epochs could achieve better accuracy, but may lead to overfitting.
  • batch_size: the number of samples to use in one training step.
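
Both are passed directly to the create function. A minimal sketch combining the two (the batch_size value is illustrative, not a recommendation):

model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=10, batch_size=64)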

For example, you can train with more epochs.

model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)
(TensorFlow profiler and progress-bar log messages omitted)
Epoch 1/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.6602 - accuracy: 0.5985
Epoch 2/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4865 - accuracy: 0.7690
Epoch 3/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4005 - accuracy: 0.8199
Epoch 4/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.3676 - accuracy: 0.8400
Epoch 5/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.3498 - accuracy: 0.8512
Epoch 6/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3380 - accuracy: 0.8567
Epoch 7/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3280 - accuracy: 0.8624
Epoch 8/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3215 - accuracy: 0.8664
Epoch 9/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3164 - accuracy: 0.8691
Epoch 10/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3105 - accuracy: 0.8699
Epoch 11/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3072 - accuracy: 0.8733
Epoch 12/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3045 - accuracy: 0.8739
Epoch 13/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3028 - accuracy: 0.8742
Epoch 14/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.2993 - accuracy: 0.8773
Epoch 15/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2973 - accuracy: 0.8779
Epoch 16/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2957 - accuracy: 0.8791
Epoch 17/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2940 - accuracy: 0.8802
Epoch 18/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.2919 - accuracy: 0.8807
Epoch 19/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2904 - accuracy: 0.8815
Epoch 20/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2895 - accuracy: 0.8825

Evaluate the newly retrained model with 20 training epochs.

new_test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=False)

loss, accuracy = model.evaluate(new_test_data)
28/28 [==============================] - 0s 2ms/step - loss: 0.4997 - accuracy: 0.8349

Change the model architecture

You can change the model by changing the model_spec. The following shows how to switch to the BERT-Base model.

Change the model_spec to the BERT-Base model for the text classifier.

spec = model_spec.get('bert_classifier')

The remaining steps are the same; a sketch of the full pipeline with the new spec follows.
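
Here is a minimal sketch, reusing the train.csv file and DataLoader conventions from the earlier sections (the data must be re-preprocessed with the new spec):

# Re-preprocess the data with the BERT spec, then train as before.
bert_train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=True)
bert_model = text_classifier.create(bert_train_data, model_spec=spec)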

Customize post-training quantization on the TensorFlow Lite model

Post-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with a small degradation in model accuracy. It is therefore widely used to optimize the model.

The Model Maker library applies a default post-training quantization technique when exporting the model. If you want to customize post-training quantization, Model Maker also supports multiple post-training quantization options via QuantizationConfig. Let's take float16 quantization as an example. First, define the quantization config.

from tflite_model_maker.config import QuantizationConfig

config = QuantizationConfig.for_float16()

Then export the TensorFlow Lite model with that configuration.

model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
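
As before, you can measure the accuracy of the quantized model. A sketch reusing evaluate_tflite and the test_data loader from the earlier sections:

accuracy = model.evaluate_tflite('model_fp16.tflite', test_data)
print('Float16 quantized model accuracy:', accuracy)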

Learn more

You can read our text classification example to learn the technical details. For more information, please refer to: