Text classification with TensorFlow Lite Model Maker


The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying the model for on-device ML applications.

This notebook shows an end-to-end example that uses the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. A text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews.

Prerequisites

Install the required packages

To run this example, install the required packages, including the Model Maker package from the GitHub repo.

pip install -q tflite-model-maker

Import the required packages.

import numpy as np
import os

from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.text_classifier import AverageWordVecSpec
from tflite_model_maker.text_classifier import DataLoader

import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')

Download the sample training data.

In this tutorial, we will use SST-2 (Stanford Sentiment Treebank), one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes: positive and negative movie reviews.

data_dir = tf.keras.utils.get_file(
      fname='SST-2.zip',
      origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',
      extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Downloading data from https://dl.fbaipublicfiles.com/glue/data/SST-2.zip
7446528/7439277 [==============================] - 2s 0us/step
7454720/7439277 [==============================] - 2s 0us/step

The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses the tab character \t as its delimiter, instead of the comma , used in CSV.
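Since the only difference is the delimiter, the same parsing machinery handles both formats. A minimal illustration with Python's standard csv module, using made-up example rows:

```python
import csv
import io

# Two tiny in-memory files with identical content, one TSV and one CSV.
tsv_text = "sentence\tlabel\ngreat movie\t1\n"
csv_text = "sentence,label\ngreat movie,1\n"

# Only the delimiter argument changes between the two reads.
tsv_rows = list(csv.reader(io.StringIO(tsv_text), delimiter='\t'))
csv_rows = list(csv.reader(io.StringIO(csv_text), delimiter=','))

print(tsv_rows == csv_rows)  # True: both parse to the same rows
```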

Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.

sentence label
hide new secretions from the parental units 0
contains no wit , only labored gags 0
that loves its characters and communicates something rather beautiful about human nature 1
remains utterly satisfied to remain the same throughout 0
on the worst revenge-of-the-nerds clichés the filmmakers could dredge up 0

Next, we will load the dataset into a pandas DataFrame, change the current label names (0 and 1) to more human-readable ones (negative and positive), and use them for model training.

import pandas as pd

def replace_label(original_file, new_file):
  # Load the original file to pandas. We need to specify the separator as
  # '\t' as the training data is stored in TSV format
  df = pd.read_csv(original_file, sep='\t')

  # Define how we want to change the label name
  label_map = {0: 'negative', 1: 'positive'}

  # Execute the label change
  df.replace({'label': label_map}, inplace=True)

  # Write the updated dataset to a new file
  df.to_csv(new_file)

# Replace the label name for both the training and test dataset. Then write the
# updated CSV dataset to the current folder.
replace_label(os.path.join(data_dir, 'train.tsv'), 'train.csv')
replace_label(os.path.join(data_dir, 'dev.tsv'), 'dev.csv')
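The label-mapping step above can be checked in isolation on a tiny in-memory DataFrame (the rows here are made-up examples, not taken from SST-2):

```python
import io
import pandas as pd

# A tiny stand-in for train.tsv, stored as a string.
tsv = "sentence\tlabel\ngreat movie\t1\nterrible plot\t0\n"
df = pd.read_csv(io.StringIO(tsv), sep='\t')

# Same mapping used in replace_label above.
label_map = {0: 'negative', 1: 'positive'}
df.replace({'label': label_map}, inplace=True)

print(df['label'].tolist())  # ['positive', 'negative']
```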

Quickstart

There are five steps to train a text classification model:

Step 1. Choose a text classification model architecture.

Here we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.

spec = model_spec.get('average_word_vec')

Model Maker also supports other model architectures such as BERT. If you are interested in learning about other architectures, see the Choose a model architecture for Text Classifier section below.

Step 2. Load the training and test data, then preprocess them according to a specific model_spec.

Model Maker can take input data in CSV format. We will load the training and test datasets with the human-readable label names that were created earlier.

Each model architecture requires input data to be processed in a particular way. DataLoader reads the requirements from model_spec and automatically executes the necessary preprocessing.

train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=True)
test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      is_training=False)

Step 3. Train the TensorFlow model with the training data.

The average word embedding model uses batch_size = 32 by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.
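The step count follows directly from the batch size. A quick sanity check, assuming the training pipeline drops the final partial batch (which is what the 2104 steps per epoch in the log below suggests):

```python
num_examples = 67349
batch_size = 32

# 67349 / 32 = 2104.66..., so with the last partial batch dropped
# each epoch runs floor(67349 / 32) steps.
steps_per_epoch = num_examples // batch_size
print(steps_per_epoch)  # 2104
```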

model = text_classifier.create(train_data, model_spec=spec, epochs=10)
Epoch 1/10
2104/2104 [==============================] - 7s 3ms/step - loss: 0.6791 - accuracy: 0.5674
Epoch 2/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.5622 - accuracy: 0.7169
Epoch 3/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4407 - accuracy: 0.7983
Epoch 4/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3911 - accuracy: 0.8284
Epoch 5/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3655 - accuracy: 0.8427
Epoch 6/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3520 - accuracy: 0.8516
Epoch 7/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3398 - accuracy: 0.8584
Epoch 8/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3339 - accuracy: 0.8631
Epoch 9/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3276 - accuracy: 0.8649
Epoch 10/10
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3224 - accuracy: 0.8673

Step 4. Evaluate the model with the test data.

After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.

As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.
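Unlike training, evaluation keeps the final partial batch, so the step count rounds up:

```python
import math

num_test_examples = 872
batch_size = 32

# 872 / 32 = 27.25, and the last partial batch is still evaluated,
# so evaluation takes ceil(872 / 32) steps.
eval_steps = math.ceil(num_test_examples / batch_size)
print(eval_steps)  # 28
```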

loss, acc = model.evaluate(test_data)
28/28 [==============================] - 0s 2ms/step - loss: 0.5172 - accuracy: 0.8337

Step 5. Export as a TensorFlow Lite model.

Let's export the text classifier that we have trained into the TensorFlow Lite format. We will specify which folder to export the model to. By default, the float TFLite model is exported for the average word embedding model architecture.

model.export(export_dir='average_word_vec')

You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder as we specified in the export_dir parameter above, right-click on the model.tflite file and choose Download to download it to your local computer.

This model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.

See the TFLite Text Classification sample app for more details on how the model is used in a working app.

Note 1: Android Studio Model Binding does not support text classification yet, so please use the TensorFlow Lite Task Library.

Note 2: There is a model.json file in the same folder as the TFLite model. It contains the JSON representation of the metadata bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to pre-process/post-process data for the model. You don't need to download the model.json file, as it is only for informational purposes and its content is already inside the TFLite file.

Note 3: If you train a text classification model using the MobileBERT or BERT-Base architecture, you will need to use the BertNLClassifier API instead to integrate the trained model into a mobile app.

The following sections walk through the example step by step to show more detail.

Choose a model architecture for Text Classifier

Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, average word embeddings and BERT-Base models.

Supported Model | Name of model_spec | Model Description | Model size
Averaging Word Embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. | <1MB
MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25MB with quantization / 100MB without quantization
BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300MB

In the quickstart, we used the average word embedding model. Let's switch to MobileBERT to train a model with higher accuracy.

mb_spec = model_spec.get('mobilebert_classifier')

Load the training data

You can upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.

Upload data

If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the guide.

To keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the DataLoader.from_csv method to load the data.

Please note that, as we have changed the model architecture, we need to reload the training and test datasets to apply the new preprocessing logic.

train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=mb_spec,
      is_training=True)
test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=mb_spec,
      is_training=False)

The Model Maker library also supports the from_folder() method for loading data. It assumes that text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which subfolders to use.
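A minimal sketch of the directory layout that from_folder() expects, built in a temporary directory with hypothetical class names and review files:

```python
import os
import tempfile

# Build a throwaway example tree: one subfolder per class,
# one review per text file inside it.
root = tempfile.mkdtemp()
samples = {
    'positive': 'a rather beautiful film',
    'negative': 'contains no wit at all',
}
for class_name, review in samples.items():
    os.makedirs(os.path.join(root, class_name), exist_ok=True)
    with open(os.path.join(root, class_name, 'review_0.txt'), 'w') as f:
        f.write(review)

# from_folder() would infer these subfolder names as the class labels.
class_labels = sorted(os.listdir(root))
print(class_labels)  # ['negative', 'positive']
```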

Train the TensorFlow model

Train the text classification model using the training data.

model = text_classifier.create(train_data, model_spec=mb_spec, epochs=3)
Epoch 1/3
1403/1403 [==============================] - 326s 195ms/step - loss: 0.3642 - test_accuracy: 0.8503
Epoch 2/3
1403/1403 [==============================] - 265s 189ms/step - loss: 0.1269 - test_accuracy: 0.9546
Epoch 3/3
1403/1403 [==============================] - 262s 187ms/step - loss: 0.0746 - test_accuracy: 0.9767

Examine the detailed model structure.

model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_word_ids (InputLayer)     [(None, 128)]        0                                            
__________________________________________________________________________________________________
input_mask (InputLayer)         [(None, 128)]        0                                            
__________________________________________________________________________________________________
input_type_ids (InputLayer)     [(None, 128)]        0                                            
__________________________________________________________________________________________________
hub_keras_layer_v1v2 (HubKerasL (None, 512)          24581888    input_word_ids[0][0]             
                                                                 input_mask[0][0]                 
                                                                 input_type_ids[0][0]             
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 512)          0           hub_keras_layer_v1v2[0][0]       
__________________________________________________________________________________________________
output (Dense)                  (None, 2)            1026        dropout_1[0][0]                  
==================================================================================================
Total params: 24,582,914
Trainable params: 24,582,914
Non-trainable params: 0
__________________________________________________________________________________________________

Evaluate the model

Evaluate the model we have just trained using the test data and measure its loss and accuracy.

loss, acc = model.evaluate(test_data)
28/28 [==============================] - 8s 50ms/step - loss: 0.3570 - test_accuracy: 0.9060

Export as a TensorFlow Lite model

Convert the trained model to the TensorFlow Lite model format with metadata so that it can later be used in an on-device ML application. The label file and the vocab file are embedded in the metadata. The default TFLite filename is model.tflite.

In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you apply quantization to the model to make it smaller and potentially run faster. The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.

model.export(export_dir='mobilebert/')
2021-08-12 12:58:59.645438: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-08-12 12:58:59.645491: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
2021-08-12 12:58:59.645498: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:360] Ignored change_concat_input_ranges.
2021-08-12 12:58:59.645836: I tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /tmp/tmputtjyezz/saved_model
2021-08-12 12:58:59.719952: I tensorflow/cc/saved_model/reader.cc:90] Reading meta graph with tags { serve }
2021-08-12 12:58:59.720017: I tensorflow/cc/saved_model/reader.cc:132] Reading SavedModel debug info (if present) from: /tmp/tmputtjyezz/saved_model
2021-08-12 12:59:00.055674: I tensorflow/cc/saved_model/loader.cc:211] Restoring SavedModel bundle.
2021-08-12 12:59:01.918508: I tensorflow/cc/saved_model/loader.cc:195] Running initialization op on SavedModel bundle at path: /tmp/tmputtjyezz/saved_model
2021-08-12 12:59:02.940575: I tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 3294762 microseconds.
2021-08-12 12:59:08.166940: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1899] Estimated count of arithmetic ops: 5.511 G  ops, equivalently 2.755 G  MACs
2021-08-12 12:59:08.346145: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul15 because it has no allocated buffer.
2021-08-12 12:59:08.346201: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul17 because it has no allocated buffer.
2021-08-12 12:59:08.346208: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul19 because it has no allocated buffer.
2021-08-12 12:59:08.346213: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul21 because it has no allocated buffer.
2021-08-12 12:59:08.346220: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul_114 because it has no allocated buffer.
2021-08-12 12:59:08.346225: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul_116 because it has no allocated buffer.
2021-08-12 12:59:08.346230: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul_118 because it has no allocated buffer.
2021-08-12 12:59:08.346235: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_0/attention/self/MatMul_120 because it has no allocated buffer.
(... the same "Skipping quantization of tensor ... because it has no allocated buffer" message repeats for the attention MatMul tensors of layers 1 through 22 ...)
2021-08-12 12:59:08.347403: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul20 because it has no allocated buffer.
2021-08-12 12:59:08.347408: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul22 because it has no allocated buffer.
2021-08-12 12:59:08.347412: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul24 because it has no allocated buffer.
2021-08-12 12:59:08.347417: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul26 because it has no allocated buffer.
2021-08-12 12:59:08.347423: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_125 because it has no allocated buffer.
2021-08-12 12:59:08.347427: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_127 because it has no allocated buffer.
2021-08-12 12:59:08.347432: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_129 because it has no allocated buffer.
2021-08-12 12:59:08.347437: I tensorflow/lite/tools/optimize/quantize_weights.cc:234] Skipping quantization of tensor bert/encoder/layer_23/attention/self/MatMul_131 because it has no allocated buffer.

The TensorFlow Lite model file can be integrated into a mobile app using the BertNLClassifier API in the TensorFlow Lite Task Library. Note that this is different from the NLClassifier API used to integrate text classifiers trained with the average word vector model architecture.

The export formats can be one or a list of the following:

  • ExportFormat.TFLITE
  • ExportFormat.LABEL
  • ExportFormat.VOCAB

By default, it just exports the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for better examination. For instance, you can export only the label file and vocab file as follows:

model.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])

You can evaluate the TFLite model with the evaluate_tflite method to measure its accuracy. Converting the trained TensorFlow model to the TFLite format and applying quantization can affect its accuracy, so it is recommended to evaluate the TFLite model accuracy before deployment.

accuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)
print('TFLite model accuracy: ', accuracy)
TFLite model accuracy:  {'accuracy': 0.911697247706422}

Advanced Usage

The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecSpec and BertClassifierSpec classes are currently supported. The create function consists of the following steps:

  1. Creates the model for the text classifier according to model_spec.
  2. Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.
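The two steps above can be sketched in plain Python. This is only a conceptual illustration of how create falls back to the spec's defaults; the class and attribute values here are hypothetical, not the actual Model Maker internals.

```python
from dataclasses import dataclass

@dataclass
class FakeModelSpec:
    # Mirrors the `default_training_epochs` / `default_batch_size`
    # variables described above; the values here are made up.
    default_training_epochs: int = 3
    default_batch_size: int = 32

def create(train_data, model_spec, epochs=None, batch_size=None):
    # Step 1: build the classifier model according to `model_spec`.
    model = {"spec": model_spec}
    # Step 2: train it, using the spec's defaults when no override is given.
    epochs = epochs or model_spec.default_training_epochs
    batch_size = batch_size or model_spec.default_batch_size
    model["trained_for"] = (epochs, batch_size)
    return model

model = create(train_data=[], model_spec=FakeModelSpec())
print(model["trained_for"])  # (3, 32)
```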

This section covers advanced usage topics like adjusting the model and the training hyperparameters.

Customize the MobileBERT model hyperparameters

The model parameters you can adjust are:

  • seq_len : Length of the sequence to feed into the model.
  • initializer_range : The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • trainable : Boolean that specifies whether the pre-trained layer is trainable.

The training pipeline parameters you can adjust are:

  • model_dir : The location of the model checkpoint files. If not set, a temporary directory will be used.
  • dropout_rate : The dropout rate.
  • learning_rate : The initial learning rate for the Adam optimizer.
  • tpu : TPU address to connect to.

For instance, you can set seq_len=256 (the default is 128). This allows the model to classify longer text.

new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
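To see what seq_len controls, here is a minimal, self-contained sketch of fixing a token-id sequence to a given length. The real tokenization and padding happen inside the library; this function is purely illustrative.

```python
def pad_or_truncate(token_ids, seq_len, pad_id=0):
    """Force a token-id sequence to exactly `seq_len` entries."""
    if len(token_ids) >= seq_len:
        return token_ids[:seq_len]  # long inputs are truncated
    # short inputs are padded with `pad_id` up to `seq_len`
    return token_ids + [pad_id] * (seq_len - len(token_ids))

print(pad_or_truncate([5, 8, 13], 6))       # [5, 8, 13, 0, 0, 0]
print(pad_or_truncate(list(range(10)), 6))  # [0, 1, 2, 3, 4, 5]
```

A larger seq_len therefore lets longer reviews pass through without being cut off, at the cost of more computation per example.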

Customize the average word embedding model hyperparameters

You can adjust the model infrastructure like the wordvec_dim and seq_len variables in the AverageWordVecSpec class.

For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.

new_model_spec = AverageWordVecSpec(wordvec_dim=32)
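Raising wordvec_dim grows the embedding table linearly, since it holds vocab_size × wordvec_dim parameters. A quick back-of-the-envelope check (the vocabulary size below is made up for illustration):

```python
def embedding_params(vocab_size, wordvec_dim):
    # One learned float per (word, dimension) pair.
    return vocab_size * wordvec_dim

vocab_size = 10000  # hypothetical vocabulary size
print(embedding_params(vocab_size, 16))  # 160000
print(embedding_params(vocab_size, 32))  # 320000 -- doubling wordvec_dim doubles the table
```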

Get the preprocessed data.

new_train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=True)
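Conceptually, from_csv reads the text_column and label_column out of the CSV and pairs them up. Here is a tiny stand-in using only the standard library; the file contents are invented for illustration and this skips the tokenization the real DataLoader performs.

```python
import csv
import io

# An in-memory stand-in for a file like train.csv (contents invented).
raw = "sentence,label\ngreat movie,1\nterrible plot,0\n"

def load_csv(fileobj, text_column, label_column):
    """Return (text, label) pairs from a CSV file object."""
    reader = csv.DictReader(fileobj)
    return [(row[text_column], int(row[label_column])) for row in reader]

data = load_csv(io.StringIO(raw), text_column='sentence', label_column='label')
print(data)  # [('great movie', 1), ('terrible plot', 0)]
```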

Train the new model.

model = text_classifier.create(new_train_data, model_spec=new_model_spec)
Epoch 1/3
2104/2104 [==============================] - 8s 4ms/step - loss: 0.6526 - accuracy: 0.6062
Epoch 2/3
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4705 - accuracy: 0.7775
Epoch 3/3
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3944 - accuracy: 0.8228

Tune the training hyperparameters

You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,

  • epochs : more epochs could achieve better accuracy, but may lead to overfitting.
  • batch_size : the number of samples to use in one training step.
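batch_size also determines how many steps make up one epoch: roughly the number of training examples divided by the batch size, with any final partial batch dropped. Assuming the SST-2 training split of 67,349 sentences and the default batch size of 32:

```python
num_examples = 67349  # SST-2 train split size (assumed here)
batch_size = 32

# Integer division drops the last partial batch.
steps_per_epoch = num_examples // batch_size
print(steps_per_epoch)  # 2104, matching the "2104/2104" progress bars in the logs
```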

For example, you can train with more epochs.

model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)
Epoch 1/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.6602 - accuracy: 0.5985
Epoch 2/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4865 - accuracy: 0.7690
Epoch 3/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.4005 - accuracy: 0.8199
Epoch 4/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.3676 - accuracy: 0.8400
Epoch 5/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.3498 - accuracy: 0.8512
Epoch 6/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3380 - accuracy: 0.8567
Epoch 7/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3280 - accuracy: 0.8624
Epoch 8/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3215 - accuracy: 0.8664
Epoch 9/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3164 - accuracy: 0.8691
Epoch 10/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3105 - accuracy: 0.8699
Epoch 11/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3072 - accuracy: 0.8733
Epoch 12/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3045 - accuracy: 0.8739
Epoch 13/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.3028 - accuracy: 0.8742
Epoch 14/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.2993 - accuracy: 0.8773
Epoch 15/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2973 - accuracy: 0.8779
Epoch 16/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2957 - accuracy: 0.8791
Epoch 17/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2940 - accuracy: 0.8802
Epoch 18/20
2104/2104 [==============================] - 7s 3ms/step - loss: 0.2919 - accuracy: 0.8807
Epoch 19/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2904 - accuracy: 0.8815
Epoch 20/20
2104/2104 [==============================] - 6s 3ms/step - loss: 0.2895 - accuracy: 0.8825

Evaluate the newly retrained model with 20 training epochs.

new_test_data = DataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=False)

loss, accuracy = model.evaluate(new_test_data)
28/28 [==============================] - 0s 2ms/step - loss: 0.4997 - accuracy: 0.8349

Change the Model Architecture

You can change the model by changing the model_spec. The following shows how to change to a BERT-Base model.

Change the model_spec to a BERT-Base model for the text classifier.

spec = model_spec.get('bert_classifier')

The remaining steps are the same.

Customize post-training quantization on the TensorFlow Lite model

Post-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with little degradation in model accuracy. Thus, it's widely used to optimize the model.

The Model Maker library applies a default post-training quantization technique when exporting the model. If you want to customize post-training quantization, Model Maker supports multiple post-training quantization options using QuantizationConfig as well. Let's take float16 quantization as an instance. First, define the quantization config.

from tflite_model_maker.config import QuantizationConfig

config = QuantizationConfig.for_float16()

Then we export the TensorFlow Lite model with such configuration.

model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
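Float16 quantization stores the weights in 16 bits instead of 32, which roughly halves their storage at the cost of a small rounding error. A quick NumPy illustration (the array shape is arbitrary):

```python
import numpy as np

weights = np.random.rand(1000, 256).astype(np.float32)
half = weights.astype(np.float16)

print(weights.nbytes)  # 1024000 bytes at 4 bytes per value
print(half.nbytes)     # 512000 bytes -- half the size
# The rounding error for values in [0, 1) stays small:
print(float(np.max(np.abs(weights - half.astype(np.float32)))))
```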

Read more

You can read our text classification example to learn the technical details. For more information, please refer to: