Image classification with TensorFlow Lite Model Maker


The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications.

This notebook shows an end-to-end example that uses this Model Maker library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device.

Prerequisites

To run this example, we first need to install several required packages, including the Model Maker package from the GitHub repo.

pip install -q tflite-model-maker

Import the required packages.

import os

import numpy as np

import tensorflow as tf
assert tf.__version__.startswith('2')

from tflite_model_maker import model_spec
from tflite_model_maker import image_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

import matplotlib.pyplot as plt

Simple End-to-End Example

Get the data path

Let's get some images to play with in this simple end-to-end example. Hundreds of images is a good start for Model Maker, while more data could achieve better accuracy.

Downloading data from https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz
228818944/228813984 [==============================] - 3s 0us/step
228827136/228813984 [==============================] - 3s 0us/step

You could replace image_path with your own image folders. As for uploading data to Colab, you can find the upload button in the left sidebar shown in the image below with the red rectangle. Just try to upload a zip file and unzip it. The root file path is the current path.

Upload data

If you prefer not to upload your images to the cloud, you could try to run the library locally following the guide on GitHub.

Run the example

The example just consists of 4 lines of code as shown below, each of which represents one step of the overall process.

Step 1. Load input data specific to an on-device ML app. Split it into training data and testing data.

data = DataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
INFO:tensorflow:Load image with size: 3670, num_label: 5, labels: daisy, dandelion, roses, sunflowers, tulips.
2021-08-12 11:22:56.398220: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-12 11:22:57.010137: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 14648 MB memory:  -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0

Step 2. Customize the TensorFlow model.

model = image_classifier.create(train_data)
INFO:tensorflow:Retraining the models...
2021-08-12 11:23:00.961952: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
hub_keras_layer_v1v2 (HubKer (None, 1280)              3413024   
_________________________________________________________________
dropout (Dropout)            (None, 1280)              0         
_________________________________________________________________
dense (Dense)                (None, 5)                 6405      
=================================================================
Total params: 3,419,429
Trainable params: 6,405
Non-trainable params: 3,413,024
_________________________________________________________________
None
Epoch 1/5
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  "The `lr` argument is deprecated, use `learning_rate` instead.")
2021-08-12 11:23:04.815901: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8100
2021-08-12 11:23:05.396630: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory
103/103 [==============================] - 7s 38ms/step - loss: 0.8676 - accuracy: 0.7618
Epoch 2/5
103/103 [==============================] - 4s 41ms/step - loss: 0.6568 - accuracy: 0.8880
Epoch 3/5
103/103 [==============================] - 4s 37ms/step - loss: 0.6238 - accuracy: 0.9111
Epoch 4/5
103/103 [==============================] - 4s 37ms/step - loss: 0.6009 - accuracy: 0.9245
Epoch 5/5
103/103 [==============================] - 4s 37ms/step - loss: 0.5872 - accuracy: 0.9287

Step 3. Evaluate the model.

loss, accuracy = model.evaluate(test_data)
12/12 [==============================] - 2s 45ms/step - loss: 0.5993 - accuracy: 0.9292

Step 4. Export to the TensorFlow Lite model.

Here, we export the TensorFlow Lite model with metadata, which provides a standard for model descriptions. The label file is embedded in the metadata. The default post-training quantization technique is full integer quantization for the image classification task.

You could download it from the left sidebar, same as the uploading part, for your own use.

model.export(export_dir='.')
2021-08-12 11:23:29.239205: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: /tmp/tmpg7d7peiv/assets
2021-08-12 11:23:33.380451: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-08-12 11:23:33.380503: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
fully_quantize: 0, inference_type: 6, input_inference_type: 3, output_inference_type: 3
WARNING:absl:For model inputs containing unsupported operations which cannot be quantized, the `inference_input_type` attribute will default to the original type.
INFO:tensorflow:Label file is inside the TFLite model with metadata.
INFO:tensorflow:Saving labels in /tmp/tmpny214hzn/labels.txt
INFO:tensorflow:TensorFlow Lite model exported successfully: ./model.tflite

After these simple 4 steps, we could further use the TensorFlow Lite model file in on-device applications like in the image classification reference app.

Detailed Process

Currently, we support several models such as EfficientNet-Lite* models, MobileNetV2 and ResNet50 as pre-trained models for image classification. It is also very flexible to add new pre-trained models to this library with just a few lines of code.

The following walks through this end-to-end example step by step to show more detail.

Step 1: Load Input Data Specific to an On-device ML App

The flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.

The dataset has the following directory structure:

flower_photos
|__ daisy
    |______ 100080576_f52e8ee070_n.jpg
    |______ 14167534527_781ceb1b7a_n.jpg
    |______ ...
|__ dandelion
    |______ 10043234166_e6dd915111_n.jpg
    |______ 1426682852_e62169221f_m.jpg
    |______ ...
|__ roses
    |______ 102501987_3cdb8e5394_n.jpg
    |______ 14982802401_a3dfb22afb.jpg
    |______ ...
|__ sunflowers
    |______ 12471791574_bb1be83df4.jpg
    |______ 15122112402_cafa41934f.jpg
    |______ ...
|__ tulips
    |______ 13976522214_ccec508fe7.jpg
    |______ 14487943607_651e8062a1_m.jpg
    |______ ...
image_path = tf.keras.utils.get_file(
      'flower_photos.tgz',
      'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
      extract=True)
image_path = os.path.join(os.path.dirname(image_path), 'flower_photos')

Use the DataLoader class to load data.

As for the from_folder() method, it loads data from a folder. It assumes that image data of the same class are in the same subdirectory and that the subfolder name is the class name. Currently, JPEG-encoded and PNG-encoded images are supported.

data = DataLoader.from_folder(image_path)
INFO:tensorflow:Load image with size: 3670, num_label: 5, labels: daisy, dandelion, roses, sunflowers, tulips.
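The subfolder-name-is-the-label convention can be mimicked with a small stdlib sketch (labels_from_folder is a hypothetical stand-in for illustration only; the real DataLoader also loads and decodes the image files):

```python
import pathlib
import tempfile

def labels_from_folder(root):
    """Infer class labels the way from_folder() does: each immediate
    subdirectory name is taken as one class label."""
    root = pathlib.Path(root)
    return sorted(p.name for p in root.iterdir() if p.is_dir())

# Build a miniature flower_photos layout and check the inferred labels.
with tempfile.TemporaryDirectory() as tmp:
    for name in ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']:
        (pathlib.Path(tmp) / name).mkdir()
    print(labels_from_folder(tmp))
    # ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
```

This matches the labels reported in the INFO log line above.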

Split it into training data (80%), validation data (10%, optional) and testing data (10%).

train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
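The chained split above (0.8, then 0.5 of the remainder) is what produces the 80/10/10 proportions. A stdlib sketch of the arithmetic (split is a hypothetical stand-in for DataLoader.split, using plain lists instead of image data):

```python
def split(samples, ratio):
    """Split a list in two, putting the first len*ratio items in the
    first part and the rest in the second."""
    cut = int(len(samples) * ratio)
    return samples[:cut], samples[cut:]

samples = list(range(3670))          # stand-in for the 3670 flower images
train, rest = split(samples, 0.8)    # 80% for training
validation, test = split(rest, 0.5)  # the remaining 20% halved: 10% / 10%
print(len(train), len(validation), len(test))  # 2936 367 367
```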

Show 25 image examples with labels.

plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.gen_dataset().unbatch().take(25)):
  plt.subplot(5,5,i+1)
  plt.xticks([])
  plt.yticks([])
  plt.grid(False)
  plt.imshow(image.numpy(), cmap=plt.cm.gray)
  plt.xlabel(data.index_to_label[label.numpy()])
plt.show()

[Figure: a 5x5 grid of 25 sample flower images with their labels]

Step 2: Customize the TensorFlow Model

Create a custom image classifier model based on the loaded data. The default model is EfficientNet-Lite0.

model = image_classifier.create(train_data, validation_data=validation_data)
INFO:tensorflow:Retraining the models...
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
hub_keras_layer_v1v2_1 (HubK (None, 1280)              3413024   
_________________________________________________________________
dropout_1 (Dropout)          (None, 1280)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 6405      
=================================================================
Total params: 3,419,429
Trainable params: 6,405
Non-trainable params: 3,413,024
_________________________________________________________________
None
Epoch 1/5
91/91 [==============================] - 7s 59ms/step - loss: 0.8929 - accuracy: 0.7572 - val_loss: 0.6367 - val_accuracy: 0.9091
Epoch 2/5
91/91 [==============================] - 5s 55ms/step - loss: 0.6598 - accuracy: 0.8905 - val_loss: 0.6097 - val_accuracy: 0.9119
Epoch 3/5
91/91 [==============================] - 5s 54ms/step - loss: 0.6221 - accuracy: 0.9141 - val_loss: 0.6016 - val_accuracy: 0.9347
Epoch 4/5
91/91 [==============================] - 5s 59ms/step - loss: 0.6032 - accuracy: 0.9241 - val_loss: 0.5978 - val_accuracy: 0.9318
Epoch 5/5
91/91 [==============================] - 6s 63ms/step - loss: 0.5890 - accuracy: 0.9344 - val_loss: 0.5954 - val_accuracy: 0.9347

Have a look at the detailed model structure.

model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
hub_keras_layer_v1v2_1 (HubK (None, 1280)              3413024   
_________________________________________________________________
dropout_1 (Dropout)          (None, 1280)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 6405      
=================================================================
Total params: 3,419,429
Trainable params: 6,405
Non-trainable params: 3,413,024
_________________________________________________________________

Step 3: Evaluate the Customized Model

Evaluate the result of the model and get the loss and accuracy of the model.

loss, accuracy = model.evaluate(test_data)
12/12 [==============================] - 2s 37ms/step - loss: 0.6337 - accuracy: 0.9019

We could plot the predicted results on 100 test images. Predicted labels in red are wrong predictions, while the others are correct.

# A helper function that returns 'red'/'black' depending on whether its two
# input parameters match or not.
def get_label_color(val1, val2):
  if val1 == val2:
    return 'black'
  else:
    return 'red'

# Then plot 100 test images and their predicted labels.
# If a prediction result differs from the label provided in the "test"
# dataset, we will highlight it in red.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.gen_dataset().unbatch().take(100)):
  ax = plt.subplot(10, 10, i+1)
  plt.xticks([])
  plt.yticks([])
  plt.grid(False)
  plt.imshow(image.numpy(), cmap=plt.cm.gray)

  predict_label = predicts[i][0][0]
  color = get_label_color(predict_label,
                          test_data.index_to_label[label.numpy()])
  ax.xaxis.label.set_color(color)
  plt.xlabel('Predicted: %s' % predict_label)
plt.show()

[Figure: a 10x10 grid of 100 test images; incorrectly predicted labels shown in red]

If the accuracy doesn't meet the app requirement, one could refer to Advanced Usage to explore alternatives such as changing to a larger model, adjusting re-training parameters, etc.

Step 4: Export to the TensorFlow Lite Model

Convert the trained model to the TensorFlow Lite model format with metadata so that you can later use it in an on-device ML application. The label file and the vocab file are embedded in the metadata. The default TFLite filename is model.tflite.

In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster. The default post-training quantization technique is full integer quantization for the image classification task.
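To see why quantization matters for size, a back-of-the-envelope sketch: full integer quantization stores weights as int8 (1 byte each) instead of float32 (4 bytes each), so for the 3,419,429-parameter model trained above, weight storage shrinks roughly fourfold. This is an illustrative estimate only; the actual .tflite file also contains the graph structure and metadata.

```python
PARAMS = 3_419_429  # total parameter count reported by model.summary() above

def approx_weight_mb(params, bytes_per_weight):
    """Rough weight-storage size in MB; ignores graph structure and metadata."""
    return params * bytes_per_weight / 1e6

float32_mb = approx_weight_mb(PARAMS, 4)  # unquantized float32 weights
int8_mb = approx_weight_mb(PARAMS, 1)     # full integer (int8) quantization
print(f'{float32_mb:.1f} MB -> {int8_mb:.1f} MB')  # roughly a 4x reduction
```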

model.export(export_dir='.')
INFO:tensorflow:Assets written to: /tmp/tmpefawktva/assets

2021-08-12 11:25:08.746578: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-08-12 11:25:08.746627: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
fully_quantize: 0, inference_type: 6, input_inference_type: 3, output_inference_type: 3
WARNING:absl:For model inputs containing unsupported operations which cannot be quantized, the `inference_input_type` attribute will default to the original type.
INFO:tensorflow:Label file is inside the TFLite model with metadata.
INFO:tensorflow:Saving labels in /tmp/tmp9dnrtkd6/labels.txt
INFO:tensorflow:TensorFlow Lite model exported successfully: ./model.tflite

See example applications and guides of image classification for more details on how to integrate the TensorFlow Lite model into mobile apps.

This model can be integrated into an Android or an iOS app using the ImageClassifier API of the TensorFlow Lite Task Library.

The allowed export formats can be one or a list of the following:

  • ExportFormat.TFLITE
  • ExportFormat.LABEL
  • ExportFormat.SAVED_MODEL

By default, it just exports the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the label file works as follows:

model.export(export_dir='.', export_format=ExportFormat.LABEL)
INFO:tensorflow:Saving labels in ./labels.txt

You can also evaluate the tflite model with the evaluate_tflite method.

model.evaluate_tflite('model.tflite', test_data)
{'accuracy': 0.9019073569482289}

Advanced Usage

The create function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the tutorial.

The create function contains the following steps:

  1. Split the data into training, validation and testing data according to the parameters validation_ratio and test_ratio. The default values of validation_ratio and test_ratio are 0.1 and 0.1.
  2. Download an Image Feature Vector as the base model from TensorFlow Hub. The default pre-trained model is EfficientNet-Lite0.
  3. Add a classifier head with a Dropout layer, with dropout_rate between the head layer and the pre-trained model. The default dropout_rate is the default dropout_rate value from make_image_classifier_lib by TensorFlow Hub.
  4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model input scale and resizing it to the model input size. EfficientNet-Lite0 has an input scale of [0, 1] and an input image size of [224, 224, 3].
  5. Feed the data into the classifier model. By default, the training parameters such as training epochs, batch size, learning rate and momentum are the default values from make_image_classifier_lib by TensorFlow Hub. Only the classifier head is trained.
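Step 4's normalization for an input scale of [0, 1] amounts to mapping each 8-bit pixel value onto that range. A minimal stdlib sketch of the idea (normalize_pixels is a hypothetical helper, not a Model Maker API; the library performs this internally on whole images):

```python
def normalize_pixels(pixels, scale_min=0.0, scale_max=1.0):
    """Map 8-bit pixel values in [0, 255] onto the model's input scale."""
    return [scale_min + (p / 255.0) * (scale_max - scale_min) for p in pixels]

print(normalize_pixels([0, 51, 255]))  # [0.0, 0.2, 1.0]
```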

In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters, etc.

Customize post-training quantization on the TensorFlow Lite model

Post-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with little degradation in model accuracy. Thus, it's widely used to optimize the model.

The Model Maker library applies a default post-training quantization technique when exporting the model. If you want to customize post-training quantization, Model Maker supports multiple post-training quantization options using QuantizationConfig as well. Let's take float16 quantization as an instance. First, define the quantization config.

config = QuantizationConfig.for_float16()

Then we export the TensorFlow Lite model with such configuration.

model.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)
INFO:tensorflow:Assets written to: /tmp/tmp3tagi8ov/assets
2021-08-12 11:33:19.358426: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-08-12 11:33:19.358474: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
INFO:tensorflow:Label file is inside the TFLite model with metadata.
INFO:tensorflow:Saving labels in /tmp/tmpyiyio9gh/labels.txt
INFO:tensorflow:TensorFlow Lite model exported successfully: ./model_fp16.tflite

In Colab, you can download the model named model_fp16.tflite from the left sidebar, same as the uploading part mentioned above.

Change the model

Change to a model that's supported in this library.

This library supports EfficientNet-Lite models, MobileNetV2 and ResNet50 by now. EfficientNet-Lite are a family of image classification models that achieve state-of-the-art accuracy and are suitable for Edge devices. The default model is EfficientNet-Lite0.

We could switch the model to MobileNetV2 by just setting the parameter model_spec to the MobileNetV2 model specification in the create method.

model = image_classifier.create(train_data, model_spec=model_spec.get('mobilenet_v2'), validation_data=validation_data)
INFO:tensorflow:Retraining the models...
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
hub_keras_layer_v1v2_2 (HubK (None, 1280)              2257984   
_________________________________________________________________
dropout_2 (Dropout)          (None, 1280)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 5)                 6405      
=================================================================
Total params: 2,264,389
Trainable params: 6,405
Non-trainable params: 2,257,984
_________________________________________________________________
None
Epoch 1/5
91/91 [==============================] - 8s 57ms/step - loss: 0.9474 - accuracy: 0.7486 - val_loss: 0.6713 - val_accuracy: 0.8807
Epoch 2/5
91/91 [==============================] - 5s 54ms/step - loss: 0.7013 - accuracy: 0.8764 - val_loss: 0.6342 - val_accuracy: 0.9119
Epoch 3/5
91/91 [==============================] - 5s 54ms/step - loss: 0.6577 - accuracy: 0.8963 - val_loss: 0.6328 - val_accuracy: 0.9119
Epoch 4/5
91/91 [==============================] - 5s 54ms/step - loss: 0.6245 - accuracy: 0.9176 - val_loss: 0.6445 - val_accuracy: 0.9006
Epoch 5/5
91/91 [==============================] - 5s 55ms/step - loss: 0.6034 - accuracy: 0.9303 - val_loss: 0.6290 - val_accuracy: 0.9091

Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss on the testing data.

loss, accuracy = model.evaluate(test_data)
12/12 [==============================] - 1s 38ms/step - loss: 0.6723 - accuracy: 0.8883

Change to a model in TensorFlow Hub

Moreover, we can also switch to other new models that take an image as input and output a feature vector in the TensorFlow Hub format.

Taking the Inception V3 model as an example, we can define inception_v3_spec, an image_classifier.ModelSpec object that contains the specification of the Inception V3 model.

We need to specify the model name, name, and the URL of the TensorFlow Hub model, uri. Meanwhile, the default value of input_image_shape is [224, 224]; we need to change it to [299, 299] for the Inception V3 model.

inception_v3_spec = image_classifier.ModelSpec(
    uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]

Then, by setting the model_spec parameter to inception_v3_spec in the create method, we can retrain with the Inception V3 model.

The remaining steps are exactly the same, and in the end we get a customized Inception V3 TensorFlow Lite model.
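Putting the pieces together, the create call with inception_v3_spec looks like the sketch below. It reuses the train_data, validation_data, and test_data variables loaded earlier in this notebook, so it is not runnable on its own, and retraining will download the Inception V3 feature vector from TensorFlow Hub:

```python
# Retrain using Inception V3 instead of the default model.
# `train_data`, `validation_data`, and `test_data` come from the
# data-loading steps earlier in this notebook.
model = image_classifier.create(
    train_data,
    model_spec=inception_v3_spec,
    validation_data=validation_data)

# Evaluate on the held-out test split, as before.
loss, accuracy = model.evaluate(test_data)
```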

Change to your own custom model

If we would like to use a custom model that is not in TensorFlow Hub, we should create and export a ModelSpec in the TensorFlow Hub format.

Then start to define a ModelSpec object as in the process above.
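If the custom model has been exported in the TensorFlow Hub SavedModel format, defining a spec for it follows the same pattern as the Inception V3 example above. The path below is a hypothetical placeholder, not a file this notebook provides:

```python
# Hypothetical path to a custom feature-vector model exported in
# TensorFlow Hub SavedModel format.
custom_spec = image_classifier.ModelSpec(uri='/path/to/my_feature_vector_model')
custom_spec.input_image_shape = [224, 224]  # match the model's expected input size
```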

Change the training hyperparameters

We can also change training hyperparameters such as epochs, dropout_rate, and batch_size, which can affect the model's accuracy. The model parameters you can adjust are:

  • epochs : more epochs can achieve better accuracy, up to the point of convergence, but training for too many epochs may lead to overfitting.
  • dropout_rate : the rate for dropout, used to avoid overfitting. None by default.
  • batch_size : the number of samples to use in one training step. None by default.
  • validation_data : the validation data. If None, the validation process is skipped. None by default.
  • train_whole_model : if True, the Hub module is trained together with the classification layer on top; otherwise, only the top classification layer is trained. None by default.
  • learning_rate : the base learning rate. None by default.
  • momentum : a Python float forwarded to the optimizer. Only used when use_hub_library is True. None by default.
  • shuffle : Boolean, whether the data should be shuffled. False by default.
  • use_augmentation : Boolean, whether to use data augmentation for preprocessing. False by default.
  • use_hub_library : Boolean, whether to use make_image_classifier_lib from TensorFlow Hub to retrain the model. This training pipeline can achieve better performance on complicated datasets with many categories. True by default.
  • warmup_steps : the number of warmup steps for the warmup schedule on the learning rate. If None, the default warmup_steps is used, which equals the total training steps in two epochs. Only used when use_hub_library is False. None by default.
  • model_dir : optional, the location of the model checkpoint files. Only used when use_hub_library is False. None by default.
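As a concrete illustration of the warmup_steps default described above: the training logs in this notebook show 91 steps per epoch, so when use_hub_library is False and warmup_steps is None, the warmup covers the steps of two epochs. A minimal sketch of that arithmetic:

```python
# Default warmup_steps = total training steps in two epochs.
# 91 steps per epoch is taken from the training logs above (91/91 per epoch).
steps_per_epoch = 91
default_warmup_steps = 2 * steps_per_epoch
print(default_warmup_steps)  # 182
```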

Parameters that are None by default, such as epochs, will get their concrete default values from make_image_classifier_lib in the TensorFlow Hub library or from train_image_classifier_lib.

For example, we can train with more epochs.

model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
INFO:tensorflow:Retraining the models...
INFO:tensorflow:Retraining the models...
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
hub_keras_layer_v1v2_3 (HubK (None, 1280)              3413024   
_________________________________________________________________
dropout_3 (Dropout)          (None, 1280)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 5)                 6405      
=================================================================
Total params: 3,419,429
Trainable params: 6,405
Non-trainable params: 3,413,024
_________________________________________________________________
None
Epoch 1/10
91/91 [==============================] - 7s 57ms/step - loss: 0.8869 - accuracy: 0.7644 - val_loss: 0.6398 - val_accuracy: 0.9006
Epoch 2/10
91/91 [==============================] - 5s 53ms/step - loss: 0.6601 - accuracy: 0.8929 - val_loss: 0.6134 - val_accuracy: 0.9176
Epoch 3/10
91/91 [==============================] - 5s 53ms/step - loss: 0.6273 - accuracy: 0.9121 - val_loss: 0.6068 - val_accuracy: 0.9148
Epoch 4/10
91/91 [==============================] - 5s 53ms/step - loss: 0.6104 - accuracy: 0.9214 - val_loss: 0.6007 - val_accuracy: 0.9205
Epoch 5/10
91/91 [==============================] - 5s 55ms/step - loss: 0.5921 - accuracy: 0.9286 - val_loss: 0.5976 - val_accuracy: 0.9176
Epoch 6/10
91/91 [==============================] - 5s 51ms/step - loss: 0.5745 - accuracy: 0.9409 - val_loss: 0.5940 - val_accuracy: 0.9148
Epoch 7/10
91/91 [==============================] - 4s 49ms/step - loss: 0.5686 - accuracy: 0.9454 - val_loss: 0.5923 - val_accuracy: 0.9148
Epoch 8/10
91/91 [==============================] - 4s 48ms/step - loss: 0.5629 - accuracy: 0.9492 - val_loss: 0.5914 - val_accuracy: 0.9062
Epoch 9/10
91/91 [==============================] - 4s 48ms/step - loss: 0.5592 - accuracy: 0.9485 - val_loss: 0.5892 - val_accuracy: 0.9091
Epoch 10/10
91/91 [==============================] - 4s 48ms/step - loss: 0.5503 - accuracy: 0.9584 - val_loss: 0.5890 - val_accuracy: 0.9176

Evaluate the newly retrained model with 10 training epochs.

loss, accuracy = model.evaluate(test_data)
12/12 [==============================] - 1s 32ms/step - loss: 0.6294 - accuracy: 0.9019

Read more

You can read our image classification example to learn the technical details. For more information, please refer to: