Data augmentation


Overview

This tutorial demonstrates techniques for processing and augmenting data using tf.image.

Data augmentation is a common technique for improving model results and avoiding overfitting; see the tutorial on overfitting and underfitting for how to diagnose and address both problems.

Setup

pip install -q git+https://github.com/tensorflow/docs
try:
  %tensorflow_version 2.x
except:
  pass

import urllib

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import layers
AUTOTUNE = tf.data.experimental.AUTOTUNE

import tensorflow_docs as tfdocs
import tensorflow_docs.plots

import tensorflow_datasets as tfds

import PIL.Image

import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12, 5)

import numpy as np

Let's explore and test the augmentation techniques on a single image first; after that, we'll augment a full dataset.

Start by downloading this image, taken by photographer Von.grzanka, to experiment with the augmentation techniques.

image_path = tf.keras.utils.get_file("cat.jpg", "https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg")
PIL.Image.open(image_path)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg
24576/17858 [=========================================] - 0s 0us/step


Load the image and decode it into a tensor:

image_string = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image_string, channels=3)

We'll use the following function to plot the original image and the augmented image side by side for comparison.

def visualize(original, augmented):
  fig = plt.figure()
  plt.subplot(1,2,1)
  plt.title('Original image')
  plt.imshow(original)

  plt.subplot(1,2,2)
  plt.title('Augmented image')
  plt.imshow(augmented)

Augment a single image

The following sections show several ways to augment an image.

Flip the image

Flip the image vertically or horizontally:

flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
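flip_left_right reverses the width axis and flip_up_down reverses the height axis; for training pipelines you usually want the flip to happen at random rather than deterministically. A minimal sketch on a tiny, made-up 2×2 tensor to show the axis behavior:

```python
import tensorflow as tf

# A hypothetical 2x2 RGB image, just to make the axis behavior concrete.
img = tf.constant([[[1, 2, 3], [4, 5, 6]],
                   [[7, 8, 9], [10, 11, 12]]], dtype=tf.uint8)

lr = tf.image.flip_left_right(img)  # reverses the width axis
ud = tf.image.flip_up_down(img)     # reverses the height axis

# For on-the-fly augmentation, the random variants flip with probability 0.5:
maybe_flipped = tf.image.random_flip_left_right(img)
```

After the left-right flip, the pixel that was at the left edge of the first row is now at the right edge; the up-down flip swaps the rows instead.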


Grayscale the image

Convert the image to grayscale like this:

grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
plt.colorbar()
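Note that rgb_to_grayscale keeps a channel axis of size 1, which is why the call above wraps the result in tf.squeeze before plotting. A small sketch on an assumed 32×32 input:

```python
import tensorflow as tf

img = tf.random.uniform([32, 32, 3])   # hypothetical RGB image in [0, 1]
gray = tf.image.rgb_to_grayscale(img)  # weighted sum of the RGB channels

print(gray.shape)               # (32, 32, 1): the channel axis is kept
print(tf.squeeze(gray).shape)   # (32, 32): what plt.imshow expects for grayscale
```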


Saturate the image

Saturate the image by providing a saturation factor:

saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
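As with flipping, there is a random variant that draws the saturation factor uniformly from a range on every call, which is what you'd typically use in a training pipeline. A sketch with an illustrative range of [0.5, 1.5]:

```python
import tensorflow as tf

img = tf.random.uniform([32, 32, 3])  # hypothetical RGB image in [0, 1]

# Fixed factor, as in the cell above:
saturated = tf.image.adjust_saturation(img, 3)

# Random factor drawn uniformly from [lower, upper] on every call:
rand_saturated = tf.image.random_saturation(img, lower=0.5, upper=1.5)
```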


Change the image brightness

Change the brightness of the image by providing a brightness factor:

bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
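adjust_brightness simply adds the delta to every pixel, so on float images in [0, 1] a large delta can push values out of range; clipping afterwards keeps them valid. A sketch, assuming a float image:

```python
import tensorflow as tf

img = tf.random.uniform([32, 32, 3])                 # hypothetical image in [0, 1]
bright = tf.image.adjust_brightness(img, delta=0.4)  # adds 0.4 to every pixel
bright = tf.clip_by_value(bright, 0.0, 1.0)          # keep values in [0, 1]

# The random variant used later in this tutorial draws the delta per call:
rand_bright = tf.image.random_brightness(img, max_delta=0.4)
```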


Rotate the image

Rotate the image by 90 degrees to get another image:

rotated = tf.image.rot90(image)
visualize(image, rotated)
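rot90 also takes a k argument for the number of counter-clockwise quarter turns; note that an odd k swaps the height and width axes. A sketch on an assumed non-square image:

```python
import tensorflow as tf

img = tf.random.uniform([30, 20, 3])  # hypothetical 30x20 image

r90 = tf.image.rot90(img, k=1)    # one quarter turn: shape becomes (20, 30, 3)
r180 = tf.image.rot90(img, k=2)   # half turn: shape stays (30, 20, 3)
```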


Center-crop the image

Crop the image from the center, keeping as much of it as you want:

cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)
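central_fraction keeps that fraction of each spatial dimension around the center, so 0.5 halves both the height and the width. A sketch on an assumed 100×80 image:

```python
import tensorflow as tf

img = tf.random.uniform([100, 80, 3])  # hypothetical 100x80 image
cropped = tf.image.central_crop(img, central_fraction=0.5)

print(cropped.shape)  # (50, 40, 3): half the height, half the width
```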


See the tf.image package reference for more augmentation techniques.
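Most of the deterministic ops shown above have random counterparts (random_flip_left_right, random_brightness, random_contrast, random_saturation, ...) that are the usual building blocks of a training-time pipeline. A minimal sketch chaining a few of them; the parameter ranges here are illustrative, not tuned:

```python
import tensorflow as tf

def random_augment(image):
  """Apply a few random tf.image ops; ranges are illustrative only."""
  image = tf.image.random_flip_left_right(image)
  image = tf.image.random_brightness(image, max_delta=0.2)
  image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
  # Clip, since brightness/contrast can push float values outside [0, 1].
  return tf.clip_by_value(image, 0.0, 1.0)

img = tf.random.uniform([32, 32, 3])  # hypothetical image in [0, 1]
augmented = random_augment(img)
```

Because each op samples fresh randomness, calling random_augment twice on the same image generally yields two different results, which is exactly what makes it useful inside a Dataset.map.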

Augment a dataset and train a model with it

The following shows how to train a model on an augmented dataset.

dataset, info = tfds.load('mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']

num_train_examples = info.splits['train'].num_examples
Downloading and preparing dataset mnist/3.0.1 (download: 11.06 MiB, generated: 21.00 MiB, total: 32.06 MiB) to /home/kbuilder/tensorflow_datasets/mnist/3.0.1...

Warning:absl:Dataset mnist is hosted on GCS. It will automatically be downloaded to your
local data directory. If you'd instead prefer to read directly from our public
GCS bucket (recommended if you're running on GCP), you can instead pass
`try_gcs=True` to `tfds.load` or set `data_dir=gs://tfds-data/datasets`.


Dataset mnist downloaded and prepared to /home/kbuilder/tensorflow_datasets/mnist/3.0.1. Subsequent calls will reuse this data.

Write the following function, augment, to augment the images, then apply it to the dataset. This way the data is augmented on the fly.

def convert(image, label):
  image = tf.image.convert_image_dtype(image, tf.float32) # Cast and normalize the image to [0,1]
  return image, label

def augment(image, label):
  image, label = convert(image, label) # Cast and normalize the image to [0,1]
  image = tf.image.resize_with_crop_or_pad(image, 34, 34) # Pad to 34x34
  image = tf.image.random_crop(image, size=[28, 28, 1]) # Random crop back to 28x28
  image = tf.image.random_brightness(image, max_delta=0.5) # Random brightness

  return image, label
BATCH_SIZE = 64
# Only use a subset of the data so it's easier to overfit, for this tutorial
NUM_EXAMPLES = 2048

Create the augmented dataset:

augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples//4)
    # The augmentation is added here.
    .map(augment, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
) 

And create a non-augmented dataset for comparison.

non_augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples//4)
    # No augmentation.
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
) 

Set up the validation set. This step doesn't change whether or not you use augmentation.

validation_batches = (
    test_dataset
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(2*BATCH_SIZE)
)

Create and compile the model. The network consists of two fully connected layers; for simplicity, it doesn't use a convolution layer.

def make_model():
  model = tf.keras.Sequential([
      layers.Flatten(input_shape=(28, 28, 1)),
      layers.Dense(4096, activation='relu'),
      layers.Dense(4096, activation='relu'),
      layers.Dense(10)
  ])
  model.compile(optimizer = 'adam',
                loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
  return model

Train the model without augmentation:

model_without_aug = make_model()

no_aug_history = model_without_aug.fit(non_augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 1s 21ms/step - loss: 0.8015 - accuracy: 0.7427 - val_loss: 0.3152 - val_accuracy: 0.9032
Epoch 2/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1651 - accuracy: 0.9448 - val_loss: 0.2718 - val_accuracy: 0.9243
Epoch 3/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0808 - accuracy: 0.9722 - val_loss: 0.2849 - val_accuracy: 0.9225
Epoch 4/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0319 - accuracy: 0.9907 - val_loss: 0.3170 - val_accuracy: 0.9253
Epoch 5/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0373 - accuracy: 0.9912 - val_loss: 0.4960 - val_accuracy: 0.8932
Epoch 6/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0557 - accuracy: 0.9834 - val_loss: 0.4836 - val_accuracy: 0.8949
Epoch 7/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0469 - accuracy: 0.9849 - val_loss: 0.3019 - val_accuracy: 0.9323
Epoch 8/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0208 - accuracy: 0.9941 - val_loss: 0.4618 - val_accuracy: 0.9107
Epoch 9/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0159 - accuracy: 0.9956 - val_loss: 0.4283 - val_accuracy: 0.9229
Epoch 10/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0293 - accuracy: 0.9902 - val_loss: 0.4509 - val_accuracy: 0.9118
Epoch 11/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0361 - accuracy: 0.9937 - val_loss: 0.3700 - val_accuracy: 0.9272
Epoch 12/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0342 - accuracy: 0.9912 - val_loss: 0.3821 - val_accuracy: 0.9255
Epoch 13/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0118 - accuracy: 0.9971 - val_loss: 0.4469 - val_accuracy: 0.9201
Epoch 14/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0364 - accuracy: 0.9883 - val_loss: 0.5666 - val_accuracy: 0.8996
Epoch 15/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0439 - accuracy: 0.9863 - val_loss: 0.4924 - val_accuracy: 0.9134
Epoch 16/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0417 - accuracy: 0.9878 - val_loss: 0.4957 - val_accuracy: 0.9149
Epoch 17/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0894 - accuracy: 0.9702 - val_loss: 0.4362 - val_accuracy: 0.9185
Epoch 18/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0214 - accuracy: 0.9917 - val_loss: 0.4087 - val_accuracy: 0.9291
Epoch 19/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0083 - accuracy: 0.9971 - val_loss: 0.3757 - val_accuracy: 0.9381
Epoch 20/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0085 - accuracy: 0.9976 - val_loss: 0.4040 - val_accuracy: 0.9335
Epoch 21/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0178 - accuracy: 0.9946 - val_loss: 0.4342 - val_accuracy: 0.9339
Epoch 22/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0231 - accuracy: 0.9951 - val_loss: 0.5504 - val_accuracy: 0.9213
Epoch 23/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0240 - accuracy: 0.9946 - val_loss: 0.4373 - val_accuracy: 0.9290
Epoch 24/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0029 - accuracy: 0.9990 - val_loss: 0.3923 - val_accuracy: 0.9351
Epoch 25/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0052 - accuracy: 0.9990 - val_loss: 0.4624 - val_accuracy: 0.9295
Epoch 26/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0118 - accuracy: 0.9961 - val_loss: 0.5419 - val_accuracy: 0.9274
Epoch 27/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0364 - accuracy: 0.9917 - val_loss: 0.6691 - val_accuracy: 0.9055
Epoch 28/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 0.9902 - val_loss: 0.5068 - val_accuracy: 0.9227
Epoch 29/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0236 - accuracy: 0.9937 - val_loss: 0.5552 - val_accuracy: 0.9223
Epoch 30/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0531 - accuracy: 0.9854 - val_loss: 0.5115 - val_accuracy: 0.9273
Epoch 31/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0288 - accuracy: 0.9932 - val_loss: 0.5819 - val_accuracy: 0.9146
Epoch 32/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0158 - accuracy: 0.9946 - val_loss: 0.4036 - val_accuracy: 0.9341
Epoch 33/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0056 - accuracy: 0.9990 - val_loss: 0.3879 - val_accuracy: 0.9367
Epoch 34/50
32/32 [==============================] - 0s 9ms/step - loss: 3.4005e-04 - accuracy: 1.0000 - val_loss: 0.4056 - val_accuracy: 0.9370
Epoch 35/50
32/32 [==============================] - 0s 9ms/step - loss: 7.8261e-05 - accuracy: 1.0000 - val_loss: 0.4028 - val_accuracy: 0.9372
Epoch 36/50
32/32 [==============================] - 0s 9ms/step - loss: 5.3561e-05 - accuracy: 1.0000 - val_loss: 0.4025 - val_accuracy: 0.9375
Epoch 37/50
32/32 [==============================] - 0s 9ms/step - loss: 4.5083e-05 - accuracy: 1.0000 - val_loss: 0.4022 - val_accuracy: 0.9382
Epoch 38/50
32/32 [==============================] - 0s 9ms/step - loss: 3.7653e-05 - accuracy: 1.0000 - val_loss: 0.4030 - val_accuracy: 0.9382
Epoch 39/50
32/32 [==============================] - 0s 9ms/step - loss: 3.2607e-05 - accuracy: 1.0000 - val_loss: 0.4038 - val_accuracy: 0.9384
Epoch 40/50
32/32 [==============================] - 0s 9ms/step - loss: 2.8548e-05 - accuracy: 1.0000 - val_loss: 0.4052 - val_accuracy: 0.9386
Epoch 41/50
32/32 [==============================] - 0s 9ms/step - loss: 2.4995e-05 - accuracy: 1.0000 - val_loss: 0.4065 - val_accuracy: 0.9384
Epoch 42/50
32/32 [==============================] - 0s 9ms/step - loss: 2.1854e-05 - accuracy: 1.0000 - val_loss: 0.4081 - val_accuracy: 0.9384
Epoch 43/50
32/32 [==============================] - 0s 9ms/step - loss: 1.9339e-05 - accuracy: 1.0000 - val_loss: 0.4095 - val_accuracy: 0.9384
Epoch 44/50
32/32 [==============================] - 0s 9ms/step - loss: 1.7178e-05 - accuracy: 1.0000 - val_loss: 0.4117 - val_accuracy: 0.9384
Epoch 45/50
32/32 [==============================] - 0s 9ms/step - loss: 1.5268e-05 - accuracy: 1.0000 - val_loss: 0.4135 - val_accuracy: 0.9385
Epoch 46/50
32/32 [==============================] - 0s 9ms/step - loss: 1.3554e-05 - accuracy: 1.0000 - val_loss: 0.4155 - val_accuracy: 0.9386
Epoch 47/50
32/32 [==============================] - 0s 9ms/step - loss: 1.2030e-05 - accuracy: 1.0000 - val_loss: 0.4180 - val_accuracy: 0.9386
Epoch 48/50
32/32 [==============================] - 0s 9ms/step - loss: 1.0733e-05 - accuracy: 1.0000 - val_loss: 0.4203 - val_accuracy: 0.9385
Epoch 49/50
32/32 [==============================] - 0s 9ms/step - loss: 9.6677e-06 - accuracy: 1.0000 - val_loss: 0.4228 - val_accuracy: 0.9384
Epoch 50/50
32/32 [==============================] - 0s 9ms/step - loss: 8.6517e-06 - accuracy: 1.0000 - val_loss: 0.4248 - val_accuracy: 0.9385

Train the model with augmentation:

model_with_aug = make_model()

aug_history = model_with_aug.fit(augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 0s 11ms/step - loss: 2.2562 - accuracy: 0.3203 - val_loss: 1.0580 - val_accuracy: 0.7623
Epoch 2/50
32/32 [==============================] - 0s 9ms/step - loss: 1.3532 - accuracy: 0.5581 - val_loss: 0.6754 - val_accuracy: 0.7974
Epoch 3/50
32/32 [==============================] - 0s 9ms/step - loss: 0.9155 - accuracy: 0.6909 - val_loss: 0.4731 - val_accuracy: 0.8689
Epoch 4/50
32/32 [==============================] - 0s 9ms/step - loss: 0.7981 - accuracy: 0.7373 - val_loss: 0.3906 - val_accuracy: 0.8908
Epoch 5/50
32/32 [==============================] - 0s 9ms/step - loss: 0.6792 - accuracy: 0.7856 - val_loss: 0.3960 - val_accuracy: 0.8672
Epoch 6/50
32/32 [==============================] - 0s 9ms/step - loss: 0.6054 - accuracy: 0.7949 - val_loss: 0.2939 - val_accuracy: 0.9109
Epoch 7/50
32/32 [==============================] - 0s 9ms/step - loss: 0.5106 - accuracy: 0.8315 - val_loss: 0.3376 - val_accuracy: 0.8886
Epoch 8/50
32/32 [==============================] - 0s 9ms/step - loss: 0.5189 - accuracy: 0.8330 - val_loss: 0.2549 - val_accuracy: 0.9237
Epoch 9/50
32/32 [==============================] - 0s 9ms/step - loss: 0.4747 - accuracy: 0.8472 - val_loss: 0.2675 - val_accuracy: 0.9137
Epoch 10/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3872 - accuracy: 0.8701 - val_loss: 0.2224 - val_accuracy: 0.9309
Epoch 11/50
32/32 [==============================] - 0s 9ms/step - loss: 0.4275 - accuracy: 0.8584 - val_loss: 0.2335 - val_accuracy: 0.9259
Epoch 12/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3834 - accuracy: 0.8677 - val_loss: 0.2090 - val_accuracy: 0.9321
Epoch 13/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3565 - accuracy: 0.8857 - val_loss: 0.1939 - val_accuracy: 0.9393
Epoch 14/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3396 - accuracy: 0.8921 - val_loss: 0.2220 - val_accuracy: 0.9308
Epoch 15/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3456 - accuracy: 0.8813 - val_loss: 0.1771 - val_accuracy: 0.9461
Epoch 16/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3261 - accuracy: 0.8931 - val_loss: 0.1873 - val_accuracy: 0.9411
Epoch 17/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3095 - accuracy: 0.9023 - val_loss: 0.1998 - val_accuracy: 0.9396
Epoch 18/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3011 - accuracy: 0.9028 - val_loss: 0.1793 - val_accuracy: 0.9427
Epoch 19/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2741 - accuracy: 0.9092 - val_loss: 0.1929 - val_accuracy: 0.9368
Epoch 20/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3354 - accuracy: 0.8877 - val_loss: 0.1811 - val_accuracy: 0.9427
Epoch 21/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2685 - accuracy: 0.9097 - val_loss: 0.1750 - val_accuracy: 0.9450
Epoch 22/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2303 - accuracy: 0.9253 - val_loss: 0.1762 - val_accuracy: 0.9444
Epoch 23/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2587 - accuracy: 0.9170 - val_loss: 0.1729 - val_accuracy: 0.9437
Epoch 24/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2513 - accuracy: 0.9150 - val_loss: 0.1612 - val_accuracy: 0.9486
Epoch 25/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3097 - accuracy: 0.8994 - val_loss: 0.1901 - val_accuracy: 0.9402
Epoch 26/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2737 - accuracy: 0.9155 - val_loss: 0.1614 - val_accuracy: 0.9472
Epoch 27/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2204 - accuracy: 0.9282 - val_loss: 0.1701 - val_accuracy: 0.9444
Epoch 28/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1884 - accuracy: 0.9399 - val_loss: 0.1565 - val_accuracy: 0.9502
Epoch 29/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2045 - accuracy: 0.9312 - val_loss: 0.1591 - val_accuracy: 0.9505
Epoch 30/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2553 - accuracy: 0.9204 - val_loss: 0.1724 - val_accuracy: 0.9456
Epoch 31/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2223 - accuracy: 0.9312 - val_loss: 0.1585 - val_accuracy: 0.9509
Epoch 32/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1902 - accuracy: 0.9365 - val_loss: 0.1595 - val_accuracy: 0.9493
Epoch 33/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2045 - accuracy: 0.9375 - val_loss: 0.1794 - val_accuracy: 0.9425
Epoch 34/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1967 - accuracy: 0.9351 - val_loss: 0.1494 - val_accuracy: 0.9541
Epoch 35/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2011 - accuracy: 0.9336 - val_loss: 0.1739 - val_accuracy: 0.9496
Epoch 36/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1812 - accuracy: 0.9409 - val_loss: 0.2067 - val_accuracy: 0.9352
Epoch 37/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2342 - accuracy: 0.9263 - val_loss: 0.1718 - val_accuracy: 0.9470
Epoch 38/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2117 - accuracy: 0.9263 - val_loss: 0.1850 - val_accuracy: 0.9449
Epoch 39/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1799 - accuracy: 0.9385 - val_loss: 0.1680 - val_accuracy: 0.9482
Epoch 40/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2083 - accuracy: 0.9346 - val_loss: 0.1470 - val_accuracy: 0.9564
Epoch 41/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1546 - accuracy: 0.9507 - val_loss: 0.1577 - val_accuracy: 0.9529
Epoch 42/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1520 - accuracy: 0.9468 - val_loss: 0.1398 - val_accuracy: 0.9560
Epoch 43/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1623 - accuracy: 0.9463 - val_loss: 0.1553 - val_accuracy: 0.9542
Epoch 44/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1725 - accuracy: 0.9424 - val_loss: 0.1577 - val_accuracy: 0.9561
Epoch 45/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1935 - accuracy: 0.9331 - val_loss: 0.1522 - val_accuracy: 0.9519
Epoch 46/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1607 - accuracy: 0.9448 - val_loss: 0.1614 - val_accuracy: 0.9536
Epoch 47/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1521 - accuracy: 0.9497 - val_loss: 0.1526 - val_accuracy: 0.9519
Epoch 48/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1652 - accuracy: 0.9443 - val_loss: 0.1576 - val_accuracy: 0.9530
Epoch 49/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1494 - accuracy: 0.9487 - val_loss: 0.1505 - val_accuracy: 0.9554
Epoch 50/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1409 - accuracy: 0.9541 - val_loss: 0.1607 - val_accuracy: 0.9534

Conclusion:

In this example, the model trained on augmented data reaches roughly 95% accuracy on the validation set. This is slightly higher (+1%) than the accuracy of the model trained without augmentation (non-augmented data).

plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "accuracy")
plt.title("Accuracy")
plt.ylim([0.75,1])


As for the loss, the model trained without augmentation is clearly overfitting, while the model trained on augmented data, although a bit slower to train, converges properly and does not overfit.

plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "loss")
plt.title("Loss")
plt.ylim([0,1])
