Data augmentation


Overview

This tutorial demonstrates data manipulation and augmentation techniques using tf.image.

Data augmentation is a common technique for improving a model's results and avoiding overfitting. See the tutorial on overfitting and underfitting for more on these two problems and how to address them.

Setup

pip install -q git+https://github.com/tensorflow/docs

try:
  %tensorflow_version 2.x
except:
  pass

import urllib

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import layers
AUTOTUNE = tf.data.experimental.AUTOTUNE

import tensorflow_docs as tfdocs
import tensorflow_docs.plots

import tensorflow_datasets as tfds

import PIL.Image

import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12, 5)

import numpy as np

Let's explore and test the augmentation techniques on a single image first, and then augment a full dataset.

Start by downloading this image, taken by the photographer Von.grzanka, to experiment with the augmentation techniques.

image_path = tf.keras.utils.get_file("cat.jpg", "https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg")
PIL.Image.open(image_path)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg
24576/17858 [=========================================] - 0s 0us/step


Load the image and convert it to a Tensor:

image_string = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image_string, channels=3)

We'll use the following function to visualize the original image and the augmented image side by side for comparison.

def visualize(original, augmented):
  fig = plt.figure()
  plt.subplot(1,2,1)
  plt.title('Original image')
  plt.imshow(original)

  plt.subplot(1,2,2)
  plt.title('Augmented image')
  plt.imshow(augmented)

Augment a single image

The following sections demonstrate several ways to augment the image.

Flip the image

Flip the image vertically or horizontally:

flipped = tf.image.flip_left_right(image)
visualize(image, flipped)


Grayscale the image

Convert the image to grayscale like this:

grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
plt.colorbar()
<matplotlib.colorbar.Colorbar at 0x7f8c5ba86320>


Saturate the image

Saturate the image by providing a saturation factor:

saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)


Change image brightness

Change the brightness of the image by providing a brightness factor:

bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)


Rotate the image

Rotate the image by 90 degrees to get a new image:

rotated = tf.image.rot90(image)
visualize(image, rotated)

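Under the hood, these ops are simple array manipulations. As an illustrative NumPy sketch (not the actual tf.image implementation): a left-right flip just reverses the width axis, and the counter-clockwise 90-degree rotation of tf.image.rot90 is a transpose followed by a row flip.

```python
import numpy as np

def flip_left_right(img):
    # Reverse the width axis of an (H, W, C) array.
    return img[:, ::-1, :]

def rot90(img):
    # Counter-clockwise 90-degree rotation: swap H and W, then flip rows.
    return np.transpose(img, (1, 0, 2))[::-1, :, :]

img = np.arange(6).reshape(1, 2, 3)  # a 1x2 image with 3 channels
print(flip_left_right(img).shape)    # (1, 2, 3)
print(rot90(img).shape)              # (2, 1, 3)
```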

Center crop the image

Crop the central region of the image, keeping as much of it as you want:

cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)


See the tf.image reference for details on more data augmentation techniques.
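For instance, tf.image also provides contrast and hue adjustments, plus random variants of the ops shown above, which are especially handy inside an input pipeline. A brief sketch (the zero image is just a stand-in for a real one):

```python
import tensorflow as tf

image = tf.zeros([64, 64, 3], dtype=tf.float32)  # placeholder image

contrasted = tf.image.adjust_contrast(image, 2.0)       # double the contrast
hue_shifted = tf.image.adjust_hue(image, 0.1)           # shift the hue
flipped = tf.image.random_flip_left_right(image)        # flip with 50% chance
jittered = tf.image.random_saturation(image, 0.5, 1.5)  # random saturation factor
```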

Augment a dataset and train a model with it

The following shows how to train a model on an augmented dataset.

dataset, info = tfds.load('mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']

num_train_examples= info.splits['train'].num_examples
Downloading and preparing dataset mnist/3.0.1 (download: 11.06 MiB, generated: 21.00 MiB, total: 32.06 MiB) to /home/kbuilder/tensorflow_datasets/mnist/3.0.1...

Warning:absl:Dataset mnist is hosted on GCS. It will automatically be downloaded to your
local data directory. If you'd instead prefer to read directly from our public
GCS bucket (recommended if you're running on GCP), you can instead pass
`try_gcs=True` to `tfds.load` or set `data_dir=gs://tfds-data/datasets`.


Dataset mnist downloaded and prepared to /home/kbuilder/tensorflow_datasets/mnist/3.0.1. Subsequent calls will reuse this data.

Write the following function, augment, to augment the images, then map it over the dataset. This way, the data is augmented on the fly.

def convert(image, label):
  image = tf.image.convert_image_dtype(image, tf.float32) # Cast and normalize the image to [0,1]
  return image, label

def augment(image, label):
  image, label = convert(image, label) # Cast and normalize the image to [0,1]
  image = tf.image.resize_with_crop_or_pad(image, 34, 34) # Add 6 pixels of padding
  image = tf.image.random_crop(image, size=[28, 28, 1]) # Random crop back to 28x28
  image = tf.image.random_brightness(image, max_delta=0.5) # Random brightness

  return image, label

BATCH_SIZE = 64
# Only use a subset of the data so it's easier to overfit, for this tutorial
NUM_EXAMPLES = 2048
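The pad-then-random-crop pair in augment above is effectively a small random translation of the digit. An illustrative NumPy sketch of the same idea (random_shift is a hypothetical helper, not a tf.image op):

```python
import numpy as np

def random_shift(img, pad=3, rng=None):
    # Pad by `pad` zero pixels on each side, then crop back to the
    # original size at a random offset -- a translation of up to
    # `pad` pixels in each direction.
    rng = rng or np.random.default_rng()
    h, w, c = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    dy, dx = rng.integers(0, 2 * pad + 1, size=2)
    return padded[dy:dy + h, dx:dx + w, :]

img = np.ones([28, 28, 1])
shifted = random_shift(img)
print(shifted.shape)  # (28, 28, 1)
```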

Create the augmented dataset:

augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples//4)
    # The augmentation is added here.
    .map(augment, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
) 

And create a non-augmented dataset for comparison:

non_augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples//4)
    # No augmentation.
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
) 

Set up the validation set. This step doesn't change whether or not you use data augmentation.

validation_batches = (
    test_dataset
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(2*BATCH_SIZE)
)

Create and compile the model. The model consists of two fully connected layers; for simplicity, it doesn't use a convolution layer.

def make_model():
  model = tf.keras.Sequential([
      layers.Flatten(input_shape=(28, 28, 1)),
      layers.Dense(4096, activation='relu'),
      layers.Dense(4096, activation='relu'),
      layers.Dense(10)
  ])
  model.compile(optimizer = 'adam',
                loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
  return model

Train the model without augmentation:

model_without_aug = make_model()

no_aug_history = model_without_aug.fit(non_augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 1s 22ms/step - loss: 0.8905 - accuracy: 0.7422 - val_loss: 0.4291 - val_accuracy: 0.8835
Epoch 2/50
32/32 [==============================] - 0s 10ms/step - loss: 0.2171 - accuracy: 0.9331 - val_loss: 0.3043 - val_accuracy: 0.9035
Epoch 3/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0786 - accuracy: 0.9717 - val_loss: 0.3392 - val_accuracy: 0.9079
Epoch 4/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0430 - accuracy: 0.9854 - val_loss: 0.3264 - val_accuracy: 0.9184
Epoch 5/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0205 - accuracy: 0.9937 - val_loss: 0.3269 - val_accuracy: 0.9236
Epoch 6/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0252 - accuracy: 0.9917 - val_loss: 0.3850 - val_accuracy: 0.9149
Epoch 7/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0235 - accuracy: 0.9922 - val_loss: 0.3595 - val_accuracy: 0.9230
Epoch 8/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0390 - accuracy: 0.9897 - val_loss: 0.4330 - val_accuracy: 0.9105
Epoch 9/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0566 - accuracy: 0.9795 - val_loss: 0.4464 - val_accuracy: 0.9098
Epoch 10/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0479 - accuracy: 0.9819 - val_loss: 0.3918 - val_accuracy: 0.9221
Epoch 11/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0323 - accuracy: 0.9893 - val_loss: 0.4120 - val_accuracy: 0.9221
Epoch 12/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0275 - accuracy: 0.9932 - val_loss: 0.4325 - val_accuracy: 0.9183
Epoch 13/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0128 - accuracy: 0.9951 - val_loss: 0.3719 - val_accuracy: 0.9268
Epoch 14/50
32/32 [==============================] - 0s 10ms/step - loss: 0.0202 - accuracy: 0.9941 - val_loss: 0.4812 - val_accuracy: 0.9189
Epoch 15/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0185 - accuracy: 0.9922 - val_loss: 0.4556 - val_accuracy: 0.9152
Epoch 16/50
32/32 [==============================] - 0s 10ms/step - loss: 0.0278 - accuracy: 0.9922 - val_loss: 0.5124 - val_accuracy: 0.9164
Epoch 17/50
32/32 [==============================] - 0s 10ms/step - loss: 0.0527 - accuracy: 0.9800 - val_loss: 0.4371 - val_accuracy: 0.9261
Epoch 18/50
32/32 [==============================] - 0s 10ms/step - loss: 0.0410 - accuracy: 0.9888 - val_loss: 0.4689 - val_accuracy: 0.9190
Epoch 19/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0118 - accuracy: 0.9946 - val_loss: 0.4265 - val_accuracy: 0.9308
Epoch 20/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0176 - accuracy: 0.9941 - val_loss: 0.4021 - val_accuracy: 0.9282
Epoch 21/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0063 - accuracy: 0.9976 - val_loss: 0.4458 - val_accuracy: 0.9320
Epoch 22/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0077 - accuracy: 0.9990 - val_loss: 0.4041 - val_accuracy: 0.9355
Epoch 23/50
32/32 [==============================] - 0s 9ms/step - loss: 0.0044 - accuracy: 0.9985 - val_loss: 0.3913 - val_accuracy: 0.9373
Epoch 24/50
32/32 [==============================] - 0s 9ms/step - loss: 9.7255e-04 - accuracy: 0.9995 - val_loss: 0.3986 - val_accuracy: 0.9365
Epoch 25/50
32/32 [==============================] - 0s 9ms/step - loss: 8.1266e-05 - accuracy: 1.0000 - val_loss: 0.4008 - val_accuracy: 0.9370
Epoch 26/50
32/32 [==============================] - 0s 10ms/step - loss: 5.7872e-05 - accuracy: 1.0000 - val_loss: 0.4012 - val_accuracy: 0.9375
Epoch 27/50
32/32 [==============================] - 0s 9ms/step - loss: 4.2473e-05 - accuracy: 1.0000 - val_loss: 0.4018 - val_accuracy: 0.9375
Epoch 28/50
32/32 [==============================] - 0s 9ms/step - loss: 3.7390e-05 - accuracy: 1.0000 - val_loss: 0.4024 - val_accuracy: 0.9374
Epoch 29/50
32/32 [==============================] - 0s 9ms/step - loss: 3.3227e-05 - accuracy: 1.0000 - val_loss: 0.4030 - val_accuracy: 0.9377
Epoch 30/50
32/32 [==============================] - 0s 9ms/step - loss: 3.0141e-05 - accuracy: 1.0000 - val_loss: 0.4036 - val_accuracy: 0.9375
Epoch 31/50
32/32 [==============================] - 0s 9ms/step - loss: 2.7387e-05 - accuracy: 1.0000 - val_loss: 0.4042 - val_accuracy: 0.9375
Epoch 32/50
32/32 [==============================] - 0s 9ms/step - loss: 2.4972e-05 - accuracy: 1.0000 - val_loss: 0.4050 - val_accuracy: 0.9379
Epoch 33/50
32/32 [==============================] - 0s 9ms/step - loss: 2.2938e-05 - accuracy: 1.0000 - val_loss: 0.4060 - val_accuracy: 0.9379
Epoch 34/50
32/32 [==============================] - 0s 9ms/step - loss: 2.1141e-05 - accuracy: 1.0000 - val_loss: 0.4065 - val_accuracy: 0.9380
Epoch 35/50
32/32 [==============================] - 0s 9ms/step - loss: 1.9541e-05 - accuracy: 1.0000 - val_loss: 0.4071 - val_accuracy: 0.9383
Epoch 36/50
32/32 [==============================] - 0s 9ms/step - loss: 1.8105e-05 - accuracy: 1.0000 - val_loss: 0.4077 - val_accuracy: 0.9385
Epoch 37/50
32/32 [==============================] - 0s 9ms/step - loss: 1.6799e-05 - accuracy: 1.0000 - val_loss: 0.4086 - val_accuracy: 0.9384
Epoch 38/50
32/32 [==============================] - 0s 9ms/step - loss: 1.5679e-05 - accuracy: 1.0000 - val_loss: 0.4091 - val_accuracy: 0.9388
Epoch 39/50
32/32 [==============================] - 0s 9ms/step - loss: 1.4537e-05 - accuracy: 1.0000 - val_loss: 0.4097 - val_accuracy: 0.9388
Epoch 40/50
32/32 [==============================] - 0s 9ms/step - loss: 1.3657e-05 - accuracy: 1.0000 - val_loss: 0.4103 - val_accuracy: 0.9387
Epoch 41/50
32/32 [==============================] - 0s 9ms/step - loss: 1.2726e-05 - accuracy: 1.0000 - val_loss: 0.4110 - val_accuracy: 0.9388
Epoch 42/50
32/32 [==============================] - 0s 9ms/step - loss: 1.1920e-05 - accuracy: 1.0000 - val_loss: 0.4115 - val_accuracy: 0.9389
Epoch 43/50
32/32 [==============================] - 0s 9ms/step - loss: 1.1191e-05 - accuracy: 1.0000 - val_loss: 0.4119 - val_accuracy: 0.9388
Epoch 44/50
32/32 [==============================] - 0s 9ms/step - loss: 1.0500e-05 - accuracy: 1.0000 - val_loss: 0.4125 - val_accuracy: 0.9391
Epoch 45/50
32/32 [==============================] - 0s 9ms/step - loss: 9.8507e-06 - accuracy: 1.0000 - val_loss: 0.4135 - val_accuracy: 0.9390
Epoch 46/50
32/32 [==============================] - 0s 10ms/step - loss: 9.3208e-06 - accuracy: 1.0000 - val_loss: 0.4141 - val_accuracy: 0.9388
Epoch 47/50
32/32 [==============================] - 0s 9ms/step - loss: 8.7634e-06 - accuracy: 1.0000 - val_loss: 0.4150 - val_accuracy: 0.9390
Epoch 48/50
32/32 [==============================] - 0s 9ms/step - loss: 8.2344e-06 - accuracy: 1.0000 - val_loss: 0.4159 - val_accuracy: 0.9391
Epoch 49/50
32/32 [==============================] - 0s 9ms/step - loss: 7.7156e-06 - accuracy: 1.0000 - val_loss: 0.4166 - val_accuracy: 0.9392
Epoch 50/50
32/32 [==============================] - 0s 9ms/step - loss: 7.2488e-06 - accuracy: 1.0000 - val_loss: 0.4175 - val_accuracy: 0.9392

Train the model with augmentation:

model_with_aug = make_model()

aug_history = model_with_aug.fit(augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 0s 12ms/step - loss: 2.5928 - accuracy: 0.2554 - val_loss: 1.4829 - val_accuracy: 0.6655
Epoch 2/50
32/32 [==============================] - 0s 9ms/step - loss: 1.5005 - accuracy: 0.4863 - val_loss: 0.8066 - val_accuracy: 0.7645
Epoch 3/50
32/32 [==============================] - 0s 9ms/step - loss: 1.0874 - accuracy: 0.6299 - val_loss: 0.5633 - val_accuracy: 0.8501
Epoch 4/50
32/32 [==============================] - 0s 9ms/step - loss: 0.8656 - accuracy: 0.7124 - val_loss: 0.4659 - val_accuracy: 0.8763
Epoch 5/50
32/32 [==============================] - 0s 9ms/step - loss: 0.6930 - accuracy: 0.7681 - val_loss: 0.3905 - val_accuracy: 0.8834
Epoch 6/50
32/32 [==============================] - 0s 10ms/step - loss: 0.5798 - accuracy: 0.8086 - val_loss: 0.3417 - val_accuracy: 0.8888
Epoch 7/50
32/32 [==============================] - 0s 9ms/step - loss: 0.5635 - accuracy: 0.8115 - val_loss: 0.2855 - val_accuracy: 0.9147
Epoch 8/50
32/32 [==============================] - 0s 10ms/step - loss: 0.4997 - accuracy: 0.8481 - val_loss: 0.2984 - val_accuracy: 0.9006
Epoch 9/50
32/32 [==============================] - 0s 9ms/step - loss: 0.5025 - accuracy: 0.8325 - val_loss: 0.2641 - val_accuracy: 0.9224
Epoch 10/50
32/32 [==============================] - 0s 10ms/step - loss: 0.4275 - accuracy: 0.8535 - val_loss: 0.2512 - val_accuracy: 0.9223
Epoch 11/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3934 - accuracy: 0.8579 - val_loss: 0.2223 - val_accuracy: 0.9301
Epoch 12/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3858 - accuracy: 0.8691 - val_loss: 0.2589 - val_accuracy: 0.9139
Epoch 13/50
32/32 [==============================] - 0s 10ms/step - loss: 0.4134 - accuracy: 0.8638 - val_loss: 0.2400 - val_accuracy: 0.9243
Epoch 14/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3294 - accuracy: 0.8965 - val_loss: 0.2110 - val_accuracy: 0.9336
Epoch 15/50
32/32 [==============================] - 0s 10ms/step - loss: 0.2758 - accuracy: 0.9077 - val_loss: 0.1814 - val_accuracy: 0.9438
Epoch 16/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3392 - accuracy: 0.8804 - val_loss: 0.2024 - val_accuracy: 0.9380
Epoch 17/50
32/32 [==============================] - 0s 9ms/step - loss: 0.3073 - accuracy: 0.8975 - val_loss: 0.1961 - val_accuracy: 0.9416
Epoch 18/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2996 - accuracy: 0.9087 - val_loss: 0.1726 - val_accuracy: 0.9449
Epoch 19/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2959 - accuracy: 0.8970 - val_loss: 0.1669 - val_accuracy: 0.9459
Epoch 20/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2734 - accuracy: 0.9082 - val_loss: 0.1915 - val_accuracy: 0.9417
Epoch 21/50
32/32 [==============================] - 0s 10ms/step - loss: 0.2642 - accuracy: 0.9146 - val_loss: 0.1856 - val_accuracy: 0.9409
Epoch 22/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2513 - accuracy: 0.9170 - val_loss: 0.1782 - val_accuracy: 0.9430
Epoch 23/50
32/32 [==============================] - 0s 10ms/step - loss: 0.2628 - accuracy: 0.9087 - val_loss: 0.1760 - val_accuracy: 0.9421
Epoch 24/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2582 - accuracy: 0.9150 - val_loss: 0.1734 - val_accuracy: 0.9475
Epoch 25/50
32/32 [==============================] - 0s 10ms/step - loss: 0.2411 - accuracy: 0.9165 - val_loss: 0.1707 - val_accuracy: 0.9487
Epoch 26/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2155 - accuracy: 0.9263 - val_loss: 0.1819 - val_accuracy: 0.9474
Epoch 27/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2686 - accuracy: 0.9155 - val_loss: 0.2078 - val_accuracy: 0.9366
Epoch 28/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2362 - accuracy: 0.9185 - val_loss: 0.1770 - val_accuracy: 0.9443
Epoch 29/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2066 - accuracy: 0.9316 - val_loss: 0.1744 - val_accuracy: 0.9458
Epoch 30/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2207 - accuracy: 0.9253 - val_loss: 0.1687 - val_accuracy: 0.9474
Epoch 31/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2001 - accuracy: 0.9331 - val_loss: 0.1642 - val_accuracy: 0.9492
Epoch 32/50
32/32 [==============================] - 0s 10ms/step - loss: 0.1930 - accuracy: 0.9390 - val_loss: 0.1556 - val_accuracy: 0.9512
Epoch 33/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2109 - accuracy: 0.9346 - val_loss: 0.1638 - val_accuracy: 0.9476
Epoch 34/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1868 - accuracy: 0.9399 - val_loss: 0.1478 - val_accuracy: 0.9526
Epoch 35/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1847 - accuracy: 0.9429 - val_loss: 0.1607 - val_accuracy: 0.9496
Epoch 36/50
32/32 [==============================] - 0s 10ms/step - loss: 0.2157 - accuracy: 0.9346 - val_loss: 0.1496 - val_accuracy: 0.9504
Epoch 37/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2056 - accuracy: 0.9321 - val_loss: 0.1584 - val_accuracy: 0.9523
Epoch 38/50
32/32 [==============================] - 0s 9ms/step - loss: 0.2227 - accuracy: 0.9282 - val_loss: 0.1401 - val_accuracy: 0.9583
Epoch 39/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1744 - accuracy: 0.9419 - val_loss: 0.1696 - val_accuracy: 0.9491
Epoch 40/50
32/32 [==============================] - 0s 10ms/step - loss: 0.1803 - accuracy: 0.9424 - val_loss: 0.1615 - val_accuracy: 0.9513
Epoch 41/50
32/32 [==============================] - 0s 10ms/step - loss: 0.1937 - accuracy: 0.9336 - val_loss: 0.1605 - val_accuracy: 0.9516
Epoch 42/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1355 - accuracy: 0.9585 - val_loss: 0.1814 - val_accuracy: 0.9460
Epoch 43/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1559 - accuracy: 0.9478 - val_loss: 0.1551 - val_accuracy: 0.9515
Epoch 44/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1488 - accuracy: 0.9487 - val_loss: 0.1892 - val_accuracy: 0.9412
Epoch 45/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1434 - accuracy: 0.9526 - val_loss: 0.1462 - val_accuracy: 0.9577
Epoch 46/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1771 - accuracy: 0.9404 - val_loss: 0.1426 - val_accuracy: 0.9588
Epoch 47/50
32/32 [==============================] - 0s 10ms/step - loss: 0.1634 - accuracy: 0.9463 - val_loss: 0.1525 - val_accuracy: 0.9555
Epoch 48/50
32/32 [==============================] - 0s 10ms/step - loss: 0.1642 - accuracy: 0.9463 - val_loss: 0.1427 - val_accuracy: 0.9563
Epoch 49/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1486 - accuracy: 0.9497 - val_loss: 0.1559 - val_accuracy: 0.9530
Epoch 50/50
32/32 [==============================] - 0s 9ms/step - loss: 0.1665 - accuracy: 0.9448 - val_loss: 0.1503 - val_accuracy: 0.9533

Conclusion:

In this example, the model trained with augmented data reaches roughly 95% accuracy on the validation data. This is slightly higher (+1%) than the accuracy of the model trained without data augmentation (non-augmented data).

plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "accuracy")
plt.title("Accuracy")
plt.ylim([0.75,1])
(0.75, 1.0)


As for the loss, the model trained without augmentation is clearly overfitting. The model trained with augmented data, while somewhat slower to train, completes training correctly and does not overfit.
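One crude way to quantify that difference (overfit_gap is a hypothetical helper, not part of the tutorial) is the gap between final validation loss and final training loss; the numbers below are taken from the last epochs of the logs above:

```python
def overfit_gap(history):
    # Difference between final validation loss and final training loss.
    # A large positive gap is a symptom of overfitting. `history` is a
    # dict shaped like Keras' History.history.
    return history['val_loss'][-1] - history['loss'][-1]

# Final-epoch losses from the two runs above:
no_aug = {'loss': [7.2488e-06], 'val_loss': [0.4175]}
aug = {'loss': [0.1665], 'val_loss': [0.1503]}

print(overfit_gap(no_aug))  # ~0.42: heavily overfit
print(overfit_gap(aug))     # ~-0.02: no overfitting
```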

plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "loss")
plt.title("Loss")
plt.ylim([0,1])
(0.0, 1.0)
