Data augmentation


Overview

This tutorial demonstrates techniques for processing and augmenting data using tf.image.

Data augmentation is one of the common techniques for improving model results and avoiding overfitting. See the tutorial on overfitting and underfitting for how to address both problems.

Setting up the environment

pip install -q git+https://github.com/tensorflow/docs
try:
  %tensorflow_version 2.x
except:
  pass

import urllib

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import layers
AUTOTUNE = tf.data.experimental.AUTOTUNE

import tensorflow_docs as tfdocs
import tensorflow_docs.plots

import tensorflow_datasets as tfds

import PIL.Image

import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12, 5)

import numpy as np

Let's explore and test the augmentation techniques on a single image first; after that, we'll augment an entire dataset.

Start by downloading this image, by photographer Von.grzanka, to experiment with the augmentation techniques.

image_path = tf.keras.utils.get_file("cat.jpg", "https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg")
PIL.Image.open(image_path)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg
24576/17858 [=========================================] - 0s 0us/step


Load the image and convert it to a Tensor.

image_string = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image_string, channels=3)

We'll use the following function to plot the original image and its augmented counterpart side by side for comparison.

def visualize(original, augmented):
  fig = plt.figure()
  plt.subplot(1,2,1)
  plt.title('Original image')
  plt.imshow(original)

  plt.subplot(1,2,2)
  plt.title('Augmented image')
  plt.imshow(augmented)

Augment a single image

The following sections show several ways to augment the image.

Flip the image

Flip the image vertically or horizontally.

flipped = tf.image.flip_left_right(image)
visualize(image, flipped)

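Under the hood, a horizontal flip simply reverses the width axis of the image array, and a vertical flip reverses the height axis. A minimal NumPy sketch of both (an illustration of the idea, not TensorFlow's implementation; the function names are ours):

```python
import numpy as np

def flip_left_right(image):
    """Mirror an (height, width, channels) array horizontally
    by reversing the width axis, like tf.image.flip_left_right."""
    return image[:, ::-1, :]

def flip_up_down(image):
    """Mirror the array vertically by reversing the height axis,
    like tf.image.flip_up_down."""
    return image[::-1, :, :]
```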

Grayscale the image

Convert the image to grayscale like this:

grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
plt.colorbar()
<matplotlib.colorbar.Colorbar at 0x7fc937b15cc0>

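The RGB-to-grayscale conversion is a weighted sum over the channel axis, using the standard ITU-R BT.601 luma weights. A NumPy sketch (an illustration, not TensorFlow's implementation):

```python
import numpy as np

def rgb_to_grayscale(image):
    """Collapse the RGB channels to one luma channel using the
    ITU-R BT.601 weights; keepdims preserves a single channel axis,
    matching the output shape of tf.image.rgb_to_grayscale."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return (image * weights).sum(axis=-1, keepdims=True)
```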

Saturate the image

Saturate the image by providing a saturation factor, like this:

saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)

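tf.image.adjust_saturation works in HSV space, scaling the saturation channel by the given factor. A common approximation, sketched below in NumPy, is to interpolate between a grayscale version of the image and the original; this is only an illustration of the effect, not TensorFlow's HSV-based implementation:

```python
import numpy as np

def adjust_saturation_approx(image, factor):
    """Approximate saturation adjustment for float images in [0, 1]:
    factor 0 gives grayscale, 1 returns the image, >1 exaggerates color."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    gray = (image * weights).sum(axis=-1, keepdims=True)
    return np.clip(gray + factor * (image - gray), 0.0, 1.0)
```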

Change the image brightness

Change the brightness of the image by providing a brightness factor, like this:

bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)

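For float images in [0, 1], adjusting brightness amounts to adding a constant delta to every pixel and clipping to the valid range. A NumPy sketch of that arithmetic (an illustration, not TensorFlow's implementation):

```python
import numpy as np

def adjust_brightness(image, delta):
    """Add `delta` to every pixel, then clip to [0, 1],
    mimicking tf.image.adjust_brightness on float images."""
    return np.clip(image + delta, 0.0, 1.0)
```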

Rotate the image

Rotate the image by 90 degrees to get another image, like this:

rotated = tf.image.rot90(image)
visualize(image, rotated)

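tf.image.rot90 rotates counter-clockwise in 90-degree increments, controlled by its `k` argument. NumPy's np.rot90 behaves the same way when applied to the (height, width) plane; a sketch:

```python
import numpy as np

def rot90(image, k=1):
    """Rotate an (height, width, channels) array counter-clockwise
    by k * 90 degrees, like tf.image.rot90."""
    return np.rot90(image, k=k, axes=(0, 1))
```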

Center-crop the image

Crop the central part of the image, keeping as much of it as you want, like this:

cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)

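A central crop keeps the given fraction of the image along each spatial axis, discarding an equal border on every side. A NumPy sketch of the arithmetic (an approximation of the idea; tf.image.central_crop's exact rounding may differ):

```python
import numpy as np

def central_crop(image, central_fraction):
    """Keep roughly the central `central_fraction` of the height
    and width of an (height, width, channels) array."""
    h, w = image.shape[:2]
    top = int((h - h * central_fraction) / 2)
    left = int((w - w * central_fraction) / 2)
    return image[top:h - top, left:w - left]
```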

See the tf.image package reference for more data augmentation techniques.

Augment a dataset and train a model on it

The following shows how to train a model on an augmented dataset.

dataset, info = tfds.load('mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']

num_train_examples = info.splits['train'].num_examples
Downloading and preparing dataset mnist/3.0.0 (download: 11.06 MiB, generated: Unknown size, total: 11.06 MiB) to /home/kbuilder/tensorflow_datasets/mnist/3.0.0...

Warning:absl:Dataset mnist is hosted on GCS. It will automatically be downloaded to your
local data directory. If you'd instead prefer to read directly from our public
GCS bucket (recommended if you're running on GCP), you can instead set
data_dir=gs://tfds-data/datasets.




Dataset mnist downloaded and prepared to /home/kbuilder/tensorflow_datasets/mnist/3.0.0. Subsequent calls will reuse this data.

Write the following function, augment, to augment the images, then apply it to the dataset. This way, the data is augmented on the fly.

def convert(image, label):
  image = tf.image.convert_image_dtype(image, tf.float32) # Cast and normalize the image to [0,1]
  return image, label

def augment(image, label):
  image, label = convert(image, label)
  image = tf.image.resize_with_crop_or_pad(image, 34, 34) # Add 6 pixels of padding
  image = tf.image.random_crop(image, size=[28, 28, 1]) # Random crop back to 28x28
  image = tf.image.random_brightness(image, max_delta=0.5) # Random brightness

  return image, label
BATCH_SIZE = 64
# Only use a subset of the data so it's easier to overfit, for this tutorial
NUM_EXAMPLES = 2048
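The pad-then-random-crop pair in augment implements a small random translation: padding 28×28 digits to 34×34 and cropping back to 28×28 shifts each digit by up to 6 pixels. A NumPy sketch of the idea (using zero padding; an illustration with hypothetical names, not TensorFlow's implementation):

```python
import numpy as np

def pad_and_random_crop(image, pad, rng):
    """Pad the image on all sides, then crop a random window of the
    original size -- a random translation of up to 2 * pad pixels."""
    h, w = image.shape[:2]
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)))
    top = rng.integers(0, 2 * pad + 1)    # random vertical offset
    left = rng.integers(0, 2 * pad + 1)   # random horizontal offset
    return padded[top:top + h, left:left + w]

rng = np.random.default_rng(0)
digit = np.ones((28, 28, 1))
shifted = pad_and_random_crop(digit, pad=3, rng=rng)  # 28x28 again, translated
```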

Create the augmented dataset

augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples//4)
    # The augmentation is added here.
    .map(augment, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
) 

And create a non-augmented dataset for comparison.

non_augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples//4)
    # No augmentation.
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
) 

Prepare the validation set. This step is the same whether or not you use data augmentation.

validation_batches = (
    test_dataset
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(2*BATCH_SIZE)
)

Create and compile the model. The neural network consists of two fully connected layers; for simplicity, this model does not use a convolutional layer.

def make_model():
  model = tf.keras.Sequential([
      layers.Flatten(input_shape=(28, 28, 1)),
      layers.Dense(4096, activation='relu'),
      layers.Dense(4096, activation='relu'),
      layers.Dense(10)
  ])
  model.compile(optimizer = 'adam',
                loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
  return model

Train the model without augmentation:

model_without_aug = make_model()

no_aug_history = model_without_aug.fit(non_augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 1s 41ms/step - loss: 0.7673 - accuracy: 0.7617 - val_loss: 0.3442 - val_accuracy: 0.8910
Epoch 2/50
32/32 [==============================] - 1s 26ms/step - loss: 0.1832 - accuracy: 0.9409 - val_loss: 0.2910 - val_accuracy: 0.9133
Epoch 3/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0838 - accuracy: 0.9722 - val_loss: 0.3470 - val_accuracy: 0.9127
Epoch 4/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0516 - accuracy: 0.9839 - val_loss: 0.3726 - val_accuracy: 0.9119
Epoch 5/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0498 - accuracy: 0.9829 - val_loss: 0.3162 - val_accuracy: 0.9266
Epoch 6/50
32/32 [==============================] - 1s 24ms/step - loss: 0.0243 - accuracy: 0.9922 - val_loss: 0.3772 - val_accuracy: 0.9195
Epoch 7/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0232 - accuracy: 0.9932 - val_loss: 0.5004 - val_accuracy: 0.9106
Epoch 8/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0586 - accuracy: 0.9839 - val_loss: 0.3843 - val_accuracy: 0.9171
Epoch 9/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0471 - accuracy: 0.9868 - val_loss: 0.3588 - val_accuracy: 0.9208
Epoch 10/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0307 - accuracy: 0.9912 - val_loss: 0.4678 - val_accuracy: 0.9096
Epoch 11/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0271 - accuracy: 0.9941 - val_loss: 0.3845 - val_accuracy: 0.9263
Epoch 12/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0178 - accuracy: 0.9956 - val_loss: 0.3713 - val_accuracy: 0.9273
Epoch 13/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0093 - accuracy: 0.9961 - val_loss: 0.5013 - val_accuracy: 0.9152
Epoch 14/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0158 - accuracy: 0.9951 - val_loss: 0.3496 - val_accuracy: 0.9355
Epoch 15/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0118 - accuracy: 0.9971 - val_loss: 0.3756 - val_accuracy: 0.9315
Epoch 16/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0153 - accuracy: 0.9951 - val_loss: 0.4947 - val_accuracy: 0.9179
Epoch 17/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0931 - accuracy: 0.9795 - val_loss: 0.4475 - val_accuracy: 0.9084
Epoch 18/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0708 - accuracy: 0.9824 - val_loss: 0.4665 - val_accuracy: 0.9101
Epoch 19/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0378 - accuracy: 0.9893 - val_loss: 0.4144 - val_accuracy: 0.9233
Epoch 20/50
32/32 [==============================] - 1s 27ms/step - loss: 0.0215 - accuracy: 0.9941 - val_loss: 0.3426 - val_accuracy: 0.9327
Epoch 21/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0132 - accuracy: 0.9966 - val_loss: 0.3291 - val_accuracy: 0.9363
Epoch 22/50
32/32 [==============================] - 1s 26ms/step - loss: 0.0207 - accuracy: 0.9951 - val_loss: 0.3948 - val_accuracy: 0.9270
Epoch 23/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0076 - accuracy: 0.9976 - val_loss: 0.3342 - val_accuracy: 0.9376
Epoch 24/50
32/32 [==============================] - 1s 25ms/step - loss: 9.6848e-04 - accuracy: 1.0000 - val_loss: 0.3548 - val_accuracy: 0.9357
Epoch 25/50
32/32 [==============================] - 1s 25ms/step - loss: 0.0013 - accuracy: 0.9995 - val_loss: 0.3448 - val_accuracy: 0.9378
Epoch 26/50
32/32 [==============================] - 1s 25ms/step - loss: 8.2853e-05 - accuracy: 1.0000 - val_loss: 0.3473 - val_accuracy: 0.9389
Epoch 27/50
32/32 [==============================] - 1s 25ms/step - loss: 6.9202e-05 - accuracy: 1.0000 - val_loss: 0.3490 - val_accuracy: 0.9391
Epoch 28/50
32/32 [==============================] - 1s 25ms/step - loss: 5.4728e-05 - accuracy: 1.0000 - val_loss: 0.3507 - val_accuracy: 0.9393
Epoch 29/50
32/32 [==============================] - 1s 25ms/step - loss: 4.4680e-05 - accuracy: 1.0000 - val_loss: 0.3519 - val_accuracy: 0.9397
Epoch 30/50
32/32 [==============================] - 1s 27ms/step - loss: 3.7992e-05 - accuracy: 1.0000 - val_loss: 0.3542 - val_accuracy: 0.9395
Epoch 31/50
32/32 [==============================] - 1s 26ms/step - loss: 3.2359e-05 - accuracy: 1.0000 - val_loss: 0.3558 - val_accuracy: 0.9395
Epoch 32/50
32/32 [==============================] - 1s 26ms/step - loss: 2.8061e-05 - accuracy: 1.0000 - val_loss: 0.3581 - val_accuracy: 0.9398
Epoch 33/50
32/32 [==============================] - 1s 25ms/step - loss: 2.4284e-05 - accuracy: 1.0000 - val_loss: 0.3599 - val_accuracy: 0.9403
Epoch 34/50
32/32 [==============================] - 1s 24ms/step - loss: 2.1361e-05 - accuracy: 1.0000 - val_loss: 0.3618 - val_accuracy: 0.9406
Epoch 35/50
32/32 [==============================] - 1s 24ms/step - loss: 1.8855e-05 - accuracy: 1.0000 - val_loss: 0.3640 - val_accuracy: 0.9408
Epoch 36/50
32/32 [==============================] - 1s 25ms/step - loss: 1.6711e-05 - accuracy: 1.0000 - val_loss: 0.3656 - val_accuracy: 0.9407
Epoch 37/50
32/32 [==============================] - 1s 25ms/step - loss: 1.4918e-05 - accuracy: 1.0000 - val_loss: 0.3673 - val_accuracy: 0.9411
Epoch 38/50
32/32 [==============================] - 1s 25ms/step - loss: 1.3353e-05 - accuracy: 1.0000 - val_loss: 0.3692 - val_accuracy: 0.9414
Epoch 39/50
32/32 [==============================] - 1s 26ms/step - loss: 1.2004e-05 - accuracy: 1.0000 - val_loss: 0.3711 - val_accuracy: 0.9411
Epoch 40/50
32/32 [==============================] - 1s 24ms/step - loss: 1.0868e-05 - accuracy: 1.0000 - val_loss: 0.3728 - val_accuracy: 0.9412
Epoch 41/50
32/32 [==============================] - 1s 25ms/step - loss: 9.8916e-06 - accuracy: 1.0000 - val_loss: 0.3745 - val_accuracy: 0.9410
Epoch 42/50
32/32 [==============================] - 1s 25ms/step - loss: 8.9848e-06 - accuracy: 1.0000 - val_loss: 0.3759 - val_accuracy: 0.9409
Epoch 43/50
32/32 [==============================] - 1s 25ms/step - loss: 8.2414e-06 - accuracy: 1.0000 - val_loss: 0.3779 - val_accuracy: 0.9411
Epoch 44/50
32/32 [==============================] - 1s 24ms/step - loss: 7.5833e-06 - accuracy: 1.0000 - val_loss: 0.3796 - val_accuracy: 0.9409
Epoch 45/50
32/32 [==============================] - 1s 24ms/step - loss: 6.9848e-06 - accuracy: 1.0000 - val_loss: 0.3809 - val_accuracy: 0.9408
Epoch 46/50
32/32 [==============================] - 1s 25ms/step - loss: 6.4766e-06 - accuracy: 1.0000 - val_loss: 0.3826 - val_accuracy: 0.9409
Epoch 47/50
32/32 [==============================] - 1s 25ms/step - loss: 5.9964e-06 - accuracy: 1.0000 - val_loss: 0.3843 - val_accuracy: 0.9411
Epoch 48/50
32/32 [==============================] - 1s 25ms/step - loss: 5.5681e-06 - accuracy: 1.0000 - val_loss: 0.3854 - val_accuracy: 0.9414
Epoch 49/50
32/32 [==============================] - 1s 25ms/step - loss: 5.1967e-06 - accuracy: 1.0000 - val_loss: 0.3871 - val_accuracy: 0.9414
Epoch 50/50
32/32 [==============================] - 1s 25ms/step - loss: 4.8631e-06 - accuracy: 1.0000 - val_loss: 0.3885 - val_accuracy: 0.9414

Train the model with data augmentation:

model_with_aug = make_model()

aug_history = model_with_aug.fit(augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 1s 38ms/step - loss: 2.2663 - accuracy: 0.3071 - val_loss: 1.0310 - val_accuracy: 0.7379
Epoch 2/50
32/32 [==============================] - 1s 25ms/step - loss: 1.3325 - accuracy: 0.5503 - val_loss: 0.6800 - val_accuracy: 0.7854
Epoch 3/50
32/32 [==============================] - 1s 25ms/step - loss: 0.9562 - accuracy: 0.6934 - val_loss: 0.4634 - val_accuracy: 0.8595
Epoch 4/50
32/32 [==============================] - 1s 25ms/step - loss: 0.7901 - accuracy: 0.7402 - val_loss: 0.4024 - val_accuracy: 0.8869
Epoch 5/50
32/32 [==============================] - 1s 25ms/step - loss: 0.6696 - accuracy: 0.7866 - val_loss: 0.3425 - val_accuracy: 0.8931
Epoch 6/50
32/32 [==============================] - 1s 26ms/step - loss: 0.5754 - accuracy: 0.8071 - val_loss: 0.3047 - val_accuracy: 0.9109
Epoch 7/50
32/32 [==============================] - 1s 27ms/step - loss: 0.5223 - accuracy: 0.8354 - val_loss: 0.3190 - val_accuracy: 0.8996
Epoch 8/50
32/32 [==============================] - 1s 25ms/step - loss: 0.4891 - accuracy: 0.8394 - val_loss: 0.2583 - val_accuracy: 0.9199
Epoch 9/50
32/32 [==============================] - 1s 26ms/step - loss: 0.4841 - accuracy: 0.8491 - val_loss: 0.2739 - val_accuracy: 0.9175
Epoch 10/50
32/32 [==============================] - 1s 26ms/step - loss: 0.4402 - accuracy: 0.8589 - val_loss: 0.2224 - val_accuracy: 0.9353
Epoch 11/50
32/32 [==============================] - 1s 26ms/step - loss: 0.4095 - accuracy: 0.8667 - val_loss: 0.2144 - val_accuracy: 0.9345
Epoch 12/50
32/32 [==============================] - 1s 26ms/step - loss: 0.3889 - accuracy: 0.8730 - val_loss: 0.2193 - val_accuracy: 0.9282
Epoch 13/50
32/32 [==============================] - 1s 25ms/step - loss: 0.3431 - accuracy: 0.8838 - val_loss: 0.2002 - val_accuracy: 0.9358
Epoch 14/50
32/32 [==============================] - 1s 25ms/step - loss: 0.3078 - accuracy: 0.8989 - val_loss: 0.2115 - val_accuracy: 0.9303
Epoch 15/50
32/32 [==============================] - 1s 25ms/step - loss: 0.3767 - accuracy: 0.8735 - val_loss: 0.2184 - val_accuracy: 0.9302
Epoch 16/50
32/32 [==============================] - 1s 25ms/step - loss: 0.3488 - accuracy: 0.8813 - val_loss: 0.2550 - val_accuracy: 0.9150
Epoch 17/50
32/32 [==============================] - 1s 27ms/step - loss: 0.3296 - accuracy: 0.8984 - val_loss: 0.2040 - val_accuracy: 0.9338
Epoch 18/50
32/32 [==============================] - 1s 26ms/step - loss: 0.2740 - accuracy: 0.9087 - val_loss: 0.2114 - val_accuracy: 0.9361
Epoch 19/50
32/32 [==============================] - 1s 26ms/step - loss: 0.2664 - accuracy: 0.9150 - val_loss: 0.2079 - val_accuracy: 0.9334
Epoch 20/50
32/32 [==============================] - 1s 27ms/step - loss: 0.2750 - accuracy: 0.9087 - val_loss: 0.1981 - val_accuracy: 0.9409
Epoch 21/50
32/32 [==============================] - 1s 27ms/step - loss: 0.2320 - accuracy: 0.9297 - val_loss: 0.1690 - val_accuracy: 0.9483
Epoch 22/50
32/32 [==============================] - 1s 25ms/step - loss: 0.2661 - accuracy: 0.9092 - val_loss: 0.1952 - val_accuracy: 0.9406
Epoch 23/50
32/32 [==============================] - 1s 25ms/step - loss: 0.2927 - accuracy: 0.9023 - val_loss: 0.1811 - val_accuracy: 0.9434
Epoch 24/50
32/32 [==============================] - 1s 25ms/step - loss: 0.2661 - accuracy: 0.9160 - val_loss: 0.1687 - val_accuracy: 0.9485
Epoch 25/50
32/32 [==============================] - 1s 25ms/step - loss: 0.2238 - accuracy: 0.9287 - val_loss: 0.1693 - val_accuracy: 0.9480
Epoch 26/50
32/32 [==============================] - 1s 26ms/step - loss: 0.2609 - accuracy: 0.9097 - val_loss: 0.1787 - val_accuracy: 0.9485
Epoch 27/50
32/32 [==============================] - 1s 28ms/step - loss: 0.2263 - accuracy: 0.9238 - val_loss: 0.1720 - val_accuracy: 0.9464
Epoch 28/50
32/32 [==============================] - 1s 25ms/step - loss: 0.2779 - accuracy: 0.9077 - val_loss: 0.1614 - val_accuracy: 0.9491
Epoch 29/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1916 - accuracy: 0.9365 - val_loss: 0.1633 - val_accuracy: 0.9486
Epoch 30/50
32/32 [==============================] - 1s 26ms/step - loss: 0.2574 - accuracy: 0.9087 - val_loss: 0.1556 - val_accuracy: 0.9522
Epoch 31/50
32/32 [==============================] - 1s 26ms/step - loss: 0.2426 - accuracy: 0.9165 - val_loss: 0.1844 - val_accuracy: 0.9458
Epoch 32/50
32/32 [==============================] - 1s 27ms/step - loss: 0.2319 - accuracy: 0.9233 - val_loss: 0.1488 - val_accuracy: 0.9567
Epoch 33/50
32/32 [==============================] - 1s 28ms/step - loss: 0.1952 - accuracy: 0.9351 - val_loss: 0.1810 - val_accuracy: 0.9440
Epoch 34/50
32/32 [==============================] - 1s 26ms/step - loss: 0.1639 - accuracy: 0.9492 - val_loss: 0.1645 - val_accuracy: 0.9480
Epoch 35/50
32/32 [==============================] - 1s 26ms/step - loss: 0.2298 - accuracy: 0.9253 - val_loss: 0.1602 - val_accuracy: 0.9525
Epoch 36/50
32/32 [==============================] - 1s 26ms/step - loss: 0.1805 - accuracy: 0.9458 - val_loss: 0.1788 - val_accuracy: 0.9440
Epoch 37/50
32/32 [==============================] - 1s 26ms/step - loss: 0.1890 - accuracy: 0.9424 - val_loss: 0.1554 - val_accuracy: 0.9531
Epoch 38/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1707 - accuracy: 0.9390 - val_loss: 0.1780 - val_accuracy: 0.9477
Epoch 39/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1602 - accuracy: 0.9546 - val_loss: 0.1570 - val_accuracy: 0.9519
Epoch 40/50
32/32 [==============================] - 1s 25ms/step - loss: 0.2347 - accuracy: 0.9233 - val_loss: 0.1650 - val_accuracy: 0.9490
Epoch 41/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1849 - accuracy: 0.9443 - val_loss: 0.1659 - val_accuracy: 0.9501
Epoch 42/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1545 - accuracy: 0.9468 - val_loss: 0.1804 - val_accuracy: 0.9447
Epoch 43/50
32/32 [==============================] - 1s 27ms/step - loss: 0.1453 - accuracy: 0.9502 - val_loss: 0.1526 - val_accuracy: 0.9560
Epoch 44/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1495 - accuracy: 0.9473 - val_loss: 0.1785 - val_accuracy: 0.9480
Epoch 45/50
32/32 [==============================] - 1s 27ms/step - loss: 0.1597 - accuracy: 0.9502 - val_loss: 0.1569 - val_accuracy: 0.9532
Epoch 46/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1878 - accuracy: 0.9404 - val_loss: 0.1823 - val_accuracy: 0.9481
Epoch 47/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1525 - accuracy: 0.9429 - val_loss: 0.1664 - val_accuracy: 0.9516
Epoch 48/50
32/32 [==============================] - 1s 26ms/step - loss: 0.1692 - accuracy: 0.9473 - val_loss: 0.1440 - val_accuracy: 0.9586
Epoch 49/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1848 - accuracy: 0.9414 - val_loss: 0.1499 - val_accuracy: 0.9548
Epoch 50/50
32/32 [==============================] - 1s 25ms/step - loss: 0.1328 - accuracy: 0.9536 - val_loss: 0.1569 - val_accuracy: 0.9543

Conclusion:

In this example, the model trained on augmented data reaches about 95% accuracy on the validation data. This is slightly higher (+1%) than the accuracy of the model trained without data augmentation.

plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "accuracy")
plt.title("Accuracy")
plt.ylim([0.75,1])
(0.75, 1.0)


As for the loss, the model trained without augmentation clearly suffers from overfitting. In contrast, the model trained on augmented data, while somewhat slower, trains properly and does not overfit.

plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "loss")
plt.title("Loss")
plt.ylim([0,1])
(0.0, 1.0)
