Introduction to the Keras Tuner


Overview

The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for your machine learning (ML) application is called hyperparameter tuning or hypertuning.

Hyperparameters are the variables that govern the training process and the topology of an ML model. These variables remain constant over the training process and directly impact the performance of your ML program. Hyperparameters are of two types:

  1. Model hyperparameters which influence model selection, such as the number and width of hidden layers
  2. Algorithm hyperparameters which influence the speed and quality of the learning algorithm, such as the learning rate for Stochastic Gradient Descent (SGD) and the number of nearest neighbors for a k Nearest Neighbors (KNN) classifier

In this tutorial, you will use the Keras Tuner to perform hypertuning for an image classification application.

Setup

import tensorflow as tf
from tensorflow import keras

Install and import the Keras Tuner.

pip install -q -U keras-tuner
import keras_tuner as kt

Download and prepare the dataset

In this tutorial, you will use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the Fashion MNIST dataset.

Load the data.

(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()
# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
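
As a quick, optional sanity check (not part of the original tutorial), you can confirm the dataset shapes and the normalized pixel range before tuning:

# Optional: confirm dataset shapes and normalized pixel range
print(img_train.shape, label_train.shape)  # (60000, 28, 28) (60000,)
print(img_test.shape, label_test.shape)    # (10000, 28, 28) (10000,)
print(img_train.min(), img_train.max())    # 0.0 1.0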

Define the model

When you build a model for hypertuning, you also define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a hypermodel.

You can define a hypermodel through two approaches:

  • By using a model builder function
  • By subclassing the HyperModel class of the Keras Tuner API

You can also use two pre-defined HyperModel classes - HyperXception and HyperResNet - for computer vision applications.
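
For reference, a minimal sketch of the subclassing approach could look like the following (illustrative only; the class name SimpleHyperModel is made up, and this tutorial uses the model builder function instead):

# Illustrative sketch: defining a hypermodel by subclassing kt.HyperModel
class SimpleHyperModel(kt.HyperModel):
  def build(self, hp):
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=(28, 28)))
    # The hyperparameter search space is defined inside build()
    model.add(keras.layers.Dense(hp.Int('units', min_value=32, max_value=512, step=32),
                                 activation='relu'))
    model.add(keras.layers.Dense(10))
    model.compile(optimizer='adam',
                  loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model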

In this tutorial, you use a model builder function to define the image classification model. The model builder function returns a compiled model and uses hyperparameters you define inline to hypertune the model.

def model_builder(hp):
  model = keras.Sequential()
  model.add(keras.layers.Flatten(input_shape=(28, 28)))

  # Tune the number of units in the first Dense layer
  # Choose an optimal value between 32-512
  hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
  model.add(keras.layers.Dense(units=hp_units, activation='relu'))
  model.add(keras.layers.Dense(10))

  # Tune the learning rate for the optimizer
  # Choose an optimal value from 0.01, 0.001, or 0.0001
  hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])

  model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),
                loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

  return model
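
Before running a search, you can optionally verify that the builder produces a valid model by calling it with a fresh HyperParameters object, which fills each hyperparameter with its default value (a small sketch; kt.HyperParameters is part of the Keras Tuner API, and test_model is just an illustrative name):

# Optional: build one model with default hyperparameter values to verify the function
test_model = model_builder(kt.HyperParameters())
test_model.summary()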

Instantiate the tuner and perform hypertuning

Instantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available - RandomSearch, Hyperband, BayesianOptimization, and Sklearn. In this tutorial, you use the Hyperband tuner.

To instantiate the Hyperband tuner, you must specify the hypermodel, the objective to optimize, and the maximum number of epochs to train (max_epochs).

tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3,
                     directory='my_dir',
                     project_name='intro_to_kt')

The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. This is done using a sports championship style bracket. The algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of models to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log_factor(max_epochs) and rounding it up to the nearest integer.
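
For example, with the max_epochs=10 and factor=3 used above, the expression described in the previous paragraph works out as follows (a small illustrative calculation only):

import math

max_epochs, factor = 10, 3
# 1 + log_factor(max_epochs), rounded up to the nearest integer
num_models = math.ceil(1 + math.log(max_epochs, factor))
print(num_models)  # 4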

Create a callback to stop training early after reaching a certain value for the validation loss.

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

Run the hyperparameter search. The arguments for the search method are the same as those used for tf.keras.Model.fit in addition to the callback above.

tuner.search(img_train, label_train, epochs=50, validation_split=0.2, callbacks=[stop_early])

# Get the optimal hyperparameters
best_hps=tuner.get_best_hyperparameters(num_trials=1)[0]

print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
Trial 30 Complete [00h 00m 35s]
val_accuracy: 0.8925833106040955

Best val_accuracy So Far: 0.8925833106040955
Total elapsed time: 00h 07m 26s
INFO:tensorflow:Oracle triggered exit

The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is 320 and the optimal learning rate for the optimizer
is 0.001.
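
You can also print a summary of the best trials recorded during the search (results_summary is part of the Keras Tuner API; this is an optional extra step, not part of the original tutorial):

# Optional: inspect the top trials found during the search
tuner.results_summary()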

Train the model

Find the optimal number of epochs to train the model with the hyperparameters obtained from the search.

# Build the model with the optimal hyperparameters and train it on the data for 50 epochs
model = tuner.hypermodel.build(best_hps)
history = model.fit(img_train, label_train, epochs=50, validation_split=0.2)

val_acc_per_epoch = history.history['val_accuracy']
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print('Best epoch: %d' % (best_epoch,))
Epoch 1/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.4988 - accuracy: 0.8232 - val_loss: 0.4142 - val_accuracy: 0.8517
Epoch 2/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3717 - accuracy: 0.8646 - val_loss: 0.3437 - val_accuracy: 0.8773
Epoch 3/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3317 - accuracy: 0.8779 - val_loss: 0.3806 - val_accuracy: 0.8639
Epoch 4/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3079 - accuracy: 0.8867 - val_loss: 0.3321 - val_accuracy: 0.8801
Epoch 5/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2882 - accuracy: 0.8943 - val_loss: 0.3313 - val_accuracy: 0.8806
Epoch 6/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2727 - accuracy: 0.8977 - val_loss: 0.3152 - val_accuracy: 0.8857
Epoch 7/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2610 - accuracy: 0.9016 - val_loss: 0.3225 - val_accuracy: 0.8873
Epoch 8/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2474 - accuracy: 0.9060 - val_loss: 0.3198 - val_accuracy: 0.8867
Epoch 9/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2385 - accuracy: 0.9105 - val_loss: 0.3266 - val_accuracy: 0.8822
Epoch 10/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2295 - accuracy: 0.9142 - val_loss: 0.3382 - val_accuracy: 0.8835
Epoch 11/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2170 - accuracy: 0.9185 - val_loss: 0.3215 - val_accuracy: 0.8885
Epoch 12/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2102 - accuracy: 0.9202 - val_loss: 0.3194 - val_accuracy: 0.8923
Epoch 13/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2036 - accuracy: 0.9235 - val_loss: 0.3176 - val_accuracy: 0.8901
Epoch 14/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1955 - accuracy: 0.9272 - val_loss: 0.3269 - val_accuracy: 0.8912
Epoch 15/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1881 - accuracy: 0.9292 - val_loss: 0.3391 - val_accuracy: 0.8878
Epoch 16/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1821 - accuracy: 0.9321 - val_loss: 0.3272 - val_accuracy: 0.8920
Epoch 17/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1771 - accuracy: 0.9332 - val_loss: 0.3536 - val_accuracy: 0.8876
Epoch 18/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1697 - accuracy: 0.9363 - val_loss: 0.3395 - val_accuracy: 0.8927
Epoch 19/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1652 - accuracy: 0.9374 - val_loss: 0.3464 - val_accuracy: 0.8937
Epoch 20/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1606 - accuracy: 0.9392 - val_loss: 0.3576 - val_accuracy: 0.8888
Epoch 21/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1539 - accuracy: 0.9417 - val_loss: 0.3724 - val_accuracy: 0.8867
Epoch 22/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1503 - accuracy: 0.9435 - val_loss: 0.3607 - val_accuracy: 0.8954
Epoch 23/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1450 - accuracy: 0.9454 - val_loss: 0.3525 - val_accuracy: 0.8919
Epoch 24/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1398 - accuracy: 0.9473 - val_loss: 0.3745 - val_accuracy: 0.8919
Epoch 25/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1370 - accuracy: 0.9478 - val_loss: 0.3616 - val_accuracy: 0.8941
Epoch 26/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1334 - accuracy: 0.9498 - val_loss: 0.3866 - val_accuracy: 0.8956
Epoch 27/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1282 - accuracy: 0.9519 - val_loss: 0.3947 - val_accuracy: 0.8924
Epoch 28/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1254 - accuracy: 0.9538 - val_loss: 0.4223 - val_accuracy: 0.8870
Epoch 29/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1222 - accuracy: 0.9536 - val_loss: 0.3805 - val_accuracy: 0.8898
Epoch 30/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1179 - accuracy: 0.9546 - val_loss: 0.4052 - val_accuracy: 0.8942
Epoch 31/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1162 - accuracy: 0.9560 - val_loss: 0.3909 - val_accuracy: 0.8955
Epoch 32/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1152 - accuracy: 0.9572 - val_loss: 0.4160 - val_accuracy: 0.8908
Epoch 33/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1100 - accuracy: 0.9583 - val_loss: 0.4280 - val_accuracy: 0.8938
Epoch 34/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1055 - accuracy: 0.9603 - val_loss: 0.4148 - val_accuracy: 0.8963
Epoch 35/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1044 - accuracy: 0.9606 - val_loss: 0.4302 - val_accuracy: 0.8921
Epoch 36/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1046 - accuracy: 0.9605 - val_loss: 0.4205 - val_accuracy: 0.8947
Epoch 37/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0993 - accuracy: 0.9621 - val_loss: 0.4551 - val_accuracy: 0.8875
Epoch 38/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0972 - accuracy: 0.9635 - val_loss: 0.4622 - val_accuracy: 0.8914
Epoch 39/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0951 - accuracy: 0.9642 - val_loss: 0.4423 - val_accuracy: 0.8950
Epoch 40/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0947 - accuracy: 0.9637 - val_loss: 0.4498 - val_accuracy: 0.8948
Epoch 41/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0876 - accuracy: 0.9675 - val_loss: 0.4694 - val_accuracy: 0.8959
Epoch 42/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0902 - accuracy: 0.9657 - val_loss: 0.4778 - val_accuracy: 0.8938
Epoch 43/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0876 - accuracy: 0.9676 - val_loss: 0.4716 - val_accuracy: 0.8911
Epoch 44/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0884 - accuracy: 0.9674 - val_loss: 0.4827 - val_accuracy: 0.8918
Epoch 45/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0764 - accuracy: 0.9715 - val_loss: 0.5008 - val_accuracy: 0.8953
Epoch 46/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0823 - accuracy: 0.9695 - val_loss: 0.5157 - val_accuracy: 0.8874
Epoch 47/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0789 - accuracy: 0.9704 - val_loss: 0.5198 - val_accuracy: 0.8910
Epoch 48/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0778 - accuracy: 0.9716 - val_loss: 0.5031 - val_accuracy: 0.8932
Epoch 49/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0747 - accuracy: 0.9718 - val_loss: 0.4982 - val_accuracy: 0.8953
Epoch 50/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0786 - accuracy: 0.9706 - val_loss: 0.5198 - val_accuracy: 0.8976
Best epoch: 50

Re-instantiate the hypermodel and train it with the optimal number of epochs from above.

hypermodel = tuner.hypermodel.build(best_hps)

# Retrain the model
hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2)
Epoch 1/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.4987 - accuracy: 0.8236 - val_loss: 0.4065 - val_accuracy: 0.8488
Epoch 2/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3738 - accuracy: 0.8652 - val_loss: 0.3847 - val_accuracy: 0.8613
Epoch 3/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3344 - accuracy: 0.8775 - val_loss: 0.3568 - val_accuracy: 0.8750
Epoch 4/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.3065 - accuracy: 0.8865 - val_loss: 0.3326 - val_accuracy: 0.8811
Epoch 5/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2880 - accuracy: 0.8930 - val_loss: 0.3208 - val_accuracy: 0.8843
Epoch 6/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2744 - accuracy: 0.8981 - val_loss: 0.3313 - val_accuracy: 0.8810
Epoch 7/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2585 - accuracy: 0.9019 - val_loss: 0.3352 - val_accuracy: 0.8790
Epoch 8/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2445 - accuracy: 0.9078 - val_loss: 0.3151 - val_accuracy: 0.8849
Epoch 9/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2366 - accuracy: 0.9113 - val_loss: 0.3167 - val_accuracy: 0.8881
Epoch 10/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2241 - accuracy: 0.9162 - val_loss: 0.3258 - val_accuracy: 0.8857
Epoch 11/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.2158 - accuracy: 0.9194 - val_loss: 0.3087 - val_accuracy: 0.8927
Epoch 12/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2091 - accuracy: 0.9218 - val_loss: 0.3287 - val_accuracy: 0.8904
Epoch 13/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1998 - accuracy: 0.9243 - val_loss: 0.3131 - val_accuracy: 0.8950
Epoch 14/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1937 - accuracy: 0.9271 - val_loss: 0.3177 - val_accuracy: 0.8925
Epoch 15/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1859 - accuracy: 0.9303 - val_loss: 0.3334 - val_accuracy: 0.8918
Epoch 16/50
1500/1500 [==============================] - 4s 2ms/step - loss: 0.1779 - accuracy: 0.9334 - val_loss: 0.3299 - val_accuracy: 0.8929
Epoch 17/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1743 - accuracy: 0.9348 - val_loss: 0.3391 - val_accuracy: 0.8920
Epoch 18/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1687 - accuracy: 0.9366 - val_loss: 0.3302 - val_accuracy: 0.8974
Epoch 19/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1628 - accuracy: 0.9385 - val_loss: 0.3641 - val_accuracy: 0.8868
Epoch 20/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1597 - accuracy: 0.9405 - val_loss: 0.3523 - val_accuracy: 0.8942
Epoch 21/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1534 - accuracy: 0.9434 - val_loss: 0.3584 - val_accuracy: 0.8951
Epoch 22/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1507 - accuracy: 0.9441 - val_loss: 0.3577 - val_accuracy: 0.8923
Epoch 23/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1453 - accuracy: 0.9452 - val_loss: 0.3807 - val_accuracy: 0.8957
Epoch 24/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1392 - accuracy: 0.9476 - val_loss: 0.3711 - val_accuracy: 0.8960
Epoch 25/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1364 - accuracy: 0.9494 - val_loss: 0.3731 - val_accuracy: 0.8940
Epoch 26/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1315 - accuracy: 0.9511 - val_loss: 0.3805 - val_accuracy: 0.8932
Epoch 27/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1319 - accuracy: 0.9507 - val_loss: 0.3966 - val_accuracy: 0.8880
Epoch 28/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1266 - accuracy: 0.9534 - val_loss: 0.3994 - val_accuracy: 0.8920
Epoch 29/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1207 - accuracy: 0.9546 - val_loss: 0.3918 - val_accuracy: 0.8959
Epoch 30/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1174 - accuracy: 0.9567 - val_loss: 0.4043 - val_accuracy: 0.8928
Epoch 31/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1191 - accuracy: 0.9546 - val_loss: 0.4114 - val_accuracy: 0.8951
Epoch 32/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1140 - accuracy: 0.9563 - val_loss: 0.4149 - val_accuracy: 0.8962
Epoch 33/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1121 - accuracy: 0.9574 - val_loss: 0.4373 - val_accuracy: 0.8931
Epoch 34/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1085 - accuracy: 0.9598 - val_loss: 0.4353 - val_accuracy: 0.8939
Epoch 35/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1056 - accuracy: 0.9591 - val_loss: 0.4325 - val_accuracy: 0.8938
Epoch 36/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1066 - accuracy: 0.9600 - val_loss: 0.4700 - val_accuracy: 0.8899
Epoch 37/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1019 - accuracy: 0.9615 - val_loss: 0.4440 - val_accuracy: 0.8947
Epoch 38/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0973 - accuracy: 0.9635 - val_loss: 0.4481 - val_accuracy: 0.8959
Epoch 39/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1008 - accuracy: 0.9622 - val_loss: 0.4772 - val_accuracy: 0.8954
Epoch 40/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0919 - accuracy: 0.9653 - val_loss: 0.4723 - val_accuracy: 0.8916
Epoch 41/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0921 - accuracy: 0.9653 - val_loss: 0.4867 - val_accuracy: 0.8953
Epoch 42/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0919 - accuracy: 0.9657 - val_loss: 0.4710 - val_accuracy: 0.8936
Epoch 43/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0873 - accuracy: 0.9664 - val_loss: 0.4844 - val_accuracy: 0.8905
Epoch 44/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0884 - accuracy: 0.9669 - val_loss: 0.4972 - val_accuracy: 0.8963
Epoch 45/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0849 - accuracy: 0.9685 - val_loss: 0.4790 - val_accuracy: 0.8969
Epoch 46/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0831 - accuracy: 0.9687 - val_loss: 0.5028 - val_accuracy: 0.8945
Epoch 47/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0793 - accuracy: 0.9698 - val_loss: 0.5031 - val_accuracy: 0.8945
Epoch 48/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0806 - accuracy: 0.9693 - val_loss: 0.5065 - val_accuracy: 0.8990
Epoch 49/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0751 - accuracy: 0.9714 - val_loss: 0.5719 - val_accuracy: 0.8924
Epoch 50/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0785 - accuracy: 0.9707 - val_loss: 0.5123 - val_accuracy: 0.8985
<keras.callbacks.History at 0x7fb39810a150>

To finish this tutorial, evaluate the hypermodel on the test data.

eval_result = hypermodel.evaluate(img_test, label_test)
print("[test loss, test accuracy]:", eval_result)
313/313 [==============================] - 1s 2ms/step - loss: 0.5632 - accuracy: 0.8908
[test loss, test accuracy]: [0.5631944537162781, 0.8907999992370605]

The my_dir/intro_to_kt directory contains detailed logs and checkpoints for every trial (model configuration) run during the hyperparameter search. If you re-run the hyperparameter search, the Keras Tuner uses the existing state from these logs to resume the search. To disable this behavior, pass an additional overwrite=True argument while instantiating the tuner.
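
For example, a fresh search could be forced like this (a minimal sketch that mirrors the tuner created earlier, with the extra overwrite argument):

# Pass overwrite=True to discard previous results and start the search from scratch
tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3,
                     overwrite=True,
                     directory='my_dir',
                     project_name='intro_to_kt')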

Summary

In this tutorial, you learned how to use the Keras Tuner to tune hyperparameters for a model. To learn more about the Keras Tuner, check out these additional resources:

  • Keras Tuner on the TensorFlow blog
  • Keras Tuner website

Also check out the HParams Dashboard in TensorBoard to interactively tune your model hyperparameters.