
Introduction to the Keras Tuner


Overview

The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for your machine learning (ML) application is called hyperparameter tuning or hypertuning.

Hyperparameters are the variables that govern the training process and the topology of an ML model. These variables remain constant over the training process and directly impact the performance of your ML program. Hyperparameters are of two types:

  1. Model hyperparameters, which influence model selection, such as the number and width of the hidden layers
  2. Algorithm hyperparameters, which influence the speed and quality of the learning algorithm, such as the learning rate for stochastic gradient descent (SGD) and the number of nearest neighbors for a k-nearest neighbors (KNN) classifier.
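Both kinds of hyperparameters combine into a single search space, which grows multiplicatively. A toy sketch (the names below are illustrative only, not part of the Keras Tuner API) of how quickly exhaustive search becomes expensive:

```python
from itertools import product

# Toy search space mixing the two kinds of hyperparameters.
search_space = {
    # Model hyperparameters: shape the architecture
    "num_hidden_layers": [1, 2, 3],
    "units_per_layer": [32, 64, 128],
    # Algorithm hyperparameters: shape the training procedure
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

# A naive grid search would train one model per combination:
num_configs = len(list(product(*search_space.values())))
print(num_configs)  # 3 * 3 * 3 = 27
```

This combinatorial growth is why smarter search strategies such as those in the Keras Tuner are useful.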

In this tutorial, you will use the Keras Tuner to perform hypertuning for an image classification application.

Setup

import tensorflow as tf
from tensorflow import keras

Install and import the Keras Tuner.

pip install -q -U keras-tuner
import kerastuner as kt

Download and prepare the dataset

In this tutorial, you will use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the Fashion MNIST dataset.

Load the data.

(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0

Define the model

When you build a model for hypertuning, you also define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a hypermodel.

You can define a hypermodel through two approaches:

  • By using a model builder function
  • By subclassing the HyperModel class of the Keras Tuner API

You can also use two pre-defined HyperModel classes, HyperXception and HyperResNet, for computer vision applications.

In this tutorial, you use a model builder function to define the image classification model. The model builder function returns a compiled model and uses hyperparameters you define inline to hypertune the model.

def model_builder(hp):
  model = keras.Sequential()
  model.add(keras.layers.Flatten(input_shape=(28, 28)))

  # Tune the number of units in the first Dense layer
  # Choose an optimal value between 32-512
  hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
  model.add(keras.layers.Dense(units=hp_units, activation='relu'))
  model.add(keras.layers.Dense(10))

  # Tune the learning rate for the optimizer
  # Choose an optimal value from 0.01, 0.001, or 0.0001
  hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])

  model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),
                loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

  return model

Instantiate the tuner and perform hypertuning

Instantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available: RandomSearch, Hyperband, BayesianOptimization, and Sklearn. In this tutorial, you use the Hyperband tuner.

To instantiate the Hyperband tuner, you must specify the hypermodel, the objective to optimize, and the maximum number of epochs to train (max_epochs).

tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3,
                     directory='my_dir',
                     project_name='intro_to_kt')
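The other tuners follow the same constructor pattern; for example, RandomSearch takes a fixed trial budget (max_trials) instead of max_epochs and factor. A sketch, reusing the model_builder defined above (the project_name here is illustrative):

```python
# Sketch only: RandomSearch samples hyperparameter configurations at
# random until the max_trials budget is exhausted.
random_tuner = kt.RandomSearch(model_builder,
                               objective='val_accuracy',
                               max_trials=10,
                               directory='my_dir',
                               project_name='intro_to_kt_rs')
```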

The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. This is done using a sports-championship-style bracket. The algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of models to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log_factor(max_epochs) and rounding it to the nearest integer.
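With the values used above (max_epochs=10, factor=3), the bracket formula can be checked directly. A quick arithmetic sketch (the tuner's internal rounding may differ in edge cases):

```python
import math

max_epochs = 10
factor = 3

# Number of bracket rounds: 1 + log_factor(max_epochs)
# log_3(10) ~= 2.1, so this yields 1 + 2 = 3 rounds
num_brackets = 1 + math.floor(math.log(max_epochs) / math.log(factor))
print(num_brackets)  # 3
```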

Create a callback to stop training early once the validation loss stops improving.

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

Run the hyperparameter search. The arguments for the search method are the same as those used for tf.keras.Model.fit, in addition to the callback above.

tuner.search(img_train, label_train, epochs=50, validation_split=0.2, callbacks=[stop_early])

# Get the optimal hyperparameters
best_hps=tuner.get_best_hyperparameters(num_trials=1)[0]

print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
Trial 30 Complete [00h 00m 24s]
val_accuracy: 0.8824166655540466

Best val_accuracy So Far: 0.8901666402816772
Total elapsed time: 00h 05m 34s
INFO:tensorflow:Oracle triggered exit

The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is 448 and the optimal learning rate for the optimizer
is 0.001.

Train the model

Find the optimal number of epochs to train the model with the hyperparameters obtained from the search.

# Build the model with the optimal hyperparameters and train it on the data for 50 epochs
model = tuner.hypermodel.build(best_hps)
history = model.fit(img_train, label_train, epochs=50, validation_split=0.2)

val_acc_per_epoch = history.history['val_accuracy']
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print('Best epoch: %d' % (best_epoch,))
Epoch 1/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.6307 - accuracy: 0.7788 - val_loss: 0.4389 - val_accuracy: 0.8450
Epoch 2/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3789 - accuracy: 0.8625 - val_loss: 0.3897 - val_accuracy: 0.8593
Epoch 3/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3302 - accuracy: 0.8791 - val_loss: 0.3356 - val_accuracy: 0.8766
Epoch 4/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2995 - accuracy: 0.8890 - val_loss: 0.3360 - val_accuracy: 0.8798
Epoch 5/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2745 - accuracy: 0.8990 - val_loss: 0.3447 - val_accuracy: 0.8756
Epoch 6/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2624 - accuracy: 0.9023 - val_loss: 0.3433 - val_accuracy: 0.8793
Epoch 7/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2619 - accuracy: 0.9020 - val_loss: 0.3105 - val_accuracy: 0.8886
Epoch 8/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2429 - accuracy: 0.9108 - val_loss: 0.3114 - val_accuracy: 0.8895
Epoch 9/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2284 - accuracy: 0.9136 - val_loss: 0.3099 - val_accuracy: 0.8913
Epoch 10/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2194 - accuracy: 0.9168 - val_loss: 0.3154 - val_accuracy: 0.8918
Epoch 11/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2153 - accuracy: 0.9171 - val_loss: 0.3407 - val_accuracy: 0.8856
Epoch 12/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2052 - accuracy: 0.9238 - val_loss: 0.3190 - val_accuracy: 0.8903
Epoch 13/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1941 - accuracy: 0.9262 - val_loss: 0.3205 - val_accuracy: 0.8903
Epoch 14/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1893 - accuracy: 0.9301 - val_loss: 0.3242 - val_accuracy: 0.8896
Epoch 15/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1780 - accuracy: 0.9307 - val_loss: 0.3584 - val_accuracy: 0.8844
Epoch 16/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1748 - accuracy: 0.9337 - val_loss: 0.3303 - val_accuracy: 0.8937
Epoch 17/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1719 - accuracy: 0.9349 - val_loss: 0.3491 - val_accuracy: 0.8882
Epoch 18/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1662 - accuracy: 0.9383 - val_loss: 0.3509 - val_accuracy: 0.8925
Epoch 19/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1592 - accuracy: 0.9398 - val_loss: 0.3324 - val_accuracy: 0.8938
Epoch 20/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1515 - accuracy: 0.9436 - val_loss: 0.3500 - val_accuracy: 0.8900
Epoch 21/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1469 - accuracy: 0.9432 - val_loss: 0.3486 - val_accuracy: 0.8955
Epoch 22/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1412 - accuracy: 0.9467 - val_loss: 0.3602 - val_accuracy: 0.8878
Epoch 23/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1415 - accuracy: 0.9470 - val_loss: 0.3568 - val_accuracy: 0.8913
Epoch 24/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1320 - accuracy: 0.9507 - val_loss: 0.3832 - val_accuracy: 0.8908
Epoch 25/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1288 - accuracy: 0.9514 - val_loss: 0.3890 - val_accuracy: 0.8865
Epoch 26/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1277 - accuracy: 0.9533 - val_loss: 0.3796 - val_accuracy: 0.8935
Epoch 27/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1228 - accuracy: 0.9529 - val_loss: 0.3876 - val_accuracy: 0.8933
Epoch 28/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1210 - accuracy: 0.9536 - val_loss: 0.3913 - val_accuracy: 0.8947
Epoch 29/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1179 - accuracy: 0.9556 - val_loss: 0.3880 - val_accuracy: 0.8942
Epoch 30/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1145 - accuracy: 0.9563 - val_loss: 0.4126 - val_accuracy: 0.8922
Epoch 31/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1109 - accuracy: 0.9571 - val_loss: 0.4014 - val_accuracy: 0.8944
Epoch 32/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1101 - accuracy: 0.9580 - val_loss: 0.3997 - val_accuracy: 0.8934
Epoch 33/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1114 - accuracy: 0.9567 - val_loss: 0.4134 - val_accuracy: 0.8938
Epoch 34/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1001 - accuracy: 0.9639 - val_loss: 0.4370 - val_accuracy: 0.8938
Epoch 35/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1005 - accuracy: 0.9630 - val_loss: 0.4414 - val_accuracy: 0.8922
Epoch 36/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0980 - accuracy: 0.9628 - val_loss: 0.4800 - val_accuracy: 0.8912
Epoch 37/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0989 - accuracy: 0.9621 - val_loss: 0.4597 - val_accuracy: 0.8923
Epoch 38/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0962 - accuracy: 0.9630 - val_loss: 0.4699 - val_accuracy: 0.8933
Epoch 39/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0884 - accuracy: 0.9665 - val_loss: 0.4515 - val_accuracy: 0.8939
Epoch 40/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0889 - accuracy: 0.9660 - val_loss: 0.4753 - val_accuracy: 0.8926
Epoch 41/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0856 - accuracy: 0.9673 - val_loss: 0.4669 - val_accuracy: 0.8940
Epoch 42/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0860 - accuracy: 0.9674 - val_loss: 0.4870 - val_accuracy: 0.8882
Epoch 43/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0827 - accuracy: 0.9693 - val_loss: 0.5101 - val_accuracy: 0.8881
Epoch 44/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0839 - accuracy: 0.9678 - val_loss: 0.5078 - val_accuracy: 0.8934
Epoch 45/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0762 - accuracy: 0.9720 - val_loss: 0.5508 - val_accuracy: 0.8882
Epoch 46/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0893 - accuracy: 0.9658 - val_loss: 0.5130 - val_accuracy: 0.8907
Epoch 47/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0771 - accuracy: 0.9696 - val_loss: 0.5162 - val_accuracy: 0.8938
Epoch 48/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0736 - accuracy: 0.9714 - val_loss: 0.5392 - val_accuracy: 0.8929
Epoch 49/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0782 - accuracy: 0.9718 - val_loss: 0.5215 - val_accuracy: 0.8961
Epoch 50/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0730 - accuracy: 0.9721 - val_loss: 0.5605 - val_accuracy: 0.8876
Best epoch: 49

Re-instantiate the hypermodel and train it with the optimal number of epochs from above.

hypermodel = tuner.hypermodel.build(best_hps)

# Retrain the model
hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2)
Epoch 1/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.6207 - accuracy: 0.7805 - val_loss: 0.3978 - val_accuracy: 0.8568
Epoch 2/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3716 - accuracy: 0.8642 - val_loss: 0.3721 - val_accuracy: 0.8659
Epoch 3/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3303 - accuracy: 0.8766 - val_loss: 0.3721 - val_accuracy: 0.8626
Epoch 4/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3107 - accuracy: 0.8847 - val_loss: 0.3727 - val_accuracy: 0.8642
Epoch 5/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2848 - accuracy: 0.8956 - val_loss: 0.3179 - val_accuracy: 0.8857
Epoch 6/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2694 - accuracy: 0.8997 - val_loss: 0.3394 - val_accuracy: 0.8802
Epoch 7/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2561 - accuracy: 0.9033 - val_loss: 0.3095 - val_accuracy: 0.8933
Epoch 8/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2411 - accuracy: 0.9083 - val_loss: 0.3252 - val_accuracy: 0.8842
Epoch 9/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2321 - accuracy: 0.9135 - val_loss: 0.3250 - val_accuracy: 0.8897
Epoch 10/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2202 - accuracy: 0.9171 - val_loss: 0.3144 - val_accuracy: 0.8942
Epoch 11/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2129 - accuracy: 0.9218 - val_loss: 0.3313 - val_accuracy: 0.8874
Epoch 12/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2015 - accuracy: 0.9243 - val_loss: 0.3215 - val_accuracy: 0.8924
Epoch 13/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1950 - accuracy: 0.9283 - val_loss: 0.3234 - val_accuracy: 0.8929
Epoch 14/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1854 - accuracy: 0.9321 - val_loss: 0.3257 - val_accuracy: 0.8946
Epoch 15/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1801 - accuracy: 0.9312 - val_loss: 0.3427 - val_accuracy: 0.8900
Epoch 16/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1778 - accuracy: 0.9326 - val_loss: 0.3382 - val_accuracy: 0.8940
Epoch 17/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1716 - accuracy: 0.9361 - val_loss: 0.3218 - val_accuracy: 0.8938
Epoch 18/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1632 - accuracy: 0.9383 - val_loss: 0.3612 - val_accuracy: 0.8918
Epoch 19/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1592 - accuracy: 0.9399 - val_loss: 0.3602 - val_accuracy: 0.8901
Epoch 20/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1498 - accuracy: 0.9438 - val_loss: 0.3501 - val_accuracy: 0.8957
Epoch 21/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1455 - accuracy: 0.9436 - val_loss: 0.3590 - val_accuracy: 0.8906
Epoch 22/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1469 - accuracy: 0.9455 - val_loss: 0.3442 - val_accuracy: 0.8978
Epoch 23/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1425 - accuracy: 0.9474 - val_loss: 0.3632 - val_accuracy: 0.8939
Epoch 24/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1370 - accuracy: 0.9486 - val_loss: 0.3728 - val_accuracy: 0.8936
Epoch 25/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1348 - accuracy: 0.9502 - val_loss: 0.3653 - val_accuracy: 0.8953
Epoch 26/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1241 - accuracy: 0.9525 - val_loss: 0.3778 - val_accuracy: 0.8917
Epoch 27/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1232 - accuracy: 0.9530 - val_loss: 0.3655 - val_accuracy: 0.8977
Epoch 28/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1191 - accuracy: 0.9549 - val_loss: 0.3960 - val_accuracy: 0.8930
Epoch 29/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1193 - accuracy: 0.9548 - val_loss: 0.3805 - val_accuracy: 0.8999
Epoch 30/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1129 - accuracy: 0.9569 - val_loss: 0.4280 - val_accuracy: 0.8878
Epoch 31/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1142 - accuracy: 0.9579 - val_loss: 0.3975 - val_accuracy: 0.8996
Epoch 32/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1116 - accuracy: 0.9576 - val_loss: 0.3960 - val_accuracy: 0.8982
Epoch 33/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1072 - accuracy: 0.9585 - val_loss: 0.4042 - val_accuracy: 0.8957
Epoch 34/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1040 - accuracy: 0.9615 - val_loss: 0.4243 - val_accuracy: 0.8976
Epoch 35/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0968 - accuracy: 0.9645 - val_loss: 0.4184 - val_accuracy: 0.8977
Epoch 36/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1056 - accuracy: 0.9605 - val_loss: 0.4181 - val_accuracy: 0.8990
Epoch 37/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0924 - accuracy: 0.9642 - val_loss: 0.4557 - val_accuracy: 0.8932
Epoch 38/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0942 - accuracy: 0.9653 - val_loss: 0.4716 - val_accuracy: 0.8932
Epoch 39/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0978 - accuracy: 0.9643 - val_loss: 0.4396 - val_accuracy: 0.9006
Epoch 40/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0885 - accuracy: 0.9672 - val_loss: 0.4782 - val_accuracy: 0.8925
Epoch 41/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0881 - accuracy: 0.9652 - val_loss: 0.4886 - val_accuracy: 0.8935
Epoch 42/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0846 - accuracy: 0.9677 - val_loss: 0.4566 - val_accuracy: 0.8978
Epoch 43/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0836 - accuracy: 0.9688 - val_loss: 0.4734 - val_accuracy: 0.8972
Epoch 44/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0791 - accuracy: 0.9702 - val_loss: 0.4885 - val_accuracy: 0.8954
Epoch 45/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0818 - accuracy: 0.9701 - val_loss: 0.5213 - val_accuracy: 0.8874
Epoch 46/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0813 - accuracy: 0.9687 - val_loss: 0.5160 - val_accuracy: 0.8945
Epoch 47/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0738 - accuracy: 0.9720 - val_loss: 0.5002 - val_accuracy: 0.8970
Epoch 48/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0748 - accuracy: 0.9715 - val_loss: 0.5465 - val_accuracy: 0.8921
Epoch 49/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0789 - accuracy: 0.9701 - val_loss: 0.5297 - val_accuracy: 0.8941
<tensorflow.python.keras.callbacks.History at 0x7f27226a7b00>

To finish this tutorial, evaluate the hypermodel on the test data.

eval_result = hypermodel.evaluate(img_test, label_test)
print("[test loss, test accuracy]:", eval_result)
313/313 [==============================] - 1s 2ms/step - loss: 0.5915 - accuracy: 0.8867
[test loss, test accuracy]: [0.5915395617485046, 0.8866999745368958]

The my_dir/intro_to_kt directory contains detailed logs and checkpoints for every trial (model configuration) run during the hyperparameter search. If you re-run the hyperparameter search, the Keras Tuner uses the existing state from these logs to resume the search. To disable this behavior, pass an additional overwrite=True argument when instantiating the tuner.
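For example, to discard the saved state and start a fresh search, the tuner can be instantiated with the same arguments as above plus the overwrite flag (a sketch):

```python
# Sketch only: overwrite=True discards any previous state stored in
# my_dir/intro_to_kt and starts the search from scratch.
tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3,
                     overwrite=True,
                     directory='my_dir',
                     project_name='intro_to_kt')
```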

Summary

In this tutorial, you learned how to use the Keras Tuner to tune hyperparameters for a model.

Also check out the HParams Dashboard in TensorBoard to interactively tune your model hyperparameters.