
Introduction to the Keras Tuner


Overview

The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for your machine learning (ML) application is called hyperparameter tuning or hypertuning.

Hyperparameters are the variables that govern the training process and the topology of an ML model. These variables remain constant over the training process and directly impact the performance of your ML program. Hyperparameters are of two types:

  1. Model hyperparameters, which influence model selection, such as the number and width of hidden layers.
  2. Algorithm hyperparameters, which influence the speed and quality of the learning algorithm, such as the learning rate for Stochastic Gradient Descent (SGD) and the number of nearest neighbors for a k-Nearest Neighbors (KNN) classifier.
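
To make the distinction concrete, here is a minimal, framework-free sketch (plain Python; the names are illustrative, not part of any library) of a search space that mixes both kinds of hyperparameters, together with a simple random draw from it:

```python
import random

# Illustrative search space: model hyperparameters (layer count, width)
# and algorithm hyperparameters (learning rate) live side by side.
search_space = {
    "num_hidden_layers": [1, 2, 3],           # model hyperparameter
    "units_per_layer":   [32, 64, 128, 256],  # model hyperparameter
    "learning_rate":     [1e-2, 1e-3, 1e-4],  # algorithm hyperparameter
}

def sample_config(space, rng=random):
    """Draw one random configuration from the search space."""
    return {name: rng.choice(values) for name, values in space.items()}

random.seed(0)
config = sample_config(search_space)
print(config)
```

Random search like this is the simplest baseline; the Keras Tuner automates and improves on it.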

In this tutorial, you will use the Keras Tuner to perform hypertuning for an image classification application.

Setup

import tensorflow as tf
from tensorflow import keras

Install and import the Keras Tuner.

pip install -q -U keras-tuner
import kerastuner as kt

Download and prepare the dataset

In this tutorial, you will use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the Fashion MNIST dataset.

Load the data.

(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
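
Dividing by 255 maps each 8-bit pixel intensity into the range [0, 1], which generally makes gradient-based training better behaved. A quick framework-free check of that scaling:

```python
# Pixel intensities are 8-bit integers in [0, 255]; dividing by 255.0
# rescales them to floats in [0.0, 1.0].
pixels = [0, 51, 102, 204, 255]
normalized = [p / 255.0 for p in pixels]
print(normalized)  # [0.0, 0.2, 0.4, 0.8, 1.0]
```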

Define the model

When you build a model for hypertuning, you also define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a hypermodel.

You can define a hypermodel through two approaches:

  • By using a model builder function
  • By subclassing the HyperModel class of the Keras Tuner API

You can also use two pre-defined HyperModel classes, HyperXception and HyperResNet, for computer vision applications.

In this tutorial, you use a model builder function to define the image classification model. The model builder function returns a compiled model and uses hyperparameters you define inline to hypertune the model.

def model_builder(hp):
  model = keras.Sequential()
  model.add(keras.layers.Flatten(input_shape=(28, 28)))

  # Tune the number of units in the first Dense layer
  # Choose an optimal value between 32-512
  hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
  model.add(keras.layers.Dense(units=hp_units, activation='relu'))
  model.add(keras.layers.Dense(10))

  # Tune the learning rate for the optimizer
  # Choose an optimal value from 0.01, 0.001, or 0.0001
  hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])

  model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),
                loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

  return model
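
The search space above is small enough to count by hand: hp.Int('units', min_value=32, max_value=512, step=32) yields (512 − 32)/32 + 1 = 16 candidate widths, and hp.Choice gives 3 learning rates, for 16 × 3 = 48 combinations in total. A quick check of that arithmetic:

```python
# Enumerate the hyperparameter grid defined by the model builder:
# units in {32, 64, ..., 512} and learning rate in {1e-2, 1e-3, 1e-4}.
units_values = list(range(32, 512 + 1, 32))
learning_rates = [1e-2, 1e-3, 1e-4]

num_combinations = len(units_values) * len(learning_rates)
print(len(units_values), num_combinations)  # 16 48
```

A smart tuner such as Hyperband aims to find a strong configuration without exhaustively training all 48.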

Instantiate the tuner and perform hypertuning

Instantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available: RandomSearch, Hyperband, BayesianOptimization, and Sklearn. In this tutorial, you use the Hyperband tuner.

To instantiate the Hyperband tuner, you must specify the hypermodel, the objective to optimize, and the maximum number of epochs to train (max_epochs).

tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3,
                     directory='my_dir',
                     project_name='intro_to_kt')

The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. It works like a sports championship bracket: the algorithm trains a large number of models for a few epochs and carries only the top-performing half forward to the next round. Hyperband determines the number of rounds in a bracket by computing 1 + log_factor(max_epochs) (the logarithm of max_epochs to base factor) and rounding it up to the nearest integer.
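
With the settings above (max_epochs=10, factor=3), that formula works out as follows; a quick sanity check of the arithmetic, independent of the library's internals:

```python
import math

max_epochs = 10
factor = 3

# Number of bracket rounds: 1 + log_factor(max_epochs), rounded up.
num_brackets = math.ceil(1 + math.log(max_epochs) / math.log(factor))
print(num_brackets)  # 4
```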

Create a callback to stop training early once the validation loss has stopped improving for a set number of epochs (the patience).

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
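
The patience=5 rule can be mimicked in plain Python: training halts once the monitored loss has failed to improve for 5 consecutive epochs. A minimal, framework-free sketch of that logic (a simplification of what the real callback does, without restoring weights):

```python
def stopped_epoch(val_losses, patience=5):
    """Return the 1-based epoch at which patience-based early stopping
    would halt, or len(val_losses) if it never triggers."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:       # improvement: reset the patience counter
            best = loss
            wait = 0
        else:                 # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses)

# Loss improves for 3 epochs, then plateaus: training stops 5 epochs later.
losses = [0.9, 0.7, 0.5, 0.6, 0.6, 0.6, 0.6, 0.6, 0.4, 0.3]
print(stopped_epoch(losses))  # 8
```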

Run the hyperparameter search. The arguments for the search method are the same as those used for tf.keras.Model.fit, in addition to the callback above.

tuner.search(img_train, label_train, epochs=50, validation_split=0.2, callbacks=[stop_early])

# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]

print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
Trial 30 Complete [00h 00m 24s]
val_accuracy: 0.8824166655540466

Best val_accuracy So Far: 0.8901666402816772
Total elapsed time: 00h 05m 34s
INFO:tensorflow:Oracle triggered exit

The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is 448 and the optimal learning rate for the optimizer
is 0.001.

Train the model

Find the optimal number of epochs to train the model with the hyperparameters obtained from the search.

# Build the model with the optimal hyperparameters and train it on the data for 50 epochs
model = tuner.hypermodel.build(best_hps)
history = model.fit(img_train, label_train, epochs=50, validation_split=0.2)

val_acc_per_epoch = history.history['val_accuracy']
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print('Best epoch: %d' % (best_epoch,))
Epoch 1/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.6307 - accuracy: 0.7788 - val_loss: 0.4389 - val_accuracy: 0.8450
Epoch 2/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3789 - accuracy: 0.8625 - val_loss: 0.3897 - val_accuracy: 0.8593
Epoch 3/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3302 - accuracy: 0.8791 - val_loss: 0.3356 - val_accuracy: 0.8766
Epoch 4/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2995 - accuracy: 0.8890 - val_loss: 0.3360 - val_accuracy: 0.8798
Epoch 5/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2745 - accuracy: 0.8990 - val_loss: 0.3447 - val_accuracy: 0.8756
Epoch 6/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2624 - accuracy: 0.9023 - val_loss: 0.3433 - val_accuracy: 0.8793
Epoch 7/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2619 - accuracy: 0.9020 - val_loss: 0.3105 - val_accuracy: 0.8886
Epoch 8/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2429 - accuracy: 0.9108 - val_loss: 0.3114 - val_accuracy: 0.8895
Epoch 9/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2284 - accuracy: 0.9136 - val_loss: 0.3099 - val_accuracy: 0.8913
Epoch 10/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2194 - accuracy: 0.9168 - val_loss: 0.3154 - val_accuracy: 0.8918
Epoch 11/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2153 - accuracy: 0.9171 - val_loss: 0.3407 - val_accuracy: 0.8856
Epoch 12/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2052 - accuracy: 0.9238 - val_loss: 0.3190 - val_accuracy: 0.8903
Epoch 13/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1941 - accuracy: 0.9262 - val_loss: 0.3205 - val_accuracy: 0.8903
Epoch 14/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1893 - accuracy: 0.9301 - val_loss: 0.3242 - val_accuracy: 0.8896
Epoch 15/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1780 - accuracy: 0.9307 - val_loss: 0.3584 - val_accuracy: 0.8844
Epoch 16/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1748 - accuracy: 0.9337 - val_loss: 0.3303 - val_accuracy: 0.8937
Epoch 17/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1719 - accuracy: 0.9349 - val_loss: 0.3491 - val_accuracy: 0.8882
Epoch 18/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1662 - accuracy: 0.9383 - val_loss: 0.3509 - val_accuracy: 0.8925
Epoch 19/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1592 - accuracy: 0.9398 - val_loss: 0.3324 - val_accuracy: 0.8938
Epoch 20/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1515 - accuracy: 0.9436 - val_loss: 0.3500 - val_accuracy: 0.8900
Epoch 21/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1469 - accuracy: 0.9432 - val_loss: 0.3486 - val_accuracy: 0.8955
Epoch 22/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1412 - accuracy: 0.9467 - val_loss: 0.3602 - val_accuracy: 0.8878
Epoch 23/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1415 - accuracy: 0.9470 - val_loss: 0.3568 - val_accuracy: 0.8913
Epoch 24/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1320 - accuracy: 0.9507 - val_loss: 0.3832 - val_accuracy: 0.8908
Epoch 25/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1288 - accuracy: 0.9514 - val_loss: 0.3890 - val_accuracy: 0.8865
Epoch 26/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1277 - accuracy: 0.9533 - val_loss: 0.3796 - val_accuracy: 0.8935
Epoch 27/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1228 - accuracy: 0.9529 - val_loss: 0.3876 - val_accuracy: 0.8933
Epoch 28/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1210 - accuracy: 0.9536 - val_loss: 0.3913 - val_accuracy: 0.8947
Epoch 29/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1179 - accuracy: 0.9556 - val_loss: 0.3880 - val_accuracy: 0.8942
Epoch 30/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1145 - accuracy: 0.9563 - val_loss: 0.4126 - val_accuracy: 0.8922
Epoch 31/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1109 - accuracy: 0.9571 - val_loss: 0.4014 - val_accuracy: 0.8944
Epoch 32/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1101 - accuracy: 0.9580 - val_loss: 0.3997 - val_accuracy: 0.8934
Epoch 33/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1114 - accuracy: 0.9567 - val_loss: 0.4134 - val_accuracy: 0.8938
Epoch 34/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1001 - accuracy: 0.9639 - val_loss: 0.4370 - val_accuracy: 0.8938
Epoch 35/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1005 - accuracy: 0.9630 - val_loss: 0.4414 - val_accuracy: 0.8922
Epoch 36/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0980 - accuracy: 0.9628 - val_loss: 0.4800 - val_accuracy: 0.8912
Epoch 37/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0989 - accuracy: 0.9621 - val_loss: 0.4597 - val_accuracy: 0.8923
Epoch 38/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0962 - accuracy: 0.9630 - val_loss: 0.4699 - val_accuracy: 0.8933
Epoch 39/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0884 - accuracy: 0.9665 - val_loss: 0.4515 - val_accuracy: 0.8939
Epoch 40/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0889 - accuracy: 0.9660 - val_loss: 0.4753 - val_accuracy: 0.8926
Epoch 41/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0856 - accuracy: 0.9673 - val_loss: 0.4669 - val_accuracy: 0.8940
Epoch 42/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0860 - accuracy: 0.9674 - val_loss: 0.4870 - val_accuracy: 0.8882
Epoch 43/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0827 - accuracy: 0.9693 - val_loss: 0.5101 - val_accuracy: 0.8881
Epoch 44/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0839 - accuracy: 0.9678 - val_loss: 0.5078 - val_accuracy: 0.8934
Epoch 45/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0762 - accuracy: 0.9720 - val_loss: 0.5508 - val_accuracy: 0.8882
Epoch 46/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0893 - accuracy: 0.9658 - val_loss: 0.5130 - val_accuracy: 0.8907
Epoch 47/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0771 - accuracy: 0.9696 - val_loss: 0.5162 - val_accuracy: 0.8938
Epoch 48/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0736 - accuracy: 0.9714 - val_loss: 0.5392 - val_accuracy: 0.8929
Epoch 49/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0782 - accuracy: 0.9718 - val_loss: 0.5215 - val_accuracy: 0.8961
Epoch 50/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0730 - accuracy: 0.9721 - val_loss: 0.5605 - val_accuracy: 0.8876
Best epoch: 49
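
The best-epoch selection above is just an argmax over the per-epoch validation accuracies, plus 1 because epochs are 1-based while list indices start at 0. A quick sketch with made-up numbers:

```python
# Hypothetical per-epoch validation accuracies (index 0 = epoch 1).
val_acc_per_epoch = [0.85, 0.88, 0.91, 0.90, 0.89]

best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print(best_epoch)  # 3
```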

Re-instantiate the hypermodel and train it with the optimal number of epochs from above.

hypermodel = tuner.hypermodel.build(best_hps)

# Retrain the model
hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2)
Epoch 1/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.6207 - accuracy: 0.7805 - val_loss: 0.3978 - val_accuracy: 0.8568
Epoch 2/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3716 - accuracy: 0.8642 - val_loss: 0.3721 - val_accuracy: 0.8659
Epoch 3/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3303 - accuracy: 0.8766 - val_loss: 0.3721 - val_accuracy: 0.8626
Epoch 4/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.3107 - accuracy: 0.8847 - val_loss: 0.3727 - val_accuracy: 0.8642
Epoch 5/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2848 - accuracy: 0.8956 - val_loss: 0.3179 - val_accuracy: 0.8857
Epoch 6/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2694 - accuracy: 0.8997 - val_loss: 0.3394 - val_accuracy: 0.8802
Epoch 7/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2561 - accuracy: 0.9033 - val_loss: 0.3095 - val_accuracy: 0.8933
Epoch 8/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2411 - accuracy: 0.9083 - val_loss: 0.3252 - val_accuracy: 0.8842
Epoch 9/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2321 - accuracy: 0.9135 - val_loss: 0.3250 - val_accuracy: 0.8897
Epoch 10/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2202 - accuracy: 0.9171 - val_loss: 0.3144 - val_accuracy: 0.8942
Epoch 11/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2129 - accuracy: 0.9218 - val_loss: 0.3313 - val_accuracy: 0.8874
Epoch 12/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2015 - accuracy: 0.9243 - val_loss: 0.3215 - val_accuracy: 0.8924
Epoch 13/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1950 - accuracy: 0.9283 - val_loss: 0.3234 - val_accuracy: 0.8929
Epoch 14/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1854 - accuracy: 0.9321 - val_loss: 0.3257 - val_accuracy: 0.8946
Epoch 15/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1801 - accuracy: 0.9312 - val_loss: 0.3427 - val_accuracy: 0.8900
Epoch 16/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1778 - accuracy: 0.9326 - val_loss: 0.3382 - val_accuracy: 0.8940
Epoch 17/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1716 - accuracy: 0.9361 - val_loss: 0.3218 - val_accuracy: 0.8938
Epoch 18/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1632 - accuracy: 0.9383 - val_loss: 0.3612 - val_accuracy: 0.8918
Epoch 19/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1592 - accuracy: 0.9399 - val_loss: 0.3602 - val_accuracy: 0.8901
Epoch 20/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1498 - accuracy: 0.9438 - val_loss: 0.3501 - val_accuracy: 0.8957
Epoch 21/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1455 - accuracy: 0.9436 - val_loss: 0.3590 - val_accuracy: 0.8906
Epoch 22/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1469 - accuracy: 0.9455 - val_loss: 0.3442 - val_accuracy: 0.8978
Epoch 23/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1425 - accuracy: 0.9474 - val_loss: 0.3632 - val_accuracy: 0.8939
Epoch 24/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1370 - accuracy: 0.9486 - val_loss: 0.3728 - val_accuracy: 0.8936
Epoch 25/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1348 - accuracy: 0.9502 - val_loss: 0.3653 - val_accuracy: 0.8953
Epoch 26/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1241 - accuracy: 0.9525 - val_loss: 0.3778 - val_accuracy: 0.8917
Epoch 27/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1232 - accuracy: 0.9530 - val_loss: 0.3655 - val_accuracy: 0.8977
Epoch 28/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1191 - accuracy: 0.9549 - val_loss: 0.3960 - val_accuracy: 0.8930
Epoch 29/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1193 - accuracy: 0.9548 - val_loss: 0.3805 - val_accuracy: 0.8999
Epoch 30/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1129 - accuracy: 0.9569 - val_loss: 0.4280 - val_accuracy: 0.8878
Epoch 31/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1142 - accuracy: 0.9579 - val_loss: 0.3975 - val_accuracy: 0.8996
Epoch 32/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1116 - accuracy: 0.9576 - val_loss: 0.3960 - val_accuracy: 0.8982
Epoch 33/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1072 - accuracy: 0.9585 - val_loss: 0.4042 - val_accuracy: 0.8957
Epoch 34/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1040 - accuracy: 0.9615 - val_loss: 0.4243 - val_accuracy: 0.8976
Epoch 35/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0968 - accuracy: 0.9645 - val_loss: 0.4184 - val_accuracy: 0.8977
Epoch 36/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1056 - accuracy: 0.9605 - val_loss: 0.4181 - val_accuracy: 0.8990
Epoch 37/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0924 - accuracy: 0.9642 - val_loss: 0.4557 - val_accuracy: 0.8932
Epoch 38/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0942 - accuracy: 0.9653 - val_loss: 0.4716 - val_accuracy: 0.8932
Epoch 39/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0978 - accuracy: 0.9643 - val_loss: 0.4396 - val_accuracy: 0.9006
Epoch 40/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0885 - accuracy: 0.9672 - val_loss: 0.4782 - val_accuracy: 0.8925
Epoch 41/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0881 - accuracy: 0.9652 - val_loss: 0.4886 - val_accuracy: 0.8935
Epoch 42/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0846 - accuracy: 0.9677 - val_loss: 0.4566 - val_accuracy: 0.8978
Epoch 43/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0836 - accuracy: 0.9688 - val_loss: 0.4734 - val_accuracy: 0.8972
Epoch 44/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0791 - accuracy: 0.9702 - val_loss: 0.4885 - val_accuracy: 0.8954
Epoch 45/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0818 - accuracy: 0.9701 - val_loss: 0.5213 - val_accuracy: 0.8874
Epoch 46/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0813 - accuracy: 0.9687 - val_loss: 0.5160 - val_accuracy: 0.8945
Epoch 47/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0738 - accuracy: 0.9720 - val_loss: 0.5002 - val_accuracy: 0.8970
Epoch 48/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0748 - accuracy: 0.9715 - val_loss: 0.5465 - val_accuracy: 0.8921
Epoch 49/49
1500/1500 [==============================] - 3s 2ms/step - loss: 0.0789 - accuracy: 0.9701 - val_loss: 0.5297 - val_accuracy: 0.8941
<tensorflow.python.keras.callbacks.History at 0x7f27226a7b00>

To finish this tutorial, evaluate the hypermodel on the test data.

eval_result = hypermodel.evaluate(img_test, label_test)
print("[test loss, test accuracy]:", eval_result)
313/313 [==============================] - 1s 2ms/step - loss: 0.5915 - accuracy: 0.8867
[test loss, test accuracy]: [0.5915395617485046, 0.8866999745368958]

The my_dir/intro_to_kt directory contains detailed logs and checkpoints for every trial (model configuration) run during the hyperparameter search. If you re-run the hyperparameter search, the Keras Tuner uses the existing state from these logs to resume the search. To disable this behavior, pass an additional overwrite=True argument while instantiating the tuner.
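
For example, passing overwrite=True with the same constructor arguments as above discards any earlier results in my_dir and starts the search from scratch (a sketch; not run in this tutorial):

```python
tuner = kt.Hyperband(model_builder,
                     objective='val_accuracy',
                     max_epochs=10,
                     factor=3,
                     overwrite=True,  # discard prior trials instead of resuming
                     directory='my_dir',
                     project_name='intro_to_kt')
```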

Summary

In this tutorial, you learned how to use the Keras Tuner to tune hyperparameters for a model. To learn more about the Keras Tuner, check out these additional resources:

  • Keras Tuner on the TensorFlow blog
  • Keras Tuner website

Also check out the HParams Dashboard in TensorBoard to interactively tune your model hyperparameters.