
Hyperparameter Tuning with the HParams Dashboard


When building machine learning models, you need to choose various hyperparameters, such as the dropout rate in a layer or the learning rate. These decisions impact model metrics, such as accuracy. Therefore, an important step in the machine learning workflow is to identify the best hyperparameters for your problem, which often involves experimentation. This process is known as "Hyperparameter Optimization" or "Hyperparameter Tuning".

The HParams dashboard in TensorBoard provides several tools to help with this process of identifying the best experiment or most promising sets of hyperparameters.

This tutorial will focus on the following steps:

1. Experiment setup and HParams summary
2. Adapt TensorFlow runs to log hyperparameters and metrics
3. Start runs and log them all under one parent directory
4. Visualize the results in TensorBoard's HParams dashboard

Start by installing TF 2.0 and loading the TensorBoard notebook extension:

!pip install -q tf-nightly-2.0-preview
# Load the TensorBoard notebook extension
%load_ext tensorboard.notebook 
# Clear any logs from previous runs
!rm -rf ./logs/ 

Import TensorFlow and other packages needed for the HParams dashboard:

import datetime
import tensorflow as tf

# Imports for the HParams plugin
from tensorboard.plugins.hparams import api_pb2
from tensorboard.plugins.hparams import summary as hparams_summary
from google.protobuf import struct_pb2

Download the FashionMNIST dataset and scale it:

fashion_mnist = tf.keras.datasets.fashion_mnist

(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

1. Experiment setup and the HParams experiment summary

Experiment with three hyperparameters in the model:

  1. Number of units in the first dense layer
  2. Dropout rate in the dropout layer
  3. Optimizer

List the values to try:

num_units_list = [16, 32]
dropout_rate_list = [0.1, 0.2] 
optimizer_list = ['adam', 'sgd'] 
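These three lists define a grid of 2 × 2 × 2 = 8 hyperparameter combinations. As a quick sanity check, the full grid can be enumerated with `itertools.product` (a minimal sketch, not part of the original tutorial):

```python
import itertools

num_units_list = [16, 32]
dropout_rate_list = [0.1, 0.2]
optimizer_list = ['adam', 'sgd']

# Every (num_units, dropout_rate, optimizer) combination in the grid.
grid = list(itertools.product(num_units_list, dropout_rate_list, optimizer_list))

print(len(grid))  # 8
print(grid[0])    # (16, 0.1, 'adam')
```

Each tuple in `grid` corresponds to one training session that will be run below.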

Log an experiment summary. This is how the HParams dashboard knows what the hyperparameters and metrics are (see the HParams plugin API for details):

def create_experiment_summary(num_units_list, dropout_rate_list, optimizer_list):
  num_units_list_val = struct_pb2.ListValue()
  num_units_list_val.extend(num_units_list)
  dropout_rate_list_val = struct_pb2.ListValue()
  dropout_rate_list_val.extend(dropout_rate_list)
  optimizer_list_val = struct_pb2.ListValue()
  optimizer_list_val.extend(optimizer_list)
  return hparams_summary.experiment_pb(
      # The hyperparameters being changed
      hparam_infos=[
          api_pb2.HParamInfo(name='num_units',
                             display_name='Number of units',
                             type=api_pb2.DATA_TYPE_FLOAT64,
                             domain_discrete=num_units_list_val),
          api_pb2.HParamInfo(name='dropout_rate',
                             display_name='Dropout rate',
                             type=api_pb2.DATA_TYPE_FLOAT64,
                             domain_discrete=dropout_rate_list_val),
          api_pb2.HParamInfo(name='optimizer',
                             display_name='Optimizer',
                             type=api_pb2.DATA_TYPE_STRING,
                             domain_discrete=optimizer_list_val)
      ],
      # The metrics being tracked
      metric_infos=[
          api_pb2.MetricInfo(
              name=api_pb2.MetricName(tag='accuracy'),
              display_name='Accuracy')
      ])

exp_summary = create_experiment_summary(num_units_list, dropout_rate_list, optimizer_list)
root_logdir_writer = tf.summary.create_file_writer("logs/hparam_tuning")
with root_logdir_writer.as_default():
  tf.summary.import_event(tf.compat.v1.Event(summary=exp_summary).SerializeToString())

2. Adapt TensorFlow runs to log hyperparameters and metrics

The model will be quite simple: two dense layers with a dropout layer between them. The training code will look familiar, although the hyperparameters are no longer hardcoded. Instead, the hyperparameters are provided in an hparams dictionary and used throughout the training function:

def train_test_model(hparams):
  model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(hparams['num_units'], activation=tf.nn.relu),
    tf.keras.layers.Dropout(hparams['dropout_rate']),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
  ])
  model.compile(optimizer=hparams['optimizer'],
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])

  model.fit(x_train, y_train, epochs=1) # Run with 1 epoch to speed things up for demo purposes
  _, accuracy = model.evaluate(x_test, y_test)
  return accuracy

For each run, log an hparams summary with the hyperparameters and final accuracy:

def run(run_dir, hparams):
  writer = tf.summary.create_file_writer(run_dir)
  summary_start = hparams_summary.session_start_pb(hparams=hparams)

  with writer.as_default():
    accuracy = train_test_model(hparams)
    summary_end = hparams_summary.session_end_pb(api_pb2.STATUS_SUCCESS)

    tf.summary.scalar('accuracy', accuracy, step=1, description="The accuracy")
    tf.summary.import_event(tf.compat.v1.Event(summary=summary_start).SerializeToString())
    tf.summary.import_event(tf.compat.v1.Event(summary=summary_end).SerializeToString())

3. Start runs and log them all under one parent directory

You can now try multiple experiments, training each one with a different set of hyperparameters.

For simplicity, try all combinations (this is called a grid search). For more complex scenarios, it might be more effective to choose each hyperparameter value randomly (this is called a random search). More advanced methods, such as Bayesian optimization, can also be used.
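For illustration, a random search over the same hyperparameter space could be sketched as follows (a stdlib-only sketch, not part of the original tutorial; `num_trials` is a hypothetical budget parameter):

```python
import random

num_units_list = [16, 32]
dropout_rate_list = [0.1, 0.2]
optimizer_list = ['adam', 'sgd']

def sample_hparams(rng):
  """Draw one hyperparameter combination uniformly at random."""
  return {
      'num_units': rng.choice(num_units_list),
      'dropout_rate': rng.choice(dropout_rate_list),
      'optimizer': rng.choice(optimizer_list),
  }

rng = random.Random(0)  # fixed seed for reproducibility
num_trials = 4          # hypothetical budget; the full grid here needs 8 runs
trials = [sample_hparams(rng) for _ in range(num_trials)]
for hparams in trials:
  print(hparams)  # each dict could be passed to run(run_dir, hparams) below
```

Random search pays off most when the grid is large or when only a few hyperparameters actually matter, since it covers more distinct values of each one per trial.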

Run a few experiments, which will take a few minutes:

session_num = 0

for num_units in num_units_list:
  for dropout_rate in dropout_rate_list:
    for optimizer in optimizer_list:
      hparams = {'num_units': num_units, 'dropout_rate': dropout_rate, 'optimizer': optimizer}
      print('--- Running training session %d' % (session_num + 1))
      print(hparams)
      run_name = "run-%d" % session_num
      run("logs/hparam_tuning/" + run_name, hparams)
      session_num += 1
--- Running training session 1
{'num_units': 16, 'dropout_rate': 0.1, 'optimizer': 'adam'}
60000/60000 [==============================] - 4s 60us/sample - loss: 0.7210 - accuracy: 0.7441
10000/10000 [==============================] - 0s 38us/sample - loss: 0.5077 - accuracy: 0.8219
--- Running training session 2
{'num_units': 16, 'dropout_rate': 0.1, 'optimizer': 'sgd'}
60000/60000 [==============================] - 3s 52us/sample - loss: 1.8030 - accuracy: 0.4162
10000/10000 [==============================] - 0s 39us/sample - loss: 1.3829 - accuracy: 0.6366
--- Running training session 3
{'num_units': 16, 'dropout_rate': 0.2, 'optimizer': 'adam'}
60000/60000 [==============================] - 3s 58us/sample - loss: 0.7570 - accuracy: 0.7339
10000/10000 [==============================] - 0s 39us/sample - loss: 0.5112 - accuracy: 0.8199
--- Running training session 4
{'num_units': 16, 'dropout_rate': 0.2, 'optimizer': 'sgd'}
60000/60000 [==============================] - 3s 51us/sample - loss: 1.8379 - accuracy: 0.3270
10000/10000 [==============================] - 0s 38us/sample - loss: 1.4468 - accuracy: 0.5909
--- Running training session 5
{'num_units': 32, 'dropout_rate': 0.1, 'optimizer': 'adam'}
60000/60000 [==============================] - 4s 61us/sample - loss: 0.5985 - accuracy: 0.7936
10000/10000 [==============================] - 0s 40us/sample - loss: 0.4662 - accuracy: 0.8335
--- Running training session 6
{'num_units': 32, 'dropout_rate': 0.1, 'optimizer': 'sgd'}
60000/60000 [==============================] - 3s 56us/sample - loss: 1.7895 - accuracy: 0.4130
10000/10000 [==============================] - 0s 41us/sample - loss: 1.3227 - accuracy: 0.6278
--- Running training session 7
{'num_units': 32, 'dropout_rate': 0.2, 'optimizer': 'adam'}
60000/60000 [==============================] - 4s 64us/sample - loss: 0.6546 - accuracy: 0.7748
10000/10000 [==============================] - 0s 39us/sample - loss: 0.4731 - accuracy: 0.8340
--- Running training session 8
{'num_units': 32, 'dropout_rate': 0.2, 'optimizer': 'sgd'}
60000/60000 [==============================] - 3s 55us/sample - loss: 1.6611 - accuracy: 0.4529
10000/10000 [==============================] - 0s 40us/sample - loss: 1.1841 - accuracy: 0.6557
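Before opening TensorBoard, the test accuracies printed above can already be ranked with a few lines of Python (values transcribed from the `model.evaluate` output of the eight sessions; this summary is not part of the original tutorial):

```python
# Test accuracy per (num_units, dropout_rate, optimizer), copied from the
# evaluation output of the eight training sessions above.
results = {
    (16, 0.1, 'adam'): 0.8219,
    (16, 0.1, 'sgd'):  0.6366,
    (16, 0.2, 'adam'): 0.8199,
    (16, 0.2, 'sgd'):  0.5909,
    (32, 0.1, 'adam'): 0.8335,
    (32, 0.1, 'sgd'):  0.6278,
    (32, 0.2, 'adam'): 0.8340,
    (32, 0.2, 'sgd'):  0.6557,
}

best = max(results, key=results.get)
print(best, results[best])  # (32, 0.2, 'adam') 0.834
```

The HParams dashboard provides the same kind of ranking interactively, plus visualizations of how each hyperparameter relates to the metric.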

4. Visualize the results in TensorBoard's HParams plugin

The HParams dashboard can now be opened. Start TensorBoard and click on "HParams" at the top.

%tensorboard --logdir logs/hparam_tuning

The left pane of the dashboard provides filtering capabilities that are active across all the views in the HParams dashboard:

- Filter which hyperparameters/metrics are shown in the dashboard
- Filter which hyperparameter/metric values are shown in the dashboard
- Filter on run status (running, success, ...)
- Sort by hyperparameter/metric in the table view
- Number of session groups to show (useful for performance when there are many experiments)

The HParams dashboard has three different views, with various useful information:

* The Table View lists the runs, their hyperparameters, and their metrics.
* The Parallel Coordinates View shows each run as a line going through an axis for each hyperparameter and metric. Click and drag the mouse on any axis to mark a region, which will highlight only the runs that pass through it. This can be useful for identifying which groups of hyperparameters are most important. The axes themselves can be reordered by dragging them.
* The Scatter Plot View shows plots comparing each hyperparameter/metric with each metric. This can help identify correlations. Click and drag to select a region in a specific plot to highlight those sessions across the other plots.

A table row, a parallel coordinates line, and a scatter plot marker can each be clicked to see a plot of the metrics as a function of training steps for that session (although in this tutorial only one step is used for each run).

To further explore the capabilities of the HParams dashboard, download a set of pregenerated logs with more experiments:

!wget -q ''
!unzip -q -d logs/hparam_demo

View these logs in TensorBoard:

%tensorboard --logdir logs/hparam_demo

You can try out the different views in the HParams dashboard.

For example, by going to the parallel coordinates view and clicking and dragging on the accuracy axis, you can select the runs with the highest accuracy. As these runs pass through 'adam' in the optimizer axis, you can conclude that 'adam' performed better than 'sgd' on these experiments.