Writing your own callbacks

Introduction

A callback is a powerful tool to customize the behavior of a Keras model during training, evaluation, or inference. Examples include tf.keras.callbacks.TensorBoard to visualize training progress and results with TensorBoard, or tf.keras.callbacks.ModelCheckpoint to periodically save your model during training.

In this guide, you will learn what a Keras callback is, what it can do, and how you can build your own. We provide a few demos of simple callback applications to get you started.

Setup

import tensorflow as tf
from tensorflow import keras

Overview of Keras callbacks

All callbacks subclass the keras.callbacks.Callback class, and override a set of methods called at various stages of training, testing, and predicting. Callbacks are useful to get a view on internal states and statistics of the model during training.

You can pass a list of callbacks (as the keyword argument callbacks) to the following model methods:

  • keras.Model.fit()
  • keras.Model.evaluate()
  • keras.Model.predict()
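
As a minimal sketch of the mechanics (the callback name below is illustrative, not part of the Keras API), you subclass keras.callbacks.Callback, override only the hooks you need, and pass an instance through the callbacks argument:

class VerboseEpochLogger(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # logs is a dict of metric results at this point in training.
        print("Epoch {} finished; got logs: {}".format(epoch, logs))

# model.fit(x, y, epochs=2, callbacks=[VerboseEpochLogger()])
# model.evaluate(x, y, callbacks=[VerboseEpochLogger()])
# model.predict(x, callbacks=[VerboseEpochLogger()])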

An overview of callback methods

Global methods

on_(train|test|predict)_begin(self, logs=None)

Called at the beginning of fit/evaluate/predict.

on_(train|test|predict)_end(self, logs=None)

Called at the end of fit/evaluate/predict.

Batch-level methods for training/testing/predicting

on_(train|test|predict)_batch_begin(self, batch, logs=None)

Called right before processing a batch during training/testing/predicting.

on_(train|test|predict)_batch_end(self, batch, logs=None)

Called at the end of training/testing/predicting a batch. Within this method, logs is a dict containing the metrics results.

Epoch-level methods (training only)

on_epoch_begin(self, epoch, logs=None)

Called at the beginning of an epoch during training.

on_epoch_end(self, epoch, logs=None)

Called at the end of an epoch during training.

A basic example

Let's take a look at a concrete example. To get started, define a simple Sequential Keras model:

# Define the Keras model to add callbacks to
def get_model():
    model = keras.Sequential()
    model.add(keras.layers.Dense(1, input_dim=784))
    model.compile(
        optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
        loss="mean_squared_error",
        metrics=["mean_absolute_error"],
    )
    return model

Then, load the MNIST data for training and testing from the Keras datasets API:

# Load example MNIST data and pre-process it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Limit the data to 1000 samples
x_train = x_train[:1000]
y_train = y_train[:1000]
x_test = x_test[:1000]
y_test = y_test[:1000]
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

Now, define a simple custom callback that logs:

  • When fit/evaluate/predict starts & ends
  • When each epoch starts & ends
  • When each training batch starts & ends
  • When each evaluation (test) batch starts & ends
  • When each inference (prediction) batch starts & ends
class CustomCallback(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        keys = list(logs.keys())
        print("Starting training; got log keys: {}".format(keys))

    def on_train_end(self, logs=None):
        keys = list(logs.keys())
        print("Stop training; got log keys: {}".format(keys))

    def on_epoch_begin(self, epoch, logs=None):
        keys = list(logs.keys())
        print("Start epoch {} of training; got log keys: {}".format(epoch, keys))

    def on_epoch_end(self, epoch, logs=None):
        keys = list(logs.keys())
        print("End epoch {} of training; got log keys: {}".format(epoch, keys))

    def on_test_begin(self, logs=None):
        keys = list(logs.keys())
        print("Start testing; got log keys: {}".format(keys))

    def on_test_end(self, logs=None):
        keys = list(logs.keys())
        print("Stop testing; got log keys: {}".format(keys))

    def on_predict_begin(self, logs=None):
        keys = list(logs.keys())
        print("Start predicting; got log keys: {}".format(keys))

    def on_predict_end(self, logs=None):
        keys = list(logs.keys())
        print("Stop predicting; got log keys: {}".format(keys))

    def on_train_batch_begin(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Training: start of batch {}; got log keys: {}".format(batch, keys))

    def on_train_batch_end(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Training: end of batch {}; got log keys: {}".format(batch, keys))

    def on_test_batch_begin(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Evaluating: start of batch {}; got log keys: {}".format(batch, keys))

    def on_test_batch_end(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Evaluating: end of batch {}; got log keys: {}".format(batch, keys))

    def on_predict_batch_begin(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Predicting: start of batch {}; got log keys: {}".format(batch, keys))

    def on_predict_batch_end(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Predicting: end of batch {}; got log keys: {}".format(batch, keys))

Let's try it out:

model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=128,
    epochs=1,
    verbose=0,
    validation_split=0.5,
    callbacks=[CustomCallback()],
)

res = model.evaluate(
    x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]
)

res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()])
Starting training; got log keys: []
Start epoch 0 of training; got log keys: []
...Training: start of batch 0; got log keys: []
...Training: end of batch 0; got log keys: ['loss', 'mean_absolute_error']
...Training: start of batch 1; got log keys: []
...Training: end of batch 1; got log keys: ['loss', 'mean_absolute_error']
...Training: start of batch 2; got log keys: []
...Training: end of batch 2; got log keys: ['loss', 'mean_absolute_error']
...Training: start of batch 3; got log keys: []
...Training: end of batch 3; got log keys: ['loss', 'mean_absolute_error']
Start testing; got log keys: []
...Evaluating: start of batch 0; got log keys: []
...Evaluating: end of batch 0; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 1; got log keys: []
...Evaluating: end of batch 1; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 2; got log keys: []
...Evaluating: end of batch 2; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 3; got log keys: []
...Evaluating: end of batch 3; got log keys: ['loss', 'mean_absolute_error']
Stop testing; got log keys: ['loss', 'mean_absolute_error']
End epoch 0 of training; got log keys: ['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error']
Stop training; got log keys: ['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error']
Start testing; got log keys: []
...Evaluating: start of batch 0; got log keys: []
...Evaluating: end of batch 0; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 1; got log keys: []
...Evaluating: end of batch 1; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 2; got log keys: []
...Evaluating: end of batch 2; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 3; got log keys: []
...Evaluating: end of batch 3; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 4; got log keys: []
...Evaluating: end of batch 4; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 5; got log keys: []
...Evaluating: end of batch 5; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 6; got log keys: []
...Evaluating: end of batch 6; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 7; got log keys: []
...Evaluating: end of batch 7; got log keys: ['loss', 'mean_absolute_error']
Stop testing; got log keys: ['loss', 'mean_absolute_error']
Start predicting; got log keys: []
...Predicting: start of batch 0; got log keys: []
...Predicting: end of batch 0; got log keys: ['outputs']
...Predicting: start of batch 1; got log keys: []
...Predicting: end of batch 1; got log keys: ['outputs']
...Predicting: start of batch 2; got log keys: []
...Predicting: end of batch 2; got log keys: ['outputs']
...Predicting: start of batch 3; got log keys: []
...Predicting: end of batch 3; got log keys: ['outputs']
...Predicting: start of batch 4; got log keys: []
...Predicting: end of batch 4; got log keys: ['outputs']
...Predicting: start of batch 5; got log keys: []
...Predicting: end of batch 5; got log keys: ['outputs']
...Predicting: start of batch 6; got log keys: []
...Predicting: end of batch 6; got log keys: ['outputs']
...Predicting: start of batch 7; got log keys: []
...Predicting: end of batch 7; got log keys: ['outputs']
Stop predicting; got log keys: []

Usage of the logs dict

The logs dict contains the loss value, and all the metrics at the end of a batch or epoch. This example includes the loss and mean absolute error.

class LossAndErrorPrintingCallback(keras.callbacks.Callback):
    def on_train_batch_end(self, batch, logs=None):
        print("For batch {}, loss is {:7.2f}.".format(batch, logs["loss"]))

    def on_test_batch_end(self, batch, logs=None):
        print("For batch {}, loss is {:7.2f}.".format(batch, logs["loss"]))

    def on_epoch_end(self, epoch, logs=None):
        print(
            "The average loss for epoch {} is {:7.2f} "
            "and mean absolute error is {:7.2f}.".format(
                epoch, logs["loss"], logs["mean_absolute_error"]
            )
        )


model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=128,
    epochs=2,
    verbose=0,
    callbacks=[LossAndErrorPrintingCallback()],
)

res = model.evaluate(
    x_test,
    y_test,
    batch_size=128,
    verbose=0,
    callbacks=[LossAndErrorPrintingCallback()],
)
For batch 0, loss is   27.09.
For batch 1, loss is  455.54.
For batch 2, loss is  310.84.
For batch 3, loss is  235.38.
For batch 4, loss is  189.59.
For batch 5, loss is  159.45.
For batch 6, loss is  137.62.
For batch 7, loss is  123.95.
The average loss for epoch 0 is  123.95 and mean absolute error is    6.04.
For batch 0, loss is    4.68.
For batch 1, loss is    4.44.
For batch 2, loss is    4.25.
For batch 3, loss is    4.19.
For batch 4, loss is    4.10.
For batch 5, loss is    4.15.
For batch 6, loss is    4.41.
For batch 7, loss is    4.44.
The average loss for epoch 1 is    4.44 and mean absolute error is    1.70.
For batch 0, loss is    4.60.
For batch 1, loss is    4.22.
For batch 2, loss is    4.30.
For batch 3, loss is    4.23.
For batch 4, loss is    4.37.
For batch 5, loss is    4.35.
For batch 6, loss is    4.34.
For batch 7, loss is    4.28.

Usage of the self.model attribute

In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference: self.model.

Here are a few of the things you can do with self.model in a callback:

  • Set self.model.stop_training = True to immediately interrupt training.
  • Mutate hyperparameters of the optimizer (available as self.model.optimizer), such as self.model.optimizer.learning_rate.
  • Save the model to disk at regular intervals.
  • Log the output of model.predict() on a few test samples at the end of each epoch, to use as a sanity check during training (see the sketch after this list).
  • Extract visualizations of intermediate features at the end of each epoch, to monitor what the model is learning over time.
  • etc.
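
For instance, here is a minimal sketch of the prediction-logging idea above; the class name and its constructor argument are hypothetical, not part of the Keras API:

class PredictionLogger(keras.callbacks.Callback):
    """Hypothetical callback: log predictions on a few fixed samples each epoch."""

    def __init__(self, sample_inputs):
        super(PredictionLogger, self).__init__()
        self.sample_inputs = sample_inputs  # e.g. x_test[:3]

    def on_epoch_end(self, epoch, logs=None):
        # self.model is the model attached to the current round of training.
        preds = self.model.predict(self.sample_inputs, verbose=0)
        print("Epoch {}: sample predictions: {}".format(epoch, preds.flatten()))

# model.fit(x_train, y_train, epochs=2,
#           callbacks=[PredictionLogger(x_test[:3])])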

Let's see this in action in a couple of examples.

Examples of Keras callback applications

Early stopping at minimum loss

This first example shows the creation of a Callback that stops training when the minimum of loss has been reached, by setting the attribute self.model.stop_training (boolean). Optionally, you can provide an argument patience to specify how many epochs we should wait before stopping after having reached a local minimum.

tf.keras.callbacks.EarlyStopping provides a more complete and general implementation.
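
For reference, the built-in callback is typically wired up as follows (a sketch; the argument values are illustrative):

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="loss",             # quantity to watch
    patience=2,                 # epochs without improvement before stopping
    restore_best_weights=True,  # roll back to the best weights seen
)
# model.fit(x_train, y_train, epochs=30, callbacks=[early_stop])

The custom implementation below reproduces the core of this behavior by hand: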

import numpy as np


class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
    """Stop training when the loss is at its min, i.e. the loss stops decreasing.

  Arguments:
      patience: Number of epochs to wait after min has been hit. After this
      number of no improvement, training stops.
  """

    def __init__(self, patience=0):
        super(EarlyStoppingAtMinLoss, self).__init__()
        self.patience = patience
        # best_weights to store the weights at which the minimum loss occurs.
        self.best_weights = None

    def on_train_begin(self, logs=None):
        # The number of epochs waited while the loss is no longer the minimum.
        self.wait = 0
        # The epoch at which training stops.
        self.stopped_epoch = 0
        # Initialize the best loss as infinity.
        self.best = np.inf

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get("loss")
        if np.less(current, self.best):
            self.best = current
            self.wait = 0
            # Record the best weights if the current result is better (lower).
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))


model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=64,
    steps_per_epoch=5,
    epochs=30,
    verbose=0,
    callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],
)
For batch 0, loss is   36.12.
For batch 1, loss is  473.15.
For batch 2, loss is  324.54.
For batch 3, loss is  245.95.
For batch 4, loss is  198.35.
The average loss for epoch 0 is  198.35 and mean absolute error is    8.54.
For batch 0, loss is    8.53.
For batch 1, loss is    7.74.
For batch 2, loss is    6.75.
For batch 3, loss is    7.01.
For batch 4, loss is    7.12.
The average loss for epoch 1 is    7.12 and mean absolute error is    2.20.
For batch 0, loss is    6.39.
For batch 1, loss is    6.75.
For batch 2, loss is    6.46.
For batch 3, loss is    6.55.
For batch 4, loss is    7.21.
The average loss for epoch 2 is    7.21 and mean absolute error is    2.20.
Restoring model weights from the end of the best epoch.
Epoch 00003: early stopping
<tensorflow.python.keras.callbacks.History at 0x7f39a680ffd0>

Learning rate scheduling

In this example, we show how a custom Callback can be used to dynamically change the learning rate of the optimizer during training.

See callbacks.LearningRateScheduler for more general implementations.
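
The built-in version accepts the same kind of schedule function (epoch index and current learning rate in, new learning rate out). A minimal sketch, with an illustrative schedule:

def halve_every_10_epochs(epoch, lr):
    # Illustrative: halve the learning rate every 10 epochs.
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(halve_every_10_epochs, verbose=1)
# model.fit(x_train, y_train, epochs=30, callbacks=[lr_callback])

The custom version below implements the same idea by hand: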

class CustomLearningRateScheduler(keras.callbacks.Callback):
    """Learning rate scheduler which sets the learning rate according to schedule.

  Arguments:
      schedule: a function that takes an epoch index
          (integer, indexed from 0) and current learning rate
          as inputs and returns a new learning rate as output (float).
  """

    def __init__(self, schedule):
        super(CustomLearningRateScheduler, self).__init__()
        self.schedule = schedule

    def on_epoch_begin(self, epoch, logs=None):
        if not hasattr(self.model.optimizer, "learning_rate"):
            raise ValueError('Optimizer must have a "learning_rate" attribute.')
        # Get the current learning rate from the model's optimizer.
        lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
        # Call the schedule function to get the scheduled learning rate.
        scheduled_lr = self.schedule(epoch, lr)
        # Set the value back on the optimizer before this epoch starts.
        tf.keras.backend.set_value(self.model.optimizer.learning_rate, scheduled_lr)
        print("\nEpoch %05d: Learning rate is %6.4f." % (epoch, scheduled_lr))


LR_SCHEDULE = [
    # (epoch to start, learning rate) tuples
    (3, 0.05),
    (6, 0.01),
    (9, 0.005),
    (12, 0.001),
]


def lr_schedule(epoch, lr):
    """Helper function to retrieve the scheduled learning rate based on epoch."""
    if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
        return lr
    for i in range(len(LR_SCHEDULE)):
        if epoch == LR_SCHEDULE[i][0]:
            return LR_SCHEDULE[i][1]
    return lr


model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=64,
    steps_per_epoch=5,
    epochs=15,
    verbose=0,
    callbacks=[
        LossAndErrorPrintingCallback(),
        CustomLearningRateScheduler(lr_schedule),
    ],
)
Epoch 00000: Learning rate is 0.1000.
For batch 0, loss is   28.49.
For batch 1, loss is  432.45.
For batch 2, loss is  298.60.
For batch 3, loss is  227.34.
For batch 4, loss is  183.34.
The average loss for epoch 0 is  183.34 and mean absolute error is    8.37.

Epoch 00001: Learning rate is 0.1000.
For batch 0, loss is    5.96.
For batch 1, loss is    6.24.
For batch 2, loss is    5.68.
For batch 3, loss is    5.64.
For batch 4, loss is    5.41.
The average loss for epoch 1 is    5.41 and mean absolute error is    1.89.

Epoch 00002: Learning rate is 0.1000.
For batch 0, loss is    4.84.
For batch 1, loss is    4.66.
For batch 2, loss is    5.96.
For batch 3, loss is    7.54.
For batch 4, loss is    8.48.
The average loss for epoch 2 is    8.48 and mean absolute error is    2.29.

Epoch 00003: Learning rate is 0.0500.
For batch 0, loss is   11.10.
For batch 1, loss is    6.77.
For batch 2, loss is    5.99.
For batch 3, loss is    5.07.
For batch 4, loss is    5.03.
The average loss for epoch 3 is    5.03 and mean absolute error is    1.76.

Epoch 00004: Learning rate is 0.0500.
For batch 0, loss is    4.72.
For batch 1, loss is    4.30.
For batch 2, loss is    4.20.
For batch 3, loss is    4.29.
For batch 4, loss is    4.30.
The average loss for epoch 4 is    4.30 and mean absolute error is    1.66.

Epoch 00005: Learning rate is 0.0500.
For batch 0, loss is    5.52.
For batch 1, loss is    5.15.
For batch 2, loss is    4.51.
For batch 3, loss is    4.40.
For batch 4, loss is    4.80.
The average loss for epoch 5 is    4.80 and mean absolute error is    1.77.

Epoch 00006: Learning rate is 0.0100.
For batch 0, loss is    7.07.
For batch 1, loss is    6.72.
For batch 2, loss is    5.62.
For batch 3, loss is    4.79.
For batch 4, loss is    4.68.
The average loss for epoch 6 is    4.68 and mean absolute error is    1.69.

Epoch 00007: Learning rate is 0.0100.
For batch 0, loss is    2.61.
For batch 1, loss is    2.50.
For batch 2, loss is    2.76.
For batch 3, loss is    2.96.
For batch 4, loss is    3.14.
The average loss for epoch 7 is    3.14 and mean absolute error is    1.38.

Epoch 00008: Learning rate is 0.0100.
For batch 0, loss is    4.12.
For batch 1, loss is    3.91.
For batch 2, loss is    3.37.
For batch 3, loss is    3.30.
For batch 4, loss is    3.08.
The average loss for epoch 8 is    3.08 and mean absolute error is    1.37.

Epoch 00009: Learning rate is 0.0050.
For batch 0, loss is    5.81.
For batch 1, loss is    5.12.
For batch 2, loss is    4.53.
For batch 3, loss is    4.08.
For batch 4, loss is    3.95.
The average loss for epoch 9 is    3.95 and mean absolute error is    1.56.

Epoch 00010: Learning rate is 0.0050.
For batch 0, loss is    2.73.
For batch 1, loss is    2.83.
For batch 2, loss is    2.75.
For batch 3, loss is    3.07.
For batch 4, loss is    2.93.
The average loss for epoch 10 is    2.93 and mean absolute error is    1.35.

Epoch 00011: Learning rate is 0.0050.
For batch 0, loss is    3.33.
For batch 1, loss is    3.60.
For batch 2, loss is    3.77.
For batch 3, loss is    3.51.
For batch 4, loss is    3.43.
The average loss for epoch 11 is    3.43 and mean absolute error is    1.40.

Epoch 00012: Learning rate is 0.0010.
For batch 0, loss is    4.29.
For batch 1, loss is    3.72.
For batch 2, loss is    3.78.
For batch 3, loss is    3.61.
For batch 4, loss is    3.47.
The average loss for epoch 12 is    3.47 and mean absolute error is    1.46.

Epoch 00013: Learning rate is 0.0010.
For batch 0, loss is    3.01.
For batch 1, loss is    3.10.
For batch 2, loss is    3.20.
For batch 3, loss is    3.00.
For batch 4, loss is    3.16.
The average loss for epoch 13 is    3.16 and mean absolute error is    1.36.

Epoch 00014: Learning rate is 0.0010.
For batch 0, loss is    5.22.
For batch 1, loss is    3.80.
For batch 2, loss is    3.61.
For batch 3, loss is    3.45.
For batch 4, loss is    3.43.
The average loss for epoch 14 is    3.43 and mean absolute error is    1.43.
<tensorflow.python.keras.callbacks.History at 0x7f39a6875400>

Built-in Keras callbacks

Be sure to check out the existing Keras callbacks by reading the API docs. Applications include logging to CSV, saving the model, visualizing metrics in TensorBoard, and a lot more!
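
As a sketch of how several built-in callbacks can be combined in one training run (the file paths here are illustrative):

callbacks = [
    # Save the model whenever the monitored metric improves.
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", save_best_only=True),
    # Write logs that TensorBoard can visualize.
    tf.keras.callbacks.TensorBoard(log_dir="./logs"),
    # Append per-epoch metrics to a CSV file.
    tf.keras.callbacks.CSVLogger("training_log.csv"),
]
# model.fit(x_train, y_train, epochs=10, callbacks=callbacks)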