
Writing your own callbacks


Introduction

A callback is a powerful tool to customize the behavior of a Keras model during training, evaluation, or inference. Examples include tf.keras.callbacks.TensorBoard to visualize training progress and results with TensorBoard, or tf.keras.callbacks.ModelCheckpoint to periodically save your model during training.

In this guide, you will learn what a Keras callback is, what it can do, and how you can build your own. We provide a few demos of simple callback applications to get you started.

Setup

import tensorflow as tf
from tensorflow import keras

Keras callbacks overview

All callbacks subclass the keras.callbacks.Callback class, and override a set of methods called at various stages of training, testing, and predicting. Callbacks are useful to get a view on internal states and statistics of the model during training.

You can pass a list of callbacks (as the keyword argument callbacks) to the following model methods:

  • keras.Model.fit()
  • keras.Model.evaluate()
  • keras.Model.predict()
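
For example (a minimal sketch, assuming model is a compiled Keras model and my_callback is any keras.callbacks.Callback instance; both names are placeholders):

# Sketch: callbacks are passed as a list via the `callbacks` keyword argument.
model.fit(x_train, y_train, batch_size=128, epochs=2, callbacks=[my_callback])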

An overview of callback methods

Global methods

on_(train|test|predict)_begin(self, logs=None)

Called at the beginning of fit/evaluate/predict.

on_(train|test|predict)_end(self, logs=None)

Called at the end of fit/evaluate/predict.

Batch-level methods for training/testing/predicting

on_(train|test|predict)_batch_begin(self, batch, logs=None)

Called right before processing a batch during training/testing/predicting.

on_(train|test|predict)_batch_end(self, batch, logs=None)

Called at the end of training/testing/predicting a batch. Within this method, logs is a dict containing the metrics results.

Epoch-level methods (training only)

on_epoch_begin(self, epoch, logs=None)

Called at the beginning of an epoch during training.

on_epoch_end(self, epoch, logs=None)

Called at the end of an epoch during training.

A basic example

Let's take a look at a concrete example. To get started, let's import tensorflow and define a simple Sequential Keras model:

# Define the Keras model to add callbacks to
def get_model():
    model = keras.Sequential()
    model.add(keras.layers.Dense(1, input_dim=784))
    model.compile(
        optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
        loss="mean_squared_error",
        metrics=["mean_absolute_error"],
    )
    return model

Then, load the MNIST data for training and testing from the Keras datasets API:

# Load example MNIST data and pre-process it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Limit the data to 1000 samples
x_train = x_train[:1000]
y_train = y_train[:1000]
x_test = x_test[:1000]
y_test = y_test[:1000]
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

Now, define a simple custom callback that logs:

  • when fit/evaluate/predict starts & ends
  • when each epoch starts & ends
  • when each training batch starts & ends
  • when each evaluation (test) batch starts & ends
  • when each inference (prediction) batch starts & ends
class CustomCallback(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        keys = list(logs.keys())
        print("Starting training; got log keys: {}".format(keys))

    def on_train_end(self, logs=None):
        keys = list(logs.keys())
        print("Stop training; got log keys: {}".format(keys))

    def on_epoch_begin(self, epoch, logs=None):
        keys = list(logs.keys())
        print("Start epoch {} of training; got log keys: {}".format(epoch, keys))

    def on_epoch_end(self, epoch, logs=None):
        keys = list(logs.keys())
        print("End epoch {} of training; got log keys: {}".format(epoch, keys))

    def on_test_begin(self, logs=None):
        keys = list(logs.keys())
        print("Start testing; got log keys: {}".format(keys))

    def on_test_end(self, logs=None):
        keys = list(logs.keys())
        print("Stop testing; got log keys: {}".format(keys))

    def on_predict_begin(self, logs=None):
        keys = list(logs.keys())
        print("Start predicting; got log keys: {}".format(keys))

    def on_predict_end(self, logs=None):
        keys = list(logs.keys())
        print("Stop predicting; got log keys: {}".format(keys))

    def on_train_batch_begin(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Training: start of batch {}; got log keys: {}".format(batch, keys))

    def on_train_batch_end(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Training: end of batch {}; got log keys: {}".format(batch, keys))

    def on_test_batch_begin(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Evaluating: start of batch {}; got log keys: {}".format(batch, keys))

    def on_test_batch_end(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Evaluating: end of batch {}; got log keys: {}".format(batch, keys))

    def on_predict_batch_begin(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Predicting: start of batch {}; got log keys: {}".format(batch, keys))

    def on_predict_batch_end(self, batch, logs=None):
        keys = list(logs.keys())
        print("...Predicting: end of batch {}; got log keys: {}".format(batch, keys))

Let's try it out:

model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=128,
    epochs=1,
    verbose=0,
    validation_split=0.5,
    callbacks=[CustomCallback()],
)

res = model.evaluate(
    x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]
)

res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()])
Starting training; got log keys: []
Start epoch 0 of training; got log keys: []
...Training: start of batch 0; got log keys: []
...Training: end of batch 0; got log keys: ['loss', 'mean_absolute_error']
...Training: start of batch 1; got log keys: []
...Training: end of batch 1; got log keys: ['loss', 'mean_absolute_error']
...Training: start of batch 2; got log keys: []
...Training: end of batch 2; got log keys: ['loss', 'mean_absolute_error']
...Training: start of batch 3; got log keys: []
...Training: end of batch 3; got log keys: ['loss', 'mean_absolute_error']
Start testing; got log keys: []
...Evaluating: start of batch 0; got log keys: []
...Evaluating: end of batch 0; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 1; got log keys: []
...Evaluating: end of batch 1; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 2; got log keys: []
...Evaluating: end of batch 2; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 3; got log keys: []
...Evaluating: end of batch 3; got log keys: ['loss', 'mean_absolute_error']
Stop testing; got log keys: ['loss', 'mean_absolute_error']
End epoch 0 of training; got log keys: ['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error']
Stop training; got log keys: ['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error']
Start testing; got log keys: []
...Evaluating: start of batch 0; got log keys: []
...Evaluating: end of batch 0; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 1; got log keys: []
...Evaluating: end of batch 1; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 2; got log keys: []
...Evaluating: end of batch 2; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 3; got log keys: []
...Evaluating: end of batch 3; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 4; got log keys: []
...Evaluating: end of batch 4; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 5; got log keys: []
...Evaluating: end of batch 5; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 6; got log keys: []
...Evaluating: end of batch 6; got log keys: ['loss', 'mean_absolute_error']
...Evaluating: start of batch 7; got log keys: []
...Evaluating: end of batch 7; got log keys: ['loss', 'mean_absolute_error']
Stop testing; got log keys: ['loss', 'mean_absolute_error']
Start predicting; got log keys: []
...Predicting: start of batch 0; got log keys: []
...Predicting: end of batch 0; got log keys: ['outputs']
...Predicting: start of batch 1; got log keys: []
...Predicting: end of batch 1; got log keys: ['outputs']
...Predicting: start of batch 2; got log keys: []
...Predicting: end of batch 2; got log keys: ['outputs']
...Predicting: start of batch 3; got log keys: []
...Predicting: end of batch 3; got log keys: ['outputs']
...Predicting: start of batch 4; got log keys: []
...Predicting: end of batch 4; got log keys: ['outputs']
...Predicting: start of batch 5; got log keys: []
...Predicting: end of batch 5; got log keys: ['outputs']
...Predicting: start of batch 6; got log keys: []
...Predicting: end of batch 6; got log keys: ['outputs']
...Predicting: start of batch 7; got log keys: []
...Predicting: end of batch 7; got log keys: ['outputs']
Stop predicting; got log keys: []

Usage of logs dict

The logs dict contains the loss value, and all the metrics at the end of a batch or epoch. The example below includes the loss and mean absolute error.

class LossAndErrorPrintingCallback(keras.callbacks.Callback):
    def on_train_batch_end(self, batch, logs=None):
        print("For batch {}, loss is {:7.2f}.".format(batch, logs["loss"]))

    def on_test_batch_end(self, batch, logs=None):
        print("For batch {}, loss is {:7.2f}.".format(batch, logs["loss"]))

    def on_epoch_end(self, epoch, logs=None):
        print(
            "The average loss for epoch {} is {:7.2f} "
            "and mean absolute error is {:7.2f}.".format(
                epoch, logs["loss"], logs["mean_absolute_error"]
            )
        )


model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=128,
    epochs=2,
    verbose=0,
    callbacks=[LossAndErrorPrintingCallback()],
)

res = model.evaluate(
    x_test,
    y_test,
    batch_size=128,
    verbose=0,
    callbacks=[LossAndErrorPrintingCallback()],
)
For batch 0, loss is   27.09.
For batch 1, loss is  455.54.
For batch 2, loss is  310.84.
For batch 3, loss is  235.38.
For batch 4, loss is  189.59.
For batch 5, loss is  159.45.
For batch 6, loss is  137.62.
For batch 7, loss is  123.95.
The average loss for epoch 0 is  123.95 and mean absolute error is    6.04.
For batch 0, loss is    4.68.
For batch 1, loss is    4.44.
For batch 2, loss is    4.25.
For batch 3, loss is    4.19.
For batch 4, loss is    4.10.
For batch 5, loss is    4.15.
For batch 6, loss is    4.41.
For batch 7, loss is    4.44.
The average loss for epoch 1 is    4.44 and mean absolute error is    1.70.
For batch 0, loss is    4.60.
For batch 1, loss is    4.22.
For batch 2, loss is    4.30.
For batch 3, loss is    4.23.
For batch 4, loss is    4.37.
For batch 5, loss is    4.35.
For batch 6, loss is    4.34.
For batch 7, loss is    4.28.

Usage of self.model attribute

In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference: self.model.

Here are a few of the things you can do with self.model in a callback:

  • Set self.model.stop_training = True to immediately interrupt training.
  • Mutate hyperparameters of the optimizer (available as self.model.optimizer), such as self.model.optimizer.learning_rate.
  • Save the model at period intervals (see the sketch after this list).
  • Record the output of model.predict() on a few test samples at the end of each epoch, to use as a sanity check during training.
  • Extract visualizations of intermediate features at the end of each epoch, to monitor what the model is learning over time.
  • etc.
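
For instance, periodic saving can be sketched like this (an illustrative class, not part of the Keras API; the file path pattern is a placeholder, and the built-in tf.keras.callbacks.ModelCheckpoint provides a complete implementation of this idea):

class PeriodicSaver(keras.callbacks.Callback):
    """Illustrative sketch: save the model every `save_freq_epochs` epochs."""

    def __init__(self, save_freq_epochs=5, path_template="model_at_epoch_{}.h5"):
        super().__init__()
        self.save_freq_epochs = save_freq_epochs
        self.path_template = path_template  # placeholder path pattern

    def on_epoch_end(self, epoch, logs=None):
        # `self.model` is the model currently being trained.
        if (epoch + 1) % self.save_freq_epochs == 0:
            self.model.save(self.path_template.format(epoch + 1))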

Let's see this in action in a couple of examples.

Examples of Keras callback applications

Early stopping at minimum loss

This first example shows the creation of a Callback that stops training when the loss has reached its minimum, by setting the attribute self.model.stop_training (boolean). Optionally, you can provide an argument patience to specify how many epochs we should wait before stopping after having reached a local minimum.

tf.keras.callbacks.EarlyStopping provides a more complete and general implementation.
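
For reference, wiring up the built-in version looks like this (a minimal sketch; it monitors the training loss here to match the custom example below):

# Sketch: built-in early stopping that restores the best weights on stop.
early_stopping = keras.callbacks.EarlyStopping(
    monitor="loss", patience=2, restore_best_weights=True
)
model.fit(x_train, y_train, epochs=30, verbose=0, callbacks=[early_stopping])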

import numpy as np


class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
    """Stop training when the loss is at its min, i.e. the loss stops decreasing.

  Arguments:
      patience: Number of epochs to wait after min has been hit. After this
      number of epochs with no improvement, training stops.
  """

    def __init__(self, patience=0):
        super(EarlyStoppingAtMinLoss, self).__init__()
        self.patience = patience
        # best_weights to store the weights at which the minimum loss occurs.
        self.best_weights = None

    def on_train_begin(self, logs=None):
        # The number of epochs it has waited while the loss is no longer minimal.
        self.wait = 0
        # The epoch the training stops at.
        self.stopped_epoch = 0
        # Initialize the best as infinity.
        self.best = np.inf

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get("loss")
        if np.less(current, self.best):
            self.best = current
            self.wait = 0
            # Record the best weights if the current result is better (lower).
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))


model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=64,
    steps_per_epoch=5,
    epochs=30,
    verbose=0,
    callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],
)
For batch 0, loss is   36.12.
For batch 1, loss is  473.15.
For batch 2, loss is  324.54.
For batch 3, loss is  245.95.
For batch 4, loss is  198.35.
The average loss for epoch 0 is  198.35 and mean absolute error is    8.54.
For batch 0, loss is    8.53.
For batch 1, loss is    7.74.
For batch 2, loss is    6.75.
For batch 3, loss is    7.01.
For batch 4, loss is    7.12.
The average loss for epoch 1 is    7.12 and mean absolute error is    2.20.
For batch 0, loss is    6.39.
For batch 1, loss is    6.75.
For batch 2, loss is    6.46.
For batch 3, loss is    6.55.
For batch 4, loss is    7.21.
The average loss for epoch 2 is    7.21 and mean absolute error is    2.20.
Restoring model weights from the end of the best epoch.
Epoch 00003: early stopping
<tensorflow.python.keras.callbacks.History at 0x7f39a680ffd0>

Learning rate scheduling

In this example, we show how a custom Callback can be used to dynamically change the learning rate of the optimizer during the course of training.

See callbacks.LearningRateScheduler for a more general implementation.
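
The built-in scheduler accepts a schedule function with the same (epoch, lr) signature used below (a minimal sketch with an illustrative decay rule):

# Sketch: keep the initial rate for 3 epochs, then decay by 10% per epoch.
lr_callback = keras.callbacks.LearningRateScheduler(
    lambda epoch, lr: lr if epoch < 3 else lr * 0.9
)
# Pass it to fit() like any other callback: model.fit(..., callbacks=[lr_callback])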

class CustomLearningRateScheduler(keras.callbacks.Callback):
    """Learning rate scheduler which sets the learning rate according to schedule.

  Arguments:
      schedule: a function that takes an epoch index
          (integer, indexed from 0) and current learning rate
          as inputs and returns a new learning rate as output (float).
  """

    def __init__(self, schedule):
        super(CustomLearningRateScheduler, self).__init__()
        self.schedule = schedule

    def on_epoch_begin(self, epoch, logs=None):
        if not hasattr(self.model.optimizer, "learning_rate"):
            raise ValueError('Optimizer must have a "learning_rate" attribute.')
        # Get the current learning rate from the model's optimizer.
        lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
        # Call the schedule function to get the scheduled learning rate.
        scheduled_lr = self.schedule(epoch, lr)
        # Set the value back to the optimizer before this epoch starts.
        tf.keras.backend.set_value(self.model.optimizer.learning_rate, scheduled_lr)
        print("\nEpoch %05d: Learning rate is %6.4f." % (epoch, scheduled_lr))


LR_SCHEDULE = [
    # (epoch to start, learning rate) tuples
    (3, 0.05),
    (6, 0.01),
    (9, 0.005),
    (12, 0.001),
]


def lr_schedule(epoch, lr):
    """Helper function to retrieve the scheduled learning rate based on epoch."""
    if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
        return lr
    for i in range(len(LR_SCHEDULE)):
        if epoch == LR_SCHEDULE[i][0]:
            return LR_SCHEDULE[i][1]
    return lr


model = get_model()
model.fit(
    x_train,
    y_train,
    batch_size=64,
    steps_per_epoch=5,
    epochs=15,
    verbose=0,
    callbacks=[
        LossAndErrorPrintingCallback(),
        CustomLearningRateScheduler(lr_schedule),
    ],
)
Epoch 00000: Learning rate is 0.1000.
For batch 0, loss is   28.49.
For batch 1, loss is  432.45.
For batch 2, loss is  298.60.
For batch 3, loss is  227.34.
For batch 4, loss is  183.34.
The average loss for epoch 0 is  183.34 and mean absolute error is    8.37.

Epoch 00001: Learning rate is 0.1000.
For batch 0, loss is    5.96.
For batch 1, loss is    6.24.
For batch 2, loss is    5.68.
For batch 3, loss is    5.64.
For batch 4, loss is    5.41.
The average loss for epoch 1 is    5.41 and mean absolute error is    1.89.

Epoch 00002: Learning rate is 0.1000.
For batch 0, loss is    4.84.
For batch 1, loss is    4.66.
For batch 2, loss is    5.96.
For batch 3, loss is    7.54.
For batch 4, loss is    8.48.
The average loss for epoch 2 is    8.48 and mean absolute error is    2.29.

Epoch 00003: Learning rate is 0.0500.
For batch 0, loss is   11.10.
For batch 1, loss is    6.77.
For batch 2, loss is    5.99.
For batch 3, loss is    5.07.
For batch 4, loss is    5.03.
The average loss for epoch 3 is    5.03 and mean absolute error is    1.76.

Epoch 00004: Learning rate is 0.0500.
For batch 0, loss is    4.72.
For batch 1, loss is    4.30.
For batch 2, loss is    4.20.
For batch 3, loss is    4.29.
For batch 4, loss is    4.30.
The average loss for epoch 4 is    4.30 and mean absolute error is    1.66.

Epoch 00005: Learning rate is 0.0500.
For batch 0, loss is    5.52.
For batch 1, loss is    5.15.
For batch 2, loss is    4.51.
For batch 3, loss is    4.40.
For batch 4, loss is    4.80.
The average loss for epoch 5 is    4.80 and mean absolute error is    1.77.

Epoch 00006: Learning rate is 0.0100.
For batch 0, loss is    7.07.
For batch 1, loss is    6.72.
For batch 2, loss is    5.62.
For batch 3, loss is    4.79.
For batch 4, loss is    4.68.
The average loss for epoch 6 is    4.68 and mean absolute error is    1.69.

Epoch 00007: Learning rate is 0.0100.
For batch 0, loss is    2.61.
For batch 1, loss is    2.50.
For batch 2, loss is    2.76.
For batch 3, loss is    2.96.
For batch 4, loss is    3.14.
The average loss for epoch 7 is    3.14 and mean absolute error is    1.38.

Epoch 00008: Learning rate is 0.0100.
For batch 0, loss is    4.12.
For batch 1, loss is    3.91.
For batch 2, loss is    3.37.
For batch 3, loss is    3.30.
For batch 4, loss is    3.08.
The average loss for epoch 8 is    3.08 and mean absolute error is    1.37.

Epoch 00009: Learning rate is 0.0050.
For batch 0, loss is    5.81.
For batch 1, loss is    5.12.
For batch 2, loss is    4.53.
For batch 3, loss is    4.08.
For batch 4, loss is    3.95.
The average loss for epoch 9 is    3.95 and mean absolute error is    1.56.

Epoch 00010: Learning rate is 0.0050.
For batch 0, loss is    2.73.
For batch 1, loss is    2.83.
For batch 2, loss is    2.75.
For batch 3, loss is    3.07.
For batch 4, loss is    2.93.
The average loss for epoch 10 is    2.93 and mean absolute error is    1.35.

Epoch 00011: Learning rate is 0.0050.
For batch 0, loss is    3.33.
For batch 1, loss is    3.60.
For batch 2, loss is    3.77.
For batch 3, loss is    3.51.
For batch 4, loss is    3.43.
The average loss for epoch 11 is    3.43 and mean absolute error is    1.40.

Epoch 00012: Learning rate is 0.0010.
For batch 0, loss is    4.29.
For batch 1, loss is    3.72.
For batch 2, loss is    3.78.
For batch 3, loss is    3.61.
For batch 4, loss is    3.47.
The average loss for epoch 12 is    3.47 and mean absolute error is    1.46.

Epoch 00013: Learning rate is 0.0010.
For batch 0, loss is    3.01.
For batch 1, loss is    3.10.
For batch 2, loss is    3.20.
For batch 3, loss is    3.00.
For batch 4, loss is    3.16.
The average loss for epoch 13 is    3.16 and mean absolute error is    1.36.

Epoch 00014: Learning rate is 0.0010.
For batch 0, loss is    5.22.
For batch 1, loss is    3.80.
For batch 2, loss is    3.61.
For batch 3, loss is    3.45.
For batch 4, loss is    3.43.
The average loss for epoch 14 is    3.43 and mean absolute error is    1.43.
<tensorflow.python.keras.callbacks.History at 0x7f39a6875400>

Built-in Keras callbacks

Be sure to check out the existing Keras callbacks by reading the API docs. Applications include logging to CSV, saving the model, visualizing metrics in TensorBoard, and a lot more!
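
As a quick sketch of wiring a few of them up (the file paths here are placeholders):

builtin_callbacks = [
    keras.callbacks.CSVLogger("training_log.csv"),  # append per-epoch metrics to a CSV file
    keras.callbacks.ModelCheckpoint(filepath="checkpoint_model.h5"),  # save the model each epoch
    keras.callbacks.TensorBoard(log_dir="./logs"),  # write summaries for TensorBoard
]

model = get_model()
model.fit(x_train, y_train, batch_size=128, epochs=2, verbose=0, callbacks=builtin_callbacks)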