
Premade Estimators


This tutorial shows you how to solve the Iris classification problem in TensorFlow using Estimators. An Estimator is a legacy high-level TensorFlow representation of a complete model. For more details, see Estimators.

First things first

To get started, you will first import TensorFlow and a number of libraries you will need.

import tensorflow as tf

import pandas as pd
2021-07-09 01:21:17.647127: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0

The data set

The sample program in this document builds and tests a model that classifies Iris flowers into three different species based on the size of their sepals and petals.

You will train a model using the Iris data set. The Iris data set contains four features and one label. The four features identify the following botanical characteristics of individual Iris flowers:

  • sepal length
  • sepal width
  • petal length
  • petal width

Based on this information, you can define a few helpful constants for parsing the data:

CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']

Next, download and parse the Iris data set using Keras and Pandas. Note that you keep distinct datasets for training and testing.

train_path = tf.keras.utils.get_file(
    "iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
    "iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")

train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv
8192/2194 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv
8192/573 [==============================] - 0s 0us/step

You can inspect your data to see that you have four float feature columns and one int32 label.

train.head()
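If you want to confirm this explicitly, a quick check with pandas (an optional step, not part of the original notebook) looks like this:

# Optional sanity check: list the dtypes and shape of the training DataFrame.
print(train.dtypes)   # four floating-point feature columns plus the integer 'Species' label
print(train.shape)    # (number of rows, 5); the Iris training split is small (about 120 examples)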

For each of the datasets, split out the labels, which the model will be trained to predict.

train_y = train.pop('Species')
test_y = test.pop('Species')

# The label column has now been removed from the features.
train.head()

Overview of programming with Estimators

Now that you have the data set up, you can define a model using a TensorFlow Estimator. An Estimator is any class derived from tf.estimator.Estimator. TensorFlow provides a collection of tf.estimator (for example, LinearRegressor) to implement common ML algorithms. Beyond those, you may write your own custom Estimators. It is recommended to use pre-made Estimators when just getting started.

To write a TensorFlow program based on pre-made Estimators, you must perform the following tasks:

  • Create one or more input functions.
  • Define the model's feature columns.
  • Instantiate an Estimator, specifying the feature columns and various hyperparameters.
  • Call one or more methods on the Estimator object, passing the appropriate input function as the source of the data.

Let's see how those tasks are implemented for Iris classification.

Create input functions

You must create input functions to supply data for training, evaluating, and prediction.

An input function is a function that returns a tf.data.Dataset object which outputs the following two-element tuple:

  • features - A Python dictionary in which:
    • Each key is the name of a feature.
    • Each value is an array containing all of that feature's values.
  • label - An array containing the values of the label for every example.

Just to demonstrate the format of the input function, here's a simple implementation:

import numpy as np

def input_evaluation_set():
    features = {'SepalLength': np.array([6.4, 5.0]),
                'SepalWidth':  np.array([2.8, 2.3]),
                'PetalLength': np.array([5.6, 3.3]),
                'PetalWidth':  np.array([2.2, 1.0])}
    labels = np.array([2, 1])
    return features, labels

Your input function may generate the features dictionary and label list any way you like. However, it is recommended to use TensorFlow's Dataset API, which can parse all sorts of data.

The Dataset API can handle a lot of common cases for you. For example, using the Dataset API, you can easily read in records from a large collection of files in parallel and join them into a single stream.
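As an illustration of that claim (a sketch only, not needed for this tutorial; the file pattern below is hypothetical), reading many CSV shards in parallel with the Dataset API might look like this:

# Read several CSV files in parallel and merge them into a single stream of lines.
file_pattern = '/path/to/shards/*.csv'   # hypothetical location of sharded CSV files
files = tf.data.Dataset.list_files(file_pattern)
records = files.interleave(
    lambda f: tf.data.TextLineDataset(f).skip(1),   # skip each file's header row
    cycle_length=4,                                  # read up to 4 files at a time
    num_parallel_calls=tf.data.experimental.AUTOTUNE)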

To keep things simple in this example, you are going to load the data with pandas and build an input pipeline from this in-memory data:

def input_fn(features, labels, training=True, batch_size=256):
    """An input function for training or evaluating"""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))

    # Shuffle and repeat if you are in training mode.
    if training:
        dataset = dataset.shuffle(1000).repeat()

    return dataset.batch(batch_size)
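
To convince yourself that input_fn produces data in the expected format, you can pull a single batch in eager mode (a quick sanity check, not part of the original notebook):

# Take one batch from the evaluation-style pipeline and inspect its structure.
features_batch, labels_batch = next(iter(input_fn(train, train_y, training=False)))
print(list(features_batch.keys()))         # the four feature names
print(features_batch['SepalLength'][:5])   # a float tensor of sepal lengths
print(labels_batch[:5])                    # an integer tensor of species labels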

Define the feature columns

A feature column is an object describing how the model should use the raw input data from the features dictionary. When you build an Estimator model, you pass it a list of feature columns that describes each of the features you want the model to use. The tf.feature_column module provides many options for representing data to the model.

For Iris, the 4 raw features are numeric values, so you will build a list of feature columns to tell the Estimator model to represent each of the four features as 32-bit floating-point values. Therefore, the code to create the feature columns is:

# Feature columns describe how to use the input.
my_feature_columns = []
for key in train.keys():
    my_feature_columns.append(tf.feature_column.numeric_column(key=key))

Feature columns can be far more sophisticated than those shown here. You can read more about feature columns in this guide.
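
For instance (an illustrative sketch only; the plain numeric columns above are all this tutorial needs), a numeric column can be bucketized so the model sees value ranges rather than raw floats:

# Illustration: represent sepal length as one of four ranges instead of a raw value.
sepal_length = tf.feature_column.numeric_column('SepalLength')
sepal_length_bucketized = tf.feature_column.bucketized_column(
    sepal_length, boundaries=[5.0, 6.0, 7.0])   # boundaries chosen arbitrarily here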

Now that you have the description of how you want the model to represent the raw features, you can build the Estimator.

Instantiate an estimator

The Iris problem is a classic classification problem. Fortunately, TensorFlow provides several pre-made classifier Estimators, including:

  • tf.estimator.DNNClassifier for deep models that perform multi-class classification.
  • tf.estimator.DNNLinearCombinedClassifier for wide & deep models.
  • tf.estimator.LinearClassifier for classifiers based on linear models.

For the Iris problem, tf.estimator.DNNClassifier seems like the best choice. Here is how you instantiate this Estimator:

# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    # Two hidden layers of 30 and 10 nodes respectively.
    hidden_units=[30, 10],
    # The model must choose between 3 classes.
    n_classes=3)
2021-07-09 01:21:19.558010: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-09 01:21:20.231408: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.232032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-09 01:21:20.232063: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-09 01:21:20.235269: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-09 01:21:20.235347: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-09 01:21:20.236422: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-09 01:21:20.236742: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-09 01:21:20.237708: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-09 01:21:20.238567: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-09 01:21:20.238730: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-09 01:21:20.238818: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.239441: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.240009: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-09 01:21:20.240696: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-09 01:21:20.241238: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.241818: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-09 01:21:20.241888: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.242488: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.243024: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-09 01:21:20.243060: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-09 01:21:20.804599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-09 01:21:20.804631: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-09 01:21:20.804639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-09 01:21:20.804832: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.805481: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.806065: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:20.806673: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpseh3yn6n
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpseh3yn6n', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
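
The warning above shows that, since no model directory was specified, checkpoints are written to a temporary folder. If you want to keep them between runs, you can pass model_dir when constructing the Estimator (a minimal sketch; the path below is only an example):

# Same model as above, but with an explicit, persistent checkpoint directory.
classifier_persistent = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[30, 10],
    n_classes=3,
    model_dir='/tmp/iris_model')   # example path; any writable directory works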

Train, Evaluate, and Predict

Now that you have an Estimator object, you can call methods to do the following:

  • Train the model.
  • Evaluate the trained model.
  • Use the trained model to make predictions.

Train the model

Train the model by calling the Estimator's train method as follows:

# Train the Model.
classifier.train(
    input_fn=lambda: input_fn(train, train_y, training=True),
    steps=5000)
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/adagrad.py:88: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
2021-07-09 01:21:21.556477: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:21.556846: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-09 01:21:21.556958: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:21.557247: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:21.557501: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-09 01:21:21.557538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-09 01:21:21.557545: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-09 01:21:21.557551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-09 01:21:21.557647: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:21.557945: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:21.558204: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
2021-07-09 01:21:21.573557: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2000179999 Hz
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpseh3yn6n/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
2021-07-09 01:21:21.918584: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
INFO:tensorflow:loss = 1.0744461, step = 0
2021-07-09 01:21:22.316159: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
INFO:tensorflow:global_step/sec: 334.538
INFO:tensorflow:loss = 0.9813716, step = 100 (0.300 sec)
INFO:tensorflow:global_step/sec: 394.633
INFO:tensorflow:loss = 0.9640286, step = 200 (0.253 sec)
INFO:tensorflow:global_step/sec: 405.418
INFO:tensorflow:loss = 0.870553, step = 300 (0.247 sec)
INFO:tensorflow:global_step/sec: 414.009
INFO:tensorflow:loss = 0.7720107, step = 400 (0.241 sec)
INFO:tensorflow:global_step/sec: 419.099
INFO:tensorflow:loss = 0.7458334, step = 500 (0.239 sec)
INFO:tensorflow:global_step/sec: 413.423
INFO:tensorflow:loss = 0.73442066, step = 600 (0.242 sec)
INFO:tensorflow:global_step/sec: 407.202
INFO:tensorflow:loss = 0.6953498, step = 700 (0.246 sec)
INFO:tensorflow:global_step/sec: 399.47
INFO:tensorflow:loss = 0.6968536, step = 800 (0.250 sec)
INFO:tensorflow:global_step/sec: 417.937
INFO:tensorflow:loss = 0.669106, step = 900 (0.239 sec)
INFO:tensorflow:global_step/sec: 415.499
INFO:tensorflow:loss = 0.6549559, step = 1000 (0.241 sec)
INFO:tensorflow:global_step/sec: 399.254
INFO:tensorflow:loss = 0.644197, step = 1100 (0.251 sec)
INFO:tensorflow:global_step/sec: 381.26
INFO:tensorflow:loss = 0.628752, step = 1200 (0.262 sec)
INFO:tensorflow:global_step/sec: 380.687
INFO:tensorflow:loss = 0.6085156, step = 1300 (0.263 sec)
INFO:tensorflow:global_step/sec: 385.335
INFO:tensorflow:loss = 0.6050654, step = 1400 (0.259 sec)
INFO:tensorflow:global_step/sec: 379.021
INFO:tensorflow:loss = 0.5857471, step = 1500 (0.264 sec)
INFO:tensorflow:global_step/sec: 383.237
INFO:tensorflow:loss = 0.5734547, step = 1600 (0.261 sec)
INFO:tensorflow:global_step/sec: 379.347
INFO:tensorflow:loss = 0.5768546, step = 1700 (0.264 sec)
INFO:tensorflow:global_step/sec: 385.847
INFO:tensorflow:loss = 0.5602146, step = 1800 (0.259 sec)
INFO:tensorflow:global_step/sec: 374.918
INFO:tensorflow:loss = 0.5603363, step = 1900 (0.268 sec)
INFO:tensorflow:global_step/sec: 379.286
INFO:tensorflow:loss = 0.53942347, step = 2000 (0.263 sec)
INFO:tensorflow:global_step/sec: 387.69
INFO:tensorflow:loss = 0.5318261, step = 2100 (0.258 sec)
INFO:tensorflow:global_step/sec: 373.274
INFO:tensorflow:loss = 0.519292, step = 2200 (0.268 sec)
INFO:tensorflow:global_step/sec: 368.826
INFO:tensorflow:loss = 0.51804626, step = 2300 (0.271 sec)
INFO:tensorflow:global_step/sec: 381.156
INFO:tensorflow:loss = 0.49958432, step = 2400 (0.262 sec)
INFO:tensorflow:global_step/sec: 380.416
INFO:tensorflow:loss = 0.47292516, step = 2500 (0.263 sec)
INFO:tensorflow:global_step/sec: 379.532
INFO:tensorflow:loss = 0.4866906, step = 2600 (0.264 sec)
INFO:tensorflow:global_step/sec: 367.405
INFO:tensorflow:loss = 0.4665504, step = 2700 (0.272 sec)
INFO:tensorflow:global_step/sec: 393.807
INFO:tensorflow:loss = 0.46227247, step = 2800 (0.254 sec)
INFO:tensorflow:global_step/sec: 389.936
INFO:tensorflow:loss = 0.44966495, step = 2900 (0.257 sec)
INFO:tensorflow:global_step/sec: 376.884
INFO:tensorflow:loss = 0.44808808, step = 3000 (0.265 sec)
INFO:tensorflow:global_step/sec: 392.111
INFO:tensorflow:loss = 0.43817097, step = 3100 (0.255 sec)
INFO:tensorflow:global_step/sec: 392.087
INFO:tensorflow:loss = 0.43738297, step = 3200 (0.255 sec)
INFO:tensorflow:global_step/sec: 397.546
INFO:tensorflow:loss = 0.42564616, step = 3300 (0.252 sec)
INFO:tensorflow:global_step/sec: 399.665
INFO:tensorflow:loss = 0.41426587, step = 3400 (0.250 sec)
INFO:tensorflow:global_step/sec: 398.991
INFO:tensorflow:loss = 0.41321295, step = 3500 (0.251 sec)
INFO:tensorflow:global_step/sec: 400.251
INFO:tensorflow:loss = 0.41148052, step = 3600 (0.250 sec)
INFO:tensorflow:global_step/sec: 392.045
INFO:tensorflow:loss = 0.40983573, step = 3700 (0.255 sec)
INFO:tensorflow:global_step/sec: 387.784
INFO:tensorflow:loss = 0.39185163, step = 3800 (0.258 sec)
INFO:tensorflow:global_step/sec: 385.973
INFO:tensorflow:loss = 0.38712424, step = 3900 (0.259 sec)
INFO:tensorflow:global_step/sec: 397.847
INFO:tensorflow:loss = 0.3770343, step = 4000 (0.251 sec)
INFO:tensorflow:global_step/sec: 399.684
INFO:tensorflow:loss = 0.39354473, step = 4100 (0.250 sec)
INFO:tensorflow:global_step/sec: 394.732
INFO:tensorflow:loss = 0.37436056, step = 4200 (0.253 sec)
INFO:tensorflow:global_step/sec: 394.382
INFO:tensorflow:loss = 0.37443662, step = 4300 (0.254 sec)
INFO:tensorflow:global_step/sec: 381.682
INFO:tensorflow:loss = 0.35983646, step = 4400 (0.262 sec)
INFO:tensorflow:global_step/sec: 386.426
INFO:tensorflow:loss = 0.3579504, step = 4500 (0.259 sec)
INFO:tensorflow:global_step/sec: 387.776
INFO:tensorflow:loss = 0.35766554, step = 4600 (0.258 sec)
INFO:tensorflow:global_step/sec: 393.279
INFO:tensorflow:loss = 0.3629043, step = 4700 (0.254 sec)
INFO:tensorflow:global_step/sec: 396.422
INFO:tensorflow:loss = 0.34867007, step = 4800 (0.253 sec)
INFO:tensorflow:global_step/sec: 390.771
INFO:tensorflow:loss = 0.33946946, step = 4900 (0.255 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5000...
INFO:tensorflow:Saving checkpoints for 5000 into /tmp/tmpseh3yn6n/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5000...
INFO:tensorflow:Loss for final step: 0.34298706.
<tensorflow_estimator.python.estimator.canned.dnn.DNNClassifierV2 at 0x7f8a375c6610>

Note that you wrap up your input_fn call in a lambda to capture the arguments while providing an input function that takes no arguments, as expected by the Estimator. The steps argument tells the method to stop training after a number of training steps.
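
If you prefer, the same binding can be done with functools.partial instead of a lambda (an equivalent sketch, not used elsewhere in this tutorial):

from functools import partial

# partial binds the arguments up front and returns a zero-argument callable,
# which is what the Estimator expects from input_fn.
training_input_fn = partial(input_fn, train, train_y, training=True)
# classifier.train(input_fn=training_input_fn, steps=5000) behaves the same as the call above.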

Evaluate the trained model

Now that the model has been trained, you can get some statistics on its performance. The following code block evaluates the accuracy of the trained model on the test data:

eval_result = classifier.evaluate(
    input_fn=lambda: input_fn(test, test_y, training=False))

print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-07-09T01:21:35
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmpseh3yn6n/model.ckpt-5000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.20065s
INFO:tensorflow:Finished evaluation at 2021-07-09-01:21:35
INFO:tensorflow:Saving dict for global step 5000: accuracy = 0.93333334, average_loss = 0.40179652, global_step = 5000, loss = 0.40179652
2021-07-09 01:21:35.538566: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.538955: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-09 01:21:35.539101: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.539507: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.539833: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-09 01:21:35.539878: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-09 01:21:35.539886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-09 01:21:35.539892: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-09 01:21:35.540030: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.540370: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.540714: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 5000: /tmp/tmpseh3yn6n/model.ckpt-5000

Test set accuracy: 0.933

Unlike the call to the train method, you did not pass the steps argument to evaluate. The input_fn for eval only yields a single epoch of data.

The eval_result dictionary also contains the average_loss (mean loss per sample), the loss (mean loss per mini-batch) and the value of the Estimator's global_step (the number of training iterations it underwent).
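
To look at every metric returned by evaluate at once, you can simply iterate over the dictionary (a minimal sketch):

# Print all metrics returned by classifier.evaluate().
for name, value in eval_result.items():
    print('{}: {}'.format(name, value))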

Making predictions (inferring) from the trained model

You now have a trained model that produces good evaluation results. You can now use the trained model to predict the species of an Iris flower based on some unlabeled measurements. As with training and evaluation, you make predictions using a single function call:

# Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
    'SepalLength': [5.1, 5.9, 6.9],
    'SepalWidth': [3.3, 3.0, 3.1],
    'PetalLength': [1.7, 4.2, 5.4],
    'PetalWidth': [0.5, 1.5, 2.1],
}

def input_fn(features, batch_size=256):
    """An input function for prediction."""
    # Convert the inputs to a Dataset without labels.
    return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)

predictions = classifier.predict(
    input_fn=lambda: input_fn(predict_x))

The predict method returns a Python iterable, yielding a dictionary of prediction results for each example. The following code prints a few predictions and their probabilities:

for pred_dict, expec in zip(predictions, expected):
    class_id = pred_dict['class_ids'][0]
    probability = pred_dict['probabilities'][class_id]

    print('Prediction is "{}" ({:.1f}%), expected "{}"'.format(
        SPECIES[class_id], 100 * probability, expec))
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmpseh3yn6n/model.ckpt-5000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
Prediction is "Setosa" (87.5%), expected "Setosa"
Prediction is "Versicolor" (52.7%), expected "Versicolor"
Prediction is "Virginica" (64.5%), expected "Virginica"
2021-07-09 01:21:35.958955: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.959406: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-09 01:21:35.959597: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.960061: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.960403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-09 01:21:35.960461: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-09 01:21:35.960471: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-09 01:21:35.960482: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-09 01:21:35.960646: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.961092: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-09 01:21:35.961439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
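
If you are curious about everything a single prediction record contains, you can recreate the iterator (the one above has already been consumed) and inspect one element; keys such as 'class_ids', 'probabilities' and 'logits' are among those returned (a sketch, not part of the original notebook):

# Recreate the prediction iterator and inspect the keys of the first result.
predictions = classifier.predict(input_fn=lambda: input_fn(predict_x))
first_prediction = next(iter(predictions))
print(sorted(first_prediction.keys()))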