Migrate your TensorFlow 1 code to TensorFlow 2


This guide is for users of low-level TensorFlow APIs. If you are using the high-level APIs (tf.keras), there may be little or no action you need to take to make your code fully TensorFlow 2.x compatible:

It is still possible to run unmodified 1.x code (except for contrib) in TensorFlow 2.x:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

However, this does not let you take advantage of many of the improvements made in TensorFlow 2.x. This guide will help you upgrade your code, making it simpler, more performant, and easier to maintain.

Automatic conversion script

The first step, before attempting to implement the changes described in this guide, is to try running the upgrade script.

This will do an initial pass at upgrading your code to TensorFlow 2.x, but it can't make your code idiomatic v2. Your code may still make use of tf.compat.v1 endpoints to access placeholders, sessions, collections, and other 1.x-style functionality.

High-level behavioral changes

If your code works in TensorFlow 2.x using tf.compat.v1.disable_v2_behavior, there are still global behavioral changes you may need to address. The major changes are:

  • Eager execution, v1.enable_eager_execution(): Any code that implicitly uses a tf.Graph will fail. Be sure to wrap this code in a with tf.Graph().as_default() context.

  • Resource variables, v1.enable_resource_variables(): Some code may depend on non-deterministic behaviors enabled by TensorFlow reference variables. Resource variables are locked while being written to, and so provide more intuitive consistency guarantees.

    • This may change behavior in edge cases.
    • This may create extra copies and can have higher memory usage.
    • This can be disabled by passing use_resource=False to the tf.Variable constructor.
  • Tensor shapes, v1.enable_v2_tensorshape(): TensorFlow 2.x simplifies the behavior of tensor shapes. Instead of t.shape[0].value you can say t.shape[0]. These changes should be small, and it makes sense to fix them right away. Refer to the TensorShape section for examples.

  • Control flow, v1.enable_control_flow_v2(): The TensorFlow 2.x control flow implementation has been simplified, and so produces different graph representations. Please file bugs for any issues.
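The tensor-shape change above is easy to check directly. This is a minimal sketch, assuming TensorFlow 2.x defaults (eager execution and v2 tensor shapes enabled):

```python
import tensorflow as tf

t = tf.zeros((2, 3))

# TF 1.x (with v2 shapes disabled): t.shape[0].value
# TF 2.x: the dimension is a plain Python int
dim0 = t.shape[0]
print(dim0)
```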

Make the code TensorFlow 2.x-native

This guide will walk through several examples of converting TensorFlow 1.x code to TensorFlow 2.x. These changes will let your code take advantage of performance optimizations and simplified API calls.

In each case, the pattern is:

1. Replace v1.Session.run calls

Every v1.Session.run call should be replaced by a Python function.

  • The feed_dict and v1.placeholders become function arguments.
  • The fetches become the function's return value.
  • During conversion, eager execution allows easy debugging with standard Python tools like pdb.

After that, add a tf.function decorator to make it run efficiently in graph mode. Check out the Autograph guide for more on how this works.

Note that:

  • Unlike v1.Session.run, a tf.function has a fixed return signature and always returns all outputs. If this causes performance problems, create two separate functions.

  • There is no need for a tf.control_dependencies or similar operations: a tf.function behaves as if it were run in the order written. tf.Variable assignments and tf.asserts, for example, are executed automatically.

The converting models section contains a working example of this conversion process.
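As a minimal sketch of the pattern (the variable names here are illustrative, not taken from any particular graph):

```python
import tensorflow as tf

# Hypothetical TF 1.x code being replaced:
#   outs = sess.run([out_a], feed_dict={in_a: [1.0, 0.0]})

W = tf.Variable(tf.ones(shape=(2, 2)))
b = tf.Variable(tf.zeros(shape=(2,)))

@tf.function  # optional: compiles the Python function into a graph
def forward(x):
  # The placeholder becomes an argument, the fetch becomes the return value.
  return W * x + b

out_a = forward(tf.constant([1.0, 0.0]))
print(out_a)
```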

2. Use Python objects to track variables and losses

All name-based variable tracking is strongly discouraged in TensorFlow 2.x. Use Python objects to track variables.

Use tf.Variable instead of v1.get_variable.

Every v1.variable_scope should be converted to a Python object. Typically this will be one of:

  • tf.keras.layers.Layer
  • tf.keras.Model
  • tf.Module

If you need to aggregate lists of variables (like tf.Graph.get_collection(tf.GraphKeys.VARIABLES)), use the .variables and .trainable_variables attributes of the Layer and Model objects.

These Layer and Model classes implement several other properties that remove the need for global collections. Their .losses property can be a replacement for using the tf.GraphKeys.LOSSES collection.

Refer to the Keras guides for details.
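A small illustration of object-based tracking (the layer and regularizer here are made-up examples, not part of this guide's model):

```python
import tensorflow as tf

# A layer object tracks its own variables and regularization losses.
layer = tf.keras.layers.Dense(
    4, kernel_regularizer=tf.keras.regularizers.l2(0.01))
layer.build(input_shape=(None, 3))  # creates the kernel and bias

# .trainable_variables replaces name/collection-based lookups.
print(len(layer.trainable_variables))  # kernel and bias

# .losses replaces the tf.GraphKeys.LOSSES collection.
print(len(layer.losses))  # the l2 regularization loss
```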

3. Upgrade your training loops

Use the highest-level API that works for your use case. Prefer tf.keras.Model.fit over building your own training loops.

These high-level functions manage a lot of the low-level details that might be easy to miss if you write your own training loop. For example, they automatically collect the regularization losses, and set the training=True argument when calling the model.

4. Upgrade your data input pipelines

Use tf.data datasets for data input. These objects are efficient, expressive, and integrate well with TensorFlow.

They can be passed directly to the tf.keras.Model.fit method:

model.fit(dataset, epochs=5)

They can be iterated over directly using standard Python:

for example_batch, label_batch in dataset:
    break

5. Migrate off compat.v1 symbols

The tf.compat.v1 module contains the complete TensorFlow 1.x API, with its original semantics.

The TensorFlow 2.x upgrade script will convert symbols to their v2 equivalents if such a conversion is safe, i.e. if it can determine that the behavior of the TensorFlow 2.x version is exactly equivalent (for example, it will rename v1.arg_max to tf.argmax, since those are the same function).

After the upgrade script is done with a piece of code, it is likely there are many mentions of compat.v1. It is worth going through the code and converting these manually to the v2 equivalent (it should be mentioned in the log if there is one).
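For instance, the arg_max rename mentioned above is safe because the two symbols are the same function:

```python
import tensorflow as tf

x = tf.constant([1.0, 5.0, 3.0])

# tf.compat.v1.arg_max and tf.argmax are the same function,
# so the upgrade script rewrites the former to the latter.
idx = tf.argmax(x)
print(int(idx))
```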

Converting models

Low-level variables & operator execution

Examples of low-level API use include:

  • Using variable scopes to control reuse.
  • Creating variables with v1.get_variable.
  • Accessing collections explicitly.
  • Accessing collections implicitly with methods like:

    • v1.global_variables
    • v1.losses.get_regularization_loss

  • Using v1.placeholder to set up graph inputs.

  • Executing graphs with Session.run.

  • Initializing variables manually.

Before converting

Here is what these patterns may look like in code using TensorFlow 1.x.

import tensorflow as tf
import tensorflow.compat.v1 as v1

import tensorflow_datasets as tfds
g = v1.Graph()

with g.as_default():
  in_a = v1.placeholder(dtype=v1.float32, shape=(2))
  in_b = v1.placeholder(dtype=v1.float32, shape=(2))

  def forward(x):
    with v1.variable_scope("matmul", reuse=v1.AUTO_REUSE):
      W = v1.get_variable("W", initializer=v1.ones(shape=(2,2)),
                          regularizer=lambda x:tf.reduce_mean(x**2))
      b = v1.get_variable("b", initializer=v1.zeros(shape=(2)))
      return W * x + b

  out_a = forward(in_a)
  out_b = forward(in_b)
  reg_loss=v1.losses.get_regularization_loss(scope="matmul")

with v1.Session(graph=g) as sess:
  sess.run(v1.global_variables_initializer())
  outs = sess.run([out_a, out_b, reg_loss],
                feed_dict={in_a: [1, 0], in_b: [0, 1]})

print(outs[0])
print()
print(outs[1])
print()
print(outs[2])
[[1. 0.]
 [1. 0.]]

[[0. 1.]
 [0. 1.]]

1.0

After converting

In the converted code:

  • The variables are local Python objects.
  • The forward function still defines the calculation.
  • The Session.run call is replaced with a call to forward.
  • The optional tf.function decorator can be added for performance.
  • The regularizations are calculated manually, without referring to any global collection.
  • There are no sessions or placeholders.
W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")

@tf.function
def forward(x):
  return W * x + b

out_a = forward([1,0])
print(out_a)
tf.Tensor(
[[1. 0.]
 [1. 0.]], shape=(2, 2), dtype=float32)
out_b = forward([0,1])

regularizer = tf.keras.regularizers.l2(0.04)
reg_loss=regularizer(W)

Models based on tf.layers

The v1.layers module is used to contain layer functions that relied on v1.variable_scope to define and reuse variables.

Before converting

def model(x, training, scope='model'):
  with v1.variable_scope(scope, reuse=v1.AUTO_REUSE):
    x = v1.layers.conv2d(x, 32, 3, activation=v1.nn.relu,
          kernel_regularizer=lambda x:0.004*tf.reduce_mean(x**2))
    x = v1.layers.max_pooling2d(x, (2, 2), 1)
    x = v1.layers.flatten(x)
    x = v1.layers.dropout(x, 0.1, training=training)
    x = v1.layers.dense(x, 64, activation=v1.nn.relu)
    x = v1.layers.batch_normalization(x, training=training)
    x = v1.layers.dense(x, 10)
    return x
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))

train_out = model(train_data, training=True)
test_out = model(test_data, training=False)

print(train_out)
print()
print(test_out)
tf.Tensor([[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]], shape=(1, 10), dtype=float32)

tf.Tensor(
[[ 0.04853132 -0.08974641 -0.32679698  0.07017353  0.12982666 -0.2153313
  -0.09793851  0.10957378  0.01823931  0.00898573]], shape=(1, 10), dtype=float32)

After converting

Most of the arguments stayed the same. But note the differences:

  • The training argument is passed to each layer by the model when it runs.
  • The first argument to the original model function (the input x) is gone. This is because object layers separate building the model from calling the model.

Also note that:

  • If you were using regularizers or initializers from tf.contrib, these have more argument changes than others.
  • The code no longer writes to collections, so functions like v1.losses.get_regularization_loss will no longer return these values, which could break your training loops.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.04),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))
train_out = model(train_data, training=True)
print(train_out)
tf.Tensor([[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]], shape=(1, 10), dtype=float32)
test_out = model(test_data, training=False)
print(test_out)
tf.Tensor(
[[-0.06252427  0.30122417 -0.18610534 -0.04890637 -0.01496555  0.41607457
   0.24905115  0.014429   -0.12719882 -0.22354674]], shape=(1, 10), dtype=float32)
# Here are all the trainable variables
len(model.trainable_variables)
8
# Here is the regularization loss
model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=0.07443664>]

Mixed variables & v1.layers

Existing code often mixes lower-level TensorFlow 1.x variables and operations with higher-level v1.layers.

Before converting

def model(x, training, scope='model'):
  with v1.variable_scope(scope, reuse=v1.AUTO_REUSE):
    W = v1.get_variable(
      "W", dtype=v1.float32,
      initializer=v1.ones(shape=x.shape),
      regularizer=lambda x:0.004*tf.reduce_mean(x**2),
      trainable=True)
    if training:
      x = x + W
    else:
      x = x + W * 0.5
    x = v1.layers.conv2d(x, 32, 3, activation=tf.nn.relu)
    x = v1.layers.max_pooling2d(x, (2, 2), 1)
    x = v1.layers.flatten(x)
    return x

train_out = model(train_data, training=True)
test_out = model(test_data, training=False)

After converting

To convert this code, follow the pattern of mapping layers to layers as in the previous example.

The general pattern is:

  • Collect layer parameters in __init__.
  • Build the variables in build.
  • Execute the calculations in call, and return the result.

The v1.variable_scope is essentially a layer of its own. So rewrite it as a tf.keras.layers.Layer. Check out the Making new layers and models via subclassing guide for details.

# Create a custom layer for part of the model
class CustomLayer(tf.keras.layers.Layer):
  def __init__(self, *args, **kwargs):
    super(CustomLayer, self).__init__(*args, **kwargs)

  def build(self, input_shape):
    self.w = self.add_weight(
        shape=input_shape[1:],
        dtype=tf.float32,
        initializer=tf.keras.initializers.ones(),
        regularizer=tf.keras.regularizers.l2(0.02),
        trainable=True)

  # Call method will sometimes get used in graph mode,
  # training will get turned into a tensor
  @tf.function
  def call(self, inputs, training=None):
    if training:
      return inputs + self.w
    else:
      return inputs + self.w * 0.5
custom_layer = CustomLayer()
print(custom_layer([1]).numpy())
print(custom_layer([1], training=True).numpy())
[1.5]
[2.]
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))

# Build the model including the custom layer
model = tf.keras.Sequential([
    CustomLayer(input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
])

train_out = model(train_data, training=True)
test_out = model(test_data, training=False)

A few things to note:

  • Subclassed Keras models and layers need to run in both v1 graphs (no automatic control dependencies) and in eager mode:

    • Wrap the call in a tf.function to get autograph and automatic control dependencies.
  • Don't forget to accept a training argument to call:

    • Sometimes it is a tf.Tensor
    • Sometimes it is a Python boolean
  • Create model variables in the constructor or Model.build using self.add_weight:

    • In Model.build you have access to the input shape, so you can create weights with matching shapes
    • Using tf.keras.layers.Layer.add_weight allows Keras to track variables and regularization losses
  • Don't keep tf.Tensors in your objects:

    • They might get created either in a tf.function or in the eager context, and these tensors behave differently
    • Use tf.Variables for state; they are always usable from both contexts
    • tf.Tensors are only for intermediate values
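The state-handling advice above can be sketched as follows (the Counter class is a made-up example):

```python
import tensorflow as tf

class Counter(tf.Module):
  """Keeps state in a tf.Variable so it works in both eager and graph mode."""

  def __init__(self):
    # State lives in a tf.Variable, not a tf.Tensor: a Tensor created
    # inside a tf.function cannot be kept between calls.
    self.count = tf.Variable(0)

  @tf.function
  def increment(self):
    self.count.assign_add(1)
    return self.count

counter = Counter()
counter.increment()
counter.increment()
print(int(counter.count))
```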

A note on Slim and contrib.layers

A large amount of older TensorFlow 1.x code uses the Slim library, which was packaged with TensorFlow 1.x as tf.contrib.layers. As a contrib module, this is no longer available in TensorFlow 2.x, even in tf.compat.v1. Converting code using Slim to TensorFlow 2.x is more involved than converting repositories that use v1.layers. In fact, it may make sense to convert your Slim code to v1.layers first, then convert to Keras.

  • Remove arg_scopes; all args need to be explicit.
  • If you use them, split normalizer_fn and activation_fn into their own layers.
  • Separable conv layers map to one or more different Keras layers (depthwise, pointwise, and separable Keras layers).
  • Slim and v1.layers have different argument names and default values.
  • Some args have different scales.
  • If you use Slim pre-trained models, try out Keras's pre-trained models from tf.keras.applications, or TF Hub's TensorFlow 2.x SavedModels exported from the original Slim code.

Some tf.contrib layers might not have been moved to core TensorFlow, but have instead been moved to the TensorFlow Addons package.

Training

There are many ways to feed data to a tf.keras model. They will accept Python generators and NumPy arrays as input.

The recommended way to feed data to a model is to use the tf.data package, which contains a collection of high-performance classes for manipulating data.

If you are still using tf.queue, these are now only supported as data structures, not as input pipelines.
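The queue-free replacement is simply a Dataset; a minimal sketch with made-up values:

```python
import tensorflow as tf

# Instead of enqueue/dequeue ops, build a Dataset and iterate it directly.
dataset = tf.data.Dataset.from_tensor_slices([10, 20, 30, 40]).batch(2)
batches = [batch.numpy().tolist() for batch in dataset]
print(batches)
```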

Using TensorFlow Datasets

The TensorFlow Datasets package (tfds) contains utilities for loading predefined datasets as tf.data.Dataset objects.

For this example, you can load the MNIST dataset using tfds:

datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']

Then, prepare the data for training:

  • Rescale each image.
  • Shuffle the order of the examples.
  • Collect batches of images and labels.
BUFFER_SIZE = 10 # Use a much larger value for real code
BATCH_SIZE = 64
NUM_EPOCHS = 5


def scale(image, label):
  image = tf.cast(image, tf.float32)
  image /= 255

  return image, label

To keep the example short, trim the dataset to only return 5 batches:

train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_data = mnist_test.map(scale).batch(BATCH_SIZE)

STEPS_PER_EPOCH = 5

train_data = train_data.take(STEPS_PER_EPOCH)
test_data = test_data.take(STEPS_PER_EPOCH)
image_batch, label_batch = next(iter(train_data))

Using Keras training loops

If you don't need low-level control of your training process, using Keras's built-in fit, evaluate, and predict methods is recommended. These methods provide a uniform interface to train the model regardless of the implementation (sequential, functional, or subclassed).

The advantages of these methods include:

  • They accept NumPy arrays, Python generators and, tf.data.Datasets.
  • They apply regularization and activation losses automatically.
  • They support tf.distribute for multi-device training.
  • They support arbitrary callables as losses and metrics.
  • They support callbacks like tf.keras.callbacks.TensorBoard, and custom callbacks.
  • They are performant, automatically using TensorFlow graphs.

Here is an example of training a model using a Dataset. (For details on how this works, check out the tutorials section.)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

# Model is the full model w/o custom layers
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(train_data, epochs=NUM_EPOCHS)
loss, acc = model.evaluate(test_data)

print("Loss {}, Accuracy {}".format(loss, acc))
Epoch 1/5
5/5 [==============================] - 2s 8ms/step - loss: 1.5874 - accuracy: 0.4719
Epoch 2/5
2021-07-19 23:37:20.919125: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset  will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
5/5 [==============================] - 0s 5ms/step - loss: 0.4435 - accuracy: 0.9094
Epoch 3/5
5/5 [==============================] - 0s 6ms/step - loss: 0.2764 - accuracy: 0.9594
Epoch 4/5
5/5 [==============================] - 0s 5ms/step - loss: 0.1889 - accuracy: 0.9844
Epoch 5/5
5/5 [==============================] - 1s 6ms/step - loss: 0.1504 - accuracy: 0.9906
5/5 [==============================] - 1s 3ms/step - loss: 1.6299 - accuracy: 0.7031
Loss 1.6299388408660889, Accuracy 0.703125

Write your own loop

If the Keras model's training step works for you, but you need more control outside that step, consider using the tf.keras.Model.train_on_batch method in your own data-iteration loop.

Remember: many things can be implemented as a tf.keras.callbacks.Callback.

This method has many of the advantages of the methods mentioned in the previous section, but gives the user control of the outer loop.

You can also use tf.keras.Model.test_on_batch or tf.keras.Model.evaluate to check performance during training.

To continue training the model above:

# Model is the full model w/o custom layers
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

for epoch in range(NUM_EPOCHS):
  # Reset the metric accumulators
  model.reset_metrics()

  for image_batch, label_batch in train_data:
    result = model.train_on_batch(image_batch, label_batch)
    metrics_names = model.metrics_names
    print("train: ",
          "{}: {:.3f}".format(metrics_names[0], result[0]),
          "{}: {:.3f}".format(metrics_names[1], result[1]))
  for image_batch, label_batch in test_data:
    result = model.test_on_batch(image_batch, label_batch,
                                 # Return accumulated metrics
                                 reset_metrics=False)
  metrics_names = model.metrics_names
  print("\neval: ",
        "{}: {:.3f}".format(metrics_names[0], result[0]),
        "{}: {:.3f}".format(metrics_names[1], result[1]))
train:  loss: 0.131 accuracy: 1.000
train:  loss: 0.179 accuracy: 0.969
train:  loss: 0.117 accuracy: 0.984
train:  loss: 0.187 accuracy: 0.969
train:  loss: 0.168 accuracy: 0.969
eval:  loss: 1.655 accuracy: 0.703
train:  loss: 0.083 accuracy: 1.000
train:  loss: 0.080 accuracy: 1.000
train:  loss: 0.099 accuracy: 0.984
train:  loss: 0.088 accuracy: 1.000
train:  loss: 0.084 accuracy: 1.000
eval:  loss: 1.645 accuracy: 0.759
train:  loss: 0.066 accuracy: 1.000
train:  loss: 0.070 accuracy: 1.000
train:  loss: 0.062 accuracy: 1.000
train:  loss: 0.067 accuracy: 1.000
train:  loss: 0.061 accuracy: 1.000
eval:  loss: 1.609 accuracy: 0.819
train:  loss: 0.056 accuracy: 1.000
train:  loss: 0.053 accuracy: 1.000
train:  loss: 0.048 accuracy: 1.000
train:  loss: 0.057 accuracy: 1.000
train:  loss: 0.069 accuracy: 0.984
eval:  loss: 1.568 accuracy: 0.825
train:  loss: 0.048 accuracy: 1.000
train:  loss: 0.048 accuracy: 1.000
train:  loss: 0.044 accuracy: 1.000
train:  loss: 0.045 accuracy: 1.000
train:  loss: 0.045 accuracy: 1.000
eval:  loss: 1.531 accuracy: 0.841

Customize the training step

If you need more flexibility and control, you can have it by implementing your own training loop. There are three steps:

  1. Iterate over a Python generator or tf.data.Dataset to get batches of examples.
  2. Use tf.GradientTape to collect gradients.
  3. Use one of the tf.keras.optimizers to apply weight updates to the model's variables.

Remember:

  • Always include a training argument on the call method of subclassed layers and models.
  • Make sure to call the model with the training argument set correctly.
  • Depending on usage, model variables may not exist until the model is run on a batch of data.
  • You need to manually handle things like regularization losses for the model.

Note the simplifications relative to v1:

  • There is no need to run variable initializers. Variables are initialized on creation.
  • There is no need to add manual control dependencies. Even in tf.function, operations act as in eager mode.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

optimizer = tf.keras.optimizers.Adam(0.001)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss=tf.math.add_n(model.losses)
    pred_loss=loss_fn(labels, predictions)
    total_loss=pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

for epoch in range(NUM_EPOCHS):
  for inputs, labels in train_data:
    train_step(inputs, labels)
  print("Finished epoch", epoch)
Finished epoch 0
Finished epoch 1
Finished epoch 2
Finished epoch 3
Finished epoch 4

New-style metrics and losses

In TensorFlow 2.x, metrics and losses are objects. They work both eagerly and inside tf.functions.

A loss object is callable, and expects (y_true, y_pred) as arguments:

cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
cce([[1, 0]], [[-1.0,3.0]]).numpy()
4.01815
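To see where that number comes from, the value can be checked by hand in plain Python (no TensorFlow needed): with from_logits=True, the loss is the negative log of the softmax probability of the true class.

```python
# With from_logits=True, CategoricalCrossentropy computes
# -log(softmax(logits)[true_class]). For y_true = [1, 0] the true class is 0,
# and the logits are [-1, 3], so the loss is
# -log(e^-1 / (e^-1 + e^3)) = log(1 + e^4) ≈ 4.01815.
import math

logits = [-1.0, 3.0]
true_class = 0
log_sum_exp = math.log(sum(math.exp(z) for z in logits))
loss = log_sum_exp - logits[true_class]  # -log of the softmax probability
print(round(loss, 5))  # 4.01815
```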

A metric object has the following methods:

  • Metric.update_state(): add new observations.
  • Metric.result(): get the current result of the metric, given the observed values.
  • Metric.reset_states(): clear all observations.

The object itself is also callable. Calling it updates the state with new observations, as with update_state, and returns the new result of the metric.
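These accumulator semantics can be mimicked in a few lines of plain Python (a sketch of the behavior, not the real tf.keras.metrics.Mean implementation):

```python
# Minimal plain-Python sketch of the accumulator semantics behind
# tf.keras.metrics.Mean: update_state() adds observations, result() returns
# the current mean, reset_states() clears everything, and calling the object
# both updates the state and returns the new result.
class MeanMetric:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update_state(self, value):
        self.total += value
        self.count += 1

    def result(self):
        return self.total / self.count if self.count else 0.0

    def reset_states(self):
        self.total, self.count = 0.0, 0

    def __call__(self, value):
        # Calling the metric updates the state and returns the new result.
        self.update_state(value)
        return self.result()

m = MeanMetric()
m.update_state(2.0)
m.update_state(4.0)
print(m.result())   # 3.0
print(m(6.0))       # 4.0 (the call added a third observation)
m.reset_states()
print(m.result())   # 0.0
```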

You don't have to manually initialize a metric's variables, and because TensorFlow 2.x has automatic control dependencies, you don't need to worry about those either.

The code below uses a metric to track the mean loss observed in a custom training loop.

# Create the metrics
loss_metric = tf.keras.metrics.Mean(name='train_loss')
accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss=tf.math.add_n(model.losses)
    pred_loss=loss_fn(labels, predictions)
    total_loss=pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
  # Update the metrics
  loss_metric.update_state(total_loss)
  accuracy_metric.update_state(labels, predictions)


for epoch in range(NUM_EPOCHS):
  # Reset the metrics
  loss_metric.reset_states()
  accuracy_metric.reset_states()

  for inputs, labels in train_data:
    train_step(inputs, labels)
  # Get the metric results
  mean_loss=loss_metric.result()
  mean_accuracy = accuracy_metric.result()

  print('Epoch: ', epoch)
  print('  loss:     {:.3f}'.format(mean_loss))
  print('  accuracy: {:.3f}'.format(mean_accuracy))
Epoch:  0
  loss:     0.172
  accuracy: 0.988
Epoch:  1
  loss:     0.143
  accuracy: 0.997
Epoch:  2
  loss:     0.126
  accuracy: 0.997
Epoch:  3
  loss:     0.109
  accuracy: 1.000
Epoch:  4
  loss:     0.092
  accuracy: 1.000

Keras metric names

In TensorFlow 2.x, Keras models are more consistent about handling metric names.

Now when you pass a string in the metrics list, that exact string is used as the metric's name. These names are visible in the history object returned by model.fit and in the logs passed to keras.callbacks, and are set to the exact string you passed in the metrics list.

model.compile(
    optimizer = tf.keras.optimizers.Adam(0.001),
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics = ['acc', 'accuracy', tf.keras.metrics.SparseCategoricalAccuracy(name="my_accuracy")])
history = model.fit(train_data)
5/5 [==============================] - 1s 6ms/step - loss: 0.1042 - acc: 0.9969 - accuracy: 0.9969 - my_accuracy: 0.9969
history.history.keys()
dict_keys(['loss', 'acc', 'accuracy', 'my_accuracy'])

This differs from previous versions, where passing metrics=["accuracy"] would result in dict_keys(['loss', 'acc']).

Keras optimizers

The optimizers in v1.train, such as v1.train.AdamOptimizer and v1.train.GradientDescentOptimizer, have equivalents in tf.keras.optimizers.

Convert v1.train to keras.optimizers

Here are things to keep in mind when converting your optimizers:

New default values for some tf.keras.optimizers

There are no changes for optimizers.SGD, optimizers.Adam, or optimizers.RMSprop.

The default learning rates have changed for several of the other optimizers, so check the tf.keras.optimizers API documentation before relying on the defaults.
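A few keyword arguments were also renamed along the way; for example, v1.train.AdamOptimizer's beta1 and beta2 became beta_1 and beta_2 in tf.keras.optimizers.Adam, and v1.train.RMSPropOptimizer's decay argument became rho in tf.keras.optimizers.RMSprop. A plain-Python sketch of such a keyword translation (a hypothetical helper, not part of TensorFlow):

```python
# Hypothetical helper: translate v1.train optimizer keyword arguments to
# their tf.keras.optimizers equivalents. The rename table covers the Adam
# (beta1 -> beta_1, beta2 -> beta_2) and RMSProp (decay -> rho) cases.
V1_TO_KERAS_ARG_RENAMES = {
    "beta1": "beta_1",
    "beta2": "beta_2",
    "decay": "rho",  # v1.train.RMSPropOptimizer's decay argument only
}

def convert_optimizer_kwargs(v1_kwargs):
    """Return a kwargs dict with v1-style names replaced by Keras names."""
    return {
        V1_TO_KERAS_ARG_RENAMES.get(name, name): value
        for name, value in v1_kwargs.items()
    }

# v1: v1.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999)
keras_kwargs = convert_optimizer_kwargs(
    {"learning_rate": 0.001, "beta1": 0.9, "beta2": 0.999})
print(keras_kwargs)
# {'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999}
```

The resulting dict could then be passed as tf.keras.optimizers.Adam(**keras_kwargs).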

TensorBoard

TensorFlow 2.x includes significant changes to the tf.summary API used to write summary data for visualization in TensorBoard. For a general introduction to the new tf.summary, there are several tutorials available that use the TensorFlow 2.x API, including a TensorBoard TensorFlow 2.x migration guide.

Saving and loading

Checkpoint compatibility

TensorFlow 2.x uses object-based checkpoints.

Old-style name-based checkpoints can still be loaded, if you're careful. The code conversion process may result in variable name changes, but there are workarounds.

The simplest approach is to align the names of the new model with the names in the checkpoint:

  • Variables still all take a name argument you can set.
  • Keras models also take a name argument, which they set as the prefix for their variables.
  • The v1.name_scope function can be used to set variable name prefixes. This is very different from tf.variable_scope. It only affects names, and doesn't track variables or reuse.

If that doesn't work for your use case, try the v1.train.init_from_checkpoint function. It takes an assignment_map argument, which specifies the mapping from old names to new names.
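For illustration, an assignment_map is essentially a dict from names (or scope prefixes) in the old checkpoint to names in the new model. A hypothetical pure-Python helper for the common prefix-rename case (the helper and the variable names are made up for this example):

```python
# Hypothetical example: build an assignment_map for
# v1.train.init_from_checkpoint. Keys are variable names in the old
# checkpoint, values are the corresponding names in the new model.
def prefix_assignment_map(old_prefix, new_prefix, variable_names):
    """Map every variable under old_prefix to the same name under new_prefix."""
    return {
        old_prefix + name: new_prefix + name
        for name in variable_names
    }

assignment_map = prefix_assignment_map(
    "old_scope/", "new_scope/", ["dense/kernel", "dense/bias"])
print(assignment_map)
# {'old_scope/dense/kernel': 'new_scope/dense/kernel',
#  'old_scope/dense/bias': 'new_scope/dense/bias'}
```

The resulting dict would then be passed as the assignment_map argument of v1.train.init_from_checkpoint.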

The TensorFlow Estimator repository includes a conversion tool to upgrade the checkpoints of premade Estimators from TensorFlow 1.x to 2.0. It can serve as an example of how to build a tool for a similar use case.

SavedModel compatibility

There are no significant compatibility concerns for saved models.

  • TensorFlow 1.x SavedModels work in TensorFlow 2.x.
  • TensorFlow 2.x SavedModels work in TensorFlow 1.x if all the ops are supported.

A Graph.pb or Graph.pbtxt

There is no straightforward way to upgrade a raw Graph.pb file to TensorFlow 2.x. Your best bet is to upgrade the code that generated the file.

But, if you have a "frozen graph" (a tf.Graph where the variables have been turned into constants), then it is possible to convert it to a concrete_function using v1.wrap_function:

def wrap_frozen_graph(graph_def, inputs, outputs):
  def _imports_graph_def():
    tf.compat.v1.import_graph_def(graph_def, name="")
  wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
  import_graph = wrapped_import.graph
  return wrapped_import.prune(
      tf.nest.map_structure(import_graph.as_graph_element, inputs),
      tf.nest.map_structure(import_graph.as_graph_element, outputs))

For example, here is a frozen graph for Inception v1, from 2016:

path = tf.keras.utils.get_file(
    'inception_v1_2016_08_28_frozen.pb',
    'http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz',
    untar=True)
Downloading data from http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz
24698880/24695710 [==============================] - 1s 0us/step

Load the tf.GraphDef:

graph_def = tf.compat.v1.GraphDef()
loaded = graph_def.ParseFromString(open(path,'rb').read())

Wrap it in a concrete_function:

inception_func = wrap_frozen_graph(
    graph_def, inputs='input:0',
    outputs='InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu:0')

Pass it a tensor as input:

input_img = tf.ones([1,224,224,3], dtype=tf.float32)
inception_func(input_img).shape
TensorShape([1, 28, 28, 96])

Estimators

Training with Estimators

Estimators are supported in TensorFlow 2.x.

When you use Estimators, you can use input_fn, tf.estimator.TrainSpec, and tf.estimator.EvalSpec from TensorFlow 1.x.

Here is an example using input_fn with train and eval specs.

Creating the input_fn and train/eval specs

# Define the estimator's input_fn
def input_fn():
  datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
  mnist_train, mnist_test = datasets['train'], datasets['test']

  BUFFER_SIZE = 10000
  BATCH_SIZE = 64

  def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255

    return image, label[..., tf.newaxis]

  train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
  return train_data.repeat()

# Define train and eval specs
train_spec = tf.estimator.TrainSpec(input_fn=input_fn,
                                    max_steps=STEPS_PER_EPOCH * NUM_EPOCHS)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn,
                                  steps=STEPS_PER_EPOCH)

Using a Keras model definition

There are some differences in how to construct your Estimators in TensorFlow 2.x.

We recommend that you define your model using Keras, then use the tf.keras.estimator.model_to_estimator utility to turn your model into an Estimator. The code below shows how to use this utility when creating and training an Estimator.

def make_model():
  return tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
  ])
model = make_model()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

estimator = tf.keras.estimator.model_to_estimator(
  keras_model = model
)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
INFO:tensorflow:Using default config.
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpbhtumut0
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpbhtumut0
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using the Keras model provided.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:534: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/keras/backend.py:435: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
  warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:534: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpbhtumut0', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
2021-07-19 23:37:36.453946: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:36.454330: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-19 23:37:36.454461: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:36.454737: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:36.454977: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-19 23:37:36.455020: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-19 23:37:36.455027: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-19 23:37:36.455033: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-19 23:37:36.455126: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:36.455479: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:36.455779: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpbhtumut0', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps None or save_checkpoints_secs 600.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/tmp/tmpbhtumut0/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: /tmp/tmpbhtumut0/keras/keras_model.ckpt
INFO:tensorflow:Warm-starting variables only in TRAINABLE_VARIABLES.
INFO:tensorflow:Warm-started 8 variables.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
2021-07-19 23:37:39.175917: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:39.176299: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-19 23:37:39.176424: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:39.176729: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:39.176999: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-19 23:37:39.177042: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-19 23:37:39.177050: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-19 23:37:39.177057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-19 23:37:39.177159: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:39.177481: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:39.177761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpbhtumut0/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 3.1193407, step = 0
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 25...
INFO:tensorflow:Saving checkpoints for 25 into /tmp/tmpbhtumut0/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 25...
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:2426: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
  warnings.warn('`Model.state_updates` will be removed in a future version. '
INFO:tensorflow:Starting evaluation at 2021-07-19T23:37:42
INFO:tensorflow:Graph was finalized.
2021-07-19 23:37:42.476830: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:42.477207: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-19 23:37:42.477339: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:42.477648: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:42.477910: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-19 23:37:42.477955: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-19 23:37:42.477963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-19 23:37:42.477969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-19 23:37:42.478058: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:42.478332: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
INFO:tensorflow:Restoring parameters from /tmp/tmpbhtumut0/model.ckpt-25
2021-07-19 23:37:42.478592: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [1/5]
INFO:tensorflow:Evaluation [2/5]
INFO:tensorflow:Evaluation [3/5]
INFO:tensorflow:Evaluation [4/5]
INFO:tensorflow:Evaluation [5/5]
INFO:tensorflow:Inference Time : 1.02146s
2021-07-19 23:37:43.437293: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset  will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
INFO:tensorflow:Finished evaluation at 2021-07-19-23:37:43
INFO:tensorflow:Saving dict for global step 25: accuracy = 0.634375, global_step = 25, loss = 1.493957
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 25: /tmp/tmpbhtumut0/model.ckpt-25
INFO:tensorflow:Loss for final step: 0.37796202.
2021-07-19 23:37:43.510911: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset  will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
({'accuracy': 0.634375, 'loss': 1.493957, 'global_step': 25}, [])

Using a custom model_fn

If you have an existing custom Estimator model_fn that you need to maintain, you can convert your model_fn to use a Keras model.

However, for compatibility reasons, a custom model_fn still runs in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies.
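You can observe this graph-mode behavior directly: an Estimator traces a custom model_fn inside a tf.Graph context, and inside such a context eager execution is disabled even though it is enabled globally in TensorFlow 2.x. A minimal sketch:

```python
import tensorflow as tf

# At the top level, TF 2.x runs eagerly.
assert tf.executing_eagerly()

# A custom model_fn is traced inside a tf.Graph context, where eager
# execution is off: ops build graph nodes instead of running immediately.
with tf.Graph().as_default():
    inside_model_fn_context = tf.executing_eagerly()

print(inside_model_fn_context)  # False
```

This is why the examples below must collect update ops and group them into the train_op by hand rather than relying on automatic control dependencies.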

Custom model_fn with minimal changes

To make your custom model_fn work in TensorFlow 2.x with minimal changes to your existing code, you can keep using tf.compat.v1 symbols such as optimizers and metrics.

Using a Keras model in a custom model_fn is similar to using it in a custom training loop:

  • Set the training phase appropriately, based on the mode argument.
  • Explicitly pass the model's trainable_variables to the optimizer.

But there are important differences relative to a custom loop:

  • Instead of using Model.losses, extract the losses using Model.get_losses_for.
  • Extract the model's updates using Model.get_updates_for.

The following code creates an Estimator from a custom model_fn, illustrating all of these concerns.

def my_model_fn(features, labels, mode):
  model = make_model()

  optimizer = tf.compat.v1.train.AdamOptimizer()
  loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

  training = (mode == tf.estimator.ModeKeys.TRAIN)
  predictions = model(features, training=training)

  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss = loss_fn(labels, predictions) + tf.math.add_n(reg_losses)

  accuracy = tf.compat.v1.metrics.accuracy(labels=labels,
                                           predictions=tf.math.argmax(predictions, axis=1),
                                           name='acc_op')

  update_ops = model.get_updates_for(None) + model.get_updates_for(features)
  minimize_op = optimizer.minimize(
      total_loss,
      var_list=model.trainable_variables,
      global_step=tf.compat.v1.train.get_or_create_global_step())
  train_op = tf.group(minimize_op, update_ops)

  return tf.estimator.EstimatorSpec(
    mode=mode,
    predictions=predictions,
    loss=total_loss,
    train_op=train_op, eval_metric_ops={'accuracy': accuracy})

# Create the Estimator & Train
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpqiom6a5s
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpqiom6a5s', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps None or save_checkpoints_secs 600.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
2021-07-19 23:37:46.140692: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:46.141065: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-19 23:37:46.141220: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:46.141517: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:46.141765: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-19 23:37:46.141807: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-19 23:37:46.141814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-19 23:37:46.141820: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-19 23:37:46.141907: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:46.142234: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:46.142497: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpqiom6a5s/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 2.9167266, step = 0
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 25...
INFO:tensorflow:Saving checkpoints for 25 into /tmp/tmpqiom6a5s/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 25...
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-07-19T23:37:49
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmpqiom6a5s/model.ckpt-25
2021-07-19 23:37:49.640699: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:49.641091: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-19 23:37:49.641238: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:49.641580: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:49.641848: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-19 23:37:49.641893: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-19 23:37:49.641901: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-19 23:37:49.641910: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-19 23:37:49.642029: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:49.642355: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:49.642657: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [1/5]
INFO:tensorflow:Evaluation [2/5]
INFO:tensorflow:Evaluation [3/5]
INFO:tensorflow:Evaluation [4/5]
INFO:tensorflow:Evaluation [5/5]
INFO:tensorflow:Inference Time : 1.38362s
2021-07-19 23:37:50.924973: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset  will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
INFO:tensorflow:Finished evaluation at 2021-07-19-23:37:50
INFO:tensorflow:Saving dict for global step 25: accuracy = 0.70625, global_step = 25, loss = 1.6135181
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 25: /tmp/tmpqiom6a5s/model.ckpt-25
INFO:tensorflow:Loss for final step: 0.60315084.
2021-07-19 23:37:51.035953: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset  will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
({'accuracy': 0.70625, 'loss': 1.6135181, 'global_step': 25}, [])

Custom model_fn with TensorFlow 2.x symbols

If you want to get rid of all TensorFlow 1.x symbols and upgrade your custom model_fn to TensorFlow 2.x, you need to update the optimizer and metrics to tf.keras.optimizers and tf.keras.metrics.
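The key API difference for metrics: tf.compat.v1.metrics.accuracy returns a (value, update_op) pair of tensors, while a tf.keras metric is a stateful object that you update and read explicitly. A minimal sketch (the example labels are made up for illustration):

```python
import tensorflow as tf

# tf.keras metrics are stateful objects: accumulate with update_state(),
# read the running value with result().
acc = tf.keras.metrics.Accuracy(name='acc_obj')
acc.update_state(y_true=[1, 2, 3, 4], y_pred=[1, 2, 3, 0])  # 3 of 4 match
print(float(acc.result()))  # 0.75
```

In an EstimatorSpec, the metric object itself is passed in eval_metric_ops, as the migrated example below does with accuracy_obj.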

In the custom model_fn, beyond the changes above, a few more upgrades are needed: use tf.keras.optimizers in place of the v1.train optimizers, compute the minimize_op with Optimizer.get_updates, and assign the v1 global step to optimizer.iterations so that any SessionRunHook relying on the global step keeps working.

For the my_model_fn example above, the migrated code using TensorFlow 2.x symbols looks like this:

def my_model_fn(features, labels, mode):
  model = make_model()

  training = (mode == tf.estimator.ModeKeys.TRAIN)
  loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
  predictions = model(features, training=training)

  # Get both the unconditional losses (the None part)
  # and the input-conditional losses (the features part).
  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss = loss_obj(labels, predictions) + tf.math.add_n(reg_losses)

  # Upgrade to tf.keras.metrics.
  accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')
  accuracy = accuracy_obj.update_state(
      y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))

  train_op = None
  if training:
    # Upgrade to tf.keras.optimizers.
    optimizer = tf.keras.optimizers.Adam()
    # Manually assign tf.compat.v1.global_step variable to optimizer.iterations
    # to make tf.compat.v1.train.global_step increased correctly.
    # This assignment is a must for any `tf.train.SessionRunHook` specified in
    # estimator, as SessionRunHooks rely on global step.
    optimizer.iterations = tf.compat.v1.train.get_or_create_global_step()
    # Get both the unconditional updates (the None part)
    # and the input-conditional updates (the features part).
    update_ops = model.get_updates_for(None) + model.get_updates_for(features)
    # Compute the minimize_op.
    minimize_op = optimizer.get_updates(
        total_loss,
        model.trainable_variables)[0]
    train_op = tf.group(minimize_op, *update_ops)

  return tf.estimator.EstimatorSpec(
    mode=mode,
    predictions=predictions,
    loss=total_loss,
    train_op=train_op,
    eval_metric_ops={'Accuracy': accuracy_obj})

# Create the Estimator and train.
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpomveromc
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpomveromc', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps None or save_checkpoints_secs 600.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
2021-07-19 23:37:53.371110: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:53.371633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:05.0 name: NVIDIA Tesla V100-SXM2-16GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 15.78GiB deviceMemoryBandwidth: 836.37GiB/s
2021-07-19 23:37:53.371845: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:53.372311: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:53.372679: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-19 23:37:53.372742: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-19 23:37:53.372779: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-07-19 23:37:53.372790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-07-19 23:37:53.372939: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:53.373380: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-19 23:37:53.373693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14646 MB memory) -> physical GPU (device: 0, name: NVIDIA Tesla V100-SXM2-16GB, pci bus id: 0000:00:05.0, compute capability: 7.0)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpomveromc/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 2.874814, step = 0
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 25...
INFO:tensorflow:Saving checkpoints for 25 into /tmp/tmpomveromc/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 25...
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-07-19T23:37:56
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmpomveromc/model.ckpt-25
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [1/5]
INFO:tensorflow:Evaluation [2/5]
INFO:tensorflow:Evaluation [3/5]
INFO:tensorflow:Evaluation [4/5]
INFO:tensorflow:Evaluation [5/5]
INFO:tensorflow:Inference Time : 1.04574s
2021-07-19 23:37:57.852422: W tensorflow/core/kernels/data/cache_dataset_ops.cc:768] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
INFO:tensorflow:Finished evaluation at 2021-07-19-23:37:57
INFO:tensorflow:Saving dict for global step 25: Accuracy = 0.790625, global_step = 25, loss = 1.4257433
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 25: /tmp/tmpomveromc/model.ckpt-25
INFO:tensorflow:Loss for final step: 0.42627147.
({'Accuracy': 0.790625, 'loss': 1.4257433, 'global_step': 25}, [])

Premade Estimators

Premade Estimators in the tf.estimator.DNN*, tf.estimator.Linear*, and tf.estimator.DNNLinearCombined* families are still supported in the TensorFlow 2.x API. However, some arguments have changed:

  1. input_layer_partitioner: removed in v2.
  2. loss_reduction: updated to tf.keras.losses.Reduction instead of tf.compat.v1.losses.Reduction. Its default value has also changed from tf.compat.v1.losses.Reduction.SUM to tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE.
  3. optimizer, dnn_optimizer, and linear_optimizer: these arguments have been updated to tf.keras.optimizers instead of tf.compat.v1.train.Optimizer.

To migrate the above changes:

  1. No migration is needed for input_layer_partitioner, since the distribution strategy handles it automatically in TensorFlow 2.x.
  2. For loss_reduction, check tf.keras.losses.Reduction for the supported options.
  3. For the optimizer arguments:
    • If you do not: 1) pass the optimizer, dnn_optimizer, or linear_optimizer argument, or 2) specify the optimizer argument as a string in your code, then you don't need to change anything, because tf.keras.optimizers is used by default.
    • Otherwise, you need to update it from tf.compat.v1.train.Optimizer to its corresponding tf.keras.optimizers equivalent.
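As a small illustration of the last point (the Adam optimizer and learning rate here are examples, not taken from this guide), a v1 optimizer argument and its tf.keras.optimizers counterpart might look like:

```python
import tensorflow as tf

# TensorFlow 1.x style (no longer accepted by premade Estimators in 2.x):
# optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.01)

# TensorFlow 2.x: pass the corresponding tf.keras.optimizers instance instead.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
```

You would then pass this instance to the Estimator's optimizer (or dnn_optimizer/linear_optimizer) argument.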

Checkpoint converter

Migrating to keras.optimizers will break checkpoints saved with TensorFlow 1.x, because tf.keras.optimizers generates a different set of variables to be saved in checkpoints. To make an old checkpoint reusable after your migration to TensorFlow 2.x, try the checkpoint converter tool.

 curl -O https://raw.githubusercontent.com/tensorflow/estimator/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14889  100 14889    0     0  60771      0 --:--:-- --:--:-- --:--:-- 60771

The tool has built-in help:

 python checkpoint_converter.py -h
2021-07-19 23:37:58.805973: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
usage: checkpoint_converter.py [-h]
                               {dnn,linear,combined} source_checkpoint
                               source_graph target_checkpoint

positional arguments:
  {dnn,linear,combined}
                        The type of estimator to be converted. So far, the
                        checkpoint converter only supports Canned Estimator.
                        So the allowed types include linear, dnn and combined.
  source_checkpoint     Path to source checkpoint file to be read in.
  source_graph          Path to source graph file to be read in.
  target_checkpoint     Path to checkpoint file to be written out.

optional arguments:
  -h, --help            show this help message and exit

TensorShape

This class was simplified to hold ints, instead of tf.compat.v1.Dimension objects, so there is no need to call .value to get an int.

Individual tf.compat.v1.Dimension objects are still accessible from tf.TensorShape.dims.
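For example, a quick sketch of reaching a Dimension object through .dims (the shape here is illustrative):

```python
import tensorflow as tf

shape = tf.TensorShape([16, None, 256])
# .dims still yields tf.compat.v1.Dimension objects when you need them.
dim = shape.dims[0]
print(dim.value)
```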

The following demonstrates the differences between TensorFlow 1.x and TensorFlow 2.x.

# Create a shape and choose an index
i = 0
shape = tf.TensorShape([16, None, 256])
shape
TensorShape([16, None, 256])

If you had this in TensorFlow 1.x:

value = shape[i].value

Then do this in TensorFlow 2.x:

value = shape[i]
value
16

If you had this in TensorFlow 1.x:

for dim in shape:
    value = dim.value
    print(value)

Then do this in TensorFlow 2.x:

for value in shape:
  print(value)
16
None
256

If you had this in TensorFlow 1.x (or used any other Dimension method):

dim = shape[i]
dim.assert_is_compatible_with(other_dim)

Then do this in TensorFlow 2.x:

other_dim = 16
Dimension = tf.compat.v1.Dimension

if shape.rank is None:
  dim = Dimension(None)
else:
  dim = shape.dims[i]
dim.is_compatible_with(other_dim) # or any other dimension method
True
shape = tf.TensorShape(None)

if shape:
  dim = shape.dims[i]
  dim.is_compatible_with(other_dim) # or any other dimension method

The boolean value of a tf.TensorShape is True if the rank is known, False otherwise.

print(bool(tf.TensorShape([])))      # Scalar
print(bool(tf.TensorShape([0])))     # 0-length vector
print(bool(tf.TensorShape([1])))     # 1-length vector
print(bool(tf.TensorShape([None])))  # Unknown-length vector
print(bool(tf.TensorShape([1, 10, 100])))       # 3D tensor
print(bool(tf.TensorShape([None, None, None]))) # 3D tensor with no known dimensions
print()
print(bool(tf.TensorShape(None)))  # A tensor with unknown rank.
True
True
True
True
True
True

False

Other changes

  • Remove tf.colocate_with: TensorFlow's device placement algorithms have improved significantly, so this should no longer be necessary. If removing it causes a performance degradation, please file a bug.

  • Replace v1.ConfigProto usage with the equivalent functions from tf.config.
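As a rough sketch of that last mapping (the thread count is illustrative, and these setters must be called before TensorFlow initializes its runtime context):

```python
import tensorflow as tf

# TensorFlow 1.x style:
# config = tf.compat.v1.ConfigProto(inter_op_parallelism_threads=2)
# session = tf.compat.v1.Session(config=config)

# TensorFlow 2.x: configure the runtime directly via tf.config.
tf.config.threading.set_inter_op_parallelism_threads(2)
print(tf.config.threading.get_inter_op_parallelism_threads())
```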

Conclusion

The overall process is:

  1. Run the upgrade script.
  2. Remove contrib symbols.
  3. Switch your models to an object-oriented style (Keras).
  4. Use tf.keras or tf.estimator training and evaluation loops where you can.
  5. Otherwise, use custom loops, but make sure to avoid sessions and collections.
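As a minimal, hypothetical sketch of step 5 (the model and data are made up for illustration), a TensorFlow 2.x custom loop avoids sessions and collections entirely:

```python
import tensorflow as tf

# A TensorFlow 2.x custom training loop: no sessions, no collections --
# just eager tensors, a GradientTape, and a Keras optimizer.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

x = tf.random.normal([8, 4])   # made-up inputs, for illustration only
y = tf.random.normal([8, 1])   # made-up targets

for step in range(3):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```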

It takes some work to convert code to idiomatic TensorFlow 2.x, but each change results in:

  • Fewer lines of code.
  • More clarity and simplicity.
  • Easier debugging.