Migrate your TensorFlow 1 code to TensorFlow 2


This guide is intended for users of low-level TensorFlow APIs. If you are using the high-level APIs (tf.keras), there may be little or no action you need to take to make your code fully TensorFlow 2.x compatible:

It is still possible to run 1.x code, unmodified (except for contrib), in TensorFlow 2.x:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

However, this does not let you take advantage of many of the improvements made in TensorFlow 2.x. This guide will help you upgrade your code, making it simpler, more performant, and easier to maintain.

Automatic conversion script

The first step, before attempting to implement the changes described in this guide, is to try running the upgrade script.

This will do an initial pass at upgrading your code to TensorFlow 2.x, but it can't make your code idiomatic to v2. Your code may still make use of tf.compat.v1 endpoints to access placeholders, sessions, collections, and other 1.x-style functionality.
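For example, assuming a single script named model.py (the file names here are placeholders), an invocation looks like this; tf_upgrade_v2 is installed with the TensorFlow pip package:

# File names are placeholders for illustration.
tf_upgrade_v2 --infile model.py --outfile model_v2.py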

Top-level behavioral changes

If your code works in TensorFlow 2.x using tf.compat.v1.disable_v2_behavior, there are still global behavioral changes you may need to address. The major changes are:

  • Eager execution, v1.enable_eager_execution(): Any code that implicitly uses a tf.Graph will fail. Be sure to wrap this code in a with tf.Graph().as_default() context.

  • Resource variables, v1.enable_resource_variables(): Some code may depend on non-deterministic behavior enabled by TensorFlow reference variables. Resource variables are locked while being written to, and so provide more intuitive consistency guarantees.

    • This may change behavior in edge cases.
    • This may create extra copies, and can result in higher memory usage.
    • This can be disabled by passing use_resource=False to the tf.Variable constructor.
  • Tensor shapes, v1.enable_v2_tensorshape(): TensorFlow 2.x simplifies the behavior of tensor shapes. Instead of t.shape[0].value you can simply say t.shape[0] (see the short sketch after this list). These changes should be small, and it makes sense to fix them right away. Refer to the TensorShape section for examples.

  • Control flow, v1.enable_control_flow_v2(): The TensorFlow 2.x control flow implementation has been simplified, and so produces different graph representations. Please file bugs for any issues.
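As a quick illustration of the new shape behavior, here is a minimal sketch (the tensor t is just an example):

t = tf.zeros(shape=(2, 3))

# TensorFlow 1.x: t.shape[0].value == 2
# TensorFlow 2.x: dimensions are plain Python integers (or None).
assert t.shape[0] == 2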

Make the code TensorFlow 2.x-native

This guide will walk through several examples of converting TensorFlow 1.x code to TensorFlow 2.x. These changes will let your code take advantage of performance optimizations and simplified API calls.

In each case, the pattern is:

1. Replace v1.Session.run calls

Every v1.Session.run call should be replaced by a Python function.

  • The feed_dict and v1.placeholders become function arguments.
  • The fetches become the function's return value.
  • During conversion, eager execution allows easy debugging with standard Python tools like pdb.

After that, add a tf.function decorator to make it run efficiently in graph mode. Check out the Autograph guide for more on how this works.
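As a minimal sketch of this pattern (the names x, y, and multiply are illustrative, not from any particular codebase):

# TensorFlow 1.x style:
#   outputs = session.run(fetches, feed_dict={in_x: x, in_y: y})
# TensorFlow 2.x style: placeholders become arguments, fetches become return values.
@tf.function
def multiply(x, y):
  return x * y

outputs = multiply(tf.constant(2.0), tf.constant(3.0))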

Note that:

  • Unlike v1.Session.run, a tf.function has a fixed return signature and always returns all outputs. If this causes performance problems, create two separate functions.

  • There is no need for tf.control_dependencies or similar operations: a tf.function behaves as if it were run in the order written. tf.Variable assignments and tf.asserts, for example, are executed automatically.

The Converting models section contains a working example of this conversion process.

2. Use Python objects to track variables and losses

In TensorFlow 2.x, all name-based variable tracking is strongly discouraged. Use Python objects to track variables.

Use tf.Variable instead of v1.get_variable.

Every v1.variable_scope should be converted to a Python object. Typically this will be one of: tf.keras.layers.Layer, tf.keras.Model, or tf.Module.

If you need to aggregate lists of variables (like tf.Graph.get_collection(tf.GraphKeys.VARIABLES)), use the .variables and .trainable_variables attributes of the Layer and Model objects.

These Layer and Model classes implement several other properties that remove the need for global collections. Their .losses property can be a replacement for using the tf.GraphKeys.LOSSES collection.
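For example, here is a minimal sketch (the layer size and regularizer strength are arbitrary) showing these attributes on a single Keras layer:

layer = tf.keras.layers.Dense(
    2, kernel_regularizer=tf.keras.regularizers.l2(0.04))
_ = layer(tf.ones(shape=(1, 3)))  # calling the layer builds its variables

print(len(layer.trainable_variables))  # kernel and bias -> 2
print(layer.losses)                    # the regularization loss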

Refer to the Keras guides for more details.

3. Upgrade your training loops

Use the highest-level API that works for your use case. Prefer tf.keras.Model.fit over building your own training loops.

These high-level functions manage a lot of the low-level details that might be easy to miss if you write your own training loop. For example, they automatically collect the regularization losses, and set the training=True argument when calling the model.
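A minimal sketch of this (assuming a Keras model named model and a tf.data.Dataset named dataset, both placeholders):

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(dataset, epochs=5)  # regularization losses and training=True are handled for you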

4. Upgrade your data input pipelines

Use tf.data datasets for data input. These objects are efficient, expressive, and integrate well with TensorFlow.

They can be passed directly to the tf.keras.Model.fit method.

model.fit(dataset, epochs=5)

They can be iterated over directly with standard Python:

for example_batch, label_batch in dataset:
    break

5. Migrate off compat.v1 symbols

The tf.compat.v1 module contains the complete TensorFlow 1.x API, with its original semantics.

The TensorFlow 2.x upgrade script will convert symbols to their v2 equivalents if such a conversion is safe, i.e. if it can determine that the behavior of the TensorFlow 2.x version is exactly equivalent (for instance, it will rename v1.arg_max to tf.argmax, since those are the same function).

After the upgrade script has been run on a piece of code, it is likely that there will be many mentions of compat.v1. It is worth going through the code and converting these manually to the v2 equivalent (it should be mentioned in the log if there is one).

Converting models

Low-level variables & operator execution

Examples of low-level API use include:

  • using variable scopes to control reuse
  • creating variables with v1.get_variable
  • accessing collections explicitly
  • accessing collections implicitly with methods like v1.global_variables or v1.losses.get_regularization_loss
  • using v1.placeholder to set up graph inputs
  • executing graphs with Session.run
  • initializing variables manually

Before converting

Here is what these patterns may look like in code using TensorFlow 1.x.

import tensorflow as tf
import tensorflow.compat.v1 as v1

import tensorflow_datasets as tfds
g = v1.Graph()

with g.as_default():
  in_a = v1.placeholder(dtype=v1.float32, shape=(2))
  in_b = v1.placeholder(dtype=v1.float32, shape=(2))

  def forward(x):
    with v1.variable_scope("matmul", reuse=v1.AUTO_REUSE):
      W = v1.get_variable("W", initializer=v1.ones(shape=(2,2)),
                          regularizer=lambda x:tf.reduce_mean(x**2))
      b = v1.get_variable("b", initializer=v1.zeros(shape=(2)))
      return W * x + b

  out_a = forward(in_a)
  out_b = forward(in_b)
  reg_loss=v1.losses.get_regularization_loss(scope="matmul")

with v1.Session(graph=g) as sess:
  sess.run(v1.global_variables_initializer())
  outs = sess.run([out_a, out_b, reg_loss],
                feed_dict={in_a: [1, 0], in_b: [0, 1]})

print(outs[0])
print()
print(outs[1])
print()
print(outs[2])
[[1. 0.]
 [1. 0.]]

[[0. 1.]
 [0. 1.]]

1.0

After converting

In the converted code:

  • The variables are local Python objects.
  • The forward function still defines the calculation.
  • The Session.run call is replaced with a call to forward.
  • The optional tf.function decorator can be added for performance.
  • The regularization is calculated manually, without referring to any global collection.
  • There is no use of sessions or placeholders.
W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")

@tf.function
def forward(x):
  return W * x + b

out_a = forward([1,0])
print(out_a)
tf.Tensor(
[[1. 0.]
 [1. 0.]], shape=(2, 2), dtype=float32)
out_b = forward([0,1])

regularizer = tf.keras.regularizers.l2(0.04)
reg_loss=regularizer(W)

Models based on tf.layers

The v1.layers module is used to contain layer functions that relied on v1.variable_scope to define and reuse variables.

Before converting

def model(x, training, scope='model'):
  with v1.variable_scope(scope, reuse=v1.AUTO_REUSE):
    x = v1.layers.conv2d(x, 32, 3, activation=v1.nn.relu,
          kernel_regularizer=lambda x:0.004*tf.reduce_mean(x**2))
    x = v1.layers.max_pooling2d(x, (2, 2), 1)
    x = v1.layers.flatten(x)
    x = v1.layers.dropout(x, 0.1, training=training)
    x = v1.layers.dense(x, 64, activation=v1.nn.relu)
    x = v1.layers.batch_normalization(x, training=training)
    x = v1.layers.dense(x, 10)
    return x
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))

train_out = model(train_data, training=True)
test_out = model(test_data, training=False)

print(train_out)
print()
print(test_out)
tf.Tensor([[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]], shape=(1, 10), dtype=float32)

tf.Tensor(
[[ 0.04853132 -0.08974641 -0.32679698  0.07017353  0.12982666 -0.2153313
  -0.09793851  0.10957378  0.01823931  0.00898573]], shape=(1, 10), dtype=float32)

After converting

Most arguments stayed the same. But notice the differences:

  • The training argument is passed to each layer by the model when it runs.
  • The first argument to the original model function (the input x) is gone. This is because object layers separate building the model from calling the model.

Also note that:

  • If you were using regularizers or initializers from tf.contrib, these have more argument changes than others.
  • The code no longer writes to collections, so functions like v1.losses.get_regularization_loss will no longer return these values, possibly breaking your training loops.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.04),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))
train_out = model(train_data, training=True)
print(train_out)
tf.Tensor([[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]], shape=(1, 10), dtype=float32)
test_out = model(test_data, training=False)
print(test_out)
tf.Tensor(
[[-0.06252427  0.30122417 -0.18610534 -0.04890637 -0.01496555  0.41607457
   0.24905115  0.014429   -0.12719882 -0.22354674]], shape=(1, 10), dtype=float32)
# Here are all the trainable variables
len(model.trainable_variables)
8
# Here is the regularization loss
model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=0.07443664>]

Mixed variables & v1.layers

Existing code often mixes lower-level TensorFlow 1.x variables and operations with higher-level v1.layers.

Before converting

def model(x, training, scope='model'):
  with v1.variable_scope(scope, reuse=v1.AUTO_REUSE):
    W = v1.get_variable(
      "W", dtype=v1.float32,
      initializer=v1.ones(shape=x.shape),
      regularizer=lambda x:0.004*tf.reduce_mean(x**2),
      trainable=True)
    if training:
      x = x + W
    else:
      x = x + W * 0.5
    x = v1.layers.conv2d(x, 32, 3, activation=tf.nn.relu)
    x = v1.layers.max_pooling2d(x, (2, 2), 1)
    x = v1.layers.flatten(x)
    return x

train_out = model(train_data, training=True)
test_out = model(test_data, training=False)

After converting

To convert this code, follow the pattern of mapping layers to layers as in the previous example.

The general pattern is:

  • Collect layer parameters in __init__.
  • Build the variables in build.
  • Execute the calculations in call, and return the result.

The v1.variable_scope is effectively a layer of its own. So rewrite it as a tf.keras.layers.Layer. Check out the Making new layers and models via subclassing guide for details.

# Create a custom layer for part of the model
class CustomLayer(tf.keras.layers.Layer):
  def __init__(self, *args, **kwargs):
    super(CustomLayer, self).__init__(*args, **kwargs)

  def build(self, input_shape):
    self.w = self.add_weight(
        shape=input_shape[1:],
        dtype=tf.float32,
        initializer=tf.keras.initializers.ones(),
        regularizer=tf.keras.regularizers.l2(0.02),
        trainable=True)

  # Call method will sometimes get used in graph mode,
  # training will get turned into a tensor
  @tf.function
  def call(self, inputs, training=None):
    if training:
      return inputs + self.w
    else:
      return inputs + self.w * 0.5
custom_layer = CustomLayer()
print(custom_layer([1]).numpy())
print(custom_layer([1], training=True).numpy())
[1.5]
[2.]
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))

# Build the model including the custom layer
model = tf.keras.Sequential([
    CustomLayer(input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
])

train_out = model(train_data, training=True)
test_out = model(test_data, training=False)

Some things to note:

  • Subclassed Keras models and layers need to run in both v1 graphs (with no automatic control dependencies) and in eager mode:

    • Wrap the call in a tf.function to get autograph and automatic control dependencies.
  • Don't forget to accept a training argument to call:

    • Sometimes it is a tf.Tensor
    • Sometimes it is a Python boolean
  • Create model variables in the constructor or Model.build using self.add_weight:

    • In Model.build you have access to the input shape, so you can create weights with a matching shape
    • Using tf.keras.layers.Layer.add_weight allows Keras to track variables and regularization losses
  • Don't keep tf.Tensors in your objects:

    • They might get created either in a tf.function or in the eager context, and these tensors behave differently
    • Use tf.Variables for state; they are always usable from both contexts
    • tf.Tensors are only for intermediate values

A note on Slim and contrib.layers

A large amount of older TensorFlow 1.x code uses the Slim library, which was packaged with TensorFlow 1.x as tf.contrib.layers. As a contrib module, this is no longer available in TensorFlow 2.x, even in tf.compat.v1. Converting code using Slim to TensorFlow 2.x is more involved than converting repositories that use v1.layers. In fact, it may make sense to convert your Slim code to v1.layers first, then convert to Keras.

  • Remove arg_scopes; all args need to be explicit.
  • If you use them, split normalizer_fn and activation_fn into their own layers (see the sketch after this list).
  • Separable conv layers map to one or more different Keras layers (depthwise, pointwise, and separable Keras layers).
  • Slim and v1.layers have different argument names and default values.
  • Some arguments have different scales.
  • If you use Slim pre-trained models, try out Keras pre-trained models from tf.keras.applications or TF Hub's TensorFlow 2.x SavedModels exported from the original Slim code.
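As a minimal sketch of splitting normalizer_fn and activation_fn into their own layers (the filter count and kernel size are arbitrary, and the Slim call is shown only as a comment):

# Slim style (for reference):
#   net = slim.conv2d(net, 32, [3, 3],
#                     normalizer_fn=slim.batch_norm,
#                     activation_fn=tf.nn.relu)

# Keras equivalent: separate layers, all arguments explicit.
layers = [
    tf.keras.layers.Conv2D(32, 3, use_bias=False),  # bias is redundant before batch norm
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Activation('relu'),
]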

Some tf.contrib layers might not have been moved to core TensorFlow, but have instead been moved to the TensorFlow Addons package.

Training

There are many ways to feed data to a tf.keras model. They will accept Python generators and Numpy arrays as input.

The recommended way to feed data to a model is to use the tf.data package, which contains a collection of high-performance classes for manipulating data.

If you are still using tf.queue, these are now only supported as data structures, not as input pipelines.

Using TensorFlow Datasets

The TensorFlow Datasets package (tfds) contains utilities for loading predefined datasets as tf.data.Dataset objects.

For this example, load the MNIST dataset using tfds:

datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']

Then prepare the data for training:

  • Re-scale each image.
  • Shuffle the order of the examples.
  • Collect batches of images and labels.
BUFFER_SIZE = 10 # Use a much larger value for real code
BATCH_SIZE = 64
NUM_EPOCHS = 5


def scale(image, label):
  image = tf.cast(image, tf.float32)
  image /= 255

  return image, label

To keep the example short, trim the dataset to only return 5 batches:

train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_data = mnist_test.map(scale).batch(BATCH_SIZE)

STEPS_PER_EPOCH = 5

train_data = train_data.take(STEPS_PER_EPOCH)
test_data = test_data.take(STEPS_PER_EPOCH)
image_batch, label_batch = next(iter(train_data))

Use Keras training loops

If you don't need low-level control of your training process, using Keras's built-in fit, evaluate, and predict methods is recommended. These methods provide a uniform interface to train the model regardless of the implementation (sequential, functional, or subclassed).

The advantages of these methods include:

  • They accept Numpy arrays, Python generators, and tf.data.Datasets.
  • They apply regularization and activation losses automatically.
  • They support tf.distribute for multi-device training.
  • They support arbitrary callables as losses and metrics.
  • They support callbacks like tf.keras.callbacks.TensorBoard, and custom callbacks.
  • They are performant, automatically using TensorFlow graphs.

Here is an example of training a model using a Dataset. (For details of how this works, check out the tutorials section.)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

# Model is the full model w/o custom layers
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(train_data, epochs=NUM_EPOCHS)
loss, acc = model.evaluate(test_data)

print("Loss {}, Accuracy {}".format(loss, acc))
Epoch 1/5
5/5 [==============================] - 2s 8ms/step - loss: 1.5874 - accuracy: 0.4719
Epoch 2/5
5/5 [==============================] - 0s 5ms/step - loss: 0.4435 - accuracy: 0.9094
Epoch 3/5
5/5 [==============================] - 0s 6ms/step - loss: 0.2764 - accuracy: 0.9594
Epoch 4/5
5/5 [==============================] - 0s 5ms/step - loss: 0.1889 - accuracy: 0.9844
Epoch 5/5
5/5 [==============================] - 1s 6ms/step - loss: 0.1504 - accuracy: 0.9906
5/5 [==============================] - 1s 3ms/step - loss: 1.6299 - accuracy: 0.7031
Loss 1.6299388408660889, Accuracy 0.703125

Write your own loop

If the Keras model's training step works for you, but you need more control outside that step, consider using the tf.keras.Model.train_on_batch method in your own data-iteration loop.

Remember: many things can be implemented as a tf.keras.callbacks.Callback.

This method has many of the advantages of the methods mentioned in the previous section, but gives the user control of the outer loop.

You can also use tf.keras.Model.test_on_batch or tf.keras.Model.evaluate to check performance during training.

To continue training the above model:

# Model is the full model w/o custom layers
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

for epoch in range(NUM_EPOCHS):
  # Reset the metric accumulators
  model.reset_metrics()

  for image_batch, label_batch in train_data:
    result = model.train_on_batch(image_batch, label_batch)
    metrics_names = model.metrics_names
    print("train: ",
          "{}: {:.3f}".format(metrics_names[0], result[0]),
          "{}: {:.3f}".format(metrics_names[1], result[1]))
  for image_batch, label_batch in test_data:
    result = model.test_on_batch(image_batch, label_batch,
                                 # Return accumulated metrics
                                 reset_metrics=False)
  metrics_names = model.metrics_names
  print("\neval: ",
        "{}: {:.3f}".format(metrics_names[0], result[0]),
        "{}: {:.3f}".format(metrics_names[1], result[1]))
train:  loss: 0.131 accuracy: 1.000
train:  loss: 0.179 accuracy: 0.969
train:  loss: 0.117 accuracy: 0.984
train:  loss: 0.187 accuracy: 0.969
train:  loss: 0.168 accuracy: 0.969

eval:  loss: 1.655 accuracy: 0.703
train:  loss: 0.083 accuracy: 1.000
train:  loss: 0.080 accuracy: 1.000
train:  loss: 0.099 accuracy: 0.984
train:  loss: 0.088 accuracy: 1.000
train:  loss: 0.084 accuracy: 1.000

eval:  loss: 1.645 accuracy: 0.759
train:  loss: 0.066 accuracy: 1.000
train:  loss: 0.070 accuracy: 1.000
train:  loss: 0.062 accuracy: 1.000
train:  loss: 0.067 accuracy: 1.000
train:  loss: 0.061 accuracy: 1.000

eval:  loss: 1.609 accuracy: 0.819
train:  loss: 0.056 accuracy: 1.000
train:  loss: 0.053 accuracy: 1.000
train:  loss: 0.048 accuracy: 1.000
train:  loss: 0.057 accuracy: 1.000
train:  loss: 0.069 accuracy: 0.984

eval:  loss: 1.568 accuracy: 0.825
train:  loss: 0.048 accuracy: 1.000
train:  loss: 0.048 accuracy: 1.000
train:  loss: 0.044 accuracy: 1.000
train:  loss: 0.045 accuracy: 1.000
train:  loss: 0.045 accuracy: 1.000

eval:  loss: 1.531 accuracy: 0.841

Customize the training step

If you need more flexibility and control, you can have it by implementing your own training loop. There are three steps:

  1. Iterate over a Python generator or tf.data.Dataset to get batches of examples.
  2. Use tf.GradientTape to collect gradients.
  3. Use one of the tf.keras.optimizers to apply weight updates to the model's variables.

Remember:

  • Always include a training argument on the call method of subclassed layers and models.
  • Make sure to call the model with the training argument set correctly.
  • Depending on usage, model variables may not exist until the model is run on a batch of data.
  • You need to manually handle things like regularization losses for the model.

Note the simplifications relative to v1:

  • There is no need to run variable initializers. Variables are initialized on creation.
  • There is no need to add manual control dependencies. Even in tf.function, operations act as in eager mode.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
])

optimizer = tf.keras.optimizers.Adam(0.001)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss=tf.math.add_n(model.losses)
    pred_loss=loss_fn(labels, predictions)
    total_loss=pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

for epoch in range(NUM_EPOCHS):
  for inputs, labels in train_data:
    train_step(inputs, labels)
  print("Finished epoch", epoch)
Finished epoch 0
Finished epoch 1
Finished epoch 2
Finished epoch 3
Finished epoch 4

New-style metrics and losses

In TensorFlow 2.x, metrics and losses are objects. These work both eagerly and in tf.functions.

A loss object is callable, and expects (y_true, y_pred) as arguments:

cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
cce([[1, 0]], [[-1.0,3.0]]).numpy()
4.01815

A metric object has the following methods:

  • Metric.update_state(): add new observations.
  • Metric.result(): get the current result of the metric, given the observed values.
  • Metric.reset_states(): clear all observations.

The object itself is callable. Calling it updates the state with new observations, as with update_state, and returns the new result of the metric.
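A minimal sketch of these methods, using the built-in Mean metric (the values fed in are arbitrary):

m = tf.keras.metrics.Mean()
m.update_state(3.0)
m.update_state(5.0)
print(m.result().numpy())  # 4.0
m.reset_states()
print(m(7.0).numpy())      # calling updates the state and returns the result: 7.0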

You don't have to manually initialize a metric's variables, and because TensorFlow 2.x has automatic control dependencies, you don't need to worry about those either.

The code below uses a metric to keep track of the mean loss observed within a custom training loop.

# Create the metrics
loss_metric = tf.keras.metrics.Mean(name='train_loss')
accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss=tf.math.add_n(model.losses)
    pred_loss=loss_fn(labels, predictions)
    total_loss=pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
  # Update the metrics
  loss_metric.update_state(total_loss)
  accuracy_metric.update_state(labels, predictions)


for epoch in range(NUM_EPOCHS):
  # Reset the metrics
  loss_metric.reset_states()
  accuracy_metric.reset_states()

  for inputs, labels in train_data:
    train_step(inputs, labels)
  # Get the metric results
  mean_loss=loss_metric.result()
  mean_accuracy = accuracy_metric.result()

  print('Epoch: ', epoch)
  print('  loss:     {:.3f}'.format(mean_loss))
  print('  accuracy: {:.3f}'.format(mean_accuracy))
Epoch:  0
  loss:     0.172
  accuracy: 0.988
Epoch:  1
  loss:     0.143
  accuracy: 0.997
Epoch:  2
  loss:     0.126
  accuracy: 0.997
Epoch:  3
  loss:     0.109
  accuracy: 1.000
Epoch:  4
  loss:     0.092
  accuracy: 1.000

Keras metric names

In TensorFlow 2.x, Keras models are more consistent about handling metric names.

Now when you pass a string in the list of metrics, that exact string is used as the metric's name. These names are visible in the history object returned by model.fit, and in the logs passed to keras.callbacks.

model.compile(
    optimizer = tf.keras.optimizers.Adam(0.001),
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics = ['acc', 'accuracy', tf.keras.metrics.SparseCategoricalAccuracy(name="my_accuracy")])
history = model.fit(train_data)
5/5 [==============================] - 1s 6ms/step - loss: 0.1042 - acc: 0.9969 - accuracy: 0.9969 - my_accuracy: 0.9969
history.history.keys()
dict_keys(['loss', 'acc', 'accuracy', 'my_accuracy'])

This is different from previous versions, where passing metrics=["accuracy"] would result in dict_keys(['loss', 'acc']).

Keras optimizers

The optimizers in v1.train, like v1.train.AdamOptimizer and v1.train.GradientDescentOptimizer, have equivalents in tf.keras.optimizers.

Converting v1.train to keras.optimizers

Here are things to keep in mind when converting your optimizers:

  • Upgrading your optimizers may make old checkpoints incompatible.
  • v1.train.GradientDescentOptimizer can be directly replaced by tf.keras.optimizers.SGD.
  • v1.train.MomentumOptimizer can be directly replaced by the SGD optimizer using the momentum argument: tf.keras.optimizers.SGD(..., momentum=...).
  • v1.train.AdamOptimizer can be converted to use tf.keras.optimizers.Adam. The beta1 and beta2 arguments have been renamed to beta_1 and beta_2.
  • v1.train.RMSPropOptimizer can be converted to tf.keras.optimizers.RMSprop. The decay argument has been renamed to rho.
  • v1.train.AdadeltaOptimizer can be converted directly to tf.keras.optimizers.Adadelta.
  • v1.train.AdagradOptimizer can be converted directly to tf.keras.optimizers.Adagrad.
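As a minimal sketch of one such conversion (the learning rate and momentum values are placeholders):

# TensorFlow 1.x:
#   optimizer = v1.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
# TensorFlow 2.x equivalent:
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)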

New defaults for some tf.keras.optimizers

There are no changes for optimizers.SGD, optimizers.Adam, or optimizers.RMSprop.

The following default learning rates have changed:

  • optimizers.Adagrad from 0.01 to 0.001
  • optimizers.Adadelta from 1.0 to 0.001
  • optimizers.Adamax from 0.002 to 0.001
  • optimizers.Nadam from 0.002 to 0.001

TensorBoard

TensorFlow 2.x includes significant changes to the tf.summary API used to write summary data for visualization in TensorBoard. For a general introduction to the new tf.summary, there are several tutorials available that use the TensorFlow 2.x API. This includes a TensorBoard TensorFlow 2.x migration guide.

Saving and loading

Checkpoint compatibility

TensorFlow 2.x uses object-based checkpoints.

Old-style name-based checkpoints can still be loaded, if you're careful. The code conversion process may result in variable name changes, but there are workarounds.

The simplest approach is to line up the names of the new model with the names in the checkpoint:

  • Variables still all have a name argument you can set.
  • Keras models also take a name argument, which they set as the prefix for their variables.
  • The v1.name_scope function can be used to set variable name prefixes. This is very different from tf.variable_scope. It only affects names, and doesn't track variables and reuse.

If that doesn't work for your use case, try the v1.train.init_from_checkpoint function. It takes an assignment_map argument, which specifies the mapping from old names to new names.
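A minimal sketch of that call (the checkpoint path and scope names are placeholders):

v1.train.init_from_checkpoint(
    '/path/to/old_checkpoint',                    # checkpoint directory or file
    assignment_map={'old_scope/': 'new_scope/'})  # map old name prefixes to new ones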

The TensorFlow Estimator repository includes a conversion tool to upgrade the checkpoints for premade estimators from TensorFlow 1.x to 2.0. It may serve as an example of how to build a tool for a similar use case.

Saved models compatibility

There are no significant compatibility concerns for saved models.

  • TensorFlow 1.x saved_models work in TensorFlow 2.x.
  • TensorFlow 2.x saved_models work in TensorFlow 1.x if all the ops are supported.

A Graph.pb or Graph.pbtxt

There is no straightforward way to upgrade a raw Graph.pb file to TensorFlow 2.x. Your best bet is to upgrade the code that generated the file.

But if you have a "frozen graph" (a tf.Graph where the variables have been turned into constants), then it is possible to convert this to a concrete_function using v1.wrap_function:

def wrap_frozen_graph(graph_def, inputs, outputs):
  def _imports_graph_def():
    tf.compat.v1.import_graph_def(graph_def, name="")
  wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
  import_graph = wrapped_import.graph
  return wrapped_import.prune(
      tf.nest.map_structure(import_graph.as_graph_element, inputs),
      tf.nest.map_structure(import_graph.as_graph_element, outputs))

For example, here is a frozen graph for Inception v1, from 2016:

path = tf.keras.utils.get_file(
    'inception_v1_2016_08_28_frozen.pb',
    'http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz',
    untar=True)
Downloading data from http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz
24698880/24695710 [==============================] - 1s 0us/step

Load the tf.GraphDef:

graph_def = tf.compat.v1.GraphDef()
loaded = graph_def.ParseFromString(open(path,'rb').read())

Wrap it in a concrete_function:

inception_func = wrap_frozen_graph(
    graph_def, inputs='input:0',
    outputs='InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu:0')

Pass it a tensor as input:

input_img = tf.ones([1,224,224,3], dtype=tf.float32)
inception_func(input_img).shape
TensorShape([1, 28, 28, 96])

Estimators

Training with estimators

Estimators are supported in TensorFlow 2.x.

When you use estimators, you can use input_fn, tf.estimator.TrainSpec, and tf.estimator.EvalSpec from TensorFlow 1.x.

Here is an example using input_fn, together with train and evaluate specs.

Creating the input_fn and train/eval specs

# Define the estimator's input_fn
def input_fn():
  datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
  mnist_train, mnist_test = datasets['train'], datasets['test']

  BUFFER_SIZE = 10000
  BATCH_SIZE = 64

  def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255

    return image, label[..., tf.newaxis]

  train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
  return train_data.repeat()

# Define train and eval specs
train_spec = tf.estimator.TrainSpec(input_fn=input_fn,
                                    max_steps=STEPS_PER_EPOCH * NUM_EPOCHS)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn,
                                  steps=STEPS_PER_EPOCH)

Using a Keras model definition

There are some differences in how to construct your estimators in TensorFlow 2.x.

It is recommended that you define your model using Keras, then use the tf.keras.estimator.model_to_estimator utility to turn your model into an estimator. The code below shows how to use this utility when creating and training an estimator.

def make_model():
  return tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10)
  ])
model = make_model()

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

estimator = tf.keras.estimator.model_to_estimator(
  keras_model=model
)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpbhtumut0
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting from: /tmp/tmpbhtumut0/keras/keras_model.ckpt
INFO:tensorflow:Warm-started 8 variables.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpbhtumut0/model.ckpt.
INFO:tensorflow:loss = 3.1193407, step = 0
INFO:tensorflow:Saving checkpoints for 25 into /tmp/tmpbhtumut0/model.ckpt.
INFO:tensorflow:Starting evaluation at 2021-07-19T23:37:42
INFO:tensorflow:Evaluation [5/5]
INFO:tensorflow:Inference Time : 1.02146s
INFO:tensorflow:Finished evaluation at 2021-07-19-23:37:43
INFO:tensorflow:Saving dict for global step 25: accuracy = 0.634375, global_step = 25, loss = 1.493957
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 25: /tmp/tmpbhtumut0/model.ckpt-25
INFO:tensorflow:Loss for final step: 0.37796202.
({'accuracy': 0.634375, 'loss': 1.493957, 'global_step': 25}, [])

Using a custom model_fn

If you have an existing custom estimator model_fn that you need to maintain, you can convert your model_fn to use a Keras model.

However, for compatibility reasons, a custom model_fn will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies.

Custom model_fn with minimal changes

To make your custom model_fn work in TensorFlow 2.x with minimal changes to the existing code, you can use tf.compat.v1 symbols such as optimizers and metrics.

Using a Keras model in a custom model_fn is similar to using it in a custom training loop:

  • Set the training phase appropriately, based on the mode argument.
  • Explicitly pass the model's trainable_variables to the optimizer.

But there are important differences, relative to a custom loop:

  • Instead of using Model.losses, extract the losses using Model.get_losses_for.
  • Extract the model's updates using Model.get_updates_for. (Updates are ops that need to be applied to a model after each batch, such as the moving averages of mean and variance in a tf.keras.layers.BatchNormalization layer.)

The following code creates an estimator from a custom model_fn, illustrating all of these concerns.

def my_model_fn(features, labels, mode):
  model = make_model()

  optimizer = tf.compat.v1.train.AdamOptimizer()
  loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

  training = (mode == tf.estimator.ModeKeys.TRAIN)
  predictions = model(features, training=training)

  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss=loss_fn(labels, predictions) + tf.math.add_n(reg_losses)

  accuracy = tf.compat.v1.metrics.accuracy(labels=labels,
                                           predictions=tf.math.argmax(predictions, axis=1),
                                           name='acc_op')

  update_ops = model.get_updates_for(None) + model.get_updates_for(features)
  minimize_op = optimizer.minimize(
      total_loss,
      var_list=model.trainable_variables,
      global_step=tf.compat.v1.train.get_or_create_global_step())
  train_op = tf.group(minimize_op, update_ops)

  return tf.estimator.EstimatorSpec(
    mode=mode,
    predictions=predictions,
    loss=total_loss,
    train_op=train_op, eval_metric_ops={'accuracy': accuracy})

# Create the Estimator & Train
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpqiom6a5s
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpqiom6a5s/model.ckpt.
INFO:tensorflow:loss = 2.9167266, step = 0
INFO:tensorflow:Saving checkpoints for 25 into /tmp/tmpqiom6a5s/model.ckpt.
INFO:tensorflow:Starting evaluation at 2021-07-19T23:37:49
INFO:tensorflow:Evaluation [5/5]
INFO:tensorflow:Inference Time : 1.38362s
INFO:tensorflow:Finished evaluation at 2021-07-19-23:37:50
INFO:tensorflow:Saving dict for global step 25: accuracy = 0.70625, global_step = 25, loss = 1.6135181
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 25: /tmp/tmpqiom6a5s/model.ckpt-25
INFO:tensorflow:Loss for final step: 0.60315084.
({'accuracy': 0.70625, 'loss': 1.6135181, 'global_step': 25}, [])

Custom model_fn with TensorFlow 2.x symbols

If you want to get rid of all TensorFlow 1.x symbols and upgrade your custom model_fn to native TensorFlow 2.x, you need to update the optimizer and metrics to tf.keras.optimizers and tf.keras.metrics.

In the custom model_fn, besides the above changes, more upgrades need to be made:

  • Use tf.keras.optimizers instead of tf.compat.v1.train.Optimizer.
  • Explicitly pass the model's trainable_variables to the tf.keras.optimizers.
  • To compute the train_op/minimize_op, use Optimizer.get_updates if the loss is a scalar loss Tensor (not a callable); the first element in the returned list is the desired train_op/minimize_op. Use Optimizer.minimize if the loss is a callable.
  • Use tf.keras.metrics instead of tf.compat.v1.metrics for evaluation.

For the above example of my_model_fn, the migrated code with TensorFlow 2.x symbols is shown below:

def my_model_fn(features, labels, mode):
  model = make_model()

  training = (mode == tf.estimator.ModeKeys.TRAIN)
  loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
  predictions = model(features, training=training)

  # Get both the unconditional losses (the None part)
  # and the input-conditional losses (the features part).
  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss=loss_obj(labels, predictions) + tf.math.add_n(reg_losses)

  # Upgrade to tf.keras.metrics.
  accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')
  accuracy = accuracy_obj.update_state(
      y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))

  train_op = None
  if training:
    # Upgrade to tf.keras.optimizers.
    optimizer = tf.keras.optimizers.Adam()
    # Manually assign tf.compat.v1.global_step variable to optimizer.iterations
    # so that tf.compat.v1.train.global_step is increased correctly.
    # This assignment is a must for any `tf.train.SessionRunHook` specified in
    # the estimator, as SessionRunHooks rely on the global step.
    optimizer.iterations = tf.compat.v1.train.get_or_create_global_step()
    # Get both the unconditional updates (the None part)
    # and the input-conditional updates (the features part).
    update_ops = model.get_updates_for(None) + model.get_updates_for(features)
    # Compute the minimize_op.
    minimize_op = optimizer.get_updates(
        total_loss,
        model.trainable_variables)[0]
    train_op = tf.group(minimize_op, *update_ops)

  return tf.estimator.EstimatorSpec(
    mode=mode,
    predictions=predictions,
    loss=total_loss,
    train_op=train_op,
    eval_metric_ops={'Accuracy': accuracy_obj})

# Create the Estimator and train.
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpomveromc
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpomveromc/model.ckpt.
INFO:tensorflow:loss = 2.874814, step = 0
INFO:tensorflow:Saving checkpoints for 25 into /tmp/tmpomveromc/model.ckpt.
INFO:tensorflow:Starting evaluation at 2021-07-19T23:37:56
INFO:tensorflow:Evaluation [5/5]
INFO:tensorflow:Inference Time : 1.04574s
INFO:tensorflow:Finished evaluation at 2021-07-19-23:37:57
INFO:tensorflow:Saving dict for global step 25: Accuracy = 0.790625, global_step = 25, loss = 1.4257433
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 25: /tmp/tmpomveromc/model.ckpt-25
INFO:tensorflow:Loss for final step: 0.42627147.
({'Accuracy': 0.790625, 'loss': 1.4257433, 'global_step': 25}, [])

Premade Estimators

Premade Estimators in the family of tf.estimator.DNN*, tf.estimator.Linear*, and tf.estimator.DNNLinearCombined* are still supported in the TensorFlow 2.x API. However, some arguments have changed:

  1. input_layer_partitioner: Removed in v2.
  2. loss_reduction: Updated to tf.keras.losses.Reduction instead of tf.compat.v1.losses.Reduction. Its default value has also changed to tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE from tf.compat.v1.losses.Reduction.SUM.
  3. optimizer, dnn_optimizer, and linear_optimizer: These arguments have been updated to tf.keras.optimizers instead of tf.compat.v1.train.Optimizer.

To migrate the above changes:

  1. No migration is needed for input_layer_partitioner, since Distribution Strategy handles it automatically in TensorFlow 2.x.
  2. For loss_reduction, check tf.keras.losses.Reduction for the supported options.
  3. For the optimizer arguments (a constructor sketch follows this list):
    • If you do not: 1) pass in the optimizer, dnn_optimizer, or linear_optimizer argument, or 2) specify the optimizer argument as a string in your code, then you don't need to change anything, since tf.keras.optimizers is used by default.
    • Otherwise, you need to update it from tf.compat.v1.train.Optimizer to its corresponding tf.keras.optimizers.
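
For instance, here is a minimal sketch of constructing a premade estimator with the TensorFlow 2.x-style arguments; the feature column and hyperparameters are purely illustrative:

import tensorflow as tf

# A hypothetical feature column for 28x28 image inputs.
feature_columns = [tf.feature_column.numeric_column('x', shape=[28, 28])]

classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[64, 32],
    n_classes=10,
    # Pass a tf.keras optimizer instead of a tf.compat.v1.train.Optimizer.
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    # The new TensorFlow 2.x default, shown here explicitly.
    loss_reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)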

Checkpoint converter

Migrating to keras.optimizers will break checkpoints saved with TensorFlow 1.x, because tf.keras.optimizers generates a different set of variables to be saved in checkpoints. To make an old checkpoint reusable after your migration to TensorFlow 2.x, try the checkpoint converter tool.

 curl -O https://raw.githubusercontent.com/tensorflow/estimator/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14889  100 14889    0     0  60771      0 --:--:-- --:--:-- --:--:-- 60771

The tool has built-in help:

 python checkpoint_converter.py -h
2021-07-19 23:37:58.805973: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
usage: checkpoint_converter.py [-h]
                               {dnn,linear,combined} source_checkpoint
                               source_graph target_checkpoint

positional arguments:
  {dnn,linear,combined}
                        The type of estimator to be converted. So far, the
                        checkpoint converter only supports Canned Estimator.
                        So the allowed types include linear, dnn and combined.
  source_checkpoint     Path to source checkpoint file to be read in.
  source_graph          Path to source graph file to be read in.
  target_checkpoint     Path to checkpoint file to be written out.

optional arguments:
  -h, --help            show this help message and exit
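
Based on the help text above, a hypothetical invocation for a canned DNN estimator might look like the following (all paths are placeholders):

 python checkpoint_converter.py dnn /tmp/source.ckpt /tmp/source_graph.pbtxt /tmp/target.ckpt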

TensorShape

This class was simplified to hold ints, instead of tf.compat.v1.Dimension objects. So there is no need to call .value to get an int.

Individual tf.compat.v1.Dimension objects are still accessible from tf.TensorShape.dims, as the short sketch below shows.
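
For example, as a minimal sketch (the commented values are what this particular shape would produce):

import tensorflow as tf

shape = tf.TensorShape([16, None, 256])
print(shape.dims)           # [Dimension(16), Dimension(None), Dimension(256)]
print(shape.dims[0].value)  # 16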

The following demonstrates the differences between TensorFlow 1.x and TensorFlow 2.x.

# Create a shape and choose an index
i = 0
shape = tf.TensorShape([16, None, 256])
shape
TensorShape([16, None, 256])

If you had this in TensorFlow 1.x:

value = shape[i].value

Then do this in TensorFlow 2.x:

value = shape[i]
value
16

If you had this in TensorFlow 1.x:

for dim in shape:
    value = dim.value
    print(value)

Then do this in TensorFlow 2.x:

for value in shape:
  print(value)
16
None
256

If you had this in TensorFlow 1.x (or used any other dimension method):

dim = shape[i]
dim.assert_is_compatible_with(other_dim)

Then do this in TensorFlow 2.x:

other_dim = 16
Dimension = tf.compat.v1.Dimension

if shape.rank is None:
  dim = Dimension(None)
else:
  dim = shape.dims[i]
dim.is_compatible_with(other_dim) # or any other dimension method
True
shape = tf.TensorShape(None)

if shape:
  dim = shape.dims[i]
  dim.is_compatible_with(other_dim) # or any other dimension method

The boolean value of a tf.TensorShape is True if the rank is known, and False otherwise.

print(bool(tf.TensorShape([])))      # Scalar
print(bool(tf.TensorShape([0])))     # 0-length vector
print(bool(tf.TensorShape([1])))     # 1-length vector
print(bool(tf.TensorShape([None])))  # Unknown-length vector
print(bool(tf.TensorShape([1, 10, 100])))       # 3D tensor
print(bool(tf.TensorShape([None, None, None]))) # 3D tensor with no known dimensions
print()
print(bool(tf.TensorShape(None)))  # A tensor with unknown rank.
True
True
True
True
True
True

False

Other changes

  • Remove tf.colocate_with: TensorFlow's device placement algorithms have improved significantly, so this should no longer be necessary. If removing it causes a performance degradation, please file a bug.

  • Replace v1.ConfigProto usage with the equivalent functions from tf.config; see the sketch below.
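
For example, here is a minimal sketch of two common ConfigProto options and their tf.config counterparts; adjust it to your own setup:

import tensorflow as tf

# TensorFlow 1.x:
#   config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
#   config.gpu_options.allow_growth = True
#   session = tf.compat.v1.Session(config=config)

# TensorFlow 2.x equivalents:
tf.config.set_soft_device_placement(True)
for gpu in tf.config.list_physical_devices('GPU'):
  tf.config.experimental.set_memory_growth(gpu, True)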

Conclusions

The overall process is:

  1. Run the upgrade script.
  2. Remove contrib symbols.
  3. Switch your models to an object-oriented style (Keras).
  4. Use tf.keras or tf.estimator training and evaluation loops where you can.
  5. Otherwise, use custom loops, but be sure to avoid sessions and collections.

It takes a little work to convert code to idiomatic TensorFlow 2.x, but every change results in:

  • Fewer lines of code.
  • Increased clarity and simplicity.
  • Easier debugging.