TensorFlow 2 is fundamentally different from TF1.x in several ways. You can still run unmodified TF1.x code (except for contrib) against TF2 binary installations like so:
```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```
However, this is not running TF2 behaviors and APIs, and may not work as expected with code written for TF2. If you are not running with TF2 behaviors active, you are effectively running TF1.x on top of a TF2 installation. Read the TF1 vs TF2 behaviors guide for more details on how TF2 is different from TF1.x.
This guide provides an overview of the process to migrate your TF1.x code to TF2. This enables you to take advantage of new and future feature improvements and also make your code simpler, more performant, and easier to maintain.
If you are using tf.keras's high-level APIs and training exclusively with model.fit, your code should be more or less fully compatible with TF2, except for the following caveats:
- TF2 has new default learning rates for Keras optimizers.
- TF2 may have changed the "name" that metrics are logged to.
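One way to guard against both caveats is to pin the learning rate explicitly and inspect the logged metric names after training. A minimal sketch (the model, layer sizes, and data below are hypothetical):

```python
import numpy as np
import tensorflow as tf

# A tiny hypothetical model for illustration.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Pin the learning rate explicitly so a changed TF2 default cannot
# silently alter your training dynamics.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss="mse",
    metrics=["mae"],  # the logged metric names may differ from TF1.x
)

x = np.random.rand(32, 8).astype("float32")
y = np.random.rand(32, 1).astype("float32")
history = model.fit(x, y, epochs=1, verbose=0)

# Inspect the exact names TF2 logs metrics under.
print(sorted(history.history.keys()))
```

Checking `history.history.keys()` once after migrating tells you which names your logging or callbacks should reference.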
TF2 migration process
- Before migrating, learn about the behavior and API differences between TF1.x and TF2 by reading the TF1 vs TF2 behaviors guide.
- Run the automated script to convert some of your TF1.x API usage to tf.compat.v1.
- Remove old tf.contrib symbols (check TF Addons and TF-Slim).
- Make your TF1.x model forward passes run in TF2 with eager execution enabled.
- Upgrade your TF1.x code for training loops and saving/loading models to their TF2 equivalents.
- (Optional) Migrate your TF2-compatible tf.compat.v1 APIs to idiomatic TF2 APIs.
The following sections expand upon the steps outlined above.
Run the symbol conversion script
This executes an initial pass at rewriting your code symbols to run against TF 2.x binaries, but won't make your code idiomatic to TF 2.x nor will it automatically make your code compatible with TF2 behaviors.
Your code will most likely still make use of tf.compat.v1 endpoints to access placeholders, sessions, collections, and other TF1.x-style functionality.
Read the guide to find out more about the best practices for using the symbol conversion script.
Remove usage of tf.contrib
A large amount of older TF1.x code uses the Slim library, which was packaged with TF1.x as tf.contrib.layers. When migrating your Slim code to TF2, switch your Slim API usages to point to the tf-slim pip package. Then, read the model mapping guide to learn how to convert Slim code.
Make TF1.x model forward passes run with TF2 behaviors enabled
Track variables and losses
Eager execution in TF2 does not support tf.Graph collection-based APIs. This affects how you construct and track variables.

Replace aggregate lists of variables (like tf.Graph.get_collection(tf.GraphKeys.VARIABLES)) with the .variables and .trainable_variables attributes of the Layer, Module, or Model objects.

The Layer and Model classes implement several other properties that remove the need for global collections. For example, their .losses property can be a replacement for the tf.GraphKeys.LOSSES collection.

Read the model mapping guide to find out more about using the TF2 code modeling shims to embed your existing variable_scope-based code inside of Keras layers, models, and modules. This will let you execute forward passes with eager execution enabled without major rewrites.
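As an illustration of object-based tracking, a tf.Module (like a Keras Layer or Model) records any tf.Variable assigned to it, so no collection lookup is needed. A minimal sketch with a hypothetical layer class:

```python
import tensorflow as tf

# Variables created as attributes of a tf.Module are tracked automatically,
# replacing tf.Graph.get_collection(tf.GraphKeys.VARIABLES)-style lookups.
class Dense(tf.Module):
  def __init__(self, in_features, out_features, name=None):
    super().__init__(name=name)
    self.w = tf.Variable(tf.random.normal([in_features, out_features]), name="w")
    self.b = tf.Variable(tf.zeros([out_features]), name="b")

  def __call__(self, x):
    return tf.matmul(x, self.w) + self.b

layer = Dense(3, 2)
_ = layer(tf.ones([1, 3]))

# .variables / .trainable_variables replace the global collections.
print(len(layer.trainable_variables))
```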
Adapting to other behavior changes
If the model mapping guide on its own is insufficient to get your model forward pass running with all TF2 behaviors enabled, see the guide on TF1.x vs TF2 behaviors to learn about the other behavior changes and how you can adapt to them. Also check out the making new Layers and Models via subclassing guide for details.
Validating your results
See the model validation guide for easy tools and guidance around how you can (numerically) validate that your model is behaving correctly when eager execution is enabled. You may find this especially useful when paired with the model mapping guide.
Upgrade training, evaluation, and import/export code
TF1.x training loops built with sessions, tf.estimator.Estimator, and other collections-based approaches are not compatible with the new behaviors of TF2. It is important that you migrate all of your TF1.x training code, as combining it with TF2 code can cause unexpected behaviors.
You can choose from among several strategies to do this.
The highest-level approach is to use tf.keras. The high-level functions in Keras manage many of the low-level details that are easy to miss if you write your own training loop. For example, they automatically collect the regularization losses and set the training=True argument when calling the model.
Custom training loops give you finer control over your model, such as tracking the weights of individual layers. Read the guide on building training loops from scratch to learn how to use tf.GradientTape to retrieve model weights and use them to update the model.
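A single custom training step typically looks like the following. This is a minimal sketch with a hypothetical one-layer model and random data:

```python
import numpy as np
import tensorflow as tf

# A tiny hypothetical model, optimizer, and dataset for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")

# One training step: record the forward pass on the tape,
# then compute and apply gradients.
with tf.GradientTape() as tape:
  pred = model(x, training=True)  # set training=True explicitly
  loss = loss_fn(y, pred)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

Unlike model.fit, nothing here is implicit: you choose what is recorded, which variables are updated, and when.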
Convert TF1.x optimizers to Keras optimizers
The table below summarizes how you can convert the legacy optimizers in tf.compat.v1.train to their Keras equivalents. You can directly replace the TF1.x version with the TF2 version unless additional steps (such as updating the default learning rate) are required.
Note that converting your optimizers may make old checkpoints incompatible.
| TF1.x | TF2 | Additional steps |
|---|---|---|
| `tf.compat.v1.train.MomentumOptimizer` | `tf.keras.optimizers.SGD` | Include the `momentum` argument |
| `tf.compat.v1.train.AdamOptimizer` | `tf.keras.optimizers.Adam` | Rename the `beta1` and `beta2` arguments to `beta_1` and `beta_2` |
| `tf.compat.v1.train.RMSPropOptimizer` | `tf.keras.optimizers.RMSprop` | Rename the `decay` argument to `rho` |
| `tf.compat.v1.train.FtrlOptimizer` | `tf.keras.optimizers.Ftrl` | Remove the `accum_name` and `linear_name` arguments |
| `tf.contrib.opt.AdaMaxOptimizer` | `tf.keras.optimizers.Adamax` | Rename the `beta1` and `beta2` arguments to `beta_1` and `beta_2` |
| `tf.contrib.opt.NadamOptimizer` | `tf.keras.optimizers.Nadam` | Rename the `beta1` and `beta2` arguments to `beta_1` and `beta_2` |
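For example, converting a TF1.x Adam optimizer is mostly a matter of renaming keyword arguments (the hyperparameter values shown here are arbitrary):

```python
import tensorflow as tf

# TF1.x version, shown for comparison (requires tf.compat.v1):
#   optimizer = tf.compat.v1.train.AdamOptimizer(
#       learning_rate=0.001, beta1=0.9, beta2=0.999)

# TF2 Keras equivalent -- beta1/beta2 become beta_1/beta_2.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001, beta_1=0.9, beta_2=0.999)
```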
Upgrade data input pipelines
There are many ways to feed data to a tf.keras model. Models accept Python generators and NumPy arrays as input, but the recommended way to feed data to a model is to use the tf.data package, which contains a collection of high-performance classes for manipulating data. Datasets belonging to tf.data are efficient, expressive, and integrate well with TF2. They can be passed directly to Model.fit, and they can be iterated over directly in standard Python:
```python
for example_batch, label_batch in dataset:
  break
```
If you are still using tf.queue, queues are now only supported as data structures, not as input pipelines.
You should also migrate any feature preprocessing code that uses tf.feature_column. Read the migration guide for more details.
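A typical tf.data pipeline built from in-memory arrays looks like this. A minimal sketch with hypothetical toy data:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data: 10 examples with 2 features each.
features = np.arange(20, dtype="float32").reshape(10, 2)
labels = np.arange(10, dtype="int64")

# Build, shuffle, and batch the dataset -- the typical pipeline shape.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=10)
    .batch(4)
)

# Iterate directly in Python, or pass `dataset` straight to model.fit.
for example_batch, label_batch in dataset:
  break
```

The same `dataset` object works in both places, so you can inspect batches interactively before handing the pipeline to a training loop.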
Saving and loading models
There are no significant compatibility concerns for saved models. Read the
SavedModel guide for more information about migrating
SavedModels in TF1.x to TF2. In general,
- TF1.x saved_models work in TF2.
- TF2 saved_models work in TF1.x if all the ops are supported.
Also refer to the GraphDef section in the SavedModel migration guide for more information on working with Graph.pb and Graph.pbtxt files.
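The TF2 export round trip is a single save/load pair. A minimal sketch with a hypothetical one-function module:

```python
import tempfile
import tensorflow as tf

# A trivial hypothetical module exposing one traced function.
class Adder(tf.Module):
  @tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
  def add_one(self, x):
    return x + 1.0

# Export as a SavedModel, then load it back and call the function.
path = tempfile.mkdtemp()
tf.saved_model.save(Adder(), path)

loaded = tf.saved_model.load(path)
result = loaded.add_one(tf.constant(2.0))
```

The input_signature on the tf.function pins the exported signature, which is what makes the SavedModel callable from TF1.x-style consumers as well.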
(Optional) Migrate off tf.compat.v1
The tf.compat.v1 module contains the complete TF1.x API, with its original semantics.
Even after following the steps above and ending up with code that is fully compatible with all TF2 behaviors, your code may still contain many mentions of compat.v1 APIs that happen to be compatible with TF2. You should avoid using compat.v1 APIs in any new code that you write, though they will continue working for your already-written code.
However, you may choose to migrate the existing usages to non-legacy TF2 APIs.
The docstrings of individual
compat.v1 symbols will often explain how to
migrate them to non-legacy TF2 APIs. Additionally, the
model mapping guide's section on incremental migration to idiomatic TF2 APIs
may help with this as well.
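As one common example, a compat.v1 placeholder-and-session pattern maps onto a plain tf.function, which takes its inputs as ordinary arguments. A minimal sketch:

```python
import tensorflow as tf

# Legacy compat.v1 pattern (placeholders + sessions), shown for comparison:
#   x = tf.compat.v1.placeholder(tf.float32, shape=[])
#   y = x * 2.0
#   with tf.compat.v1.Session() as sess:
#     out = sess.run(y, feed_dict={x: 3.0})

# Idiomatic TF2: a tf.function takes arguments directly -- no session,
# no feed_dict. Tracing builds the graph behind the scenes.
@tf.function
def double(x):
  return x * 2.0

out = double(tf.constant(3.0))
```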
Resources and further reading
As mentioned previously, it is a good practice to migrate all your TF1.x code to TF2. Read the guides in the Migrate to TF2 section of the TensorFlow guide to learn more.