
Better performance with tf.function and AutoGraph


TF 2.0 brings together the ease of eager execution and the power of TF 1.0. At the center of this merger is tf.function, which allows you to transform a subset of Python syntax into portable, high-performance TensorFlow graphs.

A cool new feature of tf.function is AutoGraph, which lets you write graph code using natural Python syntax. For a list of the Python features that you can use with AutoGraph, see AutoGraph Capabilities and Limitations. For more details about tf.function, see the RFC TF 2.0: Functions, not Sessions. For more details about AutoGraph, see tf.autograph.

This tutorial will walk you through the basic features of tf.function and AutoGraph.


Import TensorFlow 2.0:

import numpy as np
import tensorflow as tf

The tf.function decorator

When you annotate a function with tf.function, you can still call it like any other function. But it will be compiled into a graph, which means you get the benefits of faster execution, running on GPU or TPU, or exporting to SavedModel.

@tf.function
def simple_nn_layer(x, y):
  return tf.nn.relu(tf.matmul(x, y))

x = tf.random.uniform((3, 3))
y = tf.random.uniform((3, 3))

simple_nn_layer(x, y)
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[1.468885 , 1.798887 , 0.5989893],
       [1.4049681, 1.4643832, 0.604734 ],
       [1.0404211, 1.1051425, 0.3798284]], dtype=float32)>

If we examine the result of the annotation, we can see that it's a special callable that handles all interactions with the TensorFlow runtime.

simple_nn_layer
<tensorflow.python.eager.def_function.Function at 0x7ff5b22bbf98>
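For instance, this callable can hand you the concrete, traced graph for a given set of inputs. A minimal sketch (not part of the original notebook), reusing the x and y tensors defined above:

# get_concrete_function returns the ConcreteFunction (a callable backed by a
# tf.Graph) that tf.function traced for these input shapes and dtypes.
concrete_fn = simple_nn_layer.get_concrete_function(x, y)
print(concrete_fn)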

If your code uses multiple functions, you don't need to annotate them all - any functions called from an annotated function will also run in graph mode.

def linear_layer(x):
  return 2 * x + 1

@tf.function
def deep_net(x):
  return tf.nn.relu(linear_layer(x))

deep_net(tf.constant((1, 2, 3)))
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([3, 5, 7], dtype=int32)>

Functions can be faster than eager code for graphs with many small ops, but for graphs with a few expensive ops (like convolutions) you may not see much speedup.

import timeit
conv_layer = tf.keras.layers.Conv2D(100, 3)

@tf.function
def conv_fn(image):
  return conv_layer(image)

image = tf.zeros([1, 200, 200, 100])
# warm up
conv_layer(image); conv_fn(image)
print("Eager conv:", timeit.timeit(lambda: conv_layer(image), number=10))
print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10))
print("Note how there's not much difference in performance for convolutions")

Eager conv: 0.004021771999759949
Function conv: 0.0028740709999510727
Note how there's not much difference in performance for convolutions

lstm_cell = tf.keras.layers.LSTMCell(10)

@tf.function
def lstm_fn(input, state):
  return lstm_cell(input, state)

input = tf.zeros([10, 10])
state = [tf.zeros([10, 10])] * 2
# warm up
lstm_cell(input, state); lstm_fn(input, state)
print("eager lstm:", timeit.timeit(lambda: lstm_cell(input, state), number=10))
print("function lstm:", timeit.timeit(lambda: lstm_fn(input, state), number=10))

eager lstm: 0.006284275999860256
function lstm: 0.0034338760001446644

Use Python control flow

When using data-dependent control flow inside tf.function, you can use Python control flow statements and AutoGraph will convert them into appropriate TensorFlow ops. For example, if statements will be converted into tf.cond() if they depend on a Tensor.

In the example below, x is a Tensor but the if statement works as expected:

@tf.function
def square_if_positive(x):
  if x > 0:
    x = x * x
  else:
    x = 0
  return x

print('square_if_positive(2) = {}'.format(square_if_positive(tf.constant(2))))
print('square_if_positive(-2) = {}'.format(square_if_positive(tf.constant(-2))))
square_if_positive(2) = 4
square_if_positive(-2) = 0

AutoGraph supports common Python statements like while, for, if, break, continue and return, with support for nesting. That means you can use Tensor expressions in the condition of while and if statements, or iterate over a Tensor in a for loop.

@tf.function
def sum_even(items):
  s = 0
  for c in items:
    if c % 2 > 0:
      continue
    s += c
  return s

sum_even(tf.constant([10, 12, 15, 20]))
<tf.Tensor: shape=(), dtype=int32, numpy=42>
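The same mechanism applies to while loops whose condition depends on a Tensor; AutoGraph stages them as tf.while_loop. A minimal sketch (not part of the original notebook):

@tf.function
def sum_to(n):
  # Both loop variables are Tensors, so the Python while loop below is
  # converted into a tf.while_loop in the traced graph.
  total = tf.constant(0)
  i = tf.constant(1)
  while i <= n:
    total += i
    i += 1
  return total

sum_to(tf.constant(5))  # <tf.Tensor: shape=(), dtype=int32, numpy=15>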

AutoGraph also provides a low-level API for advanced users. For example, we can use it to have a look at the generated code:

print(tf.autograph.to_code(sum_even.python_function))

def tf__sum_even(items):
  do_return = False
  retval_ = ag__.UndefinedReturnValue()
  with ag__.FunctionScope('sum_even', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
    s = 0

    def get_state_2():
      return ()

    def set_state_2(_):
      pass

    def loop_body(iterates, s):
      c = iterates
      continue_ = False

      def get_state():
        return ()

      def set_state(_):
        pass

      def if_true():
        continue_ = True
        return continue_

      def if_false():
        return continue_
      cond = c % 2 > 0
      continue_ = ag__.if_stmt(cond, if_true, if_false, get_state, set_state, ('continue_',), ())

      def get_state_1():
        return ()

      def set_state_1(_):
        pass

      def if_true_1():
        s_1, = s,
        s_1 += c
        return s_1

      def if_false_1():
        return s
      cond_1 = ag__.not_(continue_)
      s = ag__.if_stmt(cond_1, if_true_1, if_false_1, get_state_1, set_state_1, ('s',), ())
      return s,
    s, = ag__.for_stmt(items, None, loop_body, get_state_2, set_state_2, (s,), ('s',), ())
    do_return = True
    retval_ = fscope.mark_return_value(s)
  return ag__.retval(retval_)

Here's an example of more complicated control flow:

@tf.function
def fizzbuzz(n):
  for i in tf.range(n):
    if i % 3 == 0:
      tf.print('Fizz')
    elif i % 5 == 0:
      tf.print('Buzz')
    else:
      tf.print(i)


Keras and AutoGraph

AutoGraph is available by default in non-dynamic Keras models. For more information, see tf.keras.

class CustomModel(tf.keras.models.Model):

  @tf.function
  def call(self, input_data):
    if tf.reduce_mean(input_data) > 0:
      return input_data
    else:
      return input_data // 2

model = CustomModel()

model(tf.constant([-2, -4]))
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([-1, -2], dtype=int32)>
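Calling the model with a batch whose mean is positive takes the other branch. A minimal usage sketch (not in the original output):

model(tf.constant([2, 4]))  # mean > 0, so the input is returned unchanged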

Side effects

Just like in eager mode, you can use operations with side effects, such as tf.Variable.assign or tf.print, inside tf.function, and it will insert the necessary control dependencies to ensure they execute in order.

v = tf.Variable(5)

@tf.function
def find_next_odd():
  v.assign(v + 1)
  if v % 2 == 0:
    v.assign(v + 1)

find_next_odd()
v
<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=7>
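Because of those control dependencies, stateful ops keep their program order even across loop iterations. A minimal sketch (not part of the original notebook) that mixes assignments and tf.print:

counter = tf.Variable(0)

@tf.function
def count_to(n):
  # Each assign_add and tf.print runs in order thanks to the inserted
  # control dependencies, so this prints 1, 2, ..., n.
  for _ in tf.range(n):
    counter.assign_add(1)
    tf.print('counter is now', counter)

count_to(tf.constant(3))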


Debugging

tf.function and AutoGraph work by generating code and tracing it into TensorFlow graphs. This mechanism does not yet support step-by-step debuggers like pdb. However, you can call tf.config.experimental_run_functions_eagerly(True) to temporarily enable eager execution inside the tf.function and use your favorite debugger:

@tf.function
def f(x):
  if x > 0:
    # Try setting a breakpoint here!
    # Example:
    #   import pdb
    #   pdb.set_trace()
    x = x + 1
  return x


tf.config.experimental_run_functions_eagerly(True)

# You can now set breakpoints and run the code in a debugger.
f(tf.constant(1))

tf.config.experimental_run_functions_eagerly(False)


Advanced example: An in-graph training loop

The previous section showed that AutoGraph can be used inside Keras layers and models. Keras models can also be used in AutoGraph code.

This example shows how to train a simple Keras model on MNIST, where the entire training process (loading batches, calculating gradients, updating parameters, calculating validation accuracy, and repeating until convergence) is performed in-graph.

Download data

def prepare_mnist_features_and_labels(x, y):
  x = tf.cast(x, tf.float32) / 255.0
  y = tf.cast(y, tf.int64)
  return x, y

def mnist_dataset():
  (x, y), _ = tf.keras.datasets.mnist.load_data()
  ds = tf.data.Dataset.from_tensor_slices((x, y))
  ds = ds.map(prepare_mnist_features_and_labels)
  ds = ds.take(20000).shuffle(20000).batch(100)
  return ds

train_dataset = mnist_dataset()

Define the model

model = tf.keras.Sequential((
    tf.keras.layers.Reshape(target_shape=(28 * 28,), input_shape=(28, 28)),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(10)))
optimizer = tf.keras.optimizers.Adam()

Define the training loop

compute_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

compute_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

def train_one_step(model, optimizer, x, y):
  with tf.GradientTape() as tape:
    logits = model(x)
    loss = compute_loss(y, logits)

  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))

  compute_accuracy(y, logits)
  return loss

@tf.function
def train(model, optimizer):
  train_ds = mnist_dataset()
  step = 0
  loss = 0.0
  accuracy = 0.0
  for x, y in train_ds:
    step += 1
    loss = train_one_step(model, optimizer, x, y)
    if step % 10 == 0:
      tf.print('Step', step, ': loss', loss, '; accuracy', compute_accuracy.result())
  return step, loss, accuracy

step, loss, accuracy = train(model, optimizer)
print('Final step', step, ': loss', loss, '; accuracy', compute_accuracy.result())
Step 10 : loss 1.90009379 ; accuracy 0.372
Step 20 : loss 1.29197729 ; accuracy 0.508
Step 30 : loss 0.757852912 ; accuracy 0.593666673
Step 40 : loss 0.551428318 ; accuracy 0.64825
Step 50 : loss 0.631847858 ; accuracy 0.6856
Step 60 : loss 0.598846793 ; accuracy 0.717333317
Step 70 : loss 0.420379639 ; accuracy 0.739857137
Step 80 : loss 0.310861468 ; accuracy 0.759
Step 90 : loss 0.32919234 ; accuracy 0.776
Step 100 : loss 0.380437076 ; accuracy 0.7878
Step 110 : loss 0.395042688 ; accuracy 0.797727287
Step 120 : loss 0.436106026 ; accuracy 0.806833327
Step 130 : loss 0.224314824 ; accuracy 0.815846145
Step 140 : loss 0.210455775 ; accuracy 0.82307142
Step 150 : loss 0.210413113 ; accuracy 0.830266654
Step 160 : loss 0.406054586 ; accuracy 0.835187495
Step 170 : loss 0.373840034 ; accuracy 0.839588225
Step 180 : loss 0.326159567 ; accuracy 0.844222248
Step 190 : loss 0.21542713 ; accuracy 0.847842097
Step 200 : loss 0.272144347 ; accuracy 0.8523
Final step tf.Tensor(200, shape=(), dtype=int32) : loss tf.Tensor(0.27214435, shape=(), dtype=float32) ; accuracy tf.Tensor(0.8523, shape=(), dtype=float32)


Batching

In real applications, batching is essential for performance. The best code to convert to AutoGraph is code where the control flow is decided at the batch level. If you are making decisions at the individual example level, try to use batch APIs to maintain performance.

For example, if you have the following code in Python:

def square_if_positive(x):
  return [i ** 2 if i > 0 else i for i in x]

square_if_positive(range(-5, 5))
[-5, -4, -3, -2, -1, 0, 1, 4, 9, 16]

You may be tempted to write it in TensorFlow as such (and this would work!):

@tf.function
def square_if_positive_naive(x):
  result = tf.TensorArray(tf.int32, size=x.shape[0])
  for i in tf.range(x.shape[0]):
    if x[i] > 0:
      result = result.write(i, x[i] ** 2)
    else:
      result = result.write(i, x[i])
  return result.stack()

square_if_positive_naive(tf.range(-5, 5))
<tf.Tensor: shape=(10,), dtype=int32, numpy=array([-5, -4, -3, -2, -1,  0,  1,  4,  9, 16], dtype=int32)>

But in this case, it turns out you can write the following:

def square_if_positive_vectorized(x):
  return tf.where(x > 0, x ** 2, x)

square_if_positive_vectorized(tf.range(-5, 5))
<tf.Tensor: shape=(10,), dtype=int32, numpy=array([-5, -4, -3, -2, -1,  0,  1,  4,  9, 16], dtype=int32)>


Re-tracing

Key points:

  • Exercise caution when calling functions with non-tensor arguments, or with arguments that change shapes.
  • Decorate module-level functions and methods of module-level classes; avoid decorating local functions or methods.

tf.function can give you a significant speedup over eager execution, at the cost of a slower first-time execution. This is because, when executed for the first time, the function is also traced into a TensorFlow graph. Constructing and optimizing a graph is usually much slower than actually executing it:

import timeit

@tf.function
def f(x, y):
  return tf.matmul(x, y)

print(
    "First invocation:",
    timeit.timeit(lambda: f(tf.ones((10, 10)), tf.ones((10, 10))), number=1))

print(
    "Second invocation:",
    timeit.timeit(lambda: f(tf.ones((10, 10)), tf.ones((10, 10))), number=1))
First invocation: 0.04282790999968711
Second invocation: 0.0008704130000296573

You can easily tell when a function is traced by adding a print statement to the top of the function. Because any Python code is only executed at trace time, you will only see the output of print when the function is traced:

@tf.function
def f():
  print('Tracing!')
  tf.print('Executing')

print('First invocation:')
f()

print('Second invocation:')
f()
First invocation:
Tracing!
Executing
Second invocation:
Executing

tf.function may also re-trace when called with different non-tensor arguments:

@tf.function
def f(n):
  print(n, 'Tracing!')
  tf.print(n, 'Executing')

f(1)
f(1)

f(2)
f(2)

1 Tracing!
1 Executing
1 Executing
2 Tracing!
2 Executing
2 Executing

A re-trace can also happen when tensor arguments change shape, unless you specified an input_signature:

@tf.function
def f(x):
  print(x.shape, 'Tracing!')
  tf.print(x, 'Executing')

f(tf.constant([1]))
f(tf.constant([2]))

f(tf.constant([1, 2]))
f(tf.constant([3, 4]))
(1,) Tracing!
[1] Executing
[2] Executing
(2,) Tracing!
[1 2] Executing
[3 4] Executing
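As noted above, you can avoid this kind of re-trace by giving the function an explicit input_signature. A minimal sketch (not part of the original notebook):

@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.int32)])
def g(x):
  print(x.shape, 'Tracing!')
  tf.print(x, 'Executing')

g(tf.constant([1]))        # traces once, with a relaxed (None,) shape
g(tf.constant([1, 2, 3]))  # reuses the same trace; only 'Executing' prints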

In addition, tf.function always creates a new graph function with its own set of traces whenever it is called:

def f():
  print('Tracing!')
  tf.print('Executing')

tf.function(f)()
tf.function(f)()
Tracing!
Executing
Tracing!
Executing

This can lead to surprising behavior when using the @tf.function decorator in a nested function:

def outer():
  @tf.function
  def f():
    print('Tracing!')
    tf.print('Executing')
  f()