Get started with TensorFlow 2.0 for experts


This is a Google Colaboratory notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.

  1. In Colab, connect to a Python runtime: At the top-right of the menu bar, select CONNECT.
  2. Run all the notebook code cells: Select Runtime > Run all.

Download and install the TensorFlow 2.0 Beta package:

!pip install -q tensorflow==2.0.0-beta1

Import TensorFlow into your program:

from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

Load and prepare the MNIST dataset.

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
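Conv2D layers expect inputs of shape (batch, height, width, channels), and the MNIST images are grayscale 28×28 arrays with no channel axis. A small NumPy sketch of what the tf.newaxis indexing above does (np.newaxis behaves the same way):

```python
import numpy as np

# A stand-in batch of 4 grayscale 28x28 images, like MNIST after scaling.
x = np.zeros((4, 28, 28))

# The ellipsis keeps all existing axes; np.newaxis appends a channels axis.
x = x[..., np.newaxis]
print(x.shape)  # (4, 28, 28, 1)
```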

Use tf.data to batch and shuffle the dataset:

train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)

test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
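Here shuffle(10000) fills a 10,000-element buffer and draws samples from it at random, while batch(32) groups consecutive elements into batches of 32. A plain-Python analogy of the batching step, using a hypothetical helper make_batches that is not part of tf.data:

```python
def make_batches(data, batch_size):
    """Group consecutive elements into lists of at most batch_size items,
    mimicking what Dataset.batch does to a stream of examples."""
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

batches = make_batches(list(range(10)), 4)
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Note that, like Dataset.batch, the final batch may be smaller than the requested size.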

Build the tf.keras model using the Keras model subclassing API:

class MyModel(Model):
  def __init__(self):
    super(MyModel, self).__init__()
    self.conv1 = Conv2D(32, 3, activation='relu')
    self.flatten = Flatten()
    self.d1 = Dense(128, activation='relu')
    self.d2 = Dense(10, activation='softmax')

  def call(self, x):
    x = self.conv1(x)
    x = self.flatten(x)
    x = self.d1(x)
    return self.d2(x)

model = MyModel()

Choose an optimizer and loss function for training:

loss_object = tf.keras.losses.SparseCategoricalCrossentropy()

optimizer = tf.keras.optimizers.Adam()
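SparseCategoricalCrossentropy takes integer class labels and predicted class probabilities, and returns the negative log of the probability the model assigned to the true class. A hand computation for a single hypothetical example (not tied to the MNIST model):

```python
import math

# Predicted probabilities over 3 classes, e.g. from a softmax layer.
probs = [0.1, 0.2, 0.7]
label = 2  # integer class index, as in the MNIST labels

# Cross-entropy with integer ("sparse") labels: -log(p[true class]).
loss = -math.log(probs[label])
print(round(loss, 4))  # 0.3567
```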

Select metrics to measure the loss and the accuracy of the model. These metrics accumulate the values over epochs and then print the overall result.

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
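tf.keras.metrics.Mean keeps a running total and count across calls, so each call folds a new value into the average. A minimal pure-Python sketch of that accumulation behavior (an illustration only, not the actual TensorFlow implementation):

```python
class RunningMean:
    """Accumulates values and reports their mean, like tf.keras.metrics.Mean."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def __call__(self, value):
        # Each call updates the running state rather than replacing it.
        self.total += value
        self.count += 1

    def result(self):
        return self.total / self.count

m = RunningMean()
m(0.5)
m(0.3)
print(m.result())  # 0.4
```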

Use tf.GradientTape to train the model:

@tf.function
def train_step(images, labels):
  with tf.GradientTape() as tape:
    predictions = model(images)
    loss = loss_object(labels, predictions)
  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  train_loss(loss)
  train_accuracy(labels, predictions)
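apply_gradients runs the optimizer's update rule on each trainable variable. For plain gradient descent that rule is w ← w − lr·grad (Adam additionally tracks running moment estimates of the gradients). A NumPy sketch of the vanilla update, minimizing the toy function f(w) = w², which is a hypothetical example rather than the tutorial's model:

```python
import numpy as np

w = np.array([2.0])   # a single trainable "variable"
lr = 0.1              # learning rate

for _ in range(50):
    grad = 2 * w      # gradient of f(w) = w**2, computed by hand
    w -= lr * grad    # the update apply_gradients performs for plain SGD

print(w)  # converges toward the minimum at w = 0
```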

Test the model:

@tf.function
def test_step(images, labels):
  predictions = model(images)
  t_loss = loss_object(labels, predictions)

  test_loss(t_loss)
  test_accuracy(labels, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
  for images, labels in train_ds:
    train_step(images, labels)

  for test_images, test_labels in test_ds:
    test_step(test_images, test_labels)

  template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
  print(template.format(epoch+1,
                        train_loss.result(),
                        train_accuracy.result()*100,
                        test_loss.result(),
                        test_accuracy.result()*100))
Epoch 1, Loss: 0.13861244916915894, Accuracy: 95.83000183105469, Test Loss: 0.06789647787809372, Test Accuracy: 97.75999450683594
Epoch 2, Loss: 0.09070724248886108, Accuracy: 97.26333618164062, Test Loss: 0.06203747168183327, Test Accuracy: 97.95500183105469
Epoch 3, Loss: 0.06735743582248688, Accuracy: 97.96666717529297, Test Loss: 0.06339888274669647, Test Accuracy: 97.96666717529297
Epoch 4, Loss: 0.05398847535252571, Accuracy: 98.36042022705078, Test Loss: 0.06847456097602844, Test Accuracy: 97.98249816894531
Epoch 5, Loss: 0.044840775430202484, Accuracy: 98.63066864013672, Test Loss: 0.069434754550457, Test Accuracy: 98.01599884033203

The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the TensorFlow tutorials.