This guide trains a neural network model to classify images of clothing, like sneakers and shirts, saves the trained model, and then serves it with TensorFlow Serving. The focus is on TensorFlow Serving rather than modeling and training in TensorFlow; for a complete example that focuses on modeling and training, see the Basic Classification example.
This guide uses tf.keras, a high-level API to build and train models in TensorFlow.
import sys
# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
# TensorFlow and tf.keras
print("Installing dependencies for Colab environment")
!pip install -Uq grpcio==1.26.0
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
print('TensorFlow version: {}'.format(tf.__version__))
Create your model
Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
Figure 1. Fashion-MNIST samples (by Zalando, MIT License).
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the "Hello, World" of machine learning programs for computer vision. You can access Fashion MNIST directly from TensorFlow: just import and load the data.
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# scale the values to 0.0 to 1.0
train_images = train_images / 255.0
test_images = test_images / 255.0
# reshape for feeding into the model
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print('\ntrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step

train_images.shape: (60000, 28, 28, 1), of float64
test_images.shape: (10000, 28, 28, 1), of float64
Train and evaluate your model
Let's use the simplest possible CNN, since we're not focused on the modeling part.
model = keras.Sequential([
keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3,
strides=2, activation='relu', name='Conv1'),
keras.layers.Flatten(),
keras.layers.Dense(10, name='Dense')
])
model.summary()
testing = False
epochs = 5
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(train_images, train_labels, epochs=epochs)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy: {}'.format(test_acc))
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= Conv1 (Conv2D) (None, 13, 13, 8) 80 _________________________________________________________________ flatten (Flatten) (None, 1352) 0 _________________________________________________________________ Dense (Dense) (None, 10) 13530 ================================================================= Total params: 13,610 Trainable params: 13,610 Non-trainable params: 0 _________________________________________________________________ Epoch 1/5 1875/1875 [==============================] - 13s 2ms/step - loss: 0.7546 - sparse_categorical_accuracy: 0.7457 Epoch 2/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.4254 - sparse_categorical_accuracy: 0.8521 Epoch 3/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3812 - sparse_categorical_accuracy: 0.8668 Epoch 4/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3557 - sparse_categorical_accuracy: 0.8770 Epoch 5/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.3415 - sparse_categorical_accuracy: 0.8795 313/313 [==============================] - 1s 2ms/step - loss: 0.3699 - sparse_categorical_accuracy: 0.8694 Test accuracy: 0.8694000244140625
Save your model
To load our trained model into TensorFlow Serving, we first need to save it in the SavedModel format. This will create a protobuf file in a well-defined directory hierarchy, and will include a version number. TensorFlow Serving allows us to select which version of a model, or "servable," we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
# Save the model in the SavedModel format.
# The signature definition is defined by the input and output tensors,
# and stored with the default serving key.
import tempfile
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}\n'.format(export_path))
tf.keras.models.save_model(
model,
export_path,
overwrite=True,
include_optimizer=True,
save_format=None,
signatures=None,
options=None
)
print('\nSaved model:')
!ls -l {export_path}
export_path = /tmp/1

INFO:tensorflow:Assets written to: /tmp/1/assets

Saved model:
total 88
drwxr-xr-x 2 kbuilder kbuilder  4096 Mar  9 10:10 assets
-rw-rw-r-- 1 kbuilder kbuilder 78123 Mar  9 10:10 saved_model.pb
drwxr-xr-x 2 kbuilder kbuilder  4096 Mar  9 10:10 variables
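As an aside, each additional version we export lands in its own numbered sub-directory next to /tmp/1. A minimal sketch of what that would look like (hypothetical; this guide keeps only version 1, and export_path_v2 is a name introduced just for illustration):

# Hypothetical: export an updated model as version 2 (not run in this guide).
# TensorFlow Serving watches MODEL_DIR and, by default, serves the
# highest-numbered version sub-directory it finds.
export_path_v2 = os.path.join(MODEL_DIR, str(2))  # -> /tmp/2
tf.keras.models.save_model(model, export_path_v2, overwrite=True)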
Examine your saved model
We'll use the command-line utility saved_model_cli to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
!saved_model_cli show --dir {export_path} --all
2021-03-09 10:10:12.685464: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['Conv1_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 28, 28, 1)
        name: serving_default_Conv1_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['Dense'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 10)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't this would be important information. It doesn't tell us everything, like the fact that this is grayscale image data for example, but it's a great start.
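If you'd rather check this from Python than from the CLI, the same signature information can be read back off the loaded SavedModel. A minimal sketch, assuming the export_path defined above:

# Inspect the serving signature programmatically (a cross-check of
# what saved_model_cli printed above).
loaded = tf.saved_model.load(export_path)
infer = loaded.signatures['serving_default']
print(infer.structured_input_signature)  # the Conv1_input TensorSpec
print(infer.structured_outputs)          # the Dense output TensorSpec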
Serve your model with TensorFlow Serving
Add TensorFlow Serving distribution URI as a package source:
We're preparing to install TensorFlow Serving using apt, since this Colab runs in a Debian-based environment. We'll add the tensorflow-model-server package to the list of packages that apt knows about. Note that we're running as root.
import sys
# We need sudo prefix if not on a Google Colab.
if 'google.colab' not in sys.modules:
    SUDO_IF_NEEDED = 'sudo'
else:
    SUDO_IF_NEEDED = ''
# This is the same as you would do from your command line, but without the
# [arch=amd64] qualifier, and with sudo added only when needed.
# On your own machine you would instead run:
# echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
# curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | {SUDO_IF_NEEDED} tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | {SUDO_IF_NEEDED} apt-key add -
!{SUDO_IF_NEEDED} apt update
deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2943  100  2943    0     0  15822      0 --:--:-- --:--:-- --:--:-- 15822
OK
Hit:1 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://asia-east1.gce.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 http://packages.cloud.google.com/apt google-cloud-logging-wheezy InRelease
Get:5 https://packages.cloud.google.com/apt eip-cloud-bionic InRelease [5419 B]
Hit:6 https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/amd64 InRelease
Hit:7 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64 InRelease
Get:8 https://nvidia.github.io/nvidia-docker/ubuntu18.04/amd64 InRelease [1129 B]
Get:9 http://storage.googleapis.com/tensorflow-serving-apt stable InRelease [3012 B]
Hit:10 http://archive.canonical.com/ubuntu bionic InRelease
Hit:11 http://security.ubuntu.com/ubuntu bionic-security InRelease
Get:12 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 Packages [340 B]
Get:13 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server-universal amd64 Packages [348 B]
Fetched 10.2 kB in 1s (7051 B/s)
114 packages can be upgraded. Run 'apt list --upgradable' to see them.
Install TensorFlow Serving
This is all you need: one command!
!{SUDO_IF_NEEDED} apt-get install tensorflow-model-server
The following packages were automatically installed and are no longer required:
  adwaita-icon-theme ca-certificates-java dconf-gsettings-backend dconf-service default-jre default-jre-headless dkms fonts-dejavu-extra freeglut3 freeglut3-dev g++-6 glib-networking glib-networking-common glib-networking-services gsettings-desktop-schemas gtk-update-icon-cache hicolor-icon-theme humanity-icon-theme java-common libaccinj64-9.1 libasound2 libasound2-data libasyncns0 libatk-bridge2.0-0 libatk-wrapper-java libatk-wrapper-java-jni libatk1.0-0 libatk1.0-data libatspi2.0-0 libavahi-client3 libavahi-common-data libavahi-common3 libcairo-gobject2 libcolord2 libcroco3 libcudart9.1 libcufft9.1 libcufftw9.1 libcups2 libcurand9.1 libcusolver9.1 libcusparse9.1 libdconf1 libdrm-amdgpu1 libdrm-dev libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libegl-mesa0 libegl1 libegl1-mesa libepoxy0 libflac8 libfontenc1 libgbm1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgif7 libgl1 libgl1-mesa-dev libgl1-mesa-dri libglapi-mesa libgles1 libgles2 libglu1-mesa libglu1-mesa-dev libglvnd-core-dev libglvnd-dev libglvnd0 libglx-mesa0 libglx0 libgtk-3-0 libgtk-3-common libgtk2.0-0 libgtk2.0-common libice-dev libjansson4 libjson-glib-1.0-0 libjson-glib-1.0-common liblcms2-2 libllvm9 libnppc9.1 libnppial9.1 libnppicc9.1 libnppicom9.1 libnppidei9.1 libnppif9.1 libnppig9.1 libnppim9.1 libnppist9.1 libnppisu9.1 libnppitc9.1 libnpps9.1 libnvrtc9.1 libnvtoolsext1 libnvvm3 libogg0 libopengl0 libpciaccess0 libpcsclite1 libproxy1v5 libpthread-stubs0-dev libpulse0 librest-0.7-0 librsvg2-2 librsvg2-common libsensors4 libsm-dev libsndfile1 libsoup-gnome2.4-1 libsoup2.4-1 libstdc++-6-dev libthrust-dev libvdpau-dev libvdpau1 libvorbis0a libvorbisenc2 libwayland-client0 libwayland-cursor0 libwayland-egl1 libwayland-server0 libx11-dev libx11-xcb-dev libx11-xcb1 libxau-dev libxcb-dri2-0 libxcb-dri2-0-dev libxcb-dri3-0 libxcb-dri3-dev libxcb-glx0 libxcb-glx0-dev libxcb-present-dev libxcb-present0 libxcb-randr0 libxcb-randr0-dev libxcb-render0-dev libxcb-shape0 libxcb-shape0-dev libxcb-sync-dev libxcb-sync1 libxcb-xfixes0 libxcb-xfixes0-dev libxcb1-dev libxcomposite1 libxcursor1 libxdamage-dev libxdamage1 libxdmcp-dev libxext-dev libxfixes-dev libxfixes3 libxfont2 libxft2 libxi-dev libxi6 libxinerama1 libxkbcommon0 libxkbfile1 libxmu-dev libxmu-headers libxnvctrl0 libxrandr2 libxshmfence-dev libxshmfence1 libxt-dev libxtst6 libxv1 libxxf86dga1 libxxf86vm-dev libxxf86vm1 linux-gcp-5.3-headers-5.3.0-1030 linux-gcp-headers-5.0.0-1026 linux-headers-5.3.0-1030-gcp linux-image-5.3.0-1030-gcp linux-modules-5.3.0-1030-gcp linux-modules-extra-5.3.0-1030-gcp mesa-common-dev ocl-icd-libopencl1 ocl-icd-opencl-dev opencl-c-headers openjdk-11-jre openjdk-11-jre-headless openjdk-8-jre openjdk-8-jre-headless pkg-config policykit-1-gnome python3-xkit screen-resolution-extra ubuntu-mono x11-utils x11-xkb-utils x11proto-core-dev x11proto-damage-dev x11proto-dev x11proto-fixes-dev x11proto-input-dev x11proto-xext-dev x11proto-xf86vidmode-dev xorg-sgml-doctools xserver-common xserver-xorg-core-hwe-18.04 xtrans-dev
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  tensorflow-model-server
0 upgraded, 1 newly installed, 0 to remove and 114 not upgraded.
Need to get 223 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 tensorflow-model-server all 2.4.1 [223 MB]
Fetched 223 MB in 6s (40.3 MB/s)
Selecting previously unselected package tensorflow-model-server.
(Reading database ... 242337 files and directories currently installed.)
Preparing to unpack .../tensorflow-model-server_2.4.1_all.deb ...
Unpacking tensorflow-model-server (2.4.1) ...
Setting up tensorflow-model-server (2.4.1) ...
Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads we can start making inference requests using REST. There are some important parameters:
- rest_api_port: The port that you'll use for REST requests.
- model_name: You'll use this in the URL of REST requests. It can be anything.
- model_base_path: This is the path to the directory where you've saved your model.
os.environ["MODEL_DIR"] = MODEL_DIR
%%bash --bg
nohup tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=fashion_model \
  --model_base_path="${MODEL_DIR}" >server.log 2>&1
!tail server.log
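Before making predictions, it can help to confirm the servable actually loaded. TensorFlow Serving's REST API exposes a model status endpoint; a quick sketch (the requests library is installed a little further down, and ships with Colab anyway):

import requests
# Query the model status endpoint; a healthy server reports
# version 1 in state "AVAILABLE".
status = requests.get('http://localhost:8501/v1/models/fashion_model')
print(status.json())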
Make a request to your model in TensorFlow Serving
First, let's take a look at a random example from our test data.
def show(idx, title):
  plt.figure()
  plt.imshow(test_images[idx].reshape(28,28))
  plt.axis('off')
  plt.title('\n\n{}'.format(title), fontdict={'size': 16})
import random
rando = random.randint(0,len(test_images)-1)
show(rando, 'An Example Image: {}'.format(class_names[test_labels[rando]]))
Ok, that looks interesting. How hard is that for you to recognize? Now let's create the JSON object for a batch of three inference requests, and see how well our model recognizes things:
import json
data = json.dumps({"signature_name": "serving_default", "instances": test_images[0:3].tolist()})
print('Data: {} ... {}'.format(data[:50], data[len(data)-52:]))
Data: {"signature_name": "serving_default", "instances": ... [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]]]}
Make REST requests
Newest version of the servable
We'll send a predict request as a POST to our server's REST endpoint, and pass it three examples. We'll ask our server to give us the latest version of our servable by not specifying a particular version.
!pip install -q requests
import requests
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
show(0, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[0])], np.argmax(predictions[0]), class_names[test_labels[0]], test_labels[0]))
A particular version of the servable
Now let's specify a particular version of our servable. Since we only have one, let's select version 1. We'll also look at all three results.
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model/versions/1:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
for i in range(0,3):
  show(i, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
    class_names[np.argmax(predictions[i])], np.argmax(predictions[i]), class_names[test_labels[i]], test_labels[i]))
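To sanity-check the server beyond three examples, you could score a larger batch over REST and compare against the labels. A minimal sketch, reusing the objects defined above:

# Sketch: compute accuracy for the first 100 test images via REST.
batch = json.dumps({"signature_name": "serving_default",
                    "instances": test_images[:100].tolist()})
resp = requests.post(
    'http://localhost:8501/v1/models/fashion_model/versions/1:predict',
    data=batch, headers=headers)
rest_preds = np.argmax(json.loads(resp.text)['predictions'], axis=1)
print('REST accuracy on 100 test images: {:.3f}'.format(
    (rest_preds == test_labels[:100]).mean()))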