TensorFlow Decision Forests and TensorFlow Serving

TensorFlow Serving (TF Serving) is a tool to run TensorFlow models online in large production settings using an RPC or REST API. TensorFlow Decision Forests (TF-DF) is supported natively by TF Serving >= 2.11.

TF-DF models are directly compatible with TF Serving. Yggdrasil models need to be converted before they can be used with TF Serving.
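For example, here is a minimal conversion sketch. It assumes the tfdf.keras.yggdrasil_model_to_keras_model converter is available in your TF-DF version; both paths are placeholders:

# Convert an Yggdrasil model into a TensorFlow SavedModel that TF Serving can load.
# The two paths below are placeholders.
python3 -c "
import tensorflow_decision_forests as tfdf
tfdf.keras.yggdrasil_model_to_keras_model(
    '/path/to/yggdrasil_model',  # Input: Yggdrasil model directory.
    '/path/to/saved_model')      # Output: TensorFlow SavedModel directory.
"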

Limitations

TensorFlow adds a significant amount of computation overhead. For small, latency-sensitive models (e.g., models running in less than 1µs), this overhead can be an order of magnitude larger than the cost of the model itself. In this case, it is recommended to run the TF-DF models with Yggdrasil Decision Forests.
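To check whether this overhead matters for your model, you can measure its raw (non-TensorFlow) inference speed with the benchmark_inference tool of Yggdrasil Decision Forests. The following sketch assumes the tool has been compiled from the Yggdrasil repository and that dataset.csv contains examples compatible with the model's input features:

# Measure the inference speed of the model without the TensorFlow overhead.
# Assumes benchmark_inference was compiled from the Yggdrasil Decision Forests
# repository, and that dataset.csv is compatible with the model.
./benchmark_inference \
    --model=/path/to/yggdrasil_model \
    --dataset=csv:/path/to/dataset.csv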

Usage example

The following example shows how to run a TF-DF model in TF Serving:

First, install TF Serving. In this example, we will use a pre-compiled version of TF Serving with TF-DF support.

# Download TF Serving
wget https://github.com/tensorflow/decision-forests/releases/download/serving-1.0.1/tensorflow_model_server_linux.zip
unzip tensorflow_model_server_linux.zip

# Check that TF Serving works.
./tensorflow_model_server --version

In this example, we use an already trained TF-DF model.

# Get a TF-DF model
git clone https://github.com/tensorflow/decision-forests.git
MODEL_PATH=$(pwd)/decision-forests/tensorflow_decision_forests/test_data/model/saved_model_adult_gbt

echo "The TF-DF model is available at: ${MODEL_PATH}"

Note: TF Serving requires the model's absolute path. This is why we use $(pwd).
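Optionally, you can inspect the model's serving signature to see the input features that TF Serving will expect in requests. The saved_model_cli tool ships with TensorFlow:

# Optional: list the inputs and outputs of the model's serving signature.
# saved_model_cli is installed together with TensorFlow.
saved_model_cli show \
    --dir "${MODEL_PATH}" \
    --tag_set serve \
    --signature_def serving_default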

TF Serving supports model versioning. The model should be contained in a directory whose name is the version of the model. A model version is an integer, e.g., "1". Here is a typical directory layout for TF Serving:

  • /path/to/model
    • 1 : Version 1 of the model
    • 5 : Version 5 of the model
    • 6 : Version 6 of the model

For this example, we only need to put the model in a directory called "1".

mkdir -p /tmp/tf_serving_model
cp -R "${MODEL_PATH}" /tmp/tf_serving_model/1

Now, we can start TF Serving on the model.

./tensorflow_model_server \
    --rest_api_port=8502 \
    --model_name=my_model \
    --model_base_path=/tmp/tf_serving_model

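TF Serving runs in the foreground, so issue the following requests from another terminal. Once the server has started, you can check that the model is loaded with the model status endpoint of the REST API:

# Check that the model is loaded and ready to serve.
curl http://localhost:8502/v1/models/my_model

# Expected output (abridged): a "model_version_status" entry with the
# state "AVAILABLE".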
Finally, you can send a request to TF Serving using the REST API. Two formats are available: the predict+instances API and the predict+inputs API. Here is an example of each:

# Predictions with the predict+instances API.
curl http://localhost:8502/v1/models/my_model:predict -X POST \
    -d '{"instances": [{"age":39,"workclass":"State-gov","fnlwgt":77516,"education":"Bachelors","education_num":13,"marital_status":"Never-married","occupation":"Adm-clerical","relationship":"Not-in-family","race":"White","sex":"Male","capital_gain":2174,"capital_loss":0,"hours_per_week":40,"native_country":"United-States"}]}'
# Predictions with the predict+inputs API
curl http://localhost:8502/v1/models/my_model:predict -X POST \
    -d '{"inputs": {"age":[39],"workclass":["State-gov"],"fnlwgt":[77516],"education":["Bachelors"],"education_num":[13],"marital_status":["Never-married"],"occupation":["Adm-clerical"],"relationship":["Not-in-family"],"race":["White"],"sex":["Male"],"capital_gain":[2174],"capital_loss":[0],"hours_per_week":[40],"native_country":["United-States"]} }'