This document describes the URL conventions used when hosting all model types on
tfhub.dev - TFJS, TF Lite and TensorFlow models. It also
describes the HTTP(S)-based protocol implemented by the tensorflow_hub library
in order to load TensorFlow models from tfhub.dev and
compatible services into TensorFlow programs.
Its key feature is to use the same URL in code to load a model and in a browser to view the model documentation.
General URL conventions
tfhub.dev supports the following URL formats:
- TF Hub publishers follow https://tfhub.dev/&lt;publisher&gt;
- TF Hub collections follow https://tfhub.dev/&lt;publisher&gt;/collections/&lt;collection_name&gt;/&lt;version&gt;
- TF Hub models have the versioned URL
https://tfhub.dev/&lt;publisher&gt;/&lt;model_name&gt;/&lt;version&gt; and the unversioned URL
https://tfhub.dev/&lt;publisher&gt;/&lt;model_name&gt;, which resolves to the latest version of the model.
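The URL conventions above can be sketched as a small parser. The helper name below is illustrative, not part of any library:

```python
# Sketch: split a tfhub.dev model URL into publisher, model name, and optional
# version, following the conventions above. parse_model_handle is hypothetical.
from urllib.parse import urlparse

def parse_model_handle(url):
    parts = urlparse(url).path.strip("/").split("/")
    if len(parts) == 3 and parts[2].isdigit():
        publisher, name, version = parts
        return publisher, name, int(version)
    if len(parts) == 2:
        publisher, name = parts
        return publisher, name, None  # unversioned: resolves to the latest version
    raise ValueError(f"Not a model URL: {url}")

print(parse_model_handle("https://tfhub.dev/google/spice/2"))  # ('google', 'spice', 2)
print(parse_model_handle("https://tfhub.dev/google/spice"))    # ('google', 'spice', None)
```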
TF Hub models can be downloaded as compressed assets by appending URL parameters to the tfhub.dev model URL. However, the URL parameters required to achieve that depend on the model type:
- TensorFlow models (both SavedModel and TF1 Hub formats): append
?tf-hub-format=compressed to the TensorFlow model URL.
- TFJS models: append
?tfjs-format=compressed to the TFJS model URL to download the compressed model, or
/model.json?tfjs-format=file to read it from remote storage.
- TF Lite models: append
?lite-format=tflite to the TF Lite model URL.
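The type-to-parameter mapping above can be captured in a few lines. This is a sketch of URL construction only, not an API provided by any library:

```python
# Map each model type to the query parameter that triggers a compressed
# download, per the list above.
DOWNLOAD_PARAMS = {
    "tensorflow": "?tf-hub-format=compressed",
    "tfjs": "?tfjs-format=compressed",
    "tflite": "?lite-format=tflite",
}

def download_url(model_url, model_type):
    """Return the download URL for a given model URL and type."""
    return model_url + DOWNLOAD_PARAMS[model_type]

print(download_url("https://tfhub.dev/google/spice/2", "tensorflow"))
# https://tfhub.dev/google/spice/2?tf-hub-format=compressed
```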
| Type | Model URL | Download type | URL param | Download URL |
|------|-----------|---------------|-----------|--------------|
| TensorFlow (SavedModel, TF1 Hub format) | https://tfhub.dev/google/spice/2 | .tar.gz | ?tf-hub-format=compressed | https://tfhub.dev/google/spice/2?tf-hub-format=compressed |
Additionally, some models are also hosted in a format that can be read directly from remote storage without being downloaded. This is especially useful if there is no local storage available, such as running a TF.js model in the browser or loading a SavedModel on Colab. Be aware that reading models hosted remotely, without downloading them locally, may increase latency.
| Type | Model URL | Response type | URL param | Request URL |
|------|-----------|---------------|-----------|-------------|
| TensorFlow (SavedModel, TF1 Hub format) | https://tfhub.dev/google/spice/2 | String (path to the GCS folder where the uncompressed model is stored) | ?tf-hub-format=uncompressed | https://tfhub.dev/google/spice/2?tf-hub-format=uncompressed |
tensorflow_hub library protocol
This section describes how we host models on tfhub.dev for use with the tensorflow_hub library. If you want to host your own model repository to work with the tensorflow_hub library, your HTTP(S) distribution service should provide an implementation of this protocol.
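As a rough illustration of what such a service must do, here is a minimal request-routing sketch: a request for a model path with ?tf-hub-format=compressed returns the tar.gz bytes. The function, storage layout, and status handling here are hypothetical, not part of the protocol specification:

```python
# Minimal sketch: route a model request to its compressed archive.
# `archives` is a hypothetical store mapping model paths to tar.gz payloads.
from urllib.parse import urlparse, parse_qs

def route(request_url, archives):
    """Return (status, body) for a model request."""
    parsed = urlparse(request_url)
    query = parse_qs(parsed.query)
    if query.get("tf-hub-format") == ["compressed"]:
        data = archives.get(parsed.path)
        if data is not None:
            return 200, data  # serve the compressed model archive
        return 404, b""
    return 400, b""  # only the compressed form is sketched here

status, body = route("/google/spice/2?tf-hub-format=compressed",
                     {"/google/spice/2": b"<tar.gz bytes>"})
```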
Note that this section does not address hosting TF Lite and TFJS models, since
they are not downloaded via the tensorflow_hub library. For more information
on hosting those model types, see above.
Models are stored on tfhub.dev as compressed tar.gz files.
By default, the tensorflow_hub library automatically downloads the compressed
model. It can also be downloaded manually by appending
?tf-hub-format=compressed to the model URL, for example
https://tfhub.dev/google/spice/2?tf-hub-format=compressed.
The root of the archive is the root of the model directory and should contain a SavedModel, as in this example:
```shell
# Create a compressed model from a SavedModel directory.
$ tar -cz -f model.tar.gz --owner=0 --group=0 -C /tmp/export-model/ .

# Inspect the files inside a compressed model.
$ tar -tf model.tar.gz
./
./variables/
./variables/variables.data-00000-of-00001
./variables/variables.index
./assets/
./saved_model.pb
```
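The same packaging step can be done from Python with the standard tarfile module. The directory contents below are placeholders standing in for a real SavedModel export:

```python
# Pack a SavedModel directory so the archive root is the model root,
# mirroring `tar -cz ... -C <dir> .` above. File contents are placeholders.
import pathlib
import tarfile
import tempfile

export_dir = pathlib.Path(tempfile.mkdtemp())      # stands in for /tmp/export-model/
(export_dir / "variables").mkdir()
(export_dir / "saved_model.pb").write_bytes(b"")   # placeholder, not a real graph

archive = export_dir.parent / "model.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    # arcname="." puts the directory contents at the archive root.
    tar.add(export_dir, arcname=".")

with tarfile.open(archive) as tar:
    names = sorted(tar.getnames())
print(names)  # includes './saved_model.pb' and './variables'
```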
Tarballs for use with the legacy
TF1 Hub format will also contain a ./tfhub_module.pb file.
When one of the
tensorflow_hub library's model loading APIs is invoked
(hub.load, etc.), the
library downloads the model, uncompresses it, and caches it locally. The
tensorflow_hub library expects model URLs to be versioned and the
model content of a given version to be immutable, so that it can be cached
indefinitely. Learn more about caching models.
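For instance, the cache location can be redirected via the TFHUB_CACHE_DIR environment variable; the path below is illustrative:

```python
import os

# Point the tensorflow_hub download cache at a persistent directory
# (path here is illustrative) so versioned models are fetched only once.
os.environ["TFHUB_CACHE_DIR"] = "/tmp/tfhub_modules"

# import tensorflow_hub as hub
# model = hub.load("https://tfhub.dev/google/spice/2")  # cached after first download
```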
When the environment variable
TFHUB_MODEL_LOAD_FORMAT or the command-line flag
--tfhub_model_load_format is set to
UNCOMPRESSED, the model is read directly
from remote storage (GCS) instead of being downloaded and uncompressed locally.
When this behavior is enabled, the library appends ?tf-hub-format=uncompressed
to the model URL. That request returns the path to the folder on GCS that
contains the uncompressed model files. As an example, requesting
https://tfhub.dev/google/spice/2?tf-hub-format=uncompressed returns
gs://tfhub-modules/google/spice/2/uncompressed in the body of the 303
response. The library then reads the model from that GCS destination.
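The client side of this handshake can be sketched as follows. The fetch callable and fake responder below are hypothetical stand-ins for a real HTTP client and for tfhub.dev:

```python
# Sketch of the uncompressed-load handshake: request
# <model_url>?tf-hub-format=uncompressed and read the GCS folder path
# from the body of the 303 response.
def request_uncompressed_path(model_url, fetch):
    """`fetch` performs the HTTP request and returns (status, body)."""
    status, body = fetch(model_url + "?tf-hub-format=uncompressed")
    if status != 303:
        raise RuntimeError(f"unexpected status {status}")
    return body  # e.g. a gs:// folder holding the uncompressed model

def fake_fetch(url):
    # Hypothetical stand-in for tfhub.dev, echoing the example above.
    assert url.endswith("?tf-hub-format=uncompressed")
    return 303, "gs://tfhub-modules/google/spice/2/uncompressed"

print(request_uncompressed_path("https://tfhub.dev/google/spice/2", fake_fetch))
# gs://tfhub-modules/google/spice/2/uncompressed
```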