This document describes the URL conventions used when hosting all model types on
tfhub.dev - TFJS, TF Lite and TensorFlow models. It also
describes the HTTP(S)-based protocol implemented by the
tensorflow_hub library in order to load TensorFlow models from
tfhub.dev and compatible services into TensorFlow programs.
Its key feature is to use the same URL in code to load a model and in a browser to view the model documentation.
General URL conventions
tfhub.dev supports the following URL formats:
- TF Hub publishers follow https://tfhub.dev/<publisher>
- TF Hub collections follow https://tfhub.dev/<publisher>/collections/<collection_name>
- TF Hub models have the versioned URL
https://tfhub.dev/<publisher>/<model_name>/<version> and the unversioned URL
https://tfhub.dev/<publisher>/<model_name> that resolves to the latest version of the model.
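As a concrete illustration, the spice model used in the example table below can be addressed either way; a minimal sketch:

```shell
# Versioned URL: pins version 2 of the model.
VERSIONED="https://tfhub.dev/google/spice/2"
# Unversioned URL: resolves to the latest published version.
UNVERSIONED="https://tfhub.dev/google/spice"
echo "$VERSIONED"
echo "$UNVERSIONED"
```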
TF Hub models can be downloaded as compressed assets by appending URL parameters to the tfhub.dev model URL. However, the URL parameters required to achieve that depend on the model type:
- TensorFlow models (both SavedModel and TF1 Hub formats): append
?tf-hub-format=compressed to the TensorFlow model URL.
- TFJS models: append
?tfjs-format=compressed to the TFJS model URL to download the compressed model, or
/model.json?tfjs-format=file to read it from remote storage.
- TF Lite models: append
?lite-format=tflite to the TF Lite model URL.
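The parameter conventions above can be sketched as follows. Only the spice URL is a real model from the example table below; the TF.js and TF Lite model paths are hypothetical placeholders:

```shell
# TensorFlow model (real example): compressed SavedModel tarball.
TF_URL="https://tfhub.dev/google/spice/2?tf-hub-format=compressed"

# TF.js model (hypothetical path): compressed archive, or the
# model.json read directly from remote storage.
TFJS_BASE="https://tfhub.dev/some-publisher/some-tfjs-model/1"
TFJS_COMPRESSED="${TFJS_BASE}?tfjs-format=compressed"
TFJS_FILE="${TFJS_BASE}/model.json?tfjs-format=file"

# TF Lite model (hypothetical path): the .tflite file itself.
LITE_URL="https://tfhub.dev/some-publisher/some-lite-model/1?lite-format=tflite"

echo "$TF_URL"
echo "$TFJS_COMPRESSED"
echo "$TFJS_FILE"
echo "$LITE_URL"
```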
| Type | Model URL | Download type | URL param | Download URL |
| --- | --- | --- | --- | --- |
| TensorFlow (SavedModel, TF1 Hub format) | https://tfhub.dev/google/spice/2 | .tar.gz | ?tf-hub-format=compressed | https://tfhub.dev/google/spice/2?tf-hub-format=compressed |
Additionally, some models are also hosted in a format that can be read directly from remote storage without being downloaded. This is especially useful if no local storage is available, such as when running a TF.js model in the browser. Be aware that reading models hosted remotely instead of downloading them locally may increase latency.
| Type | Model URL | File type | URL param | File URL |
| --- | --- | --- | --- | --- |
tensorflow_hub library protocol
This section describes how we host models on tfhub.dev for use with the tensorflow_hub library. If you want to host your own model repository to work with the tensorflow_hub library, your HTTP(S) distribution service should provide an implementation of this protocol.
Note that this section does not address hosting TF Lite and TFJS models since
they are not downloaded via the
tensorflow_hub library. For more information
on hosting these model types, please check above.
Models are stored on tfhub.dev as compressed tar.gz files.
The tensorflow_hub library automatically downloads the compressed model. They
can also be manually downloaded by appending
?tf-hub-format=compressed to the model URL.
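For example, the spice model from the table above could be fetched manually; a sketch (the wget command is shown but not run here, since it needs network access):

```shell
# Build the compressed-download URL for the spice model.
MODEL_URL="https://tfhub.dev/google/spice/2"
DOWNLOAD_URL="${MODEL_URL}?tf-hub-format=compressed"
echo "$DOWNLOAD_URL"

# Fetch and unpack it (requires network access):
#   wget -O spice_2.tar.gz "$DOWNLOAD_URL"
#   mkdir spice_2 && tar -xz -f spice_2.tar.gz -C spice_2
```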
The root of the archive is the root of the model directory and should contain a SavedModel, as in this example:
```shell
# Create a compressed model from a SavedModel directory.
$ tar -cz -f model.tar.gz --owner=0 --group=0 -C /tmp/export-model/ .

# Inspect files inside a compressed model
$ tar -tf model.tar.gz
./
./variables/
./variables/variables.data-00000-of-00001
./variables/variables.index
./assets/
./saved_model.pb
```
Tarballs for use with the legacy
TF1 Hub format will also
contain a ./tfhub_module.pb file.
When one of the
tensorflow_hub library model loading APIs is invoked
(hub.load, etc.), the
library downloads the model, uncompresses it, and caches it locally. The
tensorflow_hub library expects that model URLs are versioned and that the
model content of a given version is immutable, so that it can be cached
indefinitely. Learn more about caching models.
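The cache location can be redirected via the TFHUB_CACHE_DIR environment variable supported by the tensorflow_hub library; a minimal sketch:

```shell
# Direct the tensorflow_hub library to cache downloaded, uncompressed
# models under a fixed directory instead of the default temp location.
export TFHUB_CACHE_DIR="/tmp/tfhub_modules"
echo "$TFHUB_CACHE_DIR"
```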