Feature vectors of images with MobileNet V1 (depth multiplier 0.75) trained on ImageNet (ILSVRC-2012-CLS).
MobileNet V1 is a family of neural network architectures for efficient on-device image classification, originally published by
- Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam: "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", 2017.
Mobilenets come in various sizes controlled by a multiplier for the
depth (number of features) in the convolutional layers. They can also be
trained for various sizes of input images to control inference speed.
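The effect of the depth multiplier on the feature-vector size is simple arithmetic; as a sketch (1024 is the width of MobileNet V1's final convolutional layer at multiplier 1.0, per the MobileNets paper):

```python
# The depth multiplier scales the number of features (channels) in every
# convolutional layer. At multiplier 1.0 the final layer of MobileNet V1
# has 1024 features; a multiplier of 0.75 shrinks it proportionally.
base_features = 1024
depth_multiplier = 0.75
num_features = int(base_features * depth_multiplier)
print(num_features)  # 768, the feature-vector size of this module
```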
This TF-Hub module uses the TF-Slim implementation of
mobilenet_v1_075, instrumented for quantization,
with a depth multiplier of 0.75 and an input size of
160x160 pixels.
The module contains a trained instance of the network, packaged to get
feature vectors from images.
If you want the full model including the classification it was originally
trained for, use the corresponding classification module (the same module
path with "classification" in place of "feature_vector").
This module is meant for use in models whose weights will be quantized to
uint8 by TensorFlow Lite
for deployment to mobile devices.
The trained weights of this module are shipped in floating point,
but its graph has been augmented by
tf.contrib.quantize with extra ops
that simulate the effect of quantization already during training,
so that the model can adjust to it.
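As a rough illustration of what those extra ops do, here is a simplified NumPy sketch of the quantize-dequantize rounding they introduce. (The real fake-quantization ops track value ranges with moving averages during training; this sketch just uses the tensor's own min/max, and is not the tf.contrib.quantize implementation.)

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Round x onto a uint8-style grid and map it back to float,
    mimicking the rounding error that quantized inference will incur."""
    qmax = 2 ** num_bits - 1  # 255 for uint8
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.round((x - lo) / scale)  # integer grid, 0..qmax
    return (q * scale + lo).astype(np.float32)  # back to float, with rounding error

weights = np.linspace(-1.0, 1.0, 7, dtype=np.float32)
approx = fake_quantize(weights)
# Each value moves by at most half a quantization step (scale / 2),
# so training "sees" the precision loss and can adjust to it.
```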
The checkpoint exported into this module was downloaded from the
collection of MobileNet pre-trained models.
Its weights were originally obtained by training on the ILSVRC-2012-CLS
dataset for image classification ("Imagenet"), with simulated quantization.
This module implements the common signature for computing image feature vectors. It can be used like:

```python
module = hub.Module("https://tfhub.dev/google/imagenet/mobilenet_v1_075_160/quantops/feature_vector/1")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
features = module(images)  # Features with shape [batch_size, num_features].
```
...or using the signature name
image_feature_vector. The output for each image
in the batch is a feature vector of size
num_features = 768.
For this module, the size of the input image is fixed to
height x width = 160 x 160 pixels.
Images are expected to have color values in the range [0,1],
following the common image input conventions.
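A minimal preprocessing sketch, assuming an arbitrary uint8 RGB image: rescale color values from [0, 255] into the expected [0, 1] range and resize to the fixed 160x160 input. (Nearest-neighbor sampling in plain NumPy is used here for brevity; it is not the module's own preprocessing, and in practice you would resize with your image pipeline of choice.)

```python
import numpy as np

def preprocess(image_uint8, height=160, width=160):
    """Rescale a uint8 RGB image to float32 in [0, 1] and resize it
    to height x width with nearest-neighbor sampling."""
    image = image_uint8.astype(np.float32) / 255.0  # color values in [0, 1]
    rows = np.arange(height) * image.shape[0] // height
    cols = np.arange(width) * image.shape[1] // width
    return image[rows][:, cols]  # shape [height, width, 3]

raw = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
batch = preprocess(raw)[np.newaxis, ...]  # shape [1, 160, 160, 3]
```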
The current version of this module only provides an inference graph and cannot be fine-tuned.