Imagenet (ILSVRC-2012-CLS) classification with MobileNet V2 (depth multiplier 0.75).
MobileNet V2 is a family of neural network architectures for efficient on-device image classification and related tasks, originally published by
- Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen: "Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation", 2018.
MobileNets come in various sizes, controlled by a multiplier for the depth (number of features) in the convolutional layers. They can also be trained for various sizes of input images, to control inference speed.
This TF-Hub module uses the TF-Slim implementation of MobileNet V2 with a depth multiplier of 0.75 and an input size of 224x224 pixels.
This implementation of Mobilenet V2 rounds feature depths to multiples of 8
(an optimization not described in the paper).
Depth multipliers less than 1.0 are not applied to the last convolutional layer
(from which the module takes the image feature vector).
The module contains a trained instance of the network, packaged to do the image classification that the network was trained on. If you merely want to transform images into feature vectors, use module https://tfhub.dev/google/imagenet/mobilenet_v2_075_224/feature_vector/2 instead, and save the space occupied by the classification layer.
The checkpoint exported into this module was obtained from the collection of MobileNet V2 pre-trained models.
Its weights were originally obtained by training on the ILSVRC-2012-CLS
dataset for image classification ("Imagenet").
This module implements the common signature for image classification. It can be used like:
```python
module = hub.Module("https://tfhub.dev/google/imagenet/mobilenet_v2_075_224/classification/2")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
logits = module(images)  # Logits with shape [batch_size, num_classes].
```
...or using the signature name image_classification. The indices into logits are the num_classes = 1001 classes of the classification from the original training (see above).
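The logits returned by the module are unnormalized scores. A minimal NumPy sketch of turning them into class probabilities and top-k predictions (the helper name `top_predictions` and the use of random logits in place of real module output are illustrative assumptions):

```python
import numpy as np

def top_predictions(logits, k=5):
    """Convert a batch of logits to probabilities and return the
    top-k class indices per image, best first.

    `logits` is assumed to be a NumPy array of shape
    [batch_size, 1001], as produced by the module above.
    """
    # Numerically stable softmax over the class dimension.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=-1, keepdims=True)
    # Indices of the k highest-probability classes, best first.
    top_k = np.argsort(-probs, axis=-1)[:, :k]
    return probs, top_k

# Random logits stand in for real module output here.
fake_logits = np.random.randn(2, 1001)
probs, top_k = top_predictions(fake_logits)
```

Note that index 0 of the 1001 logits does not necessarily correspond to ImageNet class 1000; consult the module's label map for the index-to-class mapping.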
This module can also be used to compute image feature vectors, using the signature name image_feature_vector.
For this module, the size of the input image is fixed to height x width = 224 x 224 pixels. The input images are expected to have color values in the range [0,1], following the common image input conventions.
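Decoded 8-bit images typically have values in 0..255, so they need rescaling before being fed to the module. A minimal sketch (the helper name `to_model_range` is an assumption, not part of the module's API):

```python
import numpy as np

def to_model_range(images_uint8):
    """Rescale 8-bit RGB images to the [0, 1] float range the module expects.

    `images_uint8` is assumed to be a NumPy array of shape
    [batch_size, 224, 224, 3] with values in 0..255, e.g. from a
    decoded JPEG.
    """
    return images_uint8.astype(np.float32) / 255.0

batch = np.zeros((1, 224, 224, 3), dtype=np.uint8)
images = to_model_range(batch)  # float32 values in [0, 1]
```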
In principle, consumers of this module can fine-tune it. However, fine-tuning through a large classification layer might be prone to overfitting.
Fine-tuning requires importing the graph version with tag set {"train"} in order to operate batch normalization in training mode.
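A sketch of what fine-tuning might look like under these constraints, assuming a TensorFlow 1.x runtime with tensorflow_hub installed and network access to download the module (the loss and optimizer choices are illustrative, not prescribed by the module):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Import the trainable graph variant; the "train" tag runs batch
# normalization in training mode, as described above.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_075_224/classification/2",
    trainable=True,
    tags={"train"})

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
labels = tf.placeholder(tf.int64, [None])
logits = module(images)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# Batch-norm moving averages are updated through the UPDATE_OPS
# collection; run them together with each optimizer step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)
```

Coupling the train op to UPDATE_OPS is what the version 2 fix below makes possible; without it, the batch-norm statistics would not be updated during fine-tuning.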
- Version 1: Initial release.
- Version 2: Fixed broken UPDATE_OPS for fine-tuning, GitHub issue 86.