Module google/imagenet/mobilenet_v1_100_224/classification/1

Imagenet (ILSVRC-2012-CLS) classification with MobileNet V1 (depth multiplier 1.00).

Module URL:


MobileNet V1 is a family of neural network architectures for efficient on-device image classification, originally published by Howard et al.: "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", 2017.

MobileNets come in various sizes, controlled by a multiplier for the depth (number of features) in the convolutional layers. They can also be trained for various input image sizes, to control inference speed. This TF-Hub module uses the TF-Slim implementation of mobilenet_v1 with a depth multiplier of 1.0 and an input size of 224x224 pixels.
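As a rough illustration of what the depth multiplier does, the sketch below scales per-layer channel counts. The base counts follow the standard MobileNet V1 widths; the floor of 8 mirrors TF-Slim's default min_depth, and the exact rounding here is an assumption of this sketch, not a quote of the implementation:

```python
# Standard MobileNet V1 per-layer channel widths (pointwise conv outputs).
BASE_CHANNELS = [32, 64, 128, 128, 256, 256, 512, 1024]

def scaled_channels(depth_multiplier, min_depth=8):
    # Scale every layer's channel count by the depth multiplier,
    # never dropping below min_depth (assumed floor, as in TF-Slim's default).
    return [max(min_depth, int(c * depth_multiplier)) for c in BASE_CHANNELS]
```

With depth_multiplier=1.0 (this module) the counts are unchanged; smaller multipliers such as 0.25 shrink every layer, which is what makes the smaller variants cheaper to run.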

The module contains a trained instance of the network, packaged to do the image classification that the network was trained on. If you merely want to transform images into feature vectors, use module google/imagenet/mobilenet_v1_100_224/feature_vector/1 instead, and save the space occupied by the classification layer.


The checkpoint exported into this module was mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224.ckpt downloaded from MobileNet pre-trained models. Its weights were originally obtained by training on the ILSVRC-2012-CLS dataset for image classification ("Imagenet").


This module implements the common signature for image classification. It can be used like

import tensorflow_hub as hub

module = hub.Module("")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
logits = module(images)  # Logits with shape [batch_size, num_classes].

...or using the signature name image_classification. The indices into logits are the num_classes = 1001 classes of the classification from the original training (see above).
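To make the 1001-way output concrete, here is a small numpy sketch that turns a batch of logits into probabilities and predicted class indices. The random logits are stand-ins for illustration, not actual module output:

```python
import numpy as np

# Stand-in logits for a batch of 2 images over the 1001 classes.
logits = np.random.randn(2, 1001)

# Numerically stable softmax over the class dimension.
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)

predicted = probs.argmax(axis=-1)  # index into the 1001-class label map
```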

This module can also be used to compute image feature vectors, using the signature name image_feature_vector.
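A minimal sketch of calling that alternative signature, assuming the TF1-style hub.Module call convention (the helper name is hypothetical):

```python
def feature_vectors(module, images):
    # Hypothetical helper: invoke the module's image_feature_vector
    # signature instead of the default classification output.
    return module(images, signature="image_feature_vector")
```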

For this module, the size of the input image is fixed to height x width = 224 x 224 pixels. The input images are expected to have color values in the range [0,1], following the common image input conventions.
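The input convention above can be sketched as follows. This is a minimal numpy example; the helper name is an assumption, and a real pipeline would typically decode and resize images with TensorFlow ops instead:

```python
import numpy as np

def to_module_input(batch_uint8):
    # Hypothetical helper: scale 8-bit color values into the expected [0, 1] range.
    images = batch_uint8.astype(np.float32) / 255.0
    # The input size for this module is fixed to 224 x 224 RGB.
    assert images.shape[1:] == (224, 224, 3)
    return images
```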


In principle, consumers of this module can fine-tune it. However, fine-tuning through a large classification layer might be prone to overfitting.

Fine-tuning requires importing the graph version with tag set {"train"} in order to operate batch normalization in training mode.
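A hedged sketch of what that import might look like with the TF1-style hub.Module API. The wrapper function and its argument names are assumptions of this sketch, and the module URL must be supplied by the caller:

```python
def build_finetune_graph(module_url, images):
    # Sketch: load the module with trainable weights and the {"train"} tag set,
    # so batch normalization runs in training mode during fine-tuning.
    import tensorflow_hub as hub
    module = hub.Module(module_url, trainable=True, tags={"train"})
    return module(images)  # logits with shape [batch_size, 1001]
```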