Module google/imagenet/mobilenet_v2_035_224/feature_vector/1

Feature vectors of images with MobileNet V2 (depth multiplier 0.35) trained on ImageNet (ILSVRC-2012-CLS).

Module URL:

MobileNet V2 is a family of neural network architectures for efficient on-device image classification and related tasks, originally published by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen: "Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation", 2018.

MobileNets come in various sizes, controlled by a multiplier for the depth (number of features) in the convolutional layers. They can also be trained for various sizes of input images to control inference speed.

This TF-Hub module uses the TF-Slim implementation of mobilenet_v2 with a depth multiplier of 0.35 and an input size of 224x224 pixels. This implementation of Mobilenet V2 rounds feature depths to multiples of 8 (an optimization not described in the paper). Depth multipliers less than 1.0 are not applied to the last convolutional layer (from which the module takes the image feature vector).
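The rounding of feature depths to multiples of 8 can be sketched as follows. This is a plain-Python illustration of the idea, not the released TF-Slim code; the helper name `make_divisible` and the exact thresholds are assumptions modeled on that implementation:

```python
def make_divisible(value, divisor=8, min_value=None):
    """Round a channel count to the nearest multiple of `divisor`,
    never going below `min_value`."""
    if min_value is None:
        min_value = divisor
    rounded = max(min_value, int(value + divisor / 2) // divisor * divisor)
    # If rounding removed more than ~10% of the channels, round up instead.
    if rounded < 0.9 * value:
        rounded += divisor
    return rounded

# With a 0.35 depth multiplier, a nominal depth of 32 becomes 11.2,
# which rounds up to 16 (rounding down to 8 would drop more than 10%).
print(make_divisible(32 * 0.35))  # -> 16
```

Note that per the paragraph above, the final 1280-feature layer is exempt from multipliers below 1.0.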

The module contains a trained instance of the network, packaged to get feature vectors from images. If you want the full model including the classification it was originally trained for, use module google/imagenet/mobilenet_v2_035_224/classification/1 instead.


The checkpoint exported into this module was mobilenet_v2_0.35_224/mobilenet_v2_0.35_224.ckpt downloaded from MobileNet V2 pre-trained models. Its weights were originally obtained by training on the ILSVRC-2012-CLS dataset for image classification ("Imagenet").


This module implements the common signature for computing image feature vectors. It can be used like

module = hub.Module("")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
features = module(images)  # Features with shape [batch_size, num_features].

...or using the signature name image_feature_vector. The output for each image in the batch is a feature vector of size num_features = 1280.
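Invoking the signature by name uses the dict-based calling convention of the TF1 Hub API; a sketch, continuing the snippet above (the feature vector of the common signature is returned under the "default" key):

```python
# `module` and `images` as in the snippet above.
outputs = module(dict(images=images),
                 signature="image_feature_vector", as_dict=True)
features = outputs["default"]  # Shape [batch_size, 1280].
```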

For this module, the size of the input image is fixed to height x width = 224 x 224 pixels. The input images are expected to have color values in the range [0,1], following the common image input conventions.
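A minimal preprocessing sketch with NumPy, assuming the image has already been decoded and resized to 224x224 by an image library of your choice; the helper name is hypothetical:

```python
import numpy as np

def to_model_input(image_uint8):
    """Convert an HxWx3 uint8 image to float32 color values in [0, 1]."""
    image = image_uint8.astype(np.float32) / 255.0
    return image[np.newaxis, ...]  # Add the batch dimension.

batch = to_model_input(np.zeros((224, 224, 3), dtype=np.uint8))
print(batch.shape)  # (1, 224, 224, 3)
```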


Consumers of this module can fine-tune it. This requires importing the graph version with tag set {"train"} in order to operate batch normalization in training mode.
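As a sketch, fine-tuning with the TF1 Hub API would look like the following; the empty module handle mirrors the snippet above, and both `trainable=True` and `tags={"train"}` are needed so that gradients flow into the weights and batch normalization runs in training mode:

```python
import tensorflow_hub as hub

module = hub.Module("", trainable=True, tags={"train"})
features = module(images)  # `images` as in the snippet above.
```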