Module google/imagenet/mobilenet_v1_075_160/feature_vector/1

Feature vectors of images with MobileNet V1 (depth multiplier 0.75) trained on ImageNet (ILSVRC-2012-CLS).

Module URL: https://tfhub.dev/google/imagenet/mobilenet_v1_075_160/feature_vector/1

Overview

MobileNet V1 is a family of neural network architectures for efficient on-device image classification, originally published by

Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam: "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", 2017.

MobileNets come in various sizes controlled by a multiplier for the depth (number of features) in the convolutional layers. They can also be trained for various sizes of input images to control inference speed. This TF-Hub module uses the TF-Slim implementation of mobilenet_v1 with a depth multiplier of 0.75 and an input size of 160x160 pixels.

The module contains a trained instance of the network, packaged to get feature vectors from images. If you want the full model including the classification it was originally trained for, use module google/imagenet/mobilenet_v1_075_160/classification/1 instead.

Training

The checkpoint exported into this module was mobilenet_v1_2018_02_22/mobilenet_v1_0.75_160/mobilenet_v1_0.75_160.ckpt downloaded from MobileNet pre-trained models. Its weights were originally obtained by training on the ILSVRC-2012-CLS dataset for image classification ("Imagenet").

Usage

This module implements the common signature for computing image feature vectors. It can be used like

import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/google/imagenet/mobilenet_v1_075_160/feature_vector/1")
height, width = hub.get_expected_image_size(module)  # 160, 160 for this module.
images = ...  # A batch of images with shape [batch_size, height, width, 3].
features = module(images)  # Features with shape [batch_size, num_features].

...or using the signature name image_feature_vector. The output for each image in the batch is a feature vector of size num_features = 768.
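Calling the signature explicitly looks like this (a minimal sketch; with as_dict=True the module returns its full output dictionary, whose "default" entry is the feature vector):

outputs = module(dict(images=images), signature="image_feature_vector", as_dict=True)
features = outputs["default"]  # Same [batch_size, 768] tensor as module(images) above.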

For this module, the size of the input image is fixed to height x width = 160 x 160 pixels. The input images are expected to have color values in the range [0,1], following the common image input conventions.
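One way to produce such inputs is sketched below, assuming a JPEG file at a hypothetical path image_path; tf.image.convert_image_dtype rescales the decoded uint8 values into [0,1]:

import tensorflow as tf

image = tf.image.decode_jpeg(tf.read_file(image_path), channels=3)
image = tf.image.convert_image_dtype(image, tf.float32)  # Rescales values to [0,1].
image = tf.image.resize_images(image, [height, width])   # 160x160 for this module.
images = tf.expand_dims(image, 0)                        # Add a batch dimension.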

Fine-tuning

Consumers of this module can fine-tune it. This requires importing the graph version with tag set {"train"} in order to operate batch normalization in training mode.
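A minimal sketch of that setup (assuming the images tensor from the Usage section; trainable=True makes the module's variables trainable, and tags={"train"} selects the training-mode graph):

module = hub.Module("https://tfhub.dev/google/imagenet/mobilenet_v1_075_160/feature_vector/1",
                    trainable=True, tags={"train"})
features = module(images)  # Gradients can now flow into the module's weights.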