Module google/imagenet/nasnet_mobile/feature_vector/1

Feature vectors of images with NASNet-A (mobile) trained on ImageNet (ILSVRC-2012-CLS).

Module URL: https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/1

Overview

NASNet-A is a family of convolutional neural networks for image classification. The architecture of its convolutional cells (or layers) has been found by Neural Architecture Search (NAS). NAS and NASNet were originally published by Barret Zoph and Quoc V. Le ("Neural Architecture Search with Reinforcement Learning", 2016) and by Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le ("Learning Transferable Architectures for Scalable Image Recognition", 2017).

NASNets come in various sizes. This TF-Hub module uses the TF-Slim implementation nasnet_mobile of NASNet-A for ImageNet, which uses 12 Normal Cells, starting with 44 convolutional filters (after the "ImageNet stem"). It has an input size of 224x224 pixels.
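
For orientation, here is a minimal sketch of instantiating that same architecture directly from the TF-Slim implementation. It assumes the tensorflow/models research/slim code is on the Python path; the exact import path may differ in your checkout.

import tensorflow as tf
import tensorflow.contrib.slim as slim
from nets.nasnet import nasnet  # from tensorflow/models, research/slim

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(nasnet.nasnet_mobile_arg_scope()):
  # build_nasnet_mobile constructs the 12-cell mobile variant described above.
  logits, end_points = nasnet.build_nasnet_mobile(
      images, num_classes=1001, is_training=False)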

The module contains a trained instance of the network, packaged to get feature vectors from images. If you want the full model including the classification it was originally trained for, use module google/imagenet/nasnet_mobile/classification/1 instead.
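
For reference, that classification module follows the common image classification signature; a minimal sketch (the output size is assumed to be the usual 1001 ImageNet classes, including a background class):

module = hub.Module("https://tfhub.dev/google/imagenet/nasnet_mobile/classification/1")
logits = module(images)  # Logits with shape [batch_size, num_classes].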

Training

The checkpoint exported into this module was nasnet-a_mobile_04_10_2017/model.ckpt, downloaded from the NASNet pre-trained models. Its weights were originally obtained by training on the ILSVRC-2012-CLS dataset for image classification ("ImageNet").

Usage

This module implements the common signature for computing image feature vectors. It can be used like

import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/1")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
features = module(images)  # Features with shape [batch_size, num_features].

...or using the signature name image_feature_vector. The output for each image in the batch is a feature vector of size num_features = 1056.
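
Continuing the example above, the same computation can be requested by naming the signature explicitly; the input key images and output key default below follow the common image feature vector signature:

features = module(dict(images=images), signature="image_feature_vector",
                  as_dict=True)["default"]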

For this module, the size of the input image is fixed to height x width = 224 x 224 pixels. The input images are expected to have color values in the range [0,1], following the common image input conventions.
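
As a minimal sketch of one way to meet those conventions (the helper below is illustrative, not part of the module), encoded JPEG files can be decoded, converted to floats in [0,1], and resized to the module's fixed input size:

import tensorflow as tf

def load_image(path, height, width):
  image = tf.image.decode_jpeg(tf.read_file(path), channels=3)
  # convert_image_dtype scales uint8 values to floats in [0, 1].
  image = tf.image.convert_image_dtype(image, tf.float32)
  return tf.image.resize_images(image, [height, width])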

Fine-tuning

Consumers of this module can fine-tune it.

Fine-tuning requires importing the graph version with the tag set {"train"} in order to operate batch normalization and dropout in training mode. The dropout probability of NASNet's path dropout is not rescaled to the number of fine-tuning steps; it remains at the final (maximal) value from the initial training.
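
As a sketch, a consumer might import the module for fine-tuning like this; the trainable and tags arguments of hub.Module select the trainable graph version described above:

import tensorflow_hub as hub

module = hub.Module(
    "https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/1",
    trainable=True, tags={"train"})
features = module(images)  # The module's variables now receive gradient updates.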