Imagenet (ILSVRC-2012-CLS) classification with PNASNet-5 (large).
PNASNet-5 is a family of convolutional neural networks for image classification. The architecture of its convolutional cells (or layers) has been found by Progressive Neural Architecture Search. PNASNet reuses several techniques from its precursor NASNet, including regularization by path dropout. PNASNet and NASNet were originally published by:
- Chenxi Liu, Barret Zoph, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, Kevin Murphy: "Progressive Neural Architecture Search", 2017.
- Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le: "Learning Transferable Architectures for Scalable Image Recognition", 2017.
PNASNets come in various sizes. This TF-Hub module uses the TF-Slim
implementation pnasnet_large of PNASNet-5 for ImageNet,
which uses 12 cells (plus 2 for the "ImageNet stem"),
starting with 216 convolutional filters (after the stem).
It has an input size of 331x331 pixels.
The module contains a trained instance of the network, packaged to do the
image classification that the network was trained on. If you merely want to
transform images into feature vectors, use the companion module
https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/1
instead, and save the space occupied by the classification layer.
The checkpoint exported into this module was obtained from
TF-Slim's pre-trained models.
Its weights were originally obtained by training on the ILSVRC-2012-CLS
dataset for image classification ("ImageNet").
This module implements the common signature for image classification. It can be used like:

```python
module = hub.Module("https://tfhub.dev/google/imagenet/pnasnet_large/classification/1")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
logits = module(images)  # Logits with shape [batch_size, num_classes].
```
...or using the signature name
image_classification. The indices into logits are the
num_classes = 1001 classes of the classification from
the original training (see above).
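As a minimal illustration of how the logits can be turned into class probabilities and a top-1 prediction (shown here with NumPy and random stand-in logits rather than TensorFlow, to keep the sketch self-contained; only the [batch_size, 1001] shape is taken from the signature above):

```python
import numpy as np

def logits_to_predictions(logits):
    """Convert [batch_size, 1001] logits to probabilities and top-1 class indices."""
    # Numerically stable softmax over the class axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=-1, keepdims=True)
    return probs, probs.argmax(axis=-1)

# Stand-in logits for a batch of 2 images (a real run would use module(images)).
logits = np.random.default_rng(0).standard_normal((2, 1001))
probs, top_class = logits_to_predictions(logits)
```

Each row of `probs` sums to 1, and `top_class` holds one index in [0, 1001) per image.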
This module can also be used to compute image feature vectors,
using the signature name image_feature_vector.
For this module, the size of the input image is fixed to
height x width = 331 x 331 pixels.
The input images are expected to have color values in the range [0,1],
following the common image input conventions.
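A minimal sketch of that preprocessing contract, assuming the images have already been decoded and resized to 331x331 upstream (e.g. with tf.image.resize); the helper name is illustrative, not part of the module:

```python
import numpy as np

def preprocess(images_uint8):
    """Scale uint8 RGB images to float32 color values in [0, 1], as the module expects."""
    images = images_uint8.astype(np.float32) / 255.0
    # The module's input size is fixed: 331x331 RGB.
    assert images.shape[1:] == (331, 331, 3), "module expects 331x331 RGB inputs"
    return images

# A stand-in batch of 4 already-resized uint8 images.
batch = (np.random.default_rng(0).random((4, 331, 331, 3)) * 255).astype(np.uint8)
inputs = preprocess(batch)
```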
In principle, consumers of this module can fine-tune it. However, fine-tuning through a large classification layer might be prone to overfitting.
Fine-tuning requires importing the graph version with tag set {"train"}
in order to operate batch normalization and dropout in training mode.
The dropout probability in NASNet path dropout is not scaled with
the training steps of fine-tuning and remains at the final (maximal) value
from the initial training.
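To make that last point concrete, here is a hedged NumPy sketch of NASNet-style scheduled path dropout: during the original training the per-path keep probability is annealed linearly from 1.0 down to a final value, and fine-tuning from this module effectively continues at that final (maximal-dropout) value rather than restarting the schedule. The function names, the linear schedule, and the 0.6 final keep probability are illustrative assumptions, not the module's actual internals:

```python
import numpy as np

def drop_path_keep_prob(step, total_steps, final_keep_prob=0.6):
    """Linearly anneal the keep probability from 1.0 to final_keep_prob.
    The 0.6 value and the linear schedule are illustrative assumptions."""
    frac = min(step / total_steps, 1.0)
    return 1.0 - frac * (1.0 - final_keep_prob)

def drop_path(x, keep_prob, rng):
    """Zero out an entire path per example and rescale the surviving paths."""
    mask = rng.random((x.shape[0],) + (1,) * (x.ndim - 1)) < keep_prob
    return x * mask / keep_prob

# Fine-tuning does NOT rescale the schedule: it keeps the final value.
keep_prob = drop_path_keep_prob(step=250_000, total_steps=250_000)
rng = np.random.default_rng(0)
activations = np.ones((8, 4))
dropped = drop_path(activations, keep_prob, rng)
```

Each example's path is either zeroed entirely or scaled by 1/keep_prob, keeping the expected activation unchanged.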