NASNet-A is a family of convolutional neural networks for image classification. The architecture of its convolutional cells (or layers) has been found by Neural Architecture Search (NAS). NAS and NASNet were originally published by
- Barret Zoph, Quoc V. Le: "Neural Architecture Search with Reinforcement Learning", 2017.
- Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le: "Learning Transferable Architectures for Scalable Image Recognition", 2017.
NASNets come in various sizes. This TF-Hub module uses the TF-Slim
implementation nasnet_large of NASNet-A for ImageNet,
which uses 18 Normal Cells, starting with
168 convolutional filters (after the "ImageNet stem").
It has an input size of 331x331 pixels.
The module contains a trained instance of the network, packaged to get
feature vectors from images.
If you want the full model including the classification it was originally
trained for, use module
https://tfhub.dev/google/imagenet/nasnet_large/classification/1 instead.
The checkpoint exported into this module was downloaded from
NASNet's pre-trained models.
Its weights were originally obtained by training on the ILSVRC-2012-CLS
dataset for image classification ("ImageNet").
This module implements the common signature for computing image feature vectors. It can be used like
```python
import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/1")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
features = module(images)  # Features with shape [batch_size, num_features].
```
...or using the signature name
image_feature_vector. The output for each image
in the batch is a feature vector of size
num_features = 4032.
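As a sketch, naming the signature explicitly looks like this; since image_feature_vector is the module's default signature, both calls compute the same features:

```python
# Explicitly naming the signature; equivalent to the default call above.
features = module(images, signature="image_feature_vector")  # [batch_size, 4032]
```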
For this module, the size of the input image is fixed to
height x width = 331 x 331 pixels.
The input images are expected to have color values in the range [0,1],
following the common image input conventions.
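For illustration, here is a minimal preprocessing sketch that produces inputs matching these expectations; the helper name preprocess_image and the use of JPEG-encoded inputs are assumptions, not part of the module:

```python
import tensorflow as tf

def preprocess_image(jpeg_bytes):
  # Hypothetical helper: decode a JPEG string, scale to [0,1], resize to 331x331.
  image = tf.image.decode_jpeg(jpeg_bytes, channels=3)
  image = tf.image.convert_image_dtype(image, tf.float32)  # uint8 -> float in [0,1]
  return tf.image.resize_images(image, [331, 331])
```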
Consumers of this module can fine-tune it.
Fine-tuning requires importing the graph version with tag set {"train"}
in order to operate batch normalization and dropout in training mode.
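A minimal sketch of such an import, assuming a new classification head is trained on top (num_classes and the dense head are illustrative, not part of the module):

```python
import tensorflow as tf
import tensorflow_hub as hub

module = hub.Module(
    "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/1",
    trainable=True,   # expose the module's variables for training
    tags={"train"})   # graph version with batch norm/dropout in training mode
features = module(images)                        # [batch_size, 4032]
logits = tf.layers.dense(features, num_classes)  # hypothetical new head
```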
The dropout probability in NASNet path dropout is not scaled with
the training steps of fine-tuning and remains at the final (maximal) value
from the initial training.