## Overview

NASNet-A is a family of convolutional neural networks for image classification. The architecture of its convolutional cells (or layers) was found by Neural Architecture Search (NAS). NAS and NASNet were originally published by

  * Barret Zoph, Quoc V. Le: ["Neural Architecture Search with Reinforcement Learning"](https://arxiv.org/abs/1611.01578), 2017.
  * Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le: ["Learning Transferable Architectures for Scalable Image Recognition"](https://arxiv.org/abs/1707.07012), 2017.

NASNets come in various sizes. This TF-Hub module uses the TF-Slim implementation `nasnet_large` of NASNet-A for ImageNet, which uses 18 Normal Cells and starts with 168 convolutional filters (after the "ImageNet stem"). It has a fixed input size of 331x331 pixels.

The module contains a trained instance of the network, packaged to do the image classification that the network was trained on. If you merely want to transform images into feature vectors, use the module `google/imagenet/nasnet_large/feature_vector/1` instead, and save the space occupied by the classification layer.

## Training

The checkpoint exported into this module was `nasnet-a_large_04_10_2017/model.ckpt` downloaded from NASNet's pre-trained models. Its weights were originally obtained by training on the ILSVRC-2012-CLS dataset for image classification ("ImageNet").

## Usage

This module implements the common signature for image classification. It can be used like this:

```python
module = hub.Module("https://tfhub.dev/google/imagenet/nasnet_large/classification/1")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
logits = module(images)  # Logits with shape [batch_size, num_classes].
```


...or using the signature name `image_classification`. The indices into `logits` are the `num_classes` = 1001 classes of the classification from the original training (see above).
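Once the logits have been fetched (say, into a NumPy array), the top prediction per image is the arg-max over the 1001 class indices, and a softmax turns logits into class probabilities. A minimal sketch on hypothetical logit values standing in for the module's output:

```python
import numpy as np

# Hypothetical logits for a batch of 2 images over the 1001 classes,
# standing in for values fetched from the module.
logits = np.array([[0.1, 2.0, -1.0] + [0.0] * 998,
                   [1.5, 0.2,  0.3] + [0.0] * 998])

# Numerically stabilized softmax over the class axis.
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)

top1 = probs.argmax(axis=-1)  # top predicted class index per image
```

Here `top1` would be `[1, 0]`, since class 1 has the largest logit for the first image and class 0 for the second.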

This module can also be used to compute image feature vectors, using the signature name `image_feature_vector`.

For this module, the size of the input image is fixed to height x width = 331 x 331 pixels. The input images are expected to have color values in the range [0,1], following the common image input conventions.
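The scaling to [0,1] can be done before feeding the graph. A minimal NumPy sketch, assuming the images arrive as uint8 RGB arrays already resized to 331x331 (the resizing itself could use e.g. `tf.image.resize_images`):

```python
import numpy as np

def prepare_batch(uint8_images):
    """Scale a batch of uint8 RGB images from [0, 255] to float32 [0, 1]."""
    batch = np.asarray(uint8_images, dtype=np.float32) / 255.0
    if batch.shape[1:] != (331, 331, 3):
        raise ValueError("this module expects 331x331 RGB inputs")
    return batch

# Example: an all-white batch of 2 images maps to all-ones float32.
batch = prepare_batch(np.full((2, 331, 331, 3), 255, dtype=np.uint8))
```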

## Fine-tuning

In principle, consumers of this module can fine-tune it. However, fine-tuning through a large classification layer may be prone to overfitting.

Fine-tuning requires importing the graph version with tag set `{"train"}` in order to operate batch normalization and dropout in training mode. The dropout probability of NASNet's path dropout is not scaled with the training steps of fine-tuning and remains at the final (maximal) value from the initial training.
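In code, fine-tuning means constructing the module as trainable and with the `"train"` tag set. A sketch (not run here, since constructing the module downloads it):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Construct the module with trainable weights and the "train" graph
# version, so batch normalization and dropout operate in training mode.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/nasnet_large/classification/1",
    trainable=True, tags={"train"})

images = tf.placeholder(tf.float32, shape=[None, 331, 331, 3])
logits = module(images)

# Regularization losses collected by the module should be added to the
# training objective when fine-tuning.
reg_losses = tf.losses.get_regularization_losses()
```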