Segmentation

Get started

DeepLab is a state-of-the-art deep learning model for semantic image segmentation, where the goal is to assign semantic labels (e.g. person, dog, cat) to every pixel in the input image.

Download starter model

How it works

Semantic image segmentation predicts, for each pixel of an image, which class it is associated with. This is in contrast to object detection, which detects objects only within rectangular regions, and image classification, which classifies the image as a whole.
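
A minimal sketch of what "a class for every pixel" means in practice, using the TensorFlow Lite Python interpreter with a segmentation model. The file name deeplabv3.tflite, the input scaling, and the number of classes are assumptions; check the actual model's input and output details.

```python
import numpy as np
import tensorflow as tf

# Load the segmentation model (file name assumed here) and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="deeplabv3.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Prepare an input image of the expected size; a random image is used here
# as a stand-in, scaled to [-1, 1] as many DeepLab exports expect.
height, width = input_details[0]["shape"][1:3]
image = np.random.uniform(-1, 1, size=(1, height, width, 3)).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()

# The output is a map of per-pixel class scores, e.g. (1, height, width, 21)
# for 21 classes; the argmax over the last axis is the per-pixel label mask.
scores = interpreter.get_tensor(output_details[0]["index"])
mask = np.argmax(scores, axis=-1)[0]
print(mask.shape, np.unique(mask))
```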

The current implementation includes the following features:

  1. DeepLabv1: We use atrous convolution to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks (see the first code sketch after this list).
  2. DeepLabv2: We use atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales with filters at multiple sampling rates and effective fields-of-view.
  3. DeepLabv3: We augment the ASPP module with image-level features [5, 6] to capture longer range information. We also include batch normalization [7] parameters to facilitate training. In particular, we apply atrous convolution to extract output features at different output strides during training and evaluation, which efficiently enables training BN at output stride = 16 and attains high performance at output stride = 8 during evaluation.
  4. DeepLabv3+: We extend DeepLabv3 with a simple yet effective decoder module to refine the segmentation results, especially along object boundaries (see the second sketch after this list). Furthermore, in this encoder-decoder structure one can arbitrarily control the resolution of extracted encoder features via atrous convolution to trade off precision and runtime.
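
To make the atrous convolution and ASPP ideas above concrete, here is a rough Keras sketch of an ASPP-style block with parallel atrous convolutions and image-level pooling. The sampling rates, filter counts, and feature-map size are illustrative assumptions, not the exact DeepLab configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def aspp_block(x, filters=256, rates=(6, 12, 18)):
    """Parallel atrous convolutions at several sampling rates, plus a 1x1
    branch and image-level pooling, concatenated and projected back down."""
    branches = [layers.Conv2D(filters, 1, padding="same", activation="relu")(x)]
    for rate in rates:
        # Atrous (dilated) convolution: a 3x3 kernel with "holes" that
        # enlarges the field-of-view without reducing feature resolution.
        branches.append(
            layers.Conv2D(filters, 3, padding="same", dilation_rate=rate,
                          activation="relu")(x))
    # Image-level features: global pooling, then broadcast back onto the grid.
    pooled = layers.GlobalAveragePooling2D(keepdims=True)(x)
    pooled = layers.Conv2D(filters, 1, activation="relu")(pooled)
    pooled = layers.UpSampling2D(size=(x.shape[1], x.shape[2]),
                                 interpolation="bilinear")(pooled)
    x = layers.Concatenate()(branches + [pooled])
    return layers.Conv2D(filters, 1, activation="relu")(x)

# Example: ASPP applied to a 32x32 feature map with 512 channels.
inputs = tf.keras.Input(shape=(32, 32, 512))
model = tf.keras.Model(inputs, aspp_block(inputs))
model.summary()
```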
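
The decoder refinement of DeepLabv3+ can be sketched in the same spirit: upsample the coarse encoder (ASPP) output, merge it with a low-level feature map, and refine with 3x3 convolutions before predicting per-pixel class logits. Again, shapes, channel counts, and the 4x upsampling factor are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers

def simple_decoder(encoder_features, low_level_features, num_classes=21):
    # 1x1 conv so the low-level features do not dominate the concatenation.
    low = layers.Conv2D(48, 1, padding="same", activation="relu")(low_level_features)
    # Bilinearly upsample the coarse encoder output to the low-level resolution.
    up = layers.UpSampling2D(size=4, interpolation="bilinear")(encoder_features)
    x = layers.Concatenate()([up, low])
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    # Per-pixel class logits; a final bilinear upsample would bring them back
    # to the input resolution.
    return layers.Conv2D(num_classes, 1, padding="same")(x)

# Example: encoder features at 1/16 input resolution, low-level at 1/4.
enc = tf.keras.Input(shape=(32, 32, 256))
low = tf.keras.Input(shape=(128, 128, 256))
model = tf.keras.Model([enc, low], simple_decoder(enc, low))
model.summary()
```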

Example output

The model will create a mask over the target objects with high accuracy.

Animation showing image segmentation

Read more about segmentation