Generate a single randomly distorted bounding box for an image.
tf.raw_ops.SampleDistortedBoundingBoxV2(
    image_size, bounding_boxes, min_object_covered, seed=0, seed2=0,
    aspect_ratio_range=[0.75, 1.33], area_range=[0.05, 1],
    max_attempts=100, use_image_if_no_bounding_boxes=False, name=None
)
Bounding box annotations are often supplied in addition to ground-truth labels
in image recognition or object localization tasks. A common technique for
training such a system is to randomly distort an image while preserving
its content, i.e. data augmentation. This Op outputs a randomly distorted
localization of an object, i.e. a bounding box, given an image_size,
bounding_boxes, and a series of constraints.
The output of this Op is a single bounding box that may be used to crop the
original image. The output is returned as 3 tensors: begin, size and
bboxes. The first 2 tensors can be fed directly into
tf.slice to crop the
image. The latter may be supplied to
tf.image.draw_bounding_boxes to visualize
what the bounding box looks like.
Bounding boxes are supplied and returned as
[y_min, x_min, y_max, x_max]. The
bounding box coordinates are floats in
[0.0, 1.0] relative to the width and
height of the underlying image.
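Because the coordinates are normalized, mapping a returned box back to pixel offsets means scaling by the image height and width. A minimal sketch in plain Python (box_to_pixels is an illustrative helper, not part of the TensorFlow API):

```python
def box_to_pixels(box, image_height, image_width):
    """Convert a normalized [y_min, x_min, y_max, x_max] box to pixel coordinates."""
    y_min, x_min, y_max, x_max = box
    return (int(y_min * image_height), int(x_min * image_width),
            int(y_max * image_height), int(x_max * image_width))

# A box covering the central half of a 400x600 image.
print(box_to_pixels([0.25, 0.25, 0.75, 0.75], 400, 600))  # (100, 150, 300, 450)
```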
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image), bounding_boxes=bounding_boxes)

# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(
    tf.expand_dims(image, 0), bbox_for_draw)
tf.summary.image('images_with_box', image_with_box)

# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)
Note that if no bounding box information is available, setting
use_image_if_no_bounding_boxes = True will assume there is a single implicit
bounding box covering the whole image.
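To make the interaction of min_object_covered, area_range, max_attempts, and use_image_if_no_bounding_boxes concrete, here is a minimal pure-Python sketch of the underlying rejection-sampling idea. It is not the TensorFlow implementation: the function names are illustrative, and it ignores aspect_ratio_range and the seed/seed2 parameters.

```python
import random

def overlap_fraction(box, crop):
    # Fraction of `box` area covered by `crop`; boxes are [y_min, x_min, y_max, x_max].
    y0 = max(box[0], crop[0]); x0 = max(box[1], crop[1])
    y1 = min(box[2], crop[2]); x1 = min(box[3], crop[3])
    inter = max(0.0, y1 - y0) * max(0.0, x1 - x0)
    area = (box[2] - box[0]) * (box[3] - box[1])
    return inter / area if area > 0 else 0.0

def sample_distorted_box(boxes, min_object_covered=0.1,
                         area_range=(0.05, 1.0), max_attempts=100,
                         use_image_if_no_bounding_boxes=False, rng=random):
    if not boxes:
        if not use_image_if_no_bounding_boxes:
            raise ValueError("no bounding boxes supplied")
        # Implicit box covering the whole image.
        boxes = [[0.0, 0.0, 1.0, 1.0]]
    for _ in range(max_attempts):
        # Propose a crop in normalized coordinates.
        h = rng.uniform(0.1, 1.0)
        w = rng.uniform(0.1, 1.0)
        if not (area_range[0] <= h * w <= area_range[1]):
            continue  # reject: crop area outside the allowed range
        y0 = rng.uniform(0.0, 1.0 - h)
        x0 = rng.uniform(0.0, 1.0 - w)
        crop = [y0, x0, y0 + h, x0 + w]
        # Accept if the crop covers enough of at least one bounding box.
        if any(overlap_fraction(b, crop) >= min_object_covered for b in boxes):
            return crop
    # All attempts failed: fall back to the whole image.
    return [0.0, 0.0, 1.0, 1.0]
```

If every one of max_attempts proposals is rejected, the sketch (like the Op) falls back to returning the entire image unmodified.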