Generate a single randomly distorted bounding box for an image.
tf.image.sample_distorted_bounding_box(
    image_size, bounding_boxes, seed=0, min_object_covered=0.1,
    aspect_ratio_range=None, area_range=None, max_attempts=None,
    use_image_if_no_bounding_boxes=None, name=None
)
Bounding box annotations are often supplied in addition to ground-truth labels
in image recognition or object localization tasks. A common technique for
training such a system is to randomly distort an image while preserving
its content, i.e. data augmentation. This Op outputs a randomly distorted
localization of an object, i.e. a bounding box, given an image_size,
bounding_boxes and a series of constraints.
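One of those constraints, min_object_covered, requires the sampled crop to contain at least that fraction of the area of at least one supplied bounding box. As an illustration only (the function below is a hypothetical pure-Python sketch, not part of the TensorFlow API), the coverage test can be written as:

```python
def coverage(crop, box):
    """Fraction of `box`'s area contained in `crop`.

    Both arguments use the normalized [y_min, x_min, y_max, x_max]
    convention described below.
    """
    # Height and width of the intersection rectangle (zero if disjoint).
    inter_h = max(0.0, min(crop[2], box[2]) - max(crop[0], box[0]))
    inter_w = max(0.0, min(crop[3], box[3]) - max(crop[1], box[1]))
    box_area = (box[2] - box[0]) * (box[3] - box[1])
    return (inter_h * inter_w) / box_area if box_area > 0 else 0.0

# A crop covering the left half of the image contains half of a
# vertically centered box that spans the full image width:
print(coverage([0.0, 0.0, 1.0, 0.5], [0.25, 0.0, 0.75, 1.0]))
# -> 0.5
```

A candidate crop with coverage below min_object_covered for every supplied box would be rejected and another attempt sampled, up to max_attempts.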
The output of this Op is a single bounding box that may be used to crop the
original image. The output is returned as 3 tensors: begin, size and
bboxes. The first 2 tensors can be fed directly into
tf.slice to crop the
image. The latter may be supplied to tf.image.draw_bounding_boxes to
visualize what the bounding box looks like.
Bounding boxes are supplied and returned as
[y_min, x_min, y_max, x_max].
The bounding box coordinates are floats in
[0.0, 1.0] relative to the width
and the height of the underlying image.
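As an illustration of this convention, a box given in pixel coordinates can be normalized by dividing by the image dimensions (the helper below is a hypothetical sketch, not part of the TensorFlow API):

```python
def normalize_box(y_min, x_min, y_max, x_max, height, width):
    # Convert pixel coordinates into the normalized
    # [y_min, x_min, y_max, x_max] floats in [0.0, 1.0].
    return [y_min / height, x_min / width, y_max / height, x_max / width]

# A box with top-left corner (10, 20) and bottom-right corner (60, 100)
# in a 100x200 (height x width) image:
print(normalize_box(10, 20, 60, 100, height=100, width=200))
# -> [0.1, 0.1, 0.6, 0.5]
```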
# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes,
    min_object_covered=0.1)

# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(
    tf.expand_dims(image, 0), bbox_for_draw)
tf.compat.v1.summary.image('images_with_box', image_with_box)

# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)
Note that if no bounding box information is available, setting
use_image_if_no_bounding_boxes=True will assume there is a single implicit
bounding box covering the whole image. If use_image_if_no_bounding_boxes is
False and no bounding boxes are supplied, an error is raised.