Generate a single randomly distorted bounding box for an image.
tf.image.sample_distorted_bounding_box(
    image_size,
    bounding_boxes,
    seed=0,
    min_object_covered=0.1,
    aspect_ratio_range=None,
    area_range=None,
    max_attempts=None,
    use_image_if_no_bounding_boxes=None,
    name=None
)
Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. data augmentation. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an `image_size`, `bounding_boxes` and a series of constraints.
The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: `begin`, `size` and `bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the image. The latter may be supplied to `tf.image.draw_bounding_boxes` to visualize what the bounding box looks like.

Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and the height of the underlying image.
For example:

# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes,
    min_object_covered=0.1)
# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
                                              bbox_for_draw)
tf.compat.v1.summary.image('images_with_box', image_with_box)
# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)
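Putting the pieces together, the following is a self-contained sketch. The image shape, the box coordinates, and the values passed to `min_object_covered`, `aspect_ratio_range`, `area_range` and `max_attempts` are illustrative placeholders, not recommended settings:

```python
import tensorflow as tf

# Hypothetical inputs: a 480x640 RGB image and one normalized ground-truth
# box in [y_min, x_min, y_max, x_max] order, shaped [batch, num_boxes, 4].
image = tf.zeros([480, 640, 3], dtype=tf.uint8)
bounding_boxes = tf.constant([[[0.1, 0.2, 0.5, 0.9]]], dtype=tf.float32)

# Illustrative constraints: the crop must cover at least half of a
# ground-truth box, stay roughly square, and span 8%-100% of the image area.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes,
    min_object_covered=0.5,
    aspect_ratio_range=(0.75, 1.33),
    area_range=(0.08, 1.0),
    max_attempts=100)

# Crop the image to the sampled box.
distorted_image = tf.slice(image, begin, size)
```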
Note that if no bounding box information is available, setting `use_image_if_no_bounding_boxes=True` will assume there is a single implicit bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is `False` and no bounding boxes are supplied, an error is raised.
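A minimal sketch of this fallback, assuming that an empty tensor of shape `[0, 0, 4]` is an acceptable way to indicate that no boxes are available (an assumption about the expected encoding, not stated above):

```python
import tensorflow as tf

image = tf.zeros([480, 640, 3], dtype=tf.uint8)  # placeholder image

# No ground-truth boxes: pass an empty [0, 0, 4] tensor (assumed encoding)
# and let the op fall back to an implicit box covering the whole image.
begin, size, _ = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=tf.zeros([0, 0, 4], dtype=tf.float32),
    use_image_if_no_bounding_boxes=True)
distorted_image = tf.slice(image, begin, size)
```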
For producing deterministic results given a `seed` value, use `tf.image.stateless_sample_distorted_bounding_box`. Unlike using the `seed` param with `tf.image.random_*` ops, `tf.image.stateless_random_*` ops guarantee the same results given the same seed, independent of how many times the function is called, and independent of global seed settings (e.g. `tf.random.set_seed`).
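A sketch of the stateless variant, assuming its signature mirrors this op with an explicit shape-`[2]` seed in place of the integer `seed` argument:

```python
import tensorflow as tf

image = tf.zeros([480, 640, 3], dtype=tf.uint8)  # placeholder image
bounding_boxes = tf.constant([[[0.1, 0.2, 0.5, 0.9]]], dtype=tf.float32)

# The same (1, 2) seed always yields the same crop, regardless of how many
# times this is called or of any global seed set via tf.random.set_seed.
begin, size, bbox_for_draw = tf.image.stateless_sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes,
    seed=(1, 2),
    min_object_covered=0.1)
distorted_image = tf.slice(image, begin, size)
```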
| Returns | |
|---|---|
| A tuple of `Tensor` objects `(begin, size, bboxes)`. | |
| `begin` | A `Tensor`. Has the same type as `image_size`. 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to `tf.slice`. |
| `size` | A `Tensor`. Has the same type as `image_size`. 1-D, containing `[target_height, target_width, -1]`. Provide as input to `tf.slice`. |
| `bboxes` | A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing the distorted bounding box. Provide as input to `tf.image.draw_bounding_boxes`. |
| Raises | |
|---|---|
| `ValueError` | If no seed is specified and op determinism is enabled. |
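The `ValueError` above can be reproduced or avoided along these lines; this is a sketch assuming `tf.config.experimental.enable_op_determinism` is the mechanism used to enable op determinism:

```python
import tensorflow as tf

tf.config.experimental.enable_op_determinism()

image_size = tf.constant([480, 640, 3])
bounding_boxes = tf.constant([[[0.1, 0.2, 0.5, 0.9]]], dtype=tf.float32)

# With op determinism enabled, leaving seed unspecified raises ValueError;
# supplying a non-zero seed (or using the stateless variant) avoids it.
begin, size, bboxes = tf.image.sample_distorted_bounding_box(
    image_size,
    bounding_boxes=bounding_boxes,
    seed=42)
```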