
# tf.keras.layers.experimental.preprocessing.RandomRotation

Randomly rotate each image.

Inherits From: `PreprocessingLayer`, `Layer`, `Module`


By default, random rotations are only applied during training. At inference time, the layer does nothing. If you need to apply random rotations at inference time, set `training` to True when calling the layer.
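A minimal sketch of this behavior (assuming TensorFlow is installed; in recent releases the layer is also exported as `tf.keras.layers.RandomRotation`):

```python
import numpy as np
import tensorflow as tf

# A hypothetical 4-image batch of 32x32 RGB images.
images = np.random.rand(4, 32, 32, 3).astype("float32")

layer = tf.keras.layers.RandomRotation(factor=0.2)

# training=False (the default at inference): the layer is a no-op
# and returns the images unchanged.
out_infer = layer(images, training=False)

# training=True: each image is rotated by a random angle drawn from
# [-0.2 * 2pi, 0.2 * 2pi] radians.
out_train = layer(images, training=True)
```

The output shape always matches the input shape; only the pixel contents change under `training=True`.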

#### Input shape:

4D tensor with shape: `(samples, height, width, channels)`, data_format='channels_last'.

#### Output shape:

4D tensor with shape: `(samples, height, width, channels)`, data_format='channels_last'.

#### Raises:

`ValueError` if either bound is not in `[0, 1]`, or if the upper bound is less than the lower bound.

Arguments
`factor` a float represented as a fraction of 2pi, or a tuple of size 2 representing the lower and upper bounds for rotating clockwise and counter-clockwise. A positive value means rotating counter-clockwise, while a negative value means rotating clockwise. When represented as a single float, this value is used for both the upper and lower bound. For instance, `factor=(-0.2, 0.3)` results in an output rotated by a random amount in the range `[-20% * 2pi, 30% * 2pi]`. `factor=0.2` results in an output rotated by a random amount in the range `[-20% * 2pi, 20% * 2pi]`.
`fill_mode` Points outside the boundaries of the input are filled according to the given mode (one of `{'constant', 'reflect', 'wrap', 'nearest'}`).

• reflect: `(d c b a | a b c d | d c b a)` The input is extended by reflecting about the edge of the last pixel.
• constant: `(k k k k | a b c d | k k k k)` The input is extended by filling all values beyond the edge with the same constant value k = 0.
• wrap: `(a b c d | a b c d | a b c d)` The input is extended by wrapping around to the opposite edge.
• nearest: `(a a a a | a b c d | d d d d)` The input is extended by the nearest pixel.
`interpolation` Interpolation mode. Supported values: "nearest", "bilinear".
`seed` Integer. Used to create a random seed.
`fill_value` a float representing the value to be filled outside the boundaries when `fill_mode` is "constant".
Attributes
`is_adapted` Whether the layer has been fit to data already.
`streaming` Whether `adapt` can be called twice without resetting the state.
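The `factor` range and the four `fill_mode` behaviors described above can be illustrated with plain NumPy (a sketch of the semantics, not the layer's actual implementation; note that TF's `'reflect'` repeats the edge pixel, matching `np.pad`'s `'symmetric'`, and TF's `'nearest'` matches `np.pad`'s `'edge'`):

```python
import numpy as np

# factor=(-0.2, 0.3): the rotation angle is drawn uniformly from
# [-0.2 * 2pi, 0.3 * 2pi] radians.
lower, upper = -0.2, 0.3
low_rad, high_rad = lower * 2 * np.pi, upper * 2 * np.pi
angle = np.random.uniform(low_rad, high_rad)
assert low_rad <= angle <= high_rad

# The fill modes, shown on a 1-D row a b c d = [1, 2, 3, 4]:
row = np.array([1, 2, 3, 4])
print(np.pad(row, 4, mode="symmetric"))  # d c b a | a b c d | d c b a
print(np.pad(row, 4, mode="constant"))   # k k k k | a b c d | k k k k (k=0)
print(np.pad(row, 4, mode="wrap"))       # a b c d | a b c d | a b c d
print(np.pad(row, 4, mode="edge"))       # a a a a | a b c d | d d d d
```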

## Methods

### `adapt`


Fits the state of the preprocessing layer to the data being passed.

Arguments
`data` The data to train on. It can be passed either as a `tf.data.Dataset` or as a NumPy array.
`batch_size` Integer or `None`. Number of samples per state update. If unspecified, `batch_size` will default to 32. Do not specify the `batch_size` if your data is in the form of datasets, generators, or `keras.utils.Sequence` instances (since they generate batches).
`steps` Integer or `None`. Total number of steps (batches of samples). When training with input tensors such as TensorFlow data tensors, the default `None` is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If `data` is a `tf.data` dataset and `steps` is `None`, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the `steps` argument. This argument is not supported with array inputs.
`reset_state` Optional argument specifying whether to clear the state of the layer at the start of the call to `adapt`, or whether to start from the existing state. This argument may not be relevant to all preprocessing layers: a subclass of `PreprocessingLayer` may choose to raise an error if `reset_state` is set to `False`.
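`RandomRotation` itself keeps no state, so calling `adapt` on it has no effect. As a sketch of how `adapt` is used on a stateful preprocessing layer inheriting from the same base class, here is `tf.keras.layers.Normalization` (an illustrative choice, not part of this layer's API):

```python
import numpy as np
import tensorflow as tf

# A stateful preprocessing layer shows the adapt() pattern:
data = np.array([[0.0], [2.0], [4.0]], dtype="float32")

norm = tf.keras.layers.Normalization()
norm.adapt(data)   # fits mean and variance to `data`
out = norm(data)   # standardizes using the adapted statistics
# `out` now has (approximately) zero mean per feature.
```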

### `compile`


Configures the layer for `adapt`.

Arguments
`run_eagerly` Bool. Defaults to `False`. If `True`, this layer's logic will not be wrapped in a `tf.function`. Recommended to leave this as `None` unless your layer cannot be run inside a `tf.function`.
`steps_per_execution` Int. Defaults to 1. The number of batches to run during each `tf.function` call. Running multiple batches inside a single `tf.function` call can greatly improve performance on TPUs or small models with a large Python overhead.
