During training, this layer will randomly choose a location to crop images
down to a target size. The layer will crop all the images in the same batch
to the same cropping location.
At inference time, and during training if an input image is smaller than the
target size, the input will be resized and cropped so as to return the
largest possible window in the image that matches the target aspect ratio.
If you need to apply random cropping at inference time, set `training` to
`True` when calling the layer.
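A minimal sketch of this behavior (the batch size, image size, and seed are illustrative):

```python
import numpy as np
import tensorflow as tf

# Batch of four 32x32 RGB images (illustrative values).
images = np.random.uniform(0, 255, size=(4, 32, 32, 3)).astype("float32")

layer = tf.keras.layers.RandomCrop(height=24, width=24, seed=42)

# Training: a random 24x24 window is chosen (same location for the whole batch).
train_out = layer(images, training=True)

# Inference: a deterministic resize-and-crop to the largest window that
# matches the target aspect ratio.
infer_out = layer(images, training=False)

print(train_out.shape, infer_out.shape)  # (4, 24, 24, 3) (4, 24, 24, 3)
```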
Input pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and
of integer or floating point dtype. By default, the layer will output
floats.

Note: This layer is safe to use inside a `tf.data` pipeline (independently
of which backend you're using).
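As a quick sketch of the dtype behavior described above (the `uint8` input and sizes are illustrative):

```python
import numpy as np
import tensorflow as tf

layer = tf.keras.layers.RandomCrop(height=2, width=2)

# uint8 input in [0, 255]; the output is floating point by default.
int_images = np.zeros((2, 4, 4, 3), dtype="uint8")
out = layer(int_images, training=True)
print(out.dtype)  # float32 (the layer's compute dtype)
```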
Input shape:
3D (unbatched) or 4D (batched) tensor with shape
`(..., height, width, channels)`, in `"channels_last"` format.

Output shape:
3D (unbatched) or 4D (batched) tensor with shape
`(..., target_height, target_width, channels)`.
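A short sketch showing both accepted ranks (the sizes are arbitrary):

```python
import numpy as np
import tensorflow as tf

layer = tf.keras.layers.RandomCrop(height=8, width=8)

# 3D (unbatched): a single image of shape (height, width, channels).
single = np.zeros((32, 32, 3), dtype="float32")
print(layer(single, training=True).shape)  # (8, 8, 3)

# 4D (batched): (batch, height, width, channels).
batch = np.zeros((16, 32, 32, 3), dtype="float32")
print(layer(batch, training=True).shape)   # (16, 8, 8, 3)
```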
Args:
height: Integer, the height of the output shape.
width: Integer, the width of the output shape.
seed: Integer. Used to create a random seed.
**kwargs: Base layer keyword arguments, such as `name` and `dtype`.
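A construction sketch using these arguments (the `name` and `dtype` values are illustrative base-layer kwargs):

```python
import tensorflow as tf

layer = tf.keras.layers.RandomCrop(
    height=24,           # output height
    width=24,            # output width
    seed=1337,           # seeds the random crop locations
    name="random_crop",  # base layer kwarg
    dtype="float32",     # base layer kwarg
)
print(layer.name)  # random_crop
```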
Attributes:
input: Retrieves the input tensor(s) of a symbolic operation. Only returns
the tensor(s) corresponding to the first time the operation was called.
output: Retrieves the output tensor(s) of a layer. Only returns the
tensor(s) corresponding to the first time the operation was called.
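A sketch of how these attributes behave once the layer has been called symbolically in a functional model (shapes are illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 3))
layer = tf.keras.layers.RandomCrop(height=24, width=24)
outputs = layer(inputs)
model = tf.keras.Model(inputs, outputs)

# Symbolic tensors from the layer's first call.
print(layer.input.shape)   # (None, 32, 32, 3)
print(layer.output.shape)  # (None, 24, 24, 3)
```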
Methods

from_config

Creates a layer from its config.

This method is the reverse of `get_config`, capable of instantiating the
same layer from the config dictionary. It does not handle layer
connectivity (handled by `Network`), nor weights (handled by
`set_weights`).

Args:
config: A Python dictionary, typically the output of `get_config`.

Returns:
A layer instance.
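A minimal round-trip sketch, assuming the config keys mirror the constructor arguments:

```python
import tensorflow as tf

layer = tf.keras.layers.RandomCrop(height=24, width=24, seed=3)

# Serialize the layer to a config dict, then rebuild an equivalent layer.
config = layer.get_config()
restored = tf.keras.layers.RandomCrop.from_config(config)

print(type(restored).__name__)            # RandomCrop
print(config["height"], config["width"])  # 24 24
```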
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-06-07 UTC."],[],[],null,["# tf.keras.layers.RandomCrop\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/layers/preprocessing/random_crop.py#L8-L173) |\n\nA preprocessing layer which randomly crops images during training.\n\nInherits From: [`Layer`](../../../tf/keras/Layer), [`Operation`](../../../tf/keras/Operation) \n\n tf.keras.layers.RandomCrop(\n height, width, seed=None, data_format=None, name=None, **kwargs\n )\n\nDuring training, this layer will randomly choose a location to crop images\ndown to a target size. The layer will crop all the images in the same batch\nto the same cropping location.\n\nAt inference time, and during training if an input image is smaller than the\ntarget size, the input will be resized and cropped so as to return the\nlargest possible window in the image that matches the target aspect ratio.\nIf you need to apply random cropping at inference time, set `training` to\nTrue when calling the layer.\n\nInput pixel values can be of any range (e.g. `[0., 1.)` or `[0, 255]`) and\nof integer or floating point dtype. By default, the layer will output\nfloats.\n| **Note:** This layer is safe to use inside a [`tf.data`](../../../tf/data) pipeline (independently of which backend you're using).\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Input shape ----------- ||\n|------|----------------------------------------------------------------------------------------------------------------------|\n| `3D` | `unbatched) or 4D (batched) tensor with shape` \u003cbr /\u003e `(..., height, width, channels)`, in `\"channels_last\"` format. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Output shape ------------ ||\n|------|-------------------------------------------------------------------------------------------------------|\n| `3D` | `unbatched) or 4D (batched) tensor with shape` \u003cbr /\u003e `(..., target_height, target_width, channels)`. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|------------|-----------------------------------------------------------|\n| `height` | Integer, the height of the output shape. |\n| `width` | Integer, the width of the output shape. |\n| `seed` | Integer. Used to create a random seed. |\n| `**kwargs` | Base layer keyword arguments, such as `name` and `dtype`. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Attributes ---------- ||\n|----------|------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `input` | Retrieves the input tensor(s) of a symbolic operation. \u003cbr /\u003e Only returns the tensor(s) corresponding to the *first time* the operation was called. |\n| `output` | Retrieves the output tensor(s) of a layer. 
\u003cbr /\u003e Only returns the tensor(s) corresponding to the *first time* the operation was called. |\n\n\u003cbr /\u003e\n\nMethods\n-------\n\n### `from_config`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/ops/operation.py#L191-L213) \n\n @classmethod\n from_config(\n config\n )\n\nCreates a layer from its config.\n\nThis method is the reverse of `get_config`,\ncapable of instantiating the same layer from the config\ndictionary. It does not handle layer connectivity\n(handled by Network), nor weights (handled by `set_weights`).\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ||\n|----------|----------------------------------------------------------|\n| `config` | A Python dictionary, typically the output of get_config. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ||\n|---|---|\n| A layer instance. ||\n\n\u003cbr /\u003e\n\n### `symbolic_call`\n\n[View source](https://github.com/keras-team/keras/tree/v3.3.3/keras/src/ops/operation.py#L58-L70) \n\n symbolic_call(\n *args, **kwargs\n )"]]