tf.raw_ops.Requantize
Converts the quantized `input` tensor into a lower-precision `output`.
View aliases

Compat aliases for migration. See the Migration guide for more details.

`tf.compat.v1.raw_ops.Requantize`
tf.raw_ops.Requantize(
    input,
    input_min,
    input_max,
    requested_output_min,
    requested_output_max,
    out_type,
    name=None
)
Converts the quantized `input` tensor into a lower-precision `output`, using the output range specified with `requested_output_min` and `requested_output_max`.
`[input_min, input_max]` are scalar floats that specify the range for the float interpretation of the `input` data. For example, if `input_min` is -1.0f and `input_max` is 1.0f, and we are dealing with `quint16` quantized data, then a 0 value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.
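As a quick illustration of that mapping, here is a minimal sketch of the linear interpretation described above; the helper function name is hypothetical and not part of the TensorFlow API:

```python
def quint16_to_float(q, input_min, input_max):
    """Interpret a quint16 code q in [0, 65535] as a float in [input_min, input_max]."""
    scale = (input_max - input_min) / 65535.0
    return input_min + q * scale

# 0 maps to input_min, 65535 maps to input_max:
assert quint16_to_float(0, -1.0, 1.0) == -1.0
assert quint16_to_float(65535, -1.0, 1.0) == 1.0
```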
Args

| Argument | Description |
|---|---|
| `input` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. |
| `input_min` | A `Tensor` of type `float32`. The float value that the minimum quantized input value represents. |
| `input_max` | A `Tensor` of type `float32`. The float value that the maximum quantized input value represents. |
| `requested_output_min` | A `Tensor` of type `float32`. The float value that the minimum quantized output value represents. |
| `requested_output_max` | A `Tensor` of type `float32`. The float value that the maximum quantized output value represents. |
| `out_type` | A `tf.DType` from: `tf.qint8`, `tf.quint8`, `tf.qint32`, `tf.qint16`, `tf.quint16`. The type of the output. Should be a lower bit depth than `Tinput`. |
| `name` | A name for the operation (optional). |
Returns

A tuple of `Tensor` objects (`output`, `output_min`, `output_max`).

| Name | Description |
|---|---|
| `output` | A `Tensor` of type `out_type`. |
| `output_min` | A `Tensor` of type `float32`. |
| `output_max` | A `Tensor` of type `float32`. |
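A minimal usage sketch, not taken from the official docs: the values, ranges, and dtypes below are illustrative assumptions. It quantizes float data to `qint32` and then uses `Requantize` to narrow it to `quint8` over a requested output range, mirroring the common pattern where 32-bit accumulators from quantized ops are requantized to 8 bits.

```python
import tensorflow as tf

# Illustrative float data and ranges (assumptions for this sketch).
values = tf.constant([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=tf.float32)

# Quantize to 32 bits; returns the quantized tensor plus the float range it covers.
q32, q32_min, q32_max = tf.quantization.quantize(
    values, min_range=-1.0, max_range=1.0, T=tf.qint32)

# Requantize to 8 bits over a requested output range of [-1.0, 1.0].
output, output_min, output_max = tf.raw_ops.Requantize(
    input=q32,
    input_min=q32_min,
    input_max=q32_max,
    requested_output_min=-1.0,
    requested_output_max=1.0,
    out_type=tf.quint8)

print(output, output_min.numpy(), output_max.numpy())
```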
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-04-26 UTC."],[],[],null,["# tf.raw_ops.Requantize\n\n\u003cbr /\u003e\n\nConverts the quantized `input` tensor into a lower-precision `output`.\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.raw_ops.Requantize`](https://www.tensorflow.org/api_docs/python/tf/raw_ops/Requantize)\n\n\u003cbr /\u003e\n\n tf.raw_ops.Requantize(\n input,\n input_min,\n input_max,\n requested_output_min,\n requested_output_max,\n out_type,\n name=None\n )\n\nConverts the quantized `input` tensor into a lower-precision `output`, using the\noutput range specified with `requested_output_min` and `requested_output_max`.\n\n`[input_min, input_max]` are scalar floats that specify the range for the float\ninterpretation of the `input` data. For example, if `input_min` is -1.0f and\n`input_max` is 1.0f, and we are dealing with `quint16` quantized data, then a 0\nvalue in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `input` | A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint32`, `qint16`, `quint16`. |\n| `input_min` | A `Tensor` of type `float32`. The float value that the minimum quantized input value represents. |\n| `input_max` | A `Tensor` of type `float32`. The float value that the maximum quantized input value represents. |\n| `requested_output_min` | A `Tensor` of type `float32`. The float value that the minimum quantized output value represents. |\n| `requested_output_max` | A `Tensor` of type `float32`. The float value that the maximum quantized output value represents. |\n| `out_type` | A [`tf.DType`](../../tf/dtypes/DType) from: `tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16`. The type of the output. Should be a lower bit depth than Tinput. |\n| `name` | A name for the operation (optional). |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|--------------|--------------------------------|\n| A tuple of `Tensor` objects (output, output_min, output_max). ||\n| `output` | A `Tensor` of type `out_type`. |\n| `output_min` | A `Tensor` of type `float32`. |\n| `output_max` | A `Tensor` of type `float32`. |\n\n\u003cbr /\u003e"]]