tensorflow::ops::QuantizeAndDequantizeV2
#include <array_ops.h>
Quantizes then dequantizes a tensor.
Summary
This op simulates the precision loss from the quantized forward pass by:
- Quantizing the tensor to fixed point numbers, which should match the target quantization method when it is used in inference.
- Dequantizing it back to floating point numbers for the following ops, most likely matmul.
There are different ways to quantize. This version uses only scaling, so 0.0 maps to 0.
From the specified 'num_bits' in the quantized output type, it determines the minimum and maximum representable quantized values.
e.g.
- [-128, 127] for signed, num_bits = 8, or
- [0, 255] for unsigned, num_bits = 8.
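For reference, a minimal standalone C++ sketch of how that representable range follows from num_bits and the signedness (illustration only; plain standard C++, not part of the op's API):

```cpp
#include <cstdint>
#include <utility>

// Representable quantized range implied by num_bits and signedness.
std::pair<int64_t, int64_t> RepresentableRange(int num_bits, bool signed_input) {
  if (signed_input) {
    const int64_t max_val = (int64_t{1} << (num_bits - 1)) - 1;  // 127 for 8 bits
    return {-max_val - 1, max_val};                              // [-128, 127]
  }
  return {0, (int64_t{1} << num_bits) - 1};                      // [0, 255] for 8 bits
}
```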
If range_given == False, the initial input_min, input_max will be determined automatically as the minimum and maximum values in the input tensor; otherwise the specified values of input_min and input_max are used.
Note: if input_min and input_max are specified, they do not need to equal the actual minimum and maximum values in the tensor. E.g. in some cases it may be beneficial to specify these values such that the low-probability extremes of the input distribution are clipped.
This op determines the maximum scale_factor that would map the initial [input_min, input_max] range to a range that lies within the representable quantized range.
It determines the scale from one of input_min and input_max, then updates the other one to maximize the representable range.
e.g.
- if the output is signed, num_bits = 8, [input_min, input_max] = [-10.0, 5.0]: it would use a scale_factor of -128 / -10.0 = 12.8. In this case, it would update input_max to be 127 / 12.8 = 9.921875.
- if the output is signed, num_bits = 8, [input_min, input_max] = [-10.0, 10.0]: it would use a scale_factor of 127 / 10.0 = 12.7. In this case, it would update input_min to be -128 / 12.7 = -10.07874.
- if the output is unsigned, input_min is forced to be 0, and only the specified input_max is used.
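The signed-case examples above can be reproduced with a small standalone C++ sketch (illustration only; the op performs this adjustment internally, and the helper name is made up here):

```cpp
#include <cstdio>

// Given [input_min, input_max] with input_min < 0 < input_max and a signed
// num_bits-wide target, pick the maximum usable scale and widen the other bound.
void AdjustRange(float* input_min, float* input_max, int num_bits) {
  const float min_quantized = -(1 << (num_bits - 1));       // -128 for 8 bits
  const float max_quantized = (1 << (num_bits - 1)) - 1;    //  127 for 8 bits
  const float scale_from_min = min_quantized / *input_min;  // -128 / -10.0 = 12.8
  const float scale_from_max = max_quantized / *input_max;  //  127 /   5.0 = 25.4
  if (scale_from_min < scale_from_max) {
    *input_max = max_quantized / scale_from_min;  // 127 / 12.8 = 9.921875
  } else {
    *input_min = min_quantized / scale_from_max;  // -128 / 12.7 = -10.07874
  }
}

int main() {
  float lo = -10.0f, hi = 5.0f;
  AdjustRange(&lo, &hi, /*num_bits=*/8);
  std::printf("adjusted range: [%g, %g]\n", lo, hi);  // [-10, 9.92188]
}
```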
After determining the scale_factor and updating the input range, it applies the following to each value in the 'input' tensor.
output = round(clamp(value, input_min, input_max) * scale_factor) / scale_factor.
The above round function rounds the value based on the given round_mode.
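A per-element version of this formula in plain C++, assuming the default HALF_TO_EVEN round_mode (std::nearbyint rounds ties to even under the default FE_TONEAREST floating-point rounding mode); illustration only, not the op's implementation:

```cpp
#include <algorithm>
#include <cmath>

// Simulated quantize-then-dequantize of a single value.
float QuantizeDequantize(float value, float input_min, float input_max,
                         float scale_factor) {
  const float clamped = std::min(std::max(value, input_min), input_max);
  // Ties round to even, matching the default HALF_TO_EVEN round_mode.
  return std::nearbyint(clamped * scale_factor) / scale_factor;
}
```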
Args:
- scope: A Scope object
- input: Tensor to quantize and then dequantize.
- input_min: If range_given == True, this specifies the minimum input value that needs to be represented; otherwise it is determined from the min value of the input tensor.
- input_max: If range_given == True, this specifies the maximum input value that needs to be represented; otherwise it is determined from the max value of the input tensor.
Optional attributes (see Attrs):
- signed_input: Whether the quantization is signed or unsigned. (Actually, this parameter should have been called signed_output.)
- num_bits: The bitwidth of the quantization.
- range_given: Whether the range is given or should be determined from the input tensor.
- round_mode: The 'round_mode' attribute controls which rounding tie-breaking algorithm is used when rounding float values to their quantized equivalents. The following rounding modes are currently supported:
  - HALF_TO_EVEN: this is the default round_mode.
  - HALF_UP: rounds half-way values towards positive infinity. In this mode 7.5 rounds up to 8 and -7.5 rounds up to -7.
- narrow_range: If True, the absolute value of the quantized minimum value is the same as the quantized maximum value, instead of 1 greater. I.e. for 8-bit quantization, the minimum value is -127 instead of -128.
- axis: If specified, this axis is treated as a channel or slice axis, and a separate quantization range is used for each channel or slice along this axis.
Returns:
- Output: The output tensor.
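A usage sketch with the C++ ops API (the header paths, ClientSession-based evaluation, and attribute chaining on Attrs() follow the usual tensorflow/cc conventions and are assumptions, not part of this page):

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/array_ops.h"
#include "tensorflow/cc/ops/const_op.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  tensorflow::Scope root = tensorflow::Scope::NewRootScope();

  // 8-bit signed quantize/dequantize with an explicitly given range.
  auto input = tensorflow::ops::Const(root, {-10.0f, -3.2f, 0.0f, 4.7f});
  auto qdq = tensorflow::ops::QuantizeAndDequantizeV2(
      root, input, /*input_min=*/-10.0f, /*input_max=*/5.0f,
      tensorflow::ops::QuantizeAndDequantizeV2::Attrs().NumBits(8).RangeGiven(true));

  tensorflow::ClientSession session(root);
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session.Run({qdq.output}, &outputs));
  // outputs[0] now holds the dequantized float values.
  return 0;
}
```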
Constructors and Destructors
- QuantizeAndDequantizeV2(const ::tensorflow::Scope & scope, ::tensorflow::Input input, ::tensorflow::Input input_min, ::tensorflow::Input input_max)
- QuantizeAndDequantizeV2(const ::tensorflow::Scope & scope, ::tensorflow::Input input, ::tensorflow::Input input_min, ::tensorflow::Input input_max, const QuantizeAndDequantizeV2::Attrs & attrs)

Public attributes
- Operation operation
- ::tensorflow::Output output

Public functions
- ::tensorflow::Node * node() const
- operator::tensorflow::Input() const
- operator::tensorflow::Output() const

Public static functions
- Attrs Axis(int64 x)
- Attrs NarrowRange(bool x)
- Attrs NumBits(int64 x)
- Attrs RangeGiven(bool x)
- Attrs RoundMode(StringPiece x)
- Attrs SignedInput(bool x)

Structs
- tensorflow::ops::QuantizeAndDequantizeV2::Attrs: Optional attribute setters for QuantizeAndDequantizeV2.
Public attributes
operation
Operation operation
output
::tensorflow::Output output
Public functions
QuantizeAndDequantizeV2
QuantizeAndDequantizeV2( const ::tensorflow::Scope & scope, ::tensorflow::Input input, ::tensorflow::Input input_min, ::tensorflow::Input input_max )
QuantizeAndDequantizeV2
QuantizeAndDequantizeV2( const ::tensorflow::Scope & scope, ::tensorflow::Input input, ::tensorflow::Input input_min, ::tensorflow::Input input_max, const QuantizeAndDequantizeV2::Attrs & attrs )
node
::tensorflow::Node * node() const
operator::tensorflow::Input
operator::tensorflow::Input() const
operator::tensorflow::Output
operator::tensorflow::Output() const
Public static functions
Axis
Attrs Axis( int64 x )
NarrowRange
Attrs NarrowRange( bool x )
NumBits
Attrs NumBits( int64 x )
RangeGiven
Attrs RangeGiven( bool x )
RoundMode
Attrs RoundMode( StringPiece x )
SignedInput
Attrs SignedInput( bool x )
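A short sketch of how the static setters above compose into an Attrs value (chaining on the returned Attrs assumes the usual pattern for generated C++ ops, and the helper name and header path are made up for illustration):

```cpp
#include "tensorflow/cc/ops/array_ops.h"

// Hypothetical helper: build an Attrs value from the static setters above.
tensorflow::ops::QuantizeAndDequantizeV2::Attrs MakeQdqAttrs() {
  return tensorflow::ops::QuantizeAndDequantizeV2::NumBits(8)
      .RangeGiven(true)
      .RoundMode("HALF_UP")
      .NarrowRange(false);
}
// The result is passed as the final argument of the attrs-taking constructor.
```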