Dequantize<U extends TNumber> Dequantize the 'input' tensor into a float or bfloat16 Tensor. 
Dequantize.Options Optional attributes for Dequantize  
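To illustrate the arithmetic behind `Dequantize`, here is a minimal sketch of MIN_COMBINED-style dequantization for unsigned 8-bit values. This is a simplified model, not the TensorFlow kernel, and the class and method names are hypothetical:

```java
// Sketch: map an unsigned 8-bit quantized level back onto [minRange, maxRange].
// Assumes MIN_COMBINED-style linear mapping of the full [0, 255] level range.
class DequantizeSketch {
    static float dequantize(int q, float minRange, float maxRange) {
        float scale = (maxRange - minRange) / 255.0f; // width of one quantized step
        return minRange + q * scale;                  // level 0 -> minRange, level 255 -> maxRange
    }
}
```

The actual op additionally supports other modes (such as SCALED) and signed quantized types.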
FakeQuantWithMinMaxArgs Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. 
FakeQuantWithMinMaxArgs.Options Optional attributes for FakeQuantWithMinMaxArgs  
FakeQuantWithMinMaxArgsGradient Compute gradients for a FakeQuantWithMinMaxArgs operation. 
FakeQuantWithMinMaxArgsGradient.Options Optional attributes for FakeQuantWithMinMaxArgsGradient  
FakeQuantWithMinMaxVars Fake-quantize the 'inputs' tensor of type float via global float scalars 'min' and 'max' to 'outputs' tensor of the same shape as 'inputs'. 
FakeQuantWithMinMaxVars.Options Optional attributes for FakeQuantWithMinMaxVars  
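The core of the fake-quantization these ops perform is clamp, quantize to a fixed number of levels, then dequantize back to float. The sketch below shows that round trip; it is a simplification (the real kernels also "nudge" min/max so that zero is exactly representable), and the names are hypothetical:

```java
// Sketch: fake-quantize x into [min, max] using numBits levels, returning a float
// that is restricted to the representable grid. Omits TensorFlow's zero-point nudging.
class FakeQuantSketch {
    static float fakeQuant(float x, float min, float max, int numBits) {
        int levels = (1 << numBits) - 1;               // e.g. 255 for 8 bits
        float scale = (max - min) / levels;            // width of one quantization step
        float clamped = Math.max(min, Math.min(max, x));
        long q = Math.round((clamped - min) / scale);  // nearest representable level
        return min + q * scale;                        // dequantize back to float
    }
}
```

Values outside [min, max] are clipped to the range before quantization, which is why the matching gradient ops zero out gradients for clipped inputs.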
FakeQuantWithMinMaxVarsGradient Compute gradients for a FakeQuantWithMinMaxVars operation. 
FakeQuantWithMinMaxVarsGradient.Options Optional attributes for FakeQuantWithMinMaxVarsGradient  
FakeQuantWithMinMaxVarsPerChannel Fake-quantize the 'inputs' tensor of type float, of one of the shapes '[d]', '[b, d]', or '[b, h, w, d]', via per-channel floats 'min' and 'max' of shape '[d]' to 'outputs' tensor of the same shape as 'inputs'. 
FakeQuantWithMinMaxVarsPerChannel.Options Optional attributes for FakeQuantWithMinMaxVarsPerChannel  
FakeQuantWithMinMaxVarsPerChannelGradient Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. 
FakeQuantWithMinMaxVarsPerChannelGradient.Options Optional attributes for FakeQuantWithMinMaxVarsPerChannelGradient  
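The per-channel variants apply the same clamp/quantize/dequantize round trip, but with a separate [min[d], max[d]] range for each channel of the last axis. A simplified sketch over a single row of d channels (names hypothetical, zero-point nudging omitted):

```java
// Sketch: per-channel fake quantization. Each channel d gets its own range.
class PerChannelFakeQuantSketch {
    static float[] fakeQuantPerChannel(float[] x, float[] min, float[] max, int numBits) {
        int levels = (1 << numBits) - 1;
        float[] out = new float[x.length];
        for (int d = 0; d < x.length; d++) {
            float scale = (max[d] - min[d]) / levels;
            float clamped = Math.max(min[d], Math.min(max[d], x[d]));
            long q = Math.round((clamped - min[d]) / scale);
            out[d] = min[d] + q * scale;               // channel d is clipped to its own range
        }
        return out;
    }
}
```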
Quantize<T extends TType> Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. 
Quantize.Options Optional attributes for Quantize  
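For `Quantize`, the MIN_COMBINED mode maps the float range [minRange, maxRange] linearly onto the quantized type's levels. A minimal sketch for an unsigned 8-bit target (simplified, hypothetical names; the real op also supports SCALED and MIN_FIRST modes and several rounding options):

```java
// Sketch: MIN_COMBINED-style quantization of a float into an unsigned 8-bit level.
class QuantizeSketch {
    static int quantize(float x, float minRange, float maxRange) {
        float scale = 255.0f / (maxRange - minRange);          // levels per unit of range
        float clamped = Math.max(minRange, Math.min(maxRange, x));
        return Math.round((clamped - minRange) * scale);       // 0..255
    }
}
```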
QuantizeAndDequantize<T extends TNumber> Quantizes then dequantizes a tensor. 
QuantizeAndDequantize.Options Optional attributes for QuantizeAndDequantize  
QuantizeAndDequantizeV3<T extends TNumber> Quantizes then dequantizes a tensor. 
QuantizeAndDequantizeV3.Options Optional attributes for QuantizeAndDequantizeV3  
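The QuantizeAndDequantize family performs the quantize-then-dequantize round trip in one op, so the output carries exactly the error a quantized deployment would introduce. A sketch of the symmetric signed-range case (in the spirit of SCALED mode; simplified, hypothetical names):

```java
// Sketch: round-trip quantize -> dequantize with a symmetric signed range.
class QuantizeAndDequantizeSketch {
    static float quantizeAndDequantize(float x, float inputMax, int numBits) {
        int qMax = (1 << (numBits - 1)) - 1;        // e.g. 127 for 8 signed bits
        float scale = inputMax / qMax;
        long q = Math.round(x / scale);
        q = Math.max(-qMax, Math.min(qMax, q));     // clamp to representable levels
        return q * scale;                           // back to float, with quantization error
    }
}
```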
QuantizeAndDequantizeV4<T extends TNumber> Quantizes then dequantizes a tensor. 
QuantizeAndDequantizeV4.Options Optional attributes for QuantizeAndDequantizeV4  
QuantizeAndDequantizeV4Grad<T extends TNumber> Returns the gradient of `QuantizeAndDequantizeV4`. 
QuantizeAndDequantizeV4Grad.Options Optional attributes for QuantizeAndDequantizeV4Grad  
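The gradient ops above use a straight-through estimator: upstream gradients pass through unchanged where the input fell inside the quantization range, and are zeroed where it was clipped. A one-line sketch (hypothetical names):

```java
// Sketch: straight-through gradient for fake-quantization / quantize-dequantize ops.
class QuantizeGradSketch {
    static float gradWrtInput(float upstream, float x, float min, float max) {
        return (x >= min && x <= max) ? upstream : 0.0f;  // clipped inputs get zero gradient
    }
}
```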
QuantizedConcat<T extends TType> Concatenates quantized tensors along one dimension. 
QuantizedMatMulWithBiasAndDequantize<W extends TNumber> Performs a quantized matrix multiplication with bias addition and dequantizes the result to float. 
QuantizedMatMulWithBiasAndDequantize.Options Optional attributes for QuantizedMatMulWithBiasAndDequantize  
QuantizedMatMulWithBiasAndRequantize<W extends TType> Performs a quantized matrix multiplication with bias addition and requantizes the result to a lower-precision quantized type. 
QuantizedMatMulWithBiasAndRequantize.Options Optional attributes for QuantizedMatMulWithBiasAndRequantize  
QuantizeDownAndShrinkRange<U extends TType> Convert the quantized 'input' tensor into a lower-precision 'output', using the actual distribution of the values to maximize the usage of the lower bit depth and adjusting the output min and max ranges accordingly. 
RequantizationRange Computes a range that covers the actual values present in a quantized tensor. 
Requantize<U extends TType> Converts the quantized `input` tensor into a lower-precision `output`.
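RequantizationRange and Requantize are typically used as a pair: the first finds the range actually occupied by the data, the second remaps the values into that tighter range so the lower bit depth is fully used. A combined sketch, simplified to operate on already-dequantized float values (hypothetical names):

```java
// Sketch: find the actual data range (RequantizationRange step), then remap the
// values onto the full unsigned 8-bit range (Requantize step).
class RequantizeSketch {
    static int[] requantize(float[] values) {
        float actualMin = Float.POSITIVE_INFINITY, actualMax = Float.NEGATIVE_INFINITY;
        for (float v : values) {                     // range-finding pass
            actualMin = Math.min(actualMin, v);
            actualMax = Math.max(actualMax, v);
        }
        float scale = 255.0f / (actualMax - actualMin);
        int[] out = new int[values.length];
        for (int i = 0; i < values.length; i++) {    // remap: min -> 0, max -> 255
            out[i] = Math.round((values[i] - actualMin) * scale);
        }
        return out;
    }
}
```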