View source on GitHub
Module containing N-bit default transforms.
Classes
class ConcatTransform: Transform for Concatenate. Quantize only after concatenation.
class ConcatTransform3Inputs: Transform for a Concatenate with 3 inputs.
class ConcatTransform4Inputs: Transform for a Concatenate with 4 inputs.
class ConcatTransform5Inputs: Transform for a Concatenate with 5 inputs.
class ConcatTransform6Inputs: Transform for a Concatenate with 6 inputs.
class Conv2DBatchNormActivationQuantize: Ensure FQ does not get placed between Conv, BatchNorm and the activation.
class Conv2DBatchNormQuantize: Ensure FQ does not get placed between Conv and BatchNorm.
class Conv2DBatchNormReLUQuantize: Ensure FQ does not get placed between Conv, BatchNorm and ReLU.
class Conv2DReshapeBatchNormActivationQuantize: Ensure FQ does not get placed between Conv, Reshape, BatchNorm and the activation.
class Conv2DReshapeBatchNormQuantize: Ensure FQ does not get placed between Conv, Reshape and BatchNorm.
class Conv2DReshapeBatchNormReLUQuantize: Ensure FQ does not get placed between Conv, Reshape, BatchNorm and ReLU.
class InputLayerQuantize: Quantizes InputLayer by adding a QuantizeLayer after it.
class LayerReLUQuantize: Ensure FQ does not get placed between Add and ReLU.
class LayerReluActivationQuantize: Ensure FQ does not get placed between Add and a ReLU Activation layer.
class SeparableConv1DQuantize: Add QAT support for Keras SeparableConv1D layer.
class SeparableConvQuantize: Break SeparableConv into a DepthwiseConv and Conv layer.
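The common theme in the classes above is graph pattern matching: find a fusable sequence such as Conv2D → BatchNorm → ReLU and make sure a fake-quant (FQ) op is placed only after the whole pattern, never between its layers. The sketch below illustrates that idea with plain Python data structures; it is a hypothetical simplification, not the tfmot transform API (the `Layer` class and `apply_fuse_transform` function are invented for illustration).

```python
# Minimal sketch of the fusion idea behind these transforms (not tfmot code):
# match a layer-kind pattern and suppress FQ ops inside the matched block.
from dataclasses import dataclass

@dataclass
class Layer:
    kind: str                      # e.g. "Conv2D", "BatchNorm", "ReLU"
    quantize_output: bool = True   # whether an FQ op follows this layer

def apply_fuse_transform(layers, pattern):
    """Clear quantize_output on all but the last layer of each pattern match."""
    n = len(pattern)
    i = 0
    while i <= len(layers) - n:
        if [l.kind for l in layers[i:i + n]] == pattern:
            for layer in layers[i:i + n - 1]:
                layer.quantize_output = False  # no FQ inside the fused block
            i += n  # skip past the matched block
        else:
            i += 1
    return layers

model = [Layer("Conv2D"), Layer("BatchNorm"), Layer("ReLU"), Layer("Dense")]
apply_fuse_transform(model, ["Conv2D", "BatchNorm", "ReLU"])
print([(l.kind, l.quantize_output) for l in model])
# -> [('Conv2D', False), ('BatchNorm', False), ('ReLU', True), ('Dense', True)]
```

After the transform, an FQ op would be emitted only after ReLU and Dense, which is the behaviour the Conv2DBatchNorm* classes enforce on real Keras graphs.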