This page provides information about updates made to the
tf.lite.TFLiteConverter Python API in TensorFlow 2.x.
- Support integer (previously, only float) input/output type for integer quantized models using the new inference_input_type and inference_output_type attributes.
- Support conversion and resizing of models with dynamic dimensions.
- Added a new experimental quantization mode with 16-bit activations and 8-bit weights.
- By default, leverage MLIR-based conversion, Google's cutting-edge compiler technology for machine learning. This enables conversion of new classes of models, including Mask R-CNN, Mobile BERT, etc., and supports models with functional control flow.
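The integer input/output types above can be sketched as follows. The tiny Dense model and the random calibration data are placeholders for illustration; substitute your own model and representative dataset:

```python
import numpy as np
import tensorflow as tf

# Placeholder model (hypothetical); substitute your own tf.keras model.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

def representative_dataset():
    # Calibration samples for full-integer quantization (random here for illustration).
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Previously, input/output tensors stayed float even for integer quantized
# models; these attributes make them integer end to end.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# The converted model's input and output tensors are now int8.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_dtype = interpreter.get_input_details()[0]["dtype"]
output_dtype = interpreter.get_output_details()[0]["dtype"]
```

Without the two inference_*_type lines, the same conversion produces an integer quantized model whose input and output tensors remain float32.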
TensorFlow 2.0 vs TensorFlow 1.x
- Renamed the attribute target_ops to target_spec.supported_ops.
- Removed the following attributes:
  - quantization: inference_type, quantized_input_stats, post_training_quantize, default_ranges_stats, reorder_across_fake_quant, change_concat_input_ranges and get_input_arrays(). Instead, quantization aware training is supported through the tf.keras API and post-training quantization uses fewer attributes.
  - visualization: output_format, dump_graphviz_dir and dump_graphviz_video. Instead, the recommended approach for visualizing a TensorFlow Lite model is to use visualize.py.
  - frozen graphs: drop_control_dependency, as frozen graphs are unsupported in TensorFlow 2.x.
- Removed other converter APIs such as tf.lite.toco_convert and tf.lite.TocoConverter.
- Removed other related APIs such as tf.lite.OpHint and tf.lite.constants (the tf.lite.constants.* types have been mapped to tf.* TensorFlow data types, to reduce duplication).
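To illustrate why the removed quantization attributes are no longer needed: in TensorFlow 2.x, post-training dynamic-range quantization is driven by a single optimizations flag rather than the TF1-era attribute combination. A minimal sketch, with a placeholder model standing in for a real one:

```python
import tensorflow as tf

# Placeholder model (hypothetical); substitute your own tf.keras model.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

# TF1 required attributes such as inference_type, quantized_input_stats and
# post_training_quantize; in TF2 the Optimize.DEFAULT flag alone enables
# post-training dynamic-range quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

The result is a serialized flatbuffer (a bytes object) ready to be written to a .tflite file or loaded into tf.lite.Interpreter.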