- Support integer (previously, only float) input/output types for integer quantized models using the new `inference_input_type` and `inference_output_type` attributes. Refer to this example usage.
- Support conversion and resizing of models with dynamic dimensions.
- Added a new experimental quantization mode with 16-bit activations and 8-bit weights.
- By default, leverage MLIR-based conversion, Google's cutting-edge compiler technology for machine learning. This enables conversion of new classes of models, including Mask R-CNN, Mobile BERT, etc., and supports models with functional control flow.
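A minimal sketch of the new integer input/output path, assuming a small `tf.keras` stand-in model and a made-up `representative_data` calibration generator (any float model and real calibration samples would take their place):

```python
import numpy as np
import tensorflow as tf

# Small stand-in model; any float Keras model works here.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2, activation="relu")(inputs)
model = tf.keras.Model(inputs, outputs)

# Hypothetical calibration data for full-integer quantization.
def representative_data():
    for _ in range(8):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# New in this release: integer (rather than float) model input/output.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_input_details()[0]["dtype"])  # int8 rather than float32
```

Without the last two converter attributes, the model body would still be integer quantized, but its input and output tensors would remain float32.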
TensorFlow 2.0 vs TensorFlow 1.x
- Renamed the `target_ops` attribute to `target_spec.supported_ops`.
- Removed the following attributes:
  - quantization: `inference_type`, `quantized_input_stats`, `post_training_quantize`, `default_ranges_stats`, `reorder_across_fake_quant`, `change_concat_input_ranges`, `get_input_arrays()`. Instead, quantization-aware training is supported through the `tf.keras` API, and post-training quantization uses fewer attributes.
  - visualization: `output_format`, `dump_graphviz_dir`, `dump_graphviz_video`. Instead, the recommended approach for visualizing a TensorFlow Lite model is to use visualize.py.
  - frozen graphs: `drop_control_dependency`, as frozen graphs are unsupported in TensorFlow 2.x.
- Removed other converter APIs such as `tf.lite.toco_convert` and `tf.lite.TocoConverter`.
- Removed other related APIs such as `tf.lite.constants` (the `tf.lite.constants.*` types have been mapped to the corresponding `tf.*` TensorFlow data types, to reduce duplication).
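The "fewer attributes" point above can be seen in a short sketch of 2.x dynamic-range post-training quantization, using a hypothetical one-layer `tf.keras` model; the single `optimizations` flag replaces the removed 1.x attributes such as `inference_type`, `post_training_quantize`, and `quantized_input_stats`:

```python
import tensorflow as tf

# Hypothetical one-layer model standing in for a real trained model.
inputs = tf.keras.Input(shape=(8,))
outputs = tf.keras.layers.Dense(16)(inputs)
model = tf.keras.Model(inputs, outputs)

# In 2.x, dynamic-range post-training quantization needs only this flag;
# the removed 1.x quantization attributes have no equivalent here.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()

print(len(quantized_model))  # serialized FlatBuffer size in bytes
```

Full-integer quantization additionally takes a `representative_dataset` generator on the same converter object, but no per-tensor statistics need to be supplied by hand.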
Last updated 2020-09-02 UTC.