
TensorFlow 1.x compatibility

The tf.lite.TFLiteConverter was updated between TensorFlow 1.X and 2.0. This document explains the differences between the 1.X and 2.0 versions of the converter, and provides information about how to use the 1.X version if required.

Summary of changes in Python API between 1.X and 2.0

The following section summarizes the changes in the Python API from 1.X to 2.0. If any of the changes raise concerns, please file a GitHub issue.

Formats supported by TFLiteConverter

The 2.0 version of the converter supports SavedModel and Keras model files generated in both 1.X and 2.0. However, the conversion process no longer supports "frozen graph" GraphDef files generated in 1.X.
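As a minimal sketch of the 2.0 entry points, the following converts a small Keras model (the model itself is a throwaway example, not from this document):

```python
import tensorflow as tf

# A throwaway Keras model, just to illustrate the 2.0 conversion entry points.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=(3,)),
])

# In 2.0, the converter is built from a Keras model or a SavedModel
# directory (via tf.lite.TFLiteConverter.from_saved_model), not from a
# frozen GraphDef.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk.
with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)
```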

Converting frozen graphs

Users who want to convert frozen graph GraphDef files (.pb files) to TensorFlow Lite should use tf.compat.v1.lite.TFLiteConverter.

The following snippet shows a frozen graph file being converted:

import tensorflow as tf

# Path to the frozen graph file
graph_def_file = 'frozen_graph.pb'
# A list of the names of the model's input tensors
input_arrays = ['input_name']
# A list of the names of the model's output tensors
output_arrays = ['output_name']
# Load and convert the frozen graph
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
# Write the converted model to disk
with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)

Quantization-aware training

The following attributes and methods associated with quantization-aware training have been removed from TFLiteConverter in TensorFlow 2.0:

  • inference_type
  • inference_input_type
  • quantized_input_stats
  • default_ranges_stats
  • reorder_across_fake_quant
  • change_concat_input_ranges
  • post_training_quantize - Deprecated in the 1.X API
  • get_input_arrays()

The rewriter function that supports quantization-aware training does not support models generated by TensorFlow 2.0. Additionally, TensorFlow Lite's quantization API is being reworked and streamlined in a direction that supports quantization-aware training through the Keras API. These attributes are therefore absent from the 2.0 API until the new quantization API is launched. Users who want to convert models generated by the rewriter function should use tf.compat.v1.lite.TFLiteConverter.

Changes to TFLiteConverter attributes

The target_ops attribute has become an attribute of TargetSpec and has been renamed to supported_ops, in line with future additions to the optimization framework.
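A minimal sketch of the renamed attribute in 2.0 (the model is a throwaway example used only to construct a converter):

```python
import tensorflow as tf

# A throwaway model, just to construct a converter.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# 1.X:  converter.target_ops = [...]
# 2.0:  the setting lives on TargetSpec and is named supported_ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()
```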

Additionally, the following attributes have been removed:

  • drop_control_dependency (default: True)
  • Graph visualization - The recommended approach for visualizing a TensorFlow Lite graph in TensorFlow 2.0 is to use visualize.py. Unlike GraphViz, it enables users to visualize the graph after post-training quantization has occurred. The following attributes related to graph visualization have been removed:
    • output_format
    • dump_graphviz_dir
    • dump_graphviz_video

General API changes

The following section explains several significant API changes between TensorFlow 1.X and 2.0.

Conversion methods

The following methods that were previously deprecated in 1.X will no longer be exported in 2.0:

  • lite.toco_convert
  • lite.TocoConverter


lite.constants

The lite.constants API was removed in 2.0 in order to decrease duplication between TensorFlow and TensorFlow Lite. The following list maps the lite.constants types to the TensorFlow types:

  • lite.constants.FLOAT - tf.float32
  • lite.constants.INT8 - tf.int8
  • lite.constants.INT32 - tf.int32
  • lite.constants.INT64 - tf.int64
  • lite.constants.STRING - tf.string
  • lite.constants.QUANTIZED_UINT8 - tf.uint8

Additionally, lite.constants.TFLITE and lite.constants.GRAPHVIZ_DOT were removed due to the deprecation of the output_format flag in TFLiteConverter.
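In practice, migrating off lite.constants means passing the TensorFlow dtype directly; a minimal sketch (the model is a throwaway example used only to construct a converter):

```python
import tensorflow as tf

# A throwaway model, just to construct a converter.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# 1.X:  converter.inference_input_type = tf.lite.constants.FLOAT
# 2.0:  use the TensorFlow dtype directly.
converter.inference_input_type = tf.float32
tflite_model = converter.convert()
```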


lite.OpHint

The OpHint API is currently unavailable in 2.0 due to an incompatibility with the 2.0 APIs. This API enabled conversion of LSTM-based models. Support for LSTMs in 2.0 is being investigated. All related lite.experimental APIs have been removed due to this issue.