TensorFlow Lite Converter

The TensorFlow Lite Converter takes a TensorFlow model and produces a model file that the TensorFlow Lite interpreter can run.

From model training to device deployment

After a TensorFlow model is trained, the TensorFlow Lite converter uses that model to generate a TensorFlow Lite FlatBuffer file (.tflite). The converter accepts SavedModels, frozen graphs (models generated by freeze_graph.py), and tf.keras models as input. The TensorFlow Lite FlatBuffer file is deployed to a client device (generally a mobile or embedded device), and the TensorFlow Lite interpreter uses the compact model for on-device inference. This conversion process is shown in the diagram below:
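As a minimal sketch of this workflow, the Python snippet below builds a trivial tf.keras model (purely for illustration; the layer sizes are arbitrary) and converts it to a .tflite FlatBuffer with the `tf.lite.TFLiteConverter` API:

```python
import tensorflow as tf

# A trivial Keras model, standing in for a trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Convert the in-memory Keras model to a TensorFlow Lite FlatBuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the .tflite file that the interpreter loads on-device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

For a SavedModel on disk, `tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")` serves the same role; either way, `convert()` returns the serialized FlatBuffer as bytes.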

TFLite converter workflow