TensorFlow Lite Converter

The TensorFlow Lite Converter takes a trained TensorFlow model and generates a model file in the format used by the TensorFlow Lite interpreter.

From model training to device deployment

After a TensorFlow model is trained, the TensorFlow Lite converter uses that model to generate a TensorFlow Lite FlatBuffer file (.tflite). The converter accepts three input formats: SavedModels, frozen graphs (models generated by freeze_graph.py), and tf.keras HDF5 models. The TensorFlow Lite FlatBuffer file is then deployed to a client device (generally a mobile or embedded device), where the TensorFlow Lite interpreter uses the compressed model for on-device inference. This conversion process is shown in the diagram below:

[Diagram: TFLite converter workflow]
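As a minimal sketch of the workflow above, the following converts a trained tf.keras model into a .tflite FlatBuffer. The toy model, the output path, and the use of `TFLiteConverter.from_keras_model` (a TF 2.x entry point; TF 1.x used methods such as `from_keras_model_file` for HDF5 input) are illustrative assumptions, not part of the original text:

```python
import tensorflow as tf

# Toy model standing in for a real trained model (illustrative only).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Create a converter from the in-memory Keras model and produce the
# TensorFlow Lite FlatBuffer as a bytes object.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the .tflite file for deployment to a client device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The same converter class also exposes `from_saved_model` for SavedModel input, so the choice of entry point follows the format the training workflow already produces.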

The TensorFlow Lite Converter can be used either from Python or from the command line. This allows you to integrate the conversion step into the model design workflow, ensuring the model is easy to convert to a mobile inference graph.
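For the command-line path, the `tflite_convert` tool installed with TensorFlow performs the same conversion. The paths below are illustrative placeholders:

```shell
# Convert a SavedModel directory into a TensorFlow Lite FlatBuffer.
# Replace the paths with your own model and output locations.
tflite_convert \
  --saved_model_dir=/tmp/my_saved_model \
  --output_file=/tmp/model.tflite
```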