From model training to device deployment
The TensorFlow Lite converter generates a TensorFlow Lite
FlatBuffer file (.tflite) from a TensorFlow model.
The converter supports the following input formats:

- GraphDef: models generated by freeze_graph.py.
- Any model taken from a tf.Session (Python API only).
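As a rough illustration of a conversion, the Python sketch below builds a trivial function in place of a trained model and converts it to FlatBuffer bytes. It assumes a TF 2.x-style `tf.lite.TFLiteConverter.from_concrete_functions` entry point (the 1.x API documented here instead offers `from_session` and a frozen-GraphDef path); the function `double` is purely hypothetical:

```python
import tensorflow as tf

# Hypothetical stand-in for a trained TensorFlow model.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def double(x):
    return 2.0 * x

# Convert the concrete function to a TensorFlow Lite FlatBuffer.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]
)
tflite_model = converter.convert()  # raw FlatBuffer bytes

# Persist the FlatBuffer so it can be shipped to a device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The `.tflite` file written here is the artifact that gets bundled with the client application.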
The TensorFlow Lite
FlatBuffer file is then deployed to a client device
(generally a mobile or embedded device), and the TensorFlow Lite interpreter
uses the compressed model for on-device inference. This conversion process is
shown in the diagram below:
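On the device side, inference runs through the TensorFlow Lite interpreter. The sketch below is self-contained, so it first builds a tiny FlatBuffer in-process (using an assumed TF 2.x-style converter API and a hypothetical `add_one` function); a real deployment would instead load a `.tflite` file shipped with the app:

```python
import numpy as np
import tensorflow as tf

# Build a tiny FlatBuffer in-process so the example is self-contained.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def add_one(x):
    return x + 1.0

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [add_one.get_concrete_function()]
)
tflite_model = converter.convert()

# On-device inference with the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])  # adds 1 to each element
```

`tf.lite.Interpreter` also accepts `model_path=` to load the FlatBuffer from disk, which is the usual pattern on a mobile or embedded target.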
The TensorFlow Lite converter can be used in either of two ways: