The TensorFlow Lite
FlatBuffer file is then deployed to a client device (e.g., a
mobile or embedded device) and run locally using the TensorFlow Lite interpreter. This
conversion process is shown in the diagram below:
The TensorFlow Lite converter can be used from the Python API. Using the Python API makes it easier to convert models as part of a model development pipeline and helps mitigate compatibility issues early on.
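For example, a Keras model can be converted with `tf.lite.TFLiteConverter` as part of a training script. The small model below is a hypothetical placeholder; in practice you would pass in your own trained model.

```python
import tensorflow as tf

# A small placeholder Keras model; substitute your own trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Create a converter from the in-memory Keras model and produce
# the TensorFlow Lite FlatBuffer as a bytes object.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the FlatBuffer to disk for deployment to a client device,
# where it can be loaded by the TensorFlow Lite interpreter.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Because the conversion runs in the same Python process as model development, unsupported ops or shape issues surface at convert time rather than on-device.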