The TensorFlow Lite converter takes a TensorFlow model and generates a TensorFlow Lite model (an optimized FlatBuffer format identified by the .tflite file extension). You have the following two options for using the converter:
- Python API (recommended): This makes it easier to convert models as part of the model development pipeline, apply optimizations, add metadata, and more.
- Command line: This only supports basic model conversion.
Helper code: To identify the installed TensorFlow version, run print(tf.__version__), and to learn more about the TensorFlow Lite converter, run print(help(tf.lite.TFLiteConverter)).
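For example, both checks can be run from a Python shell:

```python
import tensorflow as tf

# Print the installed TensorFlow version.
print(tf.__version__)

# Print documentation for the TensorFlow Lite converter API.
help(tf.lite.TFLiteConverter)
```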
Convert a TensorFlow 2.x model using tf.lite.TFLiteConverter. A TensorFlow 2.x model is stored using the SavedModel format and is generated either using the high-level tf.keras.* APIs (a Keras model) or the low-level tf.* APIs (from which you generate concrete functions). As a result, you have the following three options (examples are in the next few sections):
- tf.lite.TFLiteConverter.from_saved_model() (recommended): Converts a SavedModel.
- tf.lite.TFLiteConverter.from_keras_model(): Converts a Keras model.
- tf.lite.TFLiteConverter.from_concrete_functions(): Converts concrete functions.
The following example shows how to convert a SavedModel into a TensorFlow Lite model.
```python
import tensorflow as tf

# Convert the model.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)  # path to the SavedModel directory
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)
```
The following example shows how to convert a Keras model into a TensorFlow Lite model.
```python
import tensorflow as tf

# Create a model using high-level tf.keras.* APIs
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1]),
    tf.keras.layers.Dense(units=16, activation='relu'),
    tf.keras.layers.Dense(units=1)
])
model.compile(optimizer='sgd', loss='mean_squared_error')  # compile the model
model.fit(x=[-1, 0, 1], y=[-3, -1, 1], epochs=5)  # train the model
# (to generate a SavedModel)
tf.saved_model.save(model, "saved_model_keras_dir")

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)
```
The following example shows how to convert concrete functions into a TensorFlow Lite model.
```python
import tensorflow as tf

# Create a model using low-level tf.* APIs
class Squared(tf.Module):
  @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
  def __call__(self, x):
    return tf.square(x)

model = Squared()
# (to run your model)
result = model(tf.constant([5.0]))  # result holds the value 25.0
# (to generate a SavedModel)
tf.saved_model.save(model, "saved_model_tf_dir")

concrete_func = model.__call__.get_concrete_function()

# Convert the model.
# Note that for versions earlier than TensorFlow 2.7, the
# from_concrete_functions API worked with only the first argument:
# > converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], model)
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)
```
Add metadata, which makes it easier to create platform-specific wrapper code when deploying models on devices.
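As a sketch only, metadata can be attached with the separate tflite-support package; the following assumes an image classifier, and the file names and normalization values are placeholder assumptions:

```python
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils

# Placeholder paths -- substitute your own model and label files.
_MODEL_PATH = "model.tflite"
_LABEL_FILE = "labels.txt"
_SAVE_TO_PATH = "model_with_metadata.tflite"

# Example normalization parameters (assumed [0, 255] -> [-1, 1] input).
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5

# Build the metadata and write it into a copy of the model.
writer = image_classifier.MetadataWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
```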
The following are common conversion errors and their solutions:
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select TF Select ops: ..., .., ...
Solution: The error occurs because your model has TF ops that don't have a corresponding TFLite implementation. You can resolve this by using the TF op in the TFLite model (recommended, sketched below). If you want to generate a model with TFLite ops only, you can either add a request for the missing TFLite op in GitHub issue #21526 (leave a comment if your request hasn't already been mentioned) or create the TFLite op yourself.
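A minimal sketch of the recommended approach (assuming a SavedModel at saved_model_dir), mirroring the ops_select guide linked in the error message:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # use built-in TFLite ops where available
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels for the rest
]
tflite_model = converter.convert()
```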
.. is neither a custom op nor a flex op
Solution: If this TF op is:

- Supported in TF: The error occurs because the TF op is missing from the allowlist (an exhaustive list of TF ops supported by TFLite). You can resolve this as follows:
- Unsupported in TF: The error occurs because TFLite is unaware of the custom TF operator defined by you. You can resolve this as follows (one step of that workflow is sketched after this list):
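For illustration only, a minimal sketch (assuming a SavedModel at saved_model_dir) of letting the converter emit a custom op for an operator it doesn't recognize; a matching kernel still has to be registered with the TFLite runtime before inference:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# Allow the converter to emit custom ops for operators it can't map to
# built-in TFLite ops; you must register matching kernels at runtime.
converter.allow_custom_ops = True
tflite_model = converter.convert()
```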
It is highly recommended that you use the Python API listed above instead, if possible.
If you've installed TensorFlow 2.x from pip, use the tflite_convert command. (If you've installed TensorFlow 2.x from source, you can replace 'tflite_convert' with 'bazel run //tensorflow/lite/python:tflite_convert --' in the following sections, and if you've installed TensorFlow 1.x, then refer to GitHub.) To view all the available flags, use the following command:
```sh
$ tflite_convert --help

`--output_file`. Type: string. Full path of the output file.
`--saved_model_dir`. Type: string. Full path to the SavedModel directory.
`--keras_model_file`. Type: string. Full path to the Keras H5 model file.
`--enable_v1_converter`. Type: bool. (default False) Enables the converter and flags used in TF 1.x instead of TF 2.x.
```

You are required to provide the `--output_file` flag and either the `--saved_model_dir` or `--keras_model_file` flag.
The following example shows how to convert a SavedModel:

```sh
tflite_convert \
  --saved_model_dir=/tmp/mobilenet_saved_model \
  --output_file=/tmp/mobilenet.tflite
```
The following example shows how to convert a Keras H5 model:

```sh
tflite_convert \
  --keras_model_file=/tmp/mobilenet_keras_model.h5 \
  --output_file=/tmp/mobilenet.tflite
```
Use the TensorFlow Lite interpreter to run inference on a client device (e.g. mobile, embedded).
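A minimal Python sketch of that step, assuming the model.tflite file saved above:

```python
import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed random data matching the model's input shape and run inference.
input_data = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]['index']))
```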