Interpreter interface for running TensorFlow Lite models.

Models obtained from TfLiteConverter can be run in Python with Interpreter.

As an example, let's generate a simple Keras model and convert it to TFLite (TfLiteConverter also supports other input formats, including from_saved_model and from_concrete_functions):

import numpy as np
import tensorflow as tf

x = np.array([[1.], [2.]])
y = np.array([[2.], [4.]])
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(x, y, epochs=1)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

tflite_model can be saved to a file and loaded later, or directly into the Interpreter. Since TensorFlow Lite pre-plans tensor allocations to optimize inference, the user needs to call allocate_tensors() before any inference.

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()  # Needed before execution!

Sample execution:

input = interpreter.get_input_details()[0]  # Model has single input.
output = interpreter.get_output_details()[0]  # Model has single output.
input_data = tf.constant(1., shape=[1, 1])
interpreter.set_tensor(input['index'], input_data)
interpreter.invoke()
interpreter.get_tensor(output['index']).shape
(1, 1)

Use get_signature_runner() for a more user-friendly inference API.

  • model_path: Path to the TF-Lite FlatBuffer file.
  • model_content: Content of the model.
  • experimental_delegates: Experimental. Subject to change. List of TfLiteDelegate objects returned by lite.load_delegate().
  • num_threads: Sets the number of threads used by the interpreter and available to CPU kernels. If not set, the interpreter uses an implementation-dependent default number of threads. Currently, only a subset of kernels, such as conv, support multi-threading. num_threads should be >= -1. Setting num_threads to 0 disables multithreading, which is equivalent to setting num_threads to 1. If set to -1, the number of threads used is implementation-defined and platform-dependent.
  • experimental_op_resolver_type: The op resolver used by the interpreter. It must be an instance of OpResolverType. By default, the built-in op resolver is used, which corresponds to tflite::ops::builtin::BuiltinOpResolver in C++.
  • experimental_preserve_all_tensors: If true, intermediate tensors used during computation are preserved for inspection, and if the passed op resolver type is AUTO or BUILTIN, the type is changed to BUILTIN_WITHOUT_DEFAULT_DELEGATES so that no TensorFlow Lite default delegates are applied. If false, reading intermediate tensors may yield undefined values or None, especially when the graph has been modified by a TensorFlow Lite default delegate.

ValueError If the interpreter could not be created.
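As an illustration of these constructor arguments, the sketch below builds an interpreter from in-memory model content with an explicit num_threads. The tiny `double` model is hypothetical, used only so the snippet is self-contained; it is not part of the original documentation.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model (not from the docs): doubles a [1, 1] float input.
@tf.function(input_signature=[tf.TensorSpec(shape=[1, 1], dtype=tf.float32)])
def double(x):
    return 2.0 * x

tflite_model = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]).convert()

# num_threads=2 requests two CPU threads; -1 leaves the choice to the
# implementation, and 0 is equivalent to 1 (no multithreading).
interpreter = tf.lite.Interpreter(model_content=tflite_model, num_threads=2)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.array([[3.0]], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out['index']))  # [[6.]]
```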



allocate_tensors


get_input_details

Gets model input tensor details.

A list in which each item is a dictionary with details about an input tensor. Each dictionary contains the following fields that describe the tensor:

  • name: The tensor name.
  • index: The tensor index in the interpreter.
  • shape: The shape of the tensor.
  • shape_signature: Same as shape for models with known/fixed shapes. If any dimension sizes are unknown, they are indicated with -1.

  • dtype: The numpy data type (such as np.int32 or np.uint8).

  • quantization: Deprecated, use quantization_parameters. This field only works for per-tensor quantization, whereas quantization_parameters works in all cases.

  • quantization_parameters: A dictionary of parameters used to quantize the tensor:
      - scales: List of scales (one if per-tensor quantization).
      - zero_points: List of zero points (one if per-tensor quantization).
      - quantized_dimension: Specifies the dimension of per-axis quantization, in the case of multiple scales/zero_points.

  • sparsity_parameters: A dictionary of parameters used to encode a sparse tensor. This is empty if the tensor is dense.
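A minimal sketch of reading these fields, assuming a hypothetical unquantized one-input model (so the quantization lists come back empty); the `double` model below is an illustration, not part of the original docs:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model (not from the docs): doubles a [1, 1] float input.
@tf.function(input_signature=[tf.TensorSpec(shape=[1, 1], dtype=tf.float32)])
def double(x):
    return 2.0 * x

tflite_model = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

detail = interpreter.get_input_details()[0]
print(detail['name'])             # tensor name assigned by the converter
print(detail['index'])            # position in the interpreter's tensor list
print(detail['shape'])            # [1 1]
print(detail['shape_signature'])  # [1 1]; dynamic dims would show as -1
print(detail['dtype'])            # <class 'numpy.float32'>
print(detail['quantization_parameters']['scales'])  # empty: model is float
```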


get_output_details

Gets model output tensor details.

A list in which each item is a dictionary with details about an output tensor. The dictionary contains the same fields as described for get_input_details().


get_signature_list

Gets list of SignatureDefs in the model.


signatures = interpreter.get_signature_list()

# {
#   'add': {'inputs': ['x', 'y'], 'outputs': ['output_0']}
# }

Then, using the names in the signature list, you can get a callable with get_signature_runner().

A list of SignatureDef details in a dictionary structure. It is keyed on the SignatureDef method name, and the value holds a dictionary of the signature's inputs and outputs.


get_signature_runner

Gets callable for inference of specific SignatureDef.

Example usage:

interpreter = tf.lite.Interpreter(model_content=tflite_model)
fn = interpreter.get_signature_runner('div_with_remainder')
output = fn(x=np.array([3]), y=np.array([2]))
# {
#   'quotient': array([1.], dtype=float32)
#   'remainder': array([1.], dtype=float32)
# }

None can be passed for signature_key if the model has only a single signature.

All names used are the names defined in this specific SignatureDef.

signature_key Signature key for the SignatureDef. It can be None if and only if the model has a single SignatureDef. Default value is None.

This returns a callable that runs inference for the SignatureDef identified by the 'signature_key' argument. The callable takes keyword arguments corresponding to the inputs of the SignatureDef, which should have numpy values. The callable returns a dictionary that maps output names to numpy values of the computed results.

ValueError If passed signature_key is invalid.


get_tensor

Gets the value of the output tensor (get a copy).

If you wish to avoid the copy, use tensor(). This function cannot be used to read intermediate results.

tensor_index Tensor index of the tensor to get. This value can be obtained from the 'index' field in get_output_details.
subgraph_index Index of the subgraph from which to fetch the tensor. Default value is 0, which means to fetch from the primary subgraph.

A numpy array.
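A short sketch of the copy semantics, using a hypothetical single-op model (not from the original docs): mutating the array returned by get_tensor() does not affect the interpreter's own buffer.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model (not from the docs): doubles a [1, 1] float input.
@tf.function(input_signature=[tf.TensorSpec(shape=[1, 1], dtype=tf.float32)])
def double(x):
    return 2.0 * x

tflite_model = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.array([[2.0]], dtype=np.float32))
interpreter.invoke()

result = interpreter.get_tensor(out['index'])  # independent numpy copy
result[0][0] = -1.0  # safe: only the copy changes, not the interpreter state
print(interpreter.get_tensor(out['index']))  # still [[4.]]
```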


get_tensor_details

Gets tensor details for every tensor with valid tensor details.

Tensors where required information about the tensor is not found are not added to the list. This includes temporary tensors without a name.

A list of dictionaries containing tensor information.
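For example, the details can be dumped for every tensor in a converted model; the tiny `double` model below is hypothetical, used only to make the snippet self-contained:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model (not from the docs): doubles a [1, 1] float input.
@tf.function(input_signature=[tf.TensorSpec(shape=[1, 1], dtype=tf.float32)])
def double(x):
    return 2.0 * x

tflite_model = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# One dictionary per tensor with valid details (inputs, outputs, weights, ...).
for d in interpreter.get_tensor_details():
    print(d['index'], d['name'], d['shape'], d['dtype'])
```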


invoke

Invoke the interpreter.

Be sure to set the input sizes, allocate tensors, and fill values before calling this. Also note that this function releases the GIL, so heavy computation can be done in the background while the Python interpreter continues. No other function on this object should be called while the invoke() call has not finished.

ValueError If the underlying interpreter fails.
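Because invoke() releases the GIL, it can run on a worker thread while the main thread keeps executing Python. A sketch with a hypothetical model (not from the original docs); note that no other method on the interpreter may be called until the worker has joined:

```python
import threading

import numpy as np
import tensorflow as tf

# Hypothetical stand-in model (not from the docs): doubles a [1, 1] float input.
@tf.function(input_signature=[tf.TensorSpec(shape=[1, 1], dtype=tf.float32)])
def double(x):
    return 2.0 * x

tflite_model = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.array([[5.0]], dtype=np.float32))

# invoke() releases the GIL, so it can run on a worker thread while the
# main thread continues with other Python work.
worker = threading.Thread(target=interpreter.invoke)
worker.start()
worker.join()
print(interpreter.get_tensor(out['index']))  # [[10.]]
```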


reset_all_variables


resize_tensor_input

Resizes an input tensor.

input_index Tensor index of the input to set. This value can be obtained from the 'index' field in get_input_details.
tensor_size The tensor shape to resize the input to.
strict Only unknown dimensions can be resized when strict is True. Unknown dimensions are indicated as -1 in the shape_signature attribute of a given tensor. (default False)

ValueError If the interpreter could not resize the input tensor.


interpreter = Interpreter(model_content=tflite_model)
interpreter.resize_tensor_input(0, [num_test_images, 224, 224, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(0, test_images)
interpreter.invoke()


set_tensor

Sets the value of the input tensor.

Note this copies data in value.

If you want to avoid copying, you can use the tensor() function to get a numpy buffer pointing to the input buffer in the tflite interpreter.

tensor_index Tensor index of the tensor to set. This value can be obtained from the 'index' field in get_input_details.
value Value of the tensor to set.

ValueError If the interpreter could not set the tensor.


tensor

Returns function that gives a numpy view of the current tensor buffer.

This allows reading and writing to this tensor without copies. This more closely mirrors the C++ Interpreter class interface's tensor() member, hence the name. Be careful not to hold these output references through calls to allocate_tensors() and invoke(). This function cannot be used to read intermediate results.


input = interpreter.tensor(interpreter.get_input_details()[0]["index"])
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
for i in range(10):
  input().fill(3.)
  interpreter.invoke()
  print("inference %s" % output())

Notice how this function avoids making a numpy array directly. This is because it is important not to hold actual numpy views to the data longer than necessary. If you do, the interpreter can no longer be invoked, because it is possible the interpreter would resize and invalidate the referenced tensors. The NumPy API doesn't allow any mutability of the underlying buffers.


input = interpreter.tensor(interpreter.get_input_details()[0]["index"])()
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])()
interpreter.allocate_tensors()  # This will throw RuntimeError
for i in range(10):
  interpreter.invoke()  # this will throw RuntimeError since input, output
                        # references are still held

tensor_index Tensor index of the tensor to get. This value can be obtained from the 'index' field in get_output_details.

A function that can return a new numpy array pointing to the internal TFLite tensor state at any point. It is safe to hold the function forever, but it is not safe to hold the numpy array forever.