tf.lite.Interpreter

Interpreter interface for running TensorFlow Lite models.


This makes the TensorFlow Lite interpreter accessible in Python. It is possible to use this interpreter in a multithreaded Python environment, but you must be sure to call functions of a particular instance from only one thread at a time. So if you want to have 4 threads running different inferences simultaneously, create an interpreter for each one as thread-local data. Similarly, if you are calling invoke() in one thread on a single interpreter but you want to use tensor() on another thread once it is done, you must use a synchronization primitive between the threads to ensure invoke has returned before calling tensor().
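
As a minimal sketch of the per-thread pattern described above (the model path "model.tflite" and the input shape are placeholder assumptions, not part of this API):

    import threading

    import numpy as np
    import tensorflow as tf

    def worker(batch):
        # One interpreter per thread; instances are never shared across threads.
        interpreter = tf.lite.Interpreter(model_path="model.tflite")
        interpreter.allocate_tensors()
        input_details = interpreter.get_input_details()
        interpreter.set_tensor(input_details[0]["index"], batch)
        interpreter.invoke()

    # Four threads running inferences simultaneously, each with its own
    # interpreter. The zero-filled input is a placeholder.
    threads = [
        threading.Thread(target=worker,
                         args=(np.zeros((1, 224, 224, 3), np.float32),))
        for _ in range(4)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()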

Args
model_path Path to the TF-Lite FlatBuffer file.
model_content Content of the model, as a bytes object.
experimental_delegates Experimental. Subject to change. List of TfLiteDelegate objects returned by lite.load_delegate().
num_threads Sets the number of threads used by the interpreter and available to CPU kernels. If not set, the interpreter will use an implementation-dependent default number of threads. Currently, only a subset of kernels, such as conv, support multi-threading.

Raises
ValueError If the interpreter could not be created.
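
A short construction sketch (the file name "model.tflite" is a placeholder); exactly one of model_path or model_content should be given:

    import tensorflow as tf

    # From a FlatBuffer file on disk; num_threads is optional.
    interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)

    # Or from model bytes already in memory.
    with open("model.tflite", "rb") as f:
        interpreter = tf.lite.Interpreter(model_content=f.read())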

Methods

allocate_tensors

Allocates memory for the model's tensors. This must be called before running inference, and again after resizing an input tensor.
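
For example, under the same placeholder assumptions as above (and using resize_tensor_input from this class's wider API), re-allocation after a resize might look like:

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    # Resizing an input invalidates the previous allocation, so allocate
    # again before the next invoke(). The new shape is a placeholder.
    interpreter.resize_tensor_input(inp["index"], [2, 224, 224, 3])
    interpreter.allocate_tensors()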

get_input_details


Gets model input details.

Returns
A list of input details; each entry is a dictionary describing one input tensor, with fields including 'name', 'index', 'shape', and 'dtype'.
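
A sketch of inspecting the input details (same placeholder model path as above):

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    for detail in interpreter.get_input_details():
        # 'index' is what set_tensor()/get_tensor() expect; 'shape' and
        # 'dtype' describe the array the model expects as input.
        print(detail["name"], detail["index"], detail["shape"], detail["dtype"])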

get_output_details


Gets model output details.

Returns
A list of output details; each entry is a dictionary with the same layout as the input details, and its 'index' field is what get_tensor() expects.
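
A matching sketch for the outputs (same placeholder model path as above):

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    for detail in interpreter.get_output_details():
        print(detail["name"], detail["index"], detail["shape"], detail["dtype"])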

get_tensor


Gets the value of the output tensor (returns a copy).

If you wish to avoid the copy, use tensor(). This function cannot be used to read intermediate results.

Args
tensor_index Tensor index of the tensor to get. This value can be obtained from the 'index' field in get_output_details.

Returns
A NumPy array with a copy of the tensor's value.
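
A sketch of a full inference round trip contrasting get_tensor() with tensor(); the model path and the zero-filled input are placeholders:

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()

    # get_tensor() returns an independent copy that remains valid after
    # subsequent invocations.
    result = interpreter.get_tensor(out["index"])

    # tensor() instead returns a callable producing a view into the
    # interpreter's internal buffer; this avoids the copy, but the view must
    # not be held across calls that modify interpreter state (e.g. invoke()).
    view = interpreter.tensor(out["index"])()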

get_tensor_details


Gets tensor details for every tensor with valid tensor details.

Tensors where required information about the tensor is not found are not added to the list. This includes temporary tensors without a name.

Returns
A list of dictionaries containing tensor information.
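
A sketch of iterating over all tensor details (same placeholder model path as above):

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    for detail in interpreter.get_tensor_details():
        # 'quantization' is a (scale, zero_point) pair; (0.0, 0) for
        # non-quantized (float) tensors.
        print(detail["index"], detail["name"], detail["shape"],
              detail["quantization"])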