Interpreter interface for TensorFlow Lite Models.
Compat aliases for migration: `tf.compat.v1.lite.Interpreter`. See the Migration guide for more details.
tf.lite.Interpreter( model_path=None, model_content=None, experimental_delegates=None, num_threads=None )
This makes the TensorFlow Lite interpreter accessible in Python. It is possible to use this interpreter in a multithreaded Python environment, but you must be sure to call functions of a particular instance from only one thread at a time. So if you want to have 4 threads running different inferences simultaneously, create an interpreter for each one as thread-local data. Similarly, if you are calling invoke() in one thread on a single interpreter but you want to use tensor() on another thread once it is done, you must use a synchronization primitive between the threads to ensure invoke has returned before calling tensor().
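A minimal sketch of the thread-per-interpreter pattern described above. The model path `"model.tflite"` is a placeholder, and `split_batches` is a hypothetical helper for dividing work among workers; only the `tf.lite.Interpreter` calls come from the API itself.

```python
# Sketch: one private Interpreter per worker thread, stored as
# thread-local data, as the thread-safety note above requires.
import threading

import numpy as np

_local = threading.local()

def get_thread_interpreter(model_path="model.tflite"):
    """Lazily create one Interpreter per thread (thread-local data)."""
    if not hasattr(_local, "interpreter"):
        import tensorflow as tf  # deferred so the pure helper below runs without TF
        interp = tf.lite.Interpreter(model_path=model_path)
        interp.allocate_tensors()
        _local.interpreter = interp
    return _local.interpreter

def run_inference(batch):
    """Run one inference on this thread's private interpreter."""
    interp = get_thread_interpreter()
    input_index = interp.get_input_details()[0]["index"]
    output_index = interp.get_output_details()[0]["index"]
    interp.set_tensor(input_index, np.asarray(batch, dtype=np.float32))
    interp.invoke()
    return interp.get_tensor(output_index)  # copies the output tensor

def split_batches(items, num_workers):
    """Round-robin partition of work items across worker threads (hypothetical helper)."""
    return [items[i::num_workers] for i in range(num_workers)]
```

Because `invoke()` and `get_tensor()` here always run on the same thread's interpreter, no extra synchronization is needed between them.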
Args:
|`model_path`||Path to TF-Lite Flatbuffer file.|
|`model_content`||Content of model.|
|`experimental_delegates`||Experimental. Subject to change. List of TfLiteDelegate objects returned by lite.load_delegate().|
|`num_threads`||Sets the number of threads used by the interpreter and available to CPU kernels. If not set, the interpreter will use an implementation-dependent default number of threads. Currently, only a subset of kernels, such as conv, support multi-threading.|
Raises:
|`ValueError`||If the interpreter was unable to be created.|
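The constructor accepts a model either as a file path or as raw Flatbuffer bytes. A short sketch of both forms, assuming a placeholder path `"model.tflite"`; `load_model_bytes` and `make_interpreter` are hypothetical helpers:

```python
# Sketch: two equivalent ways to supply a model to the constructor.

def load_model_bytes(path):
    """Read a Flatbuffer model file into memory for model_content."""
    with open(path, "rb") as f:
        return f.read()

def make_interpreter(path="model.tflite", threads=4):
    import tensorflow as tf  # deferred import
    # From a file on disk, with an explicit thread count for CPU kernels:
    interp = tf.lite.Interpreter(model_path=path, num_threads=threads)
    # Equivalently, from bytes already in memory:
    #   interp = tf.lite.Interpreter(model_content=load_model_bytes(path))
    interp.allocate_tensors()
    return interp
```

Passing `model_content` is useful when the model is bundled inside another file or downloaded at runtime, so it never needs to touch disk.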
get_input_details()
Gets model input details.
Returns:
|A list of input details.|
get_output_details()
Gets model output details.
Returns:
|A list of output details.|
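Each entry in the input and output details lists is a dict carrying, among other fields, `'name'`, `'index'`, `'shape'`, and `'dtype'` keys. A small sketch of working with these dicts; `index_by_name` is a hypothetical convenience helper, not part of the API:

```python
# Sketch: turn a details list into a name -> tensor-index mapping,
# so tensors can be addressed by name in set_tensor/get_tensor calls.

def index_by_name(details):
    """Map tensor names to tensor indices from a details list."""
    return {d["name"]: d["index"] for d in details}

# Typical use (requires a loaded interpreter):
#   interp.allocate_tensors()
#   inputs = index_by_name(interp.get_input_details())
#   interp.set_tensor(inputs["input_1"], data)
```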
get_tensor( tensor_index )
Gets the value of the output tensor (get a copy). If you wish to avoid the copy, use tensor(). This function cannot be used to read intermediate results.
Args:
|`tensor_index`||Tensor index of tensor to get. This value can be obtained from the 'index' field in get_output_details.|
Returns:
|A numpy array.|
get_tensor_details()
Gets tensor details for every tensor with valid tensor details.
Tensors where required information about the tensor is not found are not added to the list.