Interpreter interface for TensorFlow Lite Models.
tf.lite.Interpreter(
    model_path=None,
    model_content=None,
    experimental_delegates=None,
    num_threads=None,
    experimental_op_resolver_type=tf.lite.experimental.OpResolverType.AUTO,
    experimental_preserve_all_tensors=False
)
This makes the TensorFlow Lite interpreter accessible in Python. It is possible to use this interpreter in a multithreaded Python environment, but you must be sure to call functions of a particular instance from only one thread at a time. So if you want to have 4 threads running different inferences simultaneously, create an interpreter for each one as thread-local data. Similarly, if you are calling invoke() in one thread on a single interpreter but you want to use tensor() on another thread once it is done, you must use a synchronization primitive between the threads to ensure invoke() has returned before calling tensor().
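For example, a typical single-threaded inference pass looks like the following minimal sketch; the model path "model.tflite" and the random input are placeholders for illustration.

import numpy as np
import tensorflow as tf

# "model.tflite" is a placeholder path to a TF-Lite Flatbuffer file.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Query the model's input and output tensor metadata.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fill the first input with random data of the expected shape and dtype.
input_data = np.random.random_sample(
    tuple(input_details[0]["shape"])).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)

# Run inference and read the result back out.
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]["index"])

As noted above, to run several inferences in parallel, create one such interpreter per thread rather than sharing a single instance across threads.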
Args
|model_path|Path to TF-Lite Flatbuffer file.|
|model_content|Content of model.|
|experimental_delegates|Experimental. Subject to change. List of TfLiteDelegate objects returned by lite.load_delegate().|
|num_threads|Sets the number of threads used by the interpreter and available to CPU kernels. If not set, the interpreter will use an implementation-dependent default number of threads. Currently, only a subset of kernels, such as conv, support multi-threading. num_threads should be >= -1. Setting num_threads to 0 disables multithreading, which is equivalent to setting num_threads to 1. If set to -1, the number of threads used will be implementation-defined and platform-dependent.|
|experimental_op_resolver_type|The op resolver used by the interpreter. It must be an instance of OpResolverType. By default, the built-in op resolver is used, which corresponds to tflite::ops::builtin::BuiltinOpResolver in C++.|
|experimental_preserve_all_tensors|If true, intermediate tensors used during computation are preserved for inspection, and if the passed op resolver type is AUTO or BUILTIN, the type will be changed to BUILTIN_WITHOUT_DEFAULT_DELEGATES so that no TensorFlow Lite default delegates are applied. If false, reading intermediate tensors may return undefined values or None, especially when the graph is successfully modified by a TensorFlow Lite default delegate.|

Raises
|ValueError|If the interpreter was unable to be created.|
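As an illustration of the constructor arguments above, the following sketch creates an interpreter that uses up to four CPU threads and preserves intermediate tensors; the model path is again a placeholder.

import tensorflow as tf

# "model.tflite" is a placeholder path.
interpreter = tf.lite.Interpreter(
    model_path="model.tflite",
    # Allow CPU kernels that support multi-threading (such as conv) to use
    # up to four threads.
    num_threads=4,
    # Keep intermediate tensors around after invoke() so they can be
    # inspected; with an AUTO or BUILTIN resolver this also switches to
    # BUILTIN_WITHOUT_DEFAULT_DELEGATES, as described above.
    experimental_preserve_all_tensors=True)
interpreter.allocate_tensors()

If the underlying interpreter cannot be built from the given model, the constructor raises ValueError as listed under Raises.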