

Interpreter interface for TensorFlow Lite Models.


This makes the TensorFlow Lite interpreter accessible in Python. It is possible to use this interpreter in a multithreaded Python environment, but you must be sure to call functions of a particular instance from only one thread at a time. So if you want to have 4 threads running different inferences simultaneously, create an interpreter for each one as thread-local data. Similarly, if you are calling invoke() in one thread on a single interpreter but you want to use tensor() on another thread once it is done, you must use a synchronization primitive between the threads to ensure invoke has returned before calling tensor().
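The per-thread pattern described above can be sketched as follows. This is a minimal illustration, not TensorFlow code: `make_interpreter` is a hypothetical stand-in for `tf.lite.Interpreter(model_path=...)` so the sketch runs without TensorFlow installed.

```python
import threading

# Each of the 4 worker threads gets its own interpreter as thread-local
# data, so no instance is ever called from two threads at once.
tls = threading.local()
seen = []          # interpreters observed, for demonstration only
lock = threading.Lock()

def make_interpreter():
    # Real code would do: return tf.lite.Interpreter(model_path="model.tflite")
    return object()  # placeholder so the sketch runs without TensorFlow

def worker():
    if not hasattr(tls, "interpreter"):
        tls.interpreter = make_interpreter()  # one interpreter per thread
    # ... run inference with tls.interpreter here ...
    with lock:
        seen.append(tls.interpreter)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len({id(x) for x in seen}))  # -> 4 distinct interpreters, one per thread
```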

model_path Path to the TF-Lite FlatBuffer file.
model_content Content of the model file (bytes).
experimental_delegates Experimental. Subject to change. List of TfLiteDelegate objects returned by lite.load_delegate().
num_threads Sets the number of threads used by the interpreter and available to CPU kernels. If not set, the interpreter will use an implementation-dependent default number of threads. Currently, only a subset of kernels, such as conv, support multi-threading.
experimental_op_resolver_type The op resolver used by the interpreter. It must be an instance of OpResolverType. By default, we use the built-in op resolver which corresponds to tflite::ops::builtin::BuiltinOpResolver in C++.
experimental_preserve_all_tensors If true, then intermediate tensors used during computation are preserved for inspection. Otherwise, reading intermediate tensors provides undefined values.

ValueError If the interpreter was unable to be created.



allocate_tensors


get_input_details

Gets model input details.

A list of input details.


get_output_details

Gets model output details.

A list of output details.


get_signature_list

Gets the list of SignatureDefs in the model.


signatures = interpreter.get_signature_list()

# {
#   'add': {'inputs': ['x', 'y'], 'outputs': ['output_0']}
# }

Then, using the names in the signature list, you can get a callable with get_signature_runner().

A list of SignatureDef details in a dictionary structure. It is keyed on the SignatureDef method name, and each value holds a dictionary of that signature's inputs and outputs.
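The keyed structure can be walked directly. A small sketch: `signatures` below mirrors the example output shown earlier; in real code it would come from interpreter.get_signature_list().

```python
# Illustrative value, matching the 'add' example above.
signatures = {'add': {'inputs': ['x', 'y'], 'outputs': ['output_0']}}

# Each key is a SignatureDef method name; each value lists its I/O names.
for method_name, io in signatures.items():
    print(method_name, '- inputs:', io['inputs'], 'outputs:', io['outputs'])
```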


get_signature_runner

Gets callable for inference of specific SignatureDef.

Example usage:

interpreter = tf.lite.Interpreter(model_content=tflite_model)
fn = interpreter.get_signature_runner('div_with_remainder')
output = fn(x=np.array([3]), y=np.array([2]))
# {
#   'quotient': array([1.], dtype=float32)
#   'remainder': array([1.], dtype=float32)
# }

None can be passed for method_name if the model has only a single SignatureDef.

All names used are the names defined in that specific SignatureDef.

method_name The exported method name for the SignatureDef. It can be None if and only if the model has a single SignatureDef. Default value is None.

This returns a callable that runs inference for the SignatureDef identified by 'method_name'. The callable accepts keyword arguments corresponding to the inputs of the SignatureDef, whose values should be numpy arrays, and returns a dictionary mapping output names to numpy arrays of the computed results.

ValueError If the passed method_name is invalid.


get_tensor

Gets the value of the output tensor (get a copy).

If you wish to avoid the copy, use tensor(). This function cannot be used to read intermediate results.

tensor_index Tensor index of the tensor to get. This value can be obtained from the 'index' field in get_output_details.

A numpy array.
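The copy-vs-view distinction above can be sketched with plain numpy: get_tensor() behaves like an independent copy of the output buffer, while tensor() hands back something that shares memory with it. The arrays below are illustrative stand-ins; real code would call the interpreter methods.

```python
import numpy as np

# Stand-in for the interpreter's internal output buffer.
buffer = np.array([1.0, 2.0, 3.0], dtype=np.float32)

copied = buffer.copy()   # like get_tensor(): a safe, independent copy
view = buffer[:]         # like tensor(): shares memory with the buffer

# A later write to the buffer (e.g. another invoke()) ...
buffer[0] = 99.0

print(copied[0])  # -> 1.0  (the copy is unchanged)
print(view[0])    # -> 99.0 (the view sees the write)
```

This is why the text warns you to synchronize invoke() and tensor() across threads: the view remains tied to the interpreter's live memory.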


get_tensor_details

Gets tensor details for every tensor with valid tensor details.

Tensors where required information about the tensor is not found are not added to the list. This includes temporary tensors without a name.

A list of dictionaries containing tensor information.
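A common use of this list is looking up a tensor's index by name. A small sketch: the `details` entries below are illustrative and abbreviated; real entries come from interpreter.get_tensor_details() and contain more keys.

```python
# Abbreviated sample of what get_tensor_details() might return.
details = [
    {'name': 'input', 'index': 0, 'shape': [1, 224, 224, 3]},
    {'name': 'output', 'index': 17, 'shape': [1, 1000]},
]

def index_of(details, name):
    """Return the tensor index for the tensor with the given name."""
    for d in details:
        if d['name'] == name:
            return d['index']
    raise KeyError(name)

print(index_of(details, 'output'))  # -> 17
```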


invoke

Invoke the interpreter.

Be sure to set the input sizes, allocate tensors and fill values before calling this. Also, note that this function releases the GIL so heavy computation can be done in the background while the Python interpreter continues. No other function on this object should be called while the invoke() call has not finished.

ValueError When the underlying interpreter fails.


reset_all_variables


resize_tensor_input

Resizes an input tensor.

input_index Tensor index of the input to set. This value can be obtained from the 'index' field in get_input_details.
tensor_size The tensor_shape to resize the input to.
strict Only unknown dimensions can be resized when strict is True. Unknown dimensions are indicated as -1 in the shape_signature attribute of a given tensor. (default False)

ValueError If the interpreter could not resize the input tensor.


interpreter = Interpreter(model_content=tflite_model)
interpreter.resize_tensor_input(0, [num_test_images, 224, 224, 3])
# Tensors must be re-allocated after resizing, before setting any values.
interpreter.allocate_tensors()
interpreter.set_tensor(0, test_images)
interpreter.invoke()