tflite::InterpreterOptions

#include <interpreter.h>

Options class for Interpreter.
Summary
WARNING: This is an experimental API and subject to change.
Constructors and Destructors

| Constructor | |
|---|---|
| InterpreterOptions() | |
Public functions

| Function | Return type | Description |
|---|---|---|
| GetDynamicAllocationForLargeTensors() | int | Returns the size threshold for the dynamic tensor allocation method. |
| GetEnsureDynamicTensorsAreReleased() | bool | Returns whether the experimental_ensure_dynamic_tensors_are_released_ feature is enabled. |
| GetPreserveAllTensors() | bool | Returns whether the experimental_preserve_all_tensors_ feature is enabled. |
| SetDynamicAllocationForLargeTensors(int value) | void | Uses the dynamic tensor allocation method for large tensors instead of the static memory planner. |
| SetEnsureDynamicTensorsAreReleased(bool value) | void | Forces all intermediate dynamic tensors to be released once they are no longer used by the model. |
| SetPreserveAllTensors(bool value) | void | Preserves all intermediate tensors for debugging. |
Public functions

GetDynamicAllocationForLargeTensors

int GetDynamicAllocationForLargeTensors()

Returns the size threshold for the dynamic tensor allocation method. Returns zero if the feature is not enabled.

WARNING: This is an experimental API and subject to change.
GetEnsureDynamicTensorsAreReleased

bool GetEnsureDynamicTensorsAreReleased()

Returns whether the experimental_ensure_dynamic_tensors_are_released_ feature is enabled.

WARNING: This is an experimental API and subject to change.
GetPreserveAllTensors

bool GetPreserveAllTensors()

Returns whether the experimental_preserve_all_tensors_ feature is enabled.

WARNING: This is an experimental API and subject to change.
InterpreterOptions
InterpreterOptions()
SetDynamicAllocationForLargeTensors

void SetDynamicAllocationForLargeTensors(int value)

Uses the dynamic tensor allocation method for large tensors instead of the static memory planner.

This improves peak memory usage, but there may be some latency impact. The value is used to determine which tensors count as large.

WARNING: This is an experimental API and subject to change.
SetEnsureDynamicTensorsAreReleased

void SetEnsureDynamicTensorsAreReleased(bool value)

Forces all intermediate dynamic tensors to be released once they are no longer used by the model.

Use this configuration with caution: it may reduce the model's peak memory usage at the cost of slower inference.

WARNING: This is an experimental API and subject to change.
SetPreserveAllTensors

void SetPreserveAllTensors(bool value)

Preserves all intermediate tensors for debugging.

WARNING: This is an experimental API and subject to change.
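The setters above are typically applied before the interpreter is constructed. The following is a minimal sketch, assuming a valid .tflite model file and that InterpreterBuilder accepts an InterpreterOptions pointer as an optional constructor argument (as in recent TensorFlow Lite releases); the model path and the threshold value are illustrative placeholders, not values prescribed by this API.

```cpp
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/interpreter_builder.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model_builder.h"

int main() {
  // Load a model; "model.tflite" is a placeholder path.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  // Configure the experimental options before building the interpreter.
  tflite::InterpreterOptions options;
  options.SetDynamicAllocationForLargeTensors(1 << 20);  // illustrative threshold
  options.SetEnsureDynamicTensorsAreReleased(true);      // lower peak memory, slower inference
  // options.SetPreserveAllTensors(true);                // enable only for debugging

  // Build the interpreter with the options applied.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder builder(*model, resolver, &options);
  if (builder(&interpreter) != kTfLiteOk || !interpreter) return 1;

  // The interpreter is now ready for AllocateTensors() and Invoke().
  return 0;
}
```

Note that these options must be set before InterpreterBuilder runs; because the API is experimental, the exact way options are passed may change between releases.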