TensorFlowLiteSwift Framework Reference


public struct Options : Equatable, Hashable

Options for configuring the Interpreter.

  • The maximum number of CPU threads that the interpreter should run on. The default is nil, indicating that the Interpreter will determine the number of threads to use.



    public var threadCount: Int?
  • Indicates whether an optimized set of floating point CPU kernels, provided by XNNPACK, is enabled.


    Enabling this flag enables the use of a new, highly optimized set of CPU kernels provided via the XNNPACK delegate. Currently, this is restricted to a subset of floating-point operations. Eventually, we plan to enable this by default, as it can provide significant performance benefits for many classes of floating-point models. See the XNNPACK delegate documentation for more details.


    Things to keep in mind when enabling this flag:

    • Startup time and resize time may increase.
    • Baseline memory consumption may increase.
    • Compatibility with other delegates (e.g., GPU) has not been fully validated.
    • Quantized models will not see any benefit.


    This is an experimental interface that is subject to change.



    public var isXNNPackEnabled: Bool
  • Creates a new instance with the default values.



    public init()
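
As a sketch of how these options fit together, an Interpreter might be configured as follows. The model path is hypothetical, and `Interpreter(modelPath:options:)` can throw (for example, if the model file cannot be loaded):

```swift
import TensorFlowLite

// Hypothetical path to a .tflite model bundled with the app.
let modelPath = "model.tflite"

// Start from the default values, then override as needed.
var options = Interpreter.Options()
options.threadCount = 2          // cap inference at two CPU threads
options.isXNNPackEnabled = true  // opt in to the experimental XNNPACK kernels

let interpreter = try Interpreter(modelPath: modelPath, options: options)
try interpreter.allocateTensors()
```

Because `isXNNPackEnabled` is experimental, code that sets it should be prepared for the flag's behavior or availability to change in future releases.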