lite/c/common.h

This file defines common C types and APIs for implementing operations, delegates and other constructs in TensorFlow Lite.

Summary

The actual operations and delegates can be defined using C++, but the interface between the interpreter and the operations is C.

Summary of abstractions:

Some abstractions in this file are created and managed by Interpreter.

NOTE: The order of values in these structs is "semi-ABI stable". New values should be added only to the end of structs and never reordered.

Enumerations

Anonymous Enum 0 enum
TfLiteAllocationStrategy{
  kTfLiteAllocationStrategyMMap,
  kTfLiteAllocationStrategyArena,
  kTfLiteAllocationStrategyMalloc,
  kTfLiteAllocationStrategyNew
}
enum
Memory allocation strategies.
TfLiteAllocationType enum
Memory allocation strategies.
TfLiteCustomAllocationFlags{
  kTfLiteCustomAllocationFlagsSkipAlignCheck = 1
}
enum
The flags used in Interpreter::SetCustomAllocationForTensor.
TfLiteDelegateFlags{
  kTfLiteDelegateFlagsAllowDynamicTensors = 1,
  kTfLiteDelegateFlagsRequirePropagatedShapes = 2,
  kTfLiteDelegateFlagsPerOperatorProfiling = 4
}
enum
The flags used in TfLiteDelegate.
TfLiteDimensionType enum
Storage format of each dimension in a sparse tensor.
TfLiteExternalContextType{
  kTfLiteGemmLowpContext = 1,
  kTfLiteEdgeTpuContext = 2,
  kTfLiteCpuBackendContext = 3,
  kTfLiteMaxExternalContexts = 4
}
enum
The list of external context types known to TF Lite.
TfLiteInPlaceOp{
  kTfLiteInplaceOpNone = 0,
  kTfLiteInplaceOpDataUnmodified = 1,
  kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput = 2,
  kTfLiteInplaceOpInput0Shared = 4,
  kTfLiteInplaceOpInput1Shared = 8,
  kTfLiteInplaceOpInput2Shared = 16,
  kTfLiteInplaceOpMaxValue = UINT64_MAX
}
enum
The valid values of the inplace_operator field in TfLiteRegistration.
TfLiteQuantizationType{
  kTfLiteNoQuantization = 0,
  kTfLiteAffineQuantization = 1
}
enum
SupportedQuantizationTypes.
TfLiteRunStability{
  kTfLiteRunStabilitySingleRun,
  kTfLiteRunStabilityAcrossRuns
}
enum
Describes how stable a tensor attribute is across interpreter runs.
TfLiteRunStep enum
Describes the steps of a TFLite operation life cycle.

Typedefs

TfLiteAffineQuantization typedef
Parameters for asymmetric quantization across a dimension (i.e., per-output-channel quantization).
TfLiteAllocationStrategy typedef
Memory allocation strategies.
TfLiteAllocationType typedef
Memory allocation strategies.
TfLiteBufferHandle typedef
int
The delegates should use zero or positive integers to represent handles.
TfLiteComplex128 typedef
Double-precision complex data type compatible with the C99 definition.
TfLiteComplex64 typedef
Single-precision complex data type compatible with the C99 definition.
TfLiteContext typedef
struct TfLiteContext
TfLiteContext allows an op to access the tensors.
TfLiteCustomAllocation typedef
Defines a custom memory allocation not owned by the runtime.
TfLiteCustomAllocationFlags typedef
The flags used in Interpreter::SetCustomAllocationForTensor.
TfLiteDelegate typedef
WARNING: This is an experimental interface that is subject to change.
TfLiteDelegateFlags typedef
The flags used in TfLiteDelegate.
TfLiteDelegateParams typedef
WARNING: This is an experimental interface that is subject to change.
TfLiteDimensionMetadata typedef
Metadata to encode each dimension in a sparse tensor.
TfLiteDimensionType typedef
Storage format of each dimension in a sparse tensor.
TfLiteEvalTensor typedef
Light-weight tensor struct for TF Micro runtime.
TfLiteExternalContext typedef
An external context is a collection of information unrelated to the TF Lite framework, but useful to a subset of the ops.
TfLiteExternalContextType typedef
The list of external context types known to TF Lite.
TfLiteFloat16 typedef
struct TfLiteFloat16
Half precision data type compatible with the C99 definition.
TfLiteFloatArray typedef
Fixed size list of floats. Used for per-channel quantization.
TfLiteIntArray typedef
Fixed size list of integers.
TfLiteNode typedef
struct TfLiteNode
A structure representing an instance of a node.
TfLiteOpaqueDelegateBuilder typedef
TfLiteOpaqueDelegateBuilder is used for constructing TfLiteOpaqueDelegate, see TfLiteOpaqueDelegateCreate below.
TfLiteOpaqueDelegateParams typedef
WARNING: This is an experimental interface that is subject to change.
TfLitePtrUnion typedef
A union of pointers that points to memory for a given tensor.
TfLiteQuantization typedef
Structure specifying the quantization used by the tensor, if any.
TfLiteQuantizationType typedef
SupportedQuantizationTypes.
TfLiteRegistration typedef
TfLiteRegistration defines the implementation of an operation (a built-in op, custom op, or custom delegate kernel).
TfLiteRegistrationExternal typedef
TfLiteRegistrationExternal is an external version of TfLiteRegistration for C API which doesn't use internal types (such as TfLiteContext) but only uses stable API types (such as TfLiteOpaqueContext).
TfLiteRegistration_V1 typedef
struct TfLiteRegistration_V1
Old version of TfLiteRegistration to maintain binary backward compatibility.
TfLiteRegistration_V2 typedef
struct TfLiteRegistration_V2
Old version of TfLiteRegistration to maintain binary backward compatibility.
TfLiteRegistration_V3 typedef
struct TfLiteRegistration_V3
Old version of TfLiteRegistration to maintain binary backward compatibility.
TfLiteRunStability typedef
Describes how stable a tensor attribute is across interpreter runs.
TfLiteRunStep typedef
Describes the steps of a TFLite operation life cycle.
TfLiteSparsity typedef
Parameters used to encode a sparse tensor.
TfLiteTensor typedef
struct TfLiteTensor
A tensor in the interpreter system which is a wrapper around a buffer of data including a dimensionality (or NULL if not currently defined).

Variables

kTfLiteMaxSharableOpInputs = 3
const int
The number of shareable inputs supported.

Functions

TfLiteDelegateCreate(void)
Build a null delegate, with all the fields properly set to their default values.
TfLiteFloatArrayCopy(const TfLiteFloatArray *src)
Create a copy of an array passed as src.
TfLiteFloatArrayCreate(int size)
Create an array of a given size (uninitialized entries).
TfLiteFloatArrayFree(TfLiteFloatArray *a)
void
Free memory of array a.
TfLiteFloatArrayGetSizeInBytes(int size)
int
Given the size (number of elements) in a TfLiteFloatArray, calculate its size in bytes.
TfLiteIntArrayCopy(const TfLiteIntArray *src)
Create a copy of an array passed as src.
TfLiteIntArrayCreate(int size)
Create an array of a given size (uninitialized entries).
TfLiteIntArrayEqual(const TfLiteIntArray *a, const TfLiteIntArray *b)
int
Check if two intarrays are equal. Returns 1 if they are equal, 0 otherwise.
TfLiteIntArrayEqualsArray(const TfLiteIntArray *a, int b_size, const int b_data[])
int
Check if an intarray equals an array. Returns 1 if equals, 0 otherwise.
TfLiteIntArrayFree(TfLiteIntArray *a)
void
Free memory of array a.
TfLiteIntArrayGetSizeInBytes(int size)
size_t
Given the size (number of elements) in a TfLiteIntArray, calculate its size in bytes.
TfLiteOpaqueDelegateCreate(const TfLiteOpaqueDelegateBuilder *opaque_delegate_builder)
Creates an opaque delegate and returns its address.
TfLiteOpaqueDelegateDelete(TfLiteOpaqueDelegate *delegate)
void
Deletes the provided opaque delegate.
TfLiteOpaqueDelegateGetData(const TfLiteOpaqueDelegate *delegate)
void *
Returns a pointer to the data associated with the provided opaque delegate.
TfLiteQuantizationFree(TfLiteQuantization *quantization)
void
Free quantization data.
TfLiteSparsityFree(TfLiteSparsity *sparsity)
void
Free sparsity parameters.
TfLiteTensorCopy(const TfLiteTensor *src, TfLiteTensor *dst)
Copies the contents of src in dst.
TfLiteTensorDataFree(TfLiteTensor *t)
void
Free data memory of tensor t.
TfLiteTensorFree(TfLiteTensor *t)
void
Free memory of tensor t.
TfLiteTensorGetAllocationStrategy(const TfLiteTensor *t)
Returns a tensor's data allocation strategy.
TfLiteTensorGetBufferAddressStability(const TfLiteTensor *t)
Returns how stable a tensor's data buffer address is across runs.
TfLiteTensorGetDataKnownStep(const TfLiteTensor *t)
Returns the operation step when the data of a tensor is populated.
TfLiteTensorGetDataStability(const TfLiteTensor *t)
Returns how stable a tensor's data values are across runs.
TfLiteTensorGetShapeKnownStep(const TfLiteTensor *t)
Returns the operation step when the shape of a tensor is computed.
TfLiteTensorRealloc(size_t num_bytes, TfLiteTensor *tensor)
Change the size of the memory block owned by tensor to num_bytes.
TfLiteTensorReset(TfLiteType type, const char *name, TfLiteIntArray *dims, TfLiteQuantizationParams quantization, char *buffer, size_t size, TfLiteAllocationType allocation_type, const void *allocation, bool is_variable, TfLiteTensor *tensor)
void
Set all of a tensor's fields (and free any previously allocated data).
TfLiteTensorResizeMaybeCopy(size_t num_bytes, TfLiteTensor *tensor, bool preserve_data)
Change the size of the memory block owned by tensor to num_bytes.
TfLiteTypeGetName(TfLiteType type)
const char *
Return the name of a given type, for error reporting purposes.

Structs

TfLiteAffineQuantization

Parameters for asymmetric quantization across a dimension (i.e., per-output-channel quantization).

TfLiteComplex128

Double-precision complex data type compatible with the C99 definition.

TfLiteComplex64

Single-precision complex data type compatible with the C99 definition.

TfLiteContext

TfLiteContext allows an op to access the tensors.

TfLiteCustomAllocation

Defines a custom memory allocation not owned by the runtime.

TfLiteDelegate

WARNING: This is an experimental interface that is subject to change.

TfLiteDelegateParams

WARNING: This is an experimental interface that is subject to change.

TfLiteDimensionMetadata

Metadata to encode each dimension in a sparse tensor.

TfLiteEvalTensor

Light-weight tensor struct for TF Micro runtime.

TfLiteExternalContext

An external context is a collection of information unrelated to the TF Lite framework, but useful to a subset of the ops.

TfLiteFloat16

Half precision data type compatible with the C99 definition.

TfLiteFloatArray

Fixed size list of floats. Used for per-channel quantization.

TfLiteIntArray

Fixed size list of integers.

TfLiteNode

A structure representing an instance of a node.

TfLiteOpaqueDelegateBuilder

TfLiteOpaqueDelegateBuilder is used for constructing TfLiteOpaqueDelegate, see TfLiteOpaqueDelegateCreate below.

TfLiteOpaqueDelegateParams

WARNING: This is an experimental interface that is subject to change.

TfLiteQuantization

Structure specifying the quantization used by the tensor, if any.

TfLiteRegistration

TfLiteRegistration defines the implementation of an operation (a built-in op, custom op, or custom delegate kernel).

TfLiteSparsity

Parameters used to encode a sparse tensor.

TfLiteTensor

A tensor in the interpreter system which is a wrapper around a buffer of data including a dimensionality (or NULL if not currently defined).

Unions

TfLitePtrUnion

A union of pointers that points to memory for a given tensor.

Enumerations

Anonymous Enum 0

 Anonymous Enum 0

TfLiteAllocationStrategy

 TfLiteAllocationStrategy

Memory allocation strategies.

TfLiteAllocationType values have been overloaded to mean more than their original intent. This enum should only be used to document the allocation strategy used by a tensor for its data.

Properties
kTfLiteAllocationStrategyArena

Handled by the arena.

kTfLiteAllocationStrategyMMap

Data is mmaped.

kTfLiteAllocationStrategyMalloc

Uses malloc/free.

kTfLiteAllocationStrategyNew

Uses new[]/delete[].

kTfLiteAllocationStrategyNone

No data is allocated.

TfLiteAllocationType

 TfLiteAllocationType

Memory allocation strategies.

  • kTfLiteMmapRo: Read-only memory-mapped data, or data externally allocated.
  • kTfLiteArenaRw: Arena allocated with no guarantees about persistence, and available during eval.
  • kTfLiteArenaRwPersistent: Arena allocated but persistent across eval, and only available during eval.
  • kTfLiteDynamic: Allocated during eval, or for string tensors.
  • kTfLitePersistentRo: Allocated and populated during prepare. This is useful for tensors that can be computed during prepare and treated as constant inputs for downstream ops (also in prepare).
  • kTfLiteCustom: Custom memory allocation provided by the user. See TfLiteCustomAllocation below.
  • kTfLiteVariantObject: Allocation is an arbitrary type-erased C++ object. Allocation and deallocation are done through new and delete.

TfLiteCustomAllocationFlags

 TfLiteCustomAllocationFlags

The flags used in Interpreter::SetCustomAllocationForTensor.

Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.

Properties
kTfLiteCustomAllocationFlagsSkipAlignCheck

Skips checking whether allocation.data points to an aligned buffer as expected by the TFLite runtime.

NOTE: Setting this flag can cause crashes when calling Invoke(). Use with caution.

TfLiteDelegateFlags

 TfLiteDelegateFlags

The flags used in TfLiteDelegate.

Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.

Properties
kTfLiteDelegateFlagsAllowDynamicTensors

The flag is set if the delegate can handle dynamic sized tensors.

For example, the output shape of a Resize op with non-constant shape can only be inferred when the op is invoked. In this case, the Delegate is responsible for calling SetTensorToDynamic to mark the tensor as a dynamic tensor, and calling ResizeTensor when invoking the op.

If the delegate isn't capable of handling dynamic tensors, this flag must not be set.

kTfLiteDelegateFlagsPerOperatorProfiling

This flag can be used by delegates to request per-operator profiling.

If a node is a delegate node, this flag will be checked before profiling. If set, then the node will not be profiled. The delegate will then add per operator information using Profiler::EventType::OPERATOR_INVOKE_EVENT and the results will appear in the operator-wise Profiling section and not in the Delegate internal section.

kTfLiteDelegateFlagsRequirePropagatedShapes

This flag can be used by delegates (that allow dynamic tensors) to ensure applicable tensor shapes are automatically propagated in the case of tensor resizing.

This means that non-dynamic (allocation_type != kTfLiteDynamic) I/O tensors of a delegate kernel will have correct shapes before its Prepare() method is called. The runtime leverages TFLite builtin ops in the original execution plan to propagate shapes.

A few points to note:

  1. This requires kTfLiteDelegateFlagsAllowDynamicTensors. If that flag is false, this one is redundant since the delegate kernels are re-initialized every time tensors are resized.
  2. Enabling this flag adds some overhead to AllocateTensors(), since extra work is required to prepare the original execution plan.
  3. This flag requires that the original execution plan only have ops with valid registrations (and not 'dummy' custom ops like with Flex).

WARNING: This feature is experimental and subject to change.

TfLiteDimensionType

 TfLiteDimensionType

Storage format of each dimension in a sparse tensor.

TfLiteExternalContextType

 TfLiteExternalContextType

The list of external context types known to TF Lite.

This list exists solely to avoid conflicts and to ensure ops can share the external contexts they need. Access to the external contexts is controlled by one of the corresponding support files.

Properties
kTfLiteCpuBackendContext

include cpu_backend_context.h to use.

kTfLiteEdgeTpuContext

Placeholder for Edge TPU support.

kTfLiteEigenContext

include eigen_support.h to use.

kTfLiteGemmLowpContext

include gemm_support.h to use.

kTfLiteMaxExternalContexts

The number of external context types known to TF Lite.

TfLiteInPlaceOp

 TfLiteInPlaceOp

The valid values of the inplace_operator field in TfLiteRegistration.

This allows an op to signal to the runtime that the same data pointer may be passed as an input and output without impacting the result. This does not mean that the memory can safely be reused; it is up to the runtime to determine that, e.g. whether another op consumes the same input, or whether the input tensor has sufficient memory allocated to store the output data.

Setting these flags authorizes the runtime to set the data pointers of an input and output tensor to the same value. In such cases, the memory required by the output must be less than or equal to that required by the shared input, never greater. If kTfLiteInplaceOpDataUnmodified is set, then the runtime can share the same input tensor with multiple operators' outputs, provided that kTfLiteInplaceOpDataUnmodified is set for all of them. Otherwise, if an input tensor is consumed by multiple operators, it may only be shared with the operator which is the last to consume it.

Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.

Properties
kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput

Setting kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput means that InputN may be shared with OutputN instead of with the first output.

This flag requires one or more of kTfLiteInplaceOpInputNShared to be set.

kTfLiteInplaceOpDataUnmodified

This indicates that an op's first output's data is identical to its first input's data, for example Reshape.

kTfLiteInplaceOpInput0Shared

kTfLiteInplaceOpInputNShared indicates that it is safe for an op to share InputN's data pointer with an output tensor.

If kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is set then kTfLiteInplaceOpInputNShared indicates that InputN may be shared with OutputN, otherwise kTfLiteInplaceOpInputNShared indicates that InputN may be shared with the first output.

Indicates that an op's first input may be shared with the first output tensor. kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput has no impact on the behavior allowed by this flag.

kTfLiteInplaceOpInput1Shared

Indicates that an op's second input may be shared with the first output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is not set or second output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is set.

kTfLiteInplaceOpInput2Shared

Indicates that an op's third input may be shared with the first output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is not set or third output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is set.

kTfLiteInplaceOpMaxValue

Placeholder to ensure that enum can hold 64 bit values to accommodate future fields.

kTfLiteInplaceOpNone

The default value.

This indicates that the same data pointer cannot safely be passed as an op's input and output.

TfLiteQuantizationType

 TfLiteQuantizationType

SupportedQuantizationTypes.

Properties
kTfLiteAffineQuantization

Affine quantization (with support for per-channel quantization).

Corresponds to TfLiteAffineQuantization.

kTfLiteNoQuantization

No quantization.

TfLiteRunStability

 TfLiteRunStability

Describes how stable a tensor attribute is across interpreter runs.

Properties
kTfLiteRunStabilityAcrossRuns

Will stay the same across all runs.

kTfLiteRunStabilitySingleRun

Will stay the same for one run.

kTfLiteRunStabilityUnstable

May change at any time.

TfLiteRunStep

 TfLiteRunStep

Describes the steps of a TFLite operation life cycle.

Typedefs

TfLiteAffineQuantization

struct TfLiteAffineQuantization TfLiteAffineQuantization

Parameters for asymmetric quantization across a dimension (i.e., per-output-channel quantization).

quantized_dimension specifies which dimension the scales and zero_points correspond to. For a particular value in quantized_dimension, quantized values can be converted back to float using: real_value = scale * (quantized_value - zero_point)

TfLiteAllocationStrategy

enum TfLiteAllocationStrategy TfLiteAllocationStrategy

Memory allocation strategies.

TfLiteAllocationType values have been overloaded to mean more than their original intent. This enum should only be used to document the allocation strategy used by a tensor for its data.

TfLiteAllocationType

enum TfLiteAllocationType TfLiteAllocationType

Memory allocation strategies.

  • kTfLiteMmapRo: Read-only memory-mapped data, or data externally allocated.
  • kTfLiteArenaRw: Arena allocated with no guarantees about persistence, and available during eval.
  • kTfLiteArenaRwPersistent: Arena allocated but persistent across eval, and only available during eval.
  • kTfLiteDynamic: Allocated during eval, or for string tensors.
  • kTfLitePersistentRo: Allocated and populated during prepare. This is useful for tensors that can be computed during prepare and treated as constant inputs for downstream ops (also in prepare).
  • kTfLiteCustom: Custom memory allocation provided by the user. See TfLiteCustomAllocation below.
  • kTfLiteVariantObject: Allocation is an arbitrary type-erased C++ object. Allocation and deallocation are done through new and delete.

TfLiteBufferHandle

int TfLiteBufferHandle

The delegates should use zero or positive integers to represent handles.

-1 is reserved for unallocated status.

TfLiteComplex128

struct TfLiteComplex128 TfLiteComplex128

Double-precision complex data type compatible with the C99 definition.

TfLiteComplex64

struct TfLiteComplex64 TfLiteComplex64

Single-precision complex data type compatible with the C99 definition.

TfLiteContext

struct TfLiteContext TfLiteContext

TfLiteContext allows an op to access the tensors.

TfLiteContext is a struct that is created by the TF Lite runtime and passed to the "methods" (C function pointers) in the TfLiteRegistration struct that are used to define custom ops and custom delegate kernels. It contains information and methods (C function pointers) that can be called by the code implementing a custom op or a custom delegate kernel. These methods provide access to the context in which that custom op or custom delegate kernel occurs, such as access to the input and output tensors for that op, as well as methods for allocating memory buffers and intermediate tensors, etc.

See also TfLiteOpaqueContext, which is a more ABI-stable equivalent.

TfLiteCustomAllocation

struct TfLiteCustomAllocation TfLiteCustomAllocation

Defines a custom memory allocation not owned by the runtime.

data should be aligned to kDefaultTensorAlignment defined in lite/util.h (currently 64 bytes). NOTE: See Interpreter::SetCustomAllocationForTensor for details on usage.

TfLiteCustomAllocationFlags

enum TfLiteCustomAllocationFlags TfLiteCustomAllocationFlags

The flags used in Interpreter::SetCustomAllocationForTensor.

Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.

TfLiteDelegate

struct TfLiteDelegate TfLiteDelegate

WARNING: This is an experimental interface that is subject to change.

TfLiteDelegateFlags

enum TfLiteDelegateFlags TfLiteDelegateFlags

The flags used in TfLiteDelegate.

Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.

TfLiteDelegateParams

struct TfLiteDelegateParams TfLiteDelegateParams

WARNING: This is an experimental interface that is subject to change.

Currently, TfLiteDelegateParams has to be allocated in a way that it's trivially destructible. It will be stored as the builtin_data field in the TfLiteNode of the delegate node.

See also the CreateDelegateParams function in interpreter.cc for details.

TfLiteDimensionMetadata

struct TfLiteDimensionMetadata TfLiteDimensionMetadata

Metadata to encode each dimension in a sparse tensor.

TfLiteDimensionType

enum TfLiteDimensionType TfLiteDimensionType

Storage format of each dimension in a sparse tensor.

TfLiteEvalTensor

struct TfLiteEvalTensor TfLiteEvalTensor

Light-weight tensor struct for TF Micro runtime.

Provides the minimal amount of information required for a kernel to run during TfLiteRegistration::Eval.

TfLiteExternalContext

struct TfLiteExternalContext TfLiteExternalContext

An external context is a collection of information unrelated to the TF Lite framework, but useful to a subset of the ops.

TF Lite knows very little about the actual contexts, but it keeps a list of them, and is able to refresh them if configurations like the number of recommended threads change.

TfLiteExternalContextType

enum TfLiteExternalContextType TfLiteExternalContextType

The list of external context types known to TF Lite.

This list exists solely to avoid conflicts and to ensure ops can share the external contexts they need. Access to the external contexts is controlled by one of the corresponding support files.

TfLiteFloat16

struct TfLiteFloat16 TfLiteFloat16

Half precision data type compatible with the C99 definition.

TfLiteFloatArray

struct TfLiteFloatArray TfLiteFloatArray

Fixed size list of floats. Used for per-channel quantization.

TfLiteIntArray

struct TfLiteIntArray TfLiteIntArray

Fixed size list of integers.

Used for dimensions and input/output tensor indices.

TfLiteNode

struct TfLiteNode TfLiteNode

A structure representing an instance of a node.

This structure only exhibits the inputs, outputs, user defined data and some node properties (like statefulness), not other features like the type.

TfLiteOpaqueDelegateBuilder

struct TfLiteOpaqueDelegateBuilder TfLiteOpaqueDelegateBuilder

TfLiteOpaqueDelegateBuilder is used for constructing TfLiteOpaqueDelegate, see TfLiteOpaqueDelegateCreate below.

Note: This struct is not ABI stable.

For forward source compatibility TfLiteOpaqueDelegateBuilder objects should be brace-initialized, so that all fields (including any that might be added in the future) get zero-initialized. The purpose of each field is exactly the same as with TfLiteDelegate.

WARNING: This is an experimental interface that is subject to change.

TfLiteOpaqueDelegateParams

struct TfLiteOpaqueDelegateParams TfLiteOpaqueDelegateParams

WARNING: This is an experimental interface that is subject to change.

Currently, TfLiteOpaqueDelegateParams has to be allocated in a way that it's trivially destructible. It will be stored as the builtin_data field in the TfLiteNode of the delegate node.

See also the CreateOpaqueDelegateParams function in subgraph.cc for details.

TfLitePtrUnion

union TfLitePtrUnion TfLitePtrUnion

A union of pointers that points to memory for a given tensor.

Do not access these members directly; if possible, use GetTensorData(tensor) instead. Otherwise only access .data, as other members are deprecated.

TfLiteQuantization

struct TfLiteQuantization TfLiteQuantization

Structure specifying the quantization used by the tensor, if any.

TfLiteQuantizationType

enum TfLiteQuantizationType TfLiteQuantizationType

SupportedQuantizationTypes.

TfLiteRegistration

struct TfLiteRegistration TfLiteRegistration

TfLiteRegistration defines the implementation of an operation (a built-in op, custom op, or custom delegate kernel).

It is a struct containing "methods" (C function pointers) that will be invoked by the TF Lite runtime to evaluate instances of the operation.

See also TfLiteRegistrationExternal which is a more ABI-stable equivalent.

TfLiteRegistrationExternal

struct TfLiteRegistrationExternal TfLiteRegistrationExternal

TfLiteRegistrationExternal is an external version of TfLiteRegistration for C API which doesn't use internal types (such as TfLiteContext) but only uses stable API types (such as TfLiteOpaqueContext).

The purpose of each field is exactly the same as with TfLiteRegistration.

TfLiteRegistration_V1

struct TfLiteRegistration_V1 TfLiteRegistration_V1

Old version of TfLiteRegistration to maintain binary backward compatibility.

The legacy registration type must be a POD struct type whose field types must be a prefix of the field types in TfLiteRegistration, and offset of the first field in TfLiteRegistration that is not present in the legacy registration type must be greater than or equal to the size of the legacy registration type.

WARNING: This structure is deprecated / not an official part of the API. It should be only used for binary backward compatibility.

TfLiteRegistration_V2

struct TfLiteRegistration_V2 TfLiteRegistration_V2

Old version of TfLiteRegistration to maintain binary backward compatibility.

The legacy registration type must be a POD struct type whose field types must be a prefix of the field types in TfLiteRegistration, and offset of the first field in TfLiteRegistration that is not present in the legacy registration type must be greater than or equal to the size of the legacy registration type.

WARNING: This structure is deprecated / not an official part of the API. It should be only used for binary backward compatibility.

TfLiteRegistration_V3

struct TfLiteRegistration_V3 TfLiteRegistration_V3

Old version of TfLiteRegistration to maintain binary backward compatibility.

The legacy registration type must be a POD struct type whose field types must be a prefix of the field types in TfLiteRegistration, and offset of the first field in TfLiteRegistration that is not present in the legacy registration type must be greater than or equal to the size of the legacy registration type.

WARNING: This structure is deprecated / not an official part of the API. It should be only used for binary backward compatibility.

TfLiteRunStability

enum TfLiteRunStability TfLiteRunStability

Describes how stable a tensor attribute is across interpreter runs.

TfLiteRunStep

enum TfLiteRunStep TfLiteRunStep

Describes the steps of a TFLite operation life cycle.

TfLiteSparsity

struct TfLiteSparsity TfLiteSparsity

Parameters used to encode a sparse tensor.

For detailed explanation of each field please refer to lite/schema/schema.fbs.

TfLiteTensor

struct TfLiteTensor TfLiteTensor

A tensor in the interpreter system which is a wrapper around a buffer of data including a dimensionality (or NULL if not currently defined).

Variables

kTfLiteMaxSharableOpInputs

const int kTfLiteMaxSharableOpInputs = 3

The number of shareable inputs supported.

Functions

TfLiteDelegateCreate

TfLiteDelegate TfLiteDelegateCreate(
  void
)

Build a null delegate, with all the fields properly set to their default values.

TfLiteFloatArrayCopy

TfLiteFloatArray * TfLiteFloatArrayCopy(
  const TfLiteFloatArray *src
)

Create a copy of an array passed as src.

You are expected to free memory with TfLiteFloatArrayFree.

TfLiteFloatArrayCreate

TfLiteFloatArray * TfLiteFloatArrayCreate(
  int size
)

Create an array of a given size (uninitialized entries).

This returns a pointer that you must free using TfLiteFloatArrayFree().

TfLiteFloatArrayFree

void TfLiteFloatArrayFree(
  TfLiteFloatArray *a
)

Free memory of array a.

TfLiteFloatArrayGetSizeInBytes

int TfLiteFloatArrayGetSizeInBytes(
  int size
)

Given the size (number of elements) in a TfLiteFloatArray, calculate its size in bytes.

TfLiteIntArrayCopy

TfLiteIntArray * TfLiteIntArrayCopy(
  const TfLiteIntArray *src
)

Create a copy of an array passed as src.

You are expected to free memory with TfLiteIntArrayFree.

TfLiteIntArrayCreate

TfLiteIntArray * TfLiteIntArrayCreate(
  int size
)

Create an array of a given size (uninitialized entries).

This returns a pointer that you must free using TfLiteIntArrayFree().

TfLiteIntArrayEqual

int TfLiteIntArrayEqual(
  const TfLiteIntArray *a,
  const TfLiteIntArray *b
)

Check if two intarrays are equal. Returns 1 if they are equal, 0 otherwise.

TfLiteIntArrayEqualsArray

int TfLiteIntArrayEqualsArray(
  const TfLiteIntArray *a,
  int b_size,
  const int b_data[]
)

Check if an intarray equals an array. Returns 1 if equals, 0 otherwise.

TfLiteIntArrayFree

void TfLiteIntArrayFree(
  TfLiteIntArray *a
)

Free memory of array a.

TfLiteIntArrayGetSizeInBytes

size_t TfLiteIntArrayGetSizeInBytes(
  int size
)

Given the size (number of elements) in a TfLiteIntArray, calculate its size in bytes.

TfLiteOpaqueDelegateCreate

TfLiteOpaqueDelegate * TfLiteOpaqueDelegateCreate(
  const TfLiteOpaqueDelegateBuilder *opaque_delegate_builder
)

Creates an opaque delegate and returns its address.

The opaque delegate will behave according to the provided opaque_delegate_builder. The lifetime of the objects pointed to by any of the fields within the opaque_delegate_builder must outlive the returned TfLiteOpaqueDelegate and any TfLiteInterpreter, TfLiteInterpreterOptions, tflite::Interpreter, or tflite::InterpreterBuilder that the delegate is added to. The returned address should be passed to TfLiteOpaqueDelegateDelete for deletion. If opaque_delegate_builder is a null pointer, then a null pointer will be returned.

TfLiteOpaqueDelegateDelete

void TfLiteOpaqueDelegateDelete(
  TfLiteOpaqueDelegate *delegate
)

Deletes the provided opaque delegate.

This function has no effect if the delegate is a null pointer.

TfLiteOpaqueDelegateGetData

void * TfLiteOpaqueDelegateGetData(
  const TfLiteOpaqueDelegate *delegate
)

Returns a pointer to the data associated with the provided opaque delegate.

A null pointer will be returned when:

- the provided delegate is itself a null pointer, or
- the data field of the TfLiteOpaqueDelegateBuilder used to construct the delegate was null.

TfLiteQuantizationFree

void TfLiteQuantizationFree(
  TfLiteQuantization *quantization
)

Free quantization data.

TfLiteSparsityFree

void TfLiteSparsityFree(
  TfLiteSparsity *sparsity
)

Free sparsity parameters.

TfLiteTensorCopy

TfLiteStatus TfLiteTensorCopy(
  const TfLiteTensor *src,
  TfLiteTensor *dst
)

Copies the contents of src into dst.

The function does nothing and returns kTfLiteOk if either src or dst is a null pointer. It returns kTfLiteError if src and dst do not have matching data sizes. Note that the function only copies contents, so it won't create a new data pointer or change the allocation type. All tensor-related properties, such as quantization and sparsity, will be copied from src to dst.
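
The contract above (no-op success on null, error on size mismatch, plain content copy otherwise) can be sketched with a simplified stand-in that works on bare buffers; the real TfLiteTensor carries many more fields (quantization, sparsity, and so on) that the real function also copies:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical status codes standing in for kTfLiteOk / kTfLiteError. */
typedef enum { kOk = 0, kError = 1 } Status;

/* Sketch of the documented TfLiteTensorCopy contract on raw buffers. */
static Status tensor_copy_sketch(const void *src, size_t src_bytes,
                                 void *dst, size_t dst_bytes) {
  if (src == NULL || dst == NULL) return kOk;  /* do nothing, succeed */
  if (src_bytes != dst_bytes) return kError;   /* data sizes must match */
  memcpy(dst, src, src_bytes);                 /* copy contents only */
  return kOk;
}
```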

TfLiteTensorDataFree

void TfLiteTensorDataFree(
  TfLiteTensor *t
)

Free data memory of tensor t.

TfLiteTensorFree

void TfLiteTensorFree(
  TfLiteTensor *t
)

Free memory of tensor t.

TfLiteTensorGetAllocationStrategy

TfLiteAllocationStrategy TfLiteTensorGetAllocationStrategy(
  const TfLiteTensor *t
)

Returns the allocation strategy used for a tensor's data.

TfLiteTensorGetBufferAddressStability

TfLiteRunStability TfLiteTensorGetBufferAddressStability(
  const TfLiteTensor *t
)

Returns how stable a tensor's data buffer address is across runs.

TfLiteTensorGetDataKnownStep

TfLiteRunStep TfLiteTensorGetDataKnownStep(
  const TfLiteTensor *t
)

Returns the operation step when the data of a tensor is populated.

Some operations can precompute their results before the evaluation step. This makes the data available earlier for subsequent operations.

TfLiteTensorGetDataStability

TfLiteRunStability TfLiteTensorGetDataStability(
  const TfLiteTensor *t
)

Returns how stable a tensor's data values are across runs.

TfLiteTensorGetShapeKnownStep

TfLiteRunStep TfLiteTensorGetShapeKnownStep(
  const TfLiteTensor *t
)

Returns the operation step when the shape of a tensor is computed.

Some operations can precompute the shape of their results before the evaluation step. This makes the shape available earlier for subsequent operations.

TfLiteTensorRealloc

TfLiteStatus TfLiteTensorRealloc(
  size_t num_bytes,
  TfLiteTensor *tensor
)

Change the size of the memory block owned by tensor to num_bytes.

Tensors with allocation types other than kTfLiteDynamic will be ignored and kTfLiteOk will be returned. If num_bytes is zero, the tensor's internal data buffer will be assigned a pointer that can safely be passed to free or realloc. Tensor data will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. Returns kTfLiteError if the tensor is NULL or if allocating new memory fails.

TfLiteTensorReset

void TfLiteTensorReset(
  TfLiteType type,
  const char *name,
  TfLiteIntArray *dims,
  TfLiteQuantizationParams quantization,
  char *buffer,
  size_t size,
  TfLiteAllocationType allocation_type,
  const void *allocation,
  bool is_variable,
  TfLiteTensor *tensor
)

Set all of a tensor's fields (and free any previously allocated data).

TfLiteTensorResizeMaybeCopy

TfLiteStatus TfLiteTensorResizeMaybeCopy(
  size_t num_bytes,
  TfLiteTensor *tensor,
  bool preserve_data
)

Change the size of the memory block owned by tensor to num_bytes.

Tensors with allocation types other than kTfLiteDynamic will be ignored and kTfLiteOk will be returned. If num_bytes is zero, the tensor's internal data buffer will be assigned a pointer that can safely be passed to free or realloc. If preserve_data is true, tensor data will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. Returns kTfLiteError if the tensor is NULL or if allocating new memory fails.
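
The preserve_data behavior can be sketched on a bare malloc'd buffer instead of a real kTfLiteDynamic tensor; this is a hypothetical helper, not the library's implementation:

```c
#include <stdlib.h>

/* Sketch of the resize-maybe-copy semantics: when preserve_data is true,
 * the leading min(old, new) bytes survive the resize; otherwise the buffer
 * may simply be reallocated fresh with no content guarantee. */
static char *resize_maybe_copy(char *buf, size_t new_bytes, int preserve_data) {
  if (preserve_data) {
    /* realloc keeps the leading min(old, new) bytes intact */
    return realloc(buf, new_bytes);
  }
  free(buf);            /* old contents need not survive */
  return malloc(new_bytes);
}
```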

TfLiteTypeGetName

const char * TfLiteTypeGetName(
  TfLiteType type
)

Return the name of a given type, for error reporting purposes.