Select TensorFlow operators

Since the TensorFlow Lite builtin operator library only supports a limited number of TensorFlow operators, not every model is convertible. For details, refer to operator compatibility.

To allow conversion, users can enable the usage of certain TensorFlow ops in their TensorFlow Lite model. However, running TensorFlow Lite models with TensorFlow ops requires pulling in the core TensorFlow runtime, which increases the TensorFlow Lite interpreter binary size. For Android, you can avoid this by selectively building only the required TensorFlow ops. For details, refer to reduce binary size.

This document outlines how to convert and run a TensorFlow Lite model containing TensorFlow ops on a platform of your choice. It also discusses performance and size metrics and known limitations.

Convert a model

The following example shows how to generate a TensorFlow Lite model with select TensorFlow ops.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Run inference

When using a TensorFlow Lite model that has been converted with support for select TensorFlow ops, the client must also use a TensorFlow Lite runtime that includes the necessary library of TensorFlow ops.

Android AAR

To reduce the binary size, build your own custom AAR files as described in the next section. If binary size is not a significant concern, we recommend using the prebuilt AAR with TensorFlow ops hosted at MavenCentral.

You can specify this in your build.gradle dependencies by adding it alongside the standard TensorFlow Lite AAR as follows:

dependencies {
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT'
    // This dependency adds the necessary TF op support.
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly-SNAPSHOT'
}

To use nightly snapshots, make sure that you have added the Sonatype snapshot repository (shown in the allprojects repositories block later in this guide).

Once you've added the dependency, the necessary delegate for handling the graph's TensorFlow ops is automatically installed for graphs that require it.

Because the select TF ops AAR ships native libraries for several ABIs, you can also shrink your app by listing only the ABIs you need to support in your build.gradle, for example:

android {
    defaultConfig {
        ndk {
            abiFilters 'armeabi-v7a', 'arm64-v8a'
        }
    }
}

Building the Android AAR

To reduce the binary size, or for other advanced cases, you can also build the library manually. Assuming a working TensorFlow Lite build environment, build the Android AAR with select TensorFlow ops as follows:

sh tensorflow/lite/tools/build_aar.sh \
  --input_models=/a/b/model_one.tflite,/c/d/model_two.tflite \
  --target_archs=x86,x86_64,arm64-v8a,armeabi-v7a

This will generate the AAR file bazel-bin/tmp/tensorflow-lite.aar for TensorFlow Lite built-in and custom ops, and the AAR file bazel-bin/tmp/tensorflow-lite-select-tf-ops.aar for TensorFlow ops. If you don't have a working build environment, you can also build the above files with Docker.

From there, you can either import the AAR files directly into your project, or publish the custom AAR files to your local Maven repository:

mvn install:install-file \
  -Dfile=bazel-bin/tmp/tensorflow-lite.aar \
  -DgroupId=org.tensorflow \
  -DartifactId=tensorflow-lite -Dversion=0.1.100 -Dpackaging=aar
mvn install:install-file \
  -Dfile=bazel-bin/tmp/tensorflow-lite-select-tf-ops.aar \
  -DgroupId=org.tensorflow \
  -DartifactId=tensorflow-lite-select-tf-ops -Dversion=0.1.100 -Dpackaging=aar

Finally, in your app's build.gradle, ensure you have the mavenLocal() dependency and replace the standard TensorFlow Lite dependency with the one that has support for select TensorFlow ops:

allprojects {
    repositories {
        mavenCentral()
        maven {  // Only for snapshot artifacts
            name 'ossrh-snapshot'
            url 'https://oss.sonatype.org/content/repositories/snapshots'
        }
        mavenLocal()
    }
}

dependencies {
    implementation 'org.tensorflow:tensorflow-lite:0.1.100'
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.1.100'
}

iOS

Using CocoaPods

TensorFlow Lite provides nightly prebuilt select TF ops CocoaPods for arm64, which you can depend on alongside the TensorFlowLiteSwift or TensorFlowLiteObjC CocoaPods.

# In your Podfile target:
  pod 'TensorFlowLiteSwift'   # or 'TensorFlowLiteObjC'
  pod 'TensorFlowLiteSelectTfOps', '~> 0.0.1-nightly'

After running pod install, you need to provide an additional linker flag to force load the select TF ops framework into your project. In your Xcode project, go to Build Settings -> Other Linker Flags, and add:

For versions >= 2.9.0:

-force_load $(SRCROOT)/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.xcframework/ios-arm64/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps

For versions < 2.9.0:

-force_load $(SRCROOT)/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps

You should then be able to run any model converted with the SELECT_TF_OPS flag in your iOS app. For example, you can modify the Image Classification iOS app to test the select TF ops feature:

  • Replace the model file with the one converted with SELECT_TF_OPS enabled.
  • Add the TensorFlowLiteSelectTfOps dependency to the Podfile as instructed.
  • Add the additional linker flag as above.
  • Run the example app and see if the model works correctly.

Using Bazel + Xcode

TensorFlow Lite with select TensorFlow ops for iOS can be built using Bazel. First, follow the iOS build instructions to configure your Bazel workspace and .bazelrc file correctly.

Once you have configured the workspace with iOS support enabled, you can use the following command to build the select TF ops addon framework, which can be added on top of the regular TensorFlowLiteC.framework. Note that the select TF ops framework cannot be built for the i386 architecture, so you need to explicitly provide the list of target architectures, excluding i386.

bazel build -c opt --config=ios --ios_multi_cpus=arm64,x86_64 \
  //tensorflow/lite/ios:TensorFlowLiteSelectTfOps_framework

This will generate the framework under the bazel-bin/tensorflow/lite/ios/ directory. You can add this new framework to your Xcode project by following the steps described under the Xcode project settings section in the iOS build guide.

After adding the framework into your app project, an additional linker flag should be specified in your app project to force load the select TF ops framework. In your Xcode project, go to Build Settings -> Other Linker Flags, and add:

-force_load <path/to/your/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps>

C/C++

If you're using Bazel or CMake to build the TensorFlow Lite interpreter, you can enable the Flex delegate by linking the TensorFlow Lite Flex delegate shared library. You can build it with Bazel using the following command:

bazel build -c opt --config=monolithic tensorflow/lite/delegates/flex:tensorflowlite_flex

This command generates the following shared library in bazel-bin/tensorflow/lite/delegates/flex.

Platform | Library name
Linux | libtensorflowlite_flex.so
macOS | libtensorflowlite_flex.dylib
Windows | tensorflowlite_flex.dll

Note that the necessary TfLiteDelegate will be installed automatically when creating the interpreter at runtime as long as the shared library is linked. It is not necessary to explicitly install the delegate instance as is typically required with other delegate types.

Python

TensorFlow Lite with select TensorFlow ops will be installed automatically with the TensorFlow pip package. You can also choose to only install the TensorFlow Lite Interpreter pip package.
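
For example, here is a minimal sketch of running inference in Python with the standard tf.lite.Interpreter API, assuming the installed TensorFlow pip package provides the select TF ops runtime; the model filename is the one produced in the conversion example above, and the zero-filled input is just a placeholder:

import numpy as np
import tensorflow as tf

# Load a model converted with SELECT_TF_OPS enabled.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed placeholder data matching the model's input shape and dtype.
input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data.shape)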

Metrics

Performance

When using a mixture of both builtin and select TensorFlow ops, all of the same TensorFlow Lite optimizations and optimized builtin ops will be available and usable with the converted model.

The following table describes the average time taken to run inference on MobileNet on a Pixel 2. The listed times are an average of 100 runs. These targets were built for Android using the flags: --config=android_arm64 -c opt.

Build | Time (milliseconds)
Only built-in ops (TFLITE_BUILTINS) | 260.7
Using only TF ops (SELECT_TF_OPS) | 264.5

Binary size

The following table describes the binary size of TensorFlow Lite for each build. These targets were built for Android using --config=android_arm -c opt.

Build | C++ Binary Size | Android APK Size
Only built-in ops | 796 KB | 561 KB
Built-in ops + TF ops | 23.0 MB | 8.0 MB
Built-in ops + TF ops (1) | 4.1 MB | 1.8 MB

(1) These libraries are selectively built for the i3d-kinetics-400 model with 8 TFLite builtin ops and 3 TensorFlow ops. For more details, please see the Reduce TensorFlow Lite binary size section.

Known limitations

  • Unsupported types: Certain TensorFlow ops may not support the full set of input/output types that are typically available in TensorFlow.

Updates

  • Version 2.6
    • Support for GraphDef-attribute based operators and HashTable resource initializations has improved.
  • Version 2.5
  • Version 2.4
    • Compatibility with hardware accelerated delegates has improved.