When deploying models for on-device machine learning (ODML) applications, it is important to be aware of the limited memory that is available on mobile devices. Model binary sizes are closely correlated to the number of ops used in the model. TensorFlow Lite enables you to reduce model binary sizes by using selective builds. Selective builds skip unused operations in your model set and produce a compact library with just the runtime and the op kernels required for the model to run on your mobile device.
Selective builds apply to the following three operations libraries:

- Built-in TensorFlow Lite ops library
- TensorFlow Lite custom ops
- Select TensorFlow ops library
The table below demonstrates the impact of selective builds for some common use cases:
| Model Name | Domain | Target architecture | AAR file size(s) |
| --- | --- | --- | --- |
| Mobilenet_1.0_224(float) | Image classification | armeabi-v7a | tensorflow-lite.aar (296,635 bytes) |
| | | arm64-v8a | tensorflow-lite.aar (382,892 bytes) |
| SPICE | Sound pitch extraction | armeabi-v7a | tensorflow-lite.aar (375,813 bytes)<br>tensorflow-lite-select-tf-ops.aar (1,676,380 bytes) |
| | | arm64-v8a | tensorflow-lite.aar (421,826 bytes)<br>tensorflow-lite-select-tf-ops.aar (2,298,630 bytes) |
| i3d-kinetics-400 | Video classification | armeabi-v7a | tensorflow-lite.aar (240,085 bytes)<br>tensorflow-lite-select-tf-ops.aar (1,708,597 bytes) |
| | | arm64-v8a | tensorflow-lite.aar (273,713 bytes)<br>tensorflow-lite-select-tf-ops.aar (2,339,697 bytes) |
- Selective builds for the C API and iOS are not currently supported.
## Selectively build TensorFlow Lite with Bazel
This section assumes that you have downloaded the TensorFlow source code and set up the local development environment with Bazel.
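If you have not done so yet, a minimal sketch of that setup (assuming `git` and a Bazel version compatible with the checkout are already installed on the host):

```sh
# Clone the TensorFlow source and run the interactive configure script.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure   # answer the prompts; the defaults are fine for AAR builds
```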
### Build AAR files for Android project
You can build the custom TensorFlow Lite AARs by providing your model file paths as follows.
```sh
sh tensorflow/lite/tools/build_aar.sh \
  --input_models=/a/b/model_one.tflite,/c/d/model_two.tflite \
  --target_archs=x86,x86_64,arm64-v8a,armeabi-v7a
```
The above command will generate the AAR file `bazel-bin/tmp/tensorflow-lite.aar` for TensorFlow Lite built-in and custom ops, and optionally generate the AAR file `bazel-bin/tmp/tensorflow-lite-select-tf-ops.aar` if your models contain Select TensorFlow ops. Note that this builds a "fat" AAR with several different architectures; if you don't need all of them, use the subset appropriate for your deployment environment.
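For example, if you only ship to 64-bit ARM devices, you can pass a smaller `--target_archs` list (model paths are the same placeholders as above):

```sh
# Build a single-architecture AAR to keep the binary small.
sh tensorflow/lite/tools/build_aar.sh \
  --input_models=/a/b/model_one.tflite,/c/d/model_two.tflite \
  --target_archs=arm64-v8a
```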
#### Advanced Usage: Build with custom ops
If you have developed TensorFlow Lite models with custom ops, you can build them by adding the following flags to the build command:
```sh
sh tensorflow/lite/tools/build_aar.sh \
  --input_models=/a/b/model_one.tflite,/c/d/model_two.tflite \
  --target_archs=x86,x86_64,arm64-v8a,armeabi-v7a \
  --tflite_custom_ops_srcs=/e/f/file1.cc,/g/h/file2.h \
  --tflite_custom_ops_deps=dep1,dep2
```
The `tflite_custom_ops_srcs` flag contains the source files of your custom ops, and the `tflite_custom_ops_deps` flag contains the dependencies needed to build those source files. Note that these dependencies must exist in the TensorFlow repo.
## Selectively build TensorFlow Lite with Docker
### Build AAR files for Android project
Download the script for building with Docker by running:
```sh
curl -o build_aar_with_docker.sh \
  https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/lite/tools/build_aar_with_docker.sh &&
chmod +x build_aar_with_docker.sh
```
Then, you can build the custom TensorFlow Lite AAR by providing your model file paths as follows.
```sh
sh build_aar_with_docker.sh \
  --input_models=/a/b/model_one.tflite,/c/d/model_two.tflite \
  --target_archs=x86,x86_64,arm64-v8a,armeabi-v7a \
  --checkpoint=master
```
The `checkpoint` flag is a commit, a branch, or a tag of the TensorFlow repo that you want to check out before building the libraries. The above command will generate the AAR file `tensorflow-lite.aar` for TensorFlow Lite built-in and custom ops and, optionally, the AAR file `tensorflow-lite-select-tf-ops.aar` for Select TensorFlow ops in your current directory.
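For reproducible builds, you can pin `--checkpoint` to a release tag instead of `master`; for instance (the tag shown is just an example):

```sh
# Pin the build to a specific TensorFlow release tag for reproducibility.
sh build_aar_with_docker.sh \
  --input_models=/a/b/model_one.tflite \
  --target_archs=arm64-v8a \
  --checkpoint=v2.4.0
```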
## Add AAR files to project
Add AAR files either by directly importing the AAR into your project, or by publishing the custom AAR to your local Maven repository.
Note that you have to add the AAR file for Select TensorFlow ops (`tensorflow-lite-select-tf-ops.aar`) as well if you generate it.
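For the local Maven route, a sketch of publishing a generated AAR with Maven's `install:install-file` goal (the group, artifact, and version coordinates are placeholders you choose yourself):

```sh
# Install the custom AAR into the local Maven repository under placeholder coordinates.
mvn install:install-file \
  -Dfile=bazel-bin/tmp/tensorflow-lite.aar \
  -DgroupId=org.tensorflow \
  -DartifactId=tensorflow-lite \
  -Dversion=0.1.100 \
  -Dpackaging=aar
```

Your app's Gradle build can then resolve the library by adding `mavenLocal()` to its repositories and depending on the coordinates you chose.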