
Build TensorFlow Lite with CMake

This page describes how to build and use the TensorFlow Lite library with the CMake tool.

The following instructions have been tested on a 64-bit Ubuntu 16.04.3 PC (AMD64), macOS Catalina (x86_64), Windows 10, and the TensorFlow devel Docker image tensorflow/tensorflow:devel.

Step 1. Install CMake tool

TensorFlow Lite requires CMake 3.16 or higher. On Ubuntu, you can simply run the following command.

sudo apt-get install cmake
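Since the 3.16 minimum matters, you can sanity-check the installed version before configuring. The version_ge helper below is a small shell function of our own (not part of CMake) that compares dotted version strings with sort -V:

```shell
# version_ge A B: succeeds if dotted version A >= version B.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Check the installed CMake against the 3.16 minimum, if cmake is on PATH.
if command -v cmake >/dev/null 2>&1; then
  v="$(cmake --version | head -n1 | awk '{print $3}')"
  if version_ge "$v" 3.16; then
    echo "CMake $v meets the 3.16 minimum"
  else
    echo "CMake $v is too old; 3.16 or higher is required"
  fi
fi
```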

Or you can follow the official CMake installation guide.

Step 2. Clone TensorFlow repository

git clone https://github.com/tensorflow/tensorflow.git tensorflow_src

Step 3. Create CMake build directory

mkdir tflite_build
cd tflite_build

Step 4. Run CMake tool with configurations

Release build

CMake generates an optimized release binary by default. If you want to build for your workstation, simply run the following command.

cmake ../tensorflow_src/tensorflow/lite

Debug build

If you need to produce a debug build with symbol information, provide the -DCMAKE_BUILD_TYPE=Debug option.

cmake ../tensorflow_src/tensorflow/lite -DCMAKE_BUILD_TYPE=Debug

Cross-compilation for Android

You can use CMake to build Android binaries. You need to install the Android NDK and provide the NDK path with the -DCMAKE_TOOLCHAIN_FILE flag. You also need to set the target ABI with the -DANDROID_ABI flag.

cmake -DCMAKE_TOOLCHAIN_FILE=<NDK path>/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a ../tensorflow_src/tensorflow/lite
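The full configure step might look like the sketch below. The NDK install path and android-23 platform level are assumptions; adjust both for your setup (ANDROID_PLATFORM is an option of the NDK's toolchain file, not of TensorFlow Lite):

```shell
# Assumed NDK location; point this at your actual NDK install.
ANDROID_NDK_HOME="${ANDROID_NDK_HOME:-$HOME/Android/Sdk/ndk/21.4.7075529}"

# Configure only if the NDK is present.
if [ -d "$ANDROID_NDK_HOME" ]; then
  cmake -DCMAKE_TOOLCHAIN_FILE="$ANDROID_NDK_HOME/build/cmake/android.toolchain.cmake" \
        -DANDROID_ABI=arm64-v8a \
        -DANDROID_PLATFORM=android-23 \
        ../tensorflow_src/tensorflow/lite
fi
```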

OpenCL GPU delegate

If your target machine has OpenCL support, you can use the GPU delegate, which leverages your GPU's power.

To configure OpenCL GPU delegate support:

cmake ../tensorflow_src/tensorflow/lite -DTFLITE_ENABLE_GPU=ON

Step 5. Build TensorFlow Lite

In the tflite_build directory,

cmake --build . -j

Step 6. Build TensorFlow Lite Benchmark Tool and Label Image Example (Optional)

In the tflite_build directory,

cmake --build . -j -t benchmark_model
cmake --build . -j -t label_image
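Once built, the two tools can be run from the build directory. The subdirectory paths below are assumptions based on the build tree layout and may differ between TensorFlow versions; model.tflite and grace_hopper.bmp are placeholders for your own files:

```shell
# Assumed locations of the built binaries inside tflite_build.
BENCHMARK=./tools/benchmark/benchmark_model
LABEL_IMAGE=./examples/label_image/label_image

# Benchmark a model with 4 threads (runs only if the binary exists).
if [ -x "$BENCHMARK" ]; then
  "$BENCHMARK" --graph=model.tflite --num_threads=4
fi

# Classify an image with the label_image example.
if [ -x "$LABEL_IMAGE" ]; then
  "$LABEL_IMAGE" --tflite_model=model.tflite --image=grace_hopper.bmp
fi
```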

Available Options to build TensorFlow Lite

Here is the list of available options. You can override them with -D<option_name>=[ON|OFF]. For example, pass -DTFLITE_ENABLE_XNNPACK=OFF to disable XNNPACK, which is enabled by default.

Option Name            Feature                                        Default
TFLITE_ENABLE_RUY      Enable the RUY matrix multiplication library   OFF
TFLITE_ENABLE_NNAPI    Enable the NNAPI delegate                      ON (Android)
TFLITE_ENABLE_GPU      Enable the GPU delegate                        OFF
TFLITE_ENABLE_XNNPACK  Enable the XNNPACK delegate                    ON
TFLITE_ENABLE_MMAP     Enable MMAP (unsupported on Windows)           ON

Create a CMake project which uses TensorFlow Lite

Here is the CMakeLists.txt of the TFLite minimal example.

You need to call add_subdirectory() for the TensorFlow Lite directory and link tensorflow-lite with target_link_libraries().

cmake_minimum_required(VERSION 3.16)
project(minimal C CXX)

set(TENSORFLOW_SOURCE_DIR "" CACHE PATH
  "Directory that contains the TensorFlow project" )
if(NOT TENSORFLOW_SOURCE_DIR)
  get_filename_component(TENSORFLOW_SOURCE_DIR
    "${CMAKE_CURRENT_LIST_DIR}/../../../../" ABSOLUTE)
endif()

add_subdirectory(
  "${TENSORFLOW_SOURCE_DIR}/tensorflow/lite"
  "${CMAKE_CURRENT_BINARY_DIR}/tensorflow-lite" EXCLUDE_FROM_ALL)

add_executable(minimal minimal.cc)
target_link_libraries(minimal tensorflow-lite ${CMAKE_DL_LIBS})
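With that CMakeLists.txt in place, the example can be configured and built out of tree, much like steps 3 to 5 above. The paths below are assumptions; point TENSORFLOW_SOURCE_DIR (the cache variable defined in the CMakeLists.txt above) at your own TensorFlow checkout:

```shell
# Assumed location of the TensorFlow clone from step 2.
SRC="$PWD/tensorflow_src"

# Configure and build the minimal example if the source tree is present.
if [ -d "$SRC/tensorflow/lite/examples/minimal" ]; then
  mkdir -p minimal_build && cd minimal_build
  cmake "$SRC/tensorflow/lite/examples/minimal" \
        -DTENSORFLOW_SOURCE_DIR="$SRC"
  cmake --build . -j
fi
```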

Build TensorFlow Lite C library

If you want to build the TensorFlow Lite shared library for the C API, follow steps 1 to 3 first. After that, run the following commands.

cmake ../tensorflow_src/tensorflow/lite/c
cmake --build . -j

These commands generate the following shared library in the current directory.

Platform  Library name
Linux     libtensorflowlite_c.so
macOS     libtensorflowlite_c.dylib
Windows   tensorflowlite_c.dll
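On Linux you can confirm that the C API entry points were exported from the resulting library; TfLiteInterpreterCreate is one of the public symbols declared in tensorflow/lite/c/c_api.h. This sketch assumes a Linux build (on macOS, use the .dylib name with `nm -gU` instead):

```shell
# Assumed output name on Linux, per the table above.
LIB=libtensorflowlite_c.so

# List the exported C API symbols if the library was built.
if [ -f "$LIB" ]; then
  nm -D "$LIB" | grep TfLiteInterpreterCreate
fi
```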