# Build TensorFlow Lite with CMake
This page describes how to build and use the TensorFlow Lite library with the CMake tool.
The following instructions have been tested on an Ubuntu 16.04.3 64-bit PC (AMD64), macOS Catalina (x86_64), Windows 10, and the TensorFlow devel Docker image `tensorflow/tensorflow:devel`.
**Note:** This feature has been available since version 2.4.
### Step 1. Install CMake tool
TensorFlow Lite requires CMake 3.16 or higher. On Ubuntu, you can simply run the following command.
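For example, installing it via apt:

```sh
sudo apt-get install cmake
```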
Alternatively, you can follow the official CMake installation guide.
### Step 2. Clone TensorFlow repository
**Note:** If you're using the TensorFlow Docker image, the repo is already provided in `/tensorflow_src/`.
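Otherwise, you can clone the repository yourself; the `tensorflow_src` directory name used in the commands below is just a convention:

```sh
git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
```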
### Step 3. Create CMake build directory
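For example, assuming the build directory is created next to the `tensorflow_src` checkout:

```sh
mkdir tflite_build
cd tflite_build
```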
### Step 4. Run CMake tool with configurations
#### Release build
CMake generates an optimized release binary by default. If you want to build for your workstation, simply run the following command.
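For example, assuming the directory layout from the previous steps:

```sh
cmake ../tensorflow_src/tensorflow/lite
```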
#### Debug build
If you need to produce a debug build with symbol information, provide the `-DCMAKE_BUILD_TYPE=Debug` option.
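For example:

```sh
cmake ../tensorflow_src/tensorflow/lite -DCMAKE_BUILD_TYPE=Debug
```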
#### Build with kernel unit tests
To be able to run kernel tests, provide the `-DTFLITE_KERNEL_TEST=on` flag. Unit test cross-compilation specifics can be found in the next subsection.
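For example:

```sh
cmake ../tensorflow_src/tensorflow/lite -DTFLITE_KERNEL_TEST=on
```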
#### Build installable package
To build an installable package that can be used as a dependency by another CMake project with `find_package(tensorflow-lite CONFIG)`, use the `-DTFLITE_ENABLE_INSTALL=ON` option.
You should ideally also provide your own versions of the library dependencies, which will also need to be used by the project that depends on TF Lite. You can use the `-DCMAKE_FIND_PACKAGE_PREFER_CONFIG=ON` option and set the `<PackageName>_DIR` variables to point to your library installations.
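A sketch of such a configuration; the specific `<PackageName>_DIR` variables and install paths shown here are illustrative and depend on which dependencies you provide and where you installed them:

```sh
cmake ../tensorflow_src/tensorflow/lite \
  -DTFLITE_ENABLE_INSTALL=ON \
  -DCMAKE_FIND_PACKAGE_PREFER_CONFIG=ON \
  -Dabsl_DIR=<install path>/lib/cmake/absl \
  -DEigen3_DIR=<install path>/share/eigen3/cmake \
  -DFlatbuffers_DIR=<install path>/lib/cmake/flatbuffers
```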
**Note:** Refer to the CMake documentation for `find_package` to learn more about handling and locating packages.
#### Cross-compilation
You can use CMake to build binaries for ARM64 or Android target architectures.
To cross-compile TF Lite, you need to provide the path to the SDK (e.g. the ARM64 SDK, or the NDK in Android's case) with the `-DCMAKE_TOOLCHAIN_FILE` flag.
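For example, with a toolchain file for your target:

```sh
cmake -DCMAKE_TOOLCHAIN_FILE=<CMakeToolchainFileLoc> ../tensorflow_src/tensorflow/lite
```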
##### Specifics of Android cross-compilation
For Android cross-compilation, you need to install the Android NDK and provide the NDK path with the `-DCMAKE_TOOLCHAIN_FILE` flag mentioned above. You also need to set the target ABI with the `-DANDROID_ABI` flag.
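For example, assuming `arm64-v8a` as the target ABI (`android.toolchain.cmake` ships with the NDK):

```sh
cmake -DCMAKE_TOOLCHAIN_FILE=<NDK path>/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  ../tensorflow_src/tensorflow/lite
```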
##### Specifics of kernel (unit) tests cross-compilation
Cross-compilation of the unit tests requires the flatc compiler for the host architecture. For this purpose, there is a CMakeLists file located in `tensorflow/lite/tools/cmake/native_tools/flatbuffers` that builds the flatc compiler with CMake in advance, in a separate build directory, using the host toolchain.
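A sketch of such a native build, assuming a separate `flatc-native-build` directory next to `tflite_build`:

```sh
mkdir flatc-native-build
cd flatc-native-build
cmake ../tensorflow_src/tensorflow/lite/tools/cmake/native_tools/flatbuffers
cmake --build .
```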
It is also possible to install flatc to a custom installation location (e.g. to a directory containing other natively built tools) instead of the CMake build directory:
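For example, using the standard `CMAKE_INSTALL_PREFIX` mechanism:

```sh
cmake -DCMAKE_INSTALL_PREFIX=<native_tools_dir> \
  ../tensorflow_src/tensorflow/lite/tools/cmake/native_tools/flatbuffers
cmake --build . --target install
```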
For the TF Lite cross-compilation itself, the additional parameter `-DTFLITE_HOST_TOOLS_DIR=<flatc_dir_path>`, pointing to the directory containing the native flatc binary, needs to be provided along with the `-DTFLITE_KERNEL_TEST=on` flag mentioned above.
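A sketch of such a configuration (the toolchain file and directory paths are placeholders):

```sh
cmake -DCMAKE_TOOLCHAIN_FILE=<CMakeToolchainFileLoc> \
  -DTFLITE_HOST_TOOLS_DIR=<flatc_dir_path> \
  -DTFLITE_KERNEL_TEST=on \
  ../tensorflow_src/tensorflow/lite
```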
##### Cross-compiled kernel (unit) tests launch on target
Unit tests can be run as separate executables or using the CTest utility. As far as CTest is concerned, if at least one of the parameters `TFLITE_ENABLE_NNAPI`, `TFLITE_ENABLE_XNNPACK` or `TFLITE_EXTERNAL_DELEGATE` is enabled for the TF Lite build, the resulting tests are generated with two different labels (utilizing the same test executable):
- `plain` - denoting the tests run on the CPU backend
- `delegate` - denoting the tests expecting additional launch arguments that specify the delegate used
Both `CTestTestfile.cmake` and `run-tests.cmake` (as referred to below) are available in `<build_dir>/kernels`.
Launch of unit tests with the CPU backend (provided the `CTestTestfile.cmake` is present on the target in the current directory):
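For example, using the `plain` label:

```sh
ctest -L plain
```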
Launch examples of unit tests using delegates (provided the `CTestTestfile.cmake` as well as the `run-tests.cmake` file are present on the target in the current directory):
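A sketch, assuming delegate-specific arguments are forwarded to the test executables through the `run-tests.cmake` launcher; the argument-forwarding option shown here is an assumption and may differ in your setup, while `--use_xnnpack=true` is the usual XNNPACK delegate test flag:

```sh
ctest -L delegate --test-arguments=--use_xnnpack=true
```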
A known limitation of this way of providing additional delegate-related launch arguments to unit tests is that it effectively supports only those tests with an expected return value of 0. Any other return value is reported as a test failure.
#### OpenCL GPU delegate
If your target machine has OpenCL support, you can use the GPU delegate, which can leverage your GPU's power.
To configure OpenCL GPU delegate support:
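For example:

```sh
cmake ../tensorflow_src/tensorflow/lite -DTFLITE_ENABLE_GPU=ON
```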
**Note:** This is experimental and available starting from TensorFlow 2.5. There could be compatibility issues. It has only been verified with Android devices and NVIDIA CUDA OpenCL 1.2.
### Step 5. Build TensorFlow Lite
In the `tflite_build` directory,
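run the build, optionally with parallel jobs (`-j`):

```sh
cmake --build . -j
```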
**Note:** This generates a static library `libtensorflow-lite.a` in the current directory, but the library isn't self-contained since all the transitive dependencies are not included. To use the library properly, you need to create a CMake project. Please refer to the "Create a CMake project which uses TensorFlow Lite" section.
### Step 6. Build TensorFlow Lite Benchmark Tool and Label Image Example (Optional)
In the `tflite_build` directory,
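build the optional tools by naming their targets; `benchmark_model` and `label_image` are the target names assumed here:

```sh
cmake --build . -j -t benchmark_model
cmake --build . -j -t label_image
```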
### Available Options to build TensorFlow Lite
Here is the list of available options. You can override them with `-D<option_name>=[ON|OFF]`. For example, pass `-DTFLITE_ENABLE_XNNPACK=OFF` to disable XNNPACK, which is enabled by default.
| Option Name | Feature | Android | Linux | macOS | Windows |
|---|---|---|---|---|---|
| `TFLITE_ENABLE_RUY` | Enable RUY matrix multiplication library | ON | OFF | OFF | OFF |
| `TFLITE_ENABLE_NNAPI` | Enable NNAPI delegate | ON | OFF | N/A | N/A |
| `TFLITE_ENABLE_GPU` | Enable GPU delegate | OFF | OFF | N/A | N/A |
| `TFLITE_ENABLE_XNNPACK` | Enable XNNPACK delegate | ON | ON | ON | ON |
| `TFLITE_ENABLE_MMAP` | Enable MMAP | ON | ON | ON | N/A |
## Create a CMake project which uses TensorFlow Lite
Here is the CMakeLists.txt of the TFLite minimal example.
You need to have `add_subdirectory()` for the TensorFlow Lite directory and link `tensorflow-lite` with `target_link_libraries()`.
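A minimal sketch of such a CMakeLists.txt, assuming your project lives next to the `tensorflow_src` checkout and consists of a single `minimal.cc` source file:

```cmake
cmake_minimum_required(VERSION 3.16)
project(minimal C CXX)

# Pull in the TensorFlow Lite sources; adjust the path to your checkout.
add_subdirectory(
  "${CMAKE_CURRENT_LIST_DIR}/../tensorflow_src/tensorflow/lite"
  "${CMAKE_CURRENT_BINARY_DIR}/tensorflow-lite"
  EXCLUDE_FROM_ALL)

# Build the example binary and link it against TF Lite.
add_executable(minimal minimal.cc)
target_link_libraries(minimal tensorflow-lite)
```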
## Build TensorFlow Lite C library
If you want to build the TensorFlow Lite shared library for the C API, follow step 1 to step 3 first. After that, run the following commands.
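For example, from the build directory created in Step 3:

```sh
cmake ../tensorflow_src/tensorflow/lite/c
cmake --build . -j
```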
These commands generate the following shared library in the current directory.
**Note:** On Windows systems, you can find the `tensorflowlite_c.dll` under the `debug` directory.
| Platform | Library name |
|---|---|
| Linux | `libtensorflowlite_c.so` |
| macOS | `libtensorflowlite_c.dylib` |
| Windows | `tensorflowlite_c.dll` |
**Note:** You need the public headers (`tensorflow/lite/c_api.h`, `tensorflow/lite/c_api_experimental.h`, `tensorflow/lite/c_api_types.h`, and `tensorflow/lite/common.h`), and the private headers that those public headers include (`tensorflow/lite/core/builtin_ops.h`, `tensorflow/lite/core/c/*.h`, and `tensorflow/lite/core/async/c/*.h`) to use the generated shared library.