Running inference on models with metadata can take as little as a few lines of code. TensorFlow Lite metadata contains a rich description of what the model does and how to use it. That description can empower code generators to produce the inference code for you automatically, such as the TensorFlow Lite Android code generator and the Android Studio ML Model Binding feature. It can also be used to configure your custom inference pipeline.
Tools and libraries
TensorFlow Lite provides a variety of tools and libraries to serve different tiers of deployment requirements, as follows:
Generate model interface with the TensorFlow Lite Code Generator
TensorFlow Lite Code Generator is an executable that generates the model interface automatically based on the metadata. It currently supports Android with Java. The wrapper code removes the need to interact directly with ByteBuffer; instead, developers can interact with the TensorFlow Lite model through typed objects such as Bitmap and Rect. Android Studio users can also get access to the codegen feature through Android Studio ML Model Binding.
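For a sense of what that looks like, here is a minimal sketch of calling a generated wrapper for an image classification model. The class name MyModel and the getProbabilityAsCategoryList() accessor are placeholders; the actual names the generator emits depend on your model file and its metadata, and context and bitmap are assumed to come from the surrounding Android code:

```java
import java.io.IOException;
import java.util.List;

import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.label.Category;

// Assume my_model.tflite is bundled in assets and the generator emitted
// a MyModel wrapper class for it (both names are hypothetical).
try {
  MyModel model = MyModel.newInstance(context);

  // Wrap an Android Bitmap in a typed TensorImage instead of filling
  // a raw ByteBuffer by hand.
  TensorImage image = TensorImage.fromBitmap(bitmap);

  // Run inference; outputs come back as labeled, typed objects.
  MyModel.Outputs outputs = model.process(image);
  List<Category> probability = outputs.getProbabilityAsCategoryList();

  // Release resources when done.
  model.close();
} catch (IOException e) {
  // The model file could not be opened from assets.
}
```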
Leverage out-of-box APIs with the TensorFlow Lite Task Library
TensorFlow Lite Task Library provides optimized, ready-to-use model interfaces for popular machine learning tasks, such as image classification and question answering. The model interfaces are specifically designed for each task to achieve the best performance and usability. Task Library works cross-platform and is supported in Java, C++, and Swift.
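As an illustration, here is a minimal Java sketch using the Task Library's ImageClassifier. It assumes a metadata-populated classification model bundled in assets as model.tflite, and that context and bitmap are already available; error handling is omitted for brevity:

```java
import java.util.List;

import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.classifier.Classifications;
import org.tensorflow.lite.task.vision.classifier.ImageClassifier;
import org.tensorflow.lite.task.vision.classifier.ImageClassifier.ImageClassifierOptions;

// Initialization: labels and preprocessing parameters are read from the
// model's metadata, so only the file path and options are needed.
ImageClassifierOptions options =
    ImageClassifierOptions.builder().setMaxResults(3).build();
ImageClassifier classifier =
    ImageClassifier.createFromFileAndOptions(context, "model.tflite", options);

// Run inference on a Bitmap and get back labeled, scored results.
List<Classifications> results =
    classifier.classify(TensorImage.fromBitmap(bitmap));
```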
Build custom inference pipelines with the TensorFlow Lite Support Library
TensorFlow Lite Support Library is a cross-platform library that helps customize the model interface and build inference pipelines. It contains a variety of utility methods and data structures to perform pre/post-processing and data conversion. It is also designed to match the behavior of TensorFlow modules, such as TF.Image and TF.Text, ensuring consistency from training to inference.
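For example, a typical image pipeline resizes and normalizes the input before handing it to the interpreter. The sketch below assumes a MobileNet-style float classifier with a 224x224 input and a 1001-class output, and an already-created org.tensorflow.lite.Interpreter named tflite; adjust the sizes and normalization parameters to your own model:

```java
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.common.ops.NormalizeOp;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;

// Pre-processing: resize the image to the model's input size and
// normalize pixel values from [0, 255] to [-1, 1].
ImageProcessor imageProcessor =
    new ImageProcessor.Builder()
        .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
        .add(new NormalizeOp(127.5f, 127.5f))
        .build();

TensorImage tensorImage = new TensorImage(DataType.FLOAT32);
tensorImage.load(bitmap);  // bitmap: an android.graphics.Bitmap
tensorImage = imageProcessor.process(tensorImage);

// Output buffer sized for the assumed 1001-class probability vector.
TensorBuffer probabilityBuffer =
    TensorBuffer.createFixedSize(new int[] {1, 1001}, DataType.FLOAT32);

// Run inference with a standard TensorFlow Lite Interpreter.
tflite.run(tensorImage.getBuffer(), probabilityBuffer.getBuffer());
```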