TensorFlow Lite uses TensorFlow models converted into a smaller, more efficient machine learning (ML) model format. You can use pre-trained models with TensorFlow Lite, modify existing models, or build your own TensorFlow models and then convert them to TensorFlow Lite format. TensorFlow Lite models can perform almost any task a regular TensorFlow model can do: object detection, natural language processing, pattern recognition, and more, using a wide range of input data including images, video, audio, and text.
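
For example, converting a trained Keras model to the TensorFlow Lite format takes only a few lines of Python. The following is a minimal sketch using the tf.lite.TFLiteConverter API; the small Sequential model here is just a placeholder standing in for your own trained model.

```python
import tensorflow as tf

# Placeholder model: substitute your own trained Keras model here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])

# Convert the Keras model to the TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the converted model so it can be bundled with a mobile or edge app.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```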

Skip to the Convert section for information about getting your model to run with TensorFlow Lite.
For guidance on getting models for your use case, keep reading.

You don't have to build a TensorFlow Lite model to start using machine learning on mobile or edge devices. Many already-built and optimized models are available for you to use right away in your application. You can start by using pre-trained models in TensorFlow Lite and move up to building custom models over time, as follows:

  1. Start developing machine learning features with already trained models.
  2. Modify existing TensorFlow Lite models using tools such as Model Maker (see the sketch after this list).
  3. Build a custom model with TensorFlow tools and then convert it to TensorFlow Lite.
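
For instance, step 2 with Model Maker can be as short as the sketch below, which retrains an existing image classification model on your own images. It assumes the tflite_model_maker package is installed and uses a hypothetical flower_photos/ directory in which each subfolder holds the images for one class label.

```python
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

# Load labeled images; 'flower_photos/' is a hypothetical dataset folder
# where each subdirectory name is used as a class label.
data = DataLoader.from_folder('flower_photos/')
train_data, test_data = data.split(0.9)

# Retrain an existing image classification model on the new data.
model = image_classifier.create(train_data)

# Check accuracy, then export the result in TensorFlow Lite format.
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='.')
```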

If you are trying to quickly implement features or utility tasks with machine learning, you should review the use cases supported by ML Kit before starting development with TensorFlow Lite. This development tool provides APIs you can call directly from mobile apps to complete common ML tasks such as barcode scanning and on-device translation. Using this method can help you get results fast. However, ML Kit has limited options for extending its capabilities. For more information, see the ML Kit developer documentation.


If building a custom model for your specific use case is your ultimate goal, you should start with developing and training a TensorFlow model or extending an existing one. Before you start your model development process, you should be aware of the constraints for TensorFlow Lite models and build your model with these constraints in mind:

  • Limited compute capabilities - Compared to fully equipped servers with multiple CPUs, high memory capacity, and specialized processors such as GPUs and TPUs, mobile and edge devices are much more constrained, which limits the size of the models and the amount of data you can effectively process with them.
  • Size of models - The overall complexity of a model, including data pre-processing logic and the number of layers in the model, increases its in-memory size. A large model may run unacceptably slowly or may simply not fit in the available memory of a mobile or edge device; post-training quantization, shown in the sketch after this list, is one way to reduce model size.
  • Size of data - The size of the input data that a machine learning model can effectively process is limited on a mobile or edge device. Models that rely on large data libraries, such as language libraries, image libraries, or video clip libraries, may not fit on these devices and may require off-device storage and access solutions.
  • Supported TensorFlow operations - TensorFlow Lite runtime environments support a subset of the machine learning model operations available in regular TensorFlow. As you develop a model for use with TensorFlow Lite, you should track the compatibility of your model against the capabilities of TensorFlow Lite runtime environments (see the sketch after this list for one way to handle unsupported operations).
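
Both the size and operation constraints can be addressed when you convert a model. The sketch below is a minimal example, with a placeholder Sequential model standing in for your own trained Keras model: it enables default post-training quantization to shrink the model, and it allows a fallback to select TensorFlow ops for operations outside the built-in TensorFlow Lite op set.

```python
import tensorflow as tf

# Placeholder standing in for your own trained Keras model.
model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Post-training quantization: typically shrinks the model to about a
# quarter of its original size and can speed up on-device inference.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Prefer built-in TensorFlow Lite ops, but fall back to select TensorFlow
# ops for anything unsupported. This widens op coverage at the cost of a
# larger runtime binary.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]

tflite_model = converter.convert()
```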

For more information about building effective, compatible, high-performance models for TensorFlow Lite, see Performance best practices.

Learn how to pick a pre-trained ML model to use with TensorFlow Lite.
Use TensorFlow Lite Model Maker to modify models using your training data.
Learn how to build custom TensorFlow models to use with TensorFlow Lite.