Deploy machine learning models on mobile and IoT devices
TensorFlow Lite is an open source deep learning framework for on-device inference.
How it works
Pick a model
Choose a new model or retrain an existing one.
Convert a TensorFlow model into a compressed FlatBuffer with the TensorFlow Lite Converter.
Take the compressed .tflite file and deploy it to a mobile or embedded device.
Optimize: quantize by converting 32-bit floats to more efficient 8-bit integers, or run on the GPU.
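To make the quantization step above concrete, here is a minimal pure-Python sketch of affine (asymmetric) quantization, the general scheme TensorFlow Lite's 8-bit quantization is based on: a real value is approximated as scale * (q - zero_point), where q is an 8-bit integer. The function and parameter names below are illustrative, not part of the TensorFlow Lite API.

```python
# Affine int8 quantization sketch: real_value ≈ scale * (q - zero_point).
# Illustrative only; TensorFlow Lite performs this internally during conversion.

def quantize_params(rmin, rmax, qmin=-128, qmax=127):
    """Derive scale and zero-point mapping the float range [rmin, rmax]
    onto the integer range [qmin, qmax]."""
    # The representable range must include 0.0 so zero is exactly encodable.
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, int(max(qmin, min(qmax, zero_point)))

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to its nearest representable int8 value, with clamping."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate float value from an int8 code."""
    return scale * (q - zero_point)
```

For a tensor with values in [-1.0, 1.0], `quantize_params` yields a scale near 1/127, and round-tripping any value through `quantize`/`dequantize` introduces at most about half a scale step of error, which is the storage/accuracy trade the 32-bit-to-8-bit conversion makes.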
Solutions to common problems
Explore optimized models to help with common mobile and edge use cases.
News & announcements
See updates to help you with your work, and subscribe to our monthly TensorFlow newsletter to get the latest announcements sent directly to your inbox.
In this video, you'll learn how to build AI into any device using TensorFlow Lite and hear about the future of on-device ML and our roadmap. You'll also discover a library of pretrained models that are ready to use in your apps or to be customized for your needs.
Running inference on the GPU can improve performance by up to ~4x on Pixel 3.