Mobile TensorFlow makes sense when there is a poor or missing network connection, or where sending continuous data to a server would be too expensive. We are working to help developers make lean mobile apps using TensorFlow, both by continuing to reduce the code footprint and by supporting quantization and lower-precision arithmetic, which shrink model size.
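To see why quantization shrinks models, here is a minimal sketch of symmetric 8-bit linear quantization: weights are stored as int8 plus one float scale, a quarter of the float32 storage. The helper names below are illustrative, not part of TensorFlow's API, and real tooling uses more sophisticated schemes (e.g. per-channel scales).

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 values plus a single scale factor.

    Illustrative symmetric scheme: the largest-magnitude weight maps to 127.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(weights)

# int8 storage is 4x smaller than float32 (one scale float is negligible).
print(weights.nbytes // q.nbytes)  # 4

# Rounding error is bounded by half the quantization step.
err = np.abs(dequantize(q, scale) - weights).max()
print(err <= scale / 2 + 1e-7)  # True
```

The accuracy cost of this rounding is often small for neural networks, which is why lower-precision arithmetic is attractive on mobile hardware.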
Many applications can benefit from on-device processing. Google Translate's instant visual translation is a great example: by running inference locally, users get an incredibly responsive and interactive experience.