google-ai-edge / LiteRT
LiteRT, the successor to TensorFlow Lite, is Google's on-device framework for high-performance ML & GenAI deployment on edge platforms, via efficient conversion, runtime, and optimization.
Get Started | Contributing | License | Security Policy | Documentation

Build Status | Nightly Builds | Continuous Builds | Other Builds

Description

LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. LiteRT features advanced GPU/NPU acceleration and delivers superior ML & GenAI performance, making on-device ML inference easier than ever.

What's New

• **New LiteRT Compiled Model API**: Streamline development with automated accelerator selection, true async execution, and efficient I/O buffer handling.
  • Automated accelerator selection instead of explicit delegate creation
  • Async execution for faster overall execution time
  • Easy NPU runtime and model distribution
  • Efficient I/O buffer handling
• **Unified NPU Acceleration**: Seamless access to NPUs from major chipset providers with a consistent developer experience. LiteRT NPU, previously under an early access program, is now available to all users: https://ai.google.dev/edge/litert/next/npu
• **Best-in-class GPU Performance**: Use state-of-the-art GPU acceleration for on-device ML. The new buffer interoperability enables zero-copy and minimizes latency across various GPU buffer types.
• **Superior Generative AI inference**: Enable the simplest integration with the best performance for GenAI models.

Platforms Supported

LiteRT is designed for cross-platform deployment on a wide range of hardware.

| Platform | CPU Support | GPU Support          | NPU Support                                                 |
| -------- | ----------- | -------------------- | ----------------------------------------------------------- |
| Android  | ✅          | ✅ OpenCL, ✅ OpenGL | Google Tensor\*, ✅ Qualcomm, ✅ MediaTek, S.LSI\*, Intel\* |
| iOS      | ✅          | ✅ Metal             | ANE\*                                                       |
| Linux    | ✅          | ✅ WebGPU            | N/A                                                         |
| macOS    | ✅          | ✅ WebGPU, ✅ Metal  | ANE\*                                                       |
| Windows  | ✅          | ✅ WebGPU            | Intel\*                                                     |
| Web      | ✅          | ✅ WebGPU            | Coming soon                                                 |
| IoT      | ✅          | ✅ WebGPU            | Broadcom\*, Raspberry Pi\*                                  |

*\*Coming soon*

Model Coverage and Performance

Coming soon...

Installation

For a comprehensive guide to setting up your application with LiteRT, see the Get Started guide.

You can build LiteRT from source:

• Start a Docker daemon.
• Run the build script. It automatically creates a Linux Docker image, which lets you build artifacts for Linux and Android (through cross-compilation).

See the CMake build instructions and Bazel build instructions for more information on how to build runtime libraries with the Docker container. For more information about using the Docker interactive shell or building different targets, please refer to the build documentation.

Choose Your Adventure

Every developer's path is different. Here are a few common journeys to help you get started based on your goals:

• **I have a PyTorch model...**
  • **Goal**: Convert a model from PyTorch to run on LiteRT.
  • **Path 1 (classic models)**: Use the LiteRT Torch Converter to transform your PyTorch model into the LiteRT format, and use the AI Edge Quantizer to optimize it for the best performance under resource constraints. From there, deploy it using the standard LiteRT runtime.
  • **Path 2 (LLMs)**: Use the LiteRT Generative Torch API to reauthor and convert your PyTorch LLMs, and deploy them using LiteRT LM.
• **I'm new to on-device ML...**
  • **Goal**: Run a pre-trained model (like image segmentation) in a mobile app for the first time.
  • **Path 1 (beginner dev)**: Follow the step-by-step instructions in Android Studio to create a real-time segmentation app with CPU/GPU/NPU inference. Source code link.
  • **Path 2 (experienced dev)**: Start with the Get Started guide, find a pre-trained .tflite model on Kaggle Models, and use the standard LiteRT runtime to integrate it into your Android or iOS app.
• **I need to maximize performance...**
  • **Goal**: Accelerate an existing model to run faster and more efficiently on-device.
  • **Path**:
    • Explore the LiteRT API to easily leverage hardware acceleration.
    • For Generative AI workloads, dive into LiteRT LM, our specialized solution for running GenAI models.
• **I'm working with Generative AI...**
  • **Goal**: Deploy a large language model (LLM) or diffusion model on a mobile device.
  • **Path**: Dive into LiteRT LM, our specialized solution for running GenAI models. You'll focus on model quantization and optimizations specific to large model architectures.

Roadmap

Our commitment is to make LiteRT the best runtime for any on-device ML deployment. Our product strategies are:

• **Expanding Hardware Acceleration**: Broadening our support for NPUs and improving performance across all major hardware accelerators.
• **Generative AI Optimizations**: Introducing new optimizations and features specifically for the next wave of on-device generative AI models.
• **Improving Developer Tools**: Building better tools for debugging, profiling, and opt…
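A pre-trained .tflite model downloaded from Kaggle Models can be sanity-checked before it ever reaches the runtime: LiteRT/TFLite models are FlatBuffers, and a FlatBuffer file carries a 4-byte file identifier at byte offset 4, which for TFLite models is "TFL3". Below is a minimal sketch in plain Python with no LiteRT dependency; the helper names are our own, not part of any LiteRT API:

```python
def looks_like_tflite(data: bytes) -> bool:
    """Heuristic check for a TFLite/LiteRT FlatBuffer model.

    FlatBuffer files store a 4-byte file identifier at byte offset 4;
    TFLite models use the identifier "TFL3".
    """
    return len(data) >= 8 and data[4:8] == b"TFL3"


def check_model_file(path: str) -> bool:
    # Only the 8-byte header is needed, so avoid reading the whole model.
    with open(path, "rb") as f:
        return looks_like_tflite(f.read(8))
```

This catches the common mistake of shipping an HTML error page or a truncated download where a model file was expected; it is a format check only, not a validation that the model will load or run.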