
pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch

4,387 stars
880 forks
1,803 issues
Python · C++ · Objective-C++

AI Architecture Analysis

This repository is indexed by RepoMind. Analyzing pytorch/executorch in the AI interface lets you generate architecture diagrams, visualize control flows, and run automated security audits across the codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

To keep the page fast, source files are loaded only when you start an analysis.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/pytorch/executorch)

Repository Overview (README excerpt)


# ExecuTorch

**On-device AI inference powered by PyTorch**

**ExecuTorch** is PyTorch's unified solution for deploying AI models on-device—from smartphones to microcontrollers—built for privacy, performance, and portability. It powers Meta's on-device AI across **Instagram, WhatsApp, Quest 3, Ray-Ban Meta Smart Glasses**, and more.

Deploy **LLMs, vision, speech, and multimodal models** with the same PyTorch APIs you already know—accelerating research to production with seamless model export, optimization, and deployment. No manual C++ rewrites. No format conversions. No vendor lock-in.

## 📘 Table of Contents

- Why ExecuTorch?
- How It Works
- Quick Start
  - Installation
  - Export and Deploy in 3 Steps
  - Run on Device
- LLM Example: Llama
- Platform & Hardware Support
- Production Deployments
- Examples & Models
- Key Features
- Documentation
- Community & Contributing
- License

## Why ExecuTorch?

- **🔒 Native PyTorch Export** — Direct export from PyTorch. No .onnx, .tflite, or intermediate format conversions. Preserve model semantics.
- **⚡ Production-Proven** — Powers billions of users at Meta with real-time on-device inference.
- **💾 Tiny Runtime** — 50KB base footprint. Runs on microcontrollers to high-end smartphones.
- **🚀 12+ Hardware Backends** — Open-source acceleration for Apple, Qualcomm, ARM, MediaTek, Vulkan, and more.
- **🎯 One Export, Multiple Backends** — Switch hardware targets with a single line change. Deploy the same model everywhere.

## How It Works

ExecuTorch uses **ahead-of-time (AOT) compilation** to prepare PyTorch models for edge deployment:

- **🧩 Export** — Capture your PyTorch model graph
- **⚙️ Compile** — Quantize, optimize, and partition to hardware backends
- **🚀 Execute** — Load on-device via the lightweight C++ runtime

Models use a standardized Core ATen operator set. Partitioners delegate subgraphs to specialized hardware (NPU/GPU) with CPU fallback.
Learn more: How ExecuTorch Works • Architecture Guide

## Quick Start

### Installation

For platform-specific setup (Android, iOS, embedded systems), see the Quick Start documentation.

### Export and Deploy in 3 Steps

### Run on Device

**C++** • **Swift (iOS)** • **Kotlin (Android)**

## LLM Example: Llama

Export Llama models using the script or Optimum-ExecuTorch, then run on-device with the LLM runner API:

**C++** • **Swift (iOS)** • **Kotlin (Android)** — API Docs • Demo App

For multimodal models (vision, audio), use the MultiModal runner API, which extends the LLM runner to handle image and audio inputs alongside text. See the Llava and Voxtral examples.

See examples/models/llama for the complete workflow, including quantization, mobile deployment, and advanced options.

**Next Steps:**

- 📖 Step-by-step tutorial — Complete walkthrough for your first model
- ⚡ Colab notebook — Try ExecuTorch instantly in your browser
- 🤖 Deploy Llama models — LLM workflow with quantization and mobile demos

## Platform & Hardware Support

| **Platform** | **Supported Backends** |
|---|---|
| Android | XNNPACK, Vulkan, Qualcomm, MediaTek, Samsung Exynos |
| iOS | XNNPACK, MPS, CoreML (Neural Engine) |
| Linux / Windows | XNNPACK, OpenVINO, CUDA *(experimental)* |
| macOS | XNNPACK, MPS, Metal *(experimental)* |
| Embedded / MCU | XNNPACK, ARM Ethos-U, NXP, Cadence DSP |

See the Backend Documentation for detailed hardware requirements and optimization guides. For desktop/laptop GPU inference with CUDA and Metal, see the Desktop Guide. For Zephyr RTOS integration, see the Zephyr Guide.

## Production Deployments

ExecuTorch powers on-device AI at scale across Meta's family of apps, VR/AR devices, and partner deployments.
View success stories →

## Examples & Models

**LLMs:** Llama 3.2/3.1/3, Qwen 3, Phi-4-mini, LiquidAI LFM2

**Multimodal:** Llava (vision-language), Voxtral (audio-language), Gemma (vision-language)

**Vision/Speech:** MobileNetV2, DeepLabV3, Whisper

**Resources:** directory • executorch-examples out-of-tree demos • Optimum-ExecuTorch for HuggingFace models • Unsloth for fine-tuned LLM deployment

## Key Features

ExecuTorch provides advanced capabilities for production deployment:

- **Quantization** — Built-in support via torchao for 8-bit, 4-bit, and dynamic quantization
- **Memory Planning** — Optimize memory usage with ahead-of-time allocation strategies
- **Developer Tools** — ETDump profiler, ETRecord inspector, and model debugger
- **Selective Build** — Strip unused operators to minimize binary size
- **Custom Operators** — Extend with domain-specific kernels
- **Dynamic Shapes** — Support variable input sizes with bounded ranges

See Advanced Topics for quantization techniques, custom backends, and compiler passes.

## Documentation

- **Documentation Home** — Complete guides and tutorials
- **API Reference** — Python, C++, Java/Kotlin APIs
- **Backend Integration** — Build custom hardware backends
- **Troubleshooting** — Common issues and solutions

## Community & Contributing

We welcome contributions from the community!

- 💬 **GitHub Discussions** — Ask questions and share ideas
- 🎮 **Discord** — Chat with the team and community
- 🐛 **Issues** — Report bugs or request features
- 🤝 **Contributing Guide** — Guidelines and codebase structure

## License

ExecuTorch is BSD licensed, as found in the LICENSE file.

---

Part of the PyTorch ecosystem • GitHub • Documentation