
ruvnet / RuView

π RuView: WiFi DensePose turns commodity WiFi signals into real-time human pose estimation, vital sign monitoring, and presence detection — all without a single pixel of video.

37,717 stars
5,181 forks
55 issues
Rust · Python · JavaScript

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing ruvnet/RuView in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/ruvnet/RuView)

Repository Overview (README excerpt)


# π RuView

**See through walls with WiFi + AI**

## Perceive the world through signals.

No cameras. No wearables. No Internet. Just physics.

π RuView is an edge AI perception system that learns directly from the environment around it. Instead of relying on cameras or cloud models, it observes whatever signals exist in a space (WiFi, radio waves across the spectrum, motion patterns, vibration, sound, or other sensory inputs) and builds an understanding of what is happening locally.

Built on top of RuVector, the project became widely known for its implementation of WiFi DensePose — a sensing technique first explored in academic research such as Carnegie Mellon University's *DensePose From WiFi* work. That research demonstrated that WiFi signals can be used to reconstruct human pose. RuView extends that concept into a practical edge system.

By analyzing Channel State Information (CSI) disturbances caused by human movement, RuView reconstructs body position, breathing rate, heart rate, and presence in real time using physics-based signal processing and machine learning. Unlike research systems that rely on synchronized cameras for training, RuView is designed to operate entirely from radio signals and self-learned embeddings at the edge.

The system runs entirely on inexpensive hardware such as an ESP32 sensor mesh (as low as ~$1 per node). Small programmable edge modules analyze signals locally and learn the RF signature of a room over time, allowing the system to separate the environment from the activity happening inside it.

Because RuView learns in proximity to the signals it observes, it improves as it operates. Each deployment develops a local model of its surroundings and continuously adapts without requiring cameras, labeled data, or cloud infrastructure. In practice, this means ordinary environments gain a new kind of spatial awareness.
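As a rough sketch of the CSI-disturbance idea described above (hypothetical code, not from this repository): human motion perturbs the multipath propagation between transmitter and receiver, so the temporal variance of per-subcarrier CSI amplitude rises when someone moves through the space. The function name and synthetic data below are invented for illustration.

```python
import numpy as np

def csi_motion_score(csi_window):
    """csi_window: CSI samples, shape (time, subcarriers), possibly complex.
    Returns the mean temporal variance of subcarrier amplitudes: a crude
    proxy for how strongly human motion is disturbing the channel."""
    amplitude = np.abs(csi_window)                 # per-subcarrier amplitude
    return float(np.var(amplitude, axis=0).mean())

rng = np.random.default_rng(0)
shape = (200, 56)                                  # 200 samples x 56 subcarriers
static = 1.0 + 0.01 * rng.standard_normal(shape)   # empty room: stable channel
moving = 1.0 + 0.30 * rng.standard_normal(shape)   # person moving: large swings

print(csi_motion_score(static) < csi_motion_score(moving))  # True
```

A real deployment would of course learn the baseline "RF signature" of the room over time rather than compare against a fixed synthetic reference, but the core signal is the same: a quiet channel has low amplitude variance, a disturbed one does not.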
Rooms, buildings, and devices begin to sense presence, movement, and vital activity using the signals that already fill the space.

**Built for low-power edge applications.** Edge modules are small programs that run directly on the ESP32 sensor — no internet needed, no cloud fees, instant response.

| What | How | Speed |
|------|-----|-------|
| **Pose estimation** | CSI subcarrier amplitude/phase → DensePose UV maps | 54K fps (Rust) |
| **Breathing detection** | Bandpass 0.1-0.5 Hz → FFT peak | 6-30 BPM |
| **Heart rate** | Bandpass 0.8-2.0 Hz → FFT peak | 40-120 BPM |
| **Presence sensing** | RSSI variance + motion band power | |
| **Through-wall** | Fresnel zone geometry + multipath modeling | Up to 5m depth |

> [!NOTE]
> **CSI-capable hardware required.** Pose estimation, vital signs, and through-wall sensing rely on Channel State Information (CSI) — per-subcarrier amplitude and phase data that standard consumer WiFi does not expose. You need CSI-capable hardware (ESP32-S3 or a research NIC) for full functionality. Consumer WiFi laptops can only provide RSSI-based presence detection, which is significantly less capable.

**Hardware options** for live CSI capture:

| Option | Hardware | Cost | Full CSI | Capabilities |
|--------|----------|------|----------|--------------|
| **ESP32 Mesh** (recommended) | 3-6x ESP32-S3 + WiFi router | ~$54 | Yes | Pose, breathing, heartbeat, motion, presence |
| **Research NIC** | Intel 5300 / Atheros AR9580 | ~$50-100 | Yes | Full CSI with 3x3 MIMO |
| **Any WiFi** | Windows, macOS, or Linux laptop | $0 | No | RSSI-only: coarse presence and motion |

No hardware?
Verify the signal processing pipeline with the deterministic reference signal:

---

## 📖 Documentation

| Document | Description |
|----------|-------------|
| User Guide | Step-by-step guide: installation, first run, API usage, hardware setup, training |
| Build Guide | Building from source (Rust and Python) |
| Architecture Decisions | 62 ADRs — why each technical choice was made, organized by domain (hardware, signal processing, ML, platform, infrastructure) |
| Domain Models | 7 DDD models (RuvSense, Signal Processing, Training Pipeline, Hardware Platform, Sensing Server, WiFi-Mat, CHCI) — bounded contexts, aggregates, domain events, and ubiquitous language |
| Desktop App | **WIP** — Tauri v2 desktop app for node management, OTA updates, WASM deployment, and mesh visualization |
| Medical Examples | Contactless blood pressure, heart rate, breathing rate via 60 GHz mmWave radar — $15 hardware, no wearable |

---

Real-time pose skeleton from WiFi CSI signals — no cameras, no wearables

▶ Live Observatory Demo | ▶ Dual-Modal Pose Fusion Demo

> The server is optional for visualization and aggregation — the ESP32 runs independently for presence detection, vital signs, and fall alerts.
>
> **Live ESP32 pipeline**: Connect an ESP32-S3 node → run the sensing server → open the pose fusion demo for real-time dual-modal pose estimation (webcam + WiFi CSI). See ADR-059.

## 🚀 Key Features

### Sensing

See people, breathing, and heartbeats through walls — using only WiFi signals already in the room.
| | Feature | What It Means |
|---|---------|---------------|
| 🔒 | **Privacy-First** | Tracks human pose using only WiFi signals — no cameras, no video, no images stored |
| 💓 | **Vital Signs** | Detects breathing rate (6-30 breaths/min) and heart rate (40-120 bpm) without any wearable |
| 👥 | **Multi-Person** | Tracks multiple people simultaneously, each with independent pose and vitals — no hard software limit (physics: ~3-5 per AP with 56 subcarriers, more with multi-AP) |
| 🧱 | **Through-Wall** | WiFi passes through walls, furniture, and debris — works where cameras cannot |
| 🚑 | **Disaster Response** | Detects trapped survivors through rubble and classifies injury severity (START triage) |
| 📡 | **Multistatic Mesh** | 4-6 low-cost sensor nodes work together, combining 12+ overlapping signal paths for full 360-degree room coverage with sub-inch accuracy and no person mix-ups (ADR-029) |
| 🌐 | **Persiste…
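The bandpass-then-FFT-peak approach behind the breathing and heart-rate figures above can be sketched in a few lines. This is a hypothetical illustration, not code from this repository: numpy only, with the bandpass approximated by restricting the FFT peak search to the band of interest, and the CSI amplitude trace synthesized rather than captured.

```python
import numpy as np

def band_peak_bpm(signal, fs, lo_hz, hi_hz):
    """Approximate 'bandpass -> FFT peak': FFT the detrended signal and
    return the strongest frequency inside [lo_hz, hi_hz] in cycles/min."""
    x = np.asarray(signal) - np.mean(signal)        # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)      # restrict search band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 60 s CSI amplitude trace: breathing + heartbeat + noise
fs = 20.0                                           # CSI sample rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
trace = (0.50 * np.sin(2 * np.pi * 0.25 * t)        # 0.25 Hz = 15 breaths/min
         + 0.10 * np.sin(2 * np.pi * 1.20 * t)      # 1.20 Hz = 72 beats/min
         + 0.05 * rng.standard_normal(t.size))

print(round(band_peak_bpm(trace, fs, 0.1, 0.5)))    # breathing band: ~15
print(round(band_peak_bpm(trace, fs, 0.8, 2.0)))    # heart-rate band: ~72
```

Searching separate bands is what lets one trace yield both vitals: breathing (0.1-0.5 Hz) and heartbeat (0.8-2.0 Hz) occupy disjoint frequency ranges, so each peak can be picked independently.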