
screenpipe / screenpipe

screenpipe turns your computer into a personal AI that knows everything you've done. record. search. automate. all local, all private, all yours.

17,253 stars
1,460 forks
97 issues
Rust · TypeScript · JavaScript

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing screenpipe/screenpipe in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/screenpipe/screenpipe)
Preview: Analyzed by RepoMind

Repository Overview (README excerpt)


[ screenpipe ] — AI memory for your screen. AI finally knows what you're doing.

what is this?

screenpipe turns your computer into a personal AI that knows everything you've done. record. search. automate. all local, all private, all yours.

• **remember everything** - never forget what you saw, heard, or did
• **search with ai** - find anything using natural language
• **100% local** - your data never leaves your machine
• **open source** - inspect, modify, own

install

Download the desktop app — one-time purchase, all features, auto-updates — or run the CLI, then ask claude.

specs

• 5–10% cpu usage • 0.5–3 gb ram • ~20 gb storage/month • works offline

docs · discord · x · youtube · reddit

Contributing

See CONTRIBUTING.md for guidelines, maintainers, and how to submit PRs. AI/vibe-coded PRs welcome! Thanks to all contributors. 📖 LLM reference

About screenpipe

screenpipe is an open source application (MIT license) that continuously captures your screen and audio, creating a searchable, AI-powered memory of everything you do on your computer. All data is stored locally on your device. It is the leading open source alternative to Rewind.ai (now Limitless), Microsoft Recall, Granola, and Otter.ai. If you're looking for a Rewind alternative, a Recall alternative, or a private local screen recorder with AI, screenpipe is the most popular open source option.
• **Website**: https://screenpi.pe
• **Documentation**: https://docs.screenpi.pe
• **Discord**: https://discord.gg/screenpipe
• **License**: MIT

Who screenpipe is for

• **Knowledge workers** who want to recall anything they've seen or heard on their computer
• **Developers** who want to give AI coding assistants (Cursor, Claude Code, Cline, Continue) context about what they're working on
• **Researchers** who need to search through large volumes of screen-based information
• **People with ADHD** who frequently lose track of tabs, documents, and conversations
• **Remote workers** who want automatic meeting transcription and notes
• **Teams & enterprises** who want to deploy AI across their organization with deterministic data permissions and central config management (screenpi.pe/team)
• **Anyone** who wants a private, local-first alternative to cloud-based AI memory tools

Platform support

| Platform | Support | Installation |
|----------|---------|--------------|
| macOS (Apple Silicon) | ✅ Full support | Native .dmg installer |
| macOS (Intel) | ✅ Full support | Native .dmg installer |
| Windows 10/11 | ✅ Full support | Native .exe installer |
| Linux | ✅ Supported | Build from source |

Minimum requirements: 8 GB RAM recommended; ~5–10 GB disk space per month; CPU usage typically 5–10% on modern hardware thanks to event-driven capture.

Core features

Event-driven screen capture

Instead of recording every second, screenpipe listens for meaningful events — app switches, clicks, typing pauses, scrolling — and captures a screenshot only when something actually changes. Each capture pairs a screenshot with the accessibility tree (the structured text the OS already knows about: buttons, labels, text fields). If accessibility data isn't available (e.g. remote desktops, games), it falls back to OCR. This gives you maximum data quality with minimal CPU and storage — no more processing thousands of identical frames.
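The event-driven capture idea can be sketched in a few lines. This is a simplified TypeScript illustration, not screenpipe's actual Rust implementation; the event shape and the content "fingerprint" are hypothetical stand-ins.

```typescript
// Simplified sketch of event-driven capture: instead of grabbing frames on a
// fixed timer, react to UI events and capture only when the visible content
// (identified here by a hypothetical fingerprint hash) actually changed.
interface UiEvent {
  kind: "app_switch" | "click" | "typing_pause" | "scroll";
  fingerprint: string; // stand-in for a hash of the visible screen content
}

function capturesFor(events: UiEvent[]): string[] {
  const shots: string[] = [];
  let last: string | null = null;
  for (const e of events) {
    if (e.fingerprint !== last) {
      shots.push(e.fingerprint); // capture: content changed since last shot
      last = e.fingerprint;
    }
    // identical consecutive frames are skipped entirely
  }
  return shots;
}
```

A real implementation would pair each capture with the OS accessibility tree and fall back to OCR when that data is unavailable, as described above.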
Audio transcription

Captures system audio (what you hear) and microphone input (what you say). Real-time speech-to-text using OpenAI Whisper running locally on your device. Speaker identification and diarization. Works with any audio source — Zoom, Google Meet, Teams, or any other application.

AI-powered search

Natural language search across all OCR text and audio transcriptions. Filter by application name, window title, browser URL, or date range. Semantic search using embeddings. Returns screenshots and audio clips alongside text results.

Timeline view

Visual timeline of your entire screen history. Scroll through your day like a DVR. Click any moment to see the full screenshot and extracted text. Play back audio from any time period.

Plugin system (Pipes)

Pipes are scheduled AI agents defined as markdown files. Each pipe is a markdown file with a prompt and a schedule — screenpipe runs an AI coding agent (like pi or claude-code) that queries your screen data, calls APIs, writes files, and takes actions. Built-in pipes include:

• **Obsidian sync**: automatically sync screen activity to an Obsidian vault as daily logs
• **Reminders**: scan activity for todos and create Apple Reminders (macOS)
• **Idea tracker**: surface startup ideas from your browsing + market trends

Developers can create pipes by writing a markdown file.

Pipe data permissions

Each pipe supports YAML frontmatter fields that give admins deterministic, OS-level control over what data AI agents can access:

• **App & window filtering**: allow-lists expressed as glob patterns
• **Content type control**: restrict a pipe to specific content types
• **Time & day restrictions**: limit when a pipe can read data
• **Endpoint gating**: control which endpoints a pipe may call

Enforced at three layers — skill gating (the AI never learns denied endpoints), agent interception (blocked before execution), and server middleware (per-pipe cryptographic tokens). Not prompt-based. Deterministic.
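The glob-based app filtering behind pipe permissions can be illustrated with a small deterministic check. This is a hypothetical sketch, not screenpipe's enforcement code, and it handles only the `*` wildcard:

```typescript
// Hypothetical sketch of deterministic app filtering for a pipe: an event is
// visible to the pipe only if its app name matches an allow-list glob.
// Only the `*` wildcard is supported in this sketch.
function globToRegExp(glob: string): RegExp {
  // Escape regex metacharacters, then turn `*` into `.*`
  const escaped = glob
    .replace(/[.+?^${}()|[\]\\]/g, "\\$&")
    .replace(/\*/g, ".*");
  return new RegExp(`^${escaped}$`);
}

function appAllowed(app: string, allowGlobs: string[]): boolean {
  return allowGlobs.some((g) => globToRegExp(g).test(app));
}
```

Because a check like this runs outside the model, a pipe's agent cannot prompt its way around it — which is the point of the "deterministic, not prompt-based" claim above.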
MCP server (Model Context Protocol)

screenpipe runs as an MCP server, allowing AI assistants to query your screen history:

• Works with Claude Desktop, Cursor, VS Code (Cline, Continue), and any MCP-compatible client
• AI assistants can search your screen history, get recent context, and access meeting transcriptions
• Zero configuration

Developer API

Full REST API running on localhost (default port 3030). Endpoints for searching screen content, audio, and frames. Raw SQL access to the underlying SQLite database. JavaScript/TypeScript SDK available.

Apple Intelligence integration (macOS)

On supported Macs, screenpipe uses Apple Intelligence for on-device AI processing — daily summaries, action items, and reminders with zero cloud dependency and zero cost.

Privacy and security

•…
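The localhost REST API can be called from any HTTP client. The sketch below builds a query URL in TypeScript; the `/search` path and the parameter names (`q`, `app_name`, `limit`) are assumptions for illustration — check the API docs for the real schema.

```typescript
// Hypothetical sketch of querying a local screenpipe instance (default port
// 3030). The endpoint path and parameter names are assumptions, not the
// documented schema.
function buildSearchUrl(
  base: string,
  q: string,
  opts: { app?: string; limit?: number } = {}
): string {
  const url = new URL("/search", base);
  url.searchParams.set("q", q);
  if (opts.app !== undefined) url.searchParams.set("app_name", opts.app);
  if (opts.limit !== undefined) url.searchParams.set("limit", String(opts.limit));
  return url.toString();
}

// Usage (requires a running local instance):
// const res = await fetch(buildSearchUrl("http://localhost:3030", "standup notes", { limit: 5 }));
// const hits = await res.json();
```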