
yazinsai / OpenOats

A meeting note-taker that talks back.

View on GitHub
1,592 stars
148 forks
6 issues

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing yazinsai/OpenOats in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.
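The on-demand loading model described above can be sketched in a few lines of Python. This is an illustration only: the `RepoContext` class, the token budget, and the 4-characters-per-token approximation are assumptions for the sketch, not RepoMind's actual implementation.

```python
from pathlib import Path

class RepoContext:
    """Lazily loads whole source files into an LLM context on first use,
    rather than pre-chunking them as a traditional RAG pipeline would."""

    def __init__(self, repo_root, token_budget=120_000):
        self.root = Path(repo_root)
        self.budget = token_budget   # rough cap on total context size
        self.loaded = {}             # relative path -> full file text

    def load(self, rel_path):
        # Load the complete file on demand; nothing is read at startup.
        if rel_path not in self.loaded:
            self.loaded[rel_path] = (self.root / rel_path).read_text(
                encoding="utf-8"
            )
        return self.loaded[rel_path]

    def context(self):
        # Concatenate whole files, labeled by path, trimmed to the budget
        # (approximated here as ~4 characters per token).
        parts = [f"// {p}\n{t}" for p, t in self.loaded.items()]
        return "\n\n".join(parts)[: self.budget * 4]
```

Because files enter the context whole, a question about one function sees the entire surrounding file, avoiding the cross-chunk fragmentation the page attributes to traditional RAG.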

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/yazinsai/OpenOats)

Repository Overview (README excerpt)


# OpenOats

A meeting note-taker that talks back.

OpenOats sits next to your call, transcribes both sides of the conversation in real time, and searches your own notes to surface things worth saying — right when you need them.

## Features

- **Invisible to the other side** — the app window is hidden from screen sharing by default, so no one knows you're using it
- **Fully offline transcription** — speech recognition runs entirely on your Mac; no audio ever leaves the device
- **Runs 100% locally** — pair with Ollama for LLM suggestions and local embeddings, and nothing touches the network at all
- **Pick any LLM** — use OpenRouter for cloud models (GPT-4o, Claude, Gemini) or Ollama for local ones (Llama, Qwen, Mistral)
- **Live transcript** — see both sides of the conversation as it happens; copy the whole thing with one click
- **Auto-saved sessions** — every conversation is automatically saved as a plain-text transcript and a structured session log; no manual export needed
- **Knowledge base search** — point it at a folder of notes and it pulls in what's relevant using Voyage AI embeddings, local Ollama embeddings, or any OpenAI-compatible endpoint (llama.cpp, llamaswap, LiteLLM, vLLM, etc.)

## How it works

- You start a call and hit **Live**
- OpenOats transcribes both speakers locally on your Mac
- When the conversation hits a moment that matters — a question, a decision point, a claim worth backing up — it searches your notes and surfaces relevant talking points
- You sound prepared because you are

## Recording Consent & Legal Disclaimer

**Important:** OpenOats records and transcribes audio from your microphone and system audio. Many jurisdictions have laws requiring consent from some or all participants before a conversation may be recorded (e.g., two-party/all-party consent states in the U.S., GDPR in the EU).
**By using this software, you acknowledge and agree that:**

- **You are solely responsible** for determining whether recording is lawful in your jurisdiction and for obtaining any required consent from all participants before starting a session.
- **The developers and contributors of OpenOats provide no legal advice** and make no representations about the legality of recording in any jurisdiction.
- **The developers accept no liability** for any unauthorized or unlawful recording conducted using this software.

**Do not use this software to record conversations without proper consent where required by law.** The app will ask you to acknowledge these obligations before your first recording session.

## Download

Install via Homebrew and upgrade the same way, grab the latest DMG from the Releases page, or build from source.

## Quick start

- Open the DMG and drag OpenOats to Applications
- Launch the app and grant microphone + system audio recording permissions
- Open Settings and pick your providers:
  - **Cloud**: add your OpenRouter and Voyage AI API keys
  - **Local**: select Ollama as your LLM and embedding provider (make sure Ollama is running)
  - **OpenAI-compatible**: select "OpenAI Compatible" as your embedding provider and point it at any endpoint
- Point it at a folder of Markdown or plain-text files — that's your knowledge base
- Click **Idle** to go live

The first run downloads the local speech model (~600 MB).

## What you need

- Apple Silicon Mac, macOS 15+
- Xcode 26 / Swift 6.2
- **For cloud mode**: OpenRouter API key + Voyage AI API key
- **For local mode**: Ollama running locally with your preferred models (one for suggestions, one for embeddings)
- **For OpenAI-compatible embeddings**: any server implementing an OpenAI-compatible embeddings endpoint (llama.cpp, llamaswap, LiteLLM, vLLM, etc.)

## Knowledge base

Point the app at a folder of Markdown or plain text files. That's it. OpenOats chunks, embeds, and caches them locally. When the conversation shifts, it searches your notes and only surfaces what's actually relevant.
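The chunking scheme the README describes (split on markdown headings, with the heading breadcrumb prepended to each chunk) can be sketched as below. This is an illustration in Python, not the app's actual Swift implementation; the default word bounds are taken from the indexing details elsewhere on this page (80–500 words per chunk).

```python
import re

def chunk_markdown(text, min_words=80, max_words=500):
    """Split a markdown document on headings, prepending the heading
    breadcrumb (e.g. "Guide > Setup") to each emitted chunk."""
    chunks, crumbs, buf = [], [], []

    def flush():
        words = " ".join(buf).split()
        if not words:
            return
        crumb = " > ".join(crumbs)
        # Emit slices of at most max_words each.
        for i in range(0, len(words), max_words):
            body = " ".join(words[i:i + max_words])
            chunks.append(f"{crumb}\n{body}" if crumb else body)
        buf.clear()

    for line in text.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m:
            flush()  # close out the section we were accumulating
            level, title = len(m.group(1)), m.group(2).strip()
            # A level-N heading replaces the breadcrumb from depth N down.
            crumbs[:] = crumbs[:level - 1] + [title]
        else:
            buf.append(line)
    flush()

    # Fold chunks shorter than min_words into the preceding chunk.
    merged = []
    for c in chunks:
        if merged and len(c.split()) < min_words:
            merged[-1] += "\n" + c
        else:
            merged.append(c)
    return merged
```

The breadcrumb keeps each chunk self-describing, so an embedding of "Setup" text under a "Guide" heading still carries its document context into the vector search.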
Works well with meeting prep docs, research notes, pitch decks, competitive analysis, customer briefs — anything you'd want at your fingertips during a call.

## Privacy

- Speech is transcribed locally — audio never leaves your Mac
- **With Ollama**: everything stays on your machine. Zero network calls.
- **With cloud providers**: KB chunks are sent to Voyage AI (or your chosen OpenAI-compatible endpoint) for embedding (text only, no audio), and conversation context is sent to OpenRouter for suggestions
- API keys are stored in your Mac's Keychain
- The app window is hidden from screen sharing by default
- Transcripts are saved locally

## Cloud mode: what data leaves your Mac

When using cloud providers, OpenOats makes the following network requests. **No audio is ever sent** — only text. In fully-local mode (Ollama for both LLM and embeddings), nothing touches the network at all.

### 1. Knowledge base indexing — Voyage AI

**When:** Each time you index your knowledge base folder (on launch or when files change).

**What is sent:**

- Text chunks from your knowledge base files (split by markdown headings, 80–500 words each, with the header breadcrumb prepended)
- Model name and requested output dimensions
- Input type

Chunks are sent in batches of 32. Only new or changed files are embedded — unchanged files use a local cache.

### 2. Knowledge base search — Voyage AI

**When:** Each time the suggestion pipeline runs (triggered by a substantive utterance from the other speaker, subject to a 90-second cooldown).

**What is sent:**

- 1–4 short query strings derived from the conversation: the latest utterance text, the current conversation topic, a short conversation summary, and the top open question
- Model name, dimensions, and input type

### 3. Knowledge base reranking — Voyage AI

**When:** Immediately after step 2, if Voyage AI is the embedding provider.
**What is sent:**

- The primary search query (the latest utterance text)
- Up to 10 candidate KB chunk texts (from your own notes) for reranking
- Model name

### 4. Conversation state update — OpenRouter

**When:** Periodically during a session whe…