# Kocoro-lab / Shannon
A production-oriented multi-agent orchestration framework.
## AI Architecture Analysis
This repository is indexed by RepoMind. By analyzing Kocoro-lab/Shannon in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.
Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
## Repository Overview (README excerpt)
# Shannon — Production AI Agents That Actually Work

Ship reliable AI agents to production. Multi-strategy orchestration, swarm collaboration, token budget control, human approval workflows, and time-travel debugging — all built in.

**Live Demo →** *View real-time agent execution and event streams*

*Shannon open-source platform architecture — multi-agent orchestration with execution strategies, WASI sandboxing, and built-in observability*

## Why Shannon?

| The Problem | Shannon's Solution |
| --- | --- |
| _Agents fail silently?_ | Temporal workflows with time-travel debugging — replay any execution step-by-step |
| _Costs spiral out of control?_ | Hard token budgets per task/agent with automatic model fallback |
| _No visibility into what happened?_ | Real-time dashboard, Prometheus metrics, OpenTelemetry tracing |
| _Security concerns?_ | WASI sandbox for code execution, OPA policies, multi-tenant isolation |
| _Vendor lock-in?_ | Works with OpenAI, Anthropic, Google, DeepSeek, local models |

## Quick Start

### Prerequisites

- Docker and Docker Compose
- An API key for at least one LLM provider (OpenAI, Anthropic, etc.)

### Installation

**Quick Install:** This downloads config, prompts for API keys, pulls Docker images, and starts services.

**Required API keys** (choose one):

- OpenAI:
- Anthropic:
- Or any OpenAI-compatible endpoint

**Optional but recommended:**

- Web Search: (get key at serpapi.com)
- Web Fetch: (get key at firecrawl.dev)

**Setting API keys:** The install script prompts you to edit during setup. To update keys later:

> **Building from source?** See Development below.
>
> **Platform-specific guides:** Ubuntu · Rocky Linux · Windows · Windows (Chinese)

### Your First Agent

Shannon provides multiple ways to interact with AI agents. Choose the option that works best for you:

#### Option 1: REST API

Use Shannon's HTTP REST API directly.
For complete API documentation, see **docs.shannon.run**.

**Perfect for:**

- Integrating Shannon into existing applications
- Automation scripts and workflows
- Language-agnostic integration

#### Option 2: Python SDK

Install the official Shannon Python SDK:

**Perfect for:**

- Python-based applications and notebooks
- Data science workflows
- Batch processing and automation

See Python SDK Documentation for the full API reference.

#### Option 3: Native Desktop App

Download pre-built desktop applications from GitHub Releases:

- **macOS (Universal)** — Intel & Apple Silicon
- **Windows (x64)** — MSI or EXE installer
- **Linux (x64)** — AppImage or DEB package

Or build from source:

**Native app benefits:**

- System tray integration and native notifications
- Offline task history (Dexie.js local database)
- Better performance and lower memory usage
- Auto-updates from GitHub releases

See Desktop App Guide for more details.

#### Option 4: Web UI (Needs Source Download)

Run the desktop app as a local web server for development:

**Perfect for:**

- Quick testing and exploration
- Development and debugging
- Real-time event streaming visualization

### Configuring Tool API Keys

Add these to your file based on which tools you need:

> **Tip:** For quick setup, just add . Get a key at serpapi.com.
## Ports & Endpoints

| Service | Port | Endpoint | Purpose |
| --- | --- | --- | --- |
| **Gateway** | 8080 | | REST API, OpenAI-compatible |
| **Admin/Events** | 8081 | | SSE/WebSocket streaming, health |
| **Orchestrator** | 50052 | | gRPC (internal) |
| **Temporal UI** | 8088 | | Workflow debugging |
| **Grafana** | 3030 | | Metrics dashboard |

### Additional API Endpoints

**Daemon & Real-time Messaging:**

- — Daemon connection status
- — Real-time message delivery to connected CLI daemons

**Channels (Messaging Integrations):**

- — CRUD for messaging channel integrations (Slack, LINE)
- — Inbound webhook endpoint

**Workspace Files:**

- — List session workspace files
- — Download a workspace file

## Architecture

See the architecture diagram above for the full platform overview, including execution strategies, sandbox isolation, and the tool ecosystem.

**Components:**

- **Orchestrator (Go)** — Task routing, budget enforcement, session management, OPA policies
- **Agent Core (Rust)** — WASI sandbox, policy enforcement, session workspaces, file operations
- **LLM Service (Python)** — Provider abstraction (15+ LLMs), MCP tools, skills system
- **Data Layer** — PostgreSQL (state), Redis (sessions), Qdrant (vector memory)

## Core Capabilities

### OpenAI-Compatible API

### Real-time Event Streaming

### Skills System

Create custom skills in . See Skills System.

### WASI Sandbox & Session Workspaces

The WASI sandbox provides secure code execution with no system-call access. See Session Workspaces.
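The port assignments above can be captured in a small lookup, which is handy when wiring clients to the local Docker Compose stack. Only the ports and purposes come from the table; `localhost` and the `/v1` prefix for the OpenAI-compatible gateway are assumptions of this sketch:

```python
# Ports and purposes taken from the "Ports & Endpoints" table; all services
# are assumed to be reachable on localhost via the default compose setup.
SERVICES = {
    "gateway":      (8080, "REST API, OpenAI-compatible"),
    "admin":        (8081, "SSE/WebSocket streaming, health"),
    "orchestrator": (50052, "gRPC (internal)"),
    "temporal_ui":  (8088, "Workflow debugging"),
    "grafana":      (3030, "Metrics dashboard"),
}

def endpoint_url(service: str, host: str = "localhost") -> str:
    """Build the base URL for one of the stack's services."""
    port, _purpose = SERVICES[service]
    return f"http://{host}:{port}"

# Because the gateway speaks the OpenAI wire protocol, any OpenAI-compatible
# client can point at it; "/v1" is the conventional prefix, assumed here.
openai_base_url = endpoint_url("gateway") + "/v1"
```

With that base URL, an off-the-shelf OpenAI client library can target Shannon instead of the hosted API, which is what makes the gateway drop-in compatible.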
### Research Workflows

### Swarm Multi-Agent Workflows

### Human-in-the-Loop Approval

### Session Continuity

### Scheduled Tasks

### 10+ LLM Providers

- **OpenAI**: GPT-5.1, GPT-5 mini, GPT-5 nano
- **Anthropic**: Claude Opus 4.6, Opus 4.5, Opus 4.1, Sonnet 4.6, Sonnet 4.5, Haiku 4.5
- **Google**: Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 3 Pro Preview
- **xAI**: Grok 4 (reasoning & non-reasoning)
- **DeepSeek**: DeepSeek V3.2, DeepSeek R1
- **Others**: Qwen, Mistral, Meta (Llama 4), Zhipu (GLM-4.6), Cohere
- **Open-Source / Local**: Ollama (Llama, Mistral, Phi, etc.), LM Studio, vLLM — any OpenAI-compatible endpoint
- Automatic failover between providers

> **Note:** OpenAI, Anthropic, xAI, and Google providers are battle-tested. Others are supported but less extensively validated.

### MCP Integration

Native support for the Model Context Protocol:

- Custom tool registration
- OAuth2 server authentication
- Rate limiting and circuit breakers
- Cost tracking for MCP tool usage

## Key Features

### Time-Travel Debugging (Needs Source Download)

### Token Budget Control

### OPA Policy Governance

### Secure Code Execution

### Built for Enterprise

- **Multi-Tenant Isolation** — Separate memory, budgets, and policies per tenant
- **Human-in-the-Loop** — Appro…
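The automatic failover between providers mentioned above can be sketched in miniature. This is illustrative only: Shannon's real fallback logic lives in the Go orchestrator, and every name below is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class FallbackChain:
    """Try providers in order, moving to the next one when a call fails."""
    providers: list[str]
    attempts: list[str] = field(default_factory=list)  # record for debugging

    def complete(self, prompt: str, call) -> str:
        """Run `call(provider, prompt)` against each provider until one succeeds."""
        last_error = None
        for provider in self.providers:
            self.attempts.append(provider)
            try:
                return call(provider, prompt)
            except Exception as exc:  # outage, rate limit, budget exhaustion...
                last_error = exc
        raise RuntimeError("all providers failed") from last_error
```

In the real system the chain would also consult the per-task token budget before each attempt, which is how hard budget limits and model fallback interact.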