vinkius-labs / vurb.ts

Vurb.ts - The TypeScript Framework for MCP Servers. Type-safe tools, structured AI perception, and built-in security. Deploy once — every AI assistant connects instantly.

207 stars
15 forks
5 issues
Languages: TypeScript, JavaScript, Shell

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing vinkius-labs/vurb.ts in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
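The on-demand full-file loading idea can be sketched generically. This is an illustrative sketch only, with hypothetical names (`loadOnDemand`, `ContextBlock`), not RepoMind's actual engine; the `read` callback stands in for whatever storage backs the repository.

```typescript
// Hypothetical sketch of on-demand, whole-file context loading: instead of
// retrieving pre-chunked embedding fragments (traditional RAG), the agent
// requests files by path and each one arrives intact in the model's context.
type ContextBlock = { path: string; content: string };

// `read` abstracts the storage (filesystem, git blob, remote API), so files
// are fetched only at the moment the agent asks for them.
function loadOnDemand(
  requested: string[],
  read: (path: string) => string,
): ContextBlock[] {
  return requested.map((path) => ({ path, content: read(path) }));
}

// Whole-file retrieval: the agent sees complete sources, never fragments.
const repo: Record<string, string> = {
  "src/index.ts": "export const answer = 42;",
};
const context = loadOnDemand(["src/index.ts"], (p) => repo[p]);
// context[0].content is the entire file, unfragmented
```

Deferring the `read` call until analysis starts is what keeps the idle page cheap, matching the performance note above.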

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/vinkius-labs/vurb.ts)

Repository Overview (README excerpt)


**The MVA framework for production MCP servers.** Structured perception for AI agents. Zero hallucination. Zero data leaks.

Documentation · Quick Start · API Reference

## Get Started in 5 Seconds

One scaffold command produces a production-ready MCP server with file-based routing, Presenters, middleware, tests, and pre-configured connections for **Cursor**, **Claude Desktop**, **Claude Code**, **Windsurf**, **Cline**, and **VS Code + GitHub Copilot**. Choose a vector to scaffold exactly the project you need:

- File-based routing with zero external deps
- Prisma schema + CRUD tools with field-level security
- n8n workflow bridge that auto-discovers webhooks as tools
- OpenAPI 3.x / Swagger 2.0 to full MVA tool generation
- RFC 8628 Device Flow authentication

Drop in a tool file, restart, and it's a live MCP tool. No central import file, no merge conflicts.

## Why Vurb.ts Exists

Every raw MCP server does the same thing: serialize the database result and ship it to the LLM. Three catastrophic consequences:

- **Data exfiltration.** Every column goes straight to the LLM provider. One leaked field is one GDPR violation.
- **Token explosion.** Every tool schema is sent on every turn, even when irrelevant. System-prompt rules for every domain entity are sent globally, bloating context with wasted tokens.
- **Context DDoS.** An unbounded query can dump thousands of rows into the context window. The LLM hallucinates. Your API bill explodes.

## The MVA Solution

Vurb.ts replaces raw serialization with a **Presenter**: a deterministic perception layer that controls exactly what the agent sees, knows, and can do next. The result is not plain JSON; it is a **Perception Package**:

- No guessing: undeclared fields are rejected.
- Domain rules travel with the data, not in the system prompt.
- Next actions are computed from the data's state.

## Before vs. After

**Before**, with raw MCP, the handler returns raw data. **After**, with Vurb.ts and MVA, the Presenter shapes absolutely everything the agent perceives.
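The field-whitelisting behavior of a Presenter can be sketched in a few lines. This is a minimal illustration with invented names (`present`, the sample `ssn` column), not the real Vurb.ts Presenter API, which the excerpt says is built on Zod schemas.

```typescript
// Minimal sketch of "schema as egress whitelist": only fields the Presenter
// declares pass through; anything else in the database row — including
// columns added by later migrations — is dropped before reaching the LLM.
function present<T extends object>(
  declared: (keyof T)[],
  row: Record<string, unknown>,
): Partial<T> {
  const out: Record<string, unknown> = {};
  for (const key of declared) {
    if ((key as string) in row) out[key as string] = row[key as string];
  }
  return out as Partial<T>;
}

// A new "ssn" column in the database never reaches the agent
// unless it is explicitly declared in the schema.
const visible = present<{ id: number; name: string }>(
  ["id", "name"],
  { id: 1, name: "Ada", ssn: "000-00-0000" },
);
// visible -> { id: 1, name: "Ada" }
```

The key design point is that the whitelist is the only path out: a migration changes the row, not the declaration, so new columns are invisible by default.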
## Architecture

### Egress Firewall — Schema as Security Boundary

The Presenter's Zod schema acts as a whitelist. **Only declared fields pass through.** A database migration that adds a new column doesn't change what the agent sees: the column stays invisible unless you explicitly declare it in the schema.

### DLP Compliance Engine — PII Redaction

GDPR / LGPD / HIPAA compliance built into the framework. The engine compiles a V8-optimized redaction function that masks sensitive fields **after** UI blocks and rules have been computed (the Late Guillotine Pattern), so the LLM receives masked placeholders instead of real values. Custom censors, wildcard paths, and centralized PII field lists are supported. **Zero-leak guarantee**: the developer cannot accidentally bypass redaction.

### 8 Anti-Hallucination Mechanisms

Each mechanism compounds: fewer tokens in context, fewer requests per task, less hallucination, lower cost.

### FSM State Gate — Temporal Anti-Hallucination

**The first framework where it is physically impossible for an AI to execute tools out of order.** LLMs are chaotic: even with HATEOAS suggestions, a model can ignore them and call a tool out of sequence, checking out an empty cart, say. The FSM State Gate makes temporal hallucination structurally impossible: if the current workflow state does not allow a tool, that tool **doesn't exist** in the tool list. The LLM literally cannot call it. Each state exposes only its own subset of tools.

Three complementary layers: **Format** (Zod validates shape), **Guidance** (HATEOAS suggests the next tool), **Gate** (FSM physically removes wrong tools). XState v5 powered and serverless-ready.

### Zero-Trust Sandbox — Computation Delegation

The LLM sends JavaScript logic to your data instead of shipping data to the LLM. Code runs inside a sealed V8 isolate with **zero access** to the host's globals, filesystem, or network. Timeout kill, memory cap, output limit, automatic isolate recovery, and an AbortSignal kill-switch (Connection Watchdog). HATEOAS instructions are auto-injected into the tool description, so the LLM knows exactly how to format its code.
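The delegation idea can be sketched with Node's built-in `vm` module. Note the important caveat: `node:vm` is **not** a security boundary, so this only demonstrates the shape of the pattern; the framework is described as using sealed V8 isolates, and `runDelegated` is an invented name.

```typescript
// Illustrative sketch of computation delegation: the model's code runs
// against the data with only `data` in scope and a timeout. node:vm is NOT
// a real sandbox — a production system needs true V8 isolates — but the
// flow is the same: logic travels to the data, rows never leave the server.
import vm from "node:vm";

function runDelegated(code: string, data: unknown): unknown {
  // A fresh context per call: no process, require, or host globals exposed.
  const context = vm.createContext({ data, result: undefined });
  vm.runInContext(`result = (${code})(data)`, context, { timeout: 100 });
  return context.result;
}

// Only the computed aggregate returns to the model, not the rows.
const total = runDelegated(
  "rows => rows.reduce((sum, r) => sum + r.amount, 0)",
  [{ amount: 2 }, { amount: 3 }],
);
// total -> 5
```

Even in this toy version, Node globals like `process` are absent inside the context, which illustrates why the pattern shrinks both the attack surface and the context window.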
Prototype pollution is contained and sandbox escapes are blocked: one isolate per engine, a new pristine context per call.

### State Sync — Temporal Awareness for Agents

LLMs have no sense of time. After one mutation and then another, the agent still believes the list is unchanged. Vurb.ts injects RFC 7234-inspired cache-control signals: registry-level policies, glob patterns, policy overlap detection, observability hooks, and MCP emission.

### Prompt Engine — Server-Side Templates

MCP Prompts as executable server-side templates with the same Fluent API as tools: middleware, hydration timeout, schema-informed coercion, interceptors, and multi-modal messages. The Presenter bridge decomposes any Presenter into prompt messages — same schema, same rules, same affordances in both tools and prompts. Interceptors inject compliance footers after every handler. Prompt listing supports filtering, pagination, and lifecycle sync.

### Agent Skills — Progressive Instruction Distribution

**No other MCP framework has this.** Distribute domain expertise to AI agents on demand via MCP. Three-layer progressive disclosure: the agent searches a lightweight index, loads only the relevant SKILL.md, and reads auxiliary files on demand. Zero context-window waste. Skills follow the agentskills.io open standard (SKILL.md with YAML frontmatter). One tool returns the lightweight index, one returns full instructions, and one gives access to auxiliary files with **path traversal protection** (only files within the skill's directory are reachable). Custom search engines are supported.

### Fluent API — Semantic Verbs & Chainable Builders

Every builder method is chainable and fully typed. Types accumulate as you chain, so the final handler has 100% accurate autocomplete with zero annotations. Dedicated verbs embed prompt engineering, prevent backend overload, cap response size, stream MCP progress notifications, control temporal awareness, enable computation delegation, and enable FSM gating.

### Middleware — Pre-Compiled, Zero-Allocation

tRPC-style context derivation. Middleware chains com…
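The type-accumulating builder pattern described under the Fluent API can be sketched generically. The names here (`ToolBuilder`, `with`, `handle`) are invented for illustration and are not the actual Vurb.ts verbs.

```typescript
// Generic sketch of a chainable builder whose types accumulate: each call
// widens the context type, so the final handler's argument is fully
// inferred with no annotations — the property the excerpt describes.
class ToolBuilder<Ctx extends object> {
  constructor(private readonly ctx: Ctx) {}

  // Returns a new builder whose context type includes the added key.
  with<K extends string, V>(key: K, value: V): ToolBuilder<Ctx & Record<K, V>> {
    return new ToolBuilder({ ...this.ctx, [key]: value } as Ctx & Record<K, V>);
  }

  // The handler receives the fully accumulated context type.
  handle<R>(fn: (ctx: Ctx) => R): R {
    return fn(this.ctx);
  }
}

// `ctx.name` and `ctx.limit` autocomplete with no type annotations.
const description = new ToolBuilder({})
  .with("name", "list_orders")
  .with("limit", 50)
  .handle((ctx) => `${ctx.name} (max ${ctx.limit})`);
// description -> "list_orders (max 50)"
```

Returning a new builder with a widened generic on every call is the standard trick (used by tRPC and Zod themselves) for making autocomplete track the chain without manual annotations.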