vxcontrol / pentagi
✨ Fully autonomous AI Agents system capable of performing complex penetration testing tasks
AI Architecture Analysis
This repository is indexed by RepoMind. By analyzing vxcontrol/pentagi in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.
Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
Repository Overview (README excerpt)
PentAGI — Penetration testing Artificial General Intelligence

> **Join the Community!** Connect with security researchers, AI enthusiasts, and fellow ethical hackers. Get support, share insights, and stay updated with the latest PentAGI developments.

Table of Contents

• Overview
• Features
• Quick Start
• API Access
• Advanced Setup
• Development
• Testing LLM Agents
• Embedding Configuration and Testing
• Function Testing with ftester
• Building
• Credits
• License

Overview

PentAGI is an innovative tool for automated security testing that leverages cutting-edge artificial intelligence technologies. The project is designed for information security professionals, researchers, and enthusiasts who need a powerful and flexible solution for conducting penetration tests. A video walkthrough, **PentAGI overview**, is also available.

Features

• Secure & Isolated. All operations are performed in a sandboxed Docker environment with complete isolation.
• Fully Autonomous. An AI-powered agent automatically determines and executes penetration testing steps, with optional execution monitoring and intelligent task planning for enhanced reliability.
• Professional Pentesting Tools. Built-in suite of 20+ professional security tools, including nmap, metasploit, sqlmap, and more.
• Smart Memory System. Long-term storage of research results and successful approaches for future use.
• Knowledge Graph Integration. Graphiti-powered knowledge graph backed by Neo4j for semantic relationship tracking and advanced context understanding.
• Web Intelligence. Built-in browser (via scraper) for gathering the latest information from web sources.
• External Search Systems. Integration with advanced search APIs, including Tavily, Traversaal, Perplexity, DuckDuckGo, Google Custom Search, Sploitus Search, and Searxng, for comprehensive information gathering.
• Team of Specialists. Delegation system with specialized AI agents for research, development, and infrastructure tasks, enhanced with optional execution monitoring and intelligent task planning for optimal performance with smaller models.
• Comprehensive Monitoring. Detailed logging and integration with Grafana/Prometheus for real-time system observation.
• Detailed Reporting. Generation of thorough vulnerability reports with exploitation guides.
• Smart Container Management. Automatic Docker image selection based on specific task requirements.
• Modern Interface. Clean and intuitive web UI for system management and monitoring.
• Comprehensive APIs. Full-featured REST and GraphQL APIs with Bearer token authentication for automation and integration.
• Persistent Storage. All commands and outputs are stored in PostgreSQL with the pgvector extension.
• Scalable Architecture. Microservices-based design supporting horizontal scaling.
• Self-Hosted Solution. Complete control over your deployment and data.
• Flexible Authentication. Support for 10+ LLM providers (OpenAI, Anthropic, Google AI/Gemini, AWS Bedrock, Ollama, DeepSeek, GLM, Kimi, Qwen, Custom) plus aggregators (OpenRouter, DeepInfra). For production local deployments, see the vLLM + Qwen3.5-27B-FP8 guide.
• API Token Authentication. Secure Bearer token system for programmatic access to the REST and GraphQL APIs.
• Quick Deployment. Easy setup through Docker Compose with comprehensive environment configuration.

Architecture

The full README includes expandable diagrams for System Context, Container Architecture, Entity Relationship, Agent Interaction, Memory System, and Chain Summarization.

Chain Summarization

The chain summarization system manages conversation context growth by selectively summarizing older messages. This is critical for preventing token limits from being exceeded while maintaining conversation coherence.
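The REST and GraphQL APIs described in the feature list authenticate with a Bearer token. As a minimal sketch of programmatic access, the helper below attaches the token to each request; note that the base URL, endpoint path, and token value here are illustrative placeholders, not paths documented by PentAGI:

```python
import json
import urllib.request

# Hypothetical values: the real base URL, paths, and token come from your deployment.
BASE_URL = "http://localhost:8080/api/v1"
API_TOKEN = "your-api-token"

def build_request(path: str) -> urllib.request.Request:
    """Build a GET request carrying the Bearer token in the Authorization header."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Accept": "application/json",
        },
    )

def api_get(path: str) -> dict:
    """Send the authenticated request and decode the JSON response body."""
    with urllib.request.urlopen(build_request(path)) as resp:
        return json.loads(resp.read())
```

The same header works for GraphQL: POST the query document to the GraphQL endpoint with `Authorization: Bearer <token>` set.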
The algorithm operates on a structured representation of conversation chains (ChainAST) that preserves message types, including tool calls and their responses. All summarization operations maintain the critical conversation flow while reducing context size.

Global Summarizer Configuration Options

| Parameter | Environment Variable | Default | Description |
| --------------------- | -------------------- | ------- | ---------------------------------------------------------- |
| Preserve Last | | | Whether to keep all messages in the last section intact |
| Use QA Pairs | | | Whether to use the QA pair summarization strategy |
| Summarize Human in QA | | | Whether to summarize human messages in QA pairs |
| Last Section Size | | | Maximum byte size for the last section (50KB) |
| Max Body Pair Size | | | Maximum byte size for a single body pair (16KB) |
| Max QA Sections | | | Maximum number of QA pair sections to preserve |
| Max QA Size | | | Maximum byte size for QA pair sections (64KB) |
| Keep QA Sections | | | Number of recent QA sections to keep without summarization |

Assistant Summarizer Configuration Options

Assistant instances can use customized summarization settings to fine-tune context management behavior:

| Parameter | Environment Variable | Default | Description |
| ------------------ | -------------------- | ------- | -------------------------------------------------------------------- |
| Preserve Last | | | Whether to preserve all messages in the assistant's last section |
| Last Section Size | | | Maximum byte size for the assistant's last section (75KB) |
| Max Body Pair Size | | | Maximum byte size for a single body pair in assistant context (16KB) |
| Max QA Sections | | | Maximum number of QA sections to preserve in assistant context |
| Max QA Size | | | Maximum byte size for the assistant's QA sections (75KB) |
| Keep QA Sections | | | Number of recent QA sections to preserve without summarization |

The assistant summarizer configuration provides more memory for context retention than the global settings, preserving more recent conversation history while still ensuring efficient token usage.

Summarizer Environment Configuration

Advanced Agent Supervision
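The policy the tables above describe — keep the last section intact up to a byte budget, keep the most recent QA sections verbatim, and summarize everything older — can be sketched as follows. This is a simplified illustration under those stated parameters only; the type names and the trivial `summarize` stub are hypothetical, not PentAGI's actual ChainAST implementation:

```python
from dataclasses import dataclass

@dataclass
class Section:
    """One QA section of a conversation chain, held as raw text."""
    text: str

def summarize(text: str) -> str:
    # Stand-in for an LLM summarization call (hypothetical).
    return f"[summary of {len(text)} bytes]"

def compact_chain(sections: list[Section],
                  keep_qa_sections: int = 2,      # recent QA sections kept verbatim
                  max_qa_size: int = 64 * 1024,   # byte cap for kept QA sections
                  last_section_size: int = 50 * 1024) -> list[Section]:
    """Summarize older sections while keeping recent context intact."""
    if not sections:
        return []
    *older, last = sections
    # Preserve the last section intact only while it fits its byte budget.
    if len(last.text.encode()) > last_section_size:
        last = Section(summarize(last.text))
    kept = older[-keep_qa_sections:] if keep_qa_sections else []
    to_summarize = older[: len(older) - len(kept)]
    result = [Section(summarize(s.text)) for s in to_summarize]
    # Kept sections still count against the overall QA size cap.
    budget = max_qa_size
    for s in kept:
        size = len(s.text.encode())
        if size > budget:
            s = Section(summarize(s.text))
        else:
            budget -= size
        result.append(s)
    result.append(last)
    return result
```

The assistant-level settings simply run the same procedure with larger budgets (75KB last section and QA caps), which is why assistant contexts retain more recent history.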