ItzCrazyKns / Vane
Vane is an AI-powered answering engine.
AI Architecture Analysis
This repository is indexed by RepoMind. By analyzing ItzCrazyKns/Vane in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.
Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
Repository Overview (README excerpt)
Vane 🔍

Vane is a **privacy-focused AI answering engine** that runs entirely on your own hardware. It combines knowledge from the vast internet with support for **local LLMs** (Ollama) and cloud providers (OpenAI, Claude, Groq), delivering accurate answers with **cited sources** while keeping your searches completely private.

Want to know more about its architecture and how it works? You can read it here.

✨ Features

• 🤖 **Support for all major AI providers** - Use local LLMs through Ollama or connect to OpenAI, Anthropic Claude, Google Gemini, Groq, and more. Mix and match models based on your needs.
• ⚡ **Smart search modes** - Choose Speed Mode when you need quick answers, Balanced Mode for everyday searches, or Quality Mode for deep research.
• 🧭 **Pick your sources** - Search the web, discussions, or academic papers. More sources and integrations are in progress.
• 🧩 **Widgets** - Helpful UI cards that show up when relevant, like weather, calculations, stock prices, and other quick lookups.
• 🔍 **Web search powered by SearxNG** - Access multiple search engines while keeping your identity private. Support for Tavily and Exa coming soon for even better results.
• 📷 **Image and video search** - Find visual content alongside text results. Search isn't limited to just articles anymore.
• 📄 **File uploads** - Upload documents and ask questions about them. PDFs, text files, images - Vane understands them all.
• 🌐 **Search specific domains** - Limit your search to specific websites when you know where to look. Perfect for technical documentation or research papers.
• 💡 **Smart suggestions** - Get intelligent search suggestions as you type, helping you formulate better queries.
• 📚 **Discover** - Browse interesting articles and trending content throughout the day. Stay informed without even searching.
• 🕒 **Search history** - Every search is saved locally so you can revisit your discoveries anytime. Your research is never lost.
• ✨ **More coming soon** - We're actively developing new features based on community feedback. Join our Discord to help shape Vane's future!

Sponsors

Vane's development is powered by the generous support of our sponsors. Their contributions help keep this project free, open-source, and accessible to everyone.

**✨ Try Warp - The AI-Powered Terminal →**

Warp is revolutionizing development workflows with AI-powered features, modern UX, and blazing-fast performance. Used by developers at top companies worldwide.

---

We'd also like to thank the following partners for their generous support:

Exa • The Perfect Web Search API for LLMs - web search, crawling, deep research, and answer APIs

Installation

There are two main ways to install Vane: with Docker or without Docker. Using Docker is highly recommended.

Getting Started with Docker (Recommended)

Vane can be easily run using Docker. Simply run the following command:

This will pull and start the Vane container with the bundled SearxNG search engine. Once running, open your browser and navigate to http://localhost:3000. You can then configure your settings (API keys, models, etc.) directly in the setup screen.

**Note**: The image includes both Vane and SearxNG, so no additional setup is required. The flags create persistent volumes for your data and uploaded files.

Using Vane with Your Own SearxNG Instance

If you already have SearxNG running, you can use the slim version of Vane:

**Important**: Make sure your SearxNG instance has:
• JSON format enabled in the settings
• Wolfram Alpha search engine enabled

Replace with your actual SearxNG URL. Then configure your AI provider settings in the setup screen at http://localhost:3000.

Advanced Setup (Building from Source)

If you prefer to build from source or need more control:

• Ensure Docker is installed and running on your system.
• Clone the Vane repository:
• After cloning, navigate to the directory containing the project files.
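The clone command itself is elided in this excerpt. Assuming the standard GitHub URL derived from the repository name ItzCrazyKns/Vane (an assumption, not confirmed by this excerpt), it would look something like:

```shell
# Clone the Vane repository (URL assumed from the repo name ItzCrazyKns/Vane)
git clone https://github.com/ItzCrazyKns/Vane.git

# Navigate to the directory containing the project files
cd Vane
```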
• Build and run using Docker:
• Access Vane at http://localhost:3000 and configure your settings in the setup screen.

**Note**: After the containers are built, you can start Vane directly from Docker without having to open a terminal.

Non-Docker Installation

• Install SearXNG and enable the JSON format in the SearXNG settings. Make sure the Wolfram Alpha search engine is also enabled.
• Clone the repository:
• Install dependencies:
• Build the application:
• Start the application:
• Open your browser and navigate to http://localhost:3000 to complete the setup and configure your settings (API keys, models, SearxNG URL, etc.) in the setup screen.

**Note**: Using Docker is recommended as it simplifies the setup process, especially for managing environment variables and dependencies.

See the installation documentation for more information, such as how to update.

Troubleshooting

Local OpenAI-API-Compliant Servers

If Vane tells you that you haven't configured any chat model providers, ensure that:
• Your server is running on (not ) and on the same port you put in the API URL.
• You have specified the correct model name loaded by your local LLM server.
• You have specified the correct API key, or if one is not defined, you have put _something_ in the API key field and not left it empty.

Ollama Connection Errors

If you're encountering an Ollama connection error, it is likely due to the backend being unable to connect to Ollama's API. To fix this issue you can:
• **Check your Ollama API URL:** Ensure that the API URL is correctly set in the settings menu.
• **Update the API URL based on your OS:**
  • **Windows:** Use
  • **Mac:** Use
  • **Linux:** Use
  Adjust the port number if you're using a different one.
• **Linux Users - Expose Ollama to the Network:**
  • Inside , you need to add . (Change the port number if you are using a different one.) Then reload the systemd manager configuration with , and restart Ollama by .
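The file, variable, and commands for that last step are elided above. A sketch of the usual systemd workflow, assuming a default Linux install where Ollama runs as a systemd service (the `OLLAMA_HOST` variable and unit name come from the Ollama docs, not this excerpt):

```shell
# Open an override file for the Ollama systemd service
sudo systemctl edit ollama.service

# In the editor, add under [Service] (assumed from the Ollama docs):
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
# Change the port number if you are using a different one.

# Reload the systemd manager configuration
sudo systemctl daemon-reload

# Restart Ollama so it listens on the network interface
sudo systemctl restart ollama
```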
For more information, see the Ollama docs.
• Ensure that the port (default is 11434) is not blocked by your firewall.

Lemonade Connection Errors

If you'r…