langchain-ai / local-deep-researcher
Fully local web research and report writing assistant
## Repository Overview (README excerpt)
# Local Deep Researcher

Local Deep Researcher is a fully local web research assistant that uses any LLM hosted by Ollama or LMStudio. Give it a topic and it will generate a web search query, gather web search results, summarize those results, reflect on the summary to identify knowledge gaps, generate a new search query to address the gaps, and repeat for a user-defined number of cycles. It provides the user a final markdown summary with all sources used to generate it.

Short summary video:

## 🔥 Updates

- 8/6/25: Added support for tool calling and gpt-oss.

> ⚠️ **WARNING (8/6/25)**: These models do not support JSON mode in Ollama. Select tool calling instead of JSON mode in the configuration.

## 📺 Video Tutorials

See it in action or build it yourself? Check out these helpful video tutorials:

- Overview of Local Deep Researcher with R1 - load and test DeepSeek R1 distilled models.
- Building Local Deep Researcher from Scratch - an overview of how this is built.

## 🚀 Quickstart

Clone the repository:

Then edit the environment file to customize the environment variables according to your needs. These variables control model selection, search tools, and other configuration settings. When you run the application, they are loaded automatically from that file.

### Selecting a local model with Ollama

- Download the Ollama app for Mac.
- Pull a local LLM from Ollama with `ollama pull`.
- Optionally, update the environment file with your Ollama configuration settings. If set, these values take precedence over the defaults in the configuration class.

### Selecting a local model with LMStudio

- Download and install LMStudio.
- In LMStudio:
  - Download and load your preferred model (e.g., qwen_qwq-32b)
  - Go to the "Local Server" tab
  - Start the server with the OpenAI-compatible API
  - Note the server URL (default: http://localhost:1234/v1)
- Optionally, update the environment file with your LMStudio configuration settings. If set, these values take precedence over the defaults in the configuration class.

### Selecting a search tool

By default, the assistant uses DuckDuckGo for web search, which does not require an API key. You can also use SearXNG, Tavily, or Perplexity by adding their API keys to the environment file. Optionally, update the environment file with your search tool configuration and API keys. If set, these values take precedence over the defaults in the configuration class.

### Running with LangGraph Studio

#### Mac

- (Recommended) Create a virtual environment.
- Launch the LangGraph server.

#### Windows

- (Recommended) Create a virtual environment.
- Install Python (and add it to PATH during installation).
- Restart your terminal to ensure Python is available, then create and activate a virtual environment.
- Launch the LangGraph server.

### Using the LangGraph Studio UI

When you launch the LangGraph server, you should see the following output, and Studio will open in your browser:

> Ready!
>
> - API: http://127.0.0.1:2024
> - Docs: http://127.0.0.1:2024/docs
> - LangGraph Studio Web UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024

Open Studio via the URL above. In the configuration tab, you can directly set various assistant configurations. Keep in mind that configuration values are applied in a priority order.

Give the assistant a topic for research, and you can visualize its process!

### Model Compatibility Note

When selecting a local LLM, note that some steps use structured JSON output. Some models may have difficulty with this requirement, and the assistant has fallback mechanisms to handle it.
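A hedged sketch of what such a fallback can look like (this is illustrative, not the repository's actual code): attempt to parse the model's reply as strict JSON, and fall back to extracting the first JSON-looking object from a chatty reply.

```python
import json
import re


def parse_model_json(reply: str) -> dict:
    """Parse a model reply that should be JSON, with a lenient fallback.

    Illustrative only; the repository's real fallback logic may differ.
    """
    try:
        return json.loads(reply)  # happy path: the model emitted clean JSON
    except json.JSONDecodeError:
        # Fallback: grab the first {...} block from surrounding prose
        match = re.search(r"\{.*\}", reply, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                pass
        # Last resort: treat the whole reply as a plain-text query
        return {"query": reply.strip()}
```

With this scheme, a reply like `Sure! {"query": "llms"}` still yields a usable dict instead of crashing the graph step.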
For example, the DeepSeek R1 (7B) and DeepSeek R1 (1.5B) models have difficulty producing the required JSON output, and the assistant uses a fallback mechanism to handle this.

### Browser Compatibility Note

When accessing the LangGraph Studio UI:

- Firefox is recommended for the best experience
- Safari users may encounter security warnings due to mixed content (HTTPS/HTTP)
- If you encounter issues, try:
  - Using Firefox or another browser
  - Disabling ad-blocking extensions
  - Checking the browser console for specific error messages

## How it works

Local Deep Researcher is inspired by IterDRAG. That approach decomposes a query into sub-queries, retrieves documents for each one, answers the sub-query, and then builds on that answer by retrieving docs for the next sub-query. This assistant works similarly:

- Given a user-provided topic, it uses a local LLM (via Ollama or LMStudio) to generate a web search query
- It uses a search engine / tool to find relevant sources
- It uses the LLM to summarize the findings from web search as they relate to the research topic
- It then uses the LLM to reflect on the summary, identifying knowledge gaps
- It generates a new search query to address the knowledge gaps
- The process repeats, with the summary iteratively updated with new information from web search
- It runs for a configurable number of iterations (see the configuration tab)

## Outputs

The output of the graph is a markdown file containing the research summary, with citations to the sources used. All sources gathered during research are saved to the graph state. You can visualize them in the graph state, which is visible in LangGraph Studio. The final summary is saved to the graph state as well.

## Deployment Options

There are various ways to deploy this graph. See Module 6 of LangChain Academy for a detailed walkthrough of deployment options with LangGraph.
## TypeScript Implementation

A TypeScript port of this project (without Perplexity search) is available at: https://github.com/PacoVK/ollama-deep-researcher-ts

## Running as a Docker container

The included Dockerfile only runs LangGraph Studio with local-deep-researcher as a service, but does not include Ollama as a dependent service. You must run Ollama separately and point the assistant at it via the corresponding environment variable. Optionally, you can also specify the Ollama model to use via an environment variable.

Clone the repo and build an image:

Run the container:

NOTE: You will see log message: ...but the…
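As a hedged sketch of the Docker build-and-run steps above: the image tag, published port, and environment variable names below are illustrative assumptions, not values confirmed by this README excerpt.

```shell
# Illustrative only: image tag, port, and variable names are assumptions.
docker build -t local-deep-researcher .

# Ollama runs on the host; host.docker.internal lets the container reach it
# (works with Docker Desktop on Mac/Windows).
docker run --rm -it -p 2024:2024 \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434/" \
  local-deep-researcher
```

The key design point is that the container holds only the research graph; the LLM backend stays an external service the container addresses by URL.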