
vectorize-io / hindsight

Hindsight: Agent Memory That Learns

4,542 stars
300 forks
17 issues
Python · TypeScript · MDX

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing vectorize-io/hindsight in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/vectorize-io/hindsight)

Repository Overview (README excerpt)


Documentation • Paper • Cookbook • Hindsight Cloud

---

What is Hindsight?

Hindsight™ is an agent memory system built to create smarter agents that learn over time. Most agent memory systems focus on recalling conversation history; Hindsight focuses on making agents that learn, not just remember. It eliminates the shortcomings of alternative techniques such as RAG and knowledge graphs, and delivers state-of-the-art performance on long-term memory tasks.

Memory Performance & Accuracy

By benchmark performance, Hindsight is the most accurate agent memory system yet tested. It has achieved state-of-the-art results on LongMemEval, a benchmark widely used to assess memory systems across a variety of conversational AI scenarios. The reported performance of Hindsight and other agent memory solutions as of January 2026 is shown here:

Hindsight's benchmark results have been independently reproduced by research collaborators at the Virginia Tech Sanghani Center for Artificial Intelligence and Data Analytics and The Washington Post; the other scores are self-reported by the respective vendors. Hindsight is used in production at Fortune 500 enterprises and by a growing number of AI startups.

Adding Hindsight to Your AI Agents

The easiest way to use Hindsight with an existing agent is the LLM Wrapper: two lines of code swap your current LLM client for the Hindsight wrapper, after which memories are stored and retrieved automatically as you make LLM calls. If you need more control over how and when your agent stores and recalls memories, there is also a simple API you can integrate with via the SDKs or directly over HTTP.

---

> 🤖 **Using a coding agent?** Install the Hindsight documentation skill for instant access to docs while you code.
>
> Works with Claude Code, Cursor, and other AI coding assistants.
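The wrapper approach described above can be illustrated with a self-contained sketch. Everything here is illustrative: the class and method names are assumptions chosen for demonstration, not the actual Hindsight wrapper API.

```python
# Illustrative LLM-wrapper pattern (NOT the real Hindsight API): intercept
# each chat call, inject recalled memories, and retain the new exchange.

class FakeLLMClient:
    """Stand-in for a real LLM provider client."""
    def chat(self, messages):
        return "ok: " + messages[-1]["content"]

class ListMemory:
    """Toy memory store whose verbs mirror Hindsight's retain/recall."""
    def __init__(self):
        self.items = []
    def retain(self, text):
        self.items.append(text)
    def recall(self, query):
        return [t for t in self.items if query.lower() in t.lower()]

class MemoryWrappedClient:
    """Drop-in replacement for the LLM client that adds memory around calls."""
    def __init__(self, client, memory):
        self.client = client
        self.memory = memory

    def chat(self, messages):
        # Recall memories relevant to the latest user message.
        recalled = self.memory.recall(messages[-1]["content"])
        if recalled:
            messages = [{"role": "system",
                         "content": "Memories: " + "; ".join(recalled)}] + messages
        reply = self.client.chat(messages)
        # Retain the new exchange so future calls can recall it.
        self.memory.retain(messages[-1]["content"] + " -> " + reply)
        return reply

llm = MemoryWrappedClient(FakeLLMClient(), ListMemory())
first = llm.chat([{"role": "user", "content": "hello world"}])
```

The real wrapper presumably performs this interception around actual provider clients; this sketch only shows the shape of the pattern.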
---

Quick Start

Docker (recommended)

> API: http://localhost:8888
> UI: http://localhost:9999

You can change the LLM provider via configuration; the documentation lists the supported providers and models.

Docker (external PostgreSQL)

> API: http://localhost:8888
> UI: http://localhost:9999

Client: Python · Node.js / TypeScript

Python Embedded (no server required)

---

Use Cases

Hindsight is built to support conversational AI agents as well as agents intended to perform tasks autonomously. The ideal use cases are agents that require a blend of these capabilities, such as AI employees that handle open-ended tasks, change behavior based on user feedback, and learn to perform complex tasks in order to automate work at a level approximating a human worker. Hindsight can be used with simple AI workflows like those built with n8n and similar tools, but may be overkill for such applications.

Per-User Memories and Chat History

One of the simpler uses of Hindsight is personalizing AI chatbots and other conversational agents by storing and recalling memories associated with individual users. The requirements for this use case usually look something like this:

Satisfying these requirements in Hindsight is straightforward. When new user inputs and tool calls are ingested using the retain operation, custom metadata can be attached to enrich the new memories. Metadata provides a convenient way to isolate memories that must be restricted to a given user. Once these are fed into the retain operation, any raw memories and mental models that get created can be filtered when retrieving relevant memories.

---

Architecture & Operations

Most agent memory implementations rely on basic vector search, or sometimes a knowledge graph.
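The per-user isolation pattern from the use case above can be sketched with a toy in-memory store. This is not the Hindsight SDK: the `retain`/`recall` signatures and the `user_id` metadata key are assumptions chosen for illustration.

```python
# Toy memory bank showing metadata-based per-user isolation (illustrative
# only; the real retain operation also extracts facts, entities, etc.).

class ToyMemoryBank:
    def __init__(self):
        self._memories = []

    def retain(self, content, metadata=None):
        self._memories.append({"content": content, "metadata": metadata or {}})

    def recall(self, query, metadata_filter=None):
        hits = []
        for m in self._memories:
            # Apply the metadata filter first so one user's memories never
            # leak into another user's recall results.
            if metadata_filter and any(
                m["metadata"].get(k) != v for k, v in metadata_filter.items()
            ):
                continue
            if query.lower() in m["content"].lower():
                hits.append(m["content"])
        return hits

bank = ToyMemoryBank()
bank.retain("Alice prefers dark mode", metadata={"user_id": "alice"})
bank.retain("Bob prefers light mode", metadata={"user_id": "bob"})
alice_prefs = bank.recall("prefers", metadata_filter={"user_id": "alice"})
```

Filtering at recall time, as here, is the simplest scheme; stricter isolation could instead partition memories into separate banks per user.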
Hindsight uses biomimetic data structures to organize agent memories in a way that more closely resembles how human memory works:

• **World:** Facts about the world ("The stove gets hot")
• **Experiences:** The agent's own experiences ("I touched the stove and it really hurt")
• **Mental Models:** Learned understanding of the agent's world, formed by reflecting on raw memories and experiences

Memories in Hindsight are stored in banks (i.e. memory banks). When memories are added, they are pushed into either the world-facts or the experiences memory pathway, then represented as a combination of entities, relationships, and time series with sparse/dense vector representations to aid later recall.

Hindsight provides three simple methods for interacting with the system:

• **Retain:** Provide information to Hindsight that you want it to remember
• **Recall:** Retrieve memories from Hindsight
• **Reflect:** Reflect on memories and experiences to generate new observations and insights from existing memories

Retain

The retain operation pushes new memories into Hindsight: it tells Hindsight to _retain_ the information you pass in. Behind the scenes, retain uses an LLM to extract key facts, temporal data, entities, and relationships, then runs these through a normalization process that transforms the extracted data into canonical entities, time series, and search indexes along with metadata. These representations create the pathways for accurate memory retrieval in the recall and reflect operations.

Recall

The recall operation retrieves memories, which can come from any of the memory types (world, experiences, etc.).
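To make the retain pipeline described above concrete, here is a sketch of the kinds of structured records such an extraction step might produce. The class and field names are illustrative assumptions, not Hindsight's actual schema.

```python
# Illustrative data shapes for the output of a retain-style extraction
# pipeline (names are assumptions, not Hindsight's schema).
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str   # canonical form after normalization
    kind: str   # e.g. "person", "object", "place"

@dataclass
class Relationship:
    source: str
    relation: str
    target: str

@dataclass
class RetainedMemory:
    raw_text: str
    pathway: str                                   # "world" or "experiences"
    entities: list = field(default_factory=list)
    relationships: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

mem = RetainedMemory(
    raw_text="The stove gets hot",
    pathway="world",
    entities=[Entity(name="stove", kind="object")],
)
```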
Recall performs four retrieval strategies in parallel:

• **Semantic:** vector similarity
• **Keyword:** BM25 exact matching
• **Graph:** entity/temporal/causal links
• **Temporal:** time-range filtering

The individual retrieval results are merged, ordered by relevance using reciprocal rank fusion and a cross-encoder reranking model, and the final output is trimmed as needed to fit within the token limit.

Reflect

The reflect operation performs a more thorough analysis of existing memories. This allows the agent to form ne…
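The merge step described for recall uses reciprocal rank fusion; a generic textbook sketch (with the conventional constant k = 60, not necessarily Hindsight's setting) looks like this:

```python
# Generic reciprocal rank fusion: each document's fused score is the sum of
# 1 / (k + rank) over the ranked lists it appears in. Illustrative only,
# not Hindsight's implementation.
def rrf_merge(result_lists, k=60):
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first; a cross-encoder would rerank after this.
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["m3", "m1", "m2"]   # from vector similarity
keyword  = ["m1", "m4"]         # from BM25
graph    = ["m1", "m3"]         # from entity/temporal/causal links
temporal = ["m2"]               # from time-range filtering
merged = rrf_merge([semantic, keyword, graph, temporal])
```

Because RRF works only on ranks, it lets the four strategies vote without having to normalize their incompatible raw scores.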