
wonderNefelibata / Awesome-LRM-Safety

Awesome Large Reasoning Model (LRM) Safety. This repository collects safety-related research on currently popular large reasoning models such as DeepSeek-R1 and OpenAI o1.

View on GitHub · 82 stars · 6 forks · 1 issue · Python

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing wonderNefelibata/Awesome-LRM-Safety in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
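The contrast between on-demand full-file loading and chunk-based retrieval can be sketched roughly as follows. This is an illustrative sketch only, not RepoMind's actual implementation; the function names and parameters are assumptions.

```python
# Illustrative sketch (not RepoMind's real code): contrasting on-demand
# full-file context loading ("Agentic CAG" style) with fixed-size chunking
# as used by traditional RAG pipelines.
from pathlib import Path


def load_full_file(repo_root: str, rel_path: str, max_bytes: int = 200_000) -> str:
    """Load an entire source file into the model context on demand,
    preserving the file's internal structure instead of fragmenting it."""
    path = Path(repo_root) / rel_path
    text = path.read_text(encoding="utf-8", errors="replace")
    # Guard against exhausting the context window on very large files.
    return text[:max_bytes]


def chunk_for_rag(text: str, chunk_size: int = 500) -> list[str]:
    """Traditional RAG preprocessing: split text into fixed-size chunks.
    Cross-chunk context (e.g. a class split across chunks) is lost."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

In the full-file approach, the agent decides *which* files to pull in while answering a question, so each loaded file arrives intact; the chunked approach retrieves fragments that may cut through function or class boundaries.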

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/wonderNefelibata/Awesome-LRM-Safety)