# LLM-Everything

## 📃 Preface

**Master everything about large language models — systematically, from scratch.**

***

## ✨ Why this project?

There is no shortage of LLM tutorials; what is missing are ones that **truly explain things clearly**.

• 🎯 **No copy-pasting** — every article is carefully polished, breaking complex concepts down in a vivid, accessible way
• 🔨 **Code implemented from scratch** — not just theory: you write the code yourself and come to understand the principles through practice
• 🗺️ **A systematic roadmap** — a complete learning path from the fundamentals to the frontier, so you never get lost

***

## 📚 Knowledge Map

### 🎚️ Fundamentals

• 🐍 **Python basics** — the logging module, the import mechanism, the multiprocessing module
• 🐘 **Machine-learning basics** — text representation models: Bag-of-Words, Topic Model, Static Word Embeddings
• 🪿 **Deep-learning basics** — 🚧 continuously updated...
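As a taste of the text-representation topics listed above, the Bag-of-Words model can be sketched in a few lines of plain Python. This is a minimal illustration, not the repository's implementation; the `bag_of_words` helper and its return shape are assumptions made for this sketch:

```python
from collections import Counter

def bag_of_words(docs):
    """Turn a list of documents into count vectors over a shared vocabulary."""
    # Vocabulary: every distinct lowercase token, in sorted order
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    # One count vector per document, aligned with the vocabulary order
    vectors = []
    for doc in docs:
        counts = Counter(doc.lower().split())
        vectors.append([counts.get(word, 0) for word in vocab])
    return vocab, vectors

vocab, vectors = bag_of_words(["the cat sat", "the cat and the dog"])
print(vocab)       # ['and', 'cat', 'dog', 'sat', 'the']
print(vectors[1])  # [1, 1, 1, 0, 2] — "the" appears twice in the second document
```

Word order is deliberately discarded, which is exactly the limitation that motivates the word-embedding and Transformer material later in the roadmap.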
• 🐬 **LLM basics** — switching thinking modes; why today's LLMs are all decoder-only architectures
• 🐬 **Prompt Engineering** — Tree of Thoughts

### 🦖 Transformer Architecture

> Breaking the Transformer down module by module, from input to output, leaving nothing out.

| Module | Link |
| -------------------- | ---------------------- |
| Tokenizer | tokenizer.md |
| Embeddings | ELMo, BERT, GPT |
| Positional Encoding | positional-encoding.md |
| Self Attention | self-attention.md |
| Multi-Head Attention | multi-head-attention.md |
| Add & Norm | add-and-norm.md |
| FeedForward | feedforward.md |
| Linear & Softmax | linear-and-softmax.md |
| Decoding Strategy | decoding-strategy.md |

### 🎄 LLM Training

| Topic | Contents |
| --------- | -------- |
| **Memory requirements** | LLM numerical precision; how much GPU memory training needs |
| **Distributed parallelism** | Data parallelism; model parallelism; optimizer parallelism; heterogeneous-system parallelism |
| **Training pipeline** | Pre-training (data-engineering.md); supervised fine-tuning; reinforcement learning 🚧 |
| **Data preparation** | Curriculum learning |

### 🐒 MoE (Mixture of Experts)

• Expert parallelism

### 🪿 LLM Applications

• RAG
• Graph RAG

### 🐢 Multimodal LLMs

• QFormer

### 🔒 LLM Security

• 🚧 Continuously updated...

***

## 🛣️ Recommended Learning Path

***

## 🤝 Contributing

This project is iterating rapidly, and contributions are welcome:

• 🐛 Open an Issue to point out errors or ask questions
• 🔀 Open a PR to add content
• ⭐ If you find it useful, give it a Star — that is the greatest encouragement

***

**If this project has helped you, please support it with a ⭐ Star!**
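As a preview of the "how much GPU memory does training need" topic, a common back-of-envelope rule for mixed-precision Adam training is roughly 16 bytes per parameter (fp16 weights and gradients, plus fp32 master weights and the two Adam moment estimates), before counting activations and buffers. A minimal sketch, assuming this standard accounting rather than any specific framework:

```python
def bytes_per_param_mixed_precision_adam():
    """Rough per-parameter memory cost for mixed-precision Adam training."""
    fp16_weights = 2    # model weights stored in fp16
    fp16_grads = 2      # gradients stored in fp16
    fp32_master = 4     # fp32 master copy of the weights
    adam_momentum = 4   # fp32 first-moment (momentum) estimate
    adam_variance = 4   # fp32 second-moment (variance) estimate
    return fp16_weights + fp16_grads + fp32_master + adam_momentum + adam_variance

def training_memory_gb(n_params):
    """Weights, gradients, and optimizer state only; activations come on top."""
    return n_params * bytes_per_param_mixed_precision_adam() / 1e9

# A 7B-parameter model already needs on the order of 112 GB
# before a single activation is materialized.
print(training_memory_gb(7e9))  # 112.0
```

Estimates like this are exactly why the distributed-parallelism row above exists: optimizer-state sharding spreads those 16 bytes per parameter across many devices.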