
liliu-avril / Awesome-Segment-Anything

This repository is for the first comprehensive survey on Meta AI's Segment Anything Model (SAM).

View on GitHub
1,215 stars
74 forks
0 issues

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing liliu-avril/Awesome-Segment-Anything in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

To optimize performance, source files are loaded only when you start an analysis.
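The on-demand loading idea can be sketched roughly as follows. This is an illustrative assumption, not RepoMind's actual implementation: `build_context`, its parameters, and the `###` file headers are hypothetical, and the only point is that whole files are read lazily rather than retrieved as pre-chunked snippets.

```python
from pathlib import Path

def build_context(repo_root: str, requested_files: list[str], max_chars: int = 100_000) -> str:
    """Concatenate whole source files into a single prompt context.

    Each file is read in full only when an analysis requests it,
    instead of looking up pre-chunked snippets from a RAG index.
    """
    parts: list[str] = []
    used = 0
    for rel in requested_files:
        path = Path(repo_root) / rel
        if not path.is_file():
            continue  # silently skip files missing from this checkout
        text = path.read_text(encoding="utf-8", errors="replace")
        if used + len(text) > max_chars:
            break  # stop before exceeding the model's context budget
        parts.append(f"### {rel}\n{text}")
        used += len(text)
    return "\n\n".join(parts)
```

Loading whole files preserves cross-function context that chunk-level retrieval can fragment, at the cost of a hard context budget; that trade-off is why the files are only pulled in once an analysis actually starts.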

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/liliu-avril/Awesome-Segment-Anything)

Repository Overview (README excerpt)


## A Comprehensive Survey on Segment Anything Model for Vision and Beyond

> **The First Comprehensive SAM Survey: A Comprehensive Survey on Segment Anything Model for Vision and Beyond.** Chunhui Zhang, Li Liu, Yawen Cui, Guanjie Huang, Weilin Lin, Yiqian Yang, Yuehong Hu. [paper] [homepage] [中文解读]
>
> **Abstract:** *Artificial intelligence (AI) is evolving towards artificial general intelligence, which refers to the ability of an AI system to perform a wide range of tasks and exhibit a level of intelligence similar to that of a human being. This is in contrast to narrow or specialized AI, which is designed to perform specific tasks with a high degree of efficiency. Therefore, it is urgent to design a general class of models, which we term foundation models, trained on broad data that can be adapted to various downstream tasks. The recently proposed segment anything model (SAM) has made significant progress in breaking the boundaries of segmentation, greatly promoting the development of foundation models for computer vision. To fully comprehend SAM, we conduct a survey study. As the first to comprehensively review the progress of segmenting anything task for vision and beyond based on the foundation model of SAM, this work focuses on its applications to various tasks and data types by discussing its historical development, recent progress, and profound impact on broad applications. We first introduce the background and terminology for foundation models including SAM, as well as state-of-the-art methods contemporaneous with SAM that are significant for segmenting anything task. Then, we analyze and summarize the advantages and limitations of SAM across various image processing applications, including software scenes, real-world scenes, and complex scenes. Importantly, many insights are drawn to guide future research to develop more versatile foundation models and improve the architecture of SAM. We also summarize massive other amazing applications of SAM in vision and beyond. Finally, we maintain a continuously updated paper list and an open-source project summary for foundation model SAM at here.*

**Awesome Segment Anything Models:** A curated list of awesome segment anything models in computer vision and beyond. This repository supplements our survey paper. We intend to update it continuously. If you like our project, please give us a star ⭐ on GitHub for the latest updates. We strongly encourage authors of relevant works to make a pull request and add their paper's information [here].

:boom: **SAM Audio: ''SAM Audio: Segment Anything in Audio'' was released.**
:boom: **SAM 3D: ''SAM 3D: 3Dfy Anything in Images'' was released.**
:boom: **SAM 3: ''SAM 3: Segment Anything with Concepts'' was released.**
:boom: **SAM 2: ''Segment Anything in Images and Videos'' was released.**
:boom: **SAM: ''Segment Anything'' was released.**
:boom: **SAM & SAM2 for videos: The first survey on Segment Anything for Videos: A Systematic Survey was online.**

____

:fire: Highlights

### Contents

- Survey
- Paper List
  - Seminal Papers
  - Follow-up Papers
    - 2026
    - 2025
    - 2024
    - 2023
- Open Source Projects
- Awesome Repositories for SAM
- Citation

### Citation

If you find our work useful in your research, please consider citing:

### Survey

- **The First Comprehensive SAM Survey:** Chunhui Zhang, Li Liu, Yawen Cui, Guanjie Huang, Weilin Lin, Yiqian Yang, Yuehong Hu. "A Comprehensive Survey on Segment Anything Model for Vision and Beyond." ArXiv (2024). [paper] [[homepage]](https://github.com/liliu-avril/Awesome-Segment-Anything) [[中文解读]](https://mp.weixin.qq.com/s/uYpRzvRp22-40x8e0pLVIg) [2023.05]
- **The First Survey on SAM & SAM2 for Videos:** Chunhui Zhang, Yawen Cui, Weilin Lin, Guanjie Huang, Yan Rong, Li Liu, Shiguang Shan. "Segment Anything for Videos: A Systematic Survey." ArXiv (2024). [[ArXiv]](https://arxiv.org/abs/2408.08315) [[ChinaXiv]](https://chinaxiv.org/abs/202408.00019) [[ResearchGate]](https://www.researchgate.net/publication/382737497_Segment_Anything_for_Videos_A_Systematic_Survey) [[Project]](https://github.com/983632847/SAM-for-Videos) [[中文解读]](https://zhuanlan.zhihu.com/p/712807912) [2024.07]
- **SAM4MIS:** Yichi Zhang, Rushi Jiao. "Towards Segment Anything Model (SAM) for Medical Image Segmentation: A Survey." CBM (2024). [paper] [project] [2023.05]
- Yichi Zhang, Zhenrong Shen. "Unleashing the Potential of SAM2 for Biomedical Images and Videos: A Survey." ArXiv (2024). [paper] [code] [2024.08]
- Tianfei Zhou, Fei Zhang, Boyu Chang, Wenguan Wang, Ye Yuan, Ender Konukoglu, Daniel Cremers. "Image Segmentation in Foundation Model Era: A Survey." ArXiv (2024). [paper] [2024.08]
- Chaoning Zhang, Fachrina Dewi Puspitasari, Sheng Zheng, Chenghao Li, Yu Qiao, Taegoo Kang, Xinru Shan, Chenshuang Zhang, Caiyan Qin, Francois Rameau, Lik-Hang Lee, Sung-Ho Bae, Choong Seon Hong. "A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering." ArXiv (2024). [paper] [2023.05]
- Xiaorui Sun, Jun Liu, Heng Tao Shen, Xiaofeng Zhu, Ping Hu. "On Efficient Variants of Segment Anything Model: A Survey." IJCV (2025). [paper] [2024.10]
- Mudassar Ali, Tong Wu, Haoji Hu, Qiong Luo, Dong Xu, Weizeng Zheng, Neng Jin, Chen Yang, Jincao Yao. "A review of the Segment Anything Model (SAM) for medical image analysis: Accomplishments and perspectives." Computerized Medical Imaging and Graphics (2024). [paper] [2024.12]
- Jiaxing Zhang, Hao Tang. "SAM2 for Image and Video Segmentation: A Comprehensive Survey." ArXiv (2025). [paper] [2025.03]
- Kang Wang. "A survey on SAM-based methods for medical image segmentation." IS-AII (2025). [paper] [2025.07]
- Guoping Xu, Jayaram K. Udupa, Yajun Yu, Hua-Chieh Shao, Songlin Zhao, Wei Liu, You Zhang. "Segment Anything for Video: A Comprehensive Review of Video Object Segmentation and Tracking from Past to Future." ArXiv (2025). [paper] [2025.07]
- **WanSAM4RS-Tracker:** Zhipeng Wan, Sheng Wang, Wei Han, Yuewei Wang, X…