IDEA-Research / Grounded-Segment-Anything

Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything

17,465 stars
1,584 forks
310 issues
Jupyter Notebook · Python · Cuda

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing IDEA-Research/Grounded-Segment-Anything in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.
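The on-demand loading idea above can be sketched in a few lines. This is a hypothetical illustration, not RepoMind's actual implementation: the class name `OnDemandContext` and its methods are ours, and the only assumption taken from the text is that whole source files are read lazily (rather than pre-chunked as in traditional RAG) and only when an analysis touches them.

```python
from pathlib import Path


class OnDemandContext:
    """Hypothetical sketch of loading full source files into context
    on demand: files are read from disk only when first requested,
    then cached, instead of being pre-chunked into an embedding index."""

    def __init__(self, repo_root):
        self.root = Path(repo_root)
        self._cache = {}  # relative path -> full file text

    def load(self, rel_path):
        """Return the complete file contents, reading from disk at most once."""
        if rel_path not in self._cache:
            self._cache[rel_path] = (self.root / rel_path).read_text(encoding="utf-8")
        return self._cache[rel_path]

    def context_for(self, rel_paths):
        """Concatenate the requested files into one prompt-ready context string."""
        parts = [f"### {p}\n{self.load(p)}" for p in rel_paths]
        return "\n\n".join(parts)
```

Because whole files enter the context, an answer about a function always sees its full surrounding module, at the cost of deferring all I/O to analysis time, which is consistent with the note above about loading files only when an analysis starts.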

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/IDEA-Research/Grounded-Segment-Anything)

Repository Overview (README excerpt)

Grounded-Segment-Anything

We plan to create a very interesting demo by combining Grounding DINO and Segment Anything, which aims to detect and segment anything with text inputs! We will continue to improve it and create more interesting demos on this foundation. We have released an overall technical report about our project on arXiv; please check Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks for more details.

• 🔥 **Grounded SAM 2** is released now, combining Grounding DINO with SAM 2 for any-object tracking in open-world scenarios.
• 🔥 **Grounding DINO 1.5** is released now: IDEA Research's **most capable** open-world object detection model!
• 🔥 **Grounding DINO** and **Grounded SAM** are now supported in Huggingface. For more convenient use, you can refer to this documentation.

We are very willing to **help everyone share and promote new projects** based on Segment-Anything. Please check out here for more amazing demos and works from the community: Highlight Extension Projects. You can submit a new issue (with tag) or a new pull request to add a new project's links.

**🍄 Why Build This Project?**

The **core idea** behind this project is to **combine the strengths of different models to build a very powerful pipeline for solving complex problems**. It is worth mentioning that this is a workflow for combining strong expert models, where **all parts can be used separately or in combination, and can be replaced with any similar but different models (e.g. replacing Grounding DINO with GLIP or other detectors, replacing Stable-Diffusion with ControlNet or GLIGEN, or combining with ChatGPT)**.

**🍇 Updates**

• We have released a comprehensive technical report about our project on arXiv; please check Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks for more details. We are profoundly grateful for the contributions of all the contributors to this project.
• Support Grounded-RepViT-SAM demo, thanks a lot for their great work!
• Support Grounded-Edge-SAM demo, thanks a lot for their great work!
• Support Grounded-Efficient-SAM demo, thanks a lot for their great work!
• Release RAM++, the next generation of RAM. RAM++ can recognize any category with high accuracy, including both predefined common categories and diverse open-set categories.
• Release our newly proposed visual-prompt counting model T-Rex. The introduction video and demo are available in DDS now.
• Support Light-HQ-SAM in EfficientSAM; credits to Mingqiao Ye and Lei Ke, thanks a lot for their great work!
• Combining **Grounding-DINO-B** with SAM-HQ achieves **49.6 mean AP** in the Segmentation in the Wild competition zero-shot track, surpassing Grounded-SAM by **3.6 mean AP**; thanks for their great work!
• Combining Grounding-DINO with efficient SAM variants, including FastSAM and MobileSAM, in EfficientSAM for faster annotating; thanks a lot for their great work!
• By combining **Grounding-DINO-L** with **SAM-ViT-H**, Grounded-SAM achieves 46.0 mean AP in the Segmentation in the Wild competition zero-shot track at the CVPR 2023 workshop, surpassing UNINEXT (CVPR 2023) by about **4 mean AP**.
• Release the RAM-Grounded-SAM Replicate online demo. Thanks a lot to Chenxi for providing this nice demo 🌹.
• Support RAM-Grounded-SAM & SAM-HQ and update the Simple Automatic Label Demo to support RAM, setting up a strong automatic annotation pipeline.
• Check out the Autodistill: Train YOLOv8 with ZERO Annotations tutorial to learn how to use Grounded-SAM + Autodistill for automated data labeling and real-time model training.
• Support SAM-HQ in the Grounded-SAM demo for higher-quality prediction.
• Support RAM-Grounded-SAM for a strong automatic labeling pipeline! Thanks to Recognize-Anything.
• Our Grounded-SAM has been accepted to present a **demo** at ICCV 2023! See you in Paris!
• Support , and in ImageBind-SAM.
• Check out Automated Dataset Annotation and Evaluation with GroundingDINO and SAM, an amazing tutorial on automatic labeling! Thanks a lot to Piotr Skalski and Roboflow!

Table of Contents

• Grounded-Segment-Anything
• Preliminary Works
• Highlighted Projects
• Installation
  • Install with Docker
  • Install locally
• Grounded-SAM Playground
  • Step-by-Step Notebook Demo
  • GroundingDINO: Detect Everything with Text Prompt
  • Grounded-SAM: Detect and Segment Everything with Text Prompt
  • Grounded-SAM with Inpainting: Detect, Segment and Generate Everything with Text Prompt
  • Grounded-SAM and Inpaint Gradio APP
  • Grounded-SAM with RAM or Tag2Text for Automatic Labeling
  • Grounded-SAM with BLIP & ChatGPT for Automatic Labeling
  • Grounded-SAM with Whisper: Detect and Segment Anything with Audio
  • Grounded-SAM ChatBot with Visual ChatGPT
  • Grounded-SAM with OSX for 3D Whole-Body Mesh Recovery
  • Grounded-SAM with VISAM for Tracking and Segment Anything
  • Interactive Fashion-Edit Playground: Click for Segmentation And Editing
  • Interactive Human-face Editing Playground: Click And Editing Human Face
  • 3D Box Via Segment Anything
• Playground: More Interesting and Imaginative Demos with Grounded-SAM
  • DeepFloyd: Image Generation with Text Prompt
  • PaintByExample: Exemplar-based Image Editing with Diffusion Models
  • LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions
  • RePaint: Inpainting using Denoising Diffusion Probabilistic Models
  • ImageBind with SAM: Segment with Different Modalities
• Efficient SAM Series for Faster Annotation
  • Grounded-FastSAM Demo
  • Grounded-MobileSAM Demo
  • Grounded-Light-HQSAM Demo
  • Grounded-Efficient-SAM Demo
  • Grounded-Edge-SAM Demo
  • Grounded-RepViT-SAM Demo
• Citation

Preliminary Works

Here we provide some background knowledge that you may need before trying the demos.
| Title | Intro | Description | Links |
|:----:|:----:|:----:|:----:|
| Segment-Anything | | A strong foundation mode… | |
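The "combine strong expert models" workflow described above has one small but essential glue step: the detector's text-grounded boxes become box prompts for the segmenter. As a hedged sketch (the function name is ours, not the repo's), the conversion below assumes the detector emits boxes in normalized (cx, cy, w, h) format, which is Grounding DINO's usual output convention, while SAM-style models take pixel-space (x1, y1, x2, y2) box prompts:

```python
import numpy as np


def dino_boxes_to_sam_prompts(boxes_cxcywh, image_w, image_h):
    """Convert detector boxes in normalized (cx, cy, w, h) format into
    pixel-space (x1, y1, x2, y2) box prompts for a SAM-style segmenter.

    boxes_cxcywh: iterable of [cx, cy, w, h], each value in [0, 1].
    Returns an (N, 4) float array of [x1, y1, x2, y2] in pixels.
    """
    boxes = np.asarray(boxes_cxcywh, dtype=float)
    # Scale normalized coordinates up to pixel units.
    boxes = boxes * np.array([image_w, image_h, image_w, image_h])
    xyxy = np.empty_like(boxes)
    xyxy[:, :2] = boxes[:, :2] - boxes[:, 2:] / 2  # top-left corner
    xyxy[:, 2:] = boxes[:, :2] + boxes[:, 2:] / 2  # bottom-right corner
    return xyxy
```

For example, a box centered in a 100×200 image covering half of each dimension, `[0.5, 0.5, 0.5, 0.5]`, maps to the pixel corners `[25, 50, 75, 150]`. Keeping this conversion in one place is what lets either side of the pipeline be swapped, as the README notes, for another detector or another segmenter.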