ChenHongruixuan / BRIGHT
[ESSD 2025 & IEEE GRSS DFC 2025] Bright: A globally distributed multimodal VHR dataset for all-weather disaster response
Repository Overview (README excerpt)
# ☀️BRIGHT: A globally distributed multimodal VHR dataset for all-weather disaster response

Hongruixuan Chen<sup>1,2</sup>, Jian Song<sup>1,2</sup>, Olivier Dietrich<sup>3</sup>, Clifford Broni-Bediako<sup>2</sup>, Weihao Xuan<sup>1,2</sup>, Junjue Wang<sup>1</sup>, Xinlei Shao<sup>1</sup>, Yimin Wei<sup>1,2</sup>, Junshi Xia<sup>3</sup>, Cuiling Lan<sup>4</sup>, Konrad Schindler<sup>3</sup>, Naoto Yokoya<sup>1,2</sup>*

<sup>1</sup> The University of Tokyo, <sup>2</sup> RIKEN AIP, <sup>3</sup> ETH Zurich, <sup>4</sup> Microsoft Research Asia

**Overview** | **Start BRIGHT** | **Common Issues** | **Follow-Ups** | **Others**

## 🛎️Updates

- BRIGHT has been accepted by ESSD!! The contents related to IEEE GRSS DFC 2025 have been transferred here!!
- BRIGHT has been accepted by ESSD and is now available online!!
- BRIGHT has been integrated into TorChange. Many thanks to Dr. Zhuo Zheng for his effort!!
- All the data and benchmark code related to our paper have now been released. You are warmly welcome to use them!!
- IEEE GRSS DFC 2025 Track II is over. Congratulations to the winners!! You can now download the full version of the DFC 2025 Track II data on Zenodo or HuggingFace!!
- BRIGHT has been integrated into TorchGeo. Many thanks to Nils Lehmann for his effort!!
- The arXiv paper of BRIGHT is now online. If you are interested in the details of BRIGHT, do not hesitate to take a look!!

## 🔭Overview

- **BRIGHT** is the first open-access, globally distributed, event-diverse multimodal dataset specifically curated to support AI-based disaster response. It covers **five** types of natural disasters and **two** types of man-made disasters across **14** disaster events in **23** regions worldwide, with a particular focus on developing countries.
- It supports not only the development of **supervised** deep models but also the evaluation of their performance in a **cross-event transfer** setup, as well as **unsupervised domain adaptation**, **semi-supervised learning**, **unsupervised change detection**, and **unsupervised image matching** methods in multimodal and disaster scenarios.
## 🗝️Let's Get Started with BRIGHT!

Note that the code in this repo runs under **Linux**; we have not tested whether it works under other operating systems.

**Step 1: Clone the repository.** Clone this repository and navigate to the project directory.

**Step 2: Set up the environment.** We recommend creating a conda environment and installing the dependencies via pip:

- *Create and activate a new conda environment*
- *Install dependencies*

**Step 3: Prepare the data.** Please download BRIGHT from Zenodo or HuggingFace. Note that we cannot redistribute the optical data over Ukraine, Myanmar, and Mexico; please follow our tutorial to download and preprocess them. After the data have been prepared, arrange them in the folder/file structure expected by the benchmark code.

**Step 4: Train and evaluate.** Train and evaluate UNet on the BRIGHT dataset using our standard ML split. Afterwards, you can generate raw and visualized prediction results and evaluate performance using the saved weights. You can also download our provided checkpoints from Zenodo.

In addition to the above supervised deep models, BRIGHT also provides standardized evaluation setups for several important learning paradigms and multimodal EO tasks:

- **Cross-event transfer**: Evaluate model generalization across disaster types and regions. This setup simulates real-world scenarios where no labeled data (**zero-shot**) or limited labeled data (**one-shot**) is available for the target event during training.
- **Unsupervised domain adaptation**: Adapt models trained on source disaster events to unseen target events without any target labels, using UDA techniques under the **zero-shot** cross-event setting.
- **Semi-supervised learning**: Leverage a small number of labeled samples and a larger set of unlabeled samples from the target event to improve performance under the **one-shot** cross-event setting.
- **Unsupervised change detection**: Detect disaster-induced building changes without using any labels.
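The clone and environment-setup steps above can be sketched as follows. This is a minimal sketch, not the repo's documented commands: the environment name `bright`, the Python version, and the `requirements.txt` filename are assumptions.

```shell
# Step 1: clone the repository and enter the project directory
git clone https://github.com/ChenHongruixuan/BRIGHT.git
cd BRIGHT

# Step 2: create and activate a conda environment
# (the name "bright" and python=3.10 are illustrative choices)
conda create -n bright python=3.10 -y
conda activate bright

# install dependencies via pip
# (assumes a requirements.txt at the repository root)
pip install -r requirements.txt
```

For the exact environment specification and training commands, defer to the scripts and instructions shipped in the repository itself.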
This unsupervised change detection setup supports benchmarking of general-purpose change detection algorithms under realistic large-scale disaster scenarios.

- **Unsupervised multimodal image matching**: Evaluate the performance of matching algorithms in aligning **raw, large-scale** optical and SAR images based on **manual-control-point**-based registration accuracy. This setup focuses on realistic multimodal alignment in disaster-affected areas.
- **IEEE GRSS DFC 2025 Track II**: Track II of the IEEE GRSS DFC 2025 aims to develop robust and generalizable methods for assessing building damage using bi-temporal multimodal images of unseen disaster events.

## 🤔Common Issues

Based on peers' questions from the issue section, here is a quick list of solutions to some common issues.

| Issue | Solution |
| :---: | :---: |
| Complete data of DFC25 for research | The labels for the validation and test sets of DFC25 have been uploaded to Zenodo and HuggingFace. |
| Python package conflicts | The baseline code is not tied to specific package versions; participants do not need to match the versions we provide. |

## 🏢 Works Building on BRIGHT

We are delighted to see BRIGHT supporting various research directions. Below is a curated list of papers, benchmarks, and projects that build upon or integrate BRIGHT.
| Work | Category | Venue | Link | Key Contribution |
| :--- | :--- | :--- | :--- | :--- |
| CDML | Algorithm & Benchmark | IEEE TPAMI 2026 | Code | Proposed a first-order cross-domain meta-learning framework for few-shot remote sensing classification |
| SARCLIP | Algorithm & Benchmark | ISPRS J P&RS 2025 | Data & Code | Proposed a multimodal foundation model (SARCLIP) and a 400k dataset for SAR analysis |
| DisasterM3 | Benchmark | NeurIPS 2025 | Data & Code | Constructed DisasterM3, a multi-sensor vision-language dataset (123k pairs) for VLM-based disaster response |
| SARLANG-1M | Benchmark | IEEE TGRS 2026 | Data & Code | Constructed a large-scale SAR-text benchmark (1M+ pairs) for multimodal understanding |
| IM4CD | Algorithm | ISPRS J P&RS 2026 | - | Proposed an unsupervised framework that unifies multimodal change detection and image matching to… |
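To make the evaluation step in the getting-started section concrete, here is a minimal, self-contained sketch of per-class intersection-over-union (IoU), a metric commonly used for building-damage mapping. It is not the repository's actual evaluation code: the function name and the four-class label scheme below are illustrative assumptions.

```python
from collections import Counter

def per_class_iou(pred, ref, num_classes):
    """Per-class intersection-over-union for two flat label maps.

    `pred` and `ref` are equal-length sequences of integer class ids,
    e.g. flattened damage maps (the class scheme here is illustrative).
    """
    inter = Counter()            # pixels where pred == ref, per class
    pred_count = Counter(pred)   # predicted pixels per class
    ref_count = Counter(ref)     # reference pixels per class
    for p, r in zip(pred, ref):
        if p == r:
            inter[p] += 1
    ious = {}
    for c in range(num_classes):
        union = pred_count[c] + ref_count[c] - inter[c]
        ious[c] = inter[c] / union if union else float("nan")
    return ious

# Toy 6-pixel example with 4 hypothetical classes
# (0 = background, 1 = intact, 2 = damaged, 3 = destroyed).
pred = [0, 1, 1, 2, 3, 3]
ref  = [0, 1, 2, 2, 3, 0]
print(per_class_iou(pred, ref, 4))  # every class has IoU 0.5 here
```

In practice the same computation runs over full prediction rasters; a class absent from both maps yields NaN and is typically excluded when averaging into a mean IoU.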