senguptaumd / Background-Matting
Background Matting: The World is Your Green Screen
Repository Overview (README excerpt)
By Soumyadip Sengupta, Vivek Jayaram, Brian Curless, Steve Seitz, and Ira Kemelmacher-Shlizerman

This paper will be presented at IEEE CVPR 2020.

**Project Page**

Go to the project page for additional details and results.

Paper (Arxiv) | Blog Post

**Background Matting v2.0**

We recently released a brand-new background matting project: much better quality and REAL-TIME performance (30fps at 4K and 60fps at FHD)! You can now use this with Zoom! We tried this on a Linux machine with a GPU. Check out the code!

## Project members

• Soumyadip Sengupta, University of Washington
• Vivek Jayaram, University of Washington
• Brian Curless, University of Washington
• Steve Seitz, University of Washington
• Ira Kemelmacher-Shlizerman, University of Washington

Acknowledgement: Andrey Ryabtsev, University of Washington

### License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.

## Summary

• Updates
• Getting Started
• Inference Code on images
• Inference Code on videos
• Notes on capturing images
• Training code on synthetic-composite Adobe dataset
• Training code on unlabeled real videos
• Captured Data
• Inference in Google Colab
• Citations
• Related Implementations

## Updates

**April 21, 2020:**
• New features:
  • Training code for supervised training on the synthetic-composite Adobe dataset and for self-supervised learning on unlabeled real videos.

**April 20, 2020:**
• New features:
  • Google Colab for inference, thanks to Andrey Ryabtsev, University of Washington.
  • Captured data released for research purposes.

**April 9, 2020:**
• Issues:
  • Updated the alignment function in the pre-processing code. The Python version uses AKAZE features (SIFT and SURF are not available with opencv3); a MATLAB version using SURF features is also provided.
• New features:
  • Testing code to replace the background in videos.

**April 8, 2020:**
• Issues:
  • Turned off adjustExposure() for bias-gain correction in test_pre_processing.py. (Bug found; needs to be fixed.)
  • Incorporated an 'uncropping' operation in test_background-matting_image.py. (The output will have the same resolution and aspect ratio as the input.)

## Getting Started

Clone the repository:

Please use Python 3. Create an Anaconda environment and install the dependencies. Our code is tested with PyTorch 1.1.0 and TensorFlow 1.14 with CUDA 10.0.

Make sure CUDA 10.0 is your default CUDA. If your CUDA 10.0 is installed in , apply

Install PyTorch, TensorFlow (needed for segmentation) and the other dependencies.

Note: the code is likely to work with other PyTorch and TensorFlow versions compatible with your system CUDA. If you already have a working environment with PyTorch and TensorFlow, install only the dependencies with . If our code fails due to version differences, you will need to install the specific CUDA, PyTorch and TensorFlow versions.

## Run the inference code on sample images

### Data

To perform Background Matting based green-screening, you need to capture:
• (a) an image with the subject (use the extension)
• (b) an image of the background without the subject (use the extension)
• (c) the target background to insert the subject into (place in )

Use the folder for testing and prepare your own data based on it. This data was collected with a hand-held camera.

### Pre-trained model

Please download the pre-trained models from Google Drive and place the folder inside . Note: the model was trained on the training set of the Adobe dataset; this was the model used for numerical evaluation on the Adobe dataset.

### Pre-processing

• Segmentation

Background Matting needs a segmentation mask for the subject. We use the TensorFlow version of Deeplabv3+. You can replace Deeplabv3+ with any segmentation network of your choice. Save the segmentation results with the extension .

• Alignment

Skip this step if your data was captured with a fixed camera.
• For a hand-held camera, we need to align the background with the input image as part of pre-processing. We apply simple homography-based alignment.
• We ask users to **disable the auto-focus and auto-exposure** of the camera while capturing the pair of images. This can easily be done on iPhone cameras (tap and hold for a while).

Run for pre-processing. It aligns the background image and changes its bias-gain to match the input image.

We used AKAZE features in the Python code (since SURF and SIFT are unavailable in opencv3) for alignment. We also provide alternate MATLAB code ( ), which uses SURF features. The MATLAB code also provides a way to visualize feature matching and alignment. Bad alignment will produce bad matting output.

Bias-gain adjustment is turned off in the Python code due to a bug, but it is present in the MATLAB code. If there are significant exposure changes between the captured image and the captured background, use bias-gain adjustment to account for them.

Feel free to write your own alignment code: choose your favorite feature detector, feature matcher and alignment method.

### Background Matting

For images taken with a fixed camera (on a tripod), choose for best results. lets you use the model trained on the synthetic-composite Adobe dataset, without real data (worse performance).

## Run the inference code on sample videos

This is almost exactly the same as for images, with a few small changes.

### Data

To perform Background Matting based green-screening, you need to capture:
• (a) a video with the subject ( )
• (b) an image of the background without the subject (use the extension)
• (c) the target background to insert the subject into (place in )

We provide captured with a hand-held camera and captured with a fixed camera for testing. Please download the data and place both folders under . Prepare your own data based on them.

### Pre-processing

• Frame extraction: Repeat the same for
• Segmentation: Repeat the same for
• Alignment: No need to run alignment for or for videos captured with a fixed camera.
Run for pre-processing. Alternatively, you can use in MATLAB.

### Background Matting

For hand-held videos, like :
For fixed-camera videos, like :
To obtai…
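Once the matte and foreground are predicted, green-screening reduces to the standard compositing equation I' = αF + (1 − α)B', where α is the predicted matte, F the predicted foreground, and B' the target background. A minimal NumPy sketch of that final step (our own illustration; the repository's inference scripts handle this internally):

```python
import numpy as np

def composite(alpha, foreground, target_background):
    """Composite with the matting equation I = alpha * F + (1 - alpha) * B."""
    alpha = alpha.astype(np.float32)
    if alpha.ndim == 2:              # broadcast a single-channel matte over RGB
        alpha = alpha[..., None]
    out = (alpha * foreground.astype(np.float32)
           + (1.0 - alpha) * target_background.astype(np.float32))
    return np.clip(out, 0, 255).astype(np.uint8)
```

For video, the same equation is applied per frame with that frame's predicted matte and foreground.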