
microsoft / Bringing-Old-Photos-Back-to-Life

Bringing Old Photos Back to Life (CVPR 2020 Oral)

15,705 stars
2,092 forks
108 issues
Python · Dockerfile · Shell

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing microsoft/Bringing-Old-Photos-Back-to-Life in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/microsoft/Bringing-Old-Photos-Back-to-Life)

Repository Overview (README excerpt)


# Old Photo Restoration (Official PyTorch Implementation)

Project Page | Paper (CVPR version) | Paper (Journal version) | Pretrained Model | Colab Demo | Replicate Demo & Docker Image

:fire: **Bringing Old Photos Back to Life, CVPR 2020 (Oral)**
**Old Photo Restoration via Deep Latent Space Translation, TPAMI 2022**

Ziyu Wan¹, Bo Zhang², Dongdong Chen³, Pan Zhang⁴, Dong Chen², Jing Liao¹, Fang Wen²
¹City University of Hong Kong, ²Microsoft Research Asia, ³Microsoft Cloud AI, ⁴USTC

## :sparkles: News

**2022.3.31**: Our new work on old film restoration will be published at CVPR 2022. For more details, please refer to the project website and GitHub repo.

- The framework now supports restoration of high-resolution input.
- Training code is available; you are welcome to try it and study the training details.
- You can now play with our Colab and try it on your own photos.

## Requirements

The code is tested on Ubuntu with NVIDIA GPUs and CUDA installed. Python >= 3.6 is required to run the code.

## Installation

1. Clone the Synchronized-BatchNorm-PyTorch repository.
2. Download the landmark detection pretrained model.
3. Download the pretrained model, put the file under , and put the file under . Then unzip them respectively.
4. Install dependencies.

## :rocket: How to use?

**Note**: GPU can be set to `0`, `0,1,2`, or `0,2`; use `-1` for CPU.

### 1) Full Pipeline

After installation and downloading the pretrained models, you can restore old photos with one simple command.

- For images without scratches:
- For scratched images:
- **For high-resolution images with scratches**:

Note: Please use absolute paths. The final results will be saved in . You can also check the intermediate results of each step in .

### 2) Scratch Detection

We currently do not plan to release the labeled scratched-old-photos dataset directly. If you want paired data, you can run our pretrained model on your collected images to obtain the labels.
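The full-pipeline invocations elided above follow a common pattern. Below is a hedged sketch of how the command line can be assembled: the flag names (`--input_folder`, `--output_folder`, `--GPU`, `--with_scratch`, `--HR`) are assumed from the project's documented usage and should be verified against `run.py` in your checkout; `build_restore_cmd` is a hypothetical helper, not part of the repo.

```python
import sys
from pathlib import Path

def build_restore_cmd(input_dir, output_dir, gpu="0",
                      with_scratch=False, high_resolution=False):
    """Assemble a run.py invocation (flag names assumed; check run.py --help)."""
    cmd = [
        sys.executable, "run.py",
        "--input_folder", str(Path(input_dir).resolve()),   # README advises absolute paths
        "--output_folder", str(Path(output_dir).resolve()),
        "--GPU", gpu,                                       # "0", "0,1,2", "0,2", or "-1" for CPU
    ]
    if with_scratch:
        cmd.append("--with_scratch")   # scratched images
    if high_resolution:
        cmd.append("--HR")             # high-resolution scratched input
    return cmd

# Example: scratched, high-resolution photos on GPU 0
print(" ".join(build_restore_cmd("test_images/old_w_scratch", "output",
                                 gpu="0", with_scratch=True, high_resolution=True)))
```

Passing the list to `subprocess.run(cmd)` from the repo root would then launch the pipeline.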
### 3) Global Restoration

A triplet domain translation network is proposed to solve both the structured and the unstructured degradation of old photos.

### 4) Face Enhancement

We use a progressive generator to refine the face regions of old photos. More details can be found in our journal submission and folder.

> **NOTE**: This repo is mainly for research purposes and we have not yet optimized its running performance.
>
> Since the model is pretrained on 256×256 images, it may not work ideally at arbitrary resolutions.

### 5) GUI

A user-friendly GUI that takes an image as input and shows the result in its window.

How it works:

- Run the GUI.py file.
- Click Browse and select your image from the test_images/old_w_scratch folder to remove scratches.
- Click the Modify Photo button.
- Wait a moment and see the result in the GUI window.
- Click Exit Window to close; your result image is in the output folder.

## How to train?

### 1) Create the training file

Put the VOC dataset folders and the collected old photos (e.g., Real_L_old and Real_RGB_old) into one shared folder. Then

Note: Remember to modify the code based on your own environment.

### 2) Train the VAEs of domain A and domain B respectively

Note: For the --name option, please ensure your experiment name contains "domainA" or "domainB", which is used to select the corresponding dataset.

### 3) Train the mapping network between domains

- Train the mapping without scratches:
- Train the mapping with scratches:
- Train the mapping with scratches (Multi-Scale Patch Attention for HR input):

## Citation

If you find our work useful for your research, please consider citing the following papers :)

If you are also interested in legacy photo/video colorization, please refer to this work.

## Maintenance

This project is currently maintained by Ziyu Wan and is for academic research use only. If you have any questions, feel free to contact raywzy@gmail.com.
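The --name convention from the VAE training step can be illustrated with a small sketch. `select_domain` is a hypothetical helper written only to show the rule stated in the README (the substring "domainA" or "domainB" in the experiment name picks the dataset); it is not part of the repo.

```python
def select_domain(experiment_name: str) -> str:
    """Hypothetical illustration: the training code selects the dataset
    based on whether the --name value contains 'domainA' or 'domainB'."""
    if "domainA" in experiment_name:
        return "domainA"   # e.g. the old-photo domain
    if "domainB" in experiment_name:
        return "domainB"   # e.g. the clean-image domain
    raise ValueError("experiment name must contain 'domainA' or 'domainB'")

print(select_domain("mapping_quality_domainA"))  # → domainA
```

A name such as `--name old_photos_domainB` would therefore route training to domain B's dataset.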
## License

The code and the pretrained models in this repository are under the MIT license, as specified in the LICENSE file. We use our own labeled dataset to train the scratch detection model.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.