
liuzhuang13 / DenseNet

Densely Connected Convolutional Networks, In CVPR 2017 (Best Paper Award).

4,857 stars
1,069 forks
30 issues
Lua

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing liuzhuang13/DenseNet in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

To optimize performance, source files are only loaded when you start an analysis.
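The on-demand loading described above can be sketched in a few lines. This is purely an illustrative design sketch, not RepoMind's actual code: the class name, methods, and caching behavior are our assumptions about what "load full source files into context on demand" could look like.

```python
# Illustrative sketch only (assumed design, not RepoMind's implementation):
# an on-demand context loader keeps whole files out of memory until an
# analysis asks for them, instead of pre-chunking them as a RAG index would.

from pathlib import Path


class OnDemandRepoContext:
    """Lazily load full source files into an analysis context on request."""

    def __init__(self, repo_root):
        self.repo_root = Path(repo_root)
        self.loaded = {}  # relative path -> full file text, filled on demand

    def list_files(self, suffix=""):
        # Cheap metadata pass: the analysis sees file names, not contents.
        return sorted(
            str(p.relative_to(self.repo_root))
            for p in self.repo_root.rglob("*" + suffix)
            if p.is_file()
        )

    def load(self, rel_path):
        # Whole-file load on demand: no fragmentation into chunks.
        if rel_path not in self.loaded:
            self.loaded[rel_path] = (self.repo_root / rel_path).read_text()
        return self.loaded[rel_path]
```

The key contrast with chunk-based retrieval is that `load` returns a complete file, so cross-function context within a file is never split.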

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/liuzhuang13/DenseNet)

Repository Overview (README excerpt)


## Densely Connected Convolutional Networks (DenseNets)

This repository contains the code for DenseNet, introduced in the following paper:

Densely Connected Convolutional Networks (CVPR 2017, Best Paper Award)
Gao Huang\*, Zhuang Liu\*, Laurens van der Maaten and Kilian Weinberger (\* authors contributed equally)

**Now with a much more memory-efficient implementation!** Please check the technical report and code for more information. The code is built on fb.resnet.torch.

## Citation

If you find DenseNet useful in your research, please consider citing:

```
@inproceedings{DenseNet2017,
  title={Densely connected convolutional networks},
  author={Huang, Gao and Liu, Zhuang and van der Maaten, Laurens and Weinberger, Kilian Q.},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2017}
}
```

## Other Implementations

- Our [[Caffe]](https://github.com/liuzhuang13/DenseNetCaffe)
- Our memory-efficient [[Caffe]](https://github.com/Tongcheng/DN_CaffeScript)
- Our memory-efficient [[PyTorch]](https://github.com/gpleiss/efficient_densenet_pytorch)
- [[PyTorch]](https://github.com/andreasveit/densenet-pytorch) by Andreas Veit
- [[PyTorch]](https://github.com/bamos/densenet.pytorch) by Brandon Amos
- [[PyTorch]](https://github.com/baldassarreFe/pytorch-densenet-tiramisu) by Federico Baldassarre
- [[MXNet]](https://github.com/Nicatio/Densenet/tree/master/mxnet) by Nicatio
- [[MXNet]](https://github.com/bruinxiong/densenet.mxnet) by Xiong Lin
- [[MXNet]](https://github.com/miraclewkf/DenseNet) by miraclewkf
- [[Tensorflow]](https://github.com/YixuanLi/densenet-tensorflow) by Yixuan Li
- [[Tensorflow]](https://github.com/LaurentMazare/deep-models/tree/master/densenet) by Laurent Mazare
- [[Tensorflow]](https://github.com/ikhlestov/vision_networks) by Illarion Khlestov
- [[Lasagne]](https://github.com/Lasagne/Recipes/tree/master/papers/densenet) by Jan Schlüter
- [[Keras]](https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/DenseNet) by tdeboissiere
- [[Keras]](https://github.com/robertomest/convnet-study) by Roberto de Moura Estevão Filho
- [[Keras]](https://github.com/titu1994/DenseNet) by Somshubra Majumdar
- [[Chainer]](https://github.com/t-hanya/chainer-DenseNet) by Toshinori Hanya
- [[Chainer]](https://github.com/yasunorikudo/chainer-DenseNet) by Yasunori Kudo
- [[Torch 3D-DenseNet]](https://github.com/barrykui/3ddensenet.torch) by Barry Kui
- [[Keras]](https://github.com/cmasch/densenet) by Christopher Masch
- [[Tensorflow2]](https://github.com/okason97/DenseNet-Tensorflow2) by Gaston Rios and Ulises Jeremias Cornejo Fandos

Note that we only listed some early implementations here. If you would like to add yours, please submit a pull request.

## Some Follow-up Projects

- Multi-Scale Dense Convolutional Networks for Efficient Prediction
- DSOD: Learning Deeply Supervised Object Detectors from Scratch
- CondenseNet: An Efficient DenseNet using Learned Group Convolutions
- Fully Convolutional DenseNets for Semantic Segmentation
- Pelee: A Real-Time Object Detection System on Mobile Devices

## Contents

- Introduction
- Usage
- Results on CIFAR
- Results on ImageNet and Pretrained Models
- Updates

## Introduction

DenseNet is a network architecture in which each layer is directly connected to every other layer in a feed-forward fashion (within each *dense block*). For each layer, the feature maps of all preceding layers are treated as separate inputs, whereas its own feature maps are passed on as inputs to all subsequent layers. This connectivity pattern yields state-of-the-art accuracies on CIFAR-10/100 (with or without data augmentation) and SVHN. On the large-scale ILSVRC 2012 (ImageNet) dataset, DenseNet achieves accuracy similar to ResNet's while using less than half the parameters and roughly half the FLOPs.

Figure 1: A dense block with 5 layers and growth rate 4.

Figure 2: A deep DenseNet with three dense blocks.

## Usage

- Install Torch and required dependencies such as cuDNN.
  See the instructions here for a step-by-step guide.
- Clone this repo:

As an example, the following command trains a DenseNet-BC with depth L=100 and growth rate k=12 on CIFAR-10:

As another example, the following command trains a DenseNet-BC with depth L=121 and growth rate k=32 on ImageNet:

Please refer to fb.resnet.torch for data preparation.

## DenseNet and DenseNet-BC

By default, the code runs with the DenseNet-BC architecture, which has 1x1 convolutional *bottleneck* layers and *compresses* the number of channels at each transition layer by 0.5. To run the original DenseNet, simply use the options *-bottleneck false* and *-reduction 1*.

## Memory-efficient implementation (newly added feature on June 6, 2017)

There is an option *-optMemory*, which is very useful for reducing the GPU memory footprint when training a DenseNet. By default, the value is set to 2, which activates the *shareGradInput* function (with small modifications from here). There are two extremely memory-efficient modes (*-optMemory 3* or *-optMemory 4*) which use a customized densely connected layer. With *-optMemory 4*, the largest 190-layer DenseNet-BC on CIFAR can be trained on a single NVIDIA TitanX GPU (using 8.3G of its 12G memory), instead of fully occupying four GPUs with the standard (recursive concatenation) implementation. More details about the memory-efficient implementation are discussed here.

## Results on CIFAR

The table below shows the results of DenseNets on the CIFAR datasets. A "+" mark at the end denotes standard data augmentation (random crop after zero-padding, and horizontal flip). For a DenseNet model, L denotes its depth and k denotes its growth rate. On CIFAR-10 and CIFAR-100 without data augmentation, a Dropout layer with drop rate 0.2 is introduced after each convolutional layer except the very first one.
| Model | Parameters | CIFAR-10 | CIFAR-10+ | CIFAR-100 | CIFAR-100+ |
|-------|:----------:|:--------:|:---------:|:---------:|:----------:|
| DenseNet (L=40, k=12) | 1.0M | 7.00 | 5.24 | 27.55 | 24.42 |
| DenseNet (…
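As a rough aid to reading the L and k settings in the table and the usage examples, here is a small back-of-the-envelope sketch in Python (ours, not part of the repo's Lua/Torch code). The 3-dense-block CIFAR layout with 4 non-block layers, and the 2k-channel initial convolution for DenseNet-BC, are our assumptions based on the paper.

```python
# Back-of-the-envelope helpers (ours, not from the Lua code) for the
# CIFAR DenseNets above: 3 dense blocks, plus an initial conv, two
# transition layers, and a final classifier (4 non-block layers in total).

def layers_per_block(depth, bottleneck):
    # Each bottleneck layer counts as 2 convolutions (1x1 then 3x3).
    convs = 2 if bottleneck else 1
    n, rem = divmod(depth - 4, 3 * convs)
    assert rem == 0, "depth does not fit the 3-block CIFAR layout"
    return n

def dense_block_channels(k0, k, n):
    # Layer i of a block sees the block input (k0 channels) concatenated
    # with the k-channel outputs of all i preceding layers.
    return [k0 + i * k for i in range(n)]

# DenseNet (L=40, k=12), the first table row: 12 layers per dense block.
print(layers_per_block(40, bottleneck=False))  # -> 12

# DenseNet-BC (L=100, k=12), the CIFAR usage example: 16 layers per block.
# Assuming a 2k = 24-channel initial conv, the last layer of the first
# block already sees 24 + 15 * 12 = 204 input channels -- which is why
# naive recursive concatenation is memory-hungry and -optMemory helps.
n = layers_per_block(100, bottleneck=True)
print(dense_block_channels(24, 12, n)[-1])  # -> 204
```

The linear growth of input channels with layer index is the arithmetic behind both the parameter efficiency claims and the memory-footprint discussion above.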