# NWPU-Crowd Sample Code
This repo is the official implementation of the paper *NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting*. The code is developed based on the C^3 Framework.

Compared with the original C^3 Framework:

- new features of Python 3.x are utilized;
- the density map is generated online by a conv layer, saving disk I/O time;
- the visualization in TensorBoard is improved.

These features will be merged into the C^3 Framework as soon as possible.
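To illustrate what "generating the density map online" means, the sketch below builds a density map from point annotations by stamping a normalized Gaussian kernel at each head location, so the map integrates to the crowd count. This is a minimal NumPy sketch of the idea only; the repo does it with a conv layer on the GPU, and the kernel size and sigma here are illustrative, not the repo's values.

```python
import numpy as np

def gaussian_kernel(size=15, sigma=4.0):
    """Truncated 2D Gaussian, renormalized so it sums to exactly 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def density_map(points, h, w, kernel):
    """Stamp one kernel per annotated head; the map integrates to the count."""
    ks = kernel.shape[0]
    r = ks // 2
    # Pad so kernels near the border fit, then crop back to (h, w).
    d = np.zeros((h + ks, w + ks), dtype=np.float64)
    for (x, y) in points:  # (col, row) head coordinates
        d[y:y + ks, x:x + ks] += kernel
    return d[r:r + h, r:r + w]
```

Summing a density map built this way recovers the annotated count (up to kernel mass cropped at the image border), which is what the counting loss is regressed against.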
## Getting Started

### Preparation

#### Prerequisites

- Python 3.x
- PyTorch 1.x: http://pytorch.org
- Other libs in `requirements.txt`; run `pip install -r requirements.txt`.
#### Installation

- Clone this repo:

  ```shell
  git clone https://github.com/gjy3035/NWPU-Crowd-Sample-Code.git
  ```
#### Data Preparation

- Download the NWPU-Crowd dataset from OneDrive1, OneDrive2, or BaiduNetDisk.
- Unzip the `*.zip` files in turn and place `images_part*` into a single folder. The final folder tree is as follows:

  ```
  -- NWPU-Crowd
     |-- images
     |   |-- 0001.jpg
     |   |-- 0002.jpg
     |   |-- ...
     |   |-- 5109.jpg
     |-- jsons
     |   |-- 0001.json
     |   |-- 0002.json
     |   |-- ...
     |   |-- 3609.json
     |-- mats
     |   |-- 0001.mat
     |   |-- 0002.mat
     |   |-- ...
     |   |-- 3609.mat
     |-- train.txt
     |-- val.txt
     |-- test.txt
     |-- readme.md
  ```

- Run `./datasets/prepare_NWPU.m` using MATLAB.
- Modify `__C_NWPU.DATA_PATH` in `./datasets/setting/NWPU.py` to the path of your processed data.
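Before training, it can save a failed run to sanity-check that the processed folder matches the tree above. The helper below is a hypothetical convenience (not part of the repo) that verifies the expected directories and split files exist under the data root.

```python
import os

# Expected layout from the folder tree above.
EXPECTED_DIRS = ["images", "jsons", "mats"]
EXPECTED_FILES = ["train.txt", "val.txt", "test.txt"]

def check_nwpu_root(root):
    """Return a list of missing entries; an empty list means the tree looks right."""
    missing = [d for d in EXPECTED_DIRS
               if not os.path.isdir(os.path.join(root, d))]
    missing += [f for f in EXPECTED_FILES
                if not os.path.isfile(os.path.join(root, f))]
    return missing
```

Run it on the path you set in `__C_NWPU.DATA_PATH` and fix anything it reports before launching training.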
## Training

- Set the parameters in `config.py` and `./datasets/setting/NWPU.py` (to reproduce our results, we recommend using our parameters in `./saved_exp_para`).
- Run `python train.py`.
- Monitor training with `tensorboard --logdir=exp --port=6006`.
## Testing

We only provide an example of forwarding the model on the test set. You may need to modify it to test your own models.

- Modify the key parameters in `test.py`:
  - Line 32: `LOG_PARA`, the same as `__C_NWPU.LOG_PARA` in `./datasets/setting/NWPU.py`.
  - Line 34: `dataRoot`, the same as `__C_NWPU.DATA_PATH` in `./datasets/setting/NWPU.py`.
  - Line 36: `model_path`.
  - Line 48: GPU id and model name.
- Run `python test.py`.
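For context on why `LOG_PARA` must match between training and testing: in the C^3-style pipeline, ground-truth density maps are multiplied by this factor for numerical stability during training, so at test time the predicted count is the density sum divided by the same factor. The NumPy sketch below illustrates this relationship; the value `100` is illustrative, and you should use whatever `__C_NWPU.LOG_PARA` is set to in your setting file.

```python
import numpy as np

LOG_PARA = 100.0  # illustrative; must match __C_NWPU.LOG_PARA in your setting file

def scale_gt(density_map, log_para=LOG_PARA):
    """Scale the ground-truth density map before regressing it (stability trick)."""
    return density_map * log_para

def predicted_count(pred_map, log_para=LOG_PARA):
    """Recover the crowd count from a map predicted by a network trained on scaled GT."""
    return float(pred_map.sum()) / log_para
```

If `LOG_PARA` in `test.py` differs from the value used at training time, every predicted count is silently off by their ratio, which is why the README pins them to the same setting.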
## Pre-trained Models

We provide the pre-trained models at this link, which is a temporary OneDrive share point. We will provide a permanent website as soon as possible.
## Performance on the validation set

For an intuitive comparison, the visualization results of these methods are provided at this link. The overall results on the val set:

| Method | MAE | MSE | PSNR | SSIM |
|---|---|---|---|---|
| MCNN [1] | 218.53 | 700.61 | 28.558 | 0.875 |
| C3F-VGG [2] | 105.79 | 504.39 | 29.977 | 0.918 |
| CSRNet [3] | 104.89 | 433.48 | 29.901 | 0.883 |
| CANNet [4] | 93.58 | 489.90 | 30.428 | 0.870 |
| SCAR [5] | 81.57 | 397.92 | 30.356 | 0.920 |
| SFCN+ [6] | 90.65 | 487.17 | 30.518 | 0.933 |

For the leaderboard on the test set, please visit the Crowd benchmark.
## References

1. Single-Image Crowd Counting via Multi-Column Convolutional Neural Network, CVPR, 2016.
2. C^3 Framework: An Open-source PyTorch Code for Crowd Counting, arXiv, 2019.
3. CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes, CVPR, 2018.
4. Context-Aware Crowd Counting, CVPR, 2019.
5. SCAR: Spatial-/Channel-wise Attention Regression Networks for Crowd Counting, Neurocomputing, 2019.
6. Learning from Synthetic Data for Crowd Counting in the Wild, CVPR, 2019.
## Evaluation Scheme

The Python evaluation code of crowdbenchmark.com is provided in `./misc/evaluation_code.py`, which is similar to our validation code in `trainer.py`.
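For readers new to crowd-counting evaluation: MAE is the mean absolute error of the image-level counts, and MSE conventionally denotes the *root* mean squared error of the counts in this literature. The sketch below computes these two count metrics from predicted and ground-truth counts (the PSNR/SSIM columns in the table measure density-map quality and are computed separately).

```python
import numpy as np

def count_metrics(pred_counts, gt_counts):
    """MAE and MSE (root mean squared error, per crowd-counting convention)."""
    pred = np.asarray(pred_counts, dtype=np.float64)
    gt = np.asarray(gt_counts, dtype=np.float64)
    mae = np.abs(pred - gt).mean()
    mse = np.sqrt(((pred - gt) ** 2).mean())
    return mae, mse
```

Because MSE squares the errors before averaging, it penalizes large miscounts on dense scenes more heavily than MAE, which is why methods can rank differently on the two columns of the table above.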
## Citation

If you find this project useful for your research, please cite:

```
@article{gao2020nwpu,
  title={NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization},
  author={Wang, Qi and Gao, Junyu and Lin, Wei and Li, Xuelong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  doi={10.1109/TPAMI.2020.3013269},
  year={2020}
}
```

Our code borrows heavily from the C^3 Framework; you may also cite:

```
@article{gao2019c,
  title={C$^3$ Framework: An Open-source PyTorch Code for Crowd Counting},
  author={Gao, Junyu and Lin, Wei and Zhao, Bin and Wang, Dong and Gao, Chenyu and Wen, Jun},
  journal={arXiv preprint arXiv:1907.02724},
  year={2019}
}
```

If you use the crowd counting models in this repo (MCNN, C3F-VGG, CSRNet, CANNet, SCAR, and SFCN+), please cite the corresponding papers.