LLVIP: A Visible-infrared Paired Dataset for Low-light Vision
News
- (2023-2-21): The annotations of a small part of the images have been updated, including annotations for some missing pedestrians and refinements of some imprecise annotations. The updated dataset is now available from the homepage. If you need the previous version of the annotations, please refer to here.
- (2022-5-24): We provide a toolbox for various format conversions (xml to yolov5, xml to yolov3, xml to coco).
- (2022-3-27): We released some raw data (unregistered image pairs and videos) for further research, including image registration. Please visit the homepage to get the update. (2022-3-28: We have updated the Baidu Yun link for the LLVIP raw data; the data downloaded from the new link supports decompression under Windows and macOS. The original link only supported Windows.)
- (2021-12-25): We released a Kaggle Community Competition "Find Person in the Dark!" based on part of the LLVIP dataset. Welcome to play and have fun! Attention: only the visible-image data we uploaded to the Kaggle platform may be used (the infrared images in LLVIP and other external data are forbidden).
- (2021-11-24): Pedestrian detection models were released.
- (2021-09-01): We have released the dataset; please visit the homepage to get it. (Note that we removed some low-quality images from the original dataset, so this version contains 30976 images.)
Citation
If you use this data for your research, please cite our paper LLVIP: A Visible-infrared Paired Dataset for Low-light Vision:
@inproceedings{jia2021llvip,
title={LLVIP: A visible-infrared paired dataset for low-light vision},
author={Jia, Xinyu and Zhu, Chuang and Li, Minzhen and Tang, Wenqi and Zhou, Wenli},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={3496--3504},
year={2021}
}
or
@misc{https://doi.org/10.48550/arxiv.2108.10831,
doi = {10.48550/ARXIV.2108.10831},
url = {https://arxiv.org/abs/2108.10831},
author = {Jia, Xinyu and Zhu, Chuang and Li, Minzhen and Tang, Wenqi and Liu, Shengjie and Zhou, Wenli},
keywords = {Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
title = {LLVIP: A Visible-infrared Paired Dataset for Low-light Vision},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
Image Fusion
Baselines
FusionGAN
Preparation
- Install requirements
```shell
git clone https://github.com/bupt-ai-cz/LLVIP.git
cd LLVIP/FusionGAN
# Create your virtual environment using anaconda
conda create -n FusionGAN python=3.7
conda activate FusionGAN
conda install matplotlib scipy==1.2.1 tensorflow-gpu==1.14.0
pip install opencv-python
sudo apt install libgl1-mesa-glx
```
- File structure
```
FusionGAN
├── ...
├── Test_LLVIP_ir
│   ├── 190001.jpg
│   ├── 190002.jpg
│   └── ...
├── Test_LLVIP_vi
│   ├── 190001.jpg
│   ├── 190002.jpg
│   └── ...
├── Train_LLVIP_ir
│   ├── 010001.jpg
│   ├── 010002.jpg
│   └── ...
└── Train_LLVIP_vi
    ├── 010001.jpg
    ├── 010002.jpg
    └── ...
```
Train
python main.py --epoch 10 --batch_size 32
See more training options in main.py.
Test
python test_one_image.py
Remember to put the pretrained model in your checkpoint folder and change the corresponding model name in test_one_image.py.
To acquire the complete LLVIP dataset, please visit https://bupt-ai-cz.github.io/LLVIP/.
Densefuse
Preparation
- Install requirements
```shell
git clone https://github.com/bupt-ai-cz/LLVIP
cd LLVIP/imagefusion_densefuse
# Create your virtual environment using anaconda
conda create -n Densefuse python=3.7
conda activate Densefuse
conda install scikit-image scipy==1.2.1 tensorflow-gpu==1.14.0
```
- File structure
```
imagefusion_densefuse
├── ...
├── datasets
│   ├── 010001_ir.jpg
│   ├── 010001_vi.jpg
│   └── ...
├── test
│   ├── 190001_ir.jpg
│   ├── 190001_vi.jpg
│   └── ...
└── LLVIP
    ├── infrared
    │   ├── train
    │   │   ├── 010001.jpg
    │   │   ├── 010002.jpg
    │   │   └── ...
    │   └── test
    │       ├── 190001.jpg
    │       ├── 190002.jpg
    │       └── ...
    └── visible
        ├── train
        │   ├── 010001.jpg
        │   ├── 010002.jpg
        │   └── ...
        └── test
            ├── 190001.jpg
            ├── 190002.jpg
            └── ...
```
Train & Test
python main.py
Check and modify the training/testing options in main.py. Before training or testing, you need to rename the images in the LLVIP dataset and put them in the designated folders. We have provided a script named rename.py to rename the images and save them in the datasets or test folder. Checkpoints are saved in ./models/densefuse_gray/. To acquire the complete LLVIP dataset, please visit https://bupt-ai-cz.github.io/LLVIP/.
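The provided rename.py is authoritative; purely as an illustration of the naming scheme it implements (e.g. LLVIP/infrared/train/010001.jpg becoming datasets/010001_ir.jpg), a minimal sketch could look like this. The function name and default folder arguments here are assumptions, not the script's actual interface.

```python
import os
import shutil

def rename_llvip(llvip_root, out_train="datasets", out_test="test"):
    """Copy LLVIP images into the flat Densefuse layout, appending
    an _ir / _vi suffix to each filename. Folder names follow the
    file structure shown above."""
    mapping = {"infrared": "ir", "visible": "vi"}
    for modality, suffix in mapping.items():
        for split, out_dir in (("train", out_train), ("test", out_test)):
            src_dir = os.path.join(llvip_root, modality, split)
            os.makedirs(out_dir, exist_ok=True)
            for name in sorted(os.listdir(src_dir)):
                stem, ext = os.path.splitext(name)
                shutil.copy(os.path.join(src_dir, name),
                            os.path.join(out_dir, f"{stem}_{suffix}{ext}"))
```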
IFCNN
Please visit https://github.com/uzeful/IFCNN.
Pedestrian Detection
Baselines
Yolov5
Preparation
Linux and Python>=3.6.0
- Install requirements
```shell
git clone https://github.com/bupt-ai-cz/LLVIP.git
cd LLVIP/yolov5
pip install -r requirements.txt
```
- File structure
The training set of LLVIP is used to train the yolov5 model, and the testing set of LLVIP is used for validation.
```
yolov5
├── ...
└── LLVIP
    ├── labels
    │   ├── train
    │   │   ├── 010001.txt
    │   │   ├── 010002.txt
    │   │   └── ...
    │   └── val
    │       ├── 190001.txt
    │       ├── 190002.txt
    │       └── ...
    └── images
        ├── train
        │   ├── 010001.jpg
        │   ├── 010002.jpg
        │   └── ...
        └── val
            ├── 190001.jpg
            ├── 190002.jpg
            └── ...
```
We provide a toolbox for converting annotation files to txt files in yolov5 format.
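The conversion the toolbox performs can be sketched as follows: each Pascal VOC-style xml annotation becomes one txt line per object in yolov5 format (class index, then normalized center x, center y, width, height). This is a hypothetical illustration, not the toolbox's actual code; function names are invented, and class index 0 assumes LLVIP's single "person" class.

```python
import xml.etree.ElementTree as ET

def voc_box_to_yolo(size, box):
    """Convert (xmin, ymin, xmax, ymax) pixel coordinates to
    normalized (x_center, y_center, width, height)."""
    img_w, img_h = size
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2.0 / img_w,
            (ymin + ymax) / 2.0 / img_h,
            (xmax - xmin) / img_w,
            (ymax - ymin) / img_h)

def convert_xml_to_yolo(xml_string, class_id=0):
    """Parse one VOC-style xml annotation and return yolov5 txt lines.
    LLVIP annotates a single class, so class_id defaults to 0."""
    root = ET.fromstring(xml_string)
    size = root.find("size")
    img_w = int(size.find("width").text)
    img_h = int(size.find("height").text)
    lines = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        box = (float(bb.find("xmin").text), float(bb.find("ymin").text),
               float(bb.find("xmax").text), float(bb.find("ymax").text))
        x_c, y_c, w, h = voc_box_to_yolo((img_w, img_h), box)
        lines.append(f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")
    return lines
```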
Train
python train.py --img 1280 --batch 8 --epochs 200 --data LLVIP.yaml --weights yolov5l.pt --name LLVIP_export
See more training options in train.py. The pretrained model yolov5l.pt can be downloaded from here. The trained model will be saved in the ./runs/train/LLVIP_export/weights folder.
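The `--data LLVIP.yaml` argument tells yolov5 where the images and labels live. The repository ships its own yaml file; purely as a hedged illustration, a minimal file matching the layout shown in the file structure above could look like:

```yaml
# Hypothetical LLVIP.yaml -- paths assume the file structure above,
# not the repository's actual file
path: LLVIP            # dataset root, relative to the yolov5 directory
train: images/train    # training images (labels located by yolov5 convention)
val: images/val        # validation images
nc: 1                  # LLVIP annotates a single class
names: ["person"]
```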
Test
python val.py --img 1280 --weights last.pt --data LLVIP.yaml
Remember to put the trained model in the same folder as val.py.
Our trained model can be downloaded from here: Google-Drive-Yolov5-model or BaiduYun-Yolov5-model (code: qepr)
- Click here for the tutorial of Yolov3 (our trained Yolov3 model can be downloaded from here: Google-Drive-Yolov3-model or BaiduYun-Yolov3-model (code: ine5)).
Results
We retrained and tested Yolov5l and Yolov3 on the updated dataset (30976 images).
Here AP means the average of the AP values at IoU thresholds from 0.5 to 0.95, with an interval of 0.05.
The figure above shows how AP changes under different IoU thresholds. When the IoU threshold is higher than 0.7, the AP value drops rapidly. Besides, the infrared image highlights pedestrians and achieves a better result than the visible image in the detection task, which not only proves the necessity of infrared images but also indicates that visible-image pedestrian detection does not perform well enough under low-light conditions. We also calculated the log-average miss rate based on the test results and drew the miss rate vs. FPPI curve.
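The two metrics above can be made concrete with a small sketch: the averaged AP is the plain mean of the per-threshold AP values, and the log-average miss rate (as defined in the Caltech pedestrian benchmark) averages the miss rate in log space at nine FPPI reference points evenly spaced between 10^-2 and 10^0. This is an illustrative implementation, not the evaluation code we used.

```python
import math

def coco_ap(ap_per_iou):
    """Mean AP over the 10 IoU thresholds 0.50, 0.55, ..., 0.95."""
    assert len(ap_per_iou) == 10
    return sum(ap_per_iou) / len(ap_per_iou)

def log_average_miss_rate(fppi, miss_rate):
    """Average miss rate at 9 FPPI reference points evenly spaced in
    log space over [1e-2, 1e0]. `fppi` must be sorted ascending; the
    miss rate at each reference point is taken from the largest
    measured fppi not exceeding it."""
    refs = [10 ** (-2.0 + 0.25 * i) for i in range(9)]
    logs = []
    for r in refs:
        mr = miss_rate[0]  # fallback when every fppi exceeds r
        for f, m in zip(fppi, miss_rate):
            if f <= r:
                mr = m
        logs.append(math.log(max(mr, 1e-10)))
    return math.exp(sum(logs) / len(logs))
```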
Image-to-Image Translation
Baseline
pix2pixGAN
Preparation
- Install requirements
```shell
cd pix2pixGAN
pip install -r requirements.txt
```
- Prepare dataset
- File structure
```
pix2pixGAN
├── ...
└── datasets
    ├── ...
    └── LLVIP
        ├── train
        │   ├── 010001.jpg
        │   ├── 010002.jpg
        │   ├── 010003.jpg
        │   └── ...
        └── test
            ├── 190001.jpg
            ├── 190002.jpg
            ├── 190003.jpg
            └── ...
```
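pix2pix in AtoB mode reads one image per sample, with the source (visible, domain A) and target (infrared, domain B) halves concatenated side by side. A minimal sketch of producing such a paired image, assuming registered same-size visible/infrared pairs as in LLVIP (the function name is illustrative, not part of the pix2pix code):

```python
from PIL import Image

def make_pair(vis_path, ir_path, out_path):
    """Concatenate a visible image (left, domain A) and its infrared
    counterpart (right, domain B) into one side-by-side image in the
    format pix2pix expects for --direction AtoB."""
    vis = Image.open(vis_path).convert("RGB")
    ir = Image.open(ir_path).convert("RGB")
    assert vis.size == ir.size, "LLVIP pairs are registered and same-sized"
    w, h = vis.size
    pair = Image.new("RGB", (2 * w, h))
    pair.paste(vis, (0, 0))
    pair.paste(ir, (w, 0))
    pair.save(out_path)
```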
Train
python train.py --dataroot ./datasets/LLVIP --name LLVIP --model pix2pix --direction AtoB --batch_size 8 --preprocess scale_width_and_crop --load_size 320 --crop_size 256 --gpu_ids 0 --n_epochs 100 --n_epochs_decay 100
Test
python test.py --dataroot ./datasets/LLVIP --name LLVIP --model pix2pix --direction AtoB --gpu_ids 0 --preprocess scale_width_and_crop --load_size 320 --crop_size 256
See ./pix2pixGAN/options for more train and test options.
Results
We retrained and tested pix2pixGAN on the updated dataset (30976 images). The generator is a unet256, and the discriminator is the default basic PatchGAN.
License
This LLVIP Dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree to our license terms.
Call For Contributions
You are welcome to point out errors in the data annotation. If you want to modify a label, please refer to the annotation tutorial and email us the corrected label file.
More annotation forms (such as segmentation) are also welcome; please contact us.
Acknowledgments
Thanks to XueZ-phd for his contribution to the LLVIP dataset: he corrected imperfect annotations in the dataset.
Contact
email: [email protected], [email protected], [email protected], [email protected]