
LLVIP: A Visible-infrared Paired Dataset for Low-light Vision


Project | Arxiv | Kaggle | PWC | Tweet

News

  • ⚡(2023-2-21): The annotations of a small part of the images have been updated, including annotations for some previously missed pedestrians and refinements of some imprecise boxes. The updated dataset is now available from the homepage. If you need the previous version of the annotations, please refer to here.
  • ⚡(2022-5-24): We provide a toolbox for various annotation format conversions (xml to yolov5, xml to yolov3, xml to coco).
  • ⚡(2022-3-27): We released some raw data (unregistered image pairs and videos) for further research, including image registration. Please visit the homepage to get the update. (2022-3-28: We have updated the Baidu Yun link for the LLVIP raw data; the data downloaded from the new link can be decompressed under both Windows and macOS, whereas the original link only supported Windows.)
  • ⚡(2021-12-25): We released a Kaggle Community Competition, "Find Person in the Dark!", based on part of the LLVIP dataset. Welcome to play and have fun! Note that only the visible-image data we uploaded to the Kaggle platform may be used (the infrared images in LLVIP and other external data are forbidden).
  • ⚡(2021-11-24): Pedestrian detection models were released.
  • ⚡(2021-09-01): We have released the dataset; please visit the homepage to get it. (Note that we removed some low-quality images from the original dataset, so this version contains 30976 images.)

[figure: figure1-LR]


Citation

If you use this data for your research, please cite our paper LLVIP: A Visible-infrared Paired Dataset for Low-light Vision:

@inproceedings{jia2021llvip,
  title={LLVIP: A visible-infrared paired dataset for low-light vision},
  author={Jia, Xinyu and Zhu, Chuang and Li, Minzhen and Tang, Wenqi and Zhou, Wenli},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={3496--3504},
  year={2021}
}

or

@misc{https://doi.org/10.48550/arxiv.2108.10831,
  doi = {10.48550/ARXIV.2108.10831}, 
  url = {https://arxiv.org/abs/2108.10831},
  author = {Jia, Xinyu and Zhu, Chuang and Li, Minzhen and Tang, Wenqi and Liu, Shengjie and Zhou, Wenli}, 
  keywords = {Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {LLVIP: A Visible-infrared Paired Dataset for Low-light Vision},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

Image Fusion

Baselines

FusionGAN

Preparation

  • Install requirements
    git clone https://github.com/bupt-ai-cz/LLVIP.git
    cd LLVIP/FusionGAN
    # Create your virtual environment using anaconda
    conda create -n FusionGAN python=3.7
    conda activate FusionGAN
    
    conda install matplotlib scipy==1.2.1 tensorflow-gpu==1.14.0 
    pip install opencv-python
    sudo apt install libgl1-mesa-glx
  • File structure
    FusionGAN
    ├── ...
    ├── Test_LLVIP_ir
    │   ├── 190001.jpg
    │   ├── 190002.jpg
    │   └── ...
    ├── Test_LLVIP_vi
    │   ├── 190001.jpg
    │   ├── 190002.jpg
    │   └── ...
    ├── Train_LLVIP_ir
    │   ├── 010001.jpg
    │   ├── 010002.jpg
    │   └── ...
    └── Train_LLVIP_vi
        ├── 010001.jpg
        ├── 010002.jpg
        └── ...

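The official LLVIP release ships images as infrared/ and visible/ folders, each split into train/ and test/ (see the Densefuse and yolov5 sections below). The following is a minimal sketch, not part of this repo, of how those images could be copied into the Train_/Test_ folders above; the LLVIP path is an assumption you should adapt.

    # Hypothetical helper: arrange the LLVIP release into the FusionGAN layout.
    import shutil
    from pathlib import Path

    llvip = Path("path/to/LLVIP")   # assumed location of the downloaded dataset
    mapping = {
        ("infrared", "train"): "Train_LLVIP_ir",
        ("visible",  "train"): "Train_LLVIP_vi",
        ("infrared", "test"):  "Test_LLVIP_ir",
        ("visible",  "test"):  "Test_LLVIP_vi",
    }
    for (modality, split), target in mapping.items():
        dst = Path(target)
        dst.mkdir(exist_ok=True)
        for img in sorted((llvip / modality / split).glob("*.jpg")):
            shutil.copy(img, dst / img.name)   # file names are kept unchanged
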
Train

python main.py --epoch 10 --batch_size 32

See more training options in main.py.

Test

python test_one_image.py

Remember to put the pretrained model in your checkpoint folder and change the corresponding model name in test_one_image.py. To acquire the complete LLVIP dataset, please visit https://bupt-ai-cz.github.io/LLVIP/.

Densefuse

Preparation

  • Install requirements
    git clone https://github.com/bupt-ai-cz/LLVIP
    cd LLVIP/imagefusion_densefuse
    
    # Create your virtual environment using anaconda
    conda create -n Densefuse python=3.7
    conda activate Densefuse
    
    conda install scikit-image scipy==1.2.1 tensorflow-gpu==1.14.0
  • File structure
    imagefusion_densefuse
    ├── ...
    ├── datasets
    │   ├── 010001_ir.jpg
    │   ├── 010001_vi.jpg
    │   └── ...
    ├── test
    │   ├── 190001_ir.jpg
    │   ├── 190001_vi.jpg
    │   └── ...
    └── LLVIP
        ├── infrared
        │   ├── train
        │   │   ├── 010001.jpg
        │   │   ├── 010002.jpg
        │   │   └── ...
        │   └── test
        │       ├── 190001.jpg
        │       ├── 190002.jpg
        │       └── ...
        └── visible
            ├── train
            │   ├── 010001.jpg
            │   ├── 010002.jpg
            │   └── ...
            └── test
                ├── 190001.jpg
                ├── 190002.jpg
                └── ...

Train & Test

python main.py 

Check and modify the training/testing options in main.py. Before training or testing, you need to rename the images of the LLVIP dataset and put them into the designated folders; we provide a script named rename.py that renames the images and saves them in the datasets or test folder (a sketch of this step follows). Checkpoints are saved in ./models/densefuse_gray/. To acquire the complete LLVIP dataset, please visit https://bupt-ai-cz.github.io/LLVIP/.
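
Below is a minimal sketch of that renaming step, inferred from the file structure above; it is only an illustration, and the provided rename.py is the reference implementation.

    # Hypothetical illustration: flatten LLVIP into datasets/ (train) and test/
    # with an _ir / _vi suffix, e.g. infrared/train/010001.jpg -> datasets/010001_ir.jpg
    import shutil
    from pathlib import Path

    llvip = Path("LLVIP")   # assumed location of the LLVIP dataset
    for modality, suffix in (("infrared", "ir"), ("visible", "vi")):
        for split, out_dir in (("train", Path("datasets")), ("test", Path("test"))):
            out_dir.mkdir(exist_ok=True)
            for img in sorted((llvip / modality / split).glob("*.jpg")):
                shutil.copy(img, out_dir / f"{img.stem}_{suffix}.jpg")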

IFCNN

Please visit https://github.com/uzeful/IFCNN.

Pedestrian Detection

Baselines

Yolov5

Preparation

Linux and Python>=3.6.0

  • Install requirements

    git clone https://github.com/bupt-ai-cz/LLVIP.git
    cd LLVIP/yolov5
    pip install -r requirements.txt
  • File structure

    The LLVIP training set is used to train the yolov5 model, and the LLVIP testing set is used for its validation.

    yolov5
    ├── ...
    └── LLVIP
        ├── labels
        │   ├── train
        │   │   ├── 010001.txt
        │   │   ├── 010002.txt
        │   │   └── ...
        │   └── val
        │       ├── 190001.txt
        │       ├── 190002.txt
        │       └── ...
        └── images
            ├── train
            │   ├── 010001.jpg
            │   ├── 010002.jpg
            │   └── ...
            └── val
                ├── 190001.jpg
                ├── 190002.jpg
                └── ...

    We provide a toolbox for converting annotation files to txt files in yolov5 format.
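
    For reference, a minimal sketch of such a conversion is shown below, assuming LLVIP's VOC-style XML annotations with person bounding boxes; the provided toolbox is the reference implementation.

    # Hypothetical illustration of a VOC-XML -> yolov5-txt conversion.
    import xml.etree.ElementTree as ET
    from pathlib import Path

    def xml_to_yolo(xml_path: Path, txt_path: Path, class_id: int = 0):
        root = ET.parse(xml_path).getroot()
        w = float(root.find("size/width").text)
        h = float(root.find("size/height").text)
        lines = []
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
            xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
            # yolov5 label format: class x_center y_center width height (normalized)
            cx, cy = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
            bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
            lines.append(f"{class_id} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
        txt_path.write_text("\n".join(lines))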

Train

python train.py --img 1280 --batch 8 --epochs 200 --data LLVIP.yaml --weights yolov5l.pt --name LLVIP_export

See more training options in train.py. The pretrained model yolov5l.pt can be downloaded from here. The trained model will be saved in the ./runs/train/LLVIP_export/weights folder.
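
The training command above reads the dataset configuration from LLVIP.yaml. As a rough sketch only (the file shipped with this repo may differ), a yolov5 dataset config for the layout above would look like this, with a single person class:

    # Hypothetical LLVIP.yaml (yolov5 dataset config)
    path: LLVIP          # dataset root, relative to the yolov5 folder
    train: images/train  # training images
    val: images/val      # validation images
    nc: 1                # number of classes
    names: ['person']    # LLVIP is annotated with pedestrians only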

Test

python val.py --img 1280 --weights last.pt --data LLVIP.yaml

Remember to put the trained model in the same folder as val.py.

Our trained model can be downloaded from here: Google-Drive-Yolov5-model or BaiduYun-Yolov5-model (code: qepr)

Results

We retrained and tested Yolov5l and Yolov3 on the updated dataset (30976 images).

Here AP denotes the average of the AP values at IoU thresholds from 0.5 to 0.95 with a step of 0.05 (ten thresholds), i.e. AP = (AP@0.50 + AP@0.55 + ... + AP@0.95) / 10.

The figure above shows how AP changes with the IoU threshold: when the IoU threshold exceeds 0.7, the AP value drops rapidly. In addition, the infrared images highlight pedestrians and give better detection results than the visible images, which not only shows the necessity of infrared images but also indicates that visible-image pedestrian detection does not perform well enough under low-light conditions.

We also calculated the log-average miss rate from the test results and plotted the miss rate-FPPI curve (a computation sketch follows).
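
As a reference for how that metric is typically computed (a minimal sketch under the usual Caltech-style protocol, not necessarily the exact script we used): the miss rate is sampled at nine FPPI values evenly spaced in log space between 0.01 and 1, and the samples are averaged in log space.

    # Hypothetical sketch of the log-average miss rate from a miss rate-FPPI curve.
    import numpy as np

    def log_average_miss_rate(fppi, miss_rate):
        """fppi and miss_rate: matching 1-D arrays describing the curve,
        with fppi sorted in ascending order."""
        refs = np.logspace(-2.0, 0.0, num=9)          # 9 points in [0.01, 1]
        sampled = []
        for ref in refs:
            idx = np.where(fppi <= ref)[0]
            # use the miss rate at the largest FPPI not exceeding the reference;
            # if the curve starts above that FPPI, fall back to 1.0
            sampled.append(miss_rate[idx[-1]] if idx.size else 1.0)
        # average in log space (geometric mean)
        return float(np.exp(np.mean(np.log(np.maximum(sampled, 1e-10)))))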

Image-to-Image Translation

Baseline

pix2pixGAN

Preparation

  • Install requirements
    cd pix2pixGAN
    pip install -r requirements.txt
  • Prepare dataset (a minimal pairing sketch follows the file structure below)
  • File structure
    pix2pixGAN
    ├── ...
    └── datasets
        ├── ...
        └── LLVIP
            ├── train
            │   ├── 010001.jpg
            │   ├── 010002.jpg
            │   ├── 010003.jpg
            │   └── ...
            └── test
                ├── 190001.jpg
                ├── 190002.jpg
                ├── 190003.jpg
                └── ...
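
To prepare the dataset, pix2pix's aligned setting expects each sample to be a single image holding the two modalities side by side. Below is a minimal sketch under that assumption (visible as A on the left, infrared as B on the right, matching --direction AtoB); the paths are placeholders.

    # Hypothetical illustration: build side-by-side visible|infrared pairs.
    from pathlib import Path
    from PIL import Image

    llvip = Path("path/to/LLVIP")                   # assumed LLVIP location
    for split in ("train", "test"):
        out_dir = Path("datasets/LLVIP") / split
        out_dir.mkdir(parents=True, exist_ok=True)
        for vis_path in sorted((llvip / "visible" / split).glob("*.jpg")):
            ir_path = llvip / "infrared" / split / vis_path.name
            vis, ir = Image.open(vis_path), Image.open(ir_path)
            pair = Image.new("RGB", (vis.width + ir.width, vis.height))
            pair.paste(vis, (0, 0))                 # A: visible, left half
            pair.paste(ir, (vis.width, 0))          # B: infrared, right half
            pair.save(out_dir / vis_path.name)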

Train

python train.py --dataroot ./datasets/LLVIP --name LLVIP --model pix2pix --direction AtoB --batch_size 8 --preprocess scale_width_and_crop --load_size 320 --crop_size 256 --gpu_ids 0 --n_epochs 100 --n_epochs_decay 100

Test

python test.py --dataroot ./datasets/LLVIP --name LLVIP --model pix2pix --direction AtoB --gpu_ids 0 --preprocess scale_width_and_crop --load_size 320 --crop_size 256

See ./pix2pixGAN/options for more train and test options.


Results

We retrained and tested pix2pixGAN on the updated dataset (30976 images). The generator uses the unet256 architecture, and the discriminator is the default basic PatchGAN.

License

This LLVIP Dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data provided that you agree to our license terms.

Call For Contributions

We welcome corrections to errors in the data annotations. If you want to modify the labels, please refer to the annotation tutorial and email us the corrected label file.

More annotation forms (such as segmentation) are also welcome; please contact us.

Acknowledgments

Thanks to XueZ-phd for his contribution to the LLVIP dataset: he corrected imperfect annotations in the dataset.

Contact

email: [email protected], [email protected], [email protected], [email protected]
