[ECCV2022] "Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression", https://arxiv.org/abs/2207.10564

night_enhancement (ECCV'2022)

Introduction

This is an implementation of the following paper.

Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression.
European Conference on Computer Vision (ECCV2022)

Yeying Jin, Wenhan Yang and Robby T. Tan

[Paper] [Supplementary] arXiv [Poster] [Slides] [Link]

Abstract

Night images suffer not only from low light, but also from uneven distributions of light. Most existing night visibility enhancement methods focus mainly on enhancing low-light regions. This inevitably leads to over enhancement and saturation in bright regions, such as those regions affected by light effects (glare, floodlight, etc). To address this problem, we need to suppress the light effects in bright regions while, at the same time, boosting the intensity of dark regions. With this idea in mind, we introduce an unsupervised method that integrates a layer decomposition network and a light-effects suppression network. Given a single night image as input, our decomposition network learns to decompose shading, reflectance and light-effects layers, guided by unsupervised layer-specific prior losses. Our light-effects suppression network further suppresses the light effects and, at the same time, enhances the illumination in dark regions. This light-effects suppression network exploits the estimated light-effects layer as the guidance to focus on the light-effects regions. To recover the background details and reduce hallucination/artefacts, we propose structure and high-frequency consistency losses. Our quantitative and qualitative evaluations on real images show that our method outperforms state-of-the-art methods in suppressing night light effects and boosting the intensity of dark regions.
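
As a rough illustration of the model described in the abstract (not the authors' exact formulation), the sketch below assumes a night image I is decomposed into reflectance R, shading S, and a light-effects layer G, so that the background is R * S and the input is reconstructed as R * S + G; the loss terms and weights are placeholders for the layer-specific priors described in the paper.

# Conceptual PyTorch sketch of the three-layer night-image model I ≈ R * S + G.
# This is NOT the released training code; priors and weights are illustrative.
import torch
import torch.nn.functional as F

def decomposition_losses(I, R, S, G, w_rec=1.0, w_smooth=0.1, w_sparse=0.01):
    """I, R, G: (B, 3, H, W) tensors in [0, 1]; S: (B, 1, H, W) shading."""
    # Reconstruction: the three layers should re-compose the input image.
    loss_rec = F.l1_loss(R * S + G, I)
    # Smoothness prior on shading: illumination is assumed to vary slowly.
    dy = (S[..., 1:, :] - S[..., :-1, :]).abs().mean()
    dx = (S[..., :, 1:] - S[..., :, :-1]).abs().mean()
    loss_smooth = dx + dy
    # Sparsity prior on light effects: glare/glow covers limited regions.
    loss_sparse = G.abs().mean()
    return w_rec * loss_rec + w_smooth * loss_smooth + w_sparse * loss_sparse

In the actual method, the decomposition network predicts these layers under unsupervised layer-specific priors, and the light-effects suppression network uses the estimated light-effects layer as guidance; see the paper for the structure and high-frequency consistency losses.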

Datasets

Light-Effects Suppression on Night Data

  1. Light-effects data
    Light-effects data was collected from Flickr and captured by ourselves, covering multiple light colors in various scenes:

  2. LED data
    We captured images with dimmer light as the reference images.

  3. GTA5 nighttime fog
    Synthetic GTA5 nighttime fog data:
  • ECCV2020 Nighttime Defogging Using High-Low Frequency Decomposition and Grayscale-Color Networks [Paper]
    Wending Yan, Robby T. Tan and Dengxin Dai

  4. Syn-light-effects
    Synthetic-light-effects data is generated with the implementation of the paper,
    S. Metari, F. Deschênes, "A New Convolution Kernel for Atmospheric Point Spread Function Applied to Computer Vision", ICCV, 2007.
    Run the MATLAB code to generate Syn-light-effects:
glow_rendering_code/repro_ICCV2007_Fig5.m
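
For intuition only, the hedged Python sketch below approximates glow rendering by blurring the brightest regions of an image and adding the result back as a light-effects layer; the threshold, kernel size, and strength are assumptions, and the actual synthetic data uses the APSF kernel of Metari and Deschênes implemented in glow_rendering_code/repro_ICCV2007_Fig5.m.

# Assumed-parameters sketch of synthetic glow rendering (a large Gaussian blur
# stands in for the atmospheric point spread function used by the MATLAB code).
import numpy as np
import cv2

def render_synthetic_glow(img_bgr, thresh=220, ksize=101, strength=0.8):
    """img_bgr: uint8 HxWx3 night image; returns the image with synthetic glow."""
    img = img_bgr.astype(np.float32) / 255.0
    # Keep only bright light sources (assumed luminance threshold).
    luminance = img.max(axis=2, keepdims=True)
    sources = img * (luminance * 255.0 > thresh)
    # Spread the sources with a wide blur to mimic glow around lights.
    glow = cv2.GaussianBlur(sources, (ksize, ksize), 0)
    return (np.clip(img + strength * glow, 0.0, 1.0) * 255.0).astype(np.uint8)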

Light-Effects Suppression Results:

Pre-trained Model

[Update] We have released light-effects suppression code and checkpoint on May 21, 2023.

  1. Download the pre-trained de-light-effects model and put it in ./results/delighteffects/model/
  2. Put the test images in ./light-effects/

Light-effects Suppression Test

python main_delighteffects.py

Demo

[Update] We have released the demo_all.html and demo_all.ipynb demo code on May 21, 2023.

Inputs are in ./light-effects/; outputs are in ./light-effects-output/.

demo_all.ipynb

[Update] We have released demo code on Dec 28, 2022.

python demo.py

Decomposition

[Update] We have released the decomposition code on Dec 28, 2022. Run the code to perform layer decomposition; it outputs the light-effects layer and the initial background layer.

demo_decomposition.m

Background Results | Light-Effects Results | Shading Results

Feature Results:

  1. Run the MATLAB code to adaptively fuse the three color channels and output I_gray:
checkGrayMerge.m

  2. Download the fine-tuned VGG model (fine-tuned on ExDark, the Exclusively Dark Image Dataset) and put it in ./VGG_code/ckpts/vgg16_featureextractFalse_ExDark/nets/model_best.tar

  3. Obtain structure features:

python test_VGGfeatures.py
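
As a rough sketch of what the structure features are (the released test_VGGfeatures.py is authoritative; the layer indices and checkpoint handling below are assumptions), intermediate VGG-16 activations of the fused input can be extracted and compared between input and output to enforce structure consistency:

# Hypothetical VGG-16 feature extraction for the structure-consistency check;
# layer indices are illustrative, and the ExDark fine-tuned weights from step 2
# would be loaded according to the repository's own model definition.
import torch
import torchvision

def vgg_structure_features(img, layer_ids=(3, 8, 15)):
    """img: (B, 3, H, W) tensor in [0, 1]; returns a list of feature maps."""
    vgg = torchvision.models.vgg16(weights=None).features.eval()
    feats, x = [], img
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layer_ids:
                feats.append(x)
    return feats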

Summary of Comparisons:

Low-Light Enhancement

  1. LOL dataset
    LOL: Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. "Deep Retinex Decomposition for Low-Light Enhancement", BMVC, 2018. [Baiduyun (extracted code: sdd0)] [Google Drive]

  2. LOL-Real dataset
    LOL-real (the extension work): Wenhan Yang, Haofeng Huang, Wenjing Wang, Shiqi Wang, and Jiaying Liu. "Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement", TIP, 2021. [Baiduyun (extracted code: l9xm)] [Google Drive]

    We use LOL-real as it is larger and more diverse.

Low-Light Enhancement Results:

Pre-trained Model

  1. Download the pre-trained LOL model and put it in ./results/LOL/model/
  2. Put the test images in ./LOL/

Low-light Enhancement Test

python main.py

Results

  1. LOL-Real Results

The following reproduces Table 4 of the main paper on the LOL-Real dataset (100 test images); see the evaluation sketch after these result tables.

Learning | Method | PSNR | SSIM
Unsupervised Learning | Ours | 25.51 | 0.8015
N/A | Input | 9.72 | 0.1752

[Update]: Re-trained from scratch on LOL_V2_real (698 training images) and tested on LOL_V2_real (100 test images):
PSNR: 20.85 (vs. EnlightenGAN's 18.23), SSIM: 0.7243 (vs. EnlightenGAN's 0.61). [pre-trained LOL_V2 model]

  2. LOL-test Results

The following reproduces Table 3 of the main paper on the LOL-test dataset (15 test images).

Learning | Method | PSNR | SSIM
Unsupervised Learning | Ours | 21.521 | 0.7647
N/A | Input | 7.773 | 0.1259
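
The PSNR/SSIM values in the two tables above are standard full-reference metrics; a minimal evaluation sketch using scikit-image is shown below (the repository's own evaluation script may differ in color space, data range, or cropping).

# Minimal PSNR/SSIM evaluation sketch using scikit-image.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(pred, gt):
    """pred, gt: uint8 HxWx3 enhanced result and ground-truth image."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)
    return psnr, ssim

Averaging over all test images gives the per-dataset numbers reported above.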

Citations

If this work is useful for your research, please cite our paper.

@inproceedings{jin2022unsupervised,
  title={Unsupervised night image enhancement: When layer decomposition meets light-effects suppression},
  author={Jin, Yeying and Yang, Wenhan and Tan, Robby T},
  booktitle={European Conference on Computer Vision},
  pages={404--421},
  year={2022},
  organization={Springer}
}

If light-effects data is useful for your research, please cite our paper.

@inproceedings{sharma2021nighttime,
	title={Nighttime Visibility Enhancement by Increasing the Dynamic Range and Suppression of Light Effects},
	author={Sharma, Aashish and Tan, Robby T},
	booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
	pages={11977--11986},
	year={2021}
}

If GTA5 nighttime fog data is useful for your research, please cite our paper.

@inproceedings{yan2020nighttime,
	title={Nighttime defogging using high-low frequency decomposition and grayscale-color networks},
	author={Yan, Wending and Tan, Robby T and Dai, Dengxin},
	booktitle={European Conference on Computer Vision},
	pages={473--488},
	year={2020},
	organization={Springer}
}