
Reference code for the paper: Deep White-Balance Editing (CVPR 2020). Our method is a deep learning multi-task framework for white-balance editing.

Deep White-Balance Editing, CVPR 2020 (Oral)

Mahmoud Afifi^1,2 and Michael S. Brown^1

^1 Samsung AI Center (SAIC) - Toronto

^2 York University


(Figure: deep white-balance editing teaser)

Reference code for the paper Deep White-Balance Editing. Mahmoud Afifi and Michael S. Brown, CVPR 2020. If you use this code or our dataset, please cite our paper:

@inproceedings{afifi2020deepWB,
  title={Deep White-Balance Editing},
  author={Afifi, Mahmoud and Brown, Michael S},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}

(Figure: network architecture)

Training data

  1. Download the Rendered WB dataset.

  2. Copy both the input images and the ground-truth images into a single directory. Each input/ground-truth pair should follow this naming convention: input image name_WB_picStyle.png and corresponding ground-truth image name_G_AS.png. This is the same filename style used in the Rendered WB dataset. For an example, see the dataset directory.
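As an illustrative sketch of the naming convention above (pair_images is a hypothetical helper, not part of the repo's code), each input image name_WB_picStyle.png can be matched to its ground truth name_G_AS.png like this:

```python
import os

def pair_images(filenames):
    # Collect ground-truth files (name_G_AS.png) keyed by their base name.
    gts = {}
    for f in filenames:
        stem, ext = os.path.splitext(f)
        if ext == ".png" and stem.endswith("_G_AS"):
            gts[stem[:-len("_G_AS")]] = f
    # Match each input image name_WB_picStyle.png to its ground truth.
    pairs = []
    for f in filenames:
        stem, ext = os.path.splitext(f)
        if ext != ".png" or stem.endswith("_G_AS"):
            continue
        base = stem.rsplit("_", 2)[0]  # drop the _WB_picStyle suffix
        if base in gts:
            pairs.append((f, gts[base]))
    return pairs

files = ["scene1_T_AS.png", "scene1_D_AS.png", "scene1_G_AS.png"]
print(pair_images(files))
# [('scene1_T_AS.png', 'scene1_G_AS.png'), ('scene1_D_AS.png', 'scene1_G_AS.png')]
```

This assumes the base name itself may contain underscores, so only the last two underscore-separated tokens are treated as the WB/style suffix.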

Code

We provide source code for both Matlab and PyTorch. The trained models on the two platforms are not guaranteed to produce exactly the same results.

1. Matlab (recommended)

Prerequisite

  1. Matlab 2019b or higher
  2. Deep Learning Toolbox

Get Started

Run install_.m

Demos:
  1. Run demo_single_image.m or demo_images.m to process a single image or an image directory, respectively. The available tasks are AWB, all, and editing. If you run demo_single_image.m, it should save the result in ../result_images and output the following figure:

  2. Run demo_GUI.m for a GUI demo.

Training Code:

Run training.m to start training. You should adjust the training image directories via the datasetDir variable before running the code. You can change the other training settings in training.m before training.

For example, you can use the epochs and miniBatch variables to change the number of training epochs and the mini-batch size, respectively. If you set fold = 0 and trainingImgsNum = 0, training will use all training data without fold cross-validation. If you would like to limit the number of training images to n, set trainingImgsNum to n. For 3-fold cross-validation, set fold = testing_fold; the code will then train on the remaining folds and leave the selected fold for testing.

Other useful options include: patchsPerImg to select the number of random patches per image and patchSize to set the size of training patches. To control the learning rate drop rate and factor, please check the get_training_options.m function located in the utilities directory. You can use the loadpath variable to continue training from a training checkpoint .mat file. To start training from scratch, use loadpath=[];.

Once training starts, a .csv file will be created in the reports_and_checkpoints directory. You can use this file to visualize training progress. If you run Matlab with a graphical interface and want to visualize some input/output patches during training, set a breakpoint here and enter the following code in the command window:

close all; i = 1; figure;
subplot(2,3,1); imshow(extractdata(Y(:,:,1:3,i)));
subplot(2,3,2); imshow(extractdata(Y(:,:,4:6,i)));
subplot(2,3,3); imshow(extractdata(Y(:,:,7:9,i)));
subplot(2,3,4); imshow(gather(T(:,:,1:3,i)));
subplot(2,3,5); imshow(gather(T(:,:,4:6,i)));
subplot(2,3,6); imshow(gather(T(:,:,7:9,i)));

You can change the value of i in the above code to see different images in the current training batch. The figure shows the produced patches (first row) and the corresponding ground-truth patches (second row). For a non-graphical interface, you can add custom code here to save example patches periodically. Hint: you may need a persistent variable to control the process. An alternative is to use a custom training loop.

2. PyTorch

Prerequisite

  1. Python 3.6

  2. pytorch (tested with 1.2.0 and 1.5.0)

  3. torchvision (tested with 0.4.0 and 0.6.0)

  4. cudatoolkit

  5. tensorboard (optional)

  6. numpy

  7. Pillow

  8. future

  9. tqdm

  10. matplotlib

  11. scipy

  12. scikit-learn

The code may work with library versions other than those specified.

Get Started

Demos:
  1. Run demo_single_image.py to process a single image. Example of applying AWB + different WB settings: python demo_single_image.py --input_image ../example_images/00.jpg --output_image ../result_images --show. This example should save the output image in ../result_images and output the following figure:

  2. Run demo_images.py to process an image directory. Example: python demo_images.py --input_dir ../example_images/ --output_dir ../result_images --task AWB. The available tasks are AWB, all, and editing. You can also specify the task in the demo_single_image.py demo.

Training Code:

Run train.py to start training. You should adjust the training image directories before running the code.

Example: CUDA_VISIBLE_DEVICES=0 python train.py --training_dir ../dataset/ --fold 0 --epochs 500 --learning-rate-drop-period 50 --num_training_images 0. In this example, fold = 0 and num_training_images = 0 mean that training uses all training data without fold cross-validation. If you would like to limit the number of training images to n, set num_training_images to n. For 3-fold cross-validation, set fold = testing_fold; the code will then train on the remaining folds and leave the selected fold for testing.
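The fold logic above can be sketched as follows. This is an illustrative helper, not the repo's code: split_folds and its modulo-based partition are assumptions, and the actual code may define the folds differently (e.g., from predefined fold lists).

```python
def split_folds(filenames, testing_fold, num_folds=3):
    """Return (train, test) file lists for the given 1-based testing fold.

    testing_fold == 0 means train on all data with no held-out fold,
    mirroring the --fold 0 behavior described above.
    """
    if testing_fold == 0:
        return list(filenames), []
    # Assign file i to fold (i % num_folds) + 1; hold out the testing fold.
    test = [f for i, f in enumerate(filenames) if i % num_folds == testing_fold - 1]
    train = [f for i, f in enumerate(filenames) if i % num_folds != testing_fold - 1]
    return train, test

files = [f"img_{i:02d}.png" for i in range(9)]
train, test = split_folds(files, testing_fold=1)
print(len(train), len(test))  # 6 3
```

With testing_fold = 1, one third of the images is held out for testing and the remaining two folds are used for training, matching the 3-fold description above.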

Other useful options include: --patches-per-image to select the number of random patches per image, --learning-rate-drop-period and --learning-rate-drop-factor to control the learning rate drop period and factor, respectively, and --patch-size to set the size of training patches. You can continue training from a checkpoint .pth file using the --load option.
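The patch options can be illustrated with a minimal sketch (random_patches is hypothetical, not the repo's implementation; numpy is already among the prerequisites):

```python
import numpy as np

def random_patches(img, patch_size=128, patches_per_image=4, rng=None):
    """Extract random square crops from an H x W x C image array,
    mirroring the --patch-size / --patches-per-image options."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = img.shape[:2]
    patches = []
    for _ in range(patches_per_image):
        # Pick a top-left corner so the patch stays inside the image.
        y = int(rng.integers(0, h - patch_size + 1))
        x = int(rng.integers(0, w - patch_size + 1))
        patches.append(img[y:y + patch_size, x:x + patch_size])
    return patches

img = np.zeros((256, 384, 3), dtype=np.uint8)
patches = random_patches(img, patch_size=128, patches_per_image=4)
print(len(patches), patches[0].shape)  # 4 (128, 128, 3)
```

Training on small random patches rather than full images increases the effective number of training samples per image and keeps the memory footprint of each mini-batch bounded.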

If you have TensorBoard installed on your machine, run tensorboard --logdir ./runs after starting training to check training progress and visualize samples of input/output patches.

Results

(Figure: results)

This software is provided for research purposes only and CANNOT be used for commercial purposes.

Maintainer: Mahmoud Afifi ([email protected])

Related Research Projects

  • When Color Constancy Goes Wrong: The first work to directly address the problem of incorrectly white-balanced images; it requires a small memory overhead and is fast (CVPR 2019).
  • White-Balance Augmenter: An augmentation technique based on camera WB errors (ICCV 2019).
  • Interactive White Balancing: A simple method that links nonlinear white-balance correction to the user's selected colors to allow interactive white-balance manipulation (CIC 2020).
  • Exposure Correction: A single coarse-to-fine deep learning model with adversarial training to correct both over- and under-exposed photographs (CVPR 2021).
