# Caffe-ExcitationBP
This is a Caffe implementation of Excitation Backprop described in

> Top-down Neural Attention by Excitation Backprop. Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff. ECCV, 2016.
This software implementation is provided for academic research and non-commercial purposes only. This implementation is provided without warranty. The Excitation Backprop method described in the above paper and implemented in this software is patent-pending by Adobe.
## Prerequisites
- The same prerequisites as Caffe
- Anaconda (python packages)
## Quick Start
- Unzip the files to a local folder (denoted as root_folder).
- Enter the root_folder and compile the code the same way as in Caffe.
- Our code is tested in GPU mode, so make sure to enable the GPU build when compiling.
- Make sure to also compile pycaffe, the Python interface.
- Enter root_folder/ExcitationBP and run demo.ipynb in a Python notebook. It will automatically download the pre-trained GoogLeNet model for COCO and show you how to compute the contrastive attention map. For details on running a Python notebook remotely on a server, see here.
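For intuition, the contrastive attention computation that the notebook performs inside Caffe can be sketched in plain NumPy for a single classifier layer. This is an illustrative simplification of the Excitation Backprop rule, not the repository's code; the array shapes and values below are toy assumptions:

```python
import numpy as np

def eb_step(a, W, p_top):
    """One Excitation Backprop step through a linear layer (sketch).

    Only positive weights carry excitation: each top neuron's winning
    probability is split among bottom neurons in proportion to
    a_i * max(W_ij, 0), then summed over top neurons.
    """
    contrib = a[:, None] * np.maximum(W, 0.0)   # (n_in, n_out) unnormalized scores
    norm = contrib.sum(axis=0)                  # per-top-neuron normalizer
    norm[norm == 0] = 1.0                       # avoid division by zero
    return (contrib / norm) @ p_top             # (n_in,) bottom probabilities

# contrastive map for class c: excitation for the class minus the
# excitation obtained with negated classifier weights, clipped at zero
a = np.array([0.5, 1.0, 2.0, 0.0])              # toy penultimate activations
W = np.array([[ 0.2, -0.1],
              [-0.3,  0.4],
              [ 0.5,  0.1],
              [ 0.1, -0.2]])
c = 0
p = np.zeros(W.shape[1]); p[c] = 1.0            # start all probability at class c
pos = eb_step(a,  W, p)                         # standard excitation
neg = eb_step(a, -W, p)                         # excitation of the "non-class"
contrastive = np.maximum(pos - neg, 0.0)
```

The actual implementation applies this probability-propagation rule layer by layer inside Caffe's backward pass.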
## Other comments
- We also implemented the gradient-based method and the deconv method compared in our paper. See demo.ipynb.
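The key difference between those two methods lies in how the backward signal is gated at ReLU layers; a minimal NumPy illustration with toy values (not the Caffe implementation):

```python
import numpy as np

x  = np.array([ 1.0, -2.0,  3.0, -0.5])   # input to a ReLU on the forward pass
dy = np.array([-1.0,  2.0,  0.5, -3.0])   # signal arriving from the layer above

# gradient method: pass the signal wherever the forward input was positive
grad_rule   = dy * (x > 0)     # [-1.0, 0.0, 0.5, 0.0]
# deconv method: pass the signal wherever the signal itself is positive
deconv_rule = dy * (dy > 0)    # [ 0.0, 2.0, 0.5, 0.0]
```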
- We implemented both GPU and CPU versions of Excitation Backprop. Change `caffe.set_mode_eb_gpu()` to `caffe.set_mode_eb_cpu()` to run the CPU version.
- Our pre-trained model is modified to be fully convolutional, so that images of any size and aspect ratio can be processed directly.
- To apply your own CNN model, you need to modify its deploy.prototxt following the example in root_folder/models/COCO/deploy.prototxt. Basically, you need to add a dummy loss layer at the end of the file, and make sure to remove any dropout layers.
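As an illustration only (the layer name, type, and blob names here are placeholders; copy the exact definition from root_folder/models/COCO/deploy.prototxt), the dummy loss layer appended at the end of the prototxt has this general shape:

```
layer {
  name: "loss"              # placeholder name
  type: "SoftmaxWithLoss"   # illustrative; use the type from the reference file
  bottom: "classifier"      # your network's final score blob
  top: "loss"
}
```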
- (New) We have made some modifications to make our method work on ResNet-like models. When handling an EltwiseLayer, we ignore the bottom input corresponding to the skip link. We find that this works better than splitting the signal between the two bottoms.
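In NumPy terms, that EltwiseLayer rule amounts to the following sketch (the function and the `skip_index` argument are hypothetical names for illustration, not the repository's API):

```python
import numpy as np

def eb_eltwise_backward(p_top, skip_index=0):
    """Route the top attention signal through a SUM Eltwise layer.

    Rather than splitting p_top between the two bottom blobs, send all
    of it to the residual branch and none to the skip connection.
    """
    bottoms = [np.zeros_like(p_top), np.zeros_like(p_top)]
    bottoms[1 - skip_index] = p_top.copy()   # whole signal to the non-skip bottom
    return bottoms

p_top = np.array([0.2, 0.5, 0.3])
to_skip, to_branch = eb_eltwise_backward(p_top, skip_index=0)
```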
## Supplementary data
- Image lists for COCO and VOC07, including sublists for the difficult images used in the paper: download