
Attention-based Dropout Layer for Weakly Supervised Object Localization (CVPR 2019 Oral)
Junsuk Choe and Hyunjung Shim
School of Integrated Technology, Yonsei University

Weakly Supervised Object Localization (WSOL) techniques learn the object location using only image-level labels, without location annotations. A common limitation of these techniques is that they cover only the most discriminative part of the object, not the entire object. To address this problem, we propose an Attention-based Dropout Layer (ADL), which utilizes the self-attention mechanism to process the feature maps of the model. The proposed method is composed of two key components: 1) hiding the most discriminative part from the model to capture the integral extent of the object, and 2) highlighting the informative region to improve the recognition power of the model. Based on extensive experiments, we demonstrate that the proposed method effectively improves the accuracy of WSOL, achieving a new state-of-the-art localization accuracy on the CUB-200-2011 dataset. We also show that the proposed method is much more efficient, in terms of both parameter and computation overheads, than existing techniques.


ADL block diagram. The self-attention map is generated by channel-wise average pooling of the input feature map. Based on the self-attention map, we produce a drop mask using thresholding and an importance map using a sigmoid activation, respectively. At each iteration, either the drop mask or the importance map is stochastically selected and applied to the input feature map. Please note that this figure illustrates the case when the importance map is selected.
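
For readers who want to map the description above to code, here is a minimal sketch of the ADL mechanism, written against the TensorFlow 1.x API used by this repository. It is only an illustration of the idea: the function name adl and the hyperparameters drop_rate (probability of selecting the drop mask) and drop_threshold (fraction of the maximum attention value used for thresholding) are assumed names and may differ from the actual implementation.

import tensorflow as tf

def adl(feature_map, drop_rate=0.75, drop_threshold=0.8, training=True):
    """Illustrative ADL sketch for an NHWC feature map (not the repository code)."""
    if not training:
        # ADL is only active during training; at inference the input passes through.
        return feature_map

    # Self-attention map: channel-wise average pooling, shape (N, H, W, 1).
    attention = tf.reduce_mean(feature_map, axis=3, keepdims=True)

    # Importance map: sigmoid activation of the self-attention map.
    importance_map = tf.sigmoid(attention)

    # Drop mask: zero out positions whose attention exceeds a fraction of the
    # per-sample maximum attention value (thresholding).
    max_attention = tf.reduce_max(attention, axis=[1, 2], keepdims=True)
    drop_mask = tf.cast(attention < drop_threshold * max_attention,
                        feature_map.dtype)

    # Stochastically select either the drop mask or the importance map
    # at each iteration and apply it to the input feature map.
    coin = tf.random_uniform([], 0.0, 1.0)
    selected_map = tf.cond(coin < drop_rate,
                           lambda: drop_mask,
                           lambda: importance_map)
    return feature_map * selected_map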

Getting Started

Tensorpack implementation of the Attention-based Dropout Layer for Weakly Supervised Object Localization.
PyTorch implementation is available at: link

Our implementation is based on these repositories:

ImageNet pre-trained models can be downloaded here:

Requirements

  • Python 3.3+
  • Python bindings for OpenCV.
  • TensorFlow (≥ 1.12, < 2)
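
A quick, optional way to confirm the environment satisfies these requirements is the short check below; it is not part of the repository and only assumes the packages listed above are importable.

import sys
import cv2                      # Python bindings for OpenCV
import tensorflow as tf

assert sys.version_info >= (3, 3), "Python 3.3+ is required"
major, minor = (int(v) for v in tf.__version__.split(".")[:2])
assert (1, 12) <= (major, minor) < (2, 0), "TensorFlow >= 1.12 and < 2 is required"
print("OpenCV", cv2.__version__, "| TensorFlow", tf.__version__)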

Prepare datasets

ImageNet

To prepare the ImageNet data, download the ImageNet "train" and "val" splits from here and place the downloaded files at dataset/ILSVRC2012_img_train.tar and dataset/ILSVRC2012_img_val.tar.

Then, run the following command from the root directory to extract the images.

./dataset/prepare_imagenet.sh

You may need to run apt-get install parallel first, since the extraction script uses GNU parallel.

The structure of the extracted image files looks like this:

dataset
└── ILSVRC
    └── train
        └── n01440764
            ├── n01440764_10026.JPEG
            ├── n01440764_10027.JPEG
            └── ...
        └── n01443537
        └── ...
    └── val
        ├── ILSVRC2012_val_00000001.JPEG
        ├── ILSVRC2012_val_00000002.JPEG
        └── ...

Corresponding annotation files can be found here.
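
As an optional sanity check (not part of the repository), the short script below counts the extracted class folders and validation images, assuming the dataset/ILSVRC layout shown above.

import os

root = "dataset/ILSVRC"
train_dir = os.path.join(root, "train")
val_dir = os.path.join(root, "val")

# ImageNet-1k should yield 1000 class folders under train/ and 50000 val images.
train_classes = [d for d in os.listdir(train_dir)
                 if os.path.isdir(os.path.join(train_dir, d))]
val_images = [f for f in os.listdir(val_dir) if f.endswith(".JPEG")]

print("train classes:", len(train_classes))  # expected: 1000
print("val images:", len(val_images))        # expected: 50000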

CUB

Run the following command to download the original CUB dataset and extract the image files in the root directory.

./dataset/prepare_cub.sh

The structure of the extracted image files looks like this:

dataset
└── CUB
    └── 001.Black_footed_Albatross
        ├── Black_Footed_Albatross_0001_796111.jpg
        ├── Black_Footed_Albatross_0002_55.jpg
        └── ...
    └── 002.Laysan_Albatross
    └── ...

Corresponding annotation files can be found here.

Training script

First, download the pretrained models from here. Currently, we provide ResNet50-SE and VGG-16 networks.
Then, run the following command from the root directory.

./run_train.sh