• Stars: 121
• Rank: 293,924 (Top 6%)
• Language: Python
• Created: about 5 years ago
• Updated: over 4 years ago


Repository Details

[arXiv 2020] Deep Connected Attention Networks

Deep Connected Attention Networks (DCANet)

Illustration

Figure 1. Illustration of our DCANet. We visualize intermediate feature activations using Grad-CAM. Vanilla SE-ResNet50 shifts its focus dramatically across stages. In contrast, our DCA-enhanced SE-ResNet50 progressively and recursively adjusts its focus and attends closely to the target object.
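
For readers who want to reproduce this kind of visualization, the snippet below is a minimal, self-contained Grad-CAM sketch in PyTorch (hooks on a chosen layer, gradient-weighted activations). It is not the authors' visualization code, and the example layer choice (`layer4` of a ResNet-style model) is only an assumption.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Minimal Grad-CAM sketch: weight a layer's activations by its pooled gradients."""
    store = {}
    fwd = target_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))
    try:
        logits = model(image)                      # image: (1, 3, H, W)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()
        weights = store["grad"].mean(dim=(2, 3), keepdim=True)   # GAP over spatial dims
        cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    finally:
        fwd.remove()
        bwd.remove()

# Hypothetical usage with a ResNet-style backbone:
# heatmap = grad_cam(se_resnet50.eval(), img_tensor, se_resnet50.layer4)
```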

Approach

Figure 2. An overview of our Deep Connected Attention Network. We connect the output of the transformation module in the previous attention block to the output of the extraction module in the current attention block. When there are multiple attention dimensions, we connect the attentions along each dimension. Here we show an example with two attention dimensions; the scheme extends to more dimensions.
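
As a rough illustration of this connection scheme, here is a hedged PyTorch sketch of an SE-style channel attention block that accepts the previous block's transformation output and adds it to the current extraction output. It is a simplification, not the paper's exact module: the class and argument names are ours, direct addition is assumed as the connection function, and adjacent blocks are assumed to have matching channel dimensions (the paper describes how mismatches are handled).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConnectedSEBlock(nn.Module):
    """SE-style channel attention with a DCA-style connection (a sketch, not the paper's code).

    The previous block's transformation output is added to the current block's
    extraction output before the transformation step, so attention information
    flows from block to block.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x, prev_transform=None):
        # Extraction module: global average pooling over the spatial dimensions.
        extracted = x.mean(dim=(2, 3))
        # DCA connection: fuse the previous transformation output into the current
        # extraction output (direct addition and matching channel counts assumed).
        if prev_transform is not None:
            extracted = extracted + prev_transform
        # Transformation module: bottleneck MLP producing channel attention.
        transformed = self.fc2(F.relu(self.fc1(extracted)))
        # Rescale the input features with the attention weights.
        out = x * torch.sigmoid(transformed).unsqueeze(-1).unsqueeze(-1)
        # Also return the transformation output so the next block can connect to it.
        return out, transformed

# Hypothetical usage: chain two blocks so attention information flows forward.
x = torch.randn(2, 64, 56, 56)
block1, block2 = ConnectedSEBlock(64), ConnectedSEBlock(64)
y1, t1 = block1(x)
y2, t2 = block2(y1, prev_transform=t1)
```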

Implementation

In this repository, all models are implemented in PyTorch.

We use the standard data augmentation strategies as in ResNet.
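
For reference, the standard ResNet-style ImageNet augmentation usually amounts to the torchvision pipeline below (random resized crop and horizontal flip for training, resize plus center crop for evaluation). Treat this as a sketch of the common recipe rather than the exact transforms used in this repository.

```python
import torchvision.transforms as T

IMAGENET_MEAN, IMAGENET_STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

# Training: random 224x224 crop with scale/aspect jitter, plus horizontal flip.
train_transform = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])

# Evaluation: resize the short side to 256, then take a single 224x224 center crop.
val_transform = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```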

To reproduce our DCANet results, please refer to Usage.md.

Trained Models

😊 All trained models and training log files have been uploaded to an anonymous Google Drive.

😊 We provide the corresponding links in the "Download" column of the tables below.
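
If you download one of these checkpoints, loading it should look roughly like the sketch below. This is an assumption about the checkpoint layout, not documented behavior of this repository: the file may be either a raw state dict or a dict wrapping one under a "state_dict" key, and keys may carry a "module." prefix from DataParallel training, so both cases are handled. The builder function in the usage comment is hypothetical.

```python
import torch
from torch import nn

def load_checkpoint(model: nn.Module, path: str) -> nn.Module:
    """Load a downloaded checkpoint into an already-constructed model."""
    ckpt = torch.load(path, map_location="cpu")
    # Accept either a raw state dict or {"state_dict": ...}.
    state_dict = ckpt.get("state_dict", ckpt)
    # Strip a possible "module." prefix left by DataParallel training.
    state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
    model.load_state_dict(state_dict)
    return model.eval()

# Hypothetical usage; the file name depends on what you download from the Drive link:
# model = load_checkpoint(build_dca_se_resnet50(), "dca_se_resnet50.pth")
```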



Table 1. Single-crop classification accuracy (%) on the ImageNet validation set. We re-train the baseline models using the PyTorch framework and report the results in the "Re-implement" columns; the corresponding DCANet variants are reported in the "DCANet" columns. The best performance in each comparison is marked in bold. "-" means no experiment, since our DCA module is designed to enhance attention blocks, which do not exist in the base networks.
| Model | Re-implement Top-1 | Re-implement Top-5 | Re-implement GFLOPs | Re-implement Params | Re-implement Download | DCANet Top-1 | DCANet Top-5 | DCANet GFLOPs | DCANet Params | DCANet Download |
|---|---|---|---|---|---|---|---|---|---|---|
| ResNet50 | 75.90 | 92.72 | 4.12 | 25.56M | model / log | - | - | - | - | - |
| SE-ResNet50 | 77.29 | 93.65 | 4.13 | 28.09M | model / log | **77.55** | **93.77** | 4.13 | 28.65M | model / log |
| SK-ResNet50 | 77.79 | 93.76 | 5.98 | 37.12M | model / log | **77.94** | **93.90** | 5.98 | 37.48M | model / log |
| GEθ-ResNet50 | 76.24 | 92.98 | 4.13 | 25.56M | model / log | **76.75** | **93.36** | 4.13 | 26.12M | model / log |
| GC-ResNet50 | 74.90 | 92.28 | 4.13 | 28.11M | model / log | **75.42** | **92.47** | 4.13 | 28.63M | model / log |
| CBAM-ResNet50 | 77.28 | 93.60 | 4.14 | 28.09M | model / log | **77.83** | **93.72** | 4.14 | 30.90M | model / log |
| Mnas1_0 | 71.72 | 90.32 | 0.33 | 4.38M | model / log | - | - | - | - | - |
| SE-Mnas1_0 | 69.69 | 89.12 | 0.33 | 4.42M | model / log | **71.76** | **90.40** | 0.33 | 4.48M | model / log |
| GEθ-Mnas1_0 | 72.72 | 90.87 | 0.33 | 4.38M | model / log | **72.82** | **91.18** | 0.33 | 4.48M | model / log |
| CBAM-Mnas1_0 | 69.13 | 88.92 | 0.33 | 4.42M | model / log | **71.00** | **89.78** | 0.33 | 4.56M | model / log |
| MobileNetV2 | 71.03 | 90.07 | 0.32 | 3.50M | model / log | - | - | - | - | - |
| SE-MobileNetV2 | 72.05 | 90.58 | 0.32 | 3.56M | model / log | **73.24** | **91.14** | 0.32 | 3.65M | model / log |
| SK-MobileNetV2 | 74.05 | 91.85 | 0.35 | 5.28M | model / log | **74.45** | 91.85 | 0.36 | 5.91M | model / log |
| GEθ-MobileNetV2 | 72.28 | **90.91** | 0.32 | 3.50M | model / log | **72.47** | 90.68 | 0.32 | 3.59M | model / log |
| CBAM-MobileNetV2 | 71.91 | 90.51 | 0.32 | 3.57M | model / log | **73.04** | **91.18** | 0.34 | 3.65M | model / log |


Table 2. Detection performance (%) with different backbones on the MS-COCO validation set. We employ two state-of-the-art detectors, RetinaNet and Cascade R-CNN, in our detection experiments.

| Detector | Backbone | AP(50:95) | AP(50) | AP(75) | AP(s) | AP(m) | AP(l) | Download |
|---|---|---|---|---|---|---|---|---|
| RetinaNet | ResNet50 | 36.2 | 55.9 | 38.5 | 19.4 | 39.8 | 48.3 | model / log |
| RetinaNet | SE-ResNet50 | 37.4 | 57.8 | 39.8 | 20.6 | 40.8 | 50.3 | model / log |
| RetinaNet | DCA-SE-ResNet50 | 37.7 | 58.2 | 40.1 | 20.8 | 40.9 | 50.4 | model / log |
| Cascade R-CNN | ResNet50 | 40.6 | 58.9 | 44.2 | 22.4 | 43.7 | 54.7 | model / log |
| Cascade R-CNN | GC-ResNet50 | 41.1 | 59.7 | 44.6 | 23.6 | 44.1 | 54.3 | model / log |
| Cascade R-CNN | DCA-GC-ResNet50 | 41.4 | 60.2 | 44.7 | 22.8 | 45.0 | 54.2 | model / log |
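
This README does not show the detection configs, but RetinaNet and Cascade R-CNN experiments like these are commonly run with mmdetection. Purely as an illustration (the base config path and the backbone type name below are hypothetical, not names registered by this repository), swapping in a DCA-enhanced backbone would look roughly like:

```python
# Hypothetical mmdetection-style config sketch; not taken from this repository.
_base_ = './retinanet_r50_fpn_1x_coco.py'   # assumed base RetinaNet config

model = dict(
    backbone=dict(
        type='DCASEResNet',   # hypothetical name for a registered DCA-SE backbone
        depth=50,
        # remaining backbone arguments would mirror the standard ResNet settings
    )
)
```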

More Repositories

1. Context-Cluster: [ICLR 2023 Oral] Image as Set of Points (Python, 537 stars)
2. pointMLP-pytorch: [ICLR 2022 poster] Official PyTorch implementation of "Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework" (Python, 481 stars)
3. Rewrite-the-Stars: [CVPR 2024] Rewrite the Stars (Python, 242 stars)
4. Open-Set-Recognition: Open Set Recognition (Python, 135 stars)
5. CollaborativeFiltering: A movie recommendation system using collaborative filtering on the MovieLens dataset (MATLAB, 107 stars)
6. FCViT: A Close Look at Spatial Modeling: From Attention to Convolution (Python, 89 stars)
7. LIVE: [CVPR 2022 Oral] Towards Layer-wise Image Vectorization (Python, 57 stars)
8. EfficientMod: [ICLR 2024 poster] Efficient Modulation for Vision Networks (Python, 44 stars)
9. DataMining: Java implementations of classic data mining (big data) algorithms; create a new Java project and copy this project into the src directory (Java, 27 stars)
10. SPANet: Code for "SPANet: Spatial Pyramid Attention Network for Enhanced Image Recognition" (Python, 22 stars)
11. ORL3: Face recognition based on LDA, PCA, and SVM (MATLAB, 21 stars)
12. CV_papers: A paper list for computer vision (18 stars)
13. Efficient_ImageNet_Classification: An efficient implementation for ImageNet classification (Python, 15 stars)
14. CenterLoss: A Discriminative Feature Learning Approach for Deep Face Recognition (Python, 8 stars)
15. DNN: A DNN learning project (Python, 7 stars)
16. TSNE (Python, 7 stars)
17. Non-Local: Compare Non-Local, GC, SE, and global average pooling (Python, 7 stars)
18. SPANet_TMM (Python, 6 stars)
19. Attention (Python, 5 stars)
20. L21FS (MATLAB, 5 stars)
21. SparseSENet: For the REU project (Python, 4 stars)
22. ResidualAttention (Python, 4 stars)
23. Dynamic-Conv (Python, 4 stars)
24. NANet: Code for "Attention Meets Normalization and Beyond" (Python, 4 stars)
25. awesome-vision-transformers-comparison: A detailed comparison of recent vision transformers on ImageNet-1k (4 stars)
26. hpc_yolo3: An object detection framework for the UNT REU 2019 project (Python, 3 stars)
27. 2PTWSVM (MATLAB, 3 stars)
28. ParameterFree: Official code for "Cascaded Context Dependency: An Extremely Lightweight Module for Deep Convolutional Neural Networks" (Python, 3 stars)
29. pointsMLP (Python, 3 stars)
30. DistKernel (Python, 3 stars)
31. RDANet: Residual decoupled attention on ImageNet (Python, 2 stars)
32. ShiftFormer: An Efficient Transformer Beyond Convolution, Self-Attention, and MLP (Python, 2 stars)
33. Learning_keras: A DNN learning demo based on Keras (2 stars)
34. imagenet_nv (Python, 2 stars)
35. ImageNet.fastai (Python, 2 stars)
36. cifar (Python, 2 stars)
37. imagenet.pytorch (Python, 2 stars)
38. detection (Python, 2 stars)
39. openmax (Python, 2 stars)
40. RDA_cifar_GCP: RDA on CIFAR-100 (Python, 2 stars)
41. incrementalAD: An incremental anomaly detection project written in MATLAB, using incremental LDA, incremental SVM, SMOTE, and hard negative mining (MATLAB, 2 stars)
42. MLproject: A project for Machine Learning (MATLAB, 2 stars)
43. mmdet: Forked from mmdetection (Python, 1 star)
44. PRM: IJCAI 2020 (Python, 1 star)
45. SuperPixelNet (1 star)
46. metric_learning: Deep Metric Learning (Python, 1 star)
47. RENYI (1 star)
48. mmdet0 (Python, 1 star)
49. ma-xu.github.io: Personal blog website (HTML, 1 star)
50. mmdetection3d-0.10.0 (Python, 1 star)
51. springMVC2222 (JavaScript, 1 star)
52. seg_every_thing2 (Python, 1 star)
53. DeepMetric (Python, 1 star)
54. khdj: 康护到家, home health care (HTML, 1 star)
55. Odevity (Python, 1 star)
56. EC: A basic learning template integrating Spring MVC with Hibernate and MySQL (Java, 1 star)
57. mmdet1 (Python, 1 star)
58. OLTR (Python, 1 star)
59. Library: all (JavaScript, 1 star)
60. ECommerce: test (JavaScript, 1 star)