• Stars: 245
• Rank: 165,304 (Top 4%)
• Language: Python
• License: Apache License 2.0
• Created: over 1 year ago
• Updated: about 1 year ago


Repository Details

InceptionNeXt: When Inception Meets ConvNeXt (CVPR 2024)

This is a PyTorch implementation of InceptionNeXt proposed by our paper "InceptionNeXt: When Inception Meets ConvNeXt".

InceptionNeXt TL;DR: to speed up ConvNeXt, we build InceptionNeXt by decomposing the large-kernel depthwise convolution in an Inception style. Our InceptionNeXt-T enjoys both ResNet-50's speed and ConvNeXt-T's accuracy.
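The decomposition splits the channels into four parallel branches: an identity branch plus a small square depthwise conv and two orthogonal band depthwise convs. Below is a minimal PyTorch sketch of this idea, with defaults following the paper's description (3x3 square kernel, 1x11 and 11x1 band kernels, 1/8 of channels per conv branch); the class and attribute names here are illustrative, so see the repository's model code for the actual implementation:

import torch
import torch.nn as nn

class InceptionDWConv2d(nn.Module):
    """Inception-style decomposition of a large-kernel depthwise conv (sketch)."""
    def __init__(self, in_channels, square_kernel_size=3, band_kernel_size=11,
                 branch_ratio=0.125):
        super().__init__()
        gc = int(in_channels * branch_ratio)  # channels handled by each conv branch
        self.dwconv_hw = nn.Conv2d(gc, gc, square_kernel_size,
                                   padding=square_kernel_size // 2, groups=gc)
        self.dwconv_w = nn.Conv2d(gc, gc, (1, band_kernel_size),
                                  padding=(0, band_kernel_size // 2), groups=gc)
        self.dwconv_h = nn.Conv2d(gc, gc, (band_kernel_size, 1),
                                  padding=(band_kernel_size // 2, 0), groups=gc)
        self.split_indexes = (in_channels - 3 * gc, gc, gc, gc)

    def forward(self, x):
        # Most channels pass through untouched (identity branch); the rest go
        # through cheap square- and band-kernel depthwise convolutions.
        x_id, x_hw, x_w, x_h = torch.split(x, self.split_indexes, dim=1)
        return torch.cat((x_id, self.dwconv_hw(x_hw),
                          self.dwconv_w(x_w), self.dwconv_h(x_h)), dim=1)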

Requirements

Our models are trained and tested with PyTorch 1.13, NVIDIA CUDA 11.7.1, and timm 0.6.11 (pip install timm==0.6.11). If you use Docker, check the Dockerfile that we used.

Data preparation: ImageNet with the following folder structure; you can extract ImageNet using this script.

│imagenet/
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
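This is the standard class-per-subfolder layout, so it can be sanity-checked with torchvision before training (a quick check under that assumption, not part of the training pipeline; replace the placeholder path with yours):

from torchvision import datasets

val_set = datasets.ImageFolder('/path/to/imagenet/val')
print(len(val_set), 'images in', len(val_set.classes), 'classes')  # expect 50000 images in 1000 classes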

Models

InceptionNeXt trained on ImageNet-1K

| Model | Resolution | Params | MACs | Train throughput (img/s) | Inference throughput (img/s) | Top-1 Acc (%) |
| --- | --- | --- | --- | --- | --- | --- |
| resnet50 | 224 | 26M | 4.1G | 969 | 3149 | 78.4 |
| convnext_tiny | 224 | 29M | 4.5G | 575 | 2413 | 82.1 |
| inceptionnext_tiny | 224 | 28M | 4.2G | 901 | 2900 | 82.3 |
| inceptionnext_small | 224 | 49M | 8.4G | 521 | 1750 | 83.5 |
| inceptionnext_base | 224 | 87M | 14.9G | 375 | 1244 | 84.0 |
| inceptionnext_base_384 | 384 | 87M | 43.6G | 139 | 428 | 85.2 |

ConvNeXt variants trained on ImageNet-1K

| Model | Resolution | Params | MACs | Train throughput (img/s) | Inference throughput (img/s) | Top-1 Acc (%) |
| --- | --- | --- | --- | --- | --- | --- |
| resnet50 | 224 | 26M | 4.1G | 969 | 3149 | 78.4 |
| convnext_tiny | 224 | 29M | 4.5G | 575 | 2413 | 82.1 |
| convnext_tiny_k5 | 224 | 29M | 4.4G | 675 | 2704 | 82.0 |
| convnext_tiny_k3 | 224 | 28M | 4.4G | 798 | 2802 | 81.5 |
| convnext_tiny_k3_par1_2 | 224 | 28M | 4.4G | 818 | 2740 | 81.4 |
| convnext_tiny_k3_par3_8 | 224 | 28M | 4.4G | 847 | 2762 | 81.4 |
| convnext_tiny_k3_par1_4 | 224 | 28M | 4.4G | 871 | 2808 | 81.3 |
| convnext_tiny_k3_par1_8 | 224 | 28M | 4.4G | 901 | 2833 | 80.8 |
| convnext_tiny_k3_par1_16 | 224 | 28M | 4.4G | 916 | 2846 | 80.1 |

The throughputs (images per second) are measured on an A100 with full precision and a batch size of 128. See Benchmarking throughput.

Usage

We also provide a Colab notebook that runs the steps to perform inference with InceptionNeXt: Colab
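For a quick local test, the inference flow looks roughly like the sketch below. It assumes importing this repo's model definitions registers the inceptionnext_* names with timm (the import name is an assumption; the validation command below relies on the same registration):

import torch
import timm
import models  # assumption: this repo's module registers inceptionnext_* with timm

model = timm.create_model('inceptionnext_tiny', pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)  # stand-in for a normalized 224x224 ImageNet image
with torch.no_grad():
    logits = model(x)
print(logits.argmax(dim=1))  # predicted ImageNet class index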

Validation

To evaluate our InceptionNeXt models, run:

MODEL=inceptionnext_tiny
python3 validate.py /path/to/imagenet  --model $MODEL -b 128 \
  --pretrained

Benchmarking throughput

In the environment described above, we benchmark throughputs on an A100 with a batch size of 128. The better result of the "Channel First" and "Channel Last" memory layouts is reported.

For Channel First:

MODEL=inceptionnext_tiny # convnext_tiny
python3 benchmark.py /path/to/imagenet  --model $MODEL

For Channel Last:

MODEL=inceptionnext_tiny # convnext_tiny
python3 benchmark.py /path/to/imagenet  --model $MODEL --channel-last
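For reference, the "Channel Last" setting corresponds to PyTorch's channels-last memory format. A rough sketch of what the flag enables (not the benchmark script itself; as above, the models import that registers the name with timm is an assumption):

import torch
import timm
import models  # assumption: registers inceptionnext_* with timm

model = timm.create_model('inceptionnext_tiny').eval()
model = model.to(memory_format=torch.channels_last)  # store weights in NHWC layout
x = torch.randn(128, 3, 224, 224).contiguous(memory_format=torch.channels_last)
with torch.no_grad():
    y = model(x)  # convolutions can pick faster NHWC kernels on Ampere GPUs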

Train

We use a batch size of 4096 by default and show how to train models with 8 GPUs. For multi-node training, adjust --grad-accum-steps according to your situation.

DATA_PATH=/path/to/imagenet
CODE_PATH=/path/to/code/inceptionnext # modify code path here


ALL_BATCH_SIZE=4096
NUM_GPU=8
GRAD_ACCUM_STEPS=4 # Adjust according to your GPU numbers and memory size.
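# Per-GPU batch size = ALL_BATCH_SIZE / NUM_GPU / GRAD_ACCUM_STEPS = 4096 / 8 / 4 = 128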
let BATCH_SIZE=ALL_BATCH_SIZE/NUM_GPU/GRAD_ACCUM_STEPS


MODEL=inceptionnext_tiny # inceptionnext_small, inceptionnext_base
DROP_PATH=0.1 # 0.3, 0.4


cd $CODE_PATH && sh distributed_train.sh $NUM_GPU $DATA_PATH \
--model $MODEL --opt adamw --lr 4e-3 --warmup-epochs 20 \
-b $BATCH_SIZE --grad-accum-steps $GRAD_ACCUM_STEPS \
--drop-path $DROP_PATH

Training (fine-tuning) scripts for other models are provided in scripts.

Bibtex

@article{yu2023inceptionnext,
  title={InceptionNeXt: when inception meets convnext},
  author={Yu, Weihao and Zhou, Pan and Yan, Shuicheng and Wang, Xinchao},
  journal={arXiv preprint arXiv:2303.16900},
  year={2023}
}

Acknowledgment

Weihao Yu would like to thank the TRC program and GCP research credits for supporting part of the computational resources. Our implementation is based on pytorch-image-models, poolformer, ConvNeXt and metaformer.

More Repositories

1. EditAnything (Python, 3,256 stars): Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM)
2. poolformer (Python, 1,290 stars): PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral)
3. envpool (C++, 1,084 stars): C++-based high-performance parallel environment execution engine (vectorized env) for general RL environments.
4. volo (Jupyter Notebook, 922 stars): VOLO: Vision Outlooker for Visual Recognition
5. Adan (Python, 743 stars): Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
6. MDT (Python, 494 stars): Masked Diffusion Transformer is the SOTA for image synthesis. (ICCV 2023)
7. metaformer (Python, 414 stars): MetaFormer Baselines for Vision (TPAMI 2024)
8. lorahub (Python, 380 stars): The official repository of the paper "LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition".
9. mvp (Python, 324 stars): NeurIPS 2021: Direct Multi-view Multi-person 3D Human Pose Estimation
10. CLoT (Python, 290 stars): CVPR'24, official codebase of our paper "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation".
11. iFormer (Python, 226 stars): iFormer: Inception Transformer
12. ptp (Python, 148 stars): [CVPR 2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training"
13. BindDiffusion (Python, 140 stars): BindDiffusion: One Diffusion Model to Bind Them All
14. sailor-llm (Python, 87 stars): ⚓️ Sailor: Open Language Models for South-East Asia
15. FDM (Python, 83 stars): The official PyTorch implementation of Fast Diffusion Model
16. mugs (Python, 78 stars): A PyTorch implementation of Mugs proposed by our paper "Mugs: A Multi-Granular Self-Supervised Learning Framework".
17. Agent-Smith (Python, 69 stars): [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast
18. sdft (Shell, 67 stars): [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".
19. symbolic-instruction-tuning (Python, 58 stars): The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning".
20. scaling-with-vocab (Python, 52 stars): 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623
21. ScaleLong (Python, 47 stars): The official repository of the paper "ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection" (NeurIPS 2023)
22. VGT (Python, 44 stars): Video Graph Transformer for Video Question Answering (ECCV'22)
23. jax_xc (Python, 43 stars): Exchange correlation functionals translated from libxc to jax
24. d4ft (Python, 40 stars): A JAX library for Density Functional Theory.
25. finetune-fair-diffusion (Python, 38 stars): Code of the paper: Finetuning Text-to-Image Diffusion Models for Fairness
26. dice (Python, 36 stars): Official implementation of Bootstrapping Language Models via DPO Implicit Rewards
27. ILD (Python, 33 stars): Imitation Learning via Differentiable Physics
28. GP-Nerf (Python, 33 stars): Official implementation for GP-NeRF (ECCV 2022)
29. Consistent3D (Python, 33 stars): The official PyTorch implementation of Consistent3D (CVPR 2024)
30. edp (Python, 32 stars): [NeurIPS 2023] Efficient Diffusion Policy
31. rosmo (Python, 28 stars): Code for "Efficient Offline Policy Optimization with a Learned Model", ICLR 2023
32. MMCBench (Python, 27 stars)
33. GDPO (Python, 24 stars): Graph Diffusion Policy Optimization
34. dualformer (Python, 23 stars)
35. hloenv (C++, 23 stars): An environment based on XLA for deep learning compiler optimization research.
36. DiffMemorize (Python, 21 stars): On Memorization in Diffusion Models
37. optim4rl (Python, 21 stars): Optim4RL is a Jax framework of learning to optimize for reinforcement learning.
38. TEC (Python, 15 stars)
39. numcc (Python, 12 stars): NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF
40. PatchAIL (Python, 12 stars): Implementation of PatchAIL in the ICLR 2023 paper "Visual Imitation with Patch Rewards"
41. offbench (Python, 11 stars)
42. OPER (Jupyter Notebook, 11 stars): Code for the paper Offline Prioritized Experience Replay
43. win (Python, 4 stars)
44. P-DoS (Python, 4 stars): [arXiv 2024] Denial-of-Service Poisoning Attacks on Large Language Models
45. sailcompass (Python, 3 stars)
46. SLRLA-optimizer (Python, 2 stars)
47. Cheating-LLM-Benchmarks (Jupyter Notebook, 2 stars)
48. I-FSJ (Python, 2 stars): Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
49. MISA (Python, 1 star): [NeurIPS 2023] Mutual Information Regularized Offline Reinforcement Learning