Lite-DETR

This is the official implementation of the paper "Lite DETR: An Interleaved Multi-Scale Encoder for Efficient DETR", accepted to CVPR 2023.

Code is available now.

Key Features

Efficient encoder design to reduce computational cost

  • Simple. Only dozens of lines of code change (not counting the pluggable key-aware attention).
  • Effective. Reduces encoder cost by 50% while preserving most of the original performance.
  • General. Validated on a series of DETR models (Deformable DETR, H-DETR, DINO).

Getting Started

Lite-DINO ResNet-50 Results

| # | Name                   | Backbone | box AP | Checkpoint |
|---|------------------------|----------|--------|------------|
| 1 | Lite-DINO-H2L2-(2+1)x3 | R50      | 49.9   | Link       |
| 2 | Lite-DINO-H3L1-(6+1)x1 | R50      | 50.2   | Link       |
| 3 | Lite-DINO-H3L1-(2+1)x3 | R50      | 50.4   | Link       |
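Our reading of the model names (an interpretation, not stated in this README): in H{a}L{b}-(m+n)xk, a high-level and b low-level feature scales are used, and each of the k interleaved encoder blocks updates the high-level features m times and the low-level features n times. A minimal sketch of that schedule, with names of our own choosing:

```python
# Hypothetical helper illustrating the interleaved update schedule suggested
# by names like "(2+1)x3": each block updates the high-level (small) feature
# maps a few times, then the low-level (large) maps once. This is not the
# repository's API, just an illustration of the naming.

def interleaved_schedule(high_updates, low_updates, blocks):
    """Flatten the schedule, e.g. (2+1)x3 -> 3 blocks of 2 'high' + 1 'low'."""
    steps = []
    for _ in range(blocks):
        steps += ["high"] * high_updates + ["low"] * low_updates
    return steps
```

Under this reading, (2+1)x3 yields nine update steps of which only three touch the expensive low-level maps, which is where the encoder-cost saving would come from.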

Installation

We use the same environment as DINO to run Lite-DINO; if you have already set up DINO, you can skip this step. We tested our models under python=3.7.3, pytorch=1.9.0, cuda=11.1. Other versions may work as well.

  1. Clone this repo

git clone https://github.com/IDEA-Research/Lite-DETR.git
cd Lite-DETR

  2. Install PyTorch and torchvision

Follow the instructions at https://pytorch.org/get-started/locally/.

# an example:
conda install -c pytorch pytorch torchvision

  3. Install other needed packages

pip install -r requirements.txt

  4. Compile the CUDA operators

cd models/dino/ops
python setup.py build install
# unit test (should see "all checking is True")
python test.py
cd ../../..
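If the unit test fails, a quick first check is whether the compiled extension was installed at all. A generic helper for that (not part of the repository; the module name is an assumption based on common Deformable-DETR-style repos, so adjust it to whatever setup.py installs):

```python
# Check whether a compiled extension module is discoverable in this
# environment. "MultiScaleDeformableAttention" is an assumed name; replace
# it with the module your build actually produced.
import importlib.util

def op_is_built(module_name="MultiScaleDeformableAttention"):
    """Return True if `module_name` is importable in this environment."""
    return importlib.util.find_spec(module_name) is not None
```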

Data

Please download COCO 2017 dataset and organize them as following:

COCODIR/
  β”œβ”€β”€ train2017/
  β”œβ”€β”€ val2017/
  └── annotations/
  	β”œβ”€β”€ instances_train2017.json
  	└── instances_val2017.json
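A small sanity check for this layout can save a failed run later. The helper below is ours, not part of the repository; the relative paths are taken from the tree above:

```python
# Verify the expected COCO 2017 layout under a given root directory.
from pathlib import Path

REQUIRED = [
    "train2017",
    "val2017",
    "annotations/instances_train2017.json",
    "annotations/instances_val2017.json",
]

def missing_coco_parts(coco_dir):
    """Return the relative paths from REQUIRED that are absent under coco_dir."""
    root = Path(coco_dir)
    return [rel for rel in REQUIRED if not (root / rel).exists()]
```

An empty return value means the directory matches the tree above and can be passed as --coco_path.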

Evaluate our pretrained models

Download a checkpoint from the links in the table above and run the command below.

python -m torch.distributed.launch main.py \
    --eval -c config/DINO/DINO_4scale.py --coco_path /path/to/your/COCODIR \
    --options num_expansion=a enc_scale=b --resume /path/to/ckpt

Note: set num_expansion=a and enc_scale=b as follows:

  • Lite-DINO-H2L2-(2+1)x3: a=3, b=1
  • Lite-DINO-H3L1-(6+1)x1: a=1, b=3
  • Lite-DINO-H3L1-(2+1)x3: a=3, b=3
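The per-model values above can be collected in one place (model names follow the results table; the option names are the ones used in the command above):

```python
# Map each released model to the --options values it needs for evaluation.
OPTIONS = {
    "Lite-DINO-H2L2-(2+1)x3": {"num_expansion": 3, "enc_scale": 1},
    "Lite-DINO-H3L1-(6+1)x1": {"num_expansion": 1, "enc_scale": 3},
    "Lite-DINO-H3L1-(2+1)x3": {"num_expansion": 3, "enc_scale": 3},
}

def eval_options(name):
    """Build the --options fragment for a given model name."""
    o = OPTIONS[name]
    return f"--options num_expansion={o['num_expansion']} enc_scale={o['enc_scale']}"
```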

Add --benchmark --benchmark_only at the end of the above command to measure the GFLOPs.

Lack of Speed Optimizations

We did not provide a CUDA implementation for the key-aware deformable attention (KDA), so training and inference are slow. As KDA mainly affects performance on small objects, you can use the original deformable attention instead by setting key_aware=False in the config; the overall performance will not be significantly impacted.
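As a rough illustration of what the key_aware switch changes (shapes and names are ours, kept minimal in numpy, not the repository's implementation): plain deformable attention predicts its sampling weights from the query alone via a learned projection, while key-aware attention derives them from query-key dot products over the sampled locations.

```python
# Illustrative sketch of the two weighting schemes for a single query.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def deformable_weights(query, weight_proj):
    # Weights depend only on the query: learned linear projection + softmax.
    return softmax(query @ weight_proj)  # shape: (n_points,)

def key_aware_weights(query, sampled_keys):
    # Weights also depend on the sampled keys: scaled dot-product attention.
    d = query.shape[-1]
    return softmax(sampled_keys @ query / np.sqrt(d))  # shape: (n_points,)
```

The second form needs the keys to be gathered at the sampled locations, which is the extra work that would benefit from a dedicated CUDA kernel.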

The concurrent work RT-DETR adopts a similar idea to handle high-resolution feature maps, along with other speed improvements. It is well optimized for running speed, so we encourage you to use RT-DETR in practical scenarios.

Train the model

You can train our model in a single process:

python -m torch.distributed.launch main.py \
    -c config/DINO/DINO_4scale.py --coco_path /path/to/your/COCODIR \
    --options num_expansion=a enc_scale=b

Distributed Run

However, as training is time-consuming, we suggest training the model on multiple GPUs:

python -m torch.distributed.launch --nproc_per_node=8 main.py \
    -c config/DINO/DINO_4scale.py --coco_path /path/to/your/COCODIR \
    --options num_expansion=a enc_scale=b

Model Framework

(Figure: overall framework of Lite DETR.)

Citing Lite DETR

If you find our work helpful for your research, please consider citing the following BibTeX entry.

@article{li2023lite,
  title={Lite DETR: An Interleaved Multi-Scale Encoder for Efficient DETR},
  author={Li, Feng and Zeng, Ailing and Liu, Shilong and Zhang, Hao and Li, Hongyang and Zhang, Lei and Ni, Lionel M},
  journal={arXiv preprint arXiv:2303.07335},
  year={2023}
}
