[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"

🦕 Grounding DINO

IDEA-CVR, IDEA-Research

Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang 📧.

[Paper] [Demo] [BibTeX]

PyTorch implementation and pretrained models for Grounding DINO. For details, see the paper Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection.

🌞 Helpful Tutorial

✨ Highlight Projects

💡 Highlight

  • Open-Set Detection. Detect everything with language!
  • High Performance. COCO zero-shot 52.5 AP (training without COCO data!). COCO fine-tune 63.0 AP.
  • Flexible. Collaboration with Stable Diffusion for Image Editing.

🔥 News

  • 2023/07/18: We release Semantic-SAM, a universal image segmentation model that can segment and recognize anything at any desired granularity. Code and checkpoints are available!
  • 2023/06/17: We provide an example to evaluate Grounding DINO's zero-shot performance on COCO.
  • 2023/04/15: Refer to CV in the Wild Readings if you are interested in open-set recognition!
  • 2023/04/08: We release demos that combine Grounding DINO with GLIGEN for more controllable image editing.
  • 2023/04/08: We release demos that combine Grounding DINO with Stable Diffusion for image editing.
  • 2023/04/06: We build a new demo by marrying GroundingDINO with Segment Anything, named Grounded-Segment-Anything, which aims to support segmentation with GroundingDINO.
  • 2023/03/28: A YouTube video about Grounding DINO and basic object detection prompt engineering. [SkalskiP]
  • 2023/03/28: Add a demo on Hugging Face Space!
  • 2023/03/27: Support CPU-only mode. Now the model can run on machines without GPUs.
  • 2023/03/25: A demo for Grounding DINO is available at Colab. [SkalskiP]
  • 2023/03/22: Code is available now!
[Figures: paper introduction; ODinW results; marrying Grounding DINO and GLIGEN (gd_gligen)]

โญ Explanations/Tips for Grounding DINO Inputs and Outputs

  • Grounding DINO accepts an (image, text) pair as input.
  • It outputs 900 (by default) object boxes. Each box has similarity scores across all input words (as shown in the figures below).
  • By default, we keep the boxes whose highest similarity exceeds a box_threshold (see the sketch after this list).
  • We extract the words whose similarities are higher than the text_threshold as predicted labels.
  • If you want to obtain objects for specific phrases, like the dogs in the sentence two dogs with a stick., you can select the boxes with the highest text similarities to dogs as the final outputs.
  • Note that each word can be split into more than one token by different tokenizers, so the number of words in a sentence may not equal the number of text tokens.
  • We suggest separating different category names with . for Grounding DINO.

[Figures: model_explain1, model_explain2]
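Below is a minimal sketch of this default filtering logic, using random tensors in place of real model outputs; all names and shapes here are illustrative stand-ins, not the repository's API.

import torch

# Illustrative stand-ins for the raw model outputs: 900 candidate boxes,
# each with a similarity score for every text token (20 tokens here).
logits = torch.rand(900, 20)
boxes = torch.rand(900, 4)

box_threshold, text_threshold = 0.35, 0.25

# Keep a box if its highest token similarity exceeds box_threshold.
keep = logits.max(dim=1).values > box_threshold
kept_logits, kept_boxes = logits[keep], boxes[keep]

# For each kept box, build its predicted label from the tokens whose
# similarity exceeds text_threshold.
for scores in kept_logits:
    label_token_ids = (scores > text_threshold).nonzero(as_tuple=True)[0]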

๐Ÿท๏ธ TODO

  • Release inference code and demo.
  • Release checkpoints.
  • Grounding DINO with Stable Diffusion and GLIGEN demos.
  • Release training code.

๐Ÿ› ๏ธ Install

Note:

  1. If you have a CUDA environment, please make sure the environment variable CUDA_HOME is set. The package will be compiled in CPU-only mode if CUDA is not available.
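As a quick optional check (not part of the original note), you can ask PyTorch directly whether it sees a GPU; False means the extension will be built in CPU-only mode:

python -c "import torch; print(torch.cuda.is_available())"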

Please make sure you follow the installation steps strictly; otherwise the program may produce:

NameError: name '_C' is not defined

If this happens, please reinstall GroundingDINO by re-cloning the repository and redoing all the installation steps.

How to check CUDA_HOME:

echo $CUDA_HOME

If it prints nothing, the path has not been set yet.

Run this to set the environment variable in the current shell:

export CUDA_HOME=/path/to/cuda-11.3

Note that the CUDA version should be aligned with your CUDA runtime, since multiple versions of CUDA may exist on the same machine.

If you want to set CUDA_HOME permanently, store it using:

echo 'export CUDA_HOME=/path/to/cuda' >> ~/.bashrc

After that, source the bashrc file and check CUDA_HOME:

source ~/.bashrc
echo $CUDA_HOME

In this example, /path/to/cuda-11.3 should be replaced with the path where your CUDA toolkit is installed. You can find this by running which nvcc in your terminal:

For instance, if the output is /usr/local/cuda/bin/nvcc, then:

export CUDA_HOME=/usr/local/cuda

Installation:

  1. Clone the GroundingDINO repository from GitHub.

git clone https://github.com/IDEA-Research/GroundingDINO.git

  2. Change the current directory to the GroundingDINO folder.

cd GroundingDINO/

  3. Install the required dependencies in the current directory.

pip install -e .

  4. Download pre-trained model weights.

mkdir weights
cd weights
wget -q https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
cd ..
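As an optional sanity check (not one of the original steps), you can verify that the package and its compiled extension import cleanly; the extension module name groundingdino._C is an assumption inferred from the NameError mentioned above.

python -c "import groundingdino; from groundingdino import _C; print('ok')"

If this fails with the _C error, revisit the CUDA_HOME setup and redo the installation steps.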

โ–ถ๏ธ Demo

Check your GPU ID (only if you're using a GPU)

nvidia-smi

Replace {GPU ID}, image_you_want_to_detect.jpg, and "dir you want to save the output" with appropriate values in the following command:

CUDA_VISIBLE_DEVICES={GPU ID} python demo/inference_on_a_image.py \
-c groundingdino/config/GroundingDINO_SwinT_OGC.py \
-p weights/groundingdino_swint_ogc.pth \
-i image_you_want_to_detect.jpg \
-o "dir you want to save the output" \
-t "chair"
 [--cpu-only] # add this flag to run in CPU-only mode

If you would like to specify the phrases to detect, here is a demo:

CUDA_VISIBLE_DEVICES={GPU ID} python demo/inference_on_a_image.py \
-c groundingdino/config/GroundingDINO_SwinT_OGC.py \
-p ./groundingdino_swint_ogc.pth \
-i .asset/cat_dog.jpeg \
-o logs/1111 \
-t "There is a cat and a dog in the image ." \
--token_spans "[[[9, 10], [11, 14]], [[19, 20], [21, 24]]]"
 [--cpu-only] # add this flag to run in CPU-only mode

The token_spans argument specifies the start and end character positions of phrases. For example, the first phrase is [[9, 10], [11, 14]]: "There is a cat and a dog in the image ."[9:10] is 'a', and "There is a cat and a dog in the image ."[11:14] is 'cat'. Hence it refers to the phrase a cat. Similarly, [[19, 20], [21, 24]] refers to the phrase a dog.
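To sanity-check span indices before passing them to --token_spans, you can slice the caption directly in Python (a small illustrative check, separate from the demo script):

caption = "There is a cat and a dog in the image ."
for span in [[[9, 10], [11, 14]], [[19, 20], [21, 24]]]:
    print(" ".join(caption[start:end] for start, end in span))
# prints: a cat
#         a dog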

See demo/inference_on_a_image.py for more details.

Running with Python:

from groundingdino.util.inference import load_model, load_image, predict, annotate
import cv2

model = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py", "weights/groundingdino_swint_ogc.pth")
IMAGE_PATH = "weights/dog-3.jpeg"
TEXT_PROMPT = "chair . person . dog ."  # separate category names with " . "
BOX_THRESHOLD = 0.35   # minimum best-token similarity for a box to be kept
TEXT_THRESHOLD = 0.25  # minimum similarity for a token to appear in a label

image_source, image = load_image(IMAGE_PATH)

boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption=TEXT_PROMPT,
    box_threshold=BOX_THRESHOLD,
    text_threshold=TEXT_THRESHOLD
)

annotated_frame = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated_image.jpg", annotated_frame)
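The returned boxes in DETR-style detectors are typically normalized (cx, cy, w, h); assuming that format holds here, a hypothetical helper for converting them to pixel-space (x1, y1, x2, y2) could look like this:

import torch

def to_pixel_xyxy(boxes_cxcywh: torch.Tensor, height: int, width: int) -> torch.Tensor:
    # Hypothetical helper: assumes boxes are normalized (cx, cy, w, h).
    cx, cy, w, h = boxes_cxcywh.unbind(-1)
    xyxy = torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)
    return xyxy * torch.tensor([width, height, width, height], dtype=xyxy.dtype)

For example, to_pixel_xyxy(boxes, *image_source.shape[:2]) if image_source is an H x W x C array.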

Web UI

We also provide demo code to integrate Grounding DINO with the Gradio Web UI. See demo/gradio_app.py for more details.

Notebooks

COCO Zero-shot Evaluations

We provide an example to evaluate Grounding DINO's zero-shot performance on COCO. The result should be 48.5 AP.

CUDA_VISIBLE_DEVICES=0 \
python demo/test_ap_on_coco.py \
 -c groundingdino/config/GroundingDINO_SwinT_OGC.py \
 -p weights/groundingdino_swint_ogc.pth \
 --anno_path /path/to/annotations/instances_val2017.json \
 --image_dir /path/to/val2017
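The placeholder paths above follow the standard COCO 2017 layout, for example:

coco/
  annotations/
    instances_val2017.json
  val2017/
    *.jpg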

🧳 Checkpoints

# | name | backbone | Data | box AP on COCO | Checkpoint | Config
1 | GroundingDINO-T | Swin-T | O365, GoldG, Cap4M | 48.4 (zero-shot) / 57.2 (fine-tune) | GitHub link / HF link | link
2 | GroundingDINO-B | Swin-B | COCO, O365, GoldG, Cap4M, OpenImage, ODinW-35, RefCOCO | 56.7 | GitHub link / HF link | link

๐ŸŽ–๏ธ Results

  • COCO Object Detection Results
  • ODinW Object Detection Results
  • Marrying Grounding DINO with Stable Diffusion for Image Editing. See our example notebook for more details.
  • Marrying Grounding DINO with GLIGEN for more Detailed Image Editing. See our example notebook for more details.

🦕 Model: Grounding DINO

The model consists of a text backbone, an image backbone, a feature enhancer, a language-guided query selection module, and a cross-modality decoder.

[Figure: model architecture]
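To make the component flow concrete, here is a purely illustrative toy module; every class, name, and dimension below is made up for illustration and is not the repository's actual implementation.

import torch
import torch.nn as nn

class ToyGroundingDINO(nn.Module):
    """Toy sketch of the component flow described above; not the real model."""
    def __init__(self, d=256, num_queries=900, vocab=32):
        super().__init__()
        self.image_backbone = nn.Linear(3, d)        # stand-in for the Swin image backbone
        self.text_backbone = nn.Embedding(vocab, d)  # stand-in for the BERT text backbone
        self.enhancer = nn.MultiheadAttention(d, 8, batch_first=True)  # feature enhancer
        self.decoder = nn.MultiheadAttention(d, 8, batch_first=True)   # cross-modality decoder
        self.box_head = nn.Linear(d, 4)
        self.num_queries = num_queries

    def forward(self, pixels, token_ids):
        img = self.image_backbone(pixels)        # (B, HW, d) flattened image features
        txt = self.text_backbone(token_ids)      # (B, T, d) text token features
        img, _ = self.enhancer(img, txt, txt)    # fuse text context into image features
        # Language-guided query selection: keep the image features most similar to the text.
        sim = (img @ txt.transpose(1, 2)).max(-1).values   # (B, HW) best token similarity
        k = min(self.num_queries, img.shape[1])
        idx = sim.topk(k, dim=1).indices
        queries = torch.gather(img, 1, idx.unsqueeze(-1).expand(-1, -1, img.shape[-1]))
        dec, _ = self.decoder(queries, txt, txt) # decode queries against the text
        boxes = self.box_head(dec).sigmoid()     # (B, k, 4) normalized boxes
        token_logits = dec @ txt.transpose(1, 2) # (B, k, T) per-token similarity scores
        return boxes, token_logits

boxes, token_logits = ToyGroundingDINO()(torch.rand(1, 1024, 3), torch.randint(0, 32, (1, 8)))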

โ™ฅ๏ธ Acknowledgement

Our model is related to DINO and GLIP. Thanks for their great work!

We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work is available at Awesome Detection Transformer. A new toolbox, detrex, is available as well.

Thanks Stable Diffusion and GLIGEN for their awesome models.

โœ’๏ธ Citation

If you find our work helpful for your research, please consider citing the following BibTeX entry.

@article{liu2023grounding,
  title={Grounding dino: Marrying dino with grounded pre-training for open-set object detection},
  author={Liu, Shilong and Zeng, Zhaoyang and Ren, Tianhe and Li, Feng and Zhang, Hao and Yang, Jie and Li, Chunyuan and Yang, Jianwei and Su, Hang and Zhu, Jun and others},
  journal={arXiv preprint arXiv:2303.05499},
  year={2023}
}
