  • Stars: 146
  • Rank: 252,769 (Top 5%)
  • Language: Jupyter Notebook
  • License: Other
  • Created: 10 months ago
  • Updated: 5 months ago


Repository Details

CogCoM

🆕 2024/2/26: Release the chat model CogCoM-chat-17b.
🆕 2024/2/26: Release the grounding model CogCoM-grounding-17b.
🆕 2024/2/4: Release the base model CogCoM-base-17b.

🌟 Jump to detailed introduction: Introduction to CogCoM.

📖 Paper: CogCoM: Train Large Vision-Language Models Diving into Details through Chain of Manipulations

CogCoM is a general vision-language model (VLM) endowed with a Chain of Manipulations (CoM) mechanism that enables VLMs to perform multi-turn evidential visual reasoning by actively manipulating the input image. We now release CogCoM-base-17b, CogCoM-grounding-17b, and CogCoM-chat-17b, a family of models with 10 billion visual parameters and 7 billion language parameters, each trained on a generalist corpus incorporating a fusion of four capability types of data (instruction-following, OCR, detailed captioning, and CoM).

🌐 Web Demo is coming soon.


Release

  • 2024/2/26 CogCoM-chat-17b and CogCoM-grounding-17b released.
  • 2024/2/4 CogCoM-base-17b released.

Get Started

Option 1: Inference Using Web Demo.

  • You can use the local Gradio-based code we provide for a GUI demo; the hosted web demo is coming soon.

Option 2: Deploy CogCoM by yourself

We support two interfaces for model inference: a CLI and a web demo. If you want to use the model in your own Python code, you can easily adapt the CLI scripts to your use case.

First, we need to install the dependencies.

# CUDA >= 11.8
pip install -r requirements.txt
python -m spacy download en_core_web_sm
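
After installing, you can optionally run a quick sanity check. This is a minimal sketch, not part of the repository's scripts; it assumes requirements.txt installs a CUDA-enabled PyTorch build (implied by the CUDA >= 11.8 note above) and that the spaCy model above downloaded correctly.

# optional environment check (assumes torch and spacy from the steps above)
import torch
import spacy

print("torch:", torch.__version__, "| CUDA:", torch.version.cuda, "| available:", torch.cuda.is_available())
spacy.load("en_core_web_sm")  # raises OSError if the model above was not downloaded
print("spaCy model en_core_web_sm loaded")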

All inference code is located under the demo/ directory. Please switch to this directory before proceeding.

Situation 2.1 CLI (SAT version)

Run CLI demo via:

python cli_demo_sat.py --from_pretrained cogcom-base-17b --local_tokenizer path/to/tokenizer --bf16 --english

The program will automatically download the SAT model and run an interactive command-line session (you can simply use the vicuna-7b-v1.5 tokenizer). Type an instruction and press Enter to generate a reply; enter clear to clear the conversation history, and stop to exit the program.

We also support model-parallel inference, which splits the model across multiple (2/4/8) GPUs. --nproc-per-node=[n] in the following command controls the number of GPUs used.

torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo_sat.py --from_pretrained cogcom-base-17b --local_tokenizer path/to/tokenizer --bf16
  • If you want to manually download the weights, you can replace the path after --from_pretrained with the model path.

  • Our model supports SAT's 4-bit quantization and 8-bit quantization. You can change --bf16 to --fp16, or --fp16 --quant 4, or --fp16 --quant 8.

    For example

    python cli_demo_sat.py --from_pretrained cogcom-base-17b --fp16 --quant 8
    # In the SAT version, --quant should be used together with --fp16
  • The program provides the following hyperparameters to control the generation process:

    usage: cli_demo_sat.py [-h] [--max_length MAX_LENGTH] [--top_p TOP_P] [--top_k TOP_K] [--temperature TEMPERATURE]
    
    optional arguments:
        -h, --help                    show this help message and exit
        --max_length MAX_LENGTH       max length of the total sequence
        --top_p TOP_P                 top p for nucleus sampling
        --top_k TOP_K                 top k for top k sampling
        --temperature TEMPERATURE     temperature for sampling
    

Situation 2.2 CLI (Huggingface version)

Run CLI demo via:

# CogCoM
python cli_demo_hf.py --from_pretrained THUDM/cogcom-base-17b-hf --local_tokenizer path/to/tokenizer --bf16 --english
  • If you want to manually download the weights, you can replace the path after --from_pretrained with the model path.

  • You can change --bf16 to --fp16, or --quant 4. For example, our model supports Huggingface's 4-bit quantization:

    python cli_demo_hf.py --from_pretrained THUDM/cogcom-base-17b-hf --quant 4
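
If you would rather call the model from your own Python code than through cli_demo_hf.py, a minimal loading sketch is shown below. It assumes the Hugging Face checkpoint name from the command above and a local vicuna-7b-v1.5 tokenizer; the conversation and image preprocessing interface is defined by the checkpoint's remote code, so see cli_demo_hf.py for the actual inference calls.

# Minimal loading sketch (see cli_demo_hf.py for the full inference pipeline).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/vicuna-7b-v1.5")  # same tokenizer as --local_tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogcom-base-17b-hf",     # or a locally downloaded model path
    torch_dtype=torch.bfloat16,     # corresponds to the --bf16 flag
    trust_remote_code=True,         # the model class ships with the checkpoint
).eval().to("cuda")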

Situation 2.3 Web Demo

We also offer a local web demo based on Gradio. First, install Gradio by running pip install gradio. Then run web_demo.py from the demo/ directory:

python web_demo.py --from_pretrained cogcom-base-17b --local_tokenizer path/to/tokenizer --bf16 --english

A screenshot of the web demo GUI is included in the repository.

Option 3: Finetuning CogCoM

You may want to use CogCoM for your own task, which may require a different output style or domain knowledge. All finetuning code is located in the finetune.sh and finetune.py files.

Hardware requirement

  • Model Inference:

    For INT4 quantization: 1 * RTX 3090 (24G)

    For FP16: 1 * A100 (80G) or 2 * RTX 3090 (24G)

  • Finetuning:

    For FP16: 4 * A100 (80G) [Recommended] or 8 * RTX 3090 (24G).
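
As a rough sanity check on these numbers (a back-of-the-envelope estimate, not a measurement), the 17B parameters alone occupy about 2 bytes each in FP16/BF16 and about half a byte in INT4; activations, the KV cache, and optimizer states for finetuning add more on top.

# Rough weight-memory estimate for a 17B-parameter model (weights only).
params = 17e9
print(f"FP16/BF16: {params * 2 / 1024**3:.0f} GiB")   # ~32 GiB -> 1 x A100 (80G) or 2 x RTX 3090 (24G)
print(f"INT4:      {params * 0.5 / 1024**3:.0f} GiB") # ~8 GiB  -> fits a single RTX 3090 (24G)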

Model checkpoints

If you run the demo/cli_demo*.py from the code repository, it will automatically download SAT or Hugging Face weights. Alternatively, you can choose to manually download the necessary weights.

  • CogCoM

    Model name            | Input resolution | Introduction                            | Huggingface model | SAT model
    cogcom-base-17b       | 490              | Supports grounding, OCR, and CoM.       | coming soon       | link
    cogcom-grounding-17b  | 490              | Supports grounding, OCR, and CoM.       | coming soon       | link
    cogcom-chat-17b       | 490              | Supports chat, grounding, OCR, and CoM. | coming soon       | link

Introduction to CogCoM

  • CogCoM is a general open-source visual language model (VLM) equipped with a Chain of Manipulations (CoM) mechanism. CogCoM-17B has 10 billion vision parameters and 7 billion language parameters.
  • CogCoM-17B relies on an efficient CoM data production framework that engages a capable LLM to provide basic solving steps, adopts reliable visual tools to obtain visual contents, and then acquires feasible reasoning paths via traversal.
  • CogCoM-17B is trained on a fusion of data covering four capability types (instruction-following, OCR, detailed captioning, and CoM), so it can solve general multimodal tasks and perform evidential visual reasoning that permits users to trace error causes along interpretable reasoning paths.
  • CogCoM devises a memory-based, compatible VLM architecture that enables VLMs to actively manipulate the input image (e.g., grounding, cropping, zooming in) and re-feed the processed image in a multi-turn, multi-image manner for rigorous reasoning.

Results on GQA, TallyVQA, TextVQA, and ST-VQA:

    Method       | GQA   | TallyVQA-s | TallyVQA-c | TextVQA | ST-VQA
    Flamingo     | -     | -          | -          | 54.1    | -
    GIT          | -     | -          | -          | 59.8    | -
    GIT2         | -     | -          | -          | 67.3    | -
    BLIP-2       | 44.7* | -          | -          | -       | 21.7
    InstructBLIP | 49.5* | -          | -          | -       | 50.7*
    Qwen-VL      | 49.5* | -          | -          | -       | 50.7*
    CogCoM       | 71.7  | 84.0       | 70.1       | 71.1    | 70.0

Results on grounding benchmarks:

    Method                      | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg val | RefCOCOg test
    CogCoM-grounding-generalist | 92.34       | 94.57         | 89.15         | 88.19        | 92.80          | 82.08          | 89.32        | 90.45

Examples

  • CogCoM performs evidential visual reasoning for detail recognition, reading time, understanding charts, counting objects, and reading text.



  • CogCoM demonstrates flexible capabilities for adapting to different multimodal scenarios, including evidential visual reasoning, visual grounding, grounded captioning, image captioning, multiple choice, and detailed captioning.

Cookbook

Task Prompts

  1. General Multi-Round Dialogue: Say whatever you want.

  2. Chain of Manipulations: Explicitly launching CoM reasoning.

    • We randomly add launching prompts to the CoM chains for solving meticulous visual problems, so you can explicitly make CogCoM run with the CoM mechanism by prepending the following launching prompt (we randomly generate numerous launching prompts for flexibility; see com_dataset.py for details):
        Please solve the problem gradually via a chain of manipulations, where in each step you can selectively adopt one of the following manipulations GROUNDING(a phrase)->boxes, OCR(an image or a region)->texts, CROP_AND_ZOOMIN(a region on given image)->new_image, CALCULATE(a computable target)->numbers, or invent a new manipulation, if that seems helpful. {QUESTION}
  3. Visual Grounding. Our model is compatible with the grounding instructions from MultiInstruct and CogVLM; we provide the basic usage of three functionalities here:

    • Visual Grounding (VG): Returning grounding coordinates (bounding box) based on the description of objects. Use any template from instruction template. For example (replacing <expr> with the object's description):

      "Find the region in image that "<expr>" describes."

    • Grounded Captioning (GC): Providing a description based on bounding box coordinates. Use a template from instruction template. For example (replacing <objs> with the position coordinates),

      "Describe the content of [[086,540,400,760]] in the picture."

    • Image Description with Coordinates (IDC): Image description with grounding coordinates (bounding box). Use any template from the caption_with_box template as model input. For example:

      Can you provide a description of the image and include the coordinates [[x0,y0,x1,y1]] for each mentioned object?

Format of coordinates: The bounding box coordinates in the model's input and output use the format [[x1, y1, x2, y2]], with the origin at the top-left corner, the x-axis pointing right, and the y-axis pointing down. (x1, y1) and (x2, y2) are the top-left and bottom-right corners, respectively; the values are relative coordinates multiplied by 1000 (zero-padded to three digits).
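
As a small illustration of this format (a sketch, not part of the repository's code), the following parses boxes written this way from a model response and converts the 0-999 relative values back to pixel coordinates for a given image size:

import re

def boxes_to_pixels(text, image_width, image_height):
    """Parse [[x1,y1,x2,y2]] boxes (relative values scaled by 1000) and return pixel boxes."""
    pixel_boxes = []
    for x1, y1, x2, y2 in re.findall(r"\[\[(\d{3}),\s*(\d{3}),\s*(\d{3}),\s*(\d{3})\]\]", text):
        pixel_boxes.append((
            round(int(x1) / 1000 * image_width), round(int(y1) / 1000 * image_height),
            round(int(x2) / 1000 * image_width), round(int(y2) / 1000 * image_height),
        ))
    return pixel_boxes

# e.g. for a 1000x800 image:
print(boxes_to_pixels("The mug is at [[086,540,400,760]].", 1000, 800))
# -> [(86, 432, 400, 608)]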

FAQ

  • If you have trouble accessing huggingface.co, you can add --local_tokenizer /path/to/vicuna-7b-v1.5 to load the tokenizer locally.
  • When downloading a model with 🔨SAT, the model will be saved to the default location ~/.sat_models. You can change the default location by setting the environment variable SAT_HOME. For example, if you want to save the model to /path/to/my/models, run export SAT_HOME=/path/to/my/models before running the python command.

License

The code in this repository is open source under the Apache-2.0 license, while the use of the CogCoM model weights must comply with the Model License.

Citation & Acknowledgements

@article{qi2024cogcom,
  title={CogCoM: Train Large Vision-Language Models Diving into Details through Chain of Manipulations},
  author={Qi, Ji and Ding, Ming and Wang, Weihan and Bai, Yushi and Lv, Qingsong and Hong, Wenyi and Xu, Bin and Hou, Lei and Li, Juanzi and Dong, Yuxiao and Tang, Jie},
  journal={arXiv preprint arXiv:2402.04236},
  year={2024}
}

More Repositories

  1. ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型 (Python, 40,459 stars)
  2. ChatGLM2-6B - ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型 (Python, 15,702 stars)
  3. ChatGLM3 - ChatGLM3 series: Open Bilingual Chat LLMs | 开源双语对话语言模型 (Python, 13,366 stars)
  4. CodeGeeX - CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023) (Python, 8,150 stars)
  5. CogVideo - text and image to video generation: CogVideoX (2024) and CogVideo (ICLR 2023) (Python, 7,976 stars)
  6. GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) (Python, 7,653 stars)
  7. CodeGeeX2 - CodeGeeX2: A More Powerful Multilingual Code Generation Model (Python, 7,622 stars)
  8. CogVLM - a state-of-the-art-level open visual language model | 多模态预训练模型 (Python, 5,913 stars)
  9. GLM-4 - GLM-4 series: Open Multilingual Multimodal Chat LMs | 开源多语言多模态对话模型 (Python, 4,826 stars)
  10. VisualGLM-6B - Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型 (Python, 4,076 stars)
  11. GLM - GLM (General Language Model) (Python, 3,168 stars)
  12. AgentBench - A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) (Python, 2,144 stars)
  13. CogVLM2 - GPT4V-level open-source multi-modal model based on Llama3-8B (Python, 2,018 stars)
  14. P-tuning-v2 - An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks (Python, 1,968 stars)
  15. CogDL - CogDL: A Comprehensive Library for Graph Deep Learning (WWW 2023) (Python, 1,720 stars)
  16. CogView - Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers". (Python, 1,691 stars)
  17. WebGLM - WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023) (Python, 1,557 stars)
  18. AgentTuning - AgentTuning: Enabling Generalized Agent Abilities for LLMs (Python, 1,339 stars)
  19. CodeGeeX4 - CodeGeeX4-ALL-9B, a versatile model for all AI software development scenarios, including code completion, code interpreter, web search, function calling, repository-level Q&A and much more. (Python, 1,271 stars)
  20. ImageReward - [NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation (Python, 1,117 stars)
  21. LongWriter - LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs (Python, 1,076 stars)
  22. SwissArmyTransformer - SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants. (Python, 966 stars)
  23. CogView2 - official code repo for paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers" (Python, 944 stars)
  24. P-tuning - A novel method to tune language models. Codes and datasets for paper ``GPT understands, too''. (Python, 915 stars)
  25. LongBench - [ACL 2024] LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (Python, 629 stars)
  26. AutoWebGLM - An LLM-based Web Navigating Agent (KDD'24) (Python, 584 stars)
  27. GATNE - Source code and dataset for KDD 2019 paper "Representation Learning for Attributed Multiplex Heterogeneous Network" (Python, 522 stars)
  28. GraphMAE - GraphMAE: Self-Supervised Masked Graph Autoencoders in KDD'22 (Python, 462 stars)
  29. CogQA - Source code and dataset for ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale" (Python, 456 stars)
  30. Inf-DiT - Official implementation of Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer (Python, 366 stars)
  31. GCC - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training @ KDD 2020 (Python, 322 stars)
  32. MathGLM - Official Pytorch Implementation for MathGLM (Python, 316 stars)
  33. HGB - Revisiting, benchmarking, and refining Heterogeneous Graph Neural Networks. (Python, 301 stars)
  34. AlignBench - A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) (Python, 295 stars)
  35. ComiRec - Source code and dataset for KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation" (Python, 278 stars)
  36. LongCite - LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA (Python, 272 stars)
  37. RelayDiffusion - The official implementation of "Relay Diffusion: Unifying diffusion process across resolutions for image synthesis" [ICLR 2024 Spotlight] (Python, 262 stars)
  38. KOBE - Towards Knowledge-Based Personalized Product Description Generation in E-commerce @ KDD 2019 (Python, 237 stars)
  39. NLP4Rec-Papers - Paper list of NLP for recommender systems (225 stars)
  40. ProNE - Source code and dataset for IJCAI 2019 paper "ProNE: Fast and Scalable Network Representation Learning" (Python, 225 stars)
  41. Chinese-Transformer-XL (Python, 218 stars)
  42. GRAND - Source code and dataset of the NeurIPS 2020 paper "Graph Random Neural Network for Semi-Supervised Learning on Graphs" (Python, 203 stars)
  43. LongAlign - [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs (Python, 199 stars)
  44. icetk - A unified tokenization tool for Images, Chinese and English. (Python, 150 stars)
  45. ReST-MCTS - ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) (Python, 146 stars)
  46. KBRD - Towards Knowledge-Based Recommender Dialog System @ EMNLP 2019 (Python, 134 stars)
  47. GraphMAE2 - GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner in WWW'23 (Python, 133 stars)
  48. iPrompt - Code, Data and Demo for Paper: Controllable Generation from Pre-trained Language Models via Inverse Prompting (Python, 121 stars)
  49. ProteinLM - Protein Language Model (Python, 111 stars)
  50. MCNS - Source code and dataset for KDD 2020 paper "Understanding Negative Sampling in Graph Representation Learning" (Python, 111 stars)
  51. VisualAgentBench - Towards Large Multimodal Models as Visual Foundation Agents (Python, 94 stars)
  52. CogView3 - text to image to generation: CogView3-Plus and CogView3 (ECCV 2024) (Python, 93 stars)
  53. grb - Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning. (Python, 91 stars)
  54. GraphSGAN - Implementation of "GraphSGAN", a GAN-based semi-supervised learning algorithm for graph data. (Python, 85 stars)
  55. kgTransformer - kgTransformer: pre-training for reasoning over complex KG queries (KDD 22) (Python, 83 stars)
  56. ScenarioMeta - Source code and dataset for KDD 2019 paper "Sequential Scenario-Specific Meta Learner for Online Recommendation" (Python, 80 stars)
  57. OAG-BERT - A heterogeneous entity-augmented academic language model based on Open Academic Graph (OAG) (76 stars)
  58. ChatGLM-Math (Python, 75 stars)
  59. CogKR - Source code and dataset for paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning" (Python, 71 stars)
  60. SelfKG - Codes for WWW2022 accepted paper: SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs (Python, 67 stars)
  61. FewNLU (Python, 65 stars)
  62. SciGLM - SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (NeurIPS D&B Track 2024) (Python, 62 stars)
  63. Multilingual-GLM - The multilingual variant of GLM, a general language model trained with autoregressive blank infilling objective (Python, 62 stars)
  64. XDAI (Python, 61 stars)
  65. CogAgent (59 stars)
  66. OAG - Source code and dataset for KDD 2019 paper "OAG: Toward Linking Large-scale Heterogeneous Entity Graphs" (Python, 59 stars)
  67. NaturalCodeBench (Python, 54 stars)
  68. LVBench - LVBench: An Extreme Long Video Understanding Benchmark (Python, 52 stars)
  69. AutoRE (Python, 45 stars)
  70. Graph-Reading-Group - Daily reading group on graphs at KEG (44 stars)
  71. SCR - SCR: Training Graph Neural Networks with Consistency Regularization (Python, 37 stars)
  72. WhoIsWho - KDD'23 Web-Scale Academic Name Disambiguation: the WhoIsWho Benchmark, Leaderboard, and Toolkit (Python, 34 stars)
  73. FastLDM - Inference speed-up for stable-diffusion (ldm) with TensorRT. (Python, 34 stars)
  74. GraphCAD - TKDE'22-GraphCAD: https://arxiv.org/pdf/2108.07516.pdf (Python, 30 stars)
  75. GRAND-plus - Code and dataset for paper "GRAND+: Scalable Graph Random Neural Networks" (Python, 30 stars)
  76. KDD-Industrial-Papers - A list of recent industrial papers in KDD'16–'18 (28 stars)
  77. ApeGNN - ApeGNN: Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW'23) (Python, 23 stars)
  78. GLM-iprompt - Apply Iprompt on GLM with innovative new methods. Currently support Chinese QA, English QA and Chinese poem generation. (Python, 21 stars)
  79. GIAAD - Graph Injection Adversarial Attack & Defense Dataset, extracted from KDD CUP 2020 ML2 Track (Python, 21 stars)
  80. Tsinghua-ML-Course - Course Materials for ML Course at Tsinghua (HTML, 21 stars)
  81. HOSMEL - A task relevant entity linking toolkit (Python, 20 stars)
  82. Self-Contrast - Extensive Self-Contrast Enables Feedback-Free Language Model Alignment (Python, 19 stars)
  83. RecDCL - RecDCL: Dual Contrastive Learning for Recommendation (WWW'24, Oral) (Python, 19 stars)
  84. tdgia - code for paper TDGIA: Effective Injection Attacks on Graph Neural Networks (KDD 2021, research track) (Python, 18 stars)
  85. BatchSampler - The source code for BatchSampler that accepted in KDD'23 (Python, 18 stars)
  86. MRT - MRT: Tracing the Evolution of Scientific Publications (TKDE 2021) (16 stars)
  87. LargeScale (Python, 15 stars)
  88. eTrust - Source code and dataset for TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform" (C++, 15 stars)
  89. MSAGPT (Python, 15 stars)
  90. whoiswho-top-solutions (Python, 14 stars)
  91. paper-source-trace (Python, 14 stars)
  92. Efficient-Head-Finetuning - Source code for EMNLP2022 long paper: Parameter-Efficient Tuning Makes a Good Classification Head (Python, 13 stars)
  93. IGB - Source code and dataset for IJCAI 2022 paper "Rethinking the Setting of Semi-supervised Learning on Graphs" (Python, 10 stars)
  94. BattleAgentBench (Python, 9 stars)
  95. GraphAlign - GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment (Python, 8 stars)
  96. APAR - APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding (Python, 8 stars)
  97. scholar-profiling (Jupyter Notebook, 7 stars)
  98. citation-prediction (Python, 7 stars)
  99. OpenWebAgent - A convenient framework for developing LLM- and LMM-based web agents. (JavaScript, 6 stars)
  100. OAG-AQA (Python, 6 stars)