  • Stars: 1,557
  • Rank: 30,051 (Top 0.6%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 1 year ago
  • Updated: over 1 year ago


Repository Details

WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023)

WebGLM: Towards An Efficient Web-enhanced Question Answering System with Human Preferences

📃 Paper (KDD'23) • 🌐 中文 README • 🤗 HF Repo [WebGLM-10B] [WebGLM-2B] • 📚 Dataset [WebGLM-QA]

This is the official implementation of WebGLM. If you find our open-sourced efforts useful, please 🌟 star the repo to encourage our ongoing development!

[Click to watch the demo!]

Read this in 中文.

Update

[2023/06/25] We release ChatGLM2-6B, an updated version of ChatGLM-6B, which introduces several new features:

  1. Stronger Performance: we have fully upgraded ChatGLM2-6B. It uses the hybrid objective function of GLM and has undergone pre-training on 1.4T bilingual tokens plus human preference alignment training. Evaluation results show that, compared to the first-generation model, ChatGLM2-6B achieves substantial performance improvements on datasets such as MMLU (+23%), CEval (+33%), GSM8K (+571%), and BBH (+60%), showing strong competitiveness among models of the same size.
  2. Longer Context: based on the FlashAttention technique, we have extended the context length of the base model from 2K in ChatGLM-6B to 32K, and trained with a context length of 8K during dialogue alignment, allowing for more rounds of dialogue. However, the current version of ChatGLM2-6B has limited understanding of single-turn ultra-long documents, which we will focus on optimizing in future iterations.
  3. More Efficient Inference: based on the Multi-Query Attention technique, ChatGLM2-6B offers faster inference and lower GPU memory usage: under the official implementation, inference speed is 42% faster than the first generation, and under INT4 quantization, the dialogue length supported by 6GB of GPU memory increases from 1K to 8K tokens.

For more details, please refer to ChatGLM2-6B.
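
For a quick sense of how ChatGLM2-6B is used, it can be loaded through Hugging Face transformers roughly as follows (a minimal sketch following the usage documented in the ChatGLM2-6B repo; a CUDA GPU with sufficient memory is assumed):

# Minimal sketch: chatting with ChatGLM2-6B via Hugging Face transformers.
# Assumes a CUDA GPU; see the ChatGLM2-6B repo for quantized variants.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# chat() comes from the model's custom code (hence trust_remote_code=True).
response, history = model.chat(tokenizer, "Hello", history=[])
print(response)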

Overview


WebGLM aspires to provide an efficient and cost-effective web-enhanced question-answering system using the 10-billion-parameter General Language Model (GLM). It aims to improve real-world application deployment by integrating web search and retrieval capabilities into the pre-trained language model.

Features

  • LLM-augmented Retriever: Enhances the retrieval of relevant web content to better aid in answering questions accurately.
  • Bootstrapped Generator: Generates human-like responses to questions, leveraging the power of the GLM to provide refined answers.
  • Human Preference-aware Scorer: Estimates the quality of generated responses by prioritizing human preferences, ensuring the system produces useful and engaging content.
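
Conceptually, these three components form a pipeline: the retriever fetches and ranks web passages, the generator produces a reference-grounded answer, and the scorer selects among candidate answers. The sketch below illustrates that flow only; every function in it is an illustrative stand-in, not the repo's actual API.

# A minimal, self-contained sketch of the WebGLM pipeline flow.
# All functions are illustrative stand-ins, not the repo's API.
from typing import List

def search(question: str) -> List[str]:
    # Stand-in for web search (SerpAPI, or Bing via Playwright).
    return ["passage about " + question]

def rank_references(question: str, pages: List[str], k: int = 3) -> List[str]:
    # Stand-in for the LLM-augmented retriever: keep the top-k passages.
    return pages[:k]

def generate_answers(question: str, refs: List[str]) -> List[str]:
    # Stand-in for the bootstrapped generator: produce candidate answers.
    return [f"Answer to '{question}' citing {len(refs)} reference(s)."]

def score(question: str, answer: str) -> float:
    # Stand-in for the human preference-aware scorer.
    return float(len(answer))

def answer_question(question: str) -> str:
    refs = rank_references(question, search(question))
    candidates = generate_answers(question, refs)
    return max(candidates, key=lambda c: score(question, c))

print(answer_question("Why is the sky blue?"))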

News

  • [2023-06-24] We now support searching via Bing!
  • [2023-06-14] We have released the code and the paper of WebGLM!

Preparation

Prepare Code and Environments

Clone this repo and install the Python requirements.

pip install -r requirements.txt

Install Node.js.

apt install nodejs # If you use Ubuntu

Install Playwright dependencies.

playwright install

If the browser environments are not installed on your host, you will need to install them. Don't worry: Playwright will print instructions the first time you run it.

Prepare SerpAPI Key

During the search process, we use SerpAPI to fetch search results. You need to get a SerpAPI key from here.

Then, set the environment variable SERPAPI_KEY to your key.

export SERPAPI_KEY="YOUR KEY"

Alternatively, you can use Bing search with a local browser environment (Playwright). Add --searcher bing to the start command to use Bing search. (See Run as Command Line Interface and Run as Web Service.)
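
For reference, a raw SerpAPI query from Python looks roughly like the sketch below (it calls SerpAPI's public search.json endpoint directly; WebGLM's own search module may wrap this differently):

import os
import requests

# Minimal sketch of a direct SerpAPI call, independent of WebGLM's code.
params = {
    "q": "What is WebGLM?",
    "api_key": os.environ["SERPAPI_KEY"],  # set via `export SERPAPI_KEY="YOUR KEY"`
}
resp = requests.get("https://serpapi.com/search.json", params=params, timeout=30)
resp.raise_for_status()
for result in resp.json().get("organic_results", []):
    print(result.get("title"), "->", result.get("link"))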

Prepare Retriever Checkpoint

Download the checkpoint from Tsinghua Cloud by running the command below.

You can specify where to save the checkpoint with --save SAVE_PATH.

python download.py retriever-pretrained-checkpoint

Try WebGLM

Before you run the code, make sure your device has enough free disk space.

Export Environment Variables

Export the environment variable WEBGLM_RETRIEVER_CKPT to the path of the retriever checkpoint. If you downloaded the retriever checkpoint to the default path, you can simply run the command below.

export WEBGLM_RETRIEVER_CKPT=./download/retriever-pretrained-checkpoint
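
To sanity-check the setup before launching, you can verify that the variable points at an existing directory (a trivial check, not part of the repo):

import os

# Fails loudly if WEBGLM_RETRIEVER_CKPT is unset or points nowhere.
ckpt = os.environ.get("WEBGLM_RETRIEVER_CKPT")
assert ckpt and os.path.isdir(ckpt), "WEBGLM_RETRIEVER_CKPT is unset or not a directory"
print("Retriever checkpoint found at", ckpt)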

Run as Command Line Interface

You can try the WebGLM-2B model by running:

python cli_demo.py -w THUDM/WebGLM-2B

Or run the WebGLM-10B model directly:

python cli_demo.py

If you want to use Bing search instead of SerpAPI, add --searcher bing to the command line, for example:

python cli_demo.py -w THUDM/WebGLM-2B --searcher bing

Run as Web Service

Run web_demo.py with the same arguments as cli_demo.py to start a web service. For example, you can try the WebGLM-2B model with Bing search by running:

python web_demo.py -w THUDM/WebGLM-2B --searcher bing

Train WebGLM

Train Generator

Prepare Data (WebGLM-QA)

Download the training data (WebGLM-QA) from Tsinghua Cloud by running the command below.

python download.py generator-training-data

This will automatically download all the data and preprocess it into the seq2seq form, ready for immediate use, in ./download.
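
To peek at a few preprocessed examples, something like the snippet below works; note that it assumes the data lands as JSON-lines files under ./download, and the exact file layout and field names are assumptions to check against what you actually see on disk:

import glob
import json

# Assumption: preprocessed seq2seq data is stored as JSON lines under ./download.
for path in sorted(glob.glob("./download/**/*.json*", recursive=True))[:1]:
    with open(path) as f:
        for i, line in enumerate(f):
            example = json.loads(line)
            print(path, "->", list(example.keys()))
            if i >= 2:  # only inspect the first few examples
                break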

Training

Please refer to the GLM repo for seq2seq training.

Train Retriever

Prepare Data

Download the training data from Tsinghua Cloud by running the command below.

python download.py retriever-training-data

Training

Run the following command to train the retriever. If you downloaded the retriever training data to the default path, you can simply run:

python train_retriever.py --train_data_dir ./download/retriever-training-data

Evaluation

You can reproduce our results on TriviaQA, WebQuestions, and NQ Open. Taking TriviaQA as an example, simply run the command below to start the experiment:

bash scripts/triviaqa.sh

Real Application Cases

Here are some examples of WebGLM in real application scenarios.

When will the COVID-19 disappear?

How to balance career and hobbies?

FL Studio and Cubase, which is better?

Is attention better than CNN?

How to survive in the first-tier cities without a high-salary work?

What do you think of version 3.5 of Genshin Impact?

transformers are originated in NLP, but why they can be applied in CV?

Who proposed Music Transformer? How does it work?

What is the backbone of Toolformer?

License

This repository is licensed under the Apache-2.0 License. The use of model weights is subject to the Model_License. All open-sourced data is for research purposes only.

Citation

If you use this code for your research, please cite our paper.

@misc{liu2023webglm,
      title={WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences},
      author={Xiao Liu and Hanyu Lai and Hao Yu and Yifan Xu and Aohan Zeng and Zhengxiao Du and Peng Zhang and Yuxiao Dong and Jie Tang},
      year={2023},
      eprint={2306.07906},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

This repo is simplified for easier deployment.

More Repositories

1

ChatGLM-6B

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
Python
40,459
star
2

ChatGLM2-6B

ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型
Python
15,702
star
3

ChatGLM3

ChatGLM3 series: Open Bilingual Chat LLMs | 开源双语对话语言模型
Python
13,366
star
4

CodeGeeX

CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Python
8,150
star
5

CogVideo

Text- and image-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
Python
7,976
star
6

GLM-130B

GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Python
7,653
star
7

CodeGeeX2

CodeGeeX2: A More Powerful Multilingual Code Generation Model
Python
7,622
star
8

CogVLM

A state-of-the-art-level open visual language model | 多模态预训练模型
Python
5,913
star
9

GLM-4

GLM-4 series: Open Multilingual Multimodal Chat LMs | 开源多语言多模态对话模型
Python
4,826
star
10

VisualGLM-6B

Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
Python
4,076
star
11

GLM

GLM (General Language Model)
Python
3,168
star
12

AgentBench

A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Python
2,144
star
13

CogVLM2

GPT4V-level open-source multi-modal model based on Llama3-8B
Python
2,018
star
14

P-tuning-v2

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
Python
1,968
star
15

CogDL

CogDL: A Comprehensive Library for Graph Deep Learning (WWW 2023)
Python
1,720
star
16

CogView

Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers".
Python
1,691
star
17

AgentTuning

AgentTuning: Enabling Generalized Agent Abilities for LLMs
Python
1,339
star
18

CodeGeeX4

CodeGeeX4-ALL-9B, a versatile model for all AI software development scenarios, including code completion, code interpreter, web search, function calling, repository-level Q&A and much more.
Python
1,271
star
19

ImageReward

[NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation
Python
1,117
star
20

LongWriter

LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
Python
1,076
star
21

SwissArmyTransformer

SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
Python
966
star
22

CogView2

official code repo for paper "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"
Python
944
star
23

P-tuning

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
Python
915
star
24

LongBench

[ACL 2024] LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
Python
629
star
25

AutoWebGLM

An LLM-based Web Navigating Agent (KDD'24)
Python
584
star
26

GATNE

Source code and dataset for KDD 2019 paper "Representation Learning for Attributed Multiplex Heterogeneous Network"
Python
522
star
27

GraphMAE

GraphMAE: Self-Supervised Masked Graph Autoencoders in KDD'22
Python
462
star
28

CogQA

Source code and dataset for ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale"
Python
456
star
29

Inf-DiT

Official implementation of Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer
Python
366
star
30

GCC

GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training @ KDD 2020
Python
322
star
31

MathGLM

Official Pytorch Implementation for MathGLM
Python
316
star
32

HGB

Revisiting, benchmarking, and refining Heterogeneous Graph Neural Networks.
Python
301
star
33

AlignBench

A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024)
Python
295
star
34

ComiRec

Source code and dataset for KDD 2020 paper "Controllable Multi-Interest Framework for Recommendation"
Python
278
star
35

LongCite

LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
Python
272
star
36

RelayDiffusion

The official implementation of "Relay Diffusion: Unifying diffusion process across resolutions for image synthesis" [ICLR 2024 Spotlight]
Python
262
star
37

KOBE

Towards Knowledge-Based Personalized Product Description Generation in E-commerce @ KDD 2019
Python
237
star
38

NLP4Rec-Papers

Paper list of NLP for recommender systems
225
star
39

ProNE

Source code and dataset for IJCAI 2019 paper "ProNE: Fast and Scalable Network Representation Learning"
Python
225
star
40

Chinese-Transformer-XL

Python
218
star
41

GRAND

Source code and dataset of the NeurIPS 2020 paper "Graph Random Neural Network for Semi-Supervised Learning on Graphs"
Python
203
star
42

LongAlign

[EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs
Python
199
star
43

icetk

A unified tokenization tool for Images, Chinese and English.
Python
150
star
44

CogCoM

Jupyter Notebook
146
star
45

ReST-MCTS

ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024)
Python
146
star
46

KBRD

Towards Knowledge-Based Recommender Dialog System @ EMNLP 2019
Python
134
star
47

GraphMAE2

GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner in WWW'23
Python
133
star
48

iPrompt

Code, Data and Demo for Paper: Controllable Generation from Pre-trained Language Models via Inverse Prompting
Python
121
star
49

ProteinLM

Protein Language Model
Python
111
star
50

MCNS

Source code and dataset for KDD 2020 paper "Understanding Negative Sampling in Graph Representation Learning"
Python
111
star
51

VisualAgentBench

Towards Large Multimodal Models as Visual Foundation Agents
Python
94
star
52

CogView3

Text-to-image generation: CogView3-Plus and CogView3 (ECCV 2024)
Python
93
star
53

grb

Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Python
91
star
54

GraphSGAN

Implementation of "GraphSGAN", a GAN-based semi-supervised learning algorithm for graph data.
Python
85
star
55

kgTransformer

kgTransformer: pre-training for reasoning over complex KG queries (KDD 22)
Python
83
star
56

ScenarioMeta

Source code and dataset for KDD 2019 paper "Sequential Scenario-Specific Meta Learner for Online Recommendation"
Python
80
star
57

OAG-BERT

A heterogeneous entity-augmented academic language model based on Open Academic Graph (OAG)
76
star
58

ChatGLM-Math

Python
75
star
59

CogKR

Source code and dataset for paper "Cognitive Knowledge Graph Reasoning for One-shot Relational Learning"
Python
71
star
60

SelfKG

Codes for WWW2022 accepted paper: SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs
Python
67
star
61

FewNLU

Python
65
star
62

SciGLM

SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning (NeurIPS D&B Track 2024)
Python
62
star
63

Multilingual-GLM

The multilingual variant of GLM, a general language model trained with autoregressive blank infilling objective
Python
62
star
64

XDAI

Python
61
star
65

CogAgent

59
star
66

OAG

Source code and dataset for KDD 2019 paper "OAG: Toward Linking Large-scale Heterogeneous Entity Graphs"
Python
59
star
67

NaturalCodeBench

Python
54
star
68

LVBench

LVBench: An Extreme Long Video Understanding Benchmark
Python
52
star
69

AutoRE

Python
45
star
70

Graph-Reading-Group

Daily reading group on graphs at KEG
44
star
71

SCR

SCR: Training Graph Neural Networks with Consistency Regularization
Python
37
star
72

WhoIsWho

KDD'23 Web-Scale Academic Name Disambiguation: the WhoIsWho Benchmark, Leaderboard, and Toolkit
Python
34
star
73

FastLDM

Inference speed-up for stable-diffusion (ldm) with TensorRT.
Python
34
star
74

GraphCAD

TKDE'22-GraphCAD: https://arxiv.org/pdf/2108.07516.pdf
Python
30
star
75

GRAND-plus

Code and dataset for paper "GRAND+: Scalable Graph Random Neural Networks"
Python
30
star
76

KDD-Industrial-Papers

A list of recent industrial papers in KDD'16–'18
28
star
77

ApeGNN

ApeGNN: Node-Wise Adaptive Aggregation in GNNs for Recommendation (WWW'23)
Python
23
star
78

GLM-iprompt

Apply iPrompt on GLM with innovative new methods. Currently supports Chinese QA, English QA and Chinese poem generation.
Python
21
star
79

GIAAD

Graph Injection Adversarial Attack & Defense Dataset , extracted from KDD CUP 2020 ML2 Track
Python
21
star
80

Tsinghua-ML-Course

Course Materials for ML Course at Tsinghua
HTML
21
star
81

HOSMEL

A task relevant entity linking toolkit
Python
20
star
82

Self-Contrast

Extensive Self-Contrast Enables Feedback-Free Language Model Alignment
Python
19
star
83

RecDCL

RecDCL: Dual Contrastive Learning for Recommendation (WWW'24, Oral)
Python
19
star
84

tdgia

Code for the paper "TDGIA: Effective Injection Attacks on Graph Neural Networks" (KDD 2021, research track)
Python
18
star
85

BatchSampler

The source code for BatchSampler that accepted in KDD'23
Python
18
star
86

MRT

MRT: Tracing the Evolution of Scientific Publications (TKDE 2021)
16
star
87

LargeScale

Python
15
star
88

eTrust

Source code and dataset for the TKDE 2019 paper "Trust Relationship Prediction in Alibaba E-Commerce Platform"
C++
15
star
89

MSAGPT

MSAGPT
Python
15
star
90

whoiswho-top-solutions

Python
14
star
91

paper-source-trace

Python
14
star
92

Efficient-Head-Finetuning

Source code for EMNLP2022 long paper: Parameter-Efficient Tuning Makes a Good Classification Head
Python
13
star
93

IGB

Source code and dataset for IJCAI 2022 paper "Rethinking the Setting of Semi-supervised Learning on Graphs"
Python
10
star
94

BattleAgentBench

Python
9
star
95

GraphAlign

GraphAlign: Pretraining One Graph Neural Network on Multiple Graphs via Feature Alignment
Python
8
star
96

APAR

APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding
Python
8
star
97

scholar-profiling

Jupyter Notebook
7
star
98

citation-prediction

Python
7
star
99

OpenWebAgent

A convenient framework for developing LLM- and LMM-based web agents.
JavaScript
6
star
100

OAG-AQA

Python
6
star