  • Stars: 414
  • Rank: 103,921 (Top 3%)
  • Language: Python
  • License: MIT License
  • Created: over 1 year ago
  • Updated: 7 months ago


Repository Details

[EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627

Enabling Large Language Models to Generate Text with Citations

ALCE
*: ALCE is pronounced /elk/, as alce is the Latin word for elk (Europe) or moose (North America).

This repository contains the code and data for the paper Enabling Large Language Models to Generate Text with Citations. In this paper, we propose ALCE, a benchmark for Automatic LLMs' Citation Evaluation. ALCE contains three datasets: ASQA, QAMPARI, and ELI5. We provide automatic evaluation code for LLM generations along three dimensions: fluency, correctness, and citation quality. This repository also includes code to reproduce the baselines in our paper.


Requirements

Please install the latest versions of PyTorch (torch), HuggingFace Transformers (transformers), HuggingFace Accelerate (accelerate), and the OpenAI API package (openai). This codebase is tested on torch==2.1.0.dev20230514+cu118, transformers==4.28.1, accelerate==0.17.1, and openai==0.27.4 with Python 3.9.7.
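
For a fresh environment, a minimal install sketch pinning the tested versions above (the tested torch build is a nightly; any recent stable release should also work):

pip install torch transformers==4.28.1 accelerate==0.17.1 openai==0.27.4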

Data

You can download datasets (along with retrieval results) by running the following command:

bash download_data.sh

All the data will be stored in data/. Our data include top-100 DPR/GTR retrieval results for ASQA and QAMPARI, and top-100 BM25 retrieval results for ELI5. We also provide reranked oracle retrieval results, where the top-5 passages achieve the same recall as the original top-100.
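
As a quick sanity check after downloading, you can peek at one example; the file name below is illustrative and the exact fields may differ:

python -c "import json; data = json.load(open('data/asqa_eval_gtr_top100.json')); print(len(data), list(data[0].keys()))"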

Retrieval

You can reproduce the passage retrieval step with the following command:

python retrieval.py --data {path/to/data} --retriever {bm25/gtr} --output_file {path/to/output}

There are additional packages required for the retrieval steps. Specifically, you need to install pyserini==0.21.0 (their GitHub repo is helpful) and sentence-transformers==2.2.2.
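
For example (note that pyserini additionally requires a Java 11 JDK for its Lucene backend):

pip install pyserini==0.21.0 sentence-transformers==2.2.2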

For the BM25 retrieval over Common Crawl using Sphere, you must first download the index from the Sphere repo and set the environment variable BM25_SPHERE_PATH to the path of the downloaded index. Specifically, you can use the following commands:

wget -P faiss_index https://dl.fbaipublicfiles.com/sphere/sphere_sparse_index.tar.gz
tar -xzvf faiss_index/sphere_sparse_index.tar.gz -C faiss_index
export BM25_SPHERE_PATH=$PWD/faiss_index

Note that given the large size of the corpus, this step is extremely expensive and time-consuming; we found that more CPU memory tends to speed it up.
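
Once the index is downloaded and the variable is set, a BM25 run looks like the following (the data and output file names are illustrative):

python retrieval.py --data data/eli5_eval_bm25_top100.json --retriever bm25 --output_file data/eli5_bm25_new.json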

For GTR, we first build an index from the DPR Wikipedia snapshot, which you can obtain using the download script from the DPR repo; then set the environment variable DPR_WIKI_TSV to the path of the TSV file. Specifically, you can use the following commands:

wget https://dl.fbaipublicfiles.com/dpr/wikipedia_split/psgs_w100.tsv.gz
gzip -d psgs_w100.tsv.gz
export DPR_WIKI_TSV=$PWD/psgs_w100.tsv

Then, set GTR_EMB to the path of the GTR embeddings of the Wikipedia corpus; running the retrieval script for the first time will automatically build and save the index. Building the dense index is expensive in GPU memory (we use 80GB GPUs for this) and time-consuming; the entire index takes about 31GB. If you find this step too expensive, you can also download a prebuilt index using:

wget https://huggingface.co/datasets/princeton-nlp/gtr-t5-xxl-wikipedia-psgs_w100-index/resolve/main/gtr_wikipedia_index.pkl
export GTR_EMB=$PWD/gtr_wikipedia_index.pkl
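
With the variables set, a GTR retrieval run looks like the following (illustrative file names; per the above, the first run builds the index if GTR_EMB does not exist yet):

export DPR_WIKI_TSV=$PWD/psgs_w100.tsv
export GTR_EMB=$PWD/gtr_wikipedia_index.pkl
python retrieval.py --data data/asqa_eval_gtr_top100.json --retriever gtr --output_file data/asqa_gtr_new.json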

To reproduce the DPR retrieval, we refer you to the DPR repo; we used the original DPR checkpoint trained on NQ.

Code Structure

  • run.py: script to reproduce our baseline generations.
  • eval.py: script to evaluate generations.
  • prompts/: folder containing all prompt files.
  • configs/: folder containing all config files to reproduce baselines.
  • tools/: miscellaneous code (generating summaries/snippets, reranking, etc.).

Reproducing Baselines

You can reproduce the baselines from our paper by running:

python run.py --config configs/{config_name}

You can also overwrite any arguments in the config file, or add new ones, directly through the command line:

python run.py --config configs/{config_name} --seed 43 --model vicuna-13b

Config file names follow the pattern {LLM}_{#demos and #passages}_{retriever}_{method}.yaml (see the example after this list). Method names include:

  • default corresponds to the Vanilla model in our paper.
  • summary corresponds to the Summary model.
  • extraction corresponds to the Snippet model.
  • interact_doc_id corresponds to the Interact model.
  • interact_search corresponds to the InlineSearch model.
  • closedbook corresponds to the ClosedBook model.
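
For instance, a hypothetical file name following this pattern would be turbo_shot2_ndoc5_gtr_default.yaml (check configs/ for the actual names):

python run.py --config configs/turbo_shot2_ndoc5_gtr_default.yaml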

Our code supports both the OpenAI API and offline HuggingFace models (see the environment sketch after this list):

  • For OpenAI models (for example, ChatGPT), you need to set the environment variables OPENAI_API_KEY and OPENAI_ORG_ID. If you are using the Azure OpenAI API, you need to set the environment variables OPENAI_API_KEY and OPENAI_API_BASE, and also add the flag --azure.
    • Note that in the Azure OpenAI API, ChatGPT's name is different, and you should set it with --model gpt-35-turbo.
  • For the open-source models, you also need to set the environment variable LLAMA_ROOT to the directory containing the weights folder for the model (as LLaMA checkpoints are not available on the HuggingFace model hub and can only be loaded locally).
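
A sketch of the environment setup for both paths (placeholder values, not real credentials):

# OpenAI API
export OPENAI_API_KEY=...
export OPENAI_ORG_ID=...
# Azure OpenAI instead: set OPENAI_API_KEY and OPENAI_API_BASE, then pass --azure to run.py
# Offline models: directory containing the model's weights folder
export LLAMA_ROOT=/path/to/llama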

For detailed argument usage, please refer to run.py.

Model outputs, along with gold answers and run configs, will be stored in a JSON file in result/.

Post-hoc citation

For closed-book models, you can use post_hoc_cite.py to add citations in a post-hoc manner (using GTR-large). To run post-hoc citation, execute:

python post_hoc_cite.py --f result/{RESULT JSON FILE NAME} --external_docs data/{CORRESPONDING DATA}

The output file with post-hoc citations will be stored in result/, with a suffix post_hoc_cite.gtr-t5-large-external.

Evaluation

ALCE evaluation is implemented in eval.py.

For ASQA, use the following command:

python eval.py --f {path/to/result/file} --citations --qa --mauve

For QAMPARI, use the following command:

python eval.py --f {path/to/result/file} --citations

For ELI5, use the following command:

python eval.py --f {path/to/result/file} --citations --claims_nli --mauve

The evaluation result will be saved in result/, with the same name as the input and a suffix .score.
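
Assuming the scores are saved as JSON (an assumption here, which the .score suffix and the JSON result files suggest), you can pretty-print them with:

python -m json.tool result/{RESULT JSON FILE NAME}.score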

Bugs or Questions?

If you have any questions related to the code or the paper, feel free to email Tianyu ([email protected]). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to describe the problem in detail so we can help you better and faster!

Citation

Please cite our paper if you use ALCE in your work:

@article{gao2023enabling,
   title={Enabling Large Language Models to Generate Text with Citations},
   author={Gao, Tianyu and Yen, Howard and Yu, Jiatong and Chen, Danqi},
   journal={arXiv preprint arXiv:2305.14627},
   year={2023}
}

More Repositories

1. SWE-agent (Python, 12,189 stars): SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4 or your LM of choice. It solves 12.47% of bugs in the SWE-bench evaluation set and takes just 1 minute to run.
2. tree-of-thought-llm (Python, 4,416 stars): [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
3. SimCSE (Python, 3,310 stars): [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
4. SWE-bench (Python, 1,554 stars): [ICLR 2024] SWE-bench: Can Language Models Resolve Real-world GitHub Issues?
5. MeZO (Python, 1,002 stars): [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes https://arxiv.org/abs/2305.17333
6. PURE (Python, 777 stars): [NAACL 2021] A Frustratingly Easy Approach for Entity and Relation Extraction https://arxiv.org/abs/2010.12812
7. LM-BFF (Python, 712 stars): [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723
8. DensePhrases (Python, 601 stars): [ACL 2021] Learning Dense Representations of Phrases at Scale; [EMNLP 2021] Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.org/abs/2012.12624
9. SimPO (Python, 510 stars): SimPO: Simple Preference Optimization with a Reference-Free Reward
10. LLM-Shearing (Python, 492 stars): [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
11. LESS (Jupyter Notebook, 319 stars): [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning
12. AutoCompressors (Python, 262 stars): [EMNLP 2023] Adapting Language Models to Compress Long Contexts
13. WebShop (Python, 247 stars): [NeurIPS 2022] 🛒WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents
14. TRIME (Python, 189 stars): [EMNLP 2022] Training Language Models with Memory Augmentation https://arxiv.org/abs/2205.12674
15. CoFiPruning (Python, 187 stars): [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
16. intercode (Python, 179 stars): [NeurIPS 2023 D&B] Code repository for the InterCode benchmark https://arxiv.org/abs/2306.14898
17. OptiPrompt (Python, 167 stars): [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240
18. TransformerPrograms (Python, 154 stars): [NeurIPS 2023] Learning Transformer Programs
19. EntityQuestions (Python, 134 stars): [EMNLP 2021] Simple Entity-centric Questions Challenge Dense Retrievers https://arxiv.org/abs/2109.08535
20. QuRating (Python, 119 stars): [ICML 2024] Selecting High-Quality Data for Training Language Models
21. CEPE (Python, 117 stars): [ACL 2024] Long-Context Language Modeling with Parallel Encodings
22. DinkyTrain (Python, 109 stars): Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃
23. LLMBar (Python, 95 stars): [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following
24. MQuAKE (Jupyter Notebook, 86 stars): [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
25. USACO (Python, 86 stars): Can Language Models Solve Olympiad Programming?
26. NLProofS (Python, 80 stars): [EMNLP 2022] Generating Natural Language Proofs with Verifier-Guided Search https://arxiv.org/abs/2205.12443
27. MADE (Python, 70 stars): [EMNLP 2021] Single-dataset Experts for Multi-dataset Question Answering
28. LM-Kernel-FT (Python, 68 stars): A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643
29. calm-textgame (Python, 64 stars): [EMNLP 2020] Keep CALM and Explore: Language Models for Action Generation in Text-based Games
30. CharXiv (Python, 63 stars): CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
31. c-sts (Python, 61 stars): [EMNLP 2023] C-STS: Conditional Semantic Textual Similarity
32. DataMUX (Jupyter Notebook, 58 stars): [NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks
33. ShortcutGrammar (Jupyter Notebook, 58 stars): [EMNLP 2022] Finding Dataset Shortcuts with Grammar Induction https://arxiv.org/abs/2210.11560
34. LitSearch (Python, 53 stars): A Retrieval Benchmark for Scientific Literature Search
35. Collie (Jupyter Notebook, 51 stars): [ICLR 2024] COLLIE: Systematic Construction of Constrained Text Generation Tasks
36. EvalConvQA (Python, 45 stars): [ACL 2022] Ditch the Gold Standard: Re-evaluating Conversational Question Answering
37. MABEL (Python, 35 stars): [EMNLP 2022] MABEL: Attenuating Gender Bias using Textual Entailment Data https://arxiv.org/abs/2210.14975
38. LM-Science-Tutor (Python, 32 stars)
39. rationale-robustness (Python, 26 stars): [NAACL 2022] Can Rationalization Improve Robustness? https://arxiv.org/abs/2204.11790
40. PTP (Python, 23 stars): Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073
41. InstructEval (Jupyter Notebook, 23 stars): [NAACL 2024 Findings] Evaluation suite for the systematic evaluation of instruction selection methods.
42. WhatICLLearns (Python, 21 stars): [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning
43. Cognac (Python, 19 stars): Repo for the paper: Controllable Text Generation with Language Constraints
44. corpus-poisoning (Python, 18 stars): [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156
45. semsup (Python, 16 stars): Semantic Supervision: Enabling Generalization over Output Spaces
46. ELIZA-Transformer (Python, 15 stars): Representing Rule-based Chatbots with Transformers
47. SRL-NLC (14 stars): Safe Reinforcement Learning with Natural Language Constraints
48. Edge-Pruning (Python, 14 stars): Code and data for the paper "Finding Transformer Circuits with Edge Pruning".
49. datamux-pretraining (Python, 14 stars): MUX-PLMs: Pretraining LMs with Data Multiplexing
50. XTX (Python, 13 stars): [ICLR 2022 Spotlight] Multi-Stage Episodic Control for Strategic Exploration in Text Games
51. MultilingualAnalysis (Python, 13 stars): Repository for the paper "When is BERT Multilingual? Isolating Crucial Ingredients for Cross-lingual Transfer"
52. blindfold-textgame (Python, 12 stars): [NAACL 2021] Reading and Acting while Blindfolded: The Need for Semantics in Text Game Agents
53. align-mlm (Python, 11 stars)
54. dyck-transformer (Python, 11 stars): [ACL 2021] Self-Attention Networks Can Process Bounded Hierarchical Languages
55. metric-wsd (Python, 10 stars): [NAACL 2021] Non-Parametric Few-Shot Learning for Word Sense Disambiguation
56. semsup-xc (Jupyter Notebook, 10 stars): SemSup-XC: Semantic Supervision for Extreme Classification
57. lwm (Python, 9 stars): We develop world models that can be adapted with natural language. Integrating these models into artificial agents allows humans to effectively control these agents through verbal communication.
58. benign-data-breaks-safety (Python, 7 stars)
59. CopyCat (Python, 7 stars)
60. Heuristic-Core (Python, 6 stars): [ACL 2024] The Heuristic Core: Understanding Subnetwork Generalization in Pretrained Language Models https://arxiv.org/abs/2403.03942
61. CARETS (Python, 6 stars)
62. SPARTAN (Python, 5 stars): SPARTAN: Sparse Hierarchical Memory for Parameter-Efficient Transformers
63. attribute-tagging (Python, 4 stars): [LaReL 2022] Towards an Enhanced, Faithful, and Adaptable Web Interaction Environment
64. NegotiationToM (Python, 4 stars): Code release for Improving Dialog Systems for Negotiation with Personality Modeling.
65. il-scaling-in-games (Python, 4 stars): Official code repo of "Scaling Laws for Imitation Learning in NetHack"
66. MoQA (Python, 3 stars)
67. decision-transformer (Python, 2 stars): Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling.