  • Stars: 245
  • Rank: 164,307 (top 4%)
  • Language: Python
  • Created: over 4 years ago
  • Updated: almost 3 years ago


Repository Details

Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering (EMNLP 2020)

Multi-Hop Graph Relation Networks (EMNLP 2020)

License: MIT

This is the repo of our EMNLP'20 paper:

Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering
Yanlin Feng*, Xinyue Chen*, Bill Yuchen Lin, Peifeng Wang, Jun Yan and Xiang Ren.
EMNLP 2020.
* = equal contribution

This repository also implements other graph encoding models for question answering (including vanilla LM finetuning).

  • RelationNet
  • R-GCN
  • KagNet
  • GConAttn
  • KVMem
  • MHGRN (also called MultiGRN)

Each model supports the following text encoders:

  • LSTM
  • GPT
  • BERT
  • XLNet
  • RoBERTa

Resources

We provide preprocessed ConceptNet and pretrained entity embeddings for your own usage. These resources are independent of the source code.

Note that the following resources can be downloaded here.

ConceptNet (5.6.0)

| Description                  | Download                  | Notes                                                                    |
|------------------------------|---------------------------|--------------------------------------------------------------------------|
| Entity Vocab                 | entity-vocab              | one entity per line, spaces replaced by '_'                              |
| Relation Vocab               | relation-vocab            | one relation per line, merged                                            |
| ConceptNet (CSV format)      | conceptnet-5.6.0-csv      | English tuples extracted from the full ConceptNet, with merged relations |
| ConceptNet (NetworkX format) | conceptnet-5.6.0-networkx | NetworkX pickled format, pruned by filtering out stop words              |
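The NetworkX pickle above can be loaded with a few lines of Python. This is a minimal sketch assuming the networkx 2.3 version pinned in the Dependencies section; the filename is a placeholder for whichever file you actually download:

import networkx as nx

# Load the pruned English ConceptNet graph (placeholder filename).
graph = nx.read_gpickle("conceptnet.en.pruned.graph")
print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")

# Peek at a few edges; attribute names depend on how the graph was built.
for u, v, data in list(graph.edges(data=True))[:5]:
    print(u, v, data)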

Entity Embeddings (Node Features)

Entity embeddings are packed into a matrix of shape (#ent, dim) and stored in numpy format. Use np.load to read the file. You may need to download the vocabulary files first.

| Embedding Model | Dimensionality | Description                                                | Downloads           |
|-----------------|----------------|------------------------------------------------------------|---------------------|
| TransE          | 100            | Obtained using OpenKE with optim=sgd, lr=1e-3, epoch=1000  | entities, relations |
| NumberBatch     | 300            | https://github.com/commonsense/conceptnet-numberbatch      | entities            |
| BERT-based      | 1024           | Provided by Zhengwei                                       | entities            |
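A minimal loading sketch for the embeddings above (the filenames are placeholders; substitute the files you downloaded):

import numpy as np

# Embedding matrix of shape (#ent, dim), stored as a NumPy array.
ent_emb = np.load("transe.ent.npy")  # placeholder filename
print(ent_emb.shape)

# Row i of the matrix corresponds to line i of the entity vocab
# (one entity per line, spaces replaced by '_').
with open("entity-vocab.txt", encoding="utf-8") as f:  # placeholder filename
    id2ent = [line.strip() for line in f]
assert len(id2ent) == ent_emb.shape[0]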

Dependencies

Run the following commands to create a conda environment (assuming CUDA 10):

conda create -n krqa python=3.6 numpy matplotlib ipython
source activate krqa
conda install pytorch=1.1.0 torchvision cudatoolkit=10.0 -c pytorch
pip install dgl-cu100==0.3.1
pip install transformers==2.0.0 tqdm networkx==2.3 nltk spacy==2.1.6
python -m spacy download en
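To sanity-check the environment, the short script below (just version printouts, nothing repo-specific) should run without errors:

import torch, dgl, transformers, networkx

print(torch.__version__)          # expect 1.1.0
print(torch.cuda.is_available())  # expect True on a CUDA 10 machine
print(dgl.__version__)            # expect 0.3.1
print(transformers.__version__)   # expect 2.0.0
print(networkx.__version__)       # expect 2.3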

Usage

1. Download Data

First, you need to download all the necessary data in order to train the model:

git clone https://github.com/INK-USC/MHGRN.git
cd MHGRN
bash scripts/download.sh

The script will download the required datasets and pretrained resources (ConceptNet, GloVe and TransE embeddings, and the CommonsenseQA splits) into data/.

2. Preprocess

To preprocess the data, run:

python preprocess.py

By default, all available CPU cores will be used for multi-processing in order to speed up the process. Alternatively, you can use "-p" to specify the number of processes to use:

python preprocess.py -p 20

The script will:

  • Convert the original datasets into .jsonl files (stored in data/csqa/statement/)
  • Extract English relations from ConceptNet, merge the original 42 relation types into 17 types
  • Identify all mentioned concepts in the questions and answers
  • Extract subgraphs for each question-answer pair

The preprocessing procedure takes approximately 3 hours on a 40-core CPU server. Most intermediate files are in .jsonl or .pk format and stored in various folders. The resulting file structure will look like:

.
├── README.md
└── data/
    ├── cpnet/                 (preprocessed ConceptNet)
    ├── glove/                 (pretrained GloVe embeddings)
    ├── transe/                (pretrained TransE embeddings)
    └── csqa/
        ├── train_rand_split.jsonl
        ├── dev_rand_split.jsonl
        ├── test_rand_split_no_answers.jsonl
        ├── statement/             (converted statements)
        ├── grounded/              (grounded entities)
        ├── paths/                 (unpruned/pruned paths)
        ├── graphs/                (extracted subgraphs)
        └── ...
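To spot-check the preprocessed output, a small sketch like the following helps; it assumes only that the .jsonl files hold one JSON object per line, not any particular schema:

import json

path = "data/csqa/statement/train.statement.jsonl"
with open(path, encoding="utf-8") as f:
    examples = [json.loads(line) for line in f]

print(len(examples), "training statements")
print(sorted(examples[0].keys()))  # inspect the schema before relying on it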

3. Hyperparameter Search (optional)

To search the parameters for RoBERTa-Large on CommonsenseQA:

bash scripts/param_search_lm.sh csqa roberta-large

To search the parameters for BERT+RelationNet on CommonsenseQA:

bash scripts/param_search_rn.sh csqa bert-large-uncased

4. Training

Each graph encoding model is implemented in a single script:

| Graph Encoder    | Script      | Description                                                      |
|------------------|-------------|------------------------------------------------------------------|
| None             | lm.py       | w/o knowledge graph                                              |
| Relation Network | rn.py       |                                                                  |
| R-GCN            | rgcn.py     | Use --gnn_layer_num and --num_basis to specify #layer and #basis |
| KagNet           | kagnet.py   | Adapted from https://github.com/INK-USC/KagNet, still tuning     |
| GConAttn         | gconattn.py |                                                                  |
| KV-Memory        | kvmem.py    |                                                                  |
| MHGRN            | grn.py      |                                                                  |

Some important command line arguments are listed as follows (run python {lm,rn,rgcn,...}.py -h for a complete list):

| Arg | Values | Description | Notes |
|-----|--------|-------------|-------|
| --mode | {train, eval, ...} | Training or evaluation | default=train |
| -enc, --encoder | {lstm, openai-gpt, bert-large-uncased, roberta-large, ...} | Text encoder | Model names (except lstm) are those used by huggingface-transformers; default=bert-large-uncased |
| --optim | {adam, adamw, radam} | Optimizer | default=radam |
| -ds, --dataset | {csqa, obqa} | Dataset | default=csqa |
| -ih, --inhouse | {0, 1} | Run in-house split | default=1; only applicable to CSQA |
| --ent_emb | {transe, numberbatch, tzw} | Entity embeddings | default=tzw (BERT-based node features) |
| -sl, --max_seq_len | {32, 64, 128, 256} | Maximum sequence length | Use 128 or 256 for datasets with long sentences; default=64 |
| -elr, --encoder_lr | {1e-5, 2e-5, 3e-5, 6e-5, 1e-4} | Text encoder LR | dataset- and text-encoder-specific; defaults in utils/parser_utils.py |
| -dlr, --decoder_lr | {1e-4, 3e-4, 1e-3, 3e-3} | Graph encoder LR | dataset- and model-specific; defaults in {model}.py |
| --lr_schedule | {fixed, warmup_linear, warmup_constant} | Learning rate schedule | default=fixed |
| -me, --max_epochs_before_stop | {2, 4, 6} | Early stopping patience | default=2 |
| --unfreeze_epoch | {0, 3} | Freeze text encoder for N epochs | model-specific |
| -bs, --batch_size | {16, 32, 64} | Batch size | default=32 |
| --save_dir | str | Checkpoint directory | model-specific |
| --seed | {0, 1, 2, 3} | Random seed | default=0 |

For example, run the following command to train a RoBERTa-Large model on CommonsenseQA:

python lm.py --encoder roberta-large --dataset csqa

To train a RelationNet with BERT-Large-Uncased as the encoder:

python rn.py --encoder bert-large-uncased

To reproduce the reported results of MHGRN (MultiGRN) on the official CommonsenseQA split:

bash scripts/run_grn_csqa.sh

5. Evaluation

To evaluate a trained model (you need to specify --save_dir if the checkpoint is not stored in the default directory):

python {lm,rn,rgcn,...}.py --mode eval [ --save_dir path/to/directory/ ]

Use Your Own Dataset

  • Convert your dataset to {train,dev,test}.statement.jsonl in .jsonl format (see data/csqa/statement/train.statement.jsonl; a sketch follows this list)
  • Create a directory data/{yourdataset}/ to store the .jsonl files
  • Modify preprocess.py and perform subgraph extraction for your data
  • Modify utils/parser_utils.py to support your own dataset
  • Tune encoder_lr, decoder_lr and other important hyperparameters; modify utils/parser_utils.py and {model}.py to record the tuned values
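As a starting point for the first step, here is a hypothetical sketch of writing one entry in statement format. The field names below are illustrative only; copy the exact schema from data/csqa/statement/train.statement.jsonl rather than from this example:

import json

# Illustrative fields only -- mirror the real schema from
# data/csqa/statement/train.statement.jsonl.
example = {
    "id": "yourdataset-train-0000",
    "question": {
        "stem": "Where would you most likely find a bed?",
        "choices": [{"label": "A", "text": "bedroom"},
                    {"label": "B", "text": "kitchen"}],
    },
    "answerKey": "A",
}

with open("data/yourdataset/train.statement.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")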

More Repositories

1. RE-Net · Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs (EMNLP 2020) · Python · 433 stars
2. USC-DS-RelationExtraction · Distantly Supervised Relation Extraction · C++ · 417 stars
3. KagNet · Knowledge-Aware Graph Networks for Commonsense Reasoning (EMNLP-IJCNLP 2019) · Python · 268 stars
4. TriggerNER · TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition (ACL 2020) · Python · 173 stars
5. CommonGen · A Constrained Text Generation Challenge Towards Generative Commonsense Reasoning · Python · 139 stars
6. AlpacaTag · AlpacaTag: An Active Learning-based Crowd Annotation Framework for Sequence Tagging (ACL 2019 Demo) · HTML · 137 stars
7. CrossFit · Code for the paper "CrossFit 🏋️: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) · Python · 101 stars
8. ClusType · Automatic Entity Recognition and Typing for Domain-Specific Corpora (KDD'15) · Python · 98 stars
9. temporal-gcn-lstm · Code for "Characterizing and Forecasting User Engagement with In-App Action Graphs: A Case Study of Snapchat" · Python · 78 stars
10. AFET · AFET: Automatic Fine-Grained Entity Typing (EMNLP'16) · Python · 57 stars
11. CPL · Collaborative Policy Learning for Open Knowledge Graph Reasoning (EMNLP 2019) · Python · 56 stars
12. PLE · Label Noise Reduction in Entity Typing (KDD'16) · C++ · 53 stars
13. NERO · Source code for the paper "NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction" (WWW 2020) · Python · 47 stars
14. fewNER · Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER (ACL 2022) · Python · 43 stars
15. StructMineDataPipeline · Performs entity detection, distant supervision, and candidate generation, and produces JSON files for typing systems (PLE, AFET, CoType) · C++ · 43 stars
16. shifted-label-distribution · Source code for the paper "Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction" (EMNLP 2019) · C++ · 39 stars
17. DualRE · Source code for the paper "Learning Dual Retrieval Module for Semi-supervised Relation Extraction" · Python · 36 stars
18. hierarchical-explanation-neural-sequence-models · Source code for "Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models" (ICLR 2020) · Python · 30 stars
19. CALM · Source code for the ICLR 2021 paper "Pre-training Text-to-Text Transformers for Concept-Centric Common Sense" · Python · 27 stars
20. ReQuest · Indirect Supervision for Relation Extraction Using Question-Answer Pairs (WSDM'18) · C++ · 24 stars
21. DIG · Discretized Integrated Gradients for Explaining Language Models (EMNLP 2021) · Python · 24 stars
22. LEAN-LIFE · Label Efficient Learning From Explanations · Python · 23 stars
23. XCSR · Code repo for the ACL 2021 paper "Common Sense Beyond English: Evaluating and Improving Multilingual LMs for Commonsense Reasoning" · Python · 22 stars
24. ReCross · ReCross: Unsupervised Cross-Task Generalization via Retrieval Augmentation · Python · 22 stars
25. VisCOLL · Code and data for the project "Visually grounded continual learning of compositional semantics" · Python · 21 stars
26. DArtNet · Temporal Attribute Prediction via Joint Modeling of Multi-Relational Structure Evolution · Python · 19 stars
27. NumerSense · Data and code for NumerSense (EMNLP 2020) · Python · 19 stars
28. NExT · Source code for the paper "Learning from Explanations with Neural Execution Tree" (ICLR 2020) · Python · 18 stars
29. HGN · Learning Contextualized Knowledge Structures for Commonsense Reasoning · Python · 17 stars
30. GMED · Source code for "Gradient Based Memory Editing for Task-Free Continual Learning" (4th Lifelong ML Workshop @ ICML 2020) · Python · 16 stars
31. SalKG · Official PyTorch implementation of the NeurIPS 2021 paper "SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning" · Python · 14 stars
32. FaiRR · FaiRR: Faithful and Robust Deductive Reasoning over Natural Language (ACL 2022) · Python · 14 stars
33. FiD-ICL · FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning (ACL 2023) · Python · 13 stars
34. IsoBN · IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization · Python · 13 stars
35. hypter · Zero-shot Learning by Generating Task-specific Adapters · Python · 13 stars
36. sparse-distillation · Code for "Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models" · Python · 12 stars
37. expl-refinement · Code for the paper "Refining Language Model with Compositional Explanation" (NeurIPS 2021) · Python · 12 stars
38. RiddleSense · RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge · Python · 12 stars
39. ConNet · Python · 12 stars
40. entity-robustness · Code and data for the paper "On the Robustness of Reading Comprehension Models to Entity Renaming" (NAACL'22) · Python · 11 stars
41. mrc-explanation · Source code for "Teaching Machine Comprehension with Compositional Explanations" (Findings of EMNLP 2020) · Python · 11 stars
42. Reflect · Data and code for the paper "Reflect Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality" (EMNLP 2022) · Python · 11 stars
43. rockner · Python · 10 stars
44. BITE · Code and data for the paper "BITE: Textual Backdoor Attacks with Iterative Trigger Injection" · Python · 9 stars
45. CLIF · Code for the Findings of EMNLP 2021 paper "Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning" · Python · 8 stars
46. XMD · XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models · Vue · 7 stars
47. procedural-extraction · Code for the paper "Eliciting Knowledge from Experts: Automatic Transcript Parsing for Cognitive Task Analysis" (ACL 2019) · Python · 7 stars
48. RationaleMultiRewardDistillation · Code and dataset for the preprint "Tailoring Self-Rationalizers with Multi-Reward Distillation" · Python · 6 stars
49. G-PlanET · Python · 6 stars
50. RobustLR · A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners · Python · 6 stars
51. LINK · Code for the paper "In Search of the Long-Tail: Systematic Generation of Long-Tail Knowledge via Logical Rule Guided Search" · Python · 6 stars
52. Upstream-Bias-Mitigation · Code and data for the NAACL 2021 paper "On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning" · Python · 5 stars
53. RationaleHumanUtility · Codebase for Human Utility of FTRs (ACL 2023) · Python · 5 stars
54. deceive-KG-models · An implementation of the experiments on KG robustness · Python · 4 stars
55. ER-Test · Code for ER-Test, accepted to Findings of EMNLP 2022 · Python · 3 stars
56. PE2 · Code for the paper "Prompt Engineering a Prompt Engineer" (https://arxiv.org/abs/2311.05661) · Python · 3 stars
57. get-started-on-dl-experiments · 2 stars
58. ink-usc.github.io · INK Research Lab Website · JavaScript · 2 stars
59. Lifelong-ICL · Jupyter Notebook · 2 stars
60. CrossTaskMoE · Code for the paper "Eliciting and Understanding Cross-task Skills with Task-level Mixture-of-Experts" (Findings of EMNLP 2022) · Python · 2 stars
61. predicting-big-bench · Code for the paper "How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench" · Python · 2 stars
62. bias-mitigation-via-transfer-learning · Source code for the arXiv paper "Efficiently Mitigating Classification Bias via Transfer Learning" · 2 stars
63. Controllable-AV-Explanations · Python · 1 star
64. MACROSCORE · MACROSCORE: Scoring Scientific Research · Jupyter Notebook · 1 star