LogicNLG

The data and code for the ACL 2020 paper Logical Natural Language Generation from Open-Domain Tables, which studies natural language generation that requires logical inference as an intermediate step. Going beyond simple surface-level copying, LogicNLG requires the model to deeply understand the content of the table and infer information that the table only expresses implicitly.
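
For intuition, here is a toy, made-up example (not taken from the dataset) contrasting a surface-level fact that can be copied from a single cell with a logical fact that must be computed across cells:

# Toy illustration (not part of the released code or data): surface-level copying
# vs. logical inference over a table.
table = [
    {"nation": "canada", "gold": 3, "silver": 1},
    {"nation": "mexico", "gold": 1, "silver": 4},
]

# Surface-level fact: copied directly from one cell.
surface = f"{table[0]['nation']} won {table[0]['gold']} gold medals"

# Logical fact: requires comparing cells, which is what LogicNLG targets.
diff = table[0]["gold"] - table[1]["gold"]
logical = f"{table[0]['nation']} won {diff} more gold medals than {table[1]['nation']}"

print(surface)   # canada won 3 gold medals
print(logical)   # canada won 2 more gold medals than mexico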

Demo

You can explore the visualization interface to see the generation results of different models on LogicNLG. Have fun!

Requirements

  • pytorch 1.4.0
  • huggingface transformers 2.5.1
  • tensorboardX
  • tqdm
  • apex [optional]
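
If you want to double-check that your environment matches these pins, a quick sanity check (illustrative only, not part of the repo) is:

# Quick environment sanity check (illustrative only, not part of the repo).
import torch
import transformers

print("pytorch:", torch.__version__)                # expected 1.4.0
print("transformers:", transformers.__version__)    # expected 2.5.1
print("cuda available:", torch.cuda.is_available())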

Training/Evaluation Data

The data used for LogicNLG is provided in the data folder; the details are described in its README.
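
Once you have unzipped the tables (see Preparation below), a minimal loading sketch looks like the following; the exact JSON schema is documented in the data README, so treat the assumed field layout and paths here as placeholders:

import json
import pandas as pd

# Peek at the test annotations; the exact schema is documented in the data README
# (the mapping "table file -> list of annotated sentences" is an assumption here).
with open("data/test_lm.json") as f:
    examples = json.load(f)
table_file, annotations = next(iter(examples.items()))
print(table_file, len(annotations))

# Load the corresponding table; tables in the TabFact/LogicNLG family are
# '#'-delimited, so adjust the path/separator to your unzipped all_csv location.
table = pd.read_csv(f"data/all_csv/{table_file}", sep="#")
print(table.head())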

Preparation

Unzip all the table files

unzip all_csv.zip

Download the NLI scorer

wget https://logicnlg.s3-us-west-2.amazonaws.com/NLI_models.zip
unzip NLI_models.zip

Download the Semantic Parser

wget https://logicnlg.s3-us-west-2.amazonaws.com/parser_models.zip
unzip parser_models.zip

Reproducing Reported Results (Automatic Metrics)

The generated outputs from the Field-Infusing Transformer, GPT-2-based, and Coarse-to-Fine models are stored in outputs. Their corresponding parsing results are stored in program_outputs.

You can verify their corpus-level BLEU score by:

python evaluate.py --input outputs/field_infusing.json --refernce data/test_lm.json --option corpus
python evaluate.py --input outputs/GPT_gpt2_12.65.json --refernce data/test_lm.json --option corpus
python evaluate.py --input outputs/GPT_gpt2_C2F_13.35.json --refernce data/test_lm.json --option corpus
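
evaluate.py computes these scores end to end; purely as an illustration of what corpus-level, multi-reference BLEU-3 means (not the repo's implementation, whose tokenization and smoothing may differ), an nltk sketch is:

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Made-up hypothesis/reference pair; the real evaluation covers all test tables.
hypotheses = ["canada won 2 more gold medals than mexico".split()]
references = [[
    "canada got 2 more golds than mexico".split(),
    "canada won two more gold medals than mexico did".split(),
]]

bleu3 = corpus_bleu(references, hypotheses,
                    weights=(1 / 3, 1 / 3, 1 / 3),
                    smoothing_function=SmoothingFunction().method1)
print("corpus BLEU-3:", round(bleu3, 4))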

You can verify their NLI-Acc by:

CUDA_VISIBLE_DEVICES=0 python NLI.py --model bert-base-multilingual-uncased --do_verify --encoding gnn --load_from NLI_models/model_ep4.pt --fp16 --verify_file outputs/GPT_gpt2_C2F_13.35.json --verify_linking data/test_lm.json
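
NLI.py scores whether each generated sentence is entailed by its table using the GNN-encoded scorer downloaded above. Very loosely, the underlying pattern is sequence-pair classification over a linearized table (premise) and the sentence (hypothesis); the sketch below shows only that generic pattern with an untrained classification head, not the repo's model or encoding:

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Generic premise/hypothesis scoring pattern (untrained head, so the scores are
# meaningless; NLI.py loads NLI_models/model_ep4.pt and uses its own GNN encoding).
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-uncased", num_labels=2)
model.eval()

premise = "nation : canada ; gold : 3 . nation : mexico ; gold : 1 ."   # made-up linearized table
hypothesis = "canada won 2 more gold medals than mexico"

inputs = tokenizer.encode_plus(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]
print(torch.softmax(logits, dim=-1))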

You can verify their SP-Acc by:

CUDA_VISIBLE_DEVICES=0 python parse_programs.py --compute_score --load_from parser_models/model.pt --score_file program_outputs/GPT_gpt2_C2F_13.35.json

Loading Our Trained Models

You can download and reload our trained models from Amazon S3 and decode results from them.

wget https://logicnlg.s3-us-west-2.amazonaws.com/models.zip
unzip models.zip

For the GPT2.py model

You can either decode the sentences

CUDA_VISIBLE_DEVICES=0 python GPT2.py --do_test --load_from models/GPT_ep8.pt

or evaluate the Adv-Acc

CUDA_VISIBLE_DEVICES=0 python GPT2.py --do_verify --load_from models/GPT_ep8.pt

For the Coarse-to-Fine model

You can either decode the sentences

CUDA_VISIBLE_DEVICES=0 python GPT2-coarse-to-fine.py --do_test --load_from models/GPT_stage2_C2F_ep13.pt

or evaluate the Adv-Acc

CUDA_VISIBLE_DEVICES=0 python GPT2-coarse-to-fine.py --do_verify --load_from models/GPT_stage2_C2F_ep13.pt --stage 2

These commands will save the decoded sentences to the outputs/ folder and print the Adv-Acc scores reported in the paper.
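
Under the hood, GPT2.py conditions GPT-2 on a linearized table plus title and decodes the description token by token. A rough, self-contained sketch of that kind of decoding loop is below; the prompt format, checkpoint, and search strategy of the actual scripts differ, so treat this only as intuition:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Made-up linearized table prefix; the real linearization lives in the repo's code.
prefix = "Given the table titled 'medal table' : canada | gold | 3 . mexico | gold | 1 . In summary ,"
input_ids = tokenizer.encode(prefix, return_tensors="pt")

with torch.no_grad():
    for _ in range(30):                      # greedy decoding, at most 30 new tokens
        logits = model(input_ids)[0]         # shape (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax().view(1, 1)
        input_ids = torch.cat([input_ids, next_id], dim=1)

print(tokenizer.decode(input_ids[0].tolist()))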

Retrain Your Own Model

Train Field-Infusing Transformer

CUDA_VISIBLE_DEVICES=0 python Transformer.py --do_train

Train GPT2-small Model

CUDA_VISIBLE_DEVICES=0 python GPT2.py --do_train --model gpt2

If you are running on a machine with multiple GPUs, you can also try our distributed training recipe:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 GPT-distributed.py --do_train --model gpt2 --batch_size 4
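
The distributed recipe follows the standard torch.distributed.launch pattern: the launcher spawns one process per GPU and passes --local_rank, and each process initializes a process group, pins its GPU, and wraps the model in DistributedDataParallel. A minimal sketch of that boilerplate (not GPT-distributed.py itself, which also contains the real model and training loop) is:

import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# Boilerplate expected by `python -m torch.distributed.launch --nproc_per_node N ...`
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)   # filled in by the launcher
args = parser.parse_args()

dist.init_process_group(backend="nccl")      # one process per GPU
torch.cuda.set_device(args.local_rank)

model = torch.nn.Linear(10, 10).cuda(args.local_rank)       # stand-in for the real model
model = DistributedDataParallel(model, device_ids=[args.local_rank])
# ...then build a DistributedSampler-backed DataLoader and run the usual training loop.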

Train GPT2-Coarse-to-Fine Model

  1. Warm up the template generation model for 10 epochs (a toy illustration of the template/realization split follows this list):
CUDA_VISIBLE_DEVICES=0 python GPT2-coarse-to-fine.py --do_train --model gpt2 --stage 1
  2. Load the warmed-up model and train the fine-grained surface realization model for 15 epochs with a smaller batch size:
CUDA_VISIBLE_DEVICES=0 python GPT2-coarse-to-fine.py --do_train --model gpt2 --stage 2 --epochs 15 --batch_size 3 --load_from models/GPT_stage1_C2F_ep9.pt
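
The two stages mirror the paper's coarse-to-fine scheme: stage 1 learns to generate a template whose entities are masked out, and stage 2 conditions on that template to realize the final sentence. The toy illustration below (made-up example and masking code, not the repo's preprocessing) shows the template/realization split:

# Toy illustration of the coarse-to-fine split (made-up example, not the repo's
# preprocessing): stage 1 targets a template with entities masked out,
# stage 2 realizes the full sentence from the table plus that template.
sentence = "canada won 2 more gold medals than mexico"
entities = ["canada", "2", "mexico"]

template = sentence
for ent in entities:
    template = template.replace(ent, "[ENT]", 1)

print(template)   # [ENT] won [ENT] more gold medals than [ENT]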

The trained models are stored under the models/ folder; you can reload them for evaluation.

Evaluation Command

Perform Verification

python GPT2.py --do_verify --load_from models/[Your_Model] --model gpt2
python GPT2-coarse-to-fine.py --do_verify --load_from models/[Your_Model] --model gpt2 --stage 2

Perform Generation

CUDA_VISIBLE_DEVICES=0 python GPT2.py --do_test --load_from models/[Your_Model] --model gpt2
CUDA_VISIBLE_DEVICES=0 python GPT2-coarse-to-fine.py --do_test --load_from models/[Your_Model] --model gpt2

After running the do_test command, the decoded results on the test split will be saved to the outputs/ folder; these files are required for the NLI-Acc and SP-Acc score computation below.

Compute NLI-Acc score

CUDA_VISIBLE_DEVICES=0 python NLI.py --model bert-base-multilingual-uncased --do_verify --encoding gnn --load_from NLI_models/model_ep4.pt --fp16 --verify_file outputs/[Your_File] --verify_linking data/test_lm.json

Compute SP-Acc score

  1. Parse your output file into programs (warning: this step uses breadth-first search over candidate programs and can take a long time if you don't have many CPU cores; on our 64-core machine, parsing takes 30-60 minutes). A toy sketch of this kind of program search follows the list:
python parse_programs.py --parse --score_file outputs/[Your_File]
  2. Run the ranker model to predict the entailment relationship:
CUDA_VISIBLE_DEVICES=0 python parse_programs.py --compute_score --load_from parser_models/model.pt --score_file program_outputs/[Your_File]
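
To give a feel for why the parsing step is expensive, here is a toy breadth-first search over a tiny, made-up grammar of table operations; parse_programs.py uses a much richer grammar, trigger-word pruning, and many CPU cores, so treat this purely as intuition:

from collections import deque

# Made-up numeric column and a number mentioned in a generated sentence.
gold = [3, 1, 1, 0]
target = 2

column_ops = {                      # column -> column (expandable)
    "drop_max": lambda xs: sorted(xs)[:-1],
    "drop_min": lambda xs: sorted(xs)[1:],
}
reduce_ops = {                      # column -> number (terminates a program)
    "max": max, "min": min, "sum": sum, "count": len,
}

queue = deque([((), gold)])         # (program so far, current column)
found = []
while queue:
    prog, xs = queue.popleft()
    if len(prog) >= 3:              # depth limit keeps the search finite
        continue
    for name, op in reduce_ops.items():
        if xs and op(xs) == target:
            found.append(prog + (name,))
    for name, op in column_ops.items():
        if xs:
            queue.append((prog + (name,), op(xs)))

print(found)    # programs whose value matches the sentence, e.g. ('drop_max', 'sum')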

Parser and Entity Linker

We provide the details of our parser in the README.

CodaLab

We host the LogicNLG challenge on CodaLab. Please consider submitting your results to the challenge site.

CUDA_VISIBLE_DEVICES=0 python GPT2-coarse-to-fine.py --do_verify_challenge --load_from models/GPT_stage2_C2F_ep13.pt --stage 2
CUDA_VISIBLE_DEVICES=0 python GPT2-coarse-to-fine.py --do_test_challenge --load_from models/GPT_stage2_C2F_ep13.pt --model gpt2

These two commands will output "verify_results.json" and "test_results.json" in the challenge folder; please remember to zip your files before submission.

cd challenge
zip -r results.zip verify_results.json test_results.json

Recent Papers

Model              | Organization                  | Reference         | BLEU-1 | BLEU-2 | BLEU-3 | SP-Acc | NLI-Acc
GPT-TabGen         | UCSB                          | Chen et al.       | 48.8   | 27.1   | 12.6   | 42.1   | 68.7
GPT-Coarse-to-Fine | UCSB                          | Chen et al.       | 46.6   | 26.8   | 13.3   | 42.7   | 72.2
DCVED              | Shanghai Jiao Tong University | Chen & Jin et al. | 49.5   | 28.6   | 15.3   | 43.9   | 76.9

Miscellaneous

If you find any problem with the code, please open an issue or send me an email.
