Compound Probabilistic Context-Free Grammars

Code for the paper:
Compound Probabilistic Context-Free Grammars for Grammar Induction
Yoon Kim, Chris Dyer, Alexander Rush
ACL 2019

The preprocessed datasets, trained models, and the datasets parsed with the trained models can be found here.

Dependencies

The code was tested with Python 3.6 and PyTorch 1.0. The nltk package is also required if you are creating the processed data from the raw PTB dataset.

Data

The processed version of PTB can be downloaded at the above link. This contains the train/validation/test sets, as well as the vocabulary used (ptb.dict). If you want to create this from scratch, you can run

python process_ptb.py --ptb_path PATH-TO-PTB/parsed/mrg/wsj --output_path DATA-FOLDER

where PATH-TO-PTB is the location of your PTB corpus and DATA-FOLDER is where the processed trees are saved. This will create ptb-train.txt, ptb-valid.txt, and ptb-test.txt in DATA-FOLDER.
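The train/valid/test split follows the standard WSJ convention (sections 02-21 for training, 22 for validation, 23 for test); a minimal sketch, where split_for_section is an illustrative helper of our own, not a function from process_ptb.py:

```python
# Sketch of the standard WSJ section splits behind ptb-train/valid/test
# (02-21 / 22 / 23); split_for_section is our own illustrative helper.
def split_for_section(section: int) -> str:
    if 2 <= section <= 21:
        return "train"
    if section == 22:
        return "valid"
    if section == 23:
        return "test"
    return "unused"  # e.g. sections 00-01 and 24
```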

Now run the preprocessing script

python preprocess.py --trainfile data/ptb-train.txt --valfile data/ptb-valid.txt \
--testfile data/ptb-test.txt --outputfile data/ptb --vocabsize 10000 --lowercase 1 --replace_num 1

See preprocess.py for more options (e.g. batch size). Running this will save the following files in the data/ folder: ptb-train.pkl, ptb-val.pkl, ptb-test.pkl, and ptb.dict. Here ptb.dict is the word-to-index mapping; you can change the output folder/name via the --outputfile argument.
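To inspect the vocabulary, the mapping can be read with a few lines of Python. This is a hedged sketch that assumes ptb.dict stores one whitespace-separated word/index pair per line; the authoritative format is whatever preprocess.py writes, so check your generated file first.

```python
# Hedged sketch: load a word-to-index vocabulary, assuming one
# "word index" pair per line (the exact ptb.dict format is set by preprocess.py).
def load_vocab(path):
    word2idx = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                word2idx[parts[0]] = int(parts[1])
    return word2idx
```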

Training

To train the compound PCFG, run

python train.py --train_file data/ptb-train.pkl --val_file data/ptb-val.pkl \
--save_path compound-pcfg.pt --gpu 0

where --save_path is where the model is saved, and --gpu 0 selects the first visible GPU (the mapping from the PyTorch GPU index to your machine's physical GPUs may vary).

To train the neural PCFG that does not use continuous latent variables, run

python train.py --z_dim 0 --train_file data/ptb-train.pkl --val_file data/ptb-val.pkl \
--save_path neural-pcfg.pt --gpu 0

Training will take 2-4 days depending on your setup.

Evaluation

To evaluate the trained model on the test set, run

python eval.py --model_file compound-pcfg.pt --data_file data/ptb-test.txt \
--out_file pred-parse.txt --gold_out_file gold-parse.txt --gpu 0

where --out_file is where you want to output the predicted parses. This will calculate the F1 scores between the predicted trees and the gold trees in ptb-test.txt.
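For intuition, unlabeled F1 over constituent spans can be sketched as below. This is an illustrative reimplementation of the metric, not the repo's evaluation code (which follows the PRPN script; see the note on F1 calculation further down).

```python
# Illustrative unlabeled span F1 (not the repo's exact evaluation code):
# precision/recall over the sets of (start, end) spans of the predicted
# and gold trees, typically with trivial whole-sentence spans excluded.
def span_f1(pred_spans, gold_spans):
    pred, gold = set(pred_spans), set(gold_spans)
    if not pred and not gold:
        return 1.0
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    prec, rec = overlap / len(pred), overlap / len(gold)
    return 0.0 if overlap == 0 else 2 * prec * rec / (prec + rec)

# e.g. span_f1({(0, 2), (2, 4)}, {(0, 2), (1, 4)}): one of two spans matches
```

Sentence-level F1 averages this score over sentences, while corpus F1 pools the span counts over the whole test set before computing precision and recall.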

To parse a new set of sentences, run

python parse.py --model_file compound-pcfg.pt --data_file sents-to-be-parsed.txt \
--out_file pred-parse.txt --gpu 0

Note that sents-to-be-parsed.txt should have one sentence per line, and be preprocessed in a way that roughly matches the processing in process_ptb.py (e.g. no punctuation).
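As a rough, hypothetical approximation of that preprocessing (lowercasing, dropping punctuation-only tokens, and replacing numbers with a placeholder, mirroring --lowercase 1 --replace_num 1; the authoritative rules live in process_ptb.py and preprocess.py):

```python
import re

# Hypothetical approximation of the expected input cleaning; the exact
# rules are defined by process_ptb.py / preprocess.py.
def clean_sentence(sent):
    out = []
    for tok in sent.split():
        if all(not c.isalnum() for c in tok):  # drop punctuation-only tokens
            continue
        tok = tok.lower()
        if re.fullmatch(r"[\d.,]+", tok):      # replace numbers with a placeholder
            tok = "N"
        out.append(tok)
    return " ".join(out)

print(clean_sentence("The cat , aged 12 , sat ."))  # the cat aged N sat
```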

To just evaluate F1 given the trees, run (for example)

python compare_trees.py --tree1 data/parsed-data/ptb-test-gold-filtered.txt \
--tree2 data/parsed-data/ptb-test-compound-pcfg.txt

Note regarding F1 calculation

To be comparable to the numbers reported in the PRPN/Ordered Neurons papers, we use the original sentence-level F1 evaluation code based on L83-89 of the PRPN repo. This has quirky behavior in a corner case: when the gold tree for a sentence of length > 2 contains only the trivial sentence-level span, the sentence F1 for that example can still be nonzero according to the code. Corpus F1 does not have this issue.

Trained models

We provide the best neural/compound PCFG models, under the data/trained-model folder. These can be used for eval.py or parse.py.

Parsed Datasets

We also provide parsed train/val/test sets for the best run of each model for further analysis and RNNG training. These can be found under the data/parsed-data folder when you download the processed datasets from above:

  • ptb-{train/valid/test}-gold-filtered.txt: Gold trees where length 1 sentences have been filtered out.
  • ptb-{train/valid/test}-{neural-pcfg/compound-pcfg/prpn/on}.txt: Predicted trees from the best run of different models. For example ptb-test-compound-pcfg.txt is the test set parsed with the compound PCFG.
  • ptb-test-{neural-pcfg/compound-pcfg/prpn/on}-{rnng/urnng}.txt: Predicted trees for the test set only, for RNNG and URNNG (i.e. train an RNNG on the induced trees, then fine-tune with the URNNG objective). For example, ptb-test-compound-pcfg-urnng.txt contains the predicted trees from an RNNG trained on compound PCFG trees and then fine-tuned with the URNNG objective.

Results

Here are the sentence-level F1 numbers on the PTB test set for the models that performed best on the validation set. F1 with Induced URNNG indicates training an RNNG on the induced trees and then fine-tuning with the URNNG objective (see below). This gave improvements across the board.

Model             F1     F1 with Induced URNNG
PRPN              47.9   51.5
Ordered Neurons   50.0   55.1
Neural PCFG       52.6   58.7
Compound PCFG     60.1   66.9

Training Recurrent Neural Network Grammars (RNNG) on Induced Trees

Training the RNNG on induced trees and fine-tuning with the Unsupervised RNNG uses code from Unsupervised Recurrent Neural Network Grammars. The commands below should be run from the urnng folder.

First preprocess the training set with induced trees, for example with the compound PCFG:

python preprocess.py --batchsize 16 --vocabfile data/ptb.dict --lowercase 1 --replace_num 1 \
--trainfile data/parsed-data/ptb-train-compound-pcfg.txt \
--valfile data/parsed-data/ptb-valid-compound-pcfg.txt \
--testfile data/parsed-data/ptb-test-compound-pcfg.txt \
--outputfile data/ptb-comp-pcfg

Note the use of --vocabfile to use the same vocabulary as the one used in the above experiments.

Then use the above files to train an RNNG (and fine-tune with URNNG) using instructions from the URNNG folder, e.g.

python train.py --train_file /compound-pcfg/data/ptb-comp-pcfg-train.pkl \
--val_file /compound-pcfg/data/ptb-comp-pcfg-val.pkl --save_path compound-pcfg-rnng.pt \
--mode supervised --train_q_epochs 18 --count_eos_ppl 1 --gpu 0

For this version of PTB we count the </s> token in PPL calculations, hence the use of --count_eos_ppl 1. Note that this only affects evaluation and not training.
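The effect of --count_eos_ppl follows from the definition of perplexity; a toy sketch of our own (not code from the URNNG repo):

```python
import math

# Toy illustration (ours): perplexity is exp(mean negative log-likelihood),
# so including </s> adds one more predicted token to both the NLL sum and
# the token count, changing the reported number.
def perplexity(token_logprobs, eos_logprob=None):
    logprobs = list(token_logprobs)
    if eos_logprob is not None:  # count </s> as a predicted token
        logprobs.append(eos_logprob)
    return math.exp(-sum(logprobs) / len(logprobs))
```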

For fine-tuning:

python train.py --train_from compound-pcfg-rnng.pt --save_path compound-pcfg-urnng.pt \
--train_file /compound-pcfg/data/ptb-comp-pcfg-train.pkl \
--val_file /compound-pcfg/data/ptb-comp-pcfg-val.pkl \
--mode unsupervised --train_q_epochs 10 --epochs 10 --count_eos_ppl 1 --lr 0.1 --gpu 0 --kl_warmup 0

For evaluation:

python eval_ppl.py --model_file compound-pcfg-urnng.pt --samples 1000 --is_temp 2 --gpu 0 \
--test_file /compound-pcfg/data/ptb-test.pkl --count_eos_ppl 1

For parsing F1:

python parse.py --model_file compound-pcfg-urnng.pt --data_file /compound-pcfg/data/ptb-test.txt \
--out_file pred-parse.txt --gold_out_file gold-parse.txt --gpu 0 --lowercase 1 --replace_num 1

Miscellaneous Stuff

  • LSTM LM training in Table 3 uses code/hyperparameters from here.
  • For Table 3, LSTM/RNNG is trained with SGD, while PRPN/ON is trained with Adam. However, it seems like models trained with ASGD, as in the AWD-LSTM, do quite a bit better in terms of perplexity. Thanks to Freda for pointing this out!
  • Curriculum learning based on length does not always seem to help, and in subsequent experiments I am seeing similar results without it. For example, one could use --max_length 45 --final_max_length 45 to train on all sentences of length up to 45 from the first epoch (the limit of 45 is due to memory constraints).
  • Yanpeng has a much faster implementation of compound PCFGs utilizing the pytorch-struct library. Check it out here!

Acknowledgements

Much of our preprocessing and evaluation code is based on the repositories referenced above (e.g. PRPN for F1 evaluation and URNNG for RNNG training).

License

MIT
