• Stars: 227
  • Rank: 174,915 (Top 4%)
  • Language: HTML
  • License: MIT License
  • Created over 4 years ago
  • Updated over 2 years ago

Repository Details

Official code and data repository for our EMNLP 2020 long paper "Reformulating Unsupervised Style Transfer as Paraphrase Generation" (https://arxiv.org/abs/2010.05700).

Reformulating Unsupervised Style Transfer as Paraphrase Generation (EMNLP 2020)

This is the official repository accompanying the EMNLP 2020 long paper Reformulating Unsupervised Style Transfer as Paraphrase Generation. It contains the accompanying dataset and codebase.

Updates (2021-22)

Demos

The web demo for the system can be found here. The code and setup for the webpage can be found in web-demo/README.md. We also have a command-line demo for the paraphrase model. For more details, check README_terminal_demo.md.

Outputs from STRAP / baselines

All outputs generated by our model: outputs. Contact me at [email protected] for outputs from the Formality dataset (both our model and baselines) once you have received the GYAFC dataset. The outputs from baseline models have been added to outputs/baselines. Please see style_paraphrase/evaluation/README.md for a script to run evaluation on baselines.

Setup

The code uses PyTorch 1.4+, HuggingFace's transformers library for training GPT2 models, and Facebook AI Research's fairseq for evaluation using RoBERTa classifiers. To install PyTorch, look for the Python package compatible with your local CUDA setup here.

virtualenv style-venv
source style-venv/bin/activate
pip install torch torchvision
pip install -r requirements.txt
pip install --editable .

cd fairseq
pip install --editable .
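
To confirm the environment before training, a quick sanity check along these lines can help (a minimal sketch, not part of the original setup instructions):

# sanity_check.py -- minimal environment check (illustrative only)
import torch
import transformers
import fairseq

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
print("fairseq:", fairseq.__version__)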

To process custom datasets and run the classifier, you will need to download RoBERTa. Download the RoBERTa checkpoints from here, or follow the commands below. If you want a smaller model, you can also set up a ROBERTA_BASE variable using a similar process.

wget https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz
tar -xzvf roberta.large.tar.gz

# Add the following to your .bashrc file; feel free to store the model elsewhere on disk
export ROBERTA_LARGE=$PWD/roberta.large
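
To verify the checkpoint extracted correctly, you can load it with fairseq's RobertaModel API (a sketch; it assumes $ROBERTA_LARGE points at the extracted roberta.large directory):

import os
from fairseq.models.roberta import RobertaModel

# Load the downloaded checkpoint (model.pt lives inside the extracted folder).
roberta = RobertaModel.from_pretrained(os.environ["ROBERTA_LARGE"], checkpoint_file="model.pt")
roberta.eval()  # disable dropout for deterministic encoding
tokens = roberta.encode("Hello world!")
print(roberta.decode(tokens))  # should round-trip back to "Hello world!"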

Datasets

All datasets will be added to this Google Drive link. Download the datasets and place them under datasets. The datasets currently available (with their folder names) are:

  1. ParaNMT-50M filtered down to 75k pairs - datasets/paranmt_filtered
  2. Shakespeare style transfer - datasets/shakespeare
  3. Formality transfer - Please follow the instructions here. Once you have access to the corpus, you could email me ([email protected]) to get access to the preprocessed version. We will also add scripts to preprocess the raw data.
  4. Corpus of Diverse Styles - datasets/cds. Samples can be found in samples/data_samples. Please cite the original sources as well if you plan to use this dataset.

Training / Pretrained Models

  1. To train the paraphrase model, run style_paraphrase/examples/run_finetune_paraphrase.sh.

  2. To train the inverse paraphrasers for Shakespeare, check the two scripts in style_paraphrase/examples/shakespeare.

  3. To train the inverse paraphrasers for Formality, check the two scripts in style_paraphrase/examples/formality. Note that you will need to email me asking for the preprocessed dataset once you have access to the GYAFC corpus (see instructions in Datasets section).

  4. To train models on CDS, please follow steps #2 and #5 under "Custom Datasets" below.

All the main pretrained models have been added to the Google Drive link.

To run a fine-tuning and evaluation script simultaneously with support for hyperparameter tuning, please see the code in style_paraphrase/schedule.py and style_paraphrase/hyperparameters_config.py. This is customized to SLURM; you might need to make minor adjustments for it to work on your cluster.

Classifier Training

Classifiers are needed to evaluate style transfer performance. To train the classifiers, follow these steps:

  1. Install the local fork of fairseq, as discussed above in "Setup".

  2. Download the RoBERTa checkpoints as discussed above in "Setup".

  3. For training classifiers on Shakespeare, CoLA or CDS datasets, download the shakespeare-bin, cola-bin or cds-bin folders from the Drive link here and place them under datasets. I can provide similar files for the Formality dataset once you have access to the original corpus.

  4. To train the classifiers, see the examples in style_paraphrase/style_classify/examples. You can also use a grid search (with a Slurm scheduler) via the code in style_paraphrase/style_classify/schedule.py. We also have a lightweight Flask interface to plot performance across epochs, which works well with the Slurm grid-search automation; see style_paraphrase/style_classify/webapp/run.sh.

  5. For training on custom datasets, run the commands under "Custom Datasets" to create fairseq binary files for your dataset (Steps 1 and 2). Then, either modify the example scripts to point to your dataset or add an entry to style_paraphrase/style_classify/schedule.py. You will need to specify the number of classes and the total length of the dataset in that file, which is used to calculate the number of warmup steps (see the sketch after this list).
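
As a rough illustration of the warmup arithmetic mentioned in step 5 (a sketch only; the batch size, epoch count, and warmup fraction below are assumptions for the example, not values from style_classify/schedule.py):

# Illustrative warmup-step calculation (assumed constants, not the repo's exact values).
def warmup_steps(dataset_size, num_epochs=10, batch_size=32, warmup_fraction=0.06):
    updates_per_epoch = dataset_size // batch_size
    total_updates = updates_per_epoch * num_epochs
    return int(total_updates * warmup_fraction)

print(warmup_steps(dataset_size=50000))  # -> 937 warmup updates for a 50k-example dataset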

Evaluation

Please check style_paraphrase/evaluation/README.md for more details.

Custom Datasets

Create a folder for your new dataset at datasets/new_dataset. Place your plaintext train/dev/test splits in this folder as train.txt, dev.txt, and test.txt, with one instance per line (note that the model truncates sequences longer than 50 subwords). Add train.label, dev.label, and test.label files with the same number of lines as the corresponding .txt files; these contain the style label of each instance. See this folder for examples of label files.
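
Before running the conversion steps below, a quick consistency check on the new folder can save a failed run (an illustrative sketch, not a script shipped with the repository):

import os

# Check that every split has a label file with a matching number of lines.
dataset = "datasets/new_dataset"
for split in ["train", "dev", "test"]:
    with open(os.path.join(dataset, split + ".txt")) as f:
        num_text = sum(1 for _ in f)
    with open(os.path.join(dataset, split + ".label")) as f:
        num_labels = sum(1 for _ in f)
    assert num_text == num_labels, f"{split}: {num_text} instances but {num_labels} labels"
    print(f"{split}: {num_text} instances")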

  1. To convert a plaintext dataset into its BPE form, run the command,
python datasets/dataset2bpe.py --dataset datasets/new_dataset

Note that this process is reversible. To convert a BPE file back into its raw text form: python datasets/bpe2text.py --input <input> --output <output>.

  2. Next, to convert the BPE codes to fairseq binaries and build a label dictionary, first make sure you have downloaded RoBERTa and set up the $ROBERTA_LARGE variable in your .bashrc (see "Setup" for more details). Then run,
datasets/bpe2binary.sh datasets/new_dataset
  3. To train inverse paraphrasers you will need to paraphrase the dataset. First, download the pretrained model paraphraser_gpt2_large from here. After downloading the pretrained paraphrase model, run the command,
python datasets/paraphrase_splits.py --dataset datasets/new_dataset
  4. Add an entry to the DATASET_CONFIG dictionary in style_paraphrase/dataset_config.py, customizing the configuration if needed. For example,
"datasets/new_dataset": BASE_CONFIG
  5. Enter your dataset in the hyperparameters file and run python style_paraphrase/schedule.py.

Custom Paraphrase data

You can preprocess a TSV file of sentence pairs into a compatible format using,

python datasets/prepare_paraphrase_data.py \
    --input_file input.tsv \
    --output_folder datasets/custom_paraphrase_data \
    --train_fraction 0.95
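
For reference, a minimal way to build such an input file (a sketch; it assumes input.tsv holds one tab-separated sentence pair per line, which is what a TSV of sentence pairs suggests):

# Write a tiny example input.tsv of paraphrase pairs (illustrative only).
pairs = [
    ("the movie was fantastic", "the film was great"),
    ("he arrived late to the meeting", "he showed up to the meeting late"),
]
with open("input.tsv", "w") as f:
    for source, target in pairs:
        f.write(source + "\t" + target + "\n")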

Citation

If you find this repository useful, please cite us:

@inproceedings{style20,
  author = {Kalpesh Krishna and John Wieting and Mohit Iyyer},
  booktitle = {Empirical Methods in Natural Language Processing},
  year = {2020},
  title = {Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}

More Repositories

  1. rankgen (Python, 136 stars): Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arxiv.org/abs/2205.09726).
  2. ai-detection-paraphrases (Python, 128 stars): Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense" (https://arxiv.org/abs/2303.13408).
  3. squash-generation (Python, 95 stars): Official code and data repository for our ACL 2019 long paper "Generating Question-Answer Hierarchies" (https://arxiv.org/abs/1906.02622).
  4. hurdles-longform-qa (Python, 46 stars): Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://arxiv.org/abs/2103.06332).
  5. longeval-summarization (Python, 41 stars): Official repository for our EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" (https://arxiv.org/abs/2301.13298).
  6. logic-rules-sentiment (Python, 32 stars): Code and dataset for our EMNLP 2018 paper "Revisiting the Importance of Logic Rules in Sentiment Classification".
  7. squash-website (JavaScript, 20 stars): Official demo repository for our ACL 2019 long paper "Generating Question-Answer Hierarchies".
  8. relic-retrieval (Python, 20 stars): Official codebase accompanying our ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu).
  9. Weather-Prediction-TensorFlow (16 stars): Basic weather prediction software powered by TensorFlow.
  10. CDEEP-Downloader (Python, 12 stars): Python scripts to download course videos off CDEEP.
  11. blind-dehazing (Python, 11 stars): An implementation of the ICCP '16 paper "Blind Dehazing Using Internal Patch Recurrence".
  12. tf-sentence-classification (Python, 10 stars): A TensorFlow 1.1 implementation of Yoon Kim's paper "Convolutional Neural Networks for Sentence Classification".
  13. ecg-analysis (Python, 9 stars): ECG analysis to classify anterior myocardial infarction cases.
  14. allennlp-probe-hw (8 stars): A homework assignment on probe tasks designed in AllenNLP for UMass Amherst's graduate NLP course (690D).
  15. martiansideofthemoon.github.io (HTML, 7 stars): My personal website and blog (http://martiansideofthemoon.github.io).
  16. macro-action-rl (C++, 7 stars): An implementation of five reinforcement learning algorithms to simulate macro actions for the HFO problem.
  17. mixmatch-lxmert (Python, 6 stars)
  18. brittle-fracture-simulation (Python, 5 stars): An implementation of the paper http://graphics.berkeley.edu/papers/Obrien-GMA-1999-08/.
  19. ASR-and-Language-Papers (5 stars): An organized list of papers and resources used by me in ASR and language modelling.
  20. 8-PSK-Costas-Loop (CMake, 5 stars): A GNURadio implementation of an 8 PSK Costas Loop.
  21. Microprocessor-Projects (VHDL, 5 stars): A set of two microprocessor projects built as part of EE 309 / 337 at IIT Bombay.
  22. Music-Scrapers (Python, 4 stars)
  23. diversity-sampling (C++, 4 stars): An implementation of M-best diversity sampling for interactive segmentation and language generation using neural language models.
  24. resume (TeX, 2 stars): My different resume files.
  25. Hand-Controlled-Ubuntu-Launcher (Python, 2 stars): Opens a webcam and, based on the number of fingers raised, opens an Ubuntu launcher application.
  26. CS101-Project (C++, 2 stars): A Pyraminx utility kit consisting of an Android app, a basic Java server, and Allegro-based utilities to help speedcubers; uses BFS to compute shortest solutions to the Pyraminx.
  27. Photometric-Redshifts (Python, 2 stars): We attempt to estimate redshifts using machine learning (with neural networks) on photometric data.
  28. Computer-Graphics (C++, 1 star): A set of assignments for the CS475m course at IIT Bombay.
  29. cs347-assignments (C, 1 star)
  30. research-exchange (JavaScript, 1 star): A collaborative research paper annotation tool.
  31. Analog-Sampling-and-Storage (VHDL, 1 star): A VHDL implementation of interfacing with an ADC that stores data in a Hitachi SRAM; the data can be retrieved later at one sample per millisecond, and the design stores up to 8 seconds of data.