  • Stars: 136
  • Rank: 266,189 (Top 6%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 2 years ago
  • Updated: about 1 year ago


Repository Details

Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arxiv.org/abs/2205.09726).

RankGen - Improving Text Generation with Large Ranking Models


This is the official repository for our EMNLP 2022 paper, RankGen - Improving Text Generation with Large Ranking Models. RankGen is a 1.2-billion-parameter encoder model that maps prefixes and generations from any pretrained English language model into a shared vector space. RankGen can be used to rerank multiple full-length samples from an LM, and it can also be incorporated as a scoring function into beam search to significantly improve generation quality (0.85 vs 0.77 MAUVE, 75% preference according to human annotators who are English writers). RankGen can also be used as a dense retriever, and it achieves state-of-the-art performance on literary retrieval.
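To make the reranking idea concrete, here is a minimal, self-contained sketch: the embeddings are made-up toy vectors standing in for RankGen encoder outputs, and the candidate continuation whose suffix vector has the highest dot product with the prefix vector wins.

```python
# Toy sketch of over-generate-and-rerank in a shared vector space. The vectors
# below are invented stand-ins; in practice they come from the RankGen encoder.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rerank(prefix_vec, candidates):
    # Sort candidate (text, suffix_vector) pairs by dot product, best first.
    return sorted(candidates, key=lambda c: dot(prefix_vec, c[1]), reverse=True)

prefix_vec = [0.9, 0.1, 0.0]  # hypothetical prefix embedding
candidates = [
    ("on-topic continuation", [0.8, 0.2, 0.1]),
    ("off-topic continuation", [0.0, 0.1, 0.9]),
]
best_text, _ = rerank(prefix_vec, candidates)[0]
print(best_text)  # the candidate most aligned with the prefix
```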

This repository contains human evaluation data, links to HuggingFace-compatible model checkpoints, and code to integrate RankGen in beam search on HuggingFace models. RankGen is trained by fine-tuning the T5-XL encoder using the T5X library.

Updates

  • (Mar 2023) The training data for RankGen is now available (PG19 and Wiki splits)! You can get them on Google Cloud (gs://gresearch/rankgen/rankgen_pp_wiki_v1.zip, gs://gresearch/rankgen/rankgen_pp_pg19_v1.zip)
  • (Nov 2022) We have updated our arXiv version to show that RankGen beats newer decoding strategies like contrastive search, contrastive decoding and eta sampling!
  • (July 2022) RankGen is now a PyPI package, just run pip install rankgen to use it!
  • (July 2022) RankGen checkpoints are now available on the HuggingFace Model Hub (link)!

Model checkpoints

All RankGen checkpoints are available on the HuggingFace Model Hub - link

We recommend using RankGen-XL-all.

Checkpoint          Size   HF Hub Model Name
RankGen-base-all    0.1B   kalpeshk2011/rankgen-t5-base-all
RankGen-large-all   0.3B   kalpeshk2011/rankgen-t5-large-all
RankGen-XL-all      1.2B   kalpeshk2011/rankgen-t5-xl-all
RankGen-XL-PG19     1.2B   kalpeshk2011/rankgen-t5-xl-pg19

Older versions of the checkpoints:

RankGen XL checkpoints compatible with T5XEmbeddingGeneratorLegacy - here

T5X JAX checkpoints (base, large, XL) - here

Setup

Requirements (pip will install these dependencies for you)

Python 3.7+, torch (CUDA recommended), transformers

Installation

(from PyPI)

python3.7 -m virtualenv rankgen-venv
source rankgen-venv/bin/activate
pip install rankgen

(from source)

python3.7 -m virtualenv rankgen-venv
source rankgen-venv/bin/activate
git clone https://github.com/martiansideofthemoon/rankgen
cd rankgen
pip install --editable .

Data Download / Test

Get the data here and place the folder in the root directory. Alternatively, use gdown as shown below,

gdown --folder https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4

Run the test script to make sure the RankGen checkpoint has loaded correctly,

python -m rankgen.test_rankgen_encoder --model_path kalpeshk2011/rankgen-t5-base-all

### Expected output
0.0009239262409127233
0.0011521980725477804

Using RankGen

Loading RankGen is simple with the HuggingFace APIs, but we suggest using RankGenEncoder, a small wrapper around the HuggingFace APIs that handles preprocessing and tokenization automatically. See rankgen/test_rankgen_encoder.py for an example of its usage, or see below.

from rankgen import RankGenEncoder, RankGenGenerator

rankgen_encoder = RankGenEncoder("kalpeshk2011/rankgen-t5-xl-all")

Encoding text to prefix/suffix vectors

prefix_vectors = rankgen_encoder.encode(["This is a prefix sentence."], vectors_type="prefix")
suffix_vectors = rankgen_encoder.encode(["This is a suffix sentence."], vectors_type="suffix")

Generating text

# use a HuggingFace compatible language model
generator = RankGenGenerator(rankgen_encoder=rankgen_encoder, language_model="gpt2-medium")

inputs = ["Whatever might be the nature of the tragedy it would be over with long before this, and those moving black spots away yonder to the west, that he had discerned from the bluff, were undoubtedly the departing raiders. There was nothing left for Keith to do except determine the fate of the unfortunates, and give their bodies decent burial. That any had escaped, or yet lived, was altogether unlikely, unless, perchance, women had been in the party, in which case they would have been borne away prisoners."]

# Baseline nucleus sampling
print(generator.generate_single(inputs, top_p=0.9)[0][0])
# Over-generate and re-rank
print(generator.overgenerate_rerank(inputs, top_p=0.9, num_samples=10)[0][0])
# Beam search
print(generator.beam_search(inputs, top_p=0.9, num_samples=10, beam_size=2)[0][0])
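At a high level, beam_search above interleaves over-generation with ranking at each step. A schematic version of that loop is sketched below; the sampler and scorer are toy placeholders, not the rankgen API (a real setup would sample from an LM and score with RankGen's prefix/suffix vectors).

```python
# Schematic sketch of beam search guided by a ranking score. The sampler and
# scorer below are toy placeholders, NOT the rankgen API.

def sample_next(prefix, num_samples):
    # Placeholder "LM": deterministically propose a few continuations.
    return [prefix + " " + w for w in ("alpha", "beta", "gamma")][:num_samples]

def score(candidate):
    # Placeholder scorer (counts the letter 'a'); RankGen would instead use a
    # prefix-vector / suffix-vector dot product.
    return candidate.count("a")

def ranked_beam_search(prefix, num_samples=3, beam_size=2, steps=2):
    beams = [prefix]
    for _ in range(steps):
        # Over-generate continuations for every beam, then keep the top scorers.
        candidates = [c for b in beams for c in sample_next(b, num_samples)]
        candidates.sort(key=score, reverse=True)
        beams = candidates[:beam_size]
    return beams[0]

print(ranked_beam_search("Once upon a time"))
```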

Reproducing experiments in the paper

Running beam search with RankGen

The main file is rankgen/rankgen_beam_search.py. To execute it,

python rankgen/rankgen_beam_search.py \
    --dataset rankgen_data/wiki.jsonl \
    --rankgen_encoder kalpeshk2011/rankgen-t5-xl-all \
    --num_tokens 20 --num_samples 10 --beam_size 2 \
    --output_file outputs_beam/wiki_t5_xl_beam_2_tokens_20_samples_10.jsonl

Evaluating using MAUVE (make sure the JSONL file has several thousand generations for meaningful MAUVE scores; we used 7,713 in our experiments),

python rankgen/score_multi_beam.py --dataset outputs_beam/wiki_t5_xl_beam_2_tokens_10_samples_10.jsonl

Suffix Identification with GPT2

The main file is rankgen/gpt2_score.py. To execute it,

mkdir gold-beats-neg-outputs
python rankgen/gpt2_score.py \
  --dataset rankgen_data/hellaswag_val.tsv \
  --model_size xl \
  --metric avg_conditional \
  --num_negatives 3

The corresponding data files can be found in the same Google Drive folder.

Human evaluation data

We conducted our human evaluation on Upwork, hiring English teachers and writers. We performed blind A/B testing between RankGen and nucleus sampling, and asked annotators to provide a 1-3 sentence explanation for each judgment. You can find all 600 annotations across two files in human-eval-data. To compute the evaluation scores, run

python rankgen/score_ab_text.py
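For intuition, the preference number reported in the paper is just the fraction of blind comparisons in which annotators picked RankGen. A toy tally is sketched below; the records and the "preferred" field are hypothetical stand-ins, not the actual schema of the files in human-eval-data.

```python
# Toy tally of blind A/B preferences. These records and the "preferred" field
# are hypothetical; the real annotations live in human-eval-data.

annotations = [
    {"preferred": "rankgen", "explanation": "more coherent"},
    {"preferred": "rankgen", "explanation": "stays on topic"},
    {"preferred": "nucleus", "explanation": "more creative"},
    {"preferred": "rankgen", "explanation": "fewer repetitions"},
]

wins = sum(a["preferred"] == "rankgen" for a in annotations)
preference = 100 * wins / len(annotations)
print(f"RankGen preferred in {preference:.0f}% of comparisons")
```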

Citation Information

If you use RankGen, please cite it as follows:

@inproceedings{rankgen22,
  author = {Kalpesh Krishna and Yapei Chang and John Wieting and Mohit Iyyer},
  booktitle = {Empirical Methods in Natural Language Processing},
  year = {2022},
  title = {RankGen: Improving Text Generation with Large Ranking Models},
}

More Repositories

 1. style-transfer-paraphrase (HTML, 227 stars): Official code and data repository for our EMNLP 2020 long paper "Reformulating Unsupervised Style Transfer as Paraphrase Generation" (https://arxiv.org/abs/2010.05700).
 2. ai-detection-paraphrases (Python, 128 stars): Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense" (https://arxiv.org/abs/2303.13408).
 3. squash-generation (Python, 95 stars): Official code and data repository for our ACL 2019 long paper "Generating Question-Answer Hierarchies" (https://arxiv.org/abs/1906.02622).
 4. hurdles-longform-qa (Python, 46 stars): Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://arxiv.org/abs/2103.06332).
 5. longeval-summarization (Python, 41 stars): Official repository for our EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" (https://arxiv.org/abs/2301.13298).
 6. logic-rules-sentiment (Python, 32 stars): Code and dataset for our EMNLP 2018 paper "Revisiting the Importance of Logic Rules in Sentiment Classification".
 7. squash-website (JavaScript, 20 stars): Official demo repository for our ACL 2019 long paper "Generating Question-Answer Hierarchies".
 8. relic-retrieval (Python, 20 stars): Official codebase accompanying our ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu).
 9. Weather-Prediction-TensorFlow (16 stars): A basic weather prediction program powered by TensorFlow.
10. CDEEP-Downloader (Python, 12 stars): Python scripts to download course videos off CDEEP.
11. blind-dehazing (Python, 11 stars): An implementation of the ICCP '16 paper "Blind Dehazing Using Internal Patch Recurrence".
12. tf-sentence-classification (Python, 10 stars): A TensorFlow 1.1 implementation of Yoon Kim's paper "Convolutional Neural Networks for Sentence Classification".
13. ecg-analysis (Python, 9 stars): ECG analysis to classify anterior myocardial infarction cases.
14. allennlp-probe-hw (8 stars): A homework assignment on probe tasks designed in AllenNLP for UMass Amherst's graduate NLP course (690D).
15. martiansideofthemoon.github.io (HTML, 7 stars): My personal website and blog (http://martiansideofthemoon.github.io).
16. macro-action-rl (C++, 7 stars): An implementation of five reinforcement learning algorithms to simulate macro actions for the HFO problem.
17. mixmatch-lxmert (Python, 6 stars).
18. brittle-fracture-simulation (Python, 5 stars): An implementation of the paper http://graphics.berkeley.edu/papers/Obrien-GMA-1999-08/.
19. ASR-and-Language-Papers (5 stars): An organized list of papers and resources I use for ASR and language modelling.
20. 8-PSK-Costas-Loop (CMake, 5 stars): A GNURadio implementation of an 8-PSK Costas loop.
21. Microprocessor-Projects (VHDL, 5 stars): A set of two microprocessor projects completed as part of EE 309 / 337 at IIT Bombay.
22. Music-Scrapers (Python, 4 stars).
23. diversity-sampling (C++, 4 stars): An implementation of M-best diversity sampling for interactive segmentation and language generation using neural language models.
24. resume (TeX, 2 stars): My resume files.
25. Hand-Controlled-Ubuntu-Launcher (Python, 2 stars): Opens a webcam and, based on the number of fingers raised, opens an Ubuntu launcher application.
26. CS101-Project (C++, 2 stars): A Pyraminx utility kit consisting of an Android app, a basic Java server, and Allegro-based utilities to help speedcubers. It uses BFS to compute shortest solutions to the Pyraminx.
27. Photometric-Redshifts (Python, 2 stars): We attempt to estimate redshifts using machine learning (with neural networks) on photometric data.
28. Computer-Graphics (C++, 1 star): A set of assignments for the CS475m course at IIT Bombay.
29. cs347-assignments (C, 1 star).
30. research-exchange (JavaScript, 1 star): A collaborative research paper annotation tool.
31. Analog-Sampling-and-Storage (VHDL, 1 star): A VHDL implementation of interfacing with an ADC that stores data in a Hitachi SRAM. The data can be retrieved later at a rate of one sample per millisecond. Designed to store up to 8 seconds of data.