
Multi-stage passage ranking: monoBERT + duoBERT

duoBERT

duoBERT is a pairwise ranking model based on BERT that is the last stage of a multi-stage retrieval pipeline:

[Figure: duoBERT as the final stage of the multi-stage retrieval pipeline]
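
At inference time, duoBERT scores ordered pairs of candidate passages for a query, and the pairwise scores are aggregated into a single score per passage (the paper describes several aggregation strategies; SUM is the simplest). Below is a minimal sketch of SUM-style aggregation; the data layout and names are illustrative and not the repo's actual API:

from collections import defaultdict

def sum_aggregate(pairwise_scores):
    # pairwise_scores: {(doc_i, doc_j): probability that doc_i should rank above doc_j}
    # (illustrative format, not the format used by run_duobert_msmarco.py)
    scores = defaultdict(float)
    for (doc_i, _doc_j), prob in pairwise_scores.items():
        scores[doc_i] += prob  # SUM aggregation: s_i = sum_j p_{i,j}
    # Higher aggregated score = ranked earlier.
    return sorted(scores, key=scores.get, reverse=True)

ranking = sum_aggregate({
    ("d1", "d2"): 0.9, ("d2", "d1"): 0.1,
    ("d1", "d3"): 0.7, ("d3", "d1"): 0.3,
    ("d2", "d3"): 0.4, ("d3", "d2"): 0.6,
})
print(ranking)  # ['d1', 'd3', 'd2']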

To train and re-rank with monoBERT, please check this repository.

As of Jan 13th 2020, our MS MARCO leaderboard entry is the top scoring model with available code:

MS MARCO Passage Re-Ranking Leaderboard (Jan 13th 2020) | Eval MRR@10 | Dev MRR@10
SOTA - Enriched BERT base + AOA index + CAS | 0.393 | 0.408
BM25 + monoBERT + duoBERT + TCP (this code) | 0.379 | 0.390

For more details, check out our paper, Multi-stage Document Ranking with BERT (Nogueira et al., arXiv:1910.14424):

NOTE! The duoBERT model is no longer under active development and this repo is no longer being maintained. We have shifted our efforts to ranking with sequence-to-sequence models. A T5-based variant of the mono/duo design is described in an overview of our submissions to the TREC-COVID challenge, and a more detailed description of mono/duoT5 is in preparation.

Data and Trained Models

We make the following data available for download:

  • bert-large-msmarco-pretrained-only.zip: monoBERT large pretrained on the MS MARCO corpus but not finetuned on the ranking task. We pretrained this model starting from the original BERT-large WWM (Whole Word Masking) checkpoint, for 100k iterations with a batch size of 128, a learning rate of 3e-6, and 10k warmup steps. We finetuned both monoBERT and duoBERT from this checkpoint.
  • monobert-large-msmarco-pretrained-and-finetuned.zip: monoBERT large pretrained on the MS MARCO corpus and finetuned on the MS MARCO ranking task.
  • duobert-large-msmarco-pretrained-and-finetuned.zip: duoBERT large pretrained on the MS MARCO corpus and finetuned on the MS MARCO ranking task.
  • run.bm25.dev.small.tsv: Approximately 6,980,000 pairs of dev set queries and passages retrieved using BM25. In this tsv file, the first column is the query id, the second column is the passage id, and the third column is the rank of the passage. There are 1000 passages per query in this file (see the loading sketch after this list).
  • run.bm25.test.small.tsv: Approximately 6,837,000 pairs of test set queries and retrieved passages using BM25.
  • run.monobert.dev.small.tsv: Approximately 6,980,000 pairs of dev set queries and retrieved passages using BM25 and re-ranked with monoBERT. In this tsv file, the first column is the query id, the second column is the passage id, and the third column is the rank of the passage. There are 1000 passages per query in this file.
  • run.monobert.test.small.tsv: Approximately 6,837,000 pairs of test set queries and retrieved passages using BM25 and re-ranked with monoBERT.
  • run.duobert.dev.small.tsv: Approximately 6,980 x 30 pairs of dev set queries and passages re-ranked using duoBERT. In this run, the inputs to duoBERT were the top-30 passages re-ranked by monoBERT.
  • run.duobert.test.small.tsv: Approximately 6,837 x 30 pairs of test set queries and passages re-ranked using duoBERT. In this run, the inputs to duoBERT were the top-30 passages re-ranked by monoBERT.
  • dataset_train.tf: Approximately 80M pairs of training set queries and passages (40M relevant and 40M non-relevant) in the TF Record format.
  • dataset_dev.tf: Approximately 6,980 x 30 pairs of dev set queries and passages in the TF Record format. These top-30 passages will be re-ranked by duoBERT.
  • dataset_test.tf: Approximately 6,837 x 30 pairs of test set queries and passages in the TF Record format. These top-30 passages will be re-ranked by duoBERT.
  • query_doc_ids_dev.txt: Approximately 6,980 x 30 pairs of query and doc id that will be used during inference.
  • query_doc_ids_test.txt: Approximately 6,837 x 30 pairs of query and doc id that will be used during inference.
  • queries.dev.small.tsv: 6,980 queries from the MS MARCO dev set. In this tsv file, the first column is the query id, and the second is the query text.
  • queries.eval.small.tsv: 6,837 queries from the MS MARCO test (eval) set. In this tsv file, the first column is the query id, and the second is the query text.
  • qrels.dev.small.tsv: 7,437 pairs of query id and relevant passage id from the MS MARCO dev set. In this tsv file, the first column is the query id and the third column is the passage id. The other two columns (second and fourth) are not used.
  • collection.tar.gz: All passages (8,841,823) in the MS MARCO passage corpus. In this tsv file, the first column is the passage id, and the second is the passage text.
  • triples.train.small.tar.gz: Approximately 40M triples of query, relevant passage, and non-relevant passage that are used to train duoBERT.
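
All of the run files above share the same three-column layout (query id, passage id, rank), one line per query-passage pair. A minimal sketch for loading such a run into per-query ranked lists (the file name is just an example):

import csv
from collections import defaultdict

def load_run(path):
    # Read a run tsv (query_id <tab> passage_id <tab> rank) into {query_id: [passage_id, ...]}.
    run = defaultdict(list)
    with open(path, newline="") as f:
        for query_id, passage_id, rank in csv.reader(f, delimiter="\t"):
            run[query_id].append((int(rank), passage_id))
    # Sort each query's passages by rank and drop the rank column.
    return {qid: [pid for _, pid in sorted(pairs)] for qid, pairs in run.items()}

run = load_run("run.monobert.dev.small.tsv")
print(len(run))  # ~6,980 dev queries, 1000 passages each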

Download and verify the above files using the links in the table below:

File | Size | MD5 | Download
bert-large-msmarco-pretrained-only.zip | 3.44 GB | 88f1d0bd351058b1da1eb49b60c2e750 | [Dropbox]
monobert-large-msmarco-pretrained-and-finetuned.zip | 3.42 GB | db201b6433b3e605201746bda6b7723b | [Dropbox]
duobert-large-msmarco-pretrained-and-finetuned.zip | 3.43 GB | dcae7441103ae8241f16df743b75337b | [Dropbox]
run.bm25.dev.small.tsv.gz | 44 MB | 0a7802ab41999161339087186dda4145 | [Dropbox]
run.bm25.test.small.tsv.gz | 43 MB | 1ea465405f6a2467cb62015454bc88c7 | [Dropbox]
run.monobert.dev.small.tsv.gz | 44 MB | dee6065e7177facb7c740f607e40ac63 | [Dropbox]
run.monobert.test.small.tsv.gz | 43 MB | f0e16234351a0a81d83f188e72662fbd | [Dropbox]
run.duobert.dev.small.tsv.gz | 2.0 MB | 0be1f12ab7c7bd2d913d31756a8f0a19 | [Dropbox]
run.duobert.test.small.tsv.gz | 2.0 MB | 0d4f1770f8be20411ed8c00fb727103d | [Dropbox]
dataset_train.tf.gz | 8.8 GB | 7a3a6705f3662837a1e874d7ed970d27 | [Dropbox]
dataset_dev.tf.gz | 241 MB | f4966bd5426092564a59c1a1c8e34539 | [Dropbox]
dataset_test.tf.gz | 236 MB | 5387a926950b112616926fe3d475a22f | [Dropbox]
query_doc_ids_dev.txt.gz | 19 MB | 05361aead605c1b8a8cc8d71ef3ff0f8 | [Dropbox]
query_doc_ids_test.txt.gz | 19 MB | 5e657dff1e1f0748d29b291e5c731f9f | [Dropbox]
queries.dev.small.tsv | 283 KB | 41e980d881317a4a323129d482e9f5e5 | [Dropbox]
queries.eval.small.tsv | 274 KB | bafaf0b9eb23503d2a5948709f34fc3a | [Dropbox]
qrels.dev.small.tsv | 140 KB | 38a80559a561707ac2ec0f150ecd1e8a | [Dropbox]
collection.tar.gz | 987 MB | 87dd01826da3e2ad45447ba5af577628 | [Dropbox]
triples.train.small.tar.gz | 7.4 GB | c13bf99ff23ca691105ad12eab837f84 | [Dropbox]
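
To check each download against the MD5 column, a sketch like the following works (the file name and expected hash are copied from the first row of the table as an example):

import hashlib

def md5sum(path, chunk_size=2 ** 20):
    # Stream the file so multi-GB downloads do not need to fit in memory.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert md5sum("bert-large-msmarco-pretrained-only.zip") == "88f1d0bd351058b1da1eb49b60c2e750"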

All of the above files are hosted on Dropbox; use the links in the table above to download the ones you need.

Replicating our MS MARCO results with duoBERT

Here we provide instructions on how to replicate our BM25 + monoBERT + duoBERT + TCP dev run on the MS MARCO leaderboard.

NOTE 1: We run these experiments on a TPU, so you will need a Google Cloud account. Alternatively, you can use a GPU, but we haven't tried that ourselves.

NOTE 2: For instructions on how to train and run inference using monoBERT, please check this repository.

First download the following files (using the links in the table above):

  • qrels.dev.small.tsv
  • dataset_dev.tf
  • duobert-large-msmarco-pretrained-and-finetuned.zip

Unzip duobert-large-msmarco-pretrained-and-finetuned.zip and upload the files to a Google Cloud Storage bucket.
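
One way to do the upload is with the google-cloud-storage Python client (gsutil from the Cloud SDK works equally well). A minimal sketch; the bucket name and the list of checkpoint file names below are placeholders, so adjust them to the files you actually unzipped:

from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()
bucket = client.bucket("your-bucket")
for name in ["bert_config.json", "vocab.txt",
             "model.ckpt-100000.index",
             "model.ckpt-100000.meta",
             "model.ckpt-100000.data-00000-of-00001"]:  # placeholder file names
    bucket.blob(name).upload_from_filename(name)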

Create a virtual machine with a TPU on Google Cloud. Below is a command-line example that should be executed in the Google Cloud Shell (change your-tpu accordingly):

ctpu up --zone=us-central1-b --name your-tpu --tpu-size=v3-8 --disk-size-gb=250 \
  --machine-type=n1-standard-4 --preemptible --tf-version=1.15 --noconf

SSH into the virtual machine and clone the git repo:

git clone https://github.com/castorini/duobert.git

Run duoBERT in evaluation mode (change your-tpu and your-bucket accordingly):

python run_duobert_msmarco.py \
  --data_dir=gs://your-bucket \
  --bert_config_file=gs://your-bucket/bert_config.json \
  --output_dir=. \
  --init_checkpoint=gs://your-bucket/model.ckpt-100000 \
  --max_seq_length=512 \
  --do_train=False \
  --do_eval=True \
  --eval_batch_size=128 \
  --num_eval_docs=30 \
  --use_tpu=True \
  --tpu_name=your-tpu \
  --tpu_zone=us-central1-b

This inference takes approximately 4 hours on a TPU v3. Once finished, run the evaluation script:

python3 msmarco_eval.py qrels.dev.small.tsv ./msmarco_predictions_dev.tsv

The output should look like this:

#####################
MRR @10: 0.3904377586755809
QueriesRanked: 6980
#####################
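
For reference, MRR@10 averages the reciprocal rank of the first relevant passage (considering only the top 10) over all ranked queries. A minimal sketch, assuming a run dict like the one produced by the load_run sketch earlier and a qrels dict mapping each query id to its set of relevant passage ids:

def mrr_at_10(run, qrels):
    # run: {query_id: [passage_id, ...]}, qrels: {query_id: {relevant passage ids}}
    total = 0.0
    for query_id, ranking in run.items():
        for rank, passage_id in enumerate(ranking[:10], start=1):
            if passage_id in qrels.get(query_id, set()):
                total += 1.0 / rank  # reciprocal rank of the first relevant hit
                break
    return total / len(run)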

Training duoBERT

Here we provide instructions to train duoBERT. Note that a fully trained model is available in the above table.

First download the following files (using the links in the table above):

  • qrels.dev.small.tsv
  • dataset_train.tf
  • bert-large-msmarco-pretrained-only.zip

Unzip bert-large-msmarco-pretrained-only.zip and upload all files to your Google Cloud Storage bucket.

Run duoBERT in training mode (change your-tpu and your-bucket accordingly):

python run_duobert_msmarco.py \
  --data_dir=gs://your-bucket \
  --bert_config_file=gs://your-bucket/bert_config.json \
  --output_dir=gs://your-bucket/output \
  --init_checkpoint=gs://your-bucket/model.ckpt-100000 \
  --max_seq_length=512 \
  --do_train=True \
  --do_eval=False \
  --learning_rate=3e-6 \
  --train_batch_size=128 \
  --num_train_steps=100000 \
  --num_warmup_steps=10000 \
  --use_tpu=True \
  --tpu_name=your-tpu \
  --tpu_zone=us-central1-b

This training should take approximately 30 hours on a TPU v3.

Creating a TF Record dataset

Here we provide instructions to create the training, dev, and test TF Record files that are consumed by duoBERT. Note that these files are available in the above table.

Use the links from the table above to download the following files:

  • collection.tar.gz (needs to be uncompressed)
  • triples.train.small.tar.gz (needs to be uncompressed)
  • queries.dev.small.tsv
  • queries.eval.small.tsv
  • run.monobert.dev.small.tsv
  • run.monobert.test.small.tsv
  • qrels.dev.small.tsv
  • vocab.txt (available in duobert-large-msmarco-pretrained-and-finetuned.zip)

Then run the conversion script:

python convert_msmarco_to_duobert_tfrecord.py \
  --output_folder=. \
  --corpus=collection.tsv \
  --vocab_file=vocab.txt \
  --triples_train=triples.train.small.tsv \
  --queries_dev=queries.dev.small.tsv \
  --queries_test=queries.eval.small.tsv \
  --run_dev=run.monobert.dev.small.tsv \
  --run_test=run.monobert.test.small.tsv \
  --qrels_dev=qrels.dev.small.tsv \
  --num_dev_docs=30 \
  --num_test_docs=30 \
  --max_seq_length=512 \
  --max_query_length=64

This conversion takes approximately 30-50 hours and will produce the following files:

  • dataset_train.tf
  • dataset_dev.tf
  • dataset_test.tf
  • query_doc_ids_dev.txt
  • query_doc_ids_test.txt
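
As a quick sanity check, you can count the serialized examples in each generated file; the dev and test files should contain roughly 6,980 x 30 and 6,837 x 30 records, respectively. A minimal sketch, assuming the TensorFlow 1.x compat API available in the TF 1.15 setup used above:

import tensorflow as tf

# Count serialized examples without parsing them.
num_records = sum(1 for _ in tf.compat.v1.io.tf_record_iterator("dataset_dev.tf"))
print(num_records)  # expect roughly 6,980 x 30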

How do I cite this work?

@article{nogueira2019multi,
  title={Multi-stage document ranking with BERT},
  author={Nogueira, Rodrigo and Yang, Wei and Cho, Kyunghyun and Lin, Jimmy},
  journal={arXiv preprint arXiv:1910.14424},
  year={2019}
}
