• Stars: 1,640
• Rank: 28,498 (top 0.6%)
• Language: Python
• License: Apache License 2.0
• Created: about 5 years ago
• Updated: about 2 months ago



Pyserini


Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. Retrieval using sparse representations is provided via integration with our group's Anserini IR toolkit, which is built on Lucene. Retrieval using dense representations is provided via integration with Facebook's Faiss library.

Pyserini is primarily designed to provide effective, reproducible, and easy-to-use first-stage retrieval in a multi-stage ranking architecture. Our toolkit is self-contained as a standard Python package and comes with queries, relevance judgments, pre-built indexes, and evaluation scripts for many commonly used IR test collections. With Pyserini, it's easy to reproduce runs on a number of standard IR test collections!

For additional details, our paper in SIGIR 2021 provides a nice overview.

⁉️ Important Note: Lucene 8 to Lucene 9 Transition

In 2022, Pyserini underwent a transition from Lucene 8 to Lucene 9. Most of the pre-built indexes have been rebuilt using Lucene 9, but there are a few still based on Lucene 8.

More details:

What's the impact? Indexes built with Lucene 8 are not fully compatible with Lucene 9 code (see Anserini #1952). The workaround is to disable consistent tie-breaking, which happens automatically if a Lucene 8 index is detected by Pyserini. However, Lucene 9 code running on Lucene 8 indexes will give slightly different results than Lucene 8 code running on Lucene 8 indexes. Note that Lucene 8 code is not able to read indexes built with Lucene 9.

Why is this necessary? Although disruptive, an upgrade to Lucene 9 is necessary to take advantage of Lucene's HNSW indexes, which will increase the capabilities of Pyserini and open up the design space of dense/sparse hybrids.

🎬 Installation

Install via PyPI (requires Python 3.8+):

pip install pyserini

Sparse retrieval depends on Anserini, which is itself built on Lucene, and thus Java 11.

Dense retrieval depends on neural networks and requires a more complex set of dependencies. A pip installation will automatically pull in the 🤗 Transformers library to satisfy the package requirements. Pyserini also depends on PyTorch and Faiss, but since these packages may require platform-specific custom configuration, they are not explicitly listed in the package requirements. We leave the installation of these packages to you.
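
For example, on a CPU-only machine, one common route (an assumption about your platform, not an official version pin) is:

pip install torch faiss-cpu

With those in place, dense retrieval over a pre-built index looks roughly like the sketch below; the encoder and index names are illustrative and vary across releases, so consult the current list of pre-built indexes:

from pyserini.search.faiss import FaissSearcher, TctColBertQueryEncoder

# Query encoder (a TCT-ColBERT model from the Hugging Face Hub) plus a
# pre-built Faiss index; the names here are examples, not the only options.
encoder = TctColBertQueryEncoder('castorini/tct_colbert-v2-hnp-msmarco')
searcher = FaissSearcher.from_prebuilt_index('msmarco-passage-tct_colbert-v2-hnp-bf', encoder)
hits = searcher.search('what is a lobster roll')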

The software ecosystem is rapidly evolving and a potential source of frustration is incompatibility among different versions of underlying dependencies. We provide additional detailed installation instructions here.

If you're planning on just using Pyserini, then the pip instructions above are fine. However, if you're planning on contributing to the codebase or want to work with the latest not-yet-released features, you'll need a development installation. Instructions are provided here.

🙋 How do I search?

Pyserini supports several classes of retrieval models: sparse retrieval (e.g., BM25 via Lucene indexes), dense retrieval (e.g., nearest-neighbor search over transformer-encoded representations via Faiss indexes), and hybrid combinations of the two.

See this guide for details on how to search common corpora in IR and NLP research (e.g., MS MARCO, NaturalQuestions, BEIR, etc.) using indexes that we have already built for you; a minimal sketch follows.
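
Assuming the pip installation above, sparse (BM25) retrieval over a pre-built MS MARCO passage index looks like this; the index name is illustrative and may differ across releases:

from pyserini.search.lucene import LuceneSearcher

# Download (on first use) and open a pre-built Lucene index, then run a BM25 query.
searcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage')
hits = searcher.search('what is a lobster roll?')

# Print the top-10 docids and scores.
for i in range(10):
    print(f'{i+1:2} {hits[i].docid:15} {hits[i].score:.5f}')

Each hit exposes a docid and a retrieval score, ranked from most to least relevant.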

Once you get the top-k results, you'll actually want to fetch the document text... See this guide for how.
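
Continuing the sketch above, the stored form of a document can be fetched directly from the searcher, assuming the index stores the raw documents (our pre-built indexes generally do):

# Look up a document by docid and print its raw stored form (e.g., the original JSON).
doc = searcher.doc(hits[0].docid)
print(doc.raw())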

🙋 How do I index my own corpus?

Well, it depends on what type of retrieval model you want to search with: the steps differ for sparse and dense indexes. This guide describes the details; a sketch of building a sparse (Lucene) index appears below.
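
For concreteness, here is a sketch of building a sparse (Lucene) index over a corpus of JSONL documents, where each line is a JSON object with id and contents fields; the paths are placeholders:

python -m pyserini.index.lucene \
  --collection JsonCollection \
  --input path/to/corpus \
  --index indexes/my-index \
  --generator DefaultLuceneDocumentGenerator \
  --threads 1 \
  --storePositions --storeDocvectors --storeRaw

The --storeRaw option keeps the original JSON in the index so that document text can be fetched later, as shown in the search section above.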

🙋 Additional FAQs

⚗️ Reproducibility

With Pyserini, it's easy to reproduce runs on a number of standard IR test collections! We provide a number of pre-built indexes that directly support reproducibility "out of the box".

In our SIGIR 2022 paper, we introduced "two-click reproductions" that allow anyone to reproduce experimental runs with only two clicks (i.e., copy and paste). Documentation is organized into reproduction matrices for different corpora that summarize the experimental conditions and query sets.

For more details, see our paper on Building a Culture of Reproducibility in Academic Research.

Programmatic execution of the reproductions

To run the MS MARCO reproductions programmatically, see instructions on each individual page above. For all the others:

python scripts/repro_matrix/run_all_beir.py
python scripts/repro_matrix/run_all_mrtydi.py
python scripts/repro_matrix/run_all_miracl.py
python scripts/repro_matrix/run_all_odqa.py --topics nq
python scripts/repro_matrix/run_all_odqa.py --topics tqa

And to generate the nicely formatted documentation pages:

python scripts/repro_matrix/generate_html_beir.py > docs/2cr/beir.html
python scripts/repro_matrix/generate_html_mrtydi.py > docs/2cr/mrtydi.html
python scripts/repro_matrix/generate_html_miracl.py > docs/2cr/miracl.html
python scripts/repro_matrix/generate_html_odqa.py > docs/2cr/odqa.html

Additional reproduction guides below provide detailed step-by-step instructions.

Sparse Retrieval

Dense Retrieval

Hybrid Sparse-Dense Retrieval
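
For orientation, a minimal hybrid sketch that fuses sparse and dense results via Pyserini's HybridSearcher; as before, the index and encoder names are illustrative and vary across releases:

from pyserini.search.lucene import LuceneSearcher
from pyserini.search.faiss import FaissSearcher, TctColBertQueryEncoder
from pyserini.search.hybrid import HybridSearcher

# Sparse (BM25) and dense (Faiss) searchers over the same corpus...
ssearcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage')
encoder = TctColBertQueryEncoder('castorini/tct_colbert-v2-hnp-msmarco')
dsearcher = FaissSearcher.from_prebuilt_index('msmarco-passage-tct_colbert-v2-hnp-bf', encoder)

# ...combined by interpolating their scores.
hsearcher = HybridSearcher(dsearcher, ssearcher)
hits = hsearcher.search('what is a lobster roll')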

Available Corpora

| Corpus | Size | MD5 Checksum |
|---|---|---|
| MS MARCO V1 passage: uniCOIL (noexp) | 2.7 GB | f17ddd8c7c00ff121c3c3b147d2e17d8 |
| MS MARCO V1 passage: uniCOIL (d2q-T5) | 3.4 GB | 78eef752c78c8691f7d61600ceed306f |
| MS MARCO V1 doc: uniCOIL (noexp) | 11 GB | 11b226e1cacd9c8ae0a660fd14cdd710 |
| MS MARCO V1 doc: uniCOIL (d2q-T5) | 19 GB | 6a00e2c0c375cb1e52c83ae5ac377ebb |
| MS MARCO V2 passage: uniCOIL (noexp) | 24 GB | d9cc1ed3049746e68a2c91bf90e5212d |
| MS MARCO V2 passage: uniCOIL (d2q-T5) | 41 GB | 1949a00bfd5e1f1a230a04bbc1f01539 |
| MS MARCO V2 doc: uniCOIL (noexp) | 55 GB | 97ba262c497164de1054f357caea0c63 |
| MS MARCO V2 doc: uniCOIL (d2q-T5) | 72 GB | c5639748c2cbad0152e10b0ebde3b804 |
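
The checksums can be used to verify a download before unpacking; a small sketch (the local filename is hypothetical):

import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Stream the file in 1 MB chunks to avoid loading tens of GB into memory.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Example (hypothetical local filename), checked against the table above:
# assert md5sum('msmarco-passage-unicoil-noexp.tar') == 'f17ddd8c7c00ff121c3c3b147d2e17d8'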

📃 Additional Documentation

ℹ️ Release History

Additional technical notes

Through v0.11.0.0, Pyserini releases adopted the convention X.Y.Z.W, where X.Y.Z tracks the version of Anserini and W distinguishes different releases on the Python end. Starting with Anserini v0.12.0, Anserini and Pyserini versions have been decoupled.

Anserini is designed to work with JDK 11. A change in the JRE path layout after JDK 9 broke pyjnius 1.2.0, as documented in this issue and also reported in Anserini here and here. The problem was fixed with pyjnius 1.2.1 (released December 2019). This notebook documents the previous error, and this notebook documents the fix.

References

If you use Pyserini, please cite the following paper:

@INPROCEEDINGS{Lin_etal_SIGIR2021_Pyserini,
   author = "Jimmy Lin and Xueguang Ma and Sheng-Chieh Lin and Jheng-Hong Yang and Ronak Pradeep and Rodrigo Nogueira",
   title = "{Pyserini}: A {Python} Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations",
   booktitle = "Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)",
   year = 2021,
   pages = "2356--2362",
}

🙏 Acknowledgments

This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

More Repositories

| Repository | Description | Language | Stars |
|---|---|---|---|
| anserini | Anserini is a Lucene toolkit for reproducible information retrieval research | Java | 1,025 |
| daam | Diffusion attentive attribution maps for interpreting Stable Diffusion | Jupyter Notebook | 657 |
| hedwig | PyTorch deep learning models for document classification | Python | 591 |
| honk | PyTorch implementations of neural network models for keyword spotting | Python | 511 |
| docTTTTTquery | docTTTTTquery document expansion model | Python | 351 |
| pygaggle | A gaggle of deep neural architectures for text ranking and question answering, designed for Pyserini | Jupyter Notebook | 339 |
| rank_llm | Repository for prompt-decoding using LLMs (GPT3.5, GPT4, Vicuna, and Zephyr) | Python | 282 |
| BuboQA | Simple question answering over knowledge graphs (Mohammed et al., NAACL 2018) | Python | 281 |
| howl | Wake word detection modeling toolkit for Firefox Voice, supporting open datasets like Speech Commands and Common Voice | Python | 198 |
| castor | PyTorch deep learning models for text processing | Python | 179 |
| DeeBERT | DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference | Python | 152 |
| birch | Document ranking via sentence modeling using BERT | Python | 143 |
| covidex | A multi-stage neural search engine for the COVID-19 Open Research Dataset | TypeScript | 137 |
| duobert | Multi-stage passage ranking: monoBERT + duoBERT | Python | 112 |
| MP-CNN-Torch | Multi-Perspective Convolutional Neural Networks for modeling textual similarity (He et al., EMNLP 2015) | Lua | 107 |
| mr.tydi | Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages | Python | 70 |
| anserini-notebooks | Anserini notebooks | Jupyter Notebook | 69 |
| honkling | Web app for keyword spotting using TensorflowJS | JavaScript | 69 |
| afriberta | AfriBERTa: Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages | Python | 66 |
| dhr | Dense hybrid representations for text retrieval | Python | 59 |
| data | Castorini data | Python | 59 |
| NCE-CNN-Torch | Noise-Contrastive Estimation for Question Answering with Convolutional Neural Networks (Rao et al., CIKM 2016) | Lua | 54 |
| chatty-goose | A Python framework for conversational search | Python | 40 |
| transformers-arithmetic | | Python | 38 |
| d-bert | Distilling BERT using natural language generation | Python | 35 |
| hf-spacerini | Plug-and-play search interfaces with Pyserini and Hugging Face | Python | 32 |
| ragnarok | Retrieval-Augmented Generation battle! | Python | 32 |
| anserini-tools | Evaluation tools shared across anserini, pyserini, and pygaggle | Python | 28 |
| bertserini | BERTserini | Python | 25 |
| SimpleDBpediaQA | Simple QA over knowledge graphs on DBpedia | Python | 25 |
| onboarding | Onboarding guide to Jimmy Lin's research group at the University of Waterloo | | 24 |
| berxit | | Python | 21 |
| umbrela | | Python | 20 |
| VDPWI-NN-Torch | Very Deep Pairwise Word Interaction Neural Networks for modeling textual similarity (He and Lin, NAACL/HLT 2016) | Lua | 19 |
| perm-sc | Official codebase for permutation self-consistency | Python | 16 |
| LiT5 | | Python | 15 |
| TREC-COVID | TREC-COVID results: a mirror of data on the TREC website in a more convenient format | Roff | 14 |
| honk-models | Pre-trained models for Honk | | 11 |
| howl-deploy | JavaScript deployment for Howl, the wake word detection modeling toolkit for Firefox Voice | JavaScript | 10 |
| Tweets2013-IA | The Tweets2013 Internet Archive collection | Scala | 10 |
| AfriTeVa-keji | | Python | 10 |
| TrecQA-NegEx | Code and dataset for SIGIR 2017 short paper "Automatically Extracting High-Quality Negative Examples for Answer Selection in Question Answering" | Python | 10 |
| meanmax | MeanMax estimators | Python | 9 |
| cqe | | Python | 9 |
| SM-CNN-Torch | Torch implementation of Severyn and Moschitti's SIGIR 2015 CNN model for question answering | Lua | 9 |
| ONNX-demo | | Python | 8 |
| anserini-notebooks-afirm2020 | Colab notebooks for AFIRM '20 | Jupyter Notebook | 7 |
| serverless-bert-reranking | | Python | 7 |
| parrot | Keyword spotting using audio from speech synthesis services and YouTube | Python | 7 |
| touche-error-analysis | A reproduction study of the Touché 2020 dataset in the BEIR benchmark | Python | 7 |
| earlyexiting-monobert | | Python | 7 |
| afriteva | Text-2-text for African languages | Python | 6 |
| tct_colbert | | Python | 6 |
| transformers-selective | | Python | 5 |
| serverless-inference | Neural network inference on serverless architecture | Python | 5 |
| norbert | NorBERT: Anserini + dl4marco-bert | Python | 4 |
| anserini-spark | Anserini-Spark integration | Java | 3 |
| rank_llm_data | | | 3 |
| numbert | Passage ranking library using various pretrained LMs | Python | 3 |
| kim-cnn-vis | An in-browser visualization of Kim CNN | JavaScript | 3 |
| replicate-lce | | Python | 3 |
| kws-gen-data | Data for KWS generator | | 2 |
| pyserini-data | | Python | 2 |
| BuboQA-models | | | 2 |
| candle | PyTorch utilities for parameter pruning and multiplies reduction | Python | 2 |
| gooselight2 | Search frontend for Anserini | Ruby | 2 |
| africlirmatrix | AfriCLIRMatrix is a test collection for cross-lingual information retrieval research in 15 diverse African languages | | 2 |
| biasprobe | | Python | 2 |
| sigtestv | SIGnificance TESTing Violations: an end-to-end toolkit for evaluating neural networks | Python | 1 |
| howl-models | | | 1 |
| SolrAnserini | Anserini integration with Solr | Python | 1 |
| gooselight | 🦆 Anserini + Blacklight 🦆 | Ruby | 1 |
| anlessini | | Java | 1 |
| honkling-models | | JavaScript | 1 |
| BuboQA-data | Hosting dataset for BuboQA | | 1 |
| ragnarok_data | | | 1 |