  • Stars: 150
  • Rank: 240,831 (Top 5%)
  • Language: Python
  • License: Apache License 2.0
  • Created: about 4 years ago
  • Updated: about 2 years ago

Repository Details

DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference

DeeBERT

This is the code base for the paper DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference.

Code in this repository is also available in the Hugging Face Transformers repo (with minor modifications for version compatibility). Check this page for models that we have trained in advance (the latest version of the Hugging Face Transformers library is needed).

Installation

This repo is tested on Python 3.7.5, PyTorch 1.3.1, and CUDA 10.1. Using a virtualenv or conda environment is recommended, for example:

conda install pytorch==1.3.1 torchvision cudatoolkit=10.1 -c pytorch

After setting up the environment, clone this repo and install its requirements:

git clone https://github.com/castorini/deebert
cd deebert
pip install -r ./requirements.txt
pip install -r ./examples/requirements.txt
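
As an optional sanity check (an illustrative snippet, not part of the repo), you can confirm that the expected PyTorch version and CUDA support are visible before running the scripts:

import torch

print(torch.__version__)          # tested with 1.3.1
print(torch.cuda.is_available())  # should be True if CUDA 10.1 is set up correctly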

Usage

There are four scripts in the scripts folder, which can be run from the repo root, e.g., scripts/train.sh.

In each script, there are several things to modify before running:

  • path to the GLUE dataset. Check this for more details.
  • path for saving fine-tuned models. Default: ./saved_models.
  • path for saving evaluation results. Default: ./plotting. Results are printed to stdout and also saved to npy files in this directory to facilitate plotting figures and further analyses (see the loading sketch after this list).
  • model_type (bert or roberta)
  • model_size (base or large)
  • dataset (SST-2, MRPC, RTE, QNLI, QQP, or MNLI)
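
The npy files mentioned above can be read back with NumPy for plotting. Below is a minimal sketch of doing so; the file name and array layout are assumptions for illustration, since the actual names are produced by the evaluation scripts.

import numpy as np

# Hypothetical file name; check ./plotting for the files actually written by the scripts.
results = np.load("./plotting/example_result.npy", allow_pickle=True)
print(results)  # inspect the saved values before plotting them with your tool of choice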

train.sh

This is for fine-tuning and evaluating models as in the original BERT paper.

train_highway.sh

This is for fine-tuning DeeBERT models.

eval_highway.sh

This is for evaluating each exit layer for fine-tuned DeeBERT models.

eval_entropy.sh

This is for evaluating fine-tuned DeeBERT models, given a number of different early exit entropy thresholds.
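
The entropy threshold controls when an example exits early: each intermediate classifier ("off-ramp") produces a distribution over labels, and if its entropy falls below the threshold, inference stops at that layer. The snippet below is a minimal sketch of that decision rule with illustrative names; it is not the code used by the scripts.

import torch
import torch.nn.functional as F

def entropy(logits):
    # Shannon entropy of the softmax distribution over labels
    probs = F.softmax(logits, dim=-1)
    return -(probs * torch.log(probs + 1e-12)).sum(dim=-1)

def early_exit_predict(exit_logits, threshold):
    # exit_logits: per-layer classifier outputs for one example, earliest layer first (illustrative interface)
    for layer, logits in enumerate(exit_logits, start=1):
        if entropy(logits).item() < threshold or layer == len(exit_logits):
            return logits.argmax(dim=-1).item(), layer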

Citation

Please cite our paper if you find the repository useful:

@inproceedings{xin-etal-2020-deebert,
    title = "{D}ee{BERT}: Dynamic Early Exiting for Accelerating {BERT} Inference",
    author = "Xin, Ji  and
      Tang, Raphael  and
      Lee, Jaejun  and
      Yu, Yaoliang  and
      Lin, Jimmy",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.204",
    pages = "2246--2251",
}

More Repositories

1. pyserini: Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. (Python, 1,504 stars)
2. anserini: Anserini is a Lucene toolkit for reproducible information retrieval research. (Java, 986 stars)
3. daam: Diffusion attentive attribution maps for interpreting Stable Diffusion. (Jupyter Notebook, 612 stars)
4. hedwig: PyTorch deep learning models for document classification. (Python, 588 stars)
5. honk: PyTorch implementations of neural network models for keyword spotting. (Python, 504 stars)
6. docTTTTTquery: docTTTTTquery document expansion model. (Python, 346 stars)
7. pygaggle: A gaggle of deep neural architectures for text ranking and question answering, designed for Pyserini. (Jupyter Notebook, 322 stars)
8. BuboQA: Simple question answering over knowledge graphs (Mohammed et al., NAACL 2018). (Python, 280 stars)
9. rank_llm: Repository for prompt-decoding using LLMs (GPT3.5, GPT4, Vicuna, and Zephyr). (Python, 247 stars)
10. howl: Wake word detection modeling toolkit for Firefox Voice, supporting open datasets like Speech Commands and Common Voice. (Python, 191 stars)
11. castor: PyTorch deep learning models for text processing. (Python, 180 stars)
12. birch: Document ranking via sentence modeling using BERT. (Python, 142 stars)
13. covidex: A multi-stage neural search engine for the COVID-19 Open Research Dataset. (TypeScript, 136 stars)
14. duobert: Multi-stage passage ranking: monoBERT + duoBERT. (Python, 109 stars)
15. MP-CNN-Torch: Multi-Perspective Convolutional Neural Networks for modeling textual similarity (He et al., EMNLP 2015). (Lua, 107 stars)
16. anserini-notebooks: Anserini notebooks. (Jupyter Notebook, 69 stars)
17. mr.tydi: Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. (Python, 68 stars)
18. honkling: Web app for keyword spotting using TensorflowJS. (JavaScript, 68 stars)
19. afriberta: AfriBERTa: Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages. (Python, 60 stars)
20. data: Castorini data. (Python, 59 stars)
21. dhr: Dense hybrid representations for text retrieval. (Python, 55 stars)
22. NCE-CNN-Torch: Noise-Contrastive Estimation for Question Answering with Convolutional Neural Networks (Rao et al., CIKM 2016). (Lua, 54 stars)
23. chatty-goose: A Python framework for conversational search. (Python, 38 stars)
24. transformers-arithmetic (Python, 38 stars)
25. d-bert: Distilling BERT using natural language generation. (Python, 35 stars)
26. hf-spacerini: Plug-and-play search interfaces with Pyserini and Hugging Face. (Python, 30 stars)
27. SimpleDBpediaQA: Simple QA over knowledge graphs on DBpedia. (Python, 25 stars)
28. bertserini: BERTserini. (Python, 24 stars)
29. anserini-tools: Evaluation tools shared across anserini, pyserini, and pygaggle. (Python, 22 stars)
30. berxit (Python, 21 stars)
31. onboarding: Onboarding guide to Jimmy Lin's research group at the University of Waterloo. (21 stars)
32. VDPWI-NN-Torch: Very Deep Pairwise Word Interaction Neural Networks for modeling textual similarity (He and Lin, NAACL/HLT 2016). (Lua, 19 stars)
33. perm-sc: Official codebase for permutation self-consistency. (Python, 15 stars)
34. TREC-COVID: TREC-COVID results, a mirror of data on the TREC website in a more convenient format. (Roff, 14 stars)
35. LiT5 (Python, 13 stars)
36. honk-models: Pre-trained models for Honk. (11 stars)
37. howl-deploy: JavaScript deployment for Howl, the wake word detection modeling toolkit for Firefox Voice. (JavaScript, 10 stars)
38. TrecQA-NegEx: Code and dataset for the SIGIR 2017 short paper "Automatically Extracting High-Quality Negative Examples for Answer Selection in Question Answering". (Python, 10 stars)
39. Tweets2013-IA: The Tweets2013 Internet Archive collection. (Scala, 10 stars)
40. AfriTeVa-keji (Python, 10 stars)
41. meanmax: MeanMax estimators. (Python, 9 stars)
42. cqe (Python, 9 stars)
43. SM-CNN-Torch: Torch implementation of Severyn and Moschitti's SIGIR 2015 CNN model for question answering. (Lua, 9 stars)
44. ONNX-demo (Python, 8 stars)
45. anserini-notebooks-afirm2020: Colab notebooks for AFIRM '20. (Jupyter Notebook, 7 stars)
46. serverless-bert-reranking (Python, 7 stars)
47. parrot: Keyword spotting using audio from speech synthesis services and YouTube. (Python, 7 stars)
48. earlyexiting-monobert (Python, 7 stars)
49. afriteva: Text-2-Text for African languages. (Python, 6 stars)
50. tct_colbert (Python, 6 stars)
51. transformers-selective (Python, 5 stars)
52. serverless-inference: Neural network inference on serverless architecture. (Python, 5 stars)
53. norbert: NorBERT: Anserini + dl4marco-bert. (Python, 4 stars)
54. rank_llm_data (3 stars)
55. touche-error-analysis: Old is Gold? Systematic Error Analysis of Neural Retrieval Models against BM25 for Argument Retrieval. (Python, 3 stars)
56. numbert: Passage ranking library using various pretrained LMs. (Python, 3 stars)
57. anserini-spark: Anserini-Spark integration. (Java, 3 stars)
58. kim-cnn-vis: An in-browser visualization of Kim CNN. (JavaScript, 3 stars)
59. replicate-lce (Python, 3 stars)
60. kws-gen-data: Data for KWS generator. (2 stars)
61. pyserini-data (Python, 2 stars)
62. candle: PyTorch utilities for parameter pruning and multiplies reduction. (Python, 2 stars)
63. BuboQA-models (2 stars)
64. gooselight2: Search frontend for Anserini. (Ruby, 2 stars)
65. africlirmatrix: AfriCLIRMatrix is a test collection for cross-lingual information retrieval research in 15 diverse African languages. (2 stars)
66. biasprobe (Python, 2 stars)
67. sigtestv: SIGnificance TESTing Violations, an end-to-end toolkit for evaluating neural networks. (Python, 1 star)
68. howl-models (1 star)
69. SolrAnserini: Anserini integration with Solr. (Python, 1 star)
70. gooselight: 🦆 Anserini + Blacklight 🦆 (Ruby, 1 star)
71. BuboQA-data: Hosting dataset for BuboQA. (1 star)
72. anlessini (Java, 1 star)
73. honkling-models (JavaScript, 1 star)