  • Stars: 454
  • Rank: 96,373 (Top 2%)
  • Language: Python
  • Created: about 3 years ago
  • Updated: about 1 month ago

Repository Details

Tools for curating biomedical training data for large-scale language modeling

BigBIO: Biomedical Dataset Library

BigBIO (BigScience Biomedical) is an open library of biomedical dataloaders built using Hugging Face's (πŸ€—) datasets library for data-centric machine learning.

Our goals include:

  • Lightweight, programmatic access to biomedical datasets at scale
  • Promoting reproducibility in data processing
  • Better documentation for dataset provenance, licensing, and other key attributes
  • Easier generation of meta-datasets for natural language prompting and multi-task learning

Currently BigBIO provides support for:

  • 126+ biomedical datasets
  • 10+ languages
  • 12 task categories
  • Harmonized dataset schemas by task type (see the sketch after this list)
  • Metadata on licensing, coarse/fine-grained task types, domain, and more!
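
To illustrate what the harmonized schemas buy you, here is a minimal sketch of loading two different NER corpora through the shared KB schema and extracting entity mentions with the same code. The config names and field layout below are assumptions based on the BigBIO documentation; check each dataset's card on the Hub before relying on them.

from datasets import load_dataset

# Two NER corpora, both loaded through the harmonized KB schema
# (config names assumed; verify on the Hub datacards).
bc5cdr = load_dataset("bigbio/bc5cdr", name="bc5cdr_bigbio_kb", split="train")
ncbi = load_dataset("bigbio/ncbi_disease", name="ncbi_disease_bigbio_kb", split="train")

# Because both corpora share the KB schema, the same helper works for either.
def entity_mentions(example):
    # Each entity carries a type and one or more text spans in this schema.
    return [(ent["type"], " ".join(ent["text"])) for ent in example["entities"]]

print(entity_mentions(bc5cdr[0]))
print(entity_mentions(ncbi[0]))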

How to Use BigBIO

The preferred way to use these datasets is to access them from the Official BigBIO Hub.

Minimally, ensure you have the datasets library installed. Preferably, install the requirements as follows:

pip install -r requirements.txt
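
If you only plan to load datasets from the Hub, installing the datasets package on its own is usually sufficient (a minimal alternative, assuming a recent Python environment):

pip install datasets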


You can access BigBIO datasets as follows:

from datasets import load_dataset
data = load_dataset("bigbio/biosses")

In most cases, scripts load the original (source) schema of the dataset by default. You can also load the harmonized BigBIO schema, which streamlines access to the key information in a dataset for a given task type.

For example, the biosses dataset follows a pairs-based schema, in which pairs of text inputs (sentences, paragraphs) are assigned a label; for BIOSSES this is a sentence-similarity score.

from datasets import load_dataset
data = load_dataset("bigbio/biosses", name="biosses_bigbio_pairs")
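
As a rough sketch of what a record in the pairs schema contains (the field names below follow the BigBIO pairs schema as described in the documentation; treat them as assumptions and verify against the biosses datacard):

from datasets import load_dataset

data = load_dataset("bigbio/biosses", name="biosses_bigbio_pairs")

# Inspect one record from the train split (assuming one exists):
# the pairs schema couples two text inputs with a label, which for
# BIOSSES is a sentence-similarity score.
example = data["train"][0]
print(example["text_1"])
print(example["text_2"])
print(example["label"])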

Generally, you can load your datasets as follows:

# Load original schema
data = load_dataset("bigbio/<dataset_name>")

# Load BigBIO schema
data = load_dataset("bigbio/<dataset_name>", name="<dataset_name>_bigbio_<schema_name>")

Check the datacards on the Hub to see which configurations and splits are available for each dataset. You can find more information about schemas in the Documentation section below.
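
If you prefer to check programmatically which configurations and splits a dataset ships with, the datasets library can list them. A minimal sketch:

from datasets import get_dataset_config_names, load_dataset

# List every configuration published for a dataset on the Hub
# (typically a source config plus one or more *_bigbio_* configs).
configs = get_dataset_config_names("bigbio/biosses")
print(configs)

# Load one configuration and print the splits it provides.
data = load_dataset("bigbio/biosses", name=configs[0])
print(list(data.keys()))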

Benchmark Support

BigBIO supports almost all of the datasets included in other popular English biomedical benchmarks.

Task Type Dataset BigBIO (ours) BLUE BLURB BoX DUA needed
NER BC2GM βœ“ βœ“ βœ“
NER BC5-chem βœ“ βœ“ βœ“ βœ“
NER BC5-disease βœ“ βœ“ βœ“ βœ“
NER EBM PICO βœ“ βœ“
NER JNLPBA βœ“ βœ“ βœ“
NER NCBI-disease βœ“ βœ“ βœ“
RE ChemProt βœ“ βœ“ βœ“ βœ“
RE DDI βœ“ βœ“ βœ“ βœ“
RE GAD βœ“ βœ“
QA PubMedQA βœ“ βœ“ βœ“
QA BioASQ βœ“ βœ“ βœ“ βœ“
DC HoC βœ“ βœ“ βœ“ βœ“
STS BIOSSES βœ“ βœ“ βœ“
STS MedSTS * βœ“ βœ“
NER n2c2 2010 βœ“ βœ“ βœ“ βœ“
NER ShARe/CLEF 2013 * βœ“ βœ“
NLI MedNLI βœ“ βœ“ βœ“
NER n2c2 deid 2006 βœ“ βœ“ βœ“
DC n2c2 RFHD 2014 βœ“ βœ“ βœ“
NER AnatEM βœ“ βœ“
NER BC4CHEMD βœ“ βœ“
NER BioNLP09 βœ“ βœ“
NER BioNLP11EPI βœ“ βœ“
NER BioNLP11ID βœ“ βœ“
NER BioNLP13CG βœ“ βœ“
NER BioNLP13GE βœ“ βœ“
NER BioNLP13PC βœ“ βœ“
NER CRAFT * βœ“
NER Ex-PTM βœ“ βœ“
NER Linnaeus βœ“ βœ“
POS GENIA * βœ“
SA Medical Drugs βœ“ βœ“
SR COVID private
SR Cooking private
SR HRT private
SR Accelerometer private
SR Acromegaly private

* denotes a dataset implementation that is in progress

Documentation

Tutorials

TBA - Links may not be applicable yet!

Contributing

BigBIO is an open source project - your involvement is warmly welcome! If you're excited to join us, we recommend the following steps:

  • Looking for ideas? Check our Volunteer Project Board to see where we may need help.

  • Have your own idea? Contact an admin by opening an issue.

  • Implement your idea following the guidelines set in the official contributing guide.

  • Wait for admin approval; the review process is iterative, and accepted changes are merged into the main repository.

Currently, only admins merge accepted changes to the Hub.

Feel free to join our Discord!

Citing

If you use BigBIO in your work, please cite:

@article{fries2022bigbio,
	title = {
		BigBIO: A Framework for Data-Centric Biomedical Natural Language
		Processing
	},
	author = {
		Fries, Jason Alan and Weber, Leon and Seelam, Natasha and Altay,
		Gabriel and Datta, Debajyoti and Garda, Samuele and Kang, Myungsun
		and Su, Ruisi and Kusa, Wojciech and Cahyawijaya, Samuel and others
	},
	journal = {arXiv preprint arXiv:2206.15076},
	year = 2022
}

Acknowledgements

BigBIO is an open-source, community effort made possible by many volunteers as part of BigScience and the Biomedical Hackathon.

More Repositories

1. petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python, 9,056 stars)
2. promptsource - Toolkit for creating, sharing and using natural language prompts (Python, 2,627 stars)
3. Megatron-DeepSpeed - Ongoing research training transformer language models at scale, including BERT & GPT-2 (Python, 1,327 stars)
4. bigscience - Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data (Shell, 977 stars)
5. xmtf - Crosslingual Generalization through Multitask Finetuning (Jupyter Notebook, 510 stars)
6. t-zero - Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) (Python, 456 stars)
7. data-preparation - Code used for sourcing and cleaning the BigScience ROOTS corpus (Jupyter Notebook, 301 stars)
8. lam - Libraries, Archives and Museums (LAM) (79 stars)
9. data_tooling - Tools for managing datasets for governance and training (HTML, 77 stars)
10. multilingual-modeling - BLOOM+1: Adapting the BLOOM model to support a new unseen language (Python, 69 stars)
11. evaluation - Code and data for the Evaluation WG (Python, 41 stars)
12. data_sourcing - Tools developed by the Data Sourcing Working Group (Python, 31 stars)
13. metadata - Experiments on including metadata such as URLs, timestamps, website descriptions and HTML tags during pretraining (Python, 30 stars)
14. model_card (24 stars)
15. tokenization (Python, 11 stars)
16. carbon-footprint - A repository for codecarbon logs (Jupyter Notebook, 10 stars)
17. bloom-dechonk - A repo for running model shrinking experiments (Python, 10 stars)
18. historical_texts - BigScience working group on language models for historical texts (Jupyter Notebook, 8 stars)
19. catalogue_data - Scripts to prepare catalogue data (Jupyter Notebook, 8 stars)
20. pii_processing - PII processing code to detect and remediate PII in BigScience datasets; reference implementation for the PII Hackathon (Python, 8 stars)
21. training_dynamics (5 stars)
22. bibliography - A list of BigScience publications (TeX, 3 stars)
23. scaling-laws-tokenization (2 stars)
24. datasets_stats - Generate statistics over datasets used in the context of BigScience (Makefile, 2 stars)
25. evaluation-robustness-consistency - Tools for evaluating model robustness and consistency (Python, 2 stars)
26. interpretability-ideas (1 star)
27. evaluation-results - Dump of results for bigscience (Python, 1 star)