Mother of all BCI Benchmarks

Build a comprehensive benchmark of popular Brain-Computer Interface (BCI) algorithms applied on an extensive list of freely available EEG datasets.

Disclaimer

This is an open science project that may evolve depending on the need of the community.

Welcome!

First and foremost, Welcome! 🎉 Willkommen! 🎊 Bienvenue! 🎈🎈🎈

Thank you for visiting the Mother of all BCI Benchmarks repository.

This document is a hub to give you some information about the project. Jump straight to one of the sections below, or just scroll down to find out more.

What are we doing?

The problem

Brain-Computer Interfaces allow users to interact with a computer using brain signals. In this project, we focus mostly on electroencephalographic (EEG) signals, a very active research domain with worldwide scientific contributions. Still:

  • Reproducible Research in BCI has a long way to go.
  • While many BCI datasets are made freely available, researchers do not publish their code, and reproducing the results needed to benchmark new algorithms turns out to be trickier than it should be.
  • Performance can be significantly affected by the parameters of the preprocessing steps, the toolboxes used, and implementation “tricks” that are almost never reported in the literature.

As a result, there is no comprehensive benchmark of BCI algorithms, and newcomers spend a tremendous amount of time browsing the literature to find out which algorithm works best and on which dataset.

The solution

The Mother of all BCI Benchmarks:

  • Builds a comprehensive benchmark of popular BCI algorithms applied on an extensive list of freely available EEG datasets.
  • Provides its code on GitHub, serving as a reference point for future algorithmic developments.
  • Ranks and promotes algorithms on a website, providing a clear picture of the different solutions available in the field.

This project will be successful when we read in an abstract “ … the proposed method obtained a score of 89% on the MOABB (Mother of All BCI Benchmarks), outperforming the state of the art by 5% ...”.

Installation

Pip installation

To use MOABB, you can simply run:
pip install moabb
See the Troubleshooting section if you run into a problem.

Manual installation

You can fork or clone the repository, go to the downloaded directory, and then:

  1. Install poetry (only once per machine):
    curl -sSL https://install.python-poetry.org | python3 -
    or check out the installation instructions, or use the conda-forge version.
  2. (Optional, skip if not sure) Disable automatic environment creation:
    poetry config virtualenvs.create false
  3. Install all dependencies in one command (must be run in the project directory):
    poetry install

See the contributors' guidelines for a detailed explanation.

Requirements we use

See the pyproject.toml file for the full list of dependencies.

Running

Verify Installation

To ensure everything is running correctly, you can run

python -m unittest moabb.tests

once MOABB is installed.

Use MOABB

First, you can take a look at our tutorials, which cover the most important concepts and use cases. We also have several examples available.

You might also be interested in the MOABB documentation.

MOABB and Docker

MOABB provides a default Docker image for running the benchmark. You have two options to get this image: build it from scratch or pull it from Docker Hub. We recommend pulling from Docker Hub.

If this is your first time using Docker, you will need to install Docker and log in to Docker Hub. We recommend the official Docker documentation for this step; it is essential to follow the instructions.

After installing Docker, you can pull the image from Docker Hub:

docker pull baristimunha/moabb
# rename the tag to moabb
docker tag baristimunha/moabb moabb

If you want to build the image from scratch, use the following command at the root of the project. You may have to log in with an API key from the NGC Catalog to run this command.

bash docker/create_docker.sh

With the image downloaded or built from scratch, you will have an image called moabb. To run the default benchmark, still at the root of the project, use the following commands:

mkdir dataset
mkdir results
mkdir output
bash docker/run_docker.sh PATH_TO_ROOT_FOLDER

An example of the command is:

cd /home/user/project/moabb
mkdir dataset
mkdir results
mkdir output
bash docker/run_docker.sh /home/user/project/moabb

Note: It is important to use an absolute path for the root folder. You can modify the run_docker.sh script to save to a path other than the project root. By default, the script saves the results in the results folder, the datasets in the dataset folder, and the output in the output folder, all in the project's root.

Troubleshooting

Currently, pip install moabb fails with pip versions below 21 (e.g. 20.0.2) due to an idna package conflict; newer pip versions resolve this conflict automatically. To fix this, upgrade pip with pip install -U pip before installing moabb.

Supported datasets

The list of supported datasets can be found here: https://neurotechx.github.io/moabb/datasets.html

Detailed information regarding the datasets (electrodes, trials, sessions) is available on the wiki: https://github.com/NeuroTechX/moabb/wiki/Datasets-Support

Submit a new dataset

You can submit a new dataset by mentioning it in this issue. The datasets currently on our radar are listed here: https://github.com/NeuroTechX/moabb/wiki/Datasets-Support

Who are we?

The founders of the Mother of all BCI Benchmarks are Alexandre Barachant and Vinay Jayaram. This project is under the umbrella of NeuroTechX, the international community for NeuroTech enthusiasts. The project is currently maintained by Sylvain Chevallier.

What do we need?

You! In whatever way you can help.

We need expertise in programming, user experience, software sustainability, documentation, technical writing, and project management.

We'd love your feedback along the way.

Our primary goal is to build a comprehensive benchmark of popular BCI algorithms applied on an extensive list of freely available EEG datasets, and we're excited to support the professional development of any and all of our contributors. If you're looking to learn to code, try out working collaboratively, or translate your skills to the digital domain, we're here to help.

Get involved

If you think you can help in any of the areas listed above (and we bet you can) or in any of the many areas that we haven't yet thought of (and here we're sure you can) then please check out our contributors' guidelines and our roadmap.

Please note that it's very important to us that we maintain a positive and supportive environment for everyone who wants to participate. When you join us we ask that you follow our code of conduct in all interactions both on and offline.

Contact us

If you want to report a problem or suggest an enhancement, we'd love for you to open an issue at this GitHub repository because then we can get right on it.

For a less formal discussion or to exchange ideas, you can also reach us on the Gitter channel or join our weekly office hours! This is an open video meeting held on a regular basis; please ask for the link on the Gitter channel. We are also on the NeuroTechX Slack, in the #moabb channel.

Architecture and Main Concepts

There are four main concepts in MOABB: datasets, paradigms, evaluations, and pipelines. In addition, we offer statistical and visualization utilities to simplify the workflow.

Datasets

A dataset handles and abstracts low-level access to the data. The dataset reads data stored locally, in the format in which they were downloaded, and converts them into an MNE Raw object. There are options to pool all the different recording sessions per subject or to evaluate them separately.
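
For example, here is a minimal sketch of low-level dataset access, using the BNCI2014001 motor imagery dataset that ships with MOABB (class names may differ slightly across MOABB versions):

from moabb.datasets import BNCI2014001

# Instantiate a dataset; the data are downloaded and cached on first access.
dataset = BNCI2014001()

# get_data returns a nested dictionary: subject -> session -> run -> mne.io.Raw
sessions = dataset.get_data(subjects=[1])
for session_name, runs in sessions[1].items():
    for run_name, raw in runs.items():
        print(session_name, run_name, raw)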

Paradigm

A paradigm defines how the raw data will be converted to trials ready to be processed by a decoding algorithm. This depends on the paradigm used: in motor imagery, for example, one can have two-class, multi-class, or continuous paradigms; similarly, different preprocessing is necessary for ERP vs. ERD paradigms.
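
As an illustration, a short sketch of trial extraction with a paradigm, assuming the LeftRightImagery paradigm and the BNCI2014001 dataset bundled with MOABB:

from moabb.datasets import BNCI2014001
from moabb.paradigms import LeftRightImagery

# A binary motor imagery paradigm: left hand vs. right hand.
paradigm = LeftRightImagery()
dataset = BNCI2014001()

# X: trials array (n_trials, n_channels, n_times); y: labels; metadata: DataFrame
X, y, metadata = paradigm.get_data(dataset=dataset, subjects=[1])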

Evaluations

An evaluation defines how we go from trials per subject and session to a generalization statistic (AUC score, F-score, accuracy, etc.); it can be within-session accuracy, across-session within-subject accuracy, across-subject accuracy, or another transfer learning setting.
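
As an example, a minimal within-session evaluation sketch (the CSP+LDA pipeline here is only an illustrative choice, not a recommendation):

from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline

from moabb.datasets import BNCI2014001
from moabb.evaluations import WithinSessionEvaluation
from moabb.paradigms import LeftRightImagery

# A dict of named, sklearn-compatible pipelines to benchmark.
pipelines = {"CSP+LDA": make_pipeline(CSP(n_components=8), LDA())}

evaluation = WithinSessionEvaluation(
    paradigm=LeftRightImagery(),
    datasets=[BNCI2014001()],
)

# Returns a DataFrame with one score per subject, session, and pipeline.
results = evaluation.process(pipelines)
print(results.head())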

Pipelines

A pipeline defines all the steps required by an algorithm to obtain predictions. Pipelines are typically a chain of sklearn-compatible transformers ending with an sklearn-compatible estimator. See Pipelines for more info.
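
For instance, a minimal Riemannian pipeline can be assembled from pyriemann (one of MOABB's dependencies) and scikit-learn; this is one illustrative choice among many:

from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline

# Transformers turn raw trials into features; the final step is an estimator.
pipeline = make_pipeline(
    Covariances("oas"),  # one spatial covariance matrix per trial
    TangentSpace(),      # project covariances onto the tangent space
    LDA(),               # sklearn-compatible classifier producing predictions
)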

Statistics and visualization

Once an evaluation has been run, the raw results are returned as a DataFrame. They can be processed further with the following commands to generate basic visualizations and statistical comparisons:

from moabb.analysis import analyze

# `evaluation` and `pipeline_dict` are an evaluation instance and a dict of
# named sklearn pipelines, as described in the sections above.
results = evaluation.process(pipeline_dict)
analyze(results)
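
For finer-grained figures, the plotting helpers can also be called directly. A small sketch, assuming a results DataFrame produced by evaluation.process as above (helper names may vary between MOABB versions):

import matplotlib.pyplot as plt

import moabb.analysis.plotting as moabb_plt

# One point per subject and session, grouped by pipeline and dataset.
fig = moabb_plt.score_plot(results)
plt.show()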

Citing MOABB and related publications

To cite MOABB, you can use the following paper:

Vinay Jayaram and Alexandre Barachant. "MOABB: trustworthy algorithm benchmarking for BCIs." Journal of Neural Engineering 15.6 (2018): 066011. DOI

If you publish a paper using MOABB, please contact us on gitter or open an issue, and we will add your paper to the dedicated wiki page.

Thank You

Thank you so much (Danke schön! Merci beaucoup!) for visiting the project and we do hope that you'll join us on this amazing journey to build a comprehensive benchmark of popular BCI algorithms applied on an extensive list of freely available EEG datasets.
