  • Stars: 262
  • Rank: 155,168 (Top 4%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 5 years ago
  • Updated: 2 months ago


Repository Details

MRI brain extraction tool

HD-BET

This repository provides easy-to-use access to our recently published HD-BET brain extraction tool. HD-BET is the result of a joint project between the Department of Neuroradiology at the Heidelberg University Hospital and the Division of Medical Image Computing at the German Cancer Research Center (DKFZ).

If you are using HD-BET, please cite the following publication:

Isensee F, Schell M, Tursunova I, Brugnara G, Bonekamp D, Neuberger U, Wick A, Schlemmer HP, Heiland S, Wick W, Bendszus M, Maier-Hein KH, Kickingereder P. Automated brain extraction of multi-sequence MRI using artificial neural networks. Hum Brain Mapp. 2019; 1–13. https://doi.org/10.1002/hbm.24750

Compared to other commonly used brain extraction tools, HD-BET has some significant advantages:

  • HD-BET was developed with MRI data from a large multicentric clinical trial in adult brain tumor patients, acquired across 37 institutions in Europe and covering a broad range of MR hardware, acquisition parameters, pathologies and treatment-induced tissue alterations. We used two thirds of the data for training and validation and one third for testing. Moreover, independent testing of HD-BET was performed on three public benchmark datasets (NFBS, LPBA40 and CC-359).
  • HD-BET was trained with precontrast T1-w, postcontrast T1-w, T2-w and FLAIR sequences. It can perform independent brain extraction on each of these MRI sequences and is not restricted to precontrast T1-weighted (T1-w) images. Other MRI sequences may work as well (just give it a try!).
  • HD-BET was designed to be robust with respect to brain tumors, lesions and resection cavities as well as different MRI scanner hardware and acquisition parameters.
  • HD-BET outperformed five publicly available brain extraction algorithms (FSL BET, AFNI 3DSkullStrip, Brainsuite BSE, ROBEX and BEaST) across all datasets, yielding median improvements of +1.33 to +2.63 points for the Dice coefficient and -0.80 to -2.75 mm for the Hausdorff distance (Bonferroni-adjusted p<0.001).
  • HD-BET is very fast on GPU, with a run time of less than 10 seconds per MRI sequence. Even on CPU, it is not slower than other commonly used tools.

Installation Instructions

Note that you need a Python 3 installation for HD-BET to work. Please also make sure to install HD-BET with the correct pip version (the one that is tied to your Python 3 installation). You can verify this using the --version flag:

(dl_venv) fabian@Fabian:~$ pip --version
pip 20.0.2 from /home/fabian/dl_venv/lib/python3.6/site-packages/pip (python 3.6)

If it does not show Python 3.x, try pip3 instead. If that also does not work, you probably need to install Python 3 first.

Once Python 3 and pip are set up correctly, run the following commands to install HD-BET:

  1. Clone this repository:
    git clone https://github.com/MIC-DKFZ/HD-BET
  2. Go into the repository (the folder with the setup.py file) and install:
    cd HD-BET
    pip install -e .
    
  3. By default, model parameters will be downloaded to ~/hd-bet_params. If you wish to use a different folder, open HD_BET/paths.py in a text editor and modify folder_with_parameter_files (see the sketch below).
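As a minimal illustration of step 3 (the variable name comes from the repository, while the path shown is just a hypothetical example), the edited line in HD_BET/paths.py could look like this:

# HD_BET/paths.py (excerpt) -- "/data/hd-bet_params" is a hypothetical example location
folder_with_parameter_files = "/data/hd-bet_params"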

How to use it

Using HD-BET is straightforward. You can use it in any terminal on your Linux system. The hd-bet command was installed automatically. We provide CPU as well as GPU support; running on GPU is a lot faster, though, and should always be preferred. Here is a minimalistic example of how you can use HD-BET (you need to be in the HD_BET directory):

hd-bet -i INPUT_FILENAME

INPUT_FILENAME must be a NIfTI (.nii.gz) file containing 3D MRI image data. 4D image sequences are not supported (they can, however, be split upfront into the individual temporal volumes using fslsplit [1]). INPUT_FILENAME can be either a pre- or postcontrast T1-w, T2-w or FLAIR MRI sequence. Other modalities might work as well. Input images must match the orientation of the standard MNI152 template! Use fslreorient2std [2] upfront to ensure that this is the case (a small preprocessing sketch follows below).
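If you want to automate these checks, here is a small sketch (assuming nibabel and FSL are installed; the file name is a placeholder) that splits 4D data and reorients an image to the MNI152 standard orientation before handing it to hd-bet:

import subprocess
import nibabel as nib

INPUT = "subject_t1.nii.gz"  # placeholder file name

# 4D series are not supported by HD-BET; split them into individual 3D volumes first (fslsplit)
img = nib.load(INPUT)
if len(img.shape) == 4:
    subprocess.run(["fslsplit", INPUT, "subject_t1_vol", "-t"], check=True)

# make sure the image matches the MNI152 standard orientation (fslreorient2std)
subprocess.run(["fslreorient2std", INPUT, "subject_t1_std.nii.gz"], check=True)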

By default, HD-BET will run in GPU mode, use the parameters of all five models (which originate from a five-fold cross-validation), apply test-time data augmentation by mirroring along all axes, and not perform any postprocessing.

For batch processing, it is faster to process an entire folder at once, as this mitigates the overhead of loading and initializing the model for each case:

hd-bet -i INPUT_FOLDER -o OUTPUT_FOLDER

The above command will look for all NIfTI files (*.nii.gz) in INPUT_FOLDER and save the brain masks under the same name in OUTPUT_FOLDER (see the sketch below).
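As an illustrative sketch of driving such a batch run from Python (the folder names are placeholders; only the -i and -o flags shown above are used):

import subprocess
from pathlib import Path

input_folder = Path("raw_niftis")    # placeholder folder containing *.nii.gz files
output_folder = Path("brain_masks")  # placeholder output folder
output_folder.mkdir(exist_ok=True)

# process the whole folder in one call to avoid reloading the model for every case
subprocess.run(["hd-bet", "-i", str(input_folder), "-o", str(output_folder)], check=True)

# list what was produced
for produced in sorted(output_folder.glob("*.nii.gz")):
    print(produced.name)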

GPU is nice, but I don't have one of those... What now?

HD-BET has CPU support. Running on CPU takes a lot longer, though, and you will need quite a bit of RAM. To run on CPU, we recommend you use the following command:

hd-bet -i INPUT_FOLDER -o OUTPUT_FOLDER -device cpu -mode fast -tta 0

Of course, this also works with just an input file:

hd-bet -i INPUT_FILENAME -device cpu -mode fast -tta 0

The options -mode fast and -tta 0 disable test-time data augmentation (an 8x speedup) and use only one model instead of an ensemble of five models for the prediction.

More options:

For more information, please refer to the help functionality:

hd-bet --help

FAQ

  1. How much GPU memory do I need to run HD-BET?
    We ran all our experiments on NVIDIA Titan X GPUs with 12 GB of memory. For inference you will need less, but since inference is implemented by exploiting the fully convolutional nature of CNNs, the amount of memory required depends on your image. A typical image should run with less than 4 GB of GPU memory consumption. If you run into out-of-memory problems, please check the following: 1) make sure the voxel spacing of your data is correct and 2) ensure your MRI image only contains the head region (a small sketch for checking this follows after the FAQ).
  2. Will you provide the training code as well?
    No. The training code is tightly wound around the data, which we cannot make public.
  3. What run time can I expect on CPU/GPU?
    This depends on the size of your MRI image. Typical run times (including preprocessing, resampling and postprocessing) are just a couple of seconds on GPU and about 2 minutes on CPU (using -tta 0 -mode fast).
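Regarding the out-of-memory checklist in FAQ item 1, a quick way to inspect voxel spacing and image size is the following sketch (assuming nibabel is installed; the file name is a placeholder):

import nibabel as nib

img = nib.load("subject_t1.nii.gz")  # placeholder input
print("image shape:", img.shape)     # unusually large shapes drive up memory consumption
print("voxel spacing (mm):", img.header.get_zooms()[:3])  # check that the spacing is plausible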

[1] https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Fslutils

[2] https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Orientation%20Explained

More Repositories

  1. nnUNet (Python, 5,539 stars)
  2. medicaldetectiontoolkit (Python, 1,287 stars): The Medical Detection Toolkit contains 2D + 3D implementations of prevalent object detectors such as Mask R-CNN, Retina Net, Retina U-Net, as well as a training and inference framework focused on dealing with medical images.
  3. batchgenerators (Jupyter Notebook, 1,077 stars): A framework for data augmentation for 2D and 3D image classification and segmentation.
  4. nnDetection (Python, 536 stars): nnDetection is a self-configuring framework for 3D (volumetric) medical object detection which can be applied to new data sets without manual intervention. It includes guides for 12 data sets that were used to develop and evaluate the performance of the proposed method.
  5. MedNeXt (Python, 313 stars): [MICCAI 2023] MedNeXt is a fully ConvNeXt architecture for 3D medical image segmentation.
  6. TractSeg (Python, 222 stars): Automatic White Matter Bundle Segmentation.
  7. napari-sam (Python, 220 stars)
  8. trixi (Python, 219 stars): Manage your machine learning experiments with trixi - modular, reproducible, high fashion. An experiment infrastructure optimized for PyTorch, but flexible enough to work for your framework and your tastes.
  9. basic_unet_example (Python, 137 stars): An example project of how to use a U-Net for segmentation on medical images with PyTorch.
  10. MITK-Diffusion (C++, 76 stars): MITK Diffusion - Official part of the Medical Imaging Interaction Toolkit.
  11. LIDC-IDRI-processing (Python, 75 stars): Scripts for the preprocessing of LIDC-IDRI data.
  12. BraTS2017 (Python, 74 stars)
  13. BodyPartRegression (Python, 62 stars)
  14. dynamic-network-architectures (Python, 61 stars)
  15. mood (Python, 60 stars): Repository for the Medical Out-of-Distribution Analysis Challenge.
  16. ACDC2017 (Python, 54 stars)
  17. niicat (Python, 51 stars): A tool to quickly preview NIfTI images on the terminal.
  18. RegRCNN (Python, 51 stars): This repository holds the code framework used in the paper Reg R-CNN: Lesion Detection and Grading under Noisy Labels. It is a fork of MIC-DKFZ/medicaldetectiontoolkit with regression capabilities.
  19. Skeleton-Recall (Python, 47 stars): Skeleton Recall Loss for Connectivity Conserving and Resource Efficient Segmentation of Thin Tubular Structures.
  20. MultiTalent (Python, 46 stars): Implementation of the paper "MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation".
  21. image_classification (Python, 42 stars): 🎯 Deep Learning Framework for Image Classification & Regression in Pytorch for Fast Experiments.
  22. RTTB (C++, 26 stars): Swiss army knife for radiotherapy analysis.
  23. vae-anomaly-experiments (Python, 26 stars)
  24. Hyppopy (Python, 25 stars): Hyppopy is a Python toolbox for blackbox optimization. Its purpose is to offer a unified and easy-to-use interface to a collection of solver libraries.
  25. patchly (Python, 23 stars): A grid sampler for larger-than-memory N-dimensional images.
  26. semantic_segmentation (Python, 23 stars)
  27. probabilistic_unet (Jupyter Notebook, 22 stars): A U-Net combined with a variational auto-encoder that is able to learn conditional distributions over semantic segmentations.
  28. image-time-series (Python, 21 stars): Code for deep learning-based glioma/tumor growth models.
  29. anatomy_informed_DA (Python, 18 stars)
  30. batchgeneratorsv2 (Python, 13 stars)
  31. foundation-models-for-cbmir (Python, 12 stars)
  32. MedVol (Python, 12 stars)
  33. ParticleSeg3D (Python, 10 stars)
  34. generalized_yolov5 (Python, 8 stars): An extension of YOLOv5 to non-natural images together with 5-fold cross-validation.
  35. radtract (Python, 8 stars): RadTract: enhanced tractometry with radiomics-based imaging biomarkers for improved predictive modelling.
  36. gpconvcnp (Python, 8 stars): Code for "GP-ConvCNP: Better Generalization for Convolutional Conditional Neural Processes on Time Series Data".
  37. cmdint (Python, 8 stars): CmdInterface enables detailed logging of command line and Python experiments in a very lightweight manner (coding wise). It wraps your command line or Python function calls in a few lines of Python code and logs everything you might need to reproduce the experiment later on or to simply check what you did a couple of years ago.
  38. acvl_utils (Python, 7 stars)
  39. MurineAirwaySegmentation (Python, 7 stars)
  40. cOOpD (Python, 7 stars)
  41. PROUNET (Python, 7 stars): Prostate U-net.
  42. napari-nifti (Python, 4 stars)
  43. agent-sam (Python, 4 stars): Segment Anything model wrapper used by the Medical Imaging Interaction Toolkit (MITK).
  44. OverthINKingSegmenter (Python, 3 stars)
  45. perovskite-xai (Python, 3 stars)
  46. help_a_hematologist_out_challenge (Python, 2 stars)
  47. AGGC2022 (Python, 2 stars): Automated Gleason Grading on WSI.
  48. tqdmp (Python, 2 stars): Multiprocessing with tqdm progressbars!
  49. MatchPoint (C++, 2 stars): MatchPoint is a translational image registration framework written in C++. It offers a standardized interface to utilize several registration algorithm resources (like ITK, plastimatch, elastix) easily in a host application.
  50. napari-mzarr (Python, 2 stars)
  51. n2c2-challenge-2019 (Jupyter Notebook, 2 stars)
  52. mzarr (Python, 1 star)
  53. imlh-icml-detection-tools (Python, 1 star)
  54. napari-blosc2 (Python, 1 star)
  55. BraTPRO (Python, 1 star)