nnDetection is a self-configuring framework for 3D (volumetric) medical object detection which can be applied to new data sets without manual intervention. It includes guides for 12 data sets that were used to develop and evaluate the performance of the proposed method.

What is nnDetection?

Simultaneous localisation and categorization of objects in medical images, also referred to as medical object detection, is of high clinical relevance because diagnostic decisions depend on rating of objects rather than e.g. pixels. For this task, the cumbersome and iterative process of method configuration constitutes a major research bottleneck. Recently, nnU-Net has tackled this challenge for the task of image segmentation with great success. Following nnU-Net's agenda, in this work we systematize and automate the configuration process for medical object detection. The resulting self-configuring method, nnDetection, adapts itself without any manual intervention to arbitrary medical detection problems while achieving results on par with or superior to the state-of-the-art. We demonstrate the effectiveness of nnDetection on two public benchmarks, ADAM and LUNA16, and propose 10 further public data sets for a comprehensive evaluation of medical object detection methods.

If you use nnDetection please cite our paper:

Baumgartner M., Jäger P.F., Isensee F., Maier-Hein K.H. (2021) nnDetection: A Self-configuring Method for Medical Object Detection. In: de Bruijne M. et al. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science, vol 12905. Springer, Cham. https://doi.org/10.1007/978-3-030-87240-3_51

🎉 nnDetection was early accepted to the International Conference on Medical Image Computing & Computer Assisted Intervention 2021 (MICCAI21) 🎉

Installation

Docker

The easiest way to get started with nnDetection is to build a Docker container with the provided Dockerfile.

Please install docker and nvidia-docker2 before continuing.

All projects which are based on nnDetection assume that the base image was built with the following tagging scheme: nndetection:[version]. To build a container (nnDetection version 0.1), run the following command from the base directory:

docker build -t nndetection:0.1 --build-arg env_det_num_threads=6 --build-arg env_det_verbose=1 .

(--build-arg env_det_num_threads=6 and --build-arg env_det_verbose=1 are optional and are used to overwrite the provided default parameters)

The docker container expects data and models in its own /opt/data and /opt/models directories respectively. The directories need to be mounted via docker -v. For simplicity and speed, the ENV variables det_data and det_models can be set in the host system to point to the desired directories.
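
For example (host paths are placeholders; adjust them to your setup):

export det_data=/path/to/data      # mounted to /opt/data below
export det_models=/path/to/models  # mounted to /opt/models below

With these variables set, run: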

docker run --gpus all -v ${det_data}:/opt/data -v ${det_models}:/opt/models -it --shm-size=24gb nndetection:0.1 /bin/bash

Warning: When running a training inside the container it is necessary to increase the shared memory (via --shm-size).

Source

Please note that nnDetection requires Python 3.8+. Please use a PyTorch 1.X version for now; PyTorch 2.0 is not supported yet.

  1. Install CUDA (>10.1) and cuDNN (make sure to select compatible versions!)
  2. [Optional] Depending on your GPU you might need to set TORCH_CUDA_ARCH_LIST, check compute capabilities here.
  3. Install torch (make sure to match the PyTorch and CUDA versions!) (requires PyTorch 1.10+) and torchvision (make sure to match the versions!).
  4. Clone nnDetection, cd [path_to_repo] and pip install -e .
  5. Set environment variables (more info can be found below; see the example after this list):
    • det_data: [required] Path to the source directory where all the data will be located
    • det_models: [required] Path to directory where all models will be saved
    • OMP_NUM_THREADS=1: [required] Needs to be set! Otherwise bad things will happen... Refer to the batchgenerators documentation.
    • det_num_threads: [recommended] Number of processes to use for augmentation (at least 6, default 12)
    • det_verbose: [optional] Can be used to deactivate progress bars (activated by default)
    • MLFLOW_TRACKING_URI: [optional] Specify the logging directory of mlflow. Refer to the mlflow documentation for more information.
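
A typical setup on a Linux host could then look like this (a sketch; the paths are placeholders and could be added to ~/.bashrc):

export det_data=/path/to/data
export det_models=/path/to/models
export OMP_NUM_THREADS=1
export det_num_threads=12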

Note: nnDetection was developed on Linux => Windows is not supported.

Test Installation
Run the following command in the terminal (!not! in the PyTorch root folder) to verify that the compilation of the C++/CUDA code was successful:
python -c "import torch; import nndet._C; import nndet"

To test the whole installation please run the Toy Data set example.

Maximising Training Speed
To get the best possible performance we recommend using CUDA 11.0+ with cuDNN 8.1.X+ and a (!)locally compiled version(!) of PyTorch 1.7+.

nnDetection

nnDetection Module Overview

nnDetection uses multiple Registries to keep track of different modules and easily switch between them via the config files.

Config Files nnDetection uses Hydra to dynamically configure and compose configurations. The configuration files are located in nndet.conf and can be overwritten to customize the behavior of the pipeline.
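
For example, individual configuration values can be overwritten from the command line via the -o option of the commands described below (both overrides appear elsewhere in this README):

# overwrite the trained fold (see Training and Evaluation)
nndet_train 000 -o exp.fold=0

# disable multiprocessing in the augmentation config (see FAQ)
nndet_train 000 -o augment_cfg.multiprocessing=False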

AUGMENTATION_REGISTRY The augmentation registry can be imported from nndet.io.augmentation and contains different augmentation configurations. Examples can be found in nndet.io.augmentation.bg_aug.

DATALOADER_REGISTRY The dataloader registry contains different dataloader classes to customize the IO of nnDetection. It can be imported from nndet.io.datamodule and examples can be found in nndet.io.datamodule.bg_loader.

PLANNER_REGISTRY New plans can be registered via the planner registry which contains classes to define and perform different architecture and preprocessing schemes. It can be imported from nndet.planning.experiment and examples can be found in nndet.planning.experiment.v001.

MODULE_REGISTRY The module registry contains the core modules of nnDetection which inherit from the PyTorch Lightning Module. It is the main module which is used for training and inference and contains all the necessary steps to build the final models. It can be imported from nndet.ptmodule and examples can be found in nndet.ptmodule.retinaunet.

nnDetection Functional Details

Experiments & Data

The data sets used for our experiments are not hosted or maintained by us; please give credit to the authors of the data sets. Labels were corrected in some of the data sets we converted, and these can be downloaded (links can be found in the guides). The Experiments section contains multiple guides which explain the preparation of the data sets via the provided scripts.

Toy Data set

Running nndet_example will automatically generate an example data set with 3D squares and squares with holes which can be used to test the installation or experiment with prototype code (it is still necessary to run the other nndet commands to process/train/predict the data set).

# create data to test installation/environment (10 train 10 test)
nndet_example

# create full data set for prototyping (1000 train 1000 test)
nndet_example --full [--num_processes]

The full problem is very easy and the final results should be near perfect. After running the generation script follow the Planning, Training and Inference instructions below to construct the whole nnDetection pipeline.

Guides

Work in progress

Experiments

Besides the self-configuring method, nnDetection acts as a standard interface for many data sets. We provide guides to prepare all data sets from our evaluation into the correct format and make it easy to reproduce our results. Furthermore, we provide pretrained models which can be used without investing large amounts of compute to rerun our experiments (see Section Pretrained Models).

Adding New Data sets

nnDetection relies on a standardized input format which is very similar to nnU-Net and allows easy integration of new data sets. More details about the format can be found below.

Folders

All data sets should reside inside Task[Number]_[Name] folders inside the specified detection data folder (the path to this folder can be set via the det_data environment flag). To avoid conflicts with our provided pretrained models we recommend using task numbers starting from 100. An overview is provided below ([Name] symbolises folders, - symbolises files, indents refer to substructures).

Warning[!]: Please avoid any . inside file names/folder names/paths since it can influence how paths/names are split.

${det_data}
    [Task000_Example]
        - dataset.yaml # dataset.json works too
        [raw_splitted]
            [imagesTr]
                - case0000_0000.nii.gz # case0000 modality 0
                - case0000_0001.nii.gz # case0000 modality 1
                - case0001_0000.nii.gz # case0001 modality 0
                - case0001_0001.nii.gz # case0001 modality 1
            [labelsTr]
                - case0000.nii.gz # instance segmentation case0000
                - case0000.json # properties of case0000
                - case0001.nii.gz # instance segmentation case0001
                - case0001.json # properties of case0001
            [imagesTs] # optional, same structure as imagesTr
             ...
            [labelsTs] # optional, same structure as labelsTr
             ...
    [Task001_Example1]
        ...

Data set Info

dataset.yaml or dataset.json provides general information about the data set. Note [Important]: Classes and modalities start with index 0!

task: Task000D3_Example

name: "Example" # [Optional]
dim: 3 # number of spatial dimensions of the data

# Note: use the integer value of the target class (as defined in labels below)!
target_class: 1 # [Optional] define class of interest for patient level evaluations
test_labels: True # manually split test set

labels: # classes of data set; need to start at 0
    "0": "Square"
    "1": "SquareHole"

modalities: # modalities of data set; need to start at 0
    "0": "CT"

Image Format

nnDetection uses the same image format as nnU-Net. Each case consists of at least one 3D NIfTI file with a single modality; the files are saved in the images folders. If multiple modalities are available, each modality uses a separate file, and the sequence number at the end of the name indicates the modality (these need to correspond to the numbers specified in the data set file and be consistent across the whole data set).

An example with two modalities could look like this:

- case001_0000.nii.gz # Case ID: case001; Modality: 0
- case001_0001.nii.gz # Case ID: case001; Modality: 1

- case002_0000.nii.gz # Case ID: case002; Modality: 0
- case002_0001.nii.gz # Case ID: case002; Modality: 1

If multiple modalities are available, please check beforehand if they need to be registered and perform registration before nnDetection preprocessing. nnDetection does (!)not(!) include automatic registration of multiple modalities.

Label Format

Labels are encoded with two files per case: one NIfTI file which contains the instance segmentation and one JSON file which includes the "meta" information of each instance. The NIfTI file should contain all annotated instances, where each instance has a unique number in consecutive order (0 ALWAYS refers to background, 1 refers to the first instance, 2 refers to the second instance, ...). The case[XXXX].json label files need to provide the class of every instance in the segmentation. In this example the first instance is assigned to class 0 and the second instance is assigned to class 1:

{
    "instances": {
        "1": 0,
        "2": 1
    }
}

Each label file needs a corresponding JSON file to define the classes. We also wrote a Detection Annotation Guide which includes a dedicated section on the nnDetection format with additional visualizations :)

Using nnDetection

The following paragraph provides a high-level overview of the functionality of nnDetection and the available commands. A typical flow of commands would look like this:

nndet_prep -> nndet_unpack -> nndet_train -> nndet_consolidate -> nndet_predict
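
Concretely, for the toy data set (task 000) with the default RetinaUNetV001_D3V001_3d model, the whole pipeline could look like this sketch (all commands are taken from the sections below):

nndet_prep 000
nndet_unpack ${det_data}/Task000D3_Example/preprocessed/D3V001_3d/imagesTr 6
nndet_train 000 -o exp.fold=0 --sweep   # repeat for folds 1-4
nndet_consolidate 000 RetinaUNetV001_D3V001_3d --sweep_boxes
nndet_predict 000 RetinaUNetV001_D3V001_3d --fold -1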

Each of these commands is explained below; more detailed information can be obtained by running nndet_[command] -h in the terminal.

Planning & Preprocessing

Before training the networks, nnDetection needs to preprocess and analyze the data. The preprocessing stage normalizes and resamples the data, while the analyzed properties are used to create a plan which will be used for configuring the training. nnDetectionV0 requires a GPU with approximately the same amount of VRAM you are planning to use for training (we used an RTX2080TI with no monitor attached to it) to perform live estimation of the VRAM used by the network. Future releases aim at improving this process...

nndet_prep [tasks] [-o / --overwrites] [-np / --num_processes] [-npp / --num_processes_preprocessing] [--full_check]

# Example
nndet_prep 000

# Script
# /scripts/preprocess.py - main()

The -o option can be used to overwrite parameters for planning and preprocessing (refer to the config files to see all parameters). The number of processes used for cropping and analysis can be adjusted via -np, and the number of processes used for resampling via -npp. The current values are fairly safe if 64 GB of RAM is available. The --full_check option will iterate over the data before starting any preprocessing and check correct formatting of the data and labels. If any problems occur during preprocessing, please run the full check to make sure that the format is correct.
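
For example, a preprocessing run with a full format check and reduced process counts (the values are placeholders) could look like this:

nndet_prep 000 --full_check -np 4 -npp 4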

After planning and preprocessing the resulting data folder structure should look like this:

[Task000_Example]
    [raw_splitted]
    [raw_cropped] # only needed for different resampling strategies
        [imagesTr] # stores cropped image data; contains npz files
        [labelsTr] # stores labels
    [preprocessed]
        [analysis] # some plots to visualize properties of the underlying data set
        [properties] # sufficient for new plans
        [labelsTr] # labels in original format (original spacing)
        [labelsTs] # optional
        [Data identifier; e.g. D3V001_3d]
            [imagesTr] # preprocessed data
            [labelsTr] # preprocessed labels (resampled spacing)
        - {name of plan}.pkl e.g. D3V001_3d.pkl

Before starting the training, copy the data (task folder, data set info and preprocessed folder are needed) to an SSD (highly recommended) and unpack the image data with:

nndet_unpack [path] [num_processes]

# Example (unpack example with 6 processes)
nndet_unpack ${det_data}/Task000D3_Example/preprocessed/D3V001_3d/imagesTr 6

# Script
# /scripts/utils.py - unpack()

Training and Evaluation

After the planning and preprocessing stage is finished, the training phase can be started. The default setup of nnDetection is trained in a 5-fold cross-validation scheme. First, check which plans were generated during planning by looking for the pickled plan files in the preprocessed folder. In most cases only the default plan will be generated (D3V001_3d), but there might be instances (e.g. Kits) where the low resolution plan will be generated too (D3V001_3dlr1).
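
For example, the generated plans of the toy data set can be listed like this (a sketch):

ls ${det_data}/Task000D3_Example/preprocessed/*.pkl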

nndet_train [task] [-o / --overwrites] [--sweep]

# Example (train default plan D3V001_3d and search best inference parameters)
nndet_train 000 --sweep

# Script
# /scripts/train.py - train()

Use -o exp.fold=X to overwrite the trained fold; this should be run for all folds X = 0, 1, 2, 3, 4! The --sweep option tells nnDetection to look for the best hyperparameters for inference by empirically evaluating them on the validation set. Sweeping can also be performed later by running the following command:

nndet_sweep [task] [model] [fold]

# Example (sweep Task 000 of model RetinaUNetV001_D3V001_3d in fold 0)
nndet_sweep 000 RetinaUNetV001_D3V001_3d 0

# Script
# /experiments/train.py - sweep()

Evaluation can be invoked by the following command (requires access to the model and preprocessed data):

nndet_eval [task] [model] [fold] [--test] [--case] [--boxes] [--seg] [--instances] [--analyze_boxes]

# Example (evaluate and analyze box predictions of default model)
nndet_eval 000 RetinaUNetV001_D3V001_3d 0 --boxes --analyze_boxes

# Script
# /scripts/train.py - evaluate()

# Note: --test invokes evaluation of the test set
# Note: --seg, --instances are placeholders for future versions and not working yet

Inference

After running all folds, it is time to collect the models and create a unified inference plan. The following command will copy all the models and predictions from the folds. By adding the sweep_ options, the empirical hyperparameter optimization across all folds can be started. This will generate a unified plan for all models which will be used during inference.

nndet_consolidate [task] [model] [--overwrites] [--consolidate] [--num_folds] [--no_model] [--sweep_boxes] [--sweep_instances]

# Example
nndet_consolidate 000 RetinaUNetV001_D3V001_3d --sweep_boxes

# Script
# /scripts/consolidate.py - main()

For the final test set predictions, simply select the best model according to the validation scores and run the prediction command below. Data located in raw_splitted/imagesTs will be automatically preprocessed and predicted by running the following command:

nndet_predict [task] [model] [--fold] [--num_tta] [--no_preprocess] [--check] [-npp / --num_processes_preprocessing] [--force_args]

# Example
nndet_predict 000 RetinaUNetV001_D3V001_3d --fold -1

# Script
# /scripts/predict.py - main()

If a self-made test set was used, evaluation can be performed by invoking nndet_eval with --test as described above.

Results

The final model directory will contain multiple subfolders with different information:

  • sweep: contains information from the parameter sweeps and is only used for debugging purposes
  • sweep_predictions: contains predictions with additional ensembler state information which are used during the empirical parameter optimization. Since these save the model output in a fairly raw format, they are bigger than the predictions seen during normal inference; this avoids multiple model prediction runs during the parameter sweeps
  • [val/test]_predictions: contains the predictions of the validation/test set in the restored image space
  • val_predictions_preprocessed: contains predictions in the preprocessed image space, i.e. the predictions from the resampled and cropped data; they are saved for debugging purposes
  • [val/test]_results: contains the validation/test results computed by nnDetection. More information on the metrics can be found below
  • val_results_preprocessed: contains validation results inside the preprocessed image space; these are saved for debugging purposes
  • val_analysis[_preprocessed] experimental: provides additional analysis information of the predictions. This feature is marked as experimental since it uses a simplified matching algorithm and should only be used to gain an intuition of potential improvements

The following section contains some additional information regarding the metrics which are computed by nnDetection. They can be found in [val/test]_results/results_boxes.json:

  • AP_IoU_0.10_MaxDet_100: the main metric used for the evaluation in our paper. It is evaluated at an IoU threshold of 0.1 with up to 100 predictions per image. Note that this is a hard limit; if images contain many more instances, this leads to incorrect results.
  • mAP_IoU_0.10_0.50_0.05_MaxDet_100: the typical COCO mAP metric evaluated at multiple IoU values (0.1 to 0.5 in steps of 0.05). The IoU thresholds differ from those of the COCO evaluation to account for the generally lower IoU in 3D data.
  • [num]_AP_IoU_0.10_MaxDet_100: the AP metric computed per class.
  • FROC_score_IoU_0.10: FROC score with default FPPI values (1/8, 1/4, 1/2, 1, 2, 4, 8). Note (in contrast to the AP implementation): the multi-class case does not compute the metric per class but puts all predictions/ground truths into a single large pool (similar to AP_pool from https://arxiv.org/abs/2102.01066), so inter-class calibration is important here. In most cases, simply averaging the [num]_FROC scores manually to assign the same weight to each class should be preferred.
  • case evaluation (experimental): it is possible to run case evaluations with nnDetection, but this is still experimental, undergoing additional testing, and might change in the future.

nnU-Net for Detection

Besides nnDetection, we also include the scripts to prepare and evaluate nnU-Net in the context of object detection. Both frameworks need to be configured correctly before running the scripts to assure correctness. After preparing the data set in the nnDetection format (which is a superset of the nnU-Net format), it is possible to export it to nnU-Net via scripts/nnunet/nnunet_export.py. Since nnU-Net needs task ids without any additions, it may be necessary to overwrite the task name via the -nt option for some data sets (e.g. Task019FG_ADAM needs to be renamed to Task019_ADAM). Follow the usual nnU-Net preprocessing and training pipeline to generate the needed models. Use the --npz option during training to save the predicted probabilities, which are needed to generate the detection results. After determining the best ensemble configuration from nnU-Net, pass all paths to scripts/nnunet/nnunet_export.py, which will ensemble and postprocess the predictions for object detection. By default the nnU-Net Plus scheme will be used, which incorporates the empirical parameter optimization step. Use the --simple flag to switch to the nnU-Net basic configuration.
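
For illustration, an export call might look like the sketch below; the exact argument layout of nnunet_export.py is an assumption here, please check python scripts/nnunet/nnunet_export.py -h:

# hypothetical invocation: export the ADAM task under a plain nnU-Net task name
python scripts/nnunet/nnunet_export.py 019FG -nt Task019_ADAM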

Pretrained models

Coming Soon

FAQ & Common Issues

Installation & Initial Setup Errors
  1. Error: Undefined CUDA symbols when importing nndet._C, other import-related errors from nndet._C, or CUDA-related ARCH errors. nnDetection includes additional CUDA code which needs to be compiled upon installation and thus requires a correct configuration of the CUDA dependencies. Please double check the CUDA versions of your PC, pytorch, torchvision and the nnDetection build. This can be done by running nndet_env if the installation succeeded, or by running python scripts/utils.py. An example output of the command is shown below:
----- PyTorch Information -----
PyTorch Version: 1.11.0+cu113
PyTorch Debug: False
PyTorch CUDA: 11.3
PyTorch Backend cudnn: 8200
PyTorch CUDA Arch List: ['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86']
PyTorch Current Device Capability: (7, 5)
PyTorch CUDA available: True

----- System Information -----
System NVCC: nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Aug_15_21:14:11_PDT_2021
Cuda compilation tools, release 11.4, V11.4.120
Build cuda_11.4.r11.4/compiler.30300941_0

System Arch List: None
System OMP_NUM_THREADS: 1
System CUDA_HOME is None: True
System CPU Count: 8
Python Version: 3.8.11 (default, Aug  3 2021, 15:09:35)
[GCC 7.5.0]

----- nnDetection Information -----
det_num_threads 6
det_data is set True
det_models is set True

Things to look out for:

Make sure that the versions of PyTorch CUDA and NVCC CUDA match (a minor version mismatch, as in this case, will work without error but could potentially introduce bugs).

OMP_NUM_THREADS should always be set to 1, and det_num_threads should always be lower than or equal to the system CPU count.
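
A quick sanity check on the host (a sketch):

echo $OMP_NUM_THREADS   # should print 1
nproc                   # det_num_threads should not exceed this value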

  2. Error persists even after fixing the environment: make sure to delete the build folder before rerunning the installation, since the code won't be recompiled otherwise.

  3. Error: No kernel image is available for execution

You are probably executing the build on a machine with a GPU architecture which was not present/set during the build.

Please check the link to find the correct SM architecture and set TORCH_CUDA_ARCH_LIST appropriately (e.g. check the Dockerfile for an example). As before, make sure to delete the build folder when rerunning the installation process.
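
For example, for a GPU with compute capability 7.5 (e.g. an RTX2080TI; the value is an assumption, check yours):

export TORCH_CUDA_ARCH_LIST="7.5"
rm -rf build     # force recompilation of the CUDA extensions
pip install -e .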

  4. Please open an Issue and provide your environment as obtained by nndet_env.
Training doesn't start or is stuck
  1. Please run nndet_env and make sure OMP_NUM_THREADS is set to 1; no other values are supported here. To increase the number of workers used for IO and augmentation, adjust det_num_threads.

  2. Try running the training without multiprocessing as a sanity check: nndet_train XXX -o augment_cfg.multiprocessing=False. Don't use this for the full training; it is just one step of the debugging process.

  3. Please open an Issue and provide your environment as obtained by nndet_env and report if the training without multiprocessing started correctly.

(Slow) Training Speed

The training time of nnDetection should be roughly equal for most data sets: 2 days (1-2 hours per epoch) with the mixed precision 3D speedup and 4 days without (these numbers refer to an RTX2080TI; newer GPUs can be significantly faster, and on high-end configurations training takes 1 day). It is highly recommended to use GPUs with Tensor Cores to enable fast mixed precision training for reasonable turnaround times. There can be several reasons for slow training:

  1. PyTorch < 1.9 did not provide a training speedup for mixed-precision 3D convs in its pip-installable version, and it was necessary to build it from source (the docker build of nnDetection also provides the speedup). Newer versions like 1.10 and 1.11 provide the mixed precision speedup in their pip version (only tested with CUDA 11.X).

  2. There is a bottleneck in the setup. This can be identified as follows:

    1. Check the GPU utilization -> it should be high most of the time. If it isn't, there is either a CPU or an IO bottleneck; if it is high, the missing PyTorch speedup is the likely cause.
    2. Check the CPU utilization: if the CPU utilization is high (and the GPU utilization isn't), it is a CPU bottleneck and more threads are needed for augmentation. Adjust det_num_threads (similar to num workers in the normal PyTorch dataloaders) to the available CPU resources: set it as high as possible, but not higher than the available CPU threads. Note that increasing the number of workers also increases RAM consumption -> make sure not to run out of memory there, otherwise the training will be extremely slow and the workstation might crash. If GPU and CPU utilization are both low, it is an IO bottleneck; it is quite hard to do anything about this (a typical SSD with ~500 MB/s read speed ran fine for my experiments).

Example for det_num_threads:

  • CPUs with less cores but high clock speed: Needs a lower det_num_threads value. On an Intel i7 9700 (non k) det_num_threads=6 reaches 90+ % GPU usage.
  • CPUs with many cores but lower clock speed: Needs a high det_num_threads value. In cluster environments det_num_threads=12 reaches ~80+% GPU usage.
GPU requirements
nnDetection v0.1 was developed for GPUs with at least 11GB of VRAM (e.g. RTX2080TI, TITAN RTX). All of our experiments were conducted with an RTX2080TI. While the memory requirement can be adjusted by manipulating the correct setting, we recommend using the default values for now. Future releases will refactor the planning stage to improve the VRAM estimation and add support for different memory budgets.
Training with bounding boxes
The first release of nnDetection focuses on 3D medical images and Retina U-Net. As a consequence, training (specifically planning and augmentation) requires segmentation annotations. In many cases this limitation can be circumvented by converting the bounding boxes into segmentations.
Mask RCNN and 2D Data sets
2D data sets and Mask R-CNN are not supported in the first release. We hope to provide these in the future.
Multi GPU Training
Multi GPU training is not officially supported yet. Inference and the metric computation are not properly designed to support these use cases!
Prebuilt packages
We are planning to provide prebuilt wheels in the future, but none are available right now. Please use the provided Dockerfile or the installation instructions to run nnDetection.

Acknowledgements

nnDetection combines information from multiple open source repositories. We wish to acknowledge them for their awesome work; please check them out!

nnU-Net

nnU-Net is a self-configuring method for semantic segmentation, and many steps of nnDetection follow in the footsteps of nnU-Net.

Medical Detection Toolkit

The Medical Detection Toolkit introduced the first codebase for 3D object detection, and multiple tricks were transferred to nnDetection to assure an optimal configuration for medical object detection.

Torchvision

nnDetection tries to follow the interfaces of torchvision to make it easy to understand for everyone coming from the 2D (and video) detection scene. As a result, we based our implementations of some of the core modules on the torchvision implementation.

Funding

Part of this work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 410981386 and the Helmholtz Imaging Platform (HIP), a platform of the Helmholtz Incubator on Information and Data Science.
