  • Stars: 154
  • Rank: 242,180 (Top 5%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 3 years ago
  • Updated: 2 months ago

Repository Details

A generalizable application framework for segmentation, regression, and classification using PyTorch

GaNDLF

The Generally Nuanced Deep Learning Framework for segmentation, regression and classification.

[Screenshot: GaNDLF all options]

Why use this?

  • Supports multiple
    • Deep Learning model architectures
    • Data dimensions (2D/3D)
    • Channels/images/sequences
    • Prediction classes
    • Domain modalities (i.e., Radiology Scans and Digitized Histopathology Tissue Sections)
    • Problem types (segmentation, regression, classification)
    • Multi-GPU (on same machine) training
  • Built-in
    • Nested cross-validation (and related combined statistics)
    • Support for parallel HPC-based computing
    • Support for training checkpointing
    • Support for automatic mixed precision
  • Robust data augmentation, courtesy of TorchIO (see the sketch after this list)
  • Handles imbalanced classes (e.g., very small tumor in large organ)
  • Leverages robust open source software
  • No need to write any code to generate robust models
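
As a rough illustration of the TorchIO-backed augmentation mentioned above, here is a minimal standalone sketch of the kind of pipeline GaNDLF assembles from its configuration file. The file names, transform choices, and parameters below are illustrative assumptions, not GaNDLF defaults; in practice GaNDLF users declare augmentations in YAML and write no code.

import torchio as tio

# Minimal TorchIO augmentation sketch (assumed file names and parameters,
# not GaNDLF defaults). GaNDLF builds comparable pipelines internally
# from its YAML configuration.
subject = tio.Subject(
    image=tio.ScalarImage("scan.nii.gz"),       # hypothetical input scan
    label=tio.LabelMap("segmentation.nii.gz"),  # hypothetical label map
)

augment = tio.Compose([
    tio.RandomAffine(scales=(0.9, 1.1), degrees=10),  # mild geometric jitter
    tio.RandomFlip(axes=("LR",)),                     # random left-right flip
    tio.RandomNoise(std=0.05),                        # Gaussian intensity noise
])

# Spatial transforms are applied consistently to the image and its label map.
augmented = augment(subject)

Because the image and its label map travel together in a tio.Subject, spatial transforms stay consistent between the two, which is what makes such pipelines safe for segmentation training.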

Citation

Please cite the following article for GaNDLF (full paper):

@article{pati2023gandlf,
    author={Pati, Sarthak and Thakur, Siddhesh P. and Hamamc{\i}, {\.{I}}brahim Ethem and Baid, Ujjwal and Baheti, Bhakti and Bhalerao, Megh and G{\"u}ley, Orhun and Mouchtaris, Sofia and Lang, David and Thermos, Spyridon and Gotkowski, Karol and Gonz{\'a}lez, Camila and Grenko, Caleb and Getka, Alexander and Edwards, Brandon and Sheller, Micah and Wu, Junwen and Karkada, Deepthi and Panchumarthy, Ravi and Ahluwalia, Vinayak and Zou, Chunrui and Bashyam, Vishnu and Li, Yuemeng and Haghighi, Babak and Chitalia, Rhea and Abousamra, Shahira and Kurc, Tahsin M. and Gastounioti, Aimilia and Er, Sezgin and Bergman, Mark and Saltz, Joel H. and Fan, Yong and Shah, Prashant and Mukhopadhyay, Anirban and Tsaftaris, Sotirios A. and Menze, Bjoern and Davatzikos, Christos and Kontos, Despina and Karargyris, Alexandros and Umeton, Renato and Mattson, Peter and Bakas, Spyridon},
    title={GaNDLF: the generally nuanced deep learning framework for scalable end-to-end clinical workflows},
    journal={Communications Engineering},
    year={2023},
    month={May},
    day={16},
    volume={2},
    number={1},
    pages={23},
    abstract={Deep Learning (DL) has the potential to optimize machine learning in both the scientific and clinical communities. However, greater expertise is required to develop DL algorithms, and the variability of implementations hinders their reproducibility, translation, and deployment. Here we present the community-driven Generally Nuanced Deep Learning Framework (GaNDLF), with the goal of lowering these barriers. GaNDLF makes the mechanism of DL development, training, and inference more stable, reproducible, interpretable, and scalable, without requiring an extensive technical background. GaNDLF aims to provide an end-to-end solution for all DL-related tasks in computational precision medicine. We demonstrate the ability of GaNDLF to analyze both radiology and histology images, with built-in support for k-fold cross-validation, data augmentation, multiple modalities and output classes. Our quantitative performance evaluation on numerous use cases, anatomies, and computational tasks supports GaNDLF as a robust application framework for deployment in clinical workflows.},
    issn={2731-3395},
    doi={10.1038/s44172-023-00066-3},
    url={https://doi.org/10.1038/s44172-023-00066-3}
}

Documentation

GaNDLF has extensive documentation; see the project's documentation site for installation, usage, and extension guides.

Contributing

Please see the contributing guide for more information.

Weekly Meeting

The GaNDLF development team hosts a weekly meeting to discuss feature additions, issues, and general future directions. If you are interested in joining, please send us an email!

Disclaimer

  • The software has been designed for research purposes only and has neither been reviewed nor approved for clinical use by the Food and Drug Administration (FDA) or by any other federal/state agency.
  • This code (excluding dependent libraries) is governed by the Apache License, Version 2.0 provided in the LICENSE file unless otherwise specified.

Contact

For more information or any support, please post on the Discussions section.

More Repositories

1. training (Python, 1,495 stars): Reference implementations of MLPerf™ training benchmarks
2. inference (Python, 966 stars): Reference implementations of MLPerf™ inference benchmarks
3. ck (Python, 605 stars): Collective Knowledge (CK) is an educational community project to learn how to run AI, ML and other emerging workloads in the most efficient and cost-effective way across diverse models, data sets, software and hardware using MLCommons CM (Collective Mind workflow automation framework)
4. tiny (C++, 293 stars): MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers
5. algorithmic-efficiency (Python, 210 stars): MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models
6. mlcube (Python, 149 stars): MLCube® is a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible
7. medperf (Python, 144 stars): An open benchmarking platform for medical artificial intelligence using Federated Evaluation
8. peoples-speech (Jupyter Notebook, 96 stars): The People's Speech Dataset
9. training_policies (Python, 91 stars): Issues related to MLPerf™ training policies, including rules and suggested changes
10. training_results_v0.7 (Python, 58 stars): Results and code for the MLPerf™ Training v0.7 benchmark
11. inference_results_v0.5 (C++, 56 stars): Results and code for the MLPerf™ Inference v0.5 benchmark
12. modelbench (Python, 53 stars): Run safety benchmarks against AI models and view detailed reports showing how well they performed
13. inference_policies (50 stars): Issues related to MLPerf™ Inference policies, including rules and suggested changes
14. training_results_v0.6 (Python, 42 stars): Results and code for the MLPerf™ Training v0.6 benchmark
15. croissant (Jupyter Notebook, 42 stars): Croissant is a high-level format for machine learning datasets that brings together four rich layers
16. training_results_v0.5 (Python, 36 stars): Results and code for the MLPerf™ Training v0.5 benchmark
17. training_results_v1.0 (Python, 36 stars): Results and code for the MLPerf™ Training v1.0 benchmark
18. hpc (Jupyter Notebook, 33 stars): Reference implementations of MLPerf™ HPC training benchmarks
19. storage (Shell, 33 stars): MLPerf™ Storage Benchmark Suite
20. inference_results_v1.0 (C++, 31 stars): Results and code for the MLPerf™ Inference v1.0 benchmark
21. mlcube_examples (Python, 30 stars): MLCube® examples
22. chakra (Python, 30 stars): Repository for MLCommons Chakra schema and tools
23. mobile_app_open (C++, 30 stars): Mobile App Open
24. training_results_v2.0 (C++, 27 stars): Results and code for the MLPerf™ Training v2.0 benchmark
25. modelgauge (Python, 25 stars): Make it easy to automatically and uniformly measure the behavior of many AI systems
26. policies (Python, 24 stars): General policies for MLPerf™ including submission rules, coding standards, etc.
27. training_results_v1.1 (Python, 23 stars): Results and code for the MLPerf™ Training v1.1 benchmark
28. mobile_models (22 stars): MLPerf™ Mobile models
29. logging (Python, 20 stars): MLPerf™ logging library
30. inference_results_v2.1 (19 stars): Results and code for the MLPerf™ Inference v2.1 benchmark
31. ck-mlops (Python, 17 stars): A collection of portable workflows, automation recipes and components for MLOps in a unified CK format. Note that this repository is outdated; please check the 2nd generation of the CK workflow automation meta-framework with portable MLOps and DevOps components
32. inference_results_v0.7 (C++, 17 stars): Results and code for the MLPerf™ Inference v0.7 benchmark
33. inference_results_v3.0 (16 stars): Results and code for the MLPerf™ Inference v3.0 benchmark
34. training_results_v2.1 (C++, 15 stars): Results and code for the MLPerf™ Training v2.1 benchmark
35. power-dev (Python, 14 stars): Dev repo for power measurement for the MLPerf™ benchmarks
36. medical (Python, 13 stars): Medical ML Benchmark
37. dynabench (Python, 12 stars)
38. training_results_v3.0 (Python, 11 stars): Results and code for the MLPerf™ Training v3.0 benchmark
39. tiny_results_v0.7 (C, 11 stars): Results and code for the MLPerf™ Tiny Inference v0.7 benchmark
40. inference_results_v1.1 (Python, 11 stars): Results and code for the MLPerf™ Inference v1.1 benchmark
41. inference_results_v4.0 (9 stars): Results and code for the MLPerf™ Inference v4.0 benchmark
42. dataperf (8 stars): Data Benchmarking
43. inference_results_v2.0 (Python, 8 stars): Results and code for the MLPerf™ Inference v2.0 benchmark
44. mobile_open (Python, 7 stars): MLPerf Mobile benchmarks
45. science (Jupyter Notebook, 7 stars): https://mlcommons.org/en/groups/research-science/
46. tiny_results_v0.5 (C++, 5 stars): Results and code for the MLPerf™ Tiny Inference v0.5 benchmark
47. inference_results_v3.1 (5 stars): Results and code for the MLPerf™ Inference v3.1 benchmark
48. tiny_results_v1.0 (C, 4 stars): Results and code for the MLPerf™ Tiny Inference v1.0 benchmark
49. hpc_results_v0.7 (Python, 3 stars): Results and code for the MLPerf™ HPC Training v0.7 benchmark
50. hpc_results_v2.0 (Python, 3 stars): Results and code for the MLPerf™ HPC Training v2.0 benchmark
51. hpc_results_v1.0 (Python, 3 stars): Results and code for the MLPerf™ HPC Training v1.0 benchmark
52. ck-venv (Python, 2 stars): CK automation for virtual environments
53. cm-mlops (Python, 2 stars)
54. datasets_infra (2 stars)
55. training_results_v3.1 (Python, 1 star): Results and code for the MLPerf™ Training v3.1 benchmark
56. research (1 star)
57. tiny_results_v1.1 (C, 1 star): Results and code for the MLPerf™ Tiny Inference v1.1 benchmark
58. medperf-website (JavaScript, 1 star)
59. mobile_results_v2.1 (1 star): Results and code for the MLPerf™ Mobile Inference v2.1 benchmark
60. hpc_results_v3.0 (Python, 1 star): Results and code for the MLPerf™ HPC Training v3.0 benchmark
61. ck_mlperf_results (Python, 1 star): Aggregated benchmarking results from MLPerf Inference, Tiny and Training in the MLCommons CM format for the Collective Knowledge Playground. Our goal is to make it easier for the community to visualize, compare and reproduce MLPerf results and add derived metrics such as Performance/Watt or Performance/$