• Stars: 1
• Language
• License: Apache License 2.0
• Created over 4 years ago
• Updated over 1 year ago

More Repositories

1. training (Python, 1,495 stars): Reference implementations of MLPerf™ training benchmarks
2. inference (Python, 966 stars): Reference implementations of MLPerf™ inference benchmarks
3. ck (Python, 605 stars): Collective Knowledge (CK) is an educational community project to learn how to run AI, ML and other emerging workloads in the most efficient and cost-effective way across diverse models, data sets, software and hardware using MLCommons CM (Collective Mind workflow automation framework)
4. tiny (C++, 293 stars): MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers
5. algorithmic-efficiency (Python, 210 stars): MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
6. GaNDLF (Python, 154 stars): A generalizable application framework for segmentation, regression, and classification using PyTorch
7. mlcube (Python, 149 stars): MLCube® is a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible.
8. medperf (Python, 144 stars): An open benchmarking platform for medical artificial intelligence using Federated Evaluation.
9. peoples-speech (Jupyter Notebook, 96 stars): The People’s Speech Dataset
10. training_policies (Python, 91 stars): Issues related to MLPerf™ training policies, including rules and suggested changes
11. training_results_v0.7 (Python, 58 stars): This repository contains the results and code for the MLPerf™ Training v0.7 benchmark.
12. inference_results_v0.5 (C++, 56 stars): This repository contains the results and code for the MLPerf™ Inference v0.5 benchmark.
13. modelbench (Python, 53 stars): Run safety benchmarks against AI models and view detailed reports showing how well they performed.
14. inference_policies (50 stars): Issues related to MLPerf™ Inference policies, including rules and suggested changes
15. training_results_v0.6 (Python, 42 stars): This repository contains the results and code for the MLPerf™ Training v0.6 benchmark.
16. croissant (Jupyter Notebook, 42 stars): Croissant is a high-level format for machine learning datasets that brings together four rich layers (see the loading sketch after this list).
17. training_results_v0.5 (Python, 36 stars): This repository contains the results and code for the MLPerf™ Training v0.5 benchmark.
18. training_results_v1.0 (Python, 36 stars): This repository contains the results and code for the MLPerf™ Training v1.0 benchmark.
19. hpc (Jupyter Notebook, 33 stars): Reference implementations of MLPerf™ HPC training benchmarks
20. storage (Shell, 33 stars): MLPerf™ Storage Benchmark Suite
21. inference_results_v1.0 (C++, 31 stars): This repository contains the results and code for the MLPerf™ Inference v1.0 benchmark.
22. mlcube_examples (Python, 30 stars): MLCube® examples
23. chakra (Python, 30 stars): Repository for MLCommons Chakra schema and tools
24. mobile_app_open (C++, 30 stars): Mobile App Open
25. training_results_v2.0 (C++, 27 stars): This repository contains the results and code for the MLPerf™ Training v2.0 benchmark.
26. modelgauge (Python, 25 stars): Make it easy to automatically and uniformly measure the behavior of many AI systems.
27. policies (Python, 24 stars): General policies for MLPerf™, including submission rules, coding standards, etc.
28. training_results_v1.1 (Python, 23 stars): This repository contains the results and code for the MLPerf™ Training v1.1 benchmark.
29. mobile_models (22 stars): MLPerf™ Mobile models
30. logging (Python, 20 stars): MLPerf™ logging library (see the usage sketch after this list)
31. inference_results_v2.1 (19 stars): This repository contains the results and code for the MLPerf™ Inference v2.1 benchmark.
32. ck-mlops (Python, 17 stars): A collection of portable workflows, automation recipes and components for MLOps in a unified CK format. Note that this repository is outdated; please check the 2nd generation of the CK workflow automation meta-framework with portable MLOps and DevOps components.
33. inference_results_v0.7 (C++, 17 stars): This repository contains the results and code for the MLPerf™ Inference v0.7 benchmark.
34. inference_results_v3.0 (16 stars): This repository contains the results and code for the MLPerf™ Inference v3.0 benchmark.
35. training_results_v2.1 (C++, 15 stars): This repository contains the results and code for the MLPerf™ Training v2.1 benchmark.
36. power-dev (Python, 14 stars): Dev repo for power measurement for the MLPerf™ benchmarks
37. medical (Python, 13 stars): Medical ML Benchmark
38. dynabench (Python, 12 stars)
39. training_results_v3.0 (Python, 11 stars): This repository contains the results and code for the MLPerf™ Training v3.0 benchmark.
40. tiny_results_v0.7 (C, 11 stars): This repository contains the results and code for the MLPerf™ Tiny Inference v0.7 benchmark.
41. inference_results_v1.1 (Python, 11 stars): This repository contains the results and code for the MLPerf™ Inference v1.1 benchmark.
42. inference_results_v4.0 (9 stars): This repository contains the results and code for the MLPerf™ Inference v4.0 benchmark.
43. dataperf (8 stars): Data Benchmarking
44. inference_results_v2.0 (Python, 8 stars): This repository contains the results and code for the MLPerf™ Inference v2.0 benchmark.
45. mobile_open (Python, 7 stars): MLPerf Mobile benchmarks
46. science (Jupyter Notebook, 7 stars): https://mlcommons.org/en/groups/research-science/
47. tiny_results_v0.5 (C++, 5 stars): This repository contains the results and code for the MLPerf™ Tiny Inference v0.5 benchmark.
48. inference_results_v3.1 (5 stars): This repository contains the results and code for the MLPerf™ Inference v3.1 benchmark.
49. tiny_results_v1.0 (C, 4 stars): This repository contains the results and code for the MLPerf™ Tiny Inference v1.0 benchmark.
50. hpc_results_v0.7 (Python, 3 stars): This repository contains the results and code for the MLPerf™ HPC Training v0.7 benchmark.
51. hpc_results_v2.0 (Python, 3 stars): This repository contains the results and code for the MLPerf™ HPC Training v2.0 benchmark.
52. hpc_results_v1.0 (Python, 3 stars): This repository contains the results and code for the MLPerf™ HPC Training v1.0 benchmark.
53. ck-venv (Python, 2 stars): CK automation for virtual environments
54. cm-mlops (Python, 2 stars)
55. datasets_infra (2 stars)
56. training_results_v3.1 (Python, 1 star): This repository contains the results and code for the MLPerf™ Training v3.1 benchmark.
57. tiny_results_v1.1 (C, 1 star): This repository contains the results and code for the MLPerf™ Tiny Inference v1.1 benchmark.
58. medperf-website (JavaScript, 1 star)
59. mobile_results_v2.1 (1 star): This repository contains the results and code for the MLPerf™ Mobile Inference v2.1 benchmark.
60. hpc_results_v3.0 (Python, 1 star): This repository contains the results and code for the MLPerf™ HPC Training v3.0 benchmark.
61. ck_mlperf_results (Python, 1 star): Aggregated benchmarking results from MLPerf Inference, Tiny and Training in the MLCommons CM format for the Collective Knowledge Playground. Our goal is to make it easier for the community to visualize, compare and reproduce MLPerf results and add derived metrics such as Performance/Watt or Performance/$
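
Loading sketch for the croissant entry (16): a Croissant dataset is described by a JSON-LD file that tooling can consume directly. The snippet below is a minimal sketch, assuming the mlcroissant Python package; the URL and the "default" record set name are placeholders, not references to a real dataset.

```python
# Minimal sketch: loading a Croissant dataset description with mlcroissant.
# Assumptions: the mlcroissant package is installed, and the URL below points
# to a valid Croissant JSON-LD file (it is a placeholder, not a real dataset).
import mlcroissant as mlc

dataset = mlc.Dataset(jsonld="https://example.org/datasets/demo/croissant.json")

# The metadata layer carries the dataset-level description.
print(dataset.metadata.name)

# Stream a few records from one record set ("default" is a placeholder name).
for i, record in enumerate(dataset.records(record_set="default")):
    print(record)
    if i >= 4:
        break
```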
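
Usage sketch for the logging entry (30): MLPerf submissions emit structured key/value events through the mlperf_logging package. The following is a minimal sketch, assuming the mllog module and constants published in that repository; the benchmark name and metric values are illustrative only.

```python
# Minimal sketch of the MLPerf logging pattern (assumption: the mlperf_logging
# package from the logging repository is installed; values are illustrative).
from mlperf_logging import mllog

mllogger = mllog.get_mllogger()

# Structured events are written as key/value pairs in the MLPerf log format.
mllogger.event(key=mllog.constants.SUBMISSION_BENCHMARK, value="resnet")
mllogger.start(key=mllog.constants.RUN_START)
mllogger.event(key=mllog.constants.EVAL_ACCURACY, value=0.759,
               metadata={"epoch_num": 35})
mllogger.end(key=mllog.constants.RUN_STOP, metadata={"status": "success"})
```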