• Stars: 42
• Rank: 651,809 (top 13%)
• Language: Jupyter Notebook
• License: Apache License 2.0
• Created: over 1 year ago
• Updated: 10 months ago

Repository Details

Croissant is a high-level format for machine learning datasets that brings together four rich layers: dataset-level metadata, resource descriptions, data structure, and default ML semantics.
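All four layers live in a single JSON-LD file. The following is a minimal, hedged sketch of what such a file might look like: the property names (distribution, recordSet, cr:FileObject, cr:Field, etc.) are drawn from the Croissant vocabulary, but every URL, ID, and value here is an invented placeholder, not a validated example.

```json
{
  "@context": {
    "@vocab": "https://schema.org/",
    "cr": "http://mlcommons.org/croissant/"
  },
  "@type": "Dataset",
  "name": "example_dataset",
  "description": "Illustrative placeholder dataset.",
  "license": "https://www.apache.org/licenses/LICENSE-2.0",
  "distribution": [
    {
      "@type": "cr:FileObject",
      "@id": "data.csv",
      "contentUrl": "https://example.org/data.csv",
      "encodingFormat": "text/csv"
    }
  ],
  "recordSet": [
    {
      "@type": "cr:RecordSet",
      "@id": "records",
      "field": [
        {
          "@type": "cr:Field",
          "@id": "records/label",
          "dataType": "Text",
          "source": {
            "fileObject": { "@id": "data.csv" },
            "extract": { "column": "label" }
          }
        }
      ]
    }
  ]
}
```

Here the schema.org properties (name, description, license) carry the metadata layer, distribution describes the resources, recordSet/field describe the structure, and dataType hints at the ML semantics.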

More Repositories

1. training: Reference implementations of the MLPerf™ training benchmarks (Python, 1,495 stars)
2. inference: Reference implementations of the MLPerf™ inference benchmarks (Python, 966 stars)
3. ck: Collective Mind (CM), a small, modular, cross-platform, decentralized workflow automation framework with a human-friendly interface and reusable automation recipes, for building, running, benchmarking, and optimizing AI, ML, and other applications and systems across diverse and continuously changing models, data, software, and hardware (Python, 595 stars)
4. tiny: MLPerf™ Tiny, an ML benchmark suite for extremely low-power systems such as microcontrollers (C++, 293 stars)
5. algorithmic-efficiency: MLCommons Algorithmic Efficiency, a benchmark and competition measuring neural-network training speedups due to algorithmic improvements in both training algorithms and models (Python, 210 stars)
6. GaNDLF: A generalizable application framework for segmentation, regression, and classification using PyTorch (Python, 154 stars)
7. mlcube: MLCube®, a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible (Python, 149 stars)
8. medperf: An open benchmarking platform for medical artificial intelligence using federated evaluation (Python, 138 stars)
9. peoples-speech: The People's Speech dataset (Jupyter Notebook, 96 stars)
10. training_policies: Issues related to MLPerf™ training policies, including rules and suggested changes (Python, 91 stars)
11. training_results_v0.7: Results and code for the MLPerf™ Training v0.7 benchmark (Python, 58 stars)
12. inference_results_v0.5: Results and code for the MLPerf™ Inference v0.5 benchmark (C++, 56 stars)
13. inference_policies: Issues related to MLPerf™ inference policies, including rules and suggested changes (50 stars)
14. modelbench: Run safety benchmarks against AI models and view detailed reports showing how well they performed (Python, 49 stars)
15. training_results_v0.6: Results and code for the MLPerf™ Training v0.6 benchmark (Python, 42 stars)
16. training_results_v0.5: Results and code for the MLPerf™ Training v0.5 benchmark (Python, 36 stars)
17. training_results_v1.0: Results and code for the MLPerf™ Training v1.0 benchmark (Python, 36 stars)
18. hpc: Reference implementations of the MLPerf™ HPC training benchmarks (Jupyter Notebook, 33 stars)
19. storage: MLPerf™ Storage benchmark suite (Shell, 33 stars)
20. inference_results_v1.0: Results and code for the MLPerf™ Inference v1.0 benchmark (C++, 31 stars)
21. mlcube_examples: MLCube® examples (Python, 30 stars)
22. chakra: Repository for the MLCommons Chakra schema and tools (Python, 30 stars)
23. mobile_app_open: Mobile App Open (C++, 30 stars)
24. training_results_v2.0: Results and code for the MLPerf™ Training v2.0 benchmark (C++, 27 stars)
25. modelgauge: Makes it easy to automatically and uniformly measure the behavior of many AI systems (Python, 25 stars)
26. policies: General policies for MLPerf™, including submission rules and coding standards (Python, 24 stars)
27. training_results_v1.1: Results and code for the MLPerf™ Training v1.1 benchmark (Python, 23 stars)
28. mobile_models: MLPerf™ Mobile models (22 stars)
29. logging: MLPerf™ logging library (Python, 20 stars)
30. inference_results_v2.1: Results and code for the MLPerf™ Inference v2.1 benchmark (19 stars)
31. ck-mlops: A collection of portable workflows, automation recipes, and components for MLOps in a unified CK format; now outdated in favor of the second generation of the CK workflow automation meta-framework with portable MLOps and DevOps components (Python, 18 stars)
32. inference_results_v0.7: Results and code for the MLPerf™ Inference v0.7 benchmark (C++, 17 stars)
33. inference_results_v3.0: Results and code for the MLPerf™ Inference v3.0 benchmark (16 stars)
34. training_results_v2.1: Results and code for the MLPerf™ Training v2.1 benchmark (C++, 15 stars)
35. power-dev: Development repository for power measurement for the MLPerf™ benchmarks (Python, 14 stars)
36. medical: Medical ML benchmark (Python, 13 stars)
37. dynabench (Python, 12 stars)
38. training_results_v3.0: Results and code for the MLPerf™ Training v3.0 benchmark (Python, 11 stars)
39. tiny_results_v0.7: Results and code for the MLPerf™ Tiny Inference v0.7 benchmark (C, 11 stars)
40. inference_results_v1.1: Results and code for the MLPerf™ Inference v1.1 benchmark (Python, 11 stars)
41. inference_results_v4.0: Results and code for the MLPerf™ Inference v4.0 benchmark (9 stars)
42. dataperf: Data benchmarking (8 stars)
43. mobile_open: MLPerf Mobile benchmarks (Python, 7 stars)
44. science: https://mlcommons.org/en/groups/research-science/ (Jupyter Notebook, 7 stars)
45. tiny_results_v0.5: Results and code for the MLPerf™ Tiny Inference v0.5 benchmark (C++, 5 stars)
46. inference_results_v3.1: Results and code for the MLPerf™ Inference v3.1 benchmark (5 stars)
47. tiny_results_v1.0: Results and code for the MLPerf™ Tiny Inference v1.0 benchmark (C, 4 stars)
48. hpc_results_v0.7: Results and code for the MLPerf™ HPC Training v0.7 benchmark (Python, 3 stars)
49. hpc_results_v2.0: Results and code for the MLPerf™ HPC Training v2.0 benchmark (Python, 3 stars)
50. hpc_results_v1.0: Results and code for the MLPerf™ HPC Training v1.0 benchmark (Python, 3 stars)
51. ck-venv: CK automation for virtual environments (Python, 2 stars)
52. cm-mlops (Python, 2 stars)
53. datasets_infra (2 stars)
54. training_results_v3.1: Results and code for the MLPerf™ Training v3.1 benchmark (Python, 1 star)
55. research (1 star)
56. tiny_results_v1.1: Results and code for the MLPerf™ Tiny Inference v1.1 benchmark (C, 1 star)
57. medperf-website (JavaScript, 1 star)
58. mobile_results_v2.1: Results and code for the MLPerf™ Mobile Inference v2.1 benchmark (1 star)
59. hpc_results_v3.0: Results and code for the MLPerf™ HPC Training v3.0 benchmark (Python, 1 star)
60. inference_results_v2.0 (Python, 1 star)
61. ck_mlperf_results: Aggregated benchmarking results from MLPerf Inference, Tiny, and Training in the MLCommons CM format for the Collective Knowledge Playground; the goal is to make it easier for the community to visualize, compare, and reproduce MLPerf results and to add derived metrics such as performance/watt or performance/$ (Python, 1 star)