• Stars: 149
• Rank: 248,704 (Top 5%)
• Language: Python
• License: Apache License 2.0
• Created: about 5 years ago
• Updated: about 1 year ago

Repository Details

MLCube® is a project that reduces friction for machine learning by ensuring that models are easily portable and reproducible.

MLCube

MLCube® brings the concept of interchangeable parts to the world of machine learning models. It is the shipping container that enables researchers and developers to easily share the software that powers machine learning.

MLCube is a set of common conventions for creating ML software that can just "plug-and-play" on many systems. MLCube makes it easier for researchers to share innovative ML models, for a developer to experiment with many models, and for software companies to create infrastructure for models. It creates opportunities by putting ML in the hands of more people.

MLCube isn’t a new framework or service; it is a consistent interface to machine learning models in containers like Docker. Models published with the MLCube interface can be run on local machines, on a variety of major clouds, or in Kubernetes clusters, all using the same code. MLCommons provides open source “runners” for each of these environments that make training a model in an MLCube a single command.
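
For illustration, here is a minimal sketch of what that single command looks like. The task name (train) and the local path are hypothetical; exact flags and platform names depend on your MLCube version and on the tasks defined in the cube's mlcube.yaml.

# Run a cube's "train" task from the current directory with the Docker runner
mlcube run --mlcube=. --task=train --platform=docker

# The same cube, unchanged, with the Singularity runner instead
mlcube run --mlcube=. --task=train --platform=singularity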

Note: This project is still in its early stages and under active development; some parts may have unexpected or inconsistent behavior.

Installing MLCube

Install from PyPI:

pip install mlcube

To uninstall:

pip uninstall mlcube
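
The Docker and Singularity runners are distributed as separate PyPI packages. Assuming the package names mlcube-docker and mlcube-singularity (check PyPI for the current names), installing them looks like this:

# Docker runner (requires a working Docker installation)
pip install mlcube-docker

# Singularity runner (optional)
pip install mlcube-singularity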

Usage Examples

Check out the mlcube_examples repository for detailed examples, and see the MLCube wiki for more documentation.
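
As an end-to-end sketch, the steps below run one of the cubes from the mlcube_examples repository. The directory and task names (hello_world, hello) are assumptions based on that repository's layout and may differ:

# Fetch the examples and enter one of the cubes
git clone https://github.com/mlcommons/mlcube_examples.git
cd mlcube_examples/hello_world

# Build or pull the cube's Docker image, then run a task defined in its mlcube.yaml
mlcube configure --mlcube=. --platform=docker
mlcube run --mlcube=. --task=hello --platform=docker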

License

MLCube is licensed under the Apache License 2.0.

See LICENSE for more information.

MLCube is a trademark of the MLCommons® Association.

Support

Create a GitHub issue

More Repositories

1. training (Python, 1,495 stars): Reference implementations of MLPerf™ training benchmarks
2. inference (Python, 966 stars): Reference implementations of MLPerf™ inference benchmarks
3. ck (Python, 605 stars): Collective Knowledge (CK) is an educational community project to learn how to run AI, ML and other emerging workloads in the most efficient and cost-effective way across diverse models, data sets, software and hardware using MLCommons CM (Collective Mind workflow automation framework)
4. tiny (C++, 293 stars): MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers
5. algorithmic-efficiency (Python, 210 stars): MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models
6. GaNDLF (Python, 154 stars): A generalizable application framework for segmentation, regression, and classification using PyTorch
7. medperf (Python, 144 stars): An open benchmarking platform for medical artificial intelligence using Federated Evaluation
8. peoples-speech (Jupyter Notebook, 96 stars): The People’s Speech Dataset
9. training_policies (Python, 91 stars): Issues related to MLPerf™ training policies, including rules and suggested changes
10. training_results_v0.7 (Python, 58 stars): Results and code for the MLPerf™ Training v0.7 benchmark
11. inference_results_v0.5 (C++, 56 stars): Results and code for the MLPerf™ Inference v0.5 benchmark
12. modelbench (Python, 53 stars): Run safety benchmarks against AI models and view detailed reports showing how well they performed
13. inference_policies (50 stars): Issues related to MLPerf™ Inference policies, including rules and suggested changes
14. training_results_v0.6 (Python, 42 stars): Results and code for the MLPerf™ Training v0.6 benchmark
15. croissant (Jupyter Notebook, 42 stars): Croissant is a high-level format for machine learning datasets that brings together four rich layers
16. training_results_v0.5 (Python, 36 stars): Results and code for the MLPerf™ Training v0.5 benchmark
17. training_results_v1.0 (Python, 36 stars): Results and code for the MLPerf™ Training v1.0 benchmark
18. hpc (Jupyter Notebook, 33 stars): Reference implementations of MLPerf™ HPC training benchmarks
19. storage (Shell, 33 stars): MLPerf™ Storage Benchmark Suite
20. inference_results_v1.0 (C++, 31 stars): Results and code for the MLPerf™ Inference v1.0 benchmark
21. mlcube_examples (Python, 30 stars): MLCube® examples
22. chakra (Python, 30 stars): Repository for MLCommons Chakra schema and tools
23. mobile_app_open (C++, 30 stars): Mobile App Open
24. training_results_v2.0 (C++, 27 stars): Results and code for the MLPerf™ Training v2.0 benchmark
25. modelgauge (Python, 25 stars): Make it easy to automatically and uniformly measure the behavior of many AI systems
26. policies (Python, 24 stars): General policies for MLPerf™, including submission rules, coding standards, etc.
27. training_results_v1.1 (Python, 23 stars): Results and code for the MLPerf™ Training v1.1 benchmark
28. mobile_models (22 stars): MLPerf™ Mobile models
29. logging (Python, 20 stars): MLPerf™ logging library
30. inference_results_v2.1 (19 stars): Results and code for the MLPerf™ Inference v2.1 benchmark
31. ck-mlops (Python, 17 stars): A collection of portable workflows, automation recipes and components for MLOps in a unified CK format. Note that this repository is outdated; see the 2nd generation of the CK workflow automation meta-framework with portable MLOps and DevOps components
32. inference_results_v0.7 (C++, 17 stars): Results and code for the MLPerf™ Inference v0.7 benchmark
33. inference_results_v3.0 (16 stars): Results and code for the MLPerf™ Inference v3.0 benchmark
34. training_results_v2.1 (C++, 15 stars): Results and code for the MLPerf™ Training v2.1 benchmark
35. power-dev (Python, 14 stars): Dev repo for power measurement for the MLPerf™ benchmarks
36. medical (Python, 13 stars): Medical ML Benchmark
37. dynabench (Python, 12 stars)
38. training_results_v3.0 (Python, 11 stars): Results and code for the MLPerf™ Training v3.0 benchmark
39. tiny_results_v0.7 (C, 11 stars): Results and code for the MLPerf™ Tiny Inference v0.7 benchmark
40. inference_results_v1.1 (Python, 11 stars): Results and code for the MLPerf™ Inference v1.1 benchmark
41. inference_results_v4.0 (9 stars): Results and code for the MLPerf™ Inference v4.0 benchmark
42. dataperf (8 stars): Data Benchmarking
43. inference_results_v2.0 (Python, 8 stars): Results and code for the MLPerf™ Inference v2.0 benchmark
44. mobile_open (Python, 7 stars): MLPerf Mobile benchmarks
45. science (Jupyter Notebook, 7 stars): https://mlcommons.org/en/groups/research-science/
46. tiny_results_v0.5 (C++, 5 stars): Results and code for the MLPerf™ Tiny Inference v0.5 benchmark
47. inference_results_v3.1 (5 stars): Results and code for the MLPerf™ Inference v3.1 benchmark
48. tiny_results_v1.0 (C, 4 stars): Results and code for the MLPerf™ Tiny Inference v1.0 benchmark
49. hpc_results_v0.7 (Python, 3 stars): Results and code for the MLPerf™ HPC Training v0.7 benchmark
50. hpc_results_v2.0 (Python, 3 stars): Results and code for the MLPerf™ HPC Training v2.0 benchmark
51. hpc_results_v1.0 (Python, 3 stars): Results and code for the MLPerf™ HPC Training v1.0 benchmark
52. ck-venv (Python, 2 stars): CK automation for virtual environments
53. cm-mlops (Python, 2 stars)
54. datasets_infra (2 stars)
55. training_results_v3.1 (Python, 1 star): Results and code for the MLPerf™ Training v3.1 benchmark
56. research (1 star)
57. tiny_results_v1.1 (C, 1 star): Results and code for the MLPerf™ Tiny Inference v1.1 benchmark
58. medperf-website (JavaScript, 1 star)
59. mobile_results_v2.1 (1 star): Results and code for the MLPerf™ Mobile Inference v2.1 benchmark
60. hpc_results_v3.0 (Python, 1 star): Results and code for the MLPerf™ HPC Training v3.0 benchmark
61. ck_mlperf_results (Python, 1 star): Aggregated benchmarking results from MLPerf Inference, Tiny and Training in the MLCommons CM format for the Collective Knowledge Playground. Our goal is to make it easier for the community to visualize, compare and reproduce MLPerf results and add derived metrics such as Performance/Watt or Performance/$