• Stars: 505
• Rank: 87,373 (Top 2%)
• Language: C++
• License: BSD 3-Clause "New...
• Created: almost 10 years ago
• Updated: about 4 years ago

Repository Details

Reliable Allreduce and Broadcast Interface for distributed machine learning

Rabit: Reliable Allreduce and Broadcast Interface

Recent developments of Rabit have been moved into dmlc/xgboost. See discussion in dmlc/xgboost#5995.

rabit is a lightweight library that provides a fault-tolerant interface for Allreduce and Broadcast. It is designed to support easy implementations of distributed machine learning programs, many of which fall naturally under the Allreduce abstraction. The goal of rabit is to support portable, scalable and reliable distributed machine learning programs.

Features

All these features come from facts about small rabbits :)

  • Portable: rabit is lightweight and runs everywhere
    • Rabit is a library instead of a framework; a program only needs to link the library to run
    • Rabit only relies on a mechanism to start programs, which is provided by most frameworks
    • You can run rabit programs on many platforms, including Yarn (Hadoop) and MPI, using the same code
  • Scalable and Flexible: rabit runs fast
    • Rabit programs use Allreduce to communicate, and do not suffer the between-iteration cost of the MapReduce abstraction.
    • Programs can call rabit functions in any order, as opposed to frameworks where callbacks are offered and called by the framework, i.e. the inversion of control principle.
    • Programs persist over all the iterations, unless they fail and recover.
  • Reliable: rabit digs burrows to avoid disasters
    • Rabit programs can recover the model and results using synchronous function calls.
    • Rabit programs can set rabit_bootstrap_cache=1 to support allreduce/broadcast operations before loadcheckpoint: rabit::Init(); -> rabit::AllReduce(); -> rabit::loadCheckpoint(); -> for () { rabit::AllReduce(); rabit::Checkpoint(); } -> rabit::Shutdown(); (a C++ sketch of this flow follows this list)
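
The flow above can be sketched in C++ roughly as follows. This is a minimal illustration, not rabit's reference code: it assumes the documented entry points rabit::Init, rabit::LoadCheckPoint, rabit::Allreduce, rabit::CheckPoint and rabit::Finalize, plus a hypothetical Model type implementing rabit::Serializable; the gradient computation itself is elided.

    #include <rabit/rabit.h>
    #include <vector>

    // Hypothetical global model; Load/Save make it checkpointable by the rabit engine.
    struct Model : public rabit::Serializable {
      std::vector<float> weights;
      void Load(rabit::Stream *fi) { fi->Read(&weights); }
      void Save(rabit::Stream *fo) const { fo->Write(weights); }
    };

    int main(int argc, char *argv[]) {
      rabit::Init(argc, argv);
      Model model;
      // Returns the round to resume from: 0 on a fresh start,
      // or the last checkpointed version after a worker restarts.
      int start_round = rabit::LoadCheckPoint(&model);
      if (start_round == 0) model.weights.assign(16, 0.0f);
      std::vector<float> grad(model.weights.size());
      for (int r = start_round; r < 100; ++r) {
        // ... compute grad from the local data partition ...
        // Sum the gradients of all workers in place.
        rabit::Allreduce<rabit::op::Sum>(grad.data(), grad.size());
        for (size_t i = 0; i < grad.size(); ++i) {
          model.weights[i] -= 0.1f * grad[i];
        }
        // Record the model so a restarted worker can resume at this round.
        rabit::CheckPoint(&model);
      }
      rabit::Finalize();  // the sequence above refers to this step as Shutdown
      return 0;
    }

Under this pattern, a worker that fails and is restarted replays LoadCheckPoint, recovers the last committed model, and continues from start_round instead of from scratch.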

Use Rabit

  • Type make in the root folder to compile the rabit library into the lib folder
  • Add lib to the library path and include to the compiler's include path
  • Languages: You can use rabit in C++ and Python (a minimal C++ example follows this list)
    • It is also possible to port the library to other languages
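
To show what a linked program looks like, here is a minimal example modelled on rabit's basic guide example. The header path rabit/rabit.h and the build flags are assumptions that depend on where you placed include and lib (e.g. -Iinclude for headers and -Llib -lrabit for linking).

    #include <rabit/rabit.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char *argv[]) {
      rabit::Init(argc, argv);  // start the rabit engine; connects to peers when run distributed
      const int kN = 3;
      std::vector<int> a(kN);
      for (int i = 0; i < kN; ++i) {
        a[i] = rabit::GetRank() + i;  // each worker starts with different values
      }
      // Element-wise maximum over all workers; every worker gets the same result.
      rabit::Allreduce<rabit::op::Max>(a.data(), kN);
      std::printf("@node[%d] after-allreduce-max: {%d, %d, %d}\n",
                  rabit::GetRank(), a[0], a[1], a[2]);
      // Element-wise sum over all workers.
      rabit::Allreduce<rabit::op::Sum>(a.data(), kN);
      std::printf("@node[%d] after-allreduce-sum: {%d, %d, %d}\n",
                  rabit::GetRank(), a[0], a[1], a[2]);
      rabit::Finalize();
      return 0;
    }

When launched on one of the platforms mentioned above (for example Yarn or MPI), every worker runs the same binary and the Allreduce results are identical on all of them; run as a single process, the collectives simply operate on the local data.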

Contributing

Rabit is an open-source library; contributions are welcome, including:

  • The rabit core library.
  • Customized tracker scripts for new platforms and interfaces for new languages.
  • Tutorials and examples for the library.

More Repositories

1. xgboost (C++, 26,028 stars): Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
2. dgl (Python, 13,511 stars): Python package built to ease deep learning on graph, on top of existing DL frameworks.
3. gluon-cv (Python, 5,821 stars): Gluon CV Toolkit
4. gluon-nlp (Python, 2,553 stars): NLP made easy
5. decord (C++, 1,772 stars): An efficient video loader for deep learning with smart shuffling that's super easy to digest
6. nnvm (C++, 1,657 stars)
7. ps-lite (C++, 1,525 stars): A lightweight parameter server interface
8. minpy (Python, 1,109 stars): NumPy interface with mixed backend execution
9. mshadow (C++, 1,106 stars): Matrix Shadow: Lightweight CPU/GPU Matrix and Tensor Template Library in C++/CUDA for (Deep) Machine Learning
10. cxxnet (C++, 1,025 stars): move forward to https://github.com/dmlc/mxnet
11. dlpack (Python, 885 stars): common in-memory tensor structure
12. dmlc-core (C++, 862 stars): A common bricks library for building scalable and portable distributed machine learning.
13. treelite (C++, 729 stars): Universal model exchange and serialization format for decision tree forests
14. minerva (C++, 698 stars): Minerva: a fast and flexible tool for deep learning on multi-GPU. It provides ndarray programming interface, just like Numpy. Python bindings and C++ bindings are both available. The resulting code can be run on CPU or GPU. Multi-GPU support is very easy.
15. parameter_server (C++, 648 stars): moved to https://github.com/dmlc/ps-lite
16. mxnet-notebooks (Jupyter Notebook, 615 stars): Notebooks for MXNet
17. mxnet.js (JavaScript, 435 stars): MXNetJS: Javascript Package for Deep Learning in Browser (without server)
18. MXNet.jl (371 stars): MXNet Julia Package - flexible and efficient deep learning in Julia
19. tensorboard (Python, 369 stars): Standalone TensorBoard for visualizing in deep learning
20. wormhole (C++, 338 stars): Deprecated
21. mxnet-memonger (Python, 308 stars): Sublinear memory optimization for deep learning, reduce GPU memory cost to train deeper nets
22. difacto (C++, 296 stars): Distributed Factorization Machines
23. XGBoost.jl (Julia, 288 stars): XGBoost Julia Package
24. mxnet-model-gallery (266 stars): Pre-trained Models of DMLC Project
25. GNNLens2 (TypeScript, 232 stars): Visualization tool for Graph Neural Networks
26. HalideIR (C++, 205 stars): Symbolic Expression and Statement Module for new DSLs
27. mxnet-gtc-tutorial (Jupyter Notebook, 131 stars): MXNet Tutorial for NVidia GTC 2016.
28. experimental-lda (C++, 127 stars)
29. MXNet.cpp (C++, 114 stars): C++ interface for mxnet
30. experimental-mf (C++, 87 stars): cache-friendly multithread matrix factorization
31. web-data (Jupyter Notebook, 83 stars): The repo to host all the web data including images for documents in dmlc projects.
32. nnvm-fusion (C++, 70 stars): Kernel Fusion and Runtime Compilation Based on NNVM
33. dmlc.github.io (HTML, 27 stars)
34. tl2cgen (C++, 21 stars): TL2cgen (TreeLite 2 C GENerator) is a model compiler for decision tree models
35. cub (Cuda, 19 stars)
36. mxnet-deepmark (Python, 7 stars): Benchmark speed and other issues internally, before push to deep-mark
37. mxnet-examples (6 stars): MXNet Example
38. xgboost-bench (Python, 4 stars)
39. drat (4 stars): Drat Repository for DMLC R packages
40. nn-examples (1 star)
41. gluon-nlp-notebooks (1 star)
42. docs-redirect-for-mxnet (Python, 1 star): redirect mxnet.readthedocs.io to mxnet.io