  • Stars: 134
  • Rank: 270,967 (Top 6%)
  • Language: C++
  • License: MIT License
  • Created: over 1 year ago
  • Updated: over 1 year ago

Repository Details

Parallel Computing Tutorial

This repository introduces several optimization techniques that can be applied to improve the parallelism of matrix multiplication. The techniques include loop unrolling, loop reordering, loop tiling, multithreading, SIMD programming, and CUDA programming. Each technique is implemented in a separate source file (*.cpp inside src/), and all techniques share the common header file matmul.h. In addition, we provide benchmark.cpp and a Makefile to compile and benchmark the different matrix multiplication implementations.
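
For a concrete flavor of the techniques, below is a minimal C++ sketch contrasting the naive (i-j-k) loop order with the loop-reordered (i-k-j) variant. This is an illustration only, not the repository's actual code; it assumes square n x n matrices stored as row-major float arrays, with C zero-initialized by the caller.

// Minimal sketch of two of the techniques; not the repository's actual code.
// Assumes row-major n x n float matrices, with C zero-initialized by the caller.

// Naive ordering (i-j-k): the innermost loop walks down a column of B,
// which is cache-unfriendly for row-major storage.
void matmul_naive(const float *A, const float *B, float *C, int n) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            float sum = 0.0f;
            for (int k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}

// Loop reordering (i-k-j): B and C are now traversed row-wise, so
// consecutive inner iterations touch contiguous memory.
void matmul_reordered(const float *A, const float *B, float *C, int n) {
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k) {
            const float a = A[i * n + k];
            for (int j = 0; j < n; ++j)
                C[i * n + j] += a * B[k * n + j];
        }
}

Loop tiling pushes the same idea further by operating on cache-sized blocks, while the remaining techniques (loop unrolling, multithreading, SIMD, and CUDA) unroll, parallelize, or vectorize these loops; each variant lives in its own file under src/.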

Learning Resources

If you want to learn more about optimization techniques for efficient deep learning, please check out the lectures on TinyML and Efficient Deep Learning Computing.

Directory Structure

Here is an outline of the main files and directories:

β”œβ”€β”€ src
β”‚   β”œβ”€β”€ loop_unrolling.cpp
β”‚   β”œβ”€β”€ loop_reordering.cpp
β”‚   β”œβ”€β”€ loop_tiling.cpp
β”‚   β”œβ”€β”€ naive.cpp
β”‚   β”œβ”€β”€ multithreading.cpp
β”‚   β”œβ”€β”€ SIMD_programming.cpp
β”‚   └── cuda_programming.cpp
β”œβ”€β”€ include
β”‚   └── matmul.h
β”œβ”€β”€ benchmark.cpp
└── Makefile
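
Since every variant includes the shared header, one plausible layout is a uniform signature per technique so benchmark.cpp can time them interchangeably. The declarations below are a hypothetical sketch only; the actual contents of include/matmul.h may differ.

// Hypothetical sketch of a shared header; the real include/matmul.h may
// declare different names, types, or matrix representations.
#pragma once

void matmul_naive(const float *A, const float *B, float *C, int n);
void matmul_loop_unrolling(const float *A, const float *B, float *C, int n);
void matmul_loop_reordering(const float *A, const float *B, float *C, int n);
void matmul_loop_tiling(const float *A, const float *B, float *C, int n);
void matmul_multithreading(const float *A, const float *B, float *C, int n);
void matmul_simd(const float *A, const float *B, float *C, int n);
void matmul_cuda(const float *A, const float *B, float *C, int n);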

Prerequisites

To compile and run the examples, you will need:

  • A C++ compiler (GCC, Clang, MSVC, etc.)
  • CUDA Toolkit (optional; only required if you want to build and run the CUDA example)

Compilation

To compile the code, navigate to the repository root and execute:

make -j

This will produce an executable named benchmark.

Running the Benchmarks

To run the benchmark, execute:

./benchmark

The benchmark runs matrix multiplication with every technique and reports the time taken by each.
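
Conceptually, each measurement just wraps a call to one implementation in a wall-clock timer. The snippet below is a hedged sketch of how such a timing step could look using std::chrono; the actual benchmark.cpp may be structured differently, and matmul_naive here refers to the illustrative function sketched earlier, not necessarily the repository's symbol.

// Sketch of how one implementation could be timed; benchmark.cpp itself may differ.
#include <chrono>
#include <cstdio>
#include <vector>

// Declaration of the illustrative kernel from the sketch above (hypothetical).
void matmul_naive(const float *A, const float *B, float *C, int n);

int main() {
    const int n = 1024;  // assumed problem size; the real benchmark may use another
    std::vector<float> A(n * n, 1.0f), B(n * n, 1.0f), C(n * n, 0.0f);

    auto start = std::chrono::steady_clock::now();
    matmul_naive(A.data(), B.data(), C.data(), n);
    auto end = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(end - start).count();
    std::printf("naive: %.2f ms\n", ms);
    return 0;
}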

You can also measure the performance improvement achieved by a specific technique by passing it as an extra argument. Available arguments are:

  • CUDA
  • SIMD_programming
  • loop_reordering
  • loop_tiling
  • loop_unrolling
  • multithreading

For example, to measure the performance improvement of the CUDA kernel:

./benchmark CUDA

Contributions

We welcome contributions! If you have a suggestion or bug report, or would like to contribute code, feel free to open an issue or create a pull request. Please make sure your code follows the existing code style.

License

This project is open source and licensed under the MIT License.

Contact

If you have any questions or suggestions, feel free to open an issue or reach out to the maintainers.

Acknowledgements

We would like to thank everyone who contributed to this repository by providing feedback and bug reports; their support made this project possible.

More Repositories

1. streaming-llm (Python, 6,530 stars): [ICLR 2024] Efficient Streaming Language Models with Attention Sinks
2. bevfusion (Python, 2,286 stars): [ICRA'23] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation
3. temporal-shift-module (Python, 2,060 stars): [ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding
4. once-for-all (Python, 1,866 stars): [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment
5. llm-awq (Python, 1,687 stars): AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
6. proxylessnas (C++, 1,420 stars): [ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
7. torchquantum (Jupyter Notebook, 1,304 stars): A PyTorch-based framework for Quantum Classical Simulation, Quantum Machine Learning, Quantum Neural Networks, and Parameterized Quantum Circuits, with support for easy deployment on real quantum computers
8. data-efficient-gans (Python, 1,277 stars): [NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training
9. efficientvit (Python, 1,218 stars): EfficientViT is a new family of vision models for efficient high-resolution vision
10. torchsparse (Cuda, 1,181 stars): [MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs
11. smoothquant (Python, 1,175 stars): [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
12. gan-compression (Python, 1,104 stars): [CVPR 2020] GAN Compression: Efficient Architectures for Interactive Conditional GANs
13. anycost-gan (Python, 778 stars): [CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
14. tinyml (Python, 755 stars)
15. TinyChatEngine (C++, 730 stars): TinyChatEngine: On-Device LLM Inference Library
16. tinyengine (C, 717 stars): [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
17. fastcomposer (Python, 644 stars): [IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention
18. pvcnn (Python, 639 stars): [NeurIPS 2019, Spotlight] Point-Voxel CNN for Efficient 3D Deep Learning
19. lite-transformer (Python, 589 stars): [ICLR 2020] Lite Transformer with Long-Short Range Attention
20. spvnas (Python, 577 stars): [ECCV 2020] Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution
21. distrifuser (Python, 538 stars): [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
22. mcunet (Python, 460 stars): [NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
23. tiny-training (Python, 432 stars): On-Device Training Under 256KB Memory [NeurIPS'22]
24. amc (Python, 428 stars): [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
25. dlg (Python, 400 stars): [NeurIPS 2019] Deep Leakage From Gradients
26. haq (Python, 368 stars): [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision
27. offsite-tuning (Python, 365 stars): Offsite-Tuning: Transfer Learning without Full Model
28. hardware-aware-transformers (Python, 321 stars): [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing
29. litepose (Python, 304 stars): [CVPR'22] Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation
30. inter-operator-scheduler (C++, 191 stars): [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration
31. amc-models (Python, 166 stars): [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
32. apq (Python, 156 stars): [CVPR 2020] APQ: Joint Search for Network Architecture, Pruning and Quantization Policy
33. flatformer (Python, 119 stars): [CVPR'23] FlatFormer: Flattened Window Attention for Efficient Point Cloud Transformer
34. patch_conv (Python, 74 stars): Patch convolution to avoid large GPU memory usage of Conv2D
35. 6s965-fall2022 (Jupyter Notebook, 64 stars)
36. sparsevit (Python, 48 stars): [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer
37. bnn-icestick (Jupyter Notebook, 47 stars): Binary Neural Network on IceStick FPGA
38. e3d (46 stars): Efficient 3D Deep Learning
39. neurips-micronet (Jupyter Notebook, 40 stars): [JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion
40. spatten-llm (Scala, 32 stars): [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
41. tinychat-tutorial (C++, 28 stars)
42. pruning-sparsity-publications (14 stars)
43. iccad-tinyml-open (C, 14 stars): [ICCAD'22 TinyML Contest] Efficient Heart Stroke Detection on Low-cost Microcontrollers
44. calo-cluster (Jupyter Notebook, 5 stars)
45. ml-blood-pressure (Python, 5 stars)
46. gan-compression-dynamic (Python, 3 stars)
47. data-efficient-gans-dynamic (Python, 3 stars)