This repository introduces several optimization techniques that can be applied to speed up matrix multiplication: loop unrolling, loop reordering, loop tiling, multithreading, SIMD programming, and CUDA programming. Each technique is implemented in a separate source file (*.cpp inside src/), and all implementations share the common header file matmul.h. A benchmark.cpp and a Makefile are also provided to compile and benchmark the different matrix multiplication implementations.
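To give a flavor of these techniques before diving into the sources, here is a minimal, self-contained C++ sketch (not code from src/; all names and signatures are illustrative) that contrasts a naive triple loop with a loop-reordered variant that improves cache locality:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Naive triple loop: C = A * B for n x n row-major matrices.
// The innermost loop reads B column-wise, which has poor cache locality.
void matmul_naive(const std::vector<float>& A, const std::vector<float>& B,
                  std::vector<float>& C, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            float sum = 0.0f;
            for (std::size_t k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}

// Loop reordering (i-k-j): the innermost loop now walks B and C row-wise,
// so consecutive iterations touch contiguous memory. C must start zeroed.
void matmul_reordered(const std::vector<float>& A, const std::vector<float>& B,
                      std::vector<float>& C, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k) {
            const float a = A[i * n + k];
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += a * B[k * n + j];
        }
}

int main() {
    const std::size_t n = 4;
    std::vector<float> A(n * n, 1.0f), B(n * n, 2.0f);
    std::vector<float> C1(n * n, 0.0f), C2(n * n, 0.0f);

    matmul_naive(A, B, C1, n);
    matmul_reordered(A, B, C2, n);

    // Both variants compute the same result; each entry should be 2 * n = 8.
    std::cout << "naive:     C[0] = " << C1[0] << "\n";
    std::cout << "reordered: C[0] = " << C2[0] << "\n";
    return 0;
}
```

On realistic matrix sizes, the reordered loop is usually noticeably faster than the naive one, even though both perform the same arithmetic; the other techniques in src/ build on similar ideas (blocking for cache, running rows in parallel, or vectorizing the inner loop).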
If you want to learn more about optimization techniques for efficient deep learning, please check out the lectures on TinyML and Efficient Deep Learning Computing.
Here is an outline of the main files and directories:
├── src
│   ├── loop_unrolling.cpp
│   ├── loop_reordering.cpp
│   ├── loop_tiling.cpp
│   ├── naive.cpp
│   ├── multithreading.cpp
│   ├── SIMD_programming.cpp
│   └── cuda_programming.cpp
├── include
│   └── matmul.h
├── benchmark.cpp
└── Makefile
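All implementations in src/ include the shared header include/matmul.h. Its actual contents are not reproduced here; the following is purely a hypothetical sketch of what such a shared interface could look like (the function names and the row-major float* representation are assumptions, not the repository's real API):

```cpp
// matmul.h -- illustrative sketch only; the real header may differ.
#pragma once
#include <cstddef>

// Each implementation computes C = A * B for n x n row-major matrices.
// (Hypothetical declarations; see include/matmul.h for the real ones.)
void matmul_naive(const float* A, const float* B, float* C, std::size_t n);
void matmul_loop_unrolling(const float* A, const float* B, float* C, std::size_t n);
void matmul_loop_reordering(const float* A, const float* B, float* C, std::size_t n);
void matmul_loop_tiling(const float* A, const float* B, float* C, std::size_t n);
void matmul_multithreading(const float* A, const float* B, float* C, std::size_t n);
void matmul_simd(const float* A, const float* B, float* C, std::size_t n);
void matmul_cuda(const float* A, const float* B, float* C, std::size_t n);
```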
To compile and run the examples, you will need:
- A C++ compiler (GCC, Clang, MSVC, etc.)
- CUDA Toolkit (optional; only needed if you want to build the CUDA implementation)
To compile the code, navigate to the repository root and execute:
make -j
This will produce an executable named benchmark.
To run the benchmark, execute:
./benchmark
The benchmark will run matrix multiplication using all techniques and output the time taken by each technique.
You can also measure the performance improvement achieved by a specific technique by passing an extra argument. Available arguments are:
- CUDA
- SIMD_programming
- loop_reordering
- loop_tiling
- loop_unrolling
- multithreading
For example, to measure the performance improvement of the CUDA kernel:
./benchmark CUDA
We welcome contributions! If you have a suggestion, bug report, or want to contribute to the code, feel free to open an issue or create a pull request. Please make sure your code follows the current code style.
This project is open-source and is licensed under the MIT License.
If you have any questions or suggestions, feel free to open an issue or reach out to the maintainers.
We would like to thank everyone who has contributed to this repository and provided feedback and bug reports, making this project possible.