• Stars: 297
• Rank: 140,075 (Top 3%)
• Language: C++
• License: BSD 3-Clause "New...
• Created: over 6 years ago
• Updated: over 2 years ago


Repository Details

Scalable systolic array-based matrix-matrix multiplication implemented in Vivado HLS for Xilinx FPGAs.

Scalable matrix-matrix multiplication on FPGA


This repository includes a pure Vitis HLS implementation of matrix-matrix multiplication (A*B=C) for Xilinx FPGAs, using Xilinx Vitis to instantiate memory and PCIe controllers and interface with the host.

Experiments run on a VCU1525 achieved 462 GFLOP/s, 301 GFLOP/s, and 132 GFLOP/s for half, single, and double precision, respectively, with routing across the three SLRs being the primary bottleneck preventing further scaling. The code is not device-specific and can be configured for any Xilinx FPGA supported by the Xilinx OpenCL runtime. Kernels have also been verified to execute on TUL KU115, Alveo U250, and Alveo U280 boards with similar results.

The implementation uses a systolic array approach, where linearly connected processing elements compute distinct contributions to the outer product of tiles of the output matrix.
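
To give an intuition for this structure, the following is a minimal software sketch, not the kernel code; the names PE and OuterProductStep and the fixed sizes are illustrative only. Each processing element buffers one value of A and applies it to a vector of B values, so a pass of B through the chain accumulates one outer product contribution to a tile of C. In the real design the elements run concurrently and are parameterized by MM_PARALLELISM_N and MM_PARALLELISM_M.

#include <array>

// Illustrative stand-ins for MM_PARALLELISM_N (number of processing
// elements) and MM_PARALLELISM_M (vector width per element).
constexpr int kNumPEs = 4;
constexpr int kVectorWidth = 8;

// One processing element: buffers a single value of A and accumulates
// its row of the output tile for every vector of B streaming past.
struct PE {
  float a = 0.0f;
  std::array<float, kVectorWidth> c{};

  void Update(const std::array<float, kVectorWidth> &b) {
    for (int j = 0; j < kVectorWidth; ++j) {
      c[j] += a * b[j];  // map = multiplication, reduce = addition
    }
  }
};

// One outer product step: the same vector of B is seen by every element in
// the linear chain, and each element contributes a distinct row of the C
// tile. The loop is sequential only because this is plain software; in
// hardware the processing elements operate concurrently.
void OuterProductStep(std::array<PE, kNumPEs> &chain,
                      const std::array<float, kVectorWidth> &b) {
  for (auto &pe : chain) {
    pe.Update(b);
  }
}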

The approach used to implement this kernel was presented at FPGA'20 [1]. For a general description of the optimization techniques that we apply, we refer to our article on HLS transformations [2]. We have also given tutorials on HLS for HPC at SC'21, ISC'21, SC'20, HiPEAC'20, SC'19, SC'18, and PPoPP'18.

Downloading the code

This project uses the open source Vivado HLS extension library hlslib [3] for simulation, vectorization, finding Xilinx tools, host-side integration and more.

Since hlslib is included as a submodule, make sure you clone with --recursive or grab it after cloning with:

git submodule update --init 

Prerequisites

To build and run kernels in hardware, Xilinx Vitis must be installed and available on the PATH (tested on Alveo U250 and Alveo U280 with version 2021.1).

Configuration and running

This project is configured and built using CMake. Most parameters must be set at configuration time, as they are used to specialize the hardware.

An example of configuring and building the kernel and executing it in hardware is shown below (starting from the source directory):

mkdir build
cd build
cmake ../ -DMM_DATA_TYPE=float -DMM_PARALLELISM_N=32 -DMM_PARALLELISM_M=8 -DMM_MEMORY_TILE_SIZE_N=512 -DMM_MEMORY_TILE_SIZE_M=512
make
make hw
./RunHardware.exe 1024 1024 1024 hw

Matrix sizes use the convention that A: NxK, B: KxM, and C: NxM.
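
For example, multiplying a 1024x512 matrix A by a 512x2048 matrix B yields a 1024x2048 matrix C, i.e., N=1024, K=512, and M=2048 in this convention.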

By default, the build targets the Alveo U250 acceleration board, but this can be configured using the MM_PLATFORM CMake parameter.

The implementation is not restricted to multiplication and addition as operators. To use other operators, for example addition and minimum to implement the distance product, specify them using the MM_MAP_OP and MM_REDUCE_OP CMake parameters, respectively. To see which operators are pre-implemented, and for examples of how to implement new operators, see hlslib/include/hlslib/xilinx/Operators.h.
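
As a point of reference, the sketch below shows what the distance product computes in plain software terms; it is not the FPGA kernel, and the function name is ours. The map operator becomes addition and the reduce operator becomes minimum, which is what MM_MAP_OP and MM_REDUCE_OP would select at configuration time.

#include <algorithm>
#include <limits>
#include <vector>

// Reference semantics of the distance product C = A (min,+) B with
// row-major n x k, k x m, and n x m matrices.
void DistanceProduct(const std::vector<float> &a, const std::vector<float> &b,
                     std::vector<float> &c, int n, int k, int m) {
  for (int i = 0; i < n; ++i) {
    for (int j = 0; j < m; ++j) {
      float acc = std::numeric_limits<float>::infinity();  // identity of min
      for (int l = 0; l < k; ++l) {
        acc = std::min(acc, a[i * k + l] + b[l * m + j]);  // map: add, reduce: min
      }
      c[i * m + j] = acc;
    }
  }
}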

Selecting tile sizes

See our publication at FPGA'20 [1] on how to choose tile sizes for optimal fast memory and compute utilization.

Parallel performance

The amount of parallelism in the code is determined by the MM_PARALLELISM_N and MM_PARALLELISM_M configuration variables. The former determines the number of processing elements instantiated, and the latter regulates the vector width/granularity of each processing element. MM_PARALLELISM_M should be set to a maximum of 32 bytes / sizeof(<your operand>) (i.e., 8 for float or int, 4 for double or long, 16 for 16-bit int, etc.) to avoid performance and routing issues.

The expected performance in Op/s (FLOP/s in the case of floating point types) of a given configuration can be computed as:

2 * MM_PARALLELISM_N * MM_PARALLELISM_M * Frequency
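
For example, the configuration shown earlier (MM_PARALLELISM_N=32, MM_PARALLELISM_M=8, single precision) running at an assumed kernel frequency of 300 MHz would peak at 2 * 32 * 8 * 300e6 = 153.6 GFLOP/s; the achievable frequency depends on the device and on how well the design routes.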

In practice, MM_PARALLELISM_N buffered values of A are applied to MM_PARALLELISM_M values of B.

Bugs

If you experience bugs, or have suggestions for improvements, please use the issue tracker to report them.

Publication

If this code has been useful to your research, please consider citing us:

BibTeX:

@inproceedings{mmm_hls,
  title={Flexible Communication Avoiding Matrix Multiplication on FPGA with High-Level Synthesis},
  author={de~Fine~Licht, Johannes and Kwasniewski, Grzegorz and Hoefler, Torsten},
  booktitle={The 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA'20)},
  year={2020}
}

Plain text:

Johannes de Fine Licht, Grzegorz Kwasniewski, and Torsten Hoefler. "Flexible Communication Avoiding Matrix Multiplication on FPGA with High-Level Synthesis." In Proceedings of the 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA'20).

References

[1] Johannes de Fine Licht, Grzegorz Kwasniewski, and Torsten Hoefler. "Flexible Communication Avoiding Matrix Multiplication on FPGA with High-Level Synthesis." In Proceedings of the 28th ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA'20), 2020.

[2] Johannes de Fine Licht, Maciej Besta, Simon Meierhans, and Torsten Hoefler. "Transformations of High-Level Synthesis Codes for High-Performance Computing." IEEE Transactions on Parallel and Distributed Systems (TPDS), Vol. 32, Issue 5, 2021.

[3] Johannes de Fine Licht and Torsten Hoefler. "hlslib: Software Engineering for Hardware Design." Presented at the Fifth International Workshop on Heterogeneous High-performance Reconfigurable Computing (H2RC'19).

More Repositories

1. graph-of-thoughts: Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models" (Python, 2,096 stars)
2. dace: DaCe - Data Centric Parallel Programming (Python, 491 stars)
3. QuaRot: Code for QuaRot, an end-to-end 4-bit inference of large language models. (Python, 259 stars)
4. pymlir: Python interface for MLIR - the Multi-Level Intermediate Representation (Python, 210 stars)
5. ncc: Neural Code Comprehension: A Learnable Representation of Code Semantics (Python, 206 stars)
6. hls_tutorial_examples: Examples shown as part of the tutorial "Productive parallel programming on FPGA with high-level synthesis". (C++, 189 stars)
7. MRAG: Official Implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" (Python, 161 stars)
8. serverless-benchmarks: SeBS: serverless benchmarking suite for automatic performance analysis of FaaS platforms. (Python, 143 stars)
9. substation: Research and development for optimizing transformers (Python, 121 stars)
10. pspin: PsPIN: A RISC-V in-network accelerator for flexible high-performance low-power packet processing (SystemVerilog, 95 stars)
11. deep-weather: Deep Learning for Post-Processing Ensemble Weather Forecasts (Jupyter Notebook, 86 stars)
12. daceml: A Data-Centric Compiler for Machine Learning (Python, 81 stars)
13. FBLAS: BLAS implementation for Intel FPGA (C++, 75 stars)
14. open-earth-compiler: development repository for the open earth compiler (MLIR, 75 stars)
15. npbench: NPBench - A Benchmarking Suite for High-Performance NumPy (Python, 73 stars)
16. ucudnn: Accelerating DNN Convolutional Layers with Micro-batches (C++, 64 stars)
17. rFaaS: a high-performance FaaS platform with RDMA acceleration for low-latency invocations. (C++, 48 stars)
18. haystack: Haystack is an analytical cache model that given a program computes the number of cache misses. (C++, 42 stars)
19. sparsity-in-deep-learning: Bibtex for Sparsity in Deep Learning paper (https://arxiv.org/abs/2102.00554) - open for pull requests (TeX, 40 stars)
20. mlir-dace: Data-Centric MLIR dialect (C++, 37 stars)
21. redmark: ReDMArk: Bypassing RDMA Security Mechanisms. (C++, 37 stars)
22. apfp: FPGA acceleration of arbitrary precision floating point computations. (C++, 34 stars)
23. NoPFS: Near-optimal Prefetching System (32 stars)
24. sten: Sparsity support for PyTorch (Python, 31 stars)
25. rapidchiplet: A toolchain for rapid design space exploration of chiplet architectures (C++, 27 stars)
26. ens10: Scripts and examples for the ENS-10 Ensemble Prediction System machine learning dataset (Python, 25 stars)
27. gms: GraphMineSuite (GMS): a benchmarking suite for graph mining algorithms such as graph pattern matching or graph learning (C++, 25 stars)
28. sage (Python, 24 stars)
29. liblsb (Rebol, 23 stars)
30. smoe: Spatial Mixture-of-Experts (Python, 19 stars)
31. CoRM: Compactable Remote Memory over RDMA (C++, 19 stars)
32. dace-vscode: Rich editor for SDFGs with included profiling and debugging, static analysis, and interactive optimization. (TypeScript, 18 stars)
33. kafkadirect: RDMA-enabled Apache Kafka (Java, 17 stars)
34. faaskeeper: A fully serverless implementation of the ZooKeeper coordination protocol. (Python, 17 stars)
35. fmi: Function Message Interface (FMI): library for message-passing and collective communication for serverless functions. (C++, 15 stars)
36. SMI: Streaming Message Interface: High-Performance Distributed Memory Programming on Reconfigurable Hardware (C++, 15 stars)
37. stencilflow (Python, 15 stars)
38. naos: Naos: Serialization-free RDMA networking in Java (Java, 15 stars)
39. absinthe: Absinthe is an optimization framework to fuse and tile stencil codes in one shot (Python, 14 stars)
40. NNCompression: Compressing weather and climate data into neural networks (Python, 13 stars)
41. DNN-cpp-proxies: C++/MPI proxies for distributed training of deep neural networks. (C++, 13 stars)
42. arrow-matrix: Arrow Matrix Decomposition - Communication-Efficient Distributed Sparse Matrix Multiplication (Python, 13 stars)
43. CheckEmbed: Official Implementation of "CheckEmbed: Effective Verification of LLM Solutions to Open-Ended Tasks" (Python, 12 stars)
44. .github (10 stars)
45. LogGOPSim: A LogGOPS (LogP, LogGP, LogGPS) Simulator and Simulation Framework (C, 10 stars)
46. vldb19-distributed-locking: This repository hosts the code used for the following paper: Claude Barthels, Ingo Müller, Konstantin Taranov, Torsten Hoefler, Gustavo Alonso. "Strong consistency is not hard to get: Two-Phase Locking and Two-Phase Commit on Thousands of Cores." In: PVLDB, 2020. (C++, 10 stars)
47. SimFS: A Virtualizing Simulation Data File System Interface (C++, 8 stars)
48. CLaMPI: Caching Layer for MPI (C, 8 stars)
49. FBACode (Python, 8 stars)
50. nbody_hls: Implementation of the N^2-formulation of N-body simulation with Vivado HLS for SDAccel platforms. (C++, 8 stars)
51. GDI-RMA: Official Implementation of "The Graph Database Interface: Scaling Online Transactional and Analytical Graph Workloads to Hundreds of Thousands of Cores" (C, 8 stars)
52. DiffDA (Python, 7 stars)
53. stencil_hls: Implementation of time and space-tiled stencil in Vivado HLS. (C++, 7 stars)
54. open-earth-benchmarks: Open repository for climate and weather benchmark kernels (C++, 7 stars)
55. cppless (C++, 6 stars)
56. polybench-comparator: Regression and comparison tools for the Polybench benchmark (Shell, 6 stars)
57. nevermore: The source code for the Nevermore paper at ACM CCS'22 (C++, 6 stars)
58. foMPI-NA (C, 6 stars)
59. perf-taint: Taint-based program analysis framework for empirical performance modeling. (LLVM, 5 stars)
60. streamingsched: Streaming Task Scheduling (Python, 5 stars)
61. faaskeeper-python: Python client library for FaaSKeeper, the serverless ZooKeeeper. (Python, 5 stars)
62. muliticast-based-allgather (C, 4 stars)
63. smat: Code for High Performance Unstructured SpMM Computation Using Tensor Cores (Emacs Lisp, 4 stars)
64. libNBC (Shell, 3 stars)
65. climetlab-maelstrom-ens10: MAELSTROM ENS10 dataset plugin for CliMetLab (Jupyter Notebook, 3 stars)
66. dace-webclient: Web-based SDFG viewer for DaCe (JavaScript, 3 stars)
67. spatial-collectives: Optimized communication collectives for the Cerebras waferscale engine (Python, 3 stars)
68. libhear (C++, 3 stars)
69. TCPunch (C++, 3 stars)
70. LGSxNS3 (Python, 2 stars)
71. cppless-clang (2 stars)
72. c2dace (C, 2 stars)
73. probgraph (Emacs Lisp, 2 stars)
74. LogGOPSim2 (C++, 2 stars)
75. fflib (C, 2 stars)
76. serverless-benchmarks-data (TeX, 2 stars)
77. rivets (C, 2 stars)
78. conflux (C++, 1 star)
79. fuzzyflow-artifact: Computational artifacts for the FuzzyFlow publication (Shell, 1 star)
80. SAILOR (Python, 1 star)
81. praas-benchmarks (Jupyter Notebook, 1 star)
82. HTSIM-old (C++, 1 star)
83. faas-profiler (Python, 1 star)
84. UPM: User-guided Page Merging: Memory Deduplication for Serverless (C, 1 star)
85. f2dace-artifact (Fortran, 1 star)