Awesome CUDA
This is a list of useful libraries and resources for CUDA development.
Presentations
- Optimizing Parallel Reduction in CUDA - This presentation shows how a fast, yet relatively simple, parallel reduction algorithm can be implemented (a minimal sketch of the reduction pattern appears after this list).
- CUDA C/C++ BASICS - This presentation explains the concepts of CUDA kernels, memory management, threads, thread blocks, shared memory, and thread synchronization. A simple addition kernel and an optimized 1D stencil kernel are shown.
- Advanced CUDA - Optimizing to Get 20x Performance - This presentation covers: Tesla 10-Series Architecture, Particle Simulation Example, Host to Device Memory Transfer, Asynchronous Data Transfers, OpenGL Interoperability, Shared Memory, Coalesced Memory Access, Bank Conflicts, SIMT, Page-locked Memory, Registers, Arithmetic Intensity, Finite Differences Example, Texture Memory.
- Advanced CUDA Webinar - Memory Optimizations - This presentation covers: Asynchronous Data Transfers, Context Based Synchronization, Stream Based Synchronization, Events, Zero Copy, Memory Bandwidth, Coalescing, Shared Memory, Bank Conflicts, Matrix Transpose Example, Textures.
- Better Performance at Lower Occupancy - An excellent presentation showing that better performance can be achieved by assigning more parallel work to each thread and by using instruction-level parallelism. Covered topics are: Arithmetic Latency, Arithmetic Throughput, Little's Law, Thread-Level Parallelism (TLP), Instruction-Level Parallelism (ILP), Matrix Multiplication Example.
- Fun With Parallel Algorithms. Segmented Scan. Neutral territory method - These slides show how a segmented scan can be implemented as a simple variation of an ordinary scan.
- GPU/CPU Programming for Engineers - Lecture 13 - This lecture provides a good walkthrough of the different memory types: Global Memory, Texture Memory, Constant Memory, Shared Memory, Registers, and Local Memory.
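To make the reduction material above concrete, here is a minimal sketch of the shared-memory tree reduction pattern discussed in the first presentation. The kernel name, launch configuration, and final combination step are illustrative assumptions, not code taken from the slides, which cover several progressively faster variants.

```cuda
// Minimal sketch of a shared-memory tree reduction (illustrative only).
__global__ void reduceSum(const float *in, float *out, int n)
{
    extern __shared__ float sdata[];

    unsigned int tid = threadIdx.x;
    unsigned int i   = blockIdx.x * blockDim.x + threadIdx.x;

    // Each thread loads one element into shared memory (0 if out of range).
    sdata[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Tree reduction with sequential addressing to avoid bank conflicts.
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    // Thread 0 writes this block's partial sum; a second pass (or atomicAdd)
    // combines the per-block results.
    if (tid == 0)
        out[blockIdx.x] = sdata[0];
}
```

With a power-of-two block size, this could be launched as, for example, `reduceSum<<<numBlocks, blockSize, blockSize * sizeof(float)>>>(d_in, d_partialSums, n);`, followed by a second pass over the per-block partial sums.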
Libraries
- Thrust - A parallel algorithms library whose main goals are programmer productivity and rapid development. If your main goal is the best possible performance, you are advised to use a lower-level library such as CUDPP or chag::pp instead (a short Thrust example is shown after this list).
- Hemi - A small utility library that lets you write code that can run on either the CPU or the GPU, and lets you launch C++ lambda functions as CUDA kernels. Its main goal is to make it easier to write portable CUDA programs.
- CUDPP - A library that provides 15 parallel primitives. In contrast to Thrust, CUDPP is a more performance-oriented and much lower-level library. Recommended if performance is more important than programmer productivity.
- Parallel Primitives Library: chag::pp - This library provides the parallel primitives Reduction, Prefix Sum, Stream Compaction, Split, and Radix Sort. The authors have demonstrated that their implementations of Stream Compaction and Prefix Sum are the fastest ones available.
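As a taste of the productivity-oriented style mentioned for Thrust, here is a small, self-contained sketch; the array size and contents are arbitrary and chosen only for illustration.

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdio>

int main()
{
    // Fill a host vector and copy it to the device.
    thrust::host_vector<int> h(1 << 20);
    for (size_t i = 0; i < h.size(); ++i)
        h[i] = static_cast<int>(i % 100);
    thrust::device_vector<int> d = h;

    // One-line parallel primitives: sort and reduce both run on the GPU.
    thrust::sort(d.begin(), d.end());
    int sum = thrust::reduce(d.begin(), d.end(), 0);

    std::printf("sum = %d\n", sum);
    return 0;
}
```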
Papers
- Multireduce and Multiscan on Modern GPUs - This master's thesis examines how efficient Multireduce and Multiscan operations can be implemented on the GPU.
- Efficient Parallel Scan Algorithms for Many-core GPUs - This paper shows how the scan and segmented scan algorithms can be efficiently implemented using a divide-and-conquer approach (a much-simplified illustration of the scan primitive is sketched after this list).
- Ana Balevic's homepage - Ana Balevic has done research on implementing compression algorithms on the GPU, and her publications describe fast GPU implementations of RLE, VLE (Huffman coding), and arithmetic coding.
- Run-length Encoding on Graphics Hardware - Shows another approach to implementing RLE on the GPU. In contrast to Ana Balevic's fine-grained parallelization approach, this paper splits the data into blocks and assigns each thread a block on which it performs RLE.
- Efficient Stream Compaction on Wide SIMD Many-Core Architectures - The paper that the chag::pp library is based on.
- Histogram calculation in CUDA - This article explains how a histogram can be calculated in CUDA.
- Modern GPU - Modern GPU is a text that describes algorithms and strategies for writing fast CUDA code, and it also provides a library in which all of the explained concepts are implemented.
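As a baseline for the scan papers above, the sketch below is a deliberately naive, work-inefficient (Hillis-Steele) inclusive scan limited to a single block. None of this code is taken from the papers, which describe far more efficient multi-block and segmented variants; it only illustrates what the scan primitive computes.

```cuda
// Naive single-block inclusive scan (Hillis-Steele), illustrative only.
// Assumes n <= blockDim.x and blockDim.x * sizeof(int) bytes of shared memory.
__global__ void inclusiveScanBlock(const int *in, int *out, int n)
{
    extern __shared__ int temp[];
    int tid = threadIdx.x;

    temp[tid] = (tid < n) ? in[tid] : 0;
    __syncthreads();

    // Each pass adds the value 'offset' positions to the left.
    for (int offset = 1; offset < blockDim.x; offset <<= 1) {
        int val = (tid >= offset) ? temp[tid - offset] : 0;
        __syncthreads();   // all reads finish before any write
        temp[tid] += val;
        __syncthreads();   // all writes finish before the next pass reads
    }

    if (tid < n)
        out[tid] = temp[tid];
}
```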
Articles
- GPU Pro Tip: Fast Histograms Using Shared Atomics on Maxwell - This article shows how shared memory atomics can be used to implement an even faster histogram calculation on Maxwell (a rough sketch of the idea is shown after this list).
- Faster Parallel Reductions on Kepler - This article shows how the reduction algorithm described by Mark Harris can be made faster on Kepler.
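A rough sketch of the shared-atomics histogram idea is shown below. The bin count, kernel name, and launch configuration are assumptions made for the example; the article itself covers further Maxwell-specific tuning that is omitted here.

```cuda
// Per-block shared-memory histogram with a final flush to global memory
// (illustrative sketch, assuming 256 byte-valued bins).
#define NUM_BINS 256

__global__ void histogramShared(const unsigned char *data, int n,
                                unsigned int *globalHist)
{
    __shared__ unsigned int localHist[NUM_BINS];

    // Zero the block-local histogram.
    for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
        localHist[i] = 0;
    __syncthreads();

    // Accumulate with fast shared-memory atomics; contention stays within
    // the block instead of hitting global memory for every element.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        atomicAdd(&localHist[data[i]], 1u);
    __syncthreads();

    // Merge this block's partial histogram into the global one.
    for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
        atomicAdd(&globalHist[i], localHist[i]);
}
```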
Videos
- Intro to Parallel Programming CUDA - Udacity - A Udacity course for learning CUDA.
Contributing
This list is still under construction and is far from complete. Anyone who wants to add links to the list is very welcome to do so via a pull request!