

mpiBench

Times MPI collectives over a series of message sizes

What is mpiBench?

mpiBench.c

This program measures MPI collective performance for a range of message sizes. The user may specify:

  • the collective to perform,
  • the message size limits,
  • the number of iterations to perform,
  • the maximum memory a process may allocate for MPI buffers,
  • the maximum time permitted for a given test,
  • and the number of Cartesian dimensions to divide processes into.

By default, mpiBench runs all supported collectives on MPI_COMM_WORLD with message sizes from 0 to 256K bytes and a 1G per-process buffer limit. Each test executes as many iterations as will fit within a default time limit of 50000 usecs.
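
For reference, the default run described in this paragraph is roughly equivalent to spelling out those settings with the flags documented under Usage Syntax below (the <procs> placeholder is whatever process count you want to test):

srun -n <procs> ./mpiBench -b 0 -e 256K -m 1G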

crunch_mpiBench

This is a Perl script that filters data and generates reports from mpiBench output files. It can merge data from multiple mpiBench output files into a single report, and it can restrict the output to a subset of collectives. By default, it reports the operation duration (i.e., how long the collective took to complete). For some collectives it can also report the effective bandwidth, and if given two datasets it computes a speedup factor.
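
For example, a before/after speedup comparison can be driven with the -data flag shown in the Examples section below; the dataset labels and directory names here are only placeholders:

crunch_mpiBench -data BASELINE baseline/*.txt -data TUNED tuned/*.txt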

What is measured

mpiBench measures the total time required to iterate through a loop of back-to-back invocations of the same collective (optionally separated by a barrier), and divides by the number of iterations. In other words the timing kernel looks like the following:

time_start = timer();
for (i = 0; i < iterations; i++) {
  collective(msg_size);
  barrier();
}
time_end = timer();
time = (time_end - time_start) / iterations;

Each participating MPI process performs this measurement, and all processes report their times. The average, minimum, and maximum across this set of times are reported.
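
As a rough illustration of that reporting step (a sketch only, not mpiBench's actual code), the per-process times could be combined with standard MPI reductions:

/* Sketch only: combine each rank's measured time into the reported
 * average, minimum, and maximum. Not taken from mpiBench itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank, nranks;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  /* Placeholder for the per-rank time produced by the kernel above. */
  double t = 1.0e-6 * (rank + 1);

  double t_min, t_max, t_sum;
  MPI_Reduce(&t, &t_min, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
  MPI_Reduce(&t, &t_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
  MPI_Reduce(&t, &t_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

  if (rank == 0)
    printf("avg %g  min %g  max %g (seconds)\n", t_sum / nranks, t_min, t_max);

  MPI_Finalize();
  return 0;
}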

Before the timing kernel is started, the collective is invoked once to prime it, since the initial call may be subject to overhead that later calls are not. Then the collective is timed across a small set of iterations (~5) to get a rough estimate of the time required for a single invocation. If the user specifies a time limit using the -t option, this estimate is used to reduce the number of iterations made in the timing kernel loop, as necessary, so that it can execute within the time limit.
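
A minimal sketch of that priming and iteration-capping logic, assuming MPI_Allreduce as the collective under test (illustrative only, not mpiBench's implementation):

/* Sketch only: prime the collective once, estimate its cost over a few
 * iterations, and shrink the iteration count to fit a time limit.
 * A real implementation would also agree on the final count across
 * ranks (e.g., with an MPI_Allreduce over the per-rank estimates). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int iterations = 1000;                /* documented default maximum */
  double time_limit_usecs = 50000.0;    /* e.g., a value passed via -t */
  double in = 1.0, out = 0.0;

  /* Prime: the first call may pay setup costs that later calls do not. */
  MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

  /* Rough estimate over a small sample of iterations (~5). */
  const int sample = 5;
  double t0 = MPI_Wtime();
  for (int i = 0; i < sample; i++)
    MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
  double est_usecs = (MPI_Wtime() - t0) / sample * 1.0e6;

  /* Reduce the iteration count, as necessary, to fit the time limit. */
  if (est_usecs > 0.0 && est_usecs * iterations > time_limit_usecs) {
    iterations = (int)(time_limit_usecs / est_usecs);
    if (iterations < 1) iterations = 1;
  }

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0)
    printf("estimated %.2f usecs per call, running %d iterations\n",
           est_usecs, iterations);

  MPI_Finalize();
  return 0;
}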

Basic Usage

Build:

make

Run:

srun -n <procs> ./mpiBench > output.txt

Analyze:

crunch_mpiBench output.txt

Build Instructions

There are several make targets available:

  • make -- simple build
  • make nobar -- build without barriers between consecutive collective invocations
  • make debug -- build with "-g -O0" for debugging purposes
  • make clean -- clean the build

If you'd like to build manually without the makefiles, there are two compile-time options you should be aware of (an example build line follows this list):

  -D NO_BARRIER       - drop the barrier between consecutive collective invocations
  -D USE_GETTIMEOFDAY - use gettimeofday() instead of MPI_Wtime() for timing info
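
For instance, a hand-rolled build with the mpicc compiler wrapper might look like the following; add either define only if you want its behavior, and treat the -O2 level as a suggestion:

mpicc -O2 -o mpiBench mpiBench.c
mpicc -O2 -D NO_BARRIER -D USE_GETTIMEOFDAY -o mpiBench mpiBench.c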

Usage Syntax

Usage:  mpiBench [options] [operations]

Options:
  -b <byte>  Beginning message size in bytes (default 0)
  -e <byte>  Ending message size in bytes (default 1K)
  -i <itrs>  Maximum number of iterations for a single test
             (default 1000)
  -m <byte>  Process memory buffer limit (send+recv) in bytes
             (default 1G)
  -t <usec>  Time limit for any single test in microseconds
             (default 0 = infinity)
  -d <ndim>  Number of dimensions to split processes in
             (default 0 = MPI_COMM_WORLD only)
  -c         Check receive buffer for expected data in last
             iteration (default disabled)
  -C         Check receive buffer for expected data every
             iteration (default disabled)
  -h         Print this help screen and exit
  where <byte> = [0-9]+[KMG], e.g., 32K or 64M

Operations:
  Barrier
  Bcast
  Alltoall, Alltoallv
  Allgather, Allgatherv
  Gather, Gatherv
  Scatter
  Allreduce
  Reduce

Examples

mpiBench

Run the default set of tests:

srun -n2 -ppdebug mpiBench

Run the default message size range and iteration count for Alltoall, Allreduce, and Barrier:

srun -n2 -ppdebug mpiBench Alltoall Allreduce Barrier

Run from 32-256 bytes and time across 100 iterations of Alltoall:

srun -n2 -ppdebug mpiBench -b 32 -e 256 -i 100 Alltoall

Run from 0-2K bytes and default iteration count for Gather, but reduce the iteration count, as necessary, so each message size test finishes within 100,000 usecs:

srun -n2 -ppdebug mpiBench -e 2K -t 100000 Gather

crunch_mpiBench

Show data for just Alltoall:

crunch_mpiBench -op Alltoall out.txt

Merge data from several files into a single report:

crunch_mpiBench out1.txt out2.txt out3.txt

Display effective bandwidth for Allgather and Alltoall:

crunch_mpiBench -bw -op Allgather,Alltoall out.txt

Compare times in output files in dir1 with those in dir2:

crunch_mpiBench -data DIR1_DATA dir1/* -data DIR2_DATA dir2/*

Additional Notes

Rank 0 always acts as the root process for collectives which involve a root.

If the minimum and maximum are quite different, some processes may be running ahead, starting later iterations before the last one has completely finished. In this case, one may use the maximum time reported, or insert a barrier between consecutive invocations (build with "make" instead of "make nobar") to synchronize the processes.

For Reduce and Allreduce, vectors of doubles are added, so message sizes of 1, 2, and 4 bytes are skipped.
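
The arithmetic behind that is simply element sizing: a message smaller than sizeof(double) cannot hold a single element to reduce. A tiny illustration (not mpiBench code):

/* Illustration only: how many doubles fit in small message sizes. */
#include <stdio.h>

int main(void)
{
  for (size_t bytes = 1; bytes <= 8; bytes *= 2)
    printf("%zu-byte message -> %zu doubles\n", bytes, bytes / sizeof(double));
  return 0;
}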

Two available make commands build mpiBench with test kernels like the following:

   "make"              "make nobar"
start=timer()        start=timer()
for(i=0;i<N;i++)     for(i=0;i<N;i++)
{                    {
  MPI_Gather()         MPI_Gather()
  MPI_Barrier()
}                    }
end=timer()          end=timer()
time=(end-start)/N   time=(end-start)/N

"make nobar" may allow processes to escape ahead, but does not include cost of barrier.
