• This repository has been archived on 01/Aug/2024

Repository Details

Modular C++ Toolkit for Performance Analysis and Logging. Profiling API and Tools for C, C++, CUDA, Fortran, and Python. The C++ template API is essentially a framework for creating tools: it is designed to provide a unifying interface for recording various performance measurements alongside data logging and interfaces to other tools.

timemory

Timing + Memory + Hardware Counter Utilities for C / C++ / CUDA / Python


timemory on GitHub (Source code)

timemory General Documentation (ReadTheDocs)

timemory Source Code Documentation (Doxygen)

timemory Testing Dashboard (CDash)

timemory Tutorials

timemory Wiki

GitHub git clone https://github.com/NERSC/timemory.git
PyPI pip install timemory
Spack spack install timemory
conda-forge conda install -c conda-forge timemory

Purpose

The goal of timemory is to create an open-source performance measurement and analysis package with modular, reusable components that can adapt to any existing C/C++ performance measurement and analysis API and is arbitrarily extendable by users within their applications. Timemory is not just another profiling tool; it is a profiling toolkit that streamlines building custom profiling tools through modularity and then uses that toolkit to provide several pre-built tools.

In other words, timemory provides many pre-built tools, libraries, and interfaces but, due to its modularity, codes can re-use only individual pieces -- such as the classes for measuring different timing intervals, memory usage, and hardware counters -- without the timemory "runtime management".

Building and Installing

Timemory uses a standard CMake installation. Several installation examples can be found in the Wiki. See the installation documentation for detailed information on the CMake options.

Documentation

The full documentation is available at timemory.readthedocs.io. Detailed source documentation is provided in the Doxygen section of the full documentation. Tutorials are available at github.com/NERSC/timemory-tutorials.

Overview

The primary objective of timemory is the development of a common framework for binding together software monitoring code (i.e. performance analysis, debugging, logging) into a compact and highly-efficient interface.

Timemory arose out of the need for a universal adapter kit for the various APIs provided by several existing tools and for a straightforward and intuitive method for creating new tools. Timemory makes it possible to bundle together deterministic performance measurements, statistical performance measurements (i.e. sampling), debug messages, data logging, and data validation into the same interface for custom application-specific software monitoring, making it easy to build tools like time, netstat, instrumentation profilers, and sampling profilers, and to write implementations for MPI-P, MPI-T, OMPT, KokkosP, etc. Furthermore, timemory can forward its markers to several third-party profilers such as LIKWID, Caliper, TAU, gperftools, Perfetto, VTune, Allinea-MAP, CrayPAT, Nsight-Systems, Nsight-Compute, and NVProf.

Timemory provides a front-end C/C++/Fortran API and Python API that allow arbitrary selection of 50+ different components, from timers to hardware counters to interfaces with third-party tools. This is all built generically from the toolkit API with type-safe bundles of tools such as: component_tuple<wall_clock, papi_vector, nvtx_marker, user_bundle> where wall_clock is a wall-clock timer, papi_vector is a handle for hardware counters, nvtx_marker creates notations in the NVIDIA CUDA profilers, and user_bundle is a generic component into which downstream users can insert more components at runtime.
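To illustrate the type-safe bundling idea, here is a minimal, self-contained sketch of a variadic bundle in the spirit of component_tuple. The names component_bundle, wall_clock, and cpu_clock below are illustrative stand-ins, not timemory's actual implementation:

```cpp
#include <tuple>
#include <utility>

// Hypothetical stand-ins for components: each exposes start()/stop().
struct wall_clock { bool running = false; void start() { running = true; } void stop() { running = false; } };
struct cpu_clock  { bool running = false; void start() { running = true; } void stop() { running = false; } };

// A minimal type-safe bundle in the spirit of tim::component_tuple:
// the component types are fixed at compile time, and member functions
// fan out to every bundled component via a C++17 fold expression.
template <typename... Components>
class component_bundle
{
public:
    void start() { std::apply([](auto&... c) { (c.start(), ...); }, m_data); }
    void stop()  { std::apply([](auto&... c) { (c.stop(), ...); },  m_data); }

    // access an individual component by type
    template <typename T>
    T& get() { return std::get<T>(m_data); }

private:
    std::tuple<Components...> m_data{};
};
```

Because the component list is a template parameter pack, the compiler generates exactly the fan-out code for the chosen components and nothing more, which is how the "you don't pay for what you don't use" property falls out of the design.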

Performance measurement components written with timemory are arbitrarily scalable up to any number of threads and processes and fully support intermixing different measurements at different locations within the program -- this uniquely enables timemory to be deployed to collect performance data at scale in HPC, because highly detailed collection can occur at specific locations within the program where ubiquitous collection would simultaneously degrade performance significantly and require a prohibitive amount of memory.

Timemory can be used as a backend to bundle instrumentation and sampling tools together, support serialization to JSON/XML, and provide statistics, among other uses. It can also be utilized as a front-end to invoke custom instrumentation and sampling tools. Timemory uses the abstract term "component" for a structure which encapsulates some performance analysis operation. The structure might encapsulate function calls to another tool, record timestamps for timing, log values provided by the application, provide an operator for replacing a function in the code dynamically, audit the incoming arguments and/or outgoing return value of a function, or just provide stubs which can be overloaded by the linker.
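A component in this sense can be as small as a single struct with start()/stop() semantics. The following self-contained sketch mimics the shape of a wall-clock component; the name my_wall_clock and its members are illustrative, not timemory's API:

```cpp
#include <chrono>

// A hypothetical component: a small struct encapsulating one measurement.
struct my_wall_clock
{
    using clock_t = std::chrono::steady_clock;

    void start() { m_begin = clock_t::now(); }
    void stop()  { m_elapsed += clock_t::now() - m_begin; }

    // report accumulated elapsed time in seconds
    double get() const
    {
        return std::chrono::duration<double>(m_elapsed).count();
    }

private:
    clock_t::time_point m_begin{};
    clock_t::duration   m_elapsed{};
};
```

Because the measurement state lives entirely in the instance, such a struct can be dropped into other tools or libraries without pulling in any runtime management.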

Visualization and Analysis

The native output formats of timemory are JSON and text; other output formats such as XML are also supported. The text format is intended to be human-readable. The JSON data is intended for analysis and comes in two flavors: hierarchical and flat. Basic plotting capabilities are available via timemory-plotting, but users are highly encouraged to use hatchet for analyzing the hierarchical JSON data in pandas dataframes. Hatchet supports filtering, unions, addition, subtraction, output to dot and flamegraph formats, and an interactive Jupyter notebook. At present, timemory supports 45+ metric types for analysis in Hatchet.

Categories

There are 4 primary categories in timemory: components, operations, bundlers, and storage. Components provide the specifics of how to perform a particular behavior, operations provide the scaffold for requesting that a component perform an operation in complex scenarios, bundlers group components into a single generic handle, and storage manages data collection over the lifetime of the application. When all four categories are combined, timemory effectively resembles a standard performance analysis tool which passively collects data and provides reports and analysis at the termination of the application. Timemory, however, makes it very easy to subtract storage from the equation and, in doing so, transforms timemory into a toolkit for customized data collection.

  1. Components
    • Individual classes which encapsulate one or more measurement, analysis, logging, or third-party library action(s)
    • Any data specific to one instance of performing the action is stored within the instance of the class
    • Any configuration data specific to that type is typically stored within static member functions which return a reference to the configuration data
    • These classes are designed to support direct usage within other tools, libraries, etc.
    • Examples include:
      • tim::component::wall_clock : a simple wall-clock timer
      • tim::component::vtune_profiler : a simple component which turns the VTune Profiler on and off (when VTune is actively profiling the application)
      • tim::component::data_tracker_integer : associates integer values with a label as the application executes (e.g. number of loop iterations used somewhere)
      • tim::component::papi_vector : uses the PAPI library to collect hardware-counter values
      • tim::component::user_bundle : encapsulates an array of components which the user can dynamically manipulate during runtime
  2. Operations
    • Templated classes whose primary purpose is to provide the implementation for performing some action on a component, e.g. tim::operation::start<wall_clock> will attempt to call the start() member function on a wall_clock component instance
    • Default implementations generally have one or two public functions: a constructor and/or a function call operator
      • These generally accept any/all arguments and use SFINAE to determine whether the operation can be performed with or without the given arguments (i.e. does wall_clock have a store(int) function? store()?)
    • Operations are (generally) not directly utilized by the user and are typically optimized out of the binary
    • Examples include:
      • tim::operation::start : instruct a component to start collection
      • tim::operation::sample : instruct a component to take an individual measurement
      • tim::operation::derive : extract data from other components if it is available
  3. Bundlers
    • Provide a generic handle for multiple components
    • Member functions generally accept any/all arguments and use operation classes to correctly handle differences between the capabilities of the components being bundled
    • Examples include:
      • tim::auto_tuple
      • tim::component_tuple
      • tim::component_list
      • tim::lightweight_tuple
    • Various flavors provide different implicit behaviors and allocate memory differently
      • auto_tuple starts all components when constructed and stops all components when destructed whereas component_tuple requires an explicit start
      • component_tuple allocates all components on the stack and components are "always on" whereas component_list allocates components on the heap and thus components can be activated/deactivated at runtime
      • lightweight_tuple does not implicitly perform any expensive actions, such as call-stack tracking in "Storage"
  4. Storage
    • Provides persistent storage for multiple instances of components over the lifetime of a thread in the application
    • Responsible for maintaining the hierarchy and order of component measurements, i.e. call-stack tracking
    • Responsible for combining component data from multiple threads and/or processes and outputting the results

NOTE: tim::lightweight_tuple is the recommended bundle for those seeking to use timemory as a toolkit for implementing custom tools and interfaces
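The SFINAE dispatch described under "Operations" above can be sketched in a few lines. This is a self-contained illustration of the pattern (detect whether a component has a given member function and compile the call away otherwise), not timemory's actual operation classes; op_start, has_start, and no_start are hypothetical names:

```cpp
#include <type_traits>
#include <utility>

// Detection trait: does T have a callable start() member?
template <typename T, typename = void>
struct has_start_member : std::false_type {};

template <typename T>
struct has_start_member<T, std::void_t<decltype(std::declval<T&>().start())>>
    : std::true_type {};

// Sketch of the tim::operation pattern: a templated functor that calls
// start() on a component only if that member function exists, and is a
// compile-time no-op otherwise.
template <typename T>
struct op_start
{
    void operator()(T& obj)
    {
        if constexpr (has_start_member<T>::value)
            obj.start();
        // otherwise: nothing is emitted, mirroring how operation
        // classes are typically optimized out of the binary
    }
};

// Two illustrative components: one supports start(), one does not.
struct has_start { int count = 0; void start() { ++count; } };
struct no_start  { int count = 0; };
```

A bundler can then apply op_start<T> to every component it holds without knowing which components actually implement start(); unsupported calls simply vanish at compile time.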

Features

  • C++ Template API
    • Modular and fully-customizable
    • Adheres to C++ standard template library paradigm of "you don't pay for what you don't use"
    • Simplifies and facilitates creation and implementation of performance measurement tools
      • Create your own instrumentation profiler
      • Create your own instrumentation library
      • Create your own sampling profiler
      • Create your own sampling library
      • Create your own execution wrappers
      • Supplement timemory-provided tools with your own custom component(s)
      • Thread-safe data aggregation
      • Aggregate collection over multiple processes (MPI and UPC++ support)
      • Serialization to text, JSON, XML
    • Components are composable with other components
    • Variadic component bundlers which maintain complete type-safety
      • Components can be bundled together into a single handle without abstractions
    • Components can store data in any valid C++ data type
    • Components can return data in any valid C++ data type
  • C / C++ / CUDA / Fortran Library API
    • Straightforward collection of functions and macros for adding built-in performance analysis to your code
    • Component collection can be arbitrarily inter-mixed
      • E.g. collect "A" and "B" in one region, "A" and "C" in another region
    • Component collection can be dynamically manipulated at runtime
      • E.g. add/remove "A" at any point, on any thread, on any process
  • Python API
    • Decorators and context-managers for functions or regions in code
    • Python function profiling
    • Python line-by-line profiling
    • Every component listed by timemory-avail is provided as a stand-alone Python class
      • Provide low-overhead measurements for building your own Python profiling tools
  • Python Analysis via pandas
  • Command-line Tools
    • timemory-avail
      • Provides available components, settings, and hardware counters
      • Quick API reference tool
    • timem (UNIX)
      • Extended version of UNIX time command-line tool that includes additional information on memory usage, context switches, and hardware counters
      • Support collecting hardware counters (Linux-only, requires PAPI)
    • timemory-run (Linux)
      • Dynamic instrumentation profiling tool
      • Supports runtime instrumentation and binary re-writing
    • timemory-nvml
      • Data collection similar to nvidia-smi
    • timemory-python-profiler
      • Python function profiler supporting all timemory components
      • from timemory.profiler import Profile
    • timemory-python-trace
      • Python line-by-line profiler supporting all timemory components
      • from timemory.trace import Trace
    • timemory-python-line-profiler
      • Python line-by-line profiler based on line-profiler package
      • Extended to use components: cpu-clock, memory-usage, context-switches, etc. (all components which collect scalar values)
      • from timemory.line_profiler import LineProfiler
  • Instrumentation Libraries

Samples

Various macros are defined for C in source/timemory/compat/timemory_c.h and source/timemory/variadic/macros.hpp. Numerous samples of their usage can be found in the examples.

Sample C++ Template API

#include "timemory/timemory.hpp"

namespace comp = tim::component;
using namespace tim;

// specific set of components
using specific_t = component_tuple<comp::wall_clock, comp::cpu_clock>;
using generic_t  = component_tuple<comp::user_global_bundle>;

int
main(int argc, char** argv)
{
    // configure default settings
    settings::flat_profile() = true;
    settings::timing_units() = "msec";

    // initialize with cmd-line
    timemory_init(argc, argv);
    
    // add argparse support
    timemory_argparse(&argc, &argv);

    // create a region "main"
    specific_t m{ "main" };
    m.start();
    m.stop();

    // pause and resume collection globally
    settings::enabled() = false;
    specific_t h{ "hidden" };
    h.start().stop();
    settings::enabled() = true;

    // Add peak_rss component to specific_t
    mpl::push_back_t<specific_t, comp::peak_rss> wprss{ "with peak_rss" };
    
    // create region collecting only peak_rss
    component_tuple<comp::peak_rss> oprss{ "only peak_rss" };

    // convert component_tuple to a type that starts/stops upon construction/destruction
    {
        scope::config _scope{};
        if(true)  _scope += scope::flat{};
        if(false) _scope += scope::timeline{};
        convert_t<specific_t, auto_tuple<>> scoped{ "scoped start/stop + flat", _scope };
        // will yield auto_tuple<comp::wall_clock, comp::cpu_clock>
    }

    // configure the generic bundle via set of strings
    runtime::configure<comp::user_global_bundle>({ "wall_clock", "peak_rss" });
    // configure the generic bundle via set of enumeration ids
    runtime::configure<comp::user_global_bundle>({ TIMEMORY_WALL_CLOCK, TIMEMORY_CPU_CLOCK });
    // configure the generic bundle via component instances
    comp::user_global_bundle::configure<comp::page_rss, comp::papi_vector>();
    
    generic_t g{ "generic", quirk::config<quirk::auto_start>{} };
    g.stop();

    // Output the results
    timemory_finalize();
    return 0;
}

Sample C / C++ Library API

#include "timemory/library.h"
#include "timemory/timemory.h"

int
main(int argc, char** argv)
{
    // configure settings
    int overwrite       = 0;
    int update_settings = 1;
    // default to flat-profile
    timemory_set_environ("TIMEMORY_FLAT_PROFILE", "ON", overwrite, update_settings);
    // force timing units
    overwrite = 1;
    timemory_set_environ("TIMEMORY_TIMING_UNITS", "msec", overwrite, update_settings);

    // initialize with cmd-line
    timemory_init_library(argc, argv);

    // check if inited, init with name
    if(!timemory_library_is_initialized())
        timemory_named_init_library("ex-c");

    // define the default set of components
    timemory_set_default("wall_clock, cpu_clock");

    // create a region "main"
    timemory_push_region("main");
    timemory_pop_region("main");

    // pause and resume collection globally
    timemory_pause();
    timemory_push_region("hidden");
    timemory_pop_region("hidden");
    timemory_resume();

    // Add/remove component(s) to the current set of components
    timemory_add_components("peak_rss");
    timemory_remove_components("peak_rss");

    // get an identifier for a region and end it
    uint64_t idx = timemory_get_begin_record("indexed");
    timemory_end_record(idx);

    // assign an existing identifier for a region
    timemory_begin_record("indexed/2", &idx);
    timemory_end_record(idx);

    // create region collecting a specific set of data
    timemory_begin_record_enum("enum", &idx, TIMEMORY_PEAK_RSS, TIMEMORY_COMPONENTS_END);
    timemory_end_record(idx);

    timemory_begin_record_types("types", &idx, "peak_rss");
    timemory_end_record(idx);

    // replace current set of components and then restore previous set
    timemory_push_components("page_rss");
    timemory_pop_components();

    timemory_push_components_enum(2, TIMEMORY_WALL_CLOCK, TIMEMORY_CPU_CLOCK);
    timemory_pop_components();

    // Output the results
    timemory_finalize_library();
    return 0;
}

Sample Fortran API

program fortran_example
    use timemory
    use iso_c_binding, only : C_INT64_T
    implicit none
    integer(C_INT64_T) :: idx

    ! initialize with explicit name
    call timemory_init_library("ex-fortran")

    ! initialize with name extracted from get_command_argument(0, ...)
    ! call timemory_init_library("")

    ! define the default set of components
    call timemory_set_default("wall_clock, cpu_clock")

    ! Start region "main"
    call timemory_push_region("main")

    ! Add peak_rss to the current set of components
    call timemory_add_components("peak_rss")

    ! Nested region "inner" nested under "main"
    call timemory_push_region("inner")

    ! End the "inner" region
    call timemory_pop_region("inner")

    ! remove peak_rss
    call timemory_remove_components("peak_rss")

    ! begin a region and get an identifier
    idx = timemory_get_begin_record("indexed")

    ! replace current set of components
    call timemory_push_components("page_rss")

    ! Nested region "inner" with only page_rss components
    call timemory_push_region("inner (pushed)")

    ! Stop "inner" region with only page_rss components
    call timemory_pop_region("inner (pushed)")

    ! restore previous set of components
    call timemory_pop_components()

    ! end the "indexed" region
    call timemory_end_record(idx)

    ! End "main"
    call timemory_pop_region("main")

    ! Output the results
    call timemory_finalize_library()

end program fortran_example

Sample Python API

Decorator

from timemory.bundle import marker

@marker(["cpu_clock", "peak_rss"])
def foo():
    pass

Context Manager

from timemory.profiler import profile

def bar():
    with profile(["wall_clock", "cpu_util"]):
        foo()

Individual Components

from timemory.component import WallClock

def spam():

    wc = WallClock("spam")
    wc.start()

    bar()

    wc.stop()
    data = wc.get()
    print(data)

Argparse Support

import argparse

parser = argparse.ArgumentParser("example")
# ...
timemory.add_arguments(parser)

args = parser.parse_args()

Component Storage

from timemory.storage import WallClockStorage

# data for current rank
data = WallClockStorage.get()
# combined data on rank zero but all ranks must call it
dmp_data = WallClockStorage.dmp_get()

Versioning

Timemory originated as a very simple tool for recording timing and memory measurements (hence the name) in C, C++, and Python and supported only three modes prior to the 3.0.0 release: a fixed set of timers, a pair of memory measurements, and the combination of the two. For the 3.0.0 release, timemory was almost completely rewritten from scratch, with the sole exceptions of some C/C++ macros, e.g. TIMEMORY_AUTO_TIMER, and some Python decorators and context-managers, e.g. timemory.util.auto_timer, whose behaviors could be fully replicated in the new release. Thus, while it may appear that timemory is a mature project at v3.0+, it is essentially still in its first major release.

Citing timemory

To reference timemory in a publication, please cite the following paper:

  • Madsen, J.R. et al. (2020) Timemory: Modular Performance Analysis for HPC. In: Sadayappan P., Chamberlain B., Juckeland G., Ltaief H. (eds) High Performance Computing. ISC High Performance 2020. Lecture Notes in Computer Science, vol 12151. Springer, Cham

Additional Information

For more information, refer to the documentation.
