  • Stars: 304
  • Rank: 132,807 (top 3%)
  • Language: Shell
  • License: Apache License 2.0
  • Created: almost 9 years ago
  • Updated: 2 months ago

Repository Details

Lightweight user-defined software stacks for high-performance computing.

What is Charliecloud?

Charliecloud provides user-defined software stacks (UDSS) for high-performance computing (HPC) centers. This “bring your own software stack” functionality addresses needs such as:

  • software dependencies that are numerous, complex, unusual, differently configured, or simply newer/older than what the center provides;
  • build-time requirements unavailable within the center, such as relatively unfettered internet access;
  • validated software stacks and configuration to meet the standards of a particular field of inquiry;
  • portability of environments between resources, including workstations and other test and development systems not managed by the center;
  • consistent environments, even archivally so, that can be easily, reliably, and verifiably reproduced in the future; and/or
  • usability and comprehensibility.

How does it work?

Charliecloud uses Linux user namespaces to run containers with no privileged operations or daemons and minimal configuration changes on center resources. This simple approach avoids most security risks while maintaining access to the performance and functionality already on offer.

Container images can be built using Docker or anything else that can generate a standard Linux filesystem tree.
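A typical workflow is therefore to build an image, flatten it into a plain directory tree, and run commands inside it as an unprivileged user. The following is a minimal sketch assuming a reasonably recent Charliecloud release; the exact command names and flags (ch-image, ch-convert, ch-run) vary between versions, so check the documentation matching your install:

    # Build an image from a Dockerfile using Charliecloud's unprivileged builder.
    $ ch-image build -t hello .

    # Unpack the image into a plain directory tree under /var/tmp.
    $ ch-convert hello /var/tmp/hello

    # Run a command inside the container: no daemon, no setuid helpers, no root.
    $ ch-run /var/tmp/hello -- echo 'hello from the container'

Every step runs as an ordinary user; the kernel's user namespace support supplies the isolation that would otherwise require privileged operations.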

How do I learn more?

Who is responsible?

Contributors:

How can I participate?

Use our GitHub page: https://github.com/hpc/charliecloud

Bug reports and feature requests should be filed as “Issues”. Questions, comments, support requests, and everything else should use our “Discussions”. Don't worry if you put something in the wrong place; we’ll be more than happy to help regardless.

We also have a mailing list for announcements: https://groups.io/g/charliecloud

Patches to both the software itself and its documentation are much appreciated. Optionally, please include in your first patch a credit for yourself in the list above.

We are friendly and welcoming of diversity on all dimensions.

How do I cite Charliecloud?

If Charliecloud helped your research, or it was useful to you in any other context where bibliographic citations are appropriate, please cite the following open-access paper:

Reid Priedhorsky and Tim Randles. “Charliecloud: Unprivileged containers for user-defined software stacks in HPC”, 2017. In Proc. Supercomputing. DOI: 10.1145/3126908.3126925.

Note: This paper contains an out-of-date number for the size of Charliecloud's code. Please instead use the current number in the FAQ.

Copyright and license

Charliecloud is copyright © 2014–2022 Triad National Security, LLC and others.

This software was produced under U.S. Government contract 89233218CNA000001 for Los Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC for the U.S. Department of Energy/National Nuclear Security Administration.

This is open source software (LA-CC 14-096); you can redistribute it and/or modify it under the terms of the Apache License, Version 2.0. A copy is included in file LICENSE. You may not use this software except in compliance with the license.

The Government is granted for itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide license in this material to reproduce, prepare derivative works, distribute copies to the public, perform publicly and display publicly, and to permit others to do so.

Neither the government nor Triad National Security, LLC makes any warranty, express or implied, or assumes any liability for use of this software.

If software is modified to produce derivative works, such derivative works should be clearly marked, so as not to confuse it with the version available from LANL.

More Repositories

1. ior (C, 359 stars): IOR and mdtest
2. dcp (Shell, 193 stars): dcp is a distributed file copy program that automatically distributes and dynamically balances work equally across nodes in a large distributed system without centralized state.
3. mpifileutils (C, 160 stars): File utilities designed for scalability and performance.
4. libcircle (C, 98 stars): An API to provide an efficient distributed queue on a cluster. Libcircle is currently used in production to quickly traverse and perform operations on a file tree which contains several hundred-million file nodes.
5. Spindle (Makefile, 93 stars): Scalable dynamic library and python loading in HPC environments
6. MPI-Examples (89 stars): Some example MPI programs
7. xpmem (C, 79 stars): Linux Cross-Memory Attach
8. pavilion2 (Python, 40 stars): Pavilion is a Python 3 (3.5+) based framework for running and analyzing tests targeting HPC systems.
9. libhio (C, 21 stars): libhio is a library intended for writing data to hierarchical data store systems.
10. libdftw (Shell, 19 stars): A distributed and decentralized filesystem treewalk function, similar to the interface of Linux's ftw(3). libdftw automatically and dynamically balances the treewalk workload across many nodes in a large distributed system.
11. mpimemu (C, 18 stars): MPI Memory Consumption Utilities
12. supermagic (C, 14 stars): Very simple MPI sanity code. Nothing more, nothing less.
13. hybridize (Perl, 13 stars): Generate an optimal rootfs hybridize list of files that should be symlinked to an NFS mount and not required before the NFS mount happens
14. iptablesbuild (Perl, 13 stars): iptablesbuild is effectively a configuration manager for iptables. It is intended to manage iptables configurations in a centralized location for multiple systems.
15. cluster-school (Shell, 13 stars): LANL Supercomputing Institute curriculum
16. Parallel-coreutils (C, 12 stars): Parallelized gnu-coreutils
17. sprintstatf (C, 10 stars): Print a stat struct using a method similar to sprintf(3).
18. hpc-collab (Shell, 10 stars): This project provides provisioned HPC cluster models using underlying virtualization mechanisms.
19. give (Python, 10 stars): A tool to transfer permission of files to others in a Linux-based environment.
20. pexec (Perl, 9 stars): Parallel execution command; run commands, copy files, etc. on a host or across a cluster.
21. lustre (C, 8 stars): Yet another branch of Lustre.
22. quo-vadis (C++, 7 stars): A cross-stack coordination layer to dynamically map runtime components to hardware resources
23. dpusm (C, 7 stars): Data Processing Unit Services Module
24. rma-mt (C, 7 stars)
25. gnawts (Python, 7 stars): A Splunk app for fast detangling of supercomputer logs.
26. OpenLorenz (JavaScript, 5 stars): Web-based HPC dashboard and more
27. genpxe (Perl, 5 stars): Generate cluster PXE files from a flat config file
28. clusterscripts (Shell, 4 stars): useful? cluster. scripts!
29. ethcfg (Perl, 4 stars): Perform external Ethernet interface configuration and set hostnames
30. nrd (Go, 4 stars): Neighborless Route Detection (NRD) is a utility that dynamically manages ECMP/MultiPath routes by listening for OSPF Hello packets.
31. hxhim (C++, 4 stars)
32. trinity_net_tests (C, 3 stars): Low-level uGNI-based network tests for Trinity
33. ppsst (Shell, 3 stars): Prerequisites, Packages, Services, Sanity check tools
34. mpi_sessions_code_sandbox (C, 2 stars): Sandbox for exploring concepts for MPI Sessions
35. shasta_wrapper (Shell, 1 star): Scripting to simplify the administration of HPE Cray EX systems
36. cce-mpi-openmpi-1.6.4 (C, 1 star): CCE Open MPI 1.6.4
37. scality-dl (Shell, 1 star): Collection of scripts specifically for starting diskless Scality 4.3.7
38. rca-mesh-coords (C, 1 star): An application that uses RCA to get the mesh coordinates of NIDs within an allocation.
39. perceus-reload (Perl, 1 star): HOWTO dump and reload a PERCEUS db from a flat-file configuration
40. cce-mpi-openmpi-1.7.1 (C, 1 star): CCE Open MPI 1.7.1
41. openmpi-plat (1 star): Open MPI platform files
42. ACES-fs-acceptance (Python, 1 star): Test plans and scripts or code input to execute file system acceptance tests for ACES systems.
43. hpc.github.io (JavaScript, 1 star)