• This repository has been archived on 30/Dec/2020
  • Stars: 117
  • Rank: 301,828 (top 6%)
  • Language: Go
  • License: Apache License 2.0
  • Created over 5 years ago
  • Updated almost 4 years ago

Repository Details

Singularity implementation of k8s operator for interacting with SLURM.

WLM-operator

The singularity-cri and wlm-operator projects were created by Sylabs to explore interaction between the Kubernetes and HPC worlds. In 2020, rather than dilute our efforts over a large number of projects, we have focused on Singularity itself and our supporting services. We're also looking forward to introducing new features and technologies in 2021.

At this point we have archived the repositories to indicate that they aren't under active development or maintenance. We recognize there is still interest in singularity-cri and wlm-operator, and we'd like these projects to find a home within a community that can further develop and maintain them. The code is open-source under the Apache License 2.0, to be compatible with other projects in the k8s ecosystem.

Please reach out to us via [email protected] if you are interested in establishing a new home for the projects.


WLM operator is a Kubernetes operator implementation capable of submitting and monitoring WLM jobs while using all Kubernetes features, such as smart scheduling and volumes.

WLM operator connects a Kubernetes node with a whole WLM cluster, which enables multi-cluster scheduling. In other words, Kubernetes integrates with WLM as one-to-many.

Each WLM partition (queue) is represented as a dedicated virtual node in Kubernetes. WLM operator can automatically discover WLM partition resources (CPUs, memory, nodes, wall-time) and propagate them to Kubernetes by labeling the virtual node. Those node labels are respected during Slurm job scheduling, so a job will land only on a suitable partition with enough resources.

Right now WLM-operator supports only Slurm clusters, but adding support for another WLM is straightforward: implement a gRPC server for it, using the current Slurm implementation as a reference.

Installation

Since wlm-operator is built with Go modules, there is no need to create a standard Go workspace. If you still prefer keeping the source code under GOPATH, make sure GO111MODULE is set.
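
For example, a minimal sketch of building from a checkout that lives under GOPATH (the export only matters on Go versions where module mode is not the default):

export GO111MODULE=on   # force module-aware mode even inside GOPATH
make                    # build as described in the installation steps below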

Prerequisites

  • Go 1.11+

Installation steps

The installation process connects Kubernetes with a Slurm cluster.

NOTE: the installation process below covers a single Slurm cluster; the same steps should be performed for each cluster to be connected.

  1. Create a new Kubernetes node with Singularity-CRI on the Slurm login host. Make sure you set up a NoSchedule taint so that no random pod will be scheduled there (a shell sketch of steps 1 and 2 follows step 3).

  2. Create a new dedicated user on the Slurm login host. All submitted Slurm jobs will be executed on behalf of that user. Make sure the user has execute permissions for the following Slurm binaries: sbatch, scancel, sacct and scontrol.

  3. Clone the repo.

git clone https://github.com/sylabs/wlm-operator
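
A sketch of steps 1 and 2 in shell; the node name k8s-slurm-login, the taint key wlm.sylabs.io/dedicated and the user name slurm-agent are illustrative placeholders, not names required by the operator:

# Step 1: keep ordinary pods off the dedicated node with a NoSchedule taint.
kubectl taint nodes k8s-slurm-login wlm.sylabs.io/dedicated=true:NoSchedule

# Step 2: dedicated user that submitted Slurm jobs will run as.
sudo useradd -m -s /bin/bash slurm-agent
# Verify the user can reach the required Slurm client binaries.
sudo -u slurm-agent which sbatch scancel sacct scontrol
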
  4. Build and start red-box – a gRPC proxy between Kubernetes and a Slurm cluster.
cd wlm-operator && make

Use the dedicated user from step 2 to run red-box, e.g. set User= in the systemd red-box.service. By default red-box listens on /var/run/syslurm/red-box.sock, so you have to make sure the user has read and write permissions for /var/run/syslurm.
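
A sketch of preparing the socket directory for that user (slurm-agent is the example name from the sketch above; adjust the red-box binary path to wherever make placed it):

# Give the dedicated user ownership of the directory holding red-box.sock.
sudo mkdir -p /var/run/syslurm
sudo chown slurm-agent:slurm-agent /var/run/syslurm
# Run red-box as that user; a systemd unit with User=slurm-agent achieves the same.
sudo -u slurm-agent ./bin/red-box &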

  5. Set up the Slurm operator in Kubernetes.
kubectl apply -f deploy/crds/slurm_v1alpha1_slurmjob.yaml
kubectl apply -f deploy/operator-rbac.yaml
kubectl apply -f deploy/operator.yaml

This will create a new CRD that introduces SlurmJob to Kubernetes. After that, the Kubernetes controller for the SlurmJob CRD is set up as a Deployment.
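
To check that this step worked, something along these lines (the exact CRD and deployment names come from the manifests in deploy/):

# The SlurmJob CRD should now be registered with the API server.
kubectl get crd | grep -i slurmjob
# The operator controller should be running as a Deployment.
kubectl get deployments,pods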

  6. Start up the configurator that will bring up a virtual node for each partition in the Slurm cluster.
kubectl apply -f deploy/configurator.yaml

After all those steps the Kubernetes cluster is ready to run SlurmJobs.
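
A quick way to check is to look for the virtual nodes the configurator created, one per Slurm partition:

# Virtual nodes appear next to the regular k8s nodes.
kubectl get nodes
# Inspect the partition resources and labels propagated to a virtual node.
kubectl describe node <virtual-node-name>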

Usage

The most convenient way to submit Slurm jobs is with YAML files; take a look at the basic examples.

We will walk through a basic example of how to submit jobs to Slurm in Vagrant.

apiVersion: wlm.sylabs.io/v1alpha1
kind: SlurmJob
metadata:
  name: cow
spec:
  batch: |
    #!/bin/sh
    #SBATCH --nodes=1
    #SBATCH --output cow.out
    srun singularity pull -U library://sylabsed/examples/lolcow
    srun singularity run lolcow_latest.sif
    srun rm lolcow_latest.sif
  nodeSelector:
    wlm.sylabs.io/containers: singularity
  results:
    from: cow.out
    mount:
      name: data
      hostPath:
        path: /home/job-results
        type: DirectoryOrCreate

In the example above we run the lolcow Singularity container in Slurm and collect the results to /home/job-results, located on the k8s node where the job is scheduled. Generally, job results can be collected to any supported k8s volume.

The Slurm job specification will be processed by the operator, and a dummy pod will be scheduled in order to transfer the job specification to a specific queue. That dummy pod will not have an actual physical process under the hood; instead, its specification will be used to schedule the Slurm job directly on a connected cluster. To collect results, another pod will be created with UID and GID 1000 (the default values), so you should make sure it has write access to the volume where you want to store the results (the host directory /home/job-results in the example above). The UID and GID are inherited from the virtual kubelet that spawns the pod, and the virtual kubelet inherits them from the configurator (see runAsUser in configurator.yaml).
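
A sketch of preparing the host directory from the example so that the collecting pod (UID/GID 1000 by default) can write into it:

# Run on the k8s node where the collect pod will be scheduled.
sudo mkdir -p /home/job-results
sudo chown 1000:1000 /home/job-results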

After that you can submit the cow job:

$ kubectl apply -f examples/cow.yaml 
slurmjob.wlm.sylabs.io "cow" created

$ kubectl get slurmjob
NAME   AGE   STATUS
cow    66s   Succeeded


$ kubectl get pod
NAME                             READY   STATUS         RESTARTS   AGE
cow-job                          0/1     Job finished   0          17s
cow-job-collect                  0/1     Completed      0          9s

Validate job results appeared on a node:

$ ls -la /home/job-results
cow-job
  
$ ls /home/job-results/cow-job 
cow.out

$ cat cow.out
WARNING: No default remote in use, falling back to: https://library.sylabs.io
 _________________________________________
/ It is right that he too should have his \
| little chronicle, his memories, his     |
| reason, and be able to recognize the    |
| good in the bad, the bad in the worst,  |
| and so grow gently old all down the     |
| unchanging days and die one day like    |
| any other day, only shorter.            |
|                                         |
\ -- Samuel Beckett, "Malone Dies"        /
 -----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Results collection

Slurm operator supports result collection into a k8s volume so that a user won't need access to the Slurm cluster to analyze job results.

However, some configuration is required for this feature to work. More specifically, results can be collected only from the login host (i.e. where red-box is running), while a Slurm job can be scheduled on an arbitrary Slurm worker node. This means that some kind of shared storage among the Slurm nodes should be configured, so that no matter which Slurm worker node runs a job, its results will also appear on the login host. NOTE: result collection is a network- and IO-consuming task, so collecting large files (e.g. a 1GB result of an ML job) may not be a great idea.

Let's walk through the basic configuration steps. In the following, we assume the file cow.out from the example above is being collected. This file can be found on the Slurm worker node that executed the job; more specifically, you'll find it in the folder from which the job was submitted (i.e. red-box's working dir). Configuration for other result files will differ in the shared paths only:

$RESULTS_DIR = red-box's working directory

Share $RESULTS_DIR among all Slurm nodes, e.g. set up an NFS share for $RESULTS_DIR.
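
A hedged sketch of such an NFS setup; the login host name slurm-login and the directory /home/slurm-agent are placeholders to adapt to your cluster:

# $RESULTS_DIR is red-box's working directory, e.g. the home of the user running it.
RESULTS_DIR=/home/slurm-agent

# On the login host: export the directory to the Slurm nodes.
echo "$RESULTS_DIR *(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On every Slurm worker node: mount the share at the same path.
sudo mount -t nfs slurm-login:"$RESULTS_DIR" "$RESULTS_DIR"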

Configuring red-box

By default red-box performs automatic resource discovery for all partitions. However, it's possible to set up available resources for a partition manually in the config file. The following resources can be specified: nodes, cpu_per_node, mem_per_node and wall_time. Additionally, you can specify partition features there, e.g. available software or hardware. The config path should be passed to red-box with the --config flag.

Config example:

partition1:
  nodes: 10
  mem_per_node: 2048 # in MBs
  cpu_per_node: 8
  wall_time: 10h 
partition2:
  nodes: 10
  # mem, cpu and wall_time will be automatically discovered
partition3:
  additional_features:
    - name: singularity
      version: 3.2.0
    - name: nvidia-gpu
      version: 2080ti-cuda-7.0
      quantity: 20
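
A sketch of starting red-box with such a file; the config path is arbitrary and the binary path depends on where make placed it:

# Partitions listed in the file use the configured values; anything omitted
# falls back to red-box's automatic discovery.
./bin/red-box --config /etc/red-box/config.yaml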

Vagrant

If you want to try wlm-operator locally before updating your production cluster, use Vagrant, which will automatically install and configure all the necessary software:

cd vagrant
vagrant up && vagrant ssh k8s-master

NOTE: vagrant up may take about 15 minutes, as the k8s cluster will be installed from scratch.

Vagrant will spin up two VMs: a k8s master and a k8s worker node with Slurm installed. If you wish to set up more workers, feel free to modify the N parameter in the Vagrantfile.
