NVIDIA Network Operator

NVIDIA Network Operator leverages Kubernetes CRDs and the Operator SDK to manage the networking-related components required to enable fast networking, RDMA and GPUDirect for workloads in a Kubernetes cluster.

The goal of the Network Operator is to manage all networking-related components needed to run RDMA and GPUDirect RDMA workloads in a Kubernetes cluster, including:

  • Mellanox networking drivers to enable advanced features
  • Kubernetes device plugins to expose hardware resources for fast networking
  • Kubernetes secondary networks for network-intensive workloads

Documentation

For more information please visit the official documentation.

Prerequisites

Kubernetes Node Feature Discovery (NFD)

NVIDIA Network Operator relies on node labeling to bring the cluster to the desired state. Node Feature Discovery (NFD) v0.13.2 or newer is deployed by default via the Helm chart installation. NFD is used to label nodes with the following:

  • PCI vendor and device information
  • RDMA capability
  • GPU features*

NOTE: We use nodeFeatureRules to label PCI vendor and device information. This is enabled via the nfd.deployNodeFeatureRules chart parameter.

Example NFD worker configurations:

    config:
      sources:
        pci:
          deviceClassWhitelist:
          - "0300"
          - "0302"
          deviceLabelFields:
          - vendor

* Required for GPUDirect driver container deployment

NOTE: If NFD is already deployed in the cluster, make sure to pass --set nfd.enabled=false to the helm install command to avoid conflicts. If NFD is deployed from this repo, the enableNodeFeatureApi flag is enabled by default so that NodeFeatureRules can be created.
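
For reference, the same settings can be expressed in a Helm values file. Below is a minimal sketch using only the chart parameters referenced in the notes above (nfd.enabled and nfd.deployNodeFeatureRules); the values shown are illustrative, so consult the chart's values.yaml for the authoritative defaults:

nfd:
  # Set to false if NFD is already deployed in the cluster (avoids conflicts)
  enabled: true
  # Deploy NodeFeatureRules used to label PCI vendor and device information
  deployNodeFeatureRules: true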

Resource Definitions

The Operator acts on the following CRDs:

NICClusterPolicy CRD

This CRD defines the cluster state for Mellanox network devices.

NOTE: The operator will act on a NicClusterPolicy instance with the predefined name "nic-cluster-policy"; instances with different names will be ignored.

NICClusterPolicy spec:

NICClusterPolicy CRD Spec includes the following sub-states:

NOTE: Any sub-state may be omitted if it is not required for the cluster.

NOTE: NVIDIA IPAM and the Whereabouts IPAM plugin can be deployed simultaneously in the same cluster.

Example for NICClusterPolicy resource:

In the example below, we request that the OFED driver be deployed together with the RDMA shared device plugin.

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  ofedDriver:
    image: mofed
    repository: nvcr.io/nvidia/mellanox
    version: 23.04-0.5.3.3.1
    startupProbe:
      initialDelaySeconds: 10
      periodSeconds: 10
    livenessProbe:
      initialDelaySeconds: 30
      periodSeconds: 30
    readinessProbe:
      initialDelaySeconds: 10
      periodSeconds: 30
  rdmaSharedDevicePlugin:
    image: k8s-rdma-shared-dev-plugin
    repository: nvcr.io/nvidia/cloud-native
    version: v1.3.2
    # The config below directly propagates to k8s-rdma-shared-device-plugin configuration.
    # Replace 'ens2f0' under 'ifNames' with your (RDMA capable) netdevice name.
    config: |
      {
        "configList": [
          {
            "resourceName": "rdma_shared_device_a",
            "rdmaHcaMax": 63,
            "selectors": {
              "vendors": ["15b3"],
              "deviceIDs": ["1017"],
              "ifNames": ["ens2f0"]
            }
          }
        ]
      }
  secondaryNetwork:
    cniPlugins:
      image: plugins
      repository: ghcr.io/k8snetworkplumbingwg
      version: v1.2.0-amd64
    multus:
      image: multus-cni
      repository: ghcr.io/k8snetworkplumbingwg
      version: v3.9.3
      # if config is missing or empty then multus config will be automatically generated from the CNI configuration file of the master plugin (the first file in lexicographical order in cni-conf-dir)
      config: ''
    ipamPlugin:
      image: whereabouts
      repository: ghcr.io/k8snetworkplumbingwg
      version: v0.6.1-amd64

This example can be found at: example/crs/mellanox.com_v1alpha1_nicclusterpolicy_cr.yaml

NicClusterPolicy with NVIDIA Kubernetes IPAM configuration

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  ofedDriver:
    image: mofed
    repository: nvcr.io/nvidia/mellanox
    version: 23.04-0.5.3.3.1
    startupProbe:
      initialDelaySeconds: 10
      periodSeconds: 10
    livenessProbe:
      initialDelaySeconds: 30
      periodSeconds: 30
    readinessProbe:
      initialDelaySeconds: 10
      periodSeconds: 30
  rdmaSharedDevicePlugin:
    image: k8s-rdma-shared-dev-plugin
    repository: nvcr.io/nvidia/cloud-native
    version: v1.3.2
    # The config below directly propagates to k8s-rdma-shared-device-plugin configuration.
    # Adjust the 'selectors' below to match your (RDMA capable) devices.
    config: |
      {
        "configList": [
          {
            "resourceName": "rdma_shared_device_a",
            "rdmaHcaMax": 63,
            "selectors": {
              "vendors": ["15b3"],
              "deviceIDs": ["101b"]
            }
          }
        ]
      }
  secondaryNetwork:
    cniPlugins:
      image: plugins
      repository: ghcr.io/k8snetworkplumbingwg
      version: v1.2.0-amd64
    multus:
      image: multus-cni
      repository: ghcr.io/k8snetworkplumbingwg
      version: v3.9.3
      config: ''
  nvIpam:
    image: nvidia-k8s-ipam
    repository: ghcr.io/mellanox
    version: v0.0.3
    config: '{
      "pools":  {
        "my-pool": {"subnet": "192.168.0.0/24", "perNodeBlockSize": 100, "gateway": "192.168.0.1"}
      }
    }'

This example can be found at: example/crs/mellanox.com_v1alpha1_nicclusterpolicy_cr-nvidia-ipam.yaml

NICClusterPolicy status

The NICClusterPolicy status field reflects the current state of the system. It contains a status for each sub-state as well as a global state.

Each sub-state status indicates whether the cluster has transitioned to the desired state for that sub-state, e.g. the OFED driver container is deployed and loaded on the relevant nodes, or the RDMA device plugin is deployed and running on the relevant nodes.

The global state reflects the logical AND of each individual sub-state.

Example Status field of a NICClusterPolicy instance
status:
  appliedStates:
  - name: state-pod-security-policy
    state: ignore
  - name: state-multus-cni
    state: ready
  - name: state-container-networking-plugins
    state: ignore
  - name: state-ipoib-cni
    state: ignore
  - name: state-whereabouts-cni
    state: ready
  - name: state-OFED
    state: ready
  - name: state-SRIOV-device-plugin
    state: ignore
  - name: state-RDMA-device-plugin
    state: ready
  - name: state-ib-kubernetes
    state: ignore
  - name: state-nv-ipam-cni
    state: ready
  state: ready

NOTE: An ignore state indicates that the sub-state was not defined in the custom resource and is therefore ignored.

MacvlanNetwork CRD

This CRD defines a MacVlan secondary network. It is translated by the Operator to a NetworkAttachmentDefinition instance as defined in k8snetworkplumbingwg/multi-net-spec.

MacvlanNetwork spec:

MacvlanNetwork CRD Spec includes the following fields:

  • networkNamespace: Namespace for NetworkAttachmentDefinition related to this MacvlanNetwork CRD.
  • master: Name of the host interface to enslave. Defaults to the default route interface.
  • mode: Mode of the interface; one of "bridge", "private", "vepa", "passthru". Default: "bridge".
  • mtu: MTU of the interface. Set to 0 to use the master interface's MTU.
  • ipam: IPAM configuration to be used for this network.
Example for MacvlanNetwork resource:

In the example below we deploy a MacvlanNetwork CRD instance with "bridge" mode, MTU 1500 and "ens2f0" (an RDMA-capable interface exposed above as the "rdma/rdma_shared_device_a" resource) as the master interface. The Operator will create a corresponding NetworkAttachmentDefinition for this macvlan network in the default namespace.

With Whereabouts IPAM CNI

apiVersion: mellanox.com/v1alpha1
kind: MacvlanNetwork
metadata:
  name: example-macvlannetwork
spec:
  networkNamespace: "default"
  master: "ens2f0"
  mode: "bridge"
  mtu: 1500
  ipam: |
    {
      "type": "whereabouts",
      "datastore": "kubernetes",
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
      },
      "range": "192.168.2.225/28",
      "exclude": [
       "192.168.2.229/30",
       "192.168.2.236/32"
      ],
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "info",
      "gateway": "192.168.2.1"
    }

This example can be found at: example/crs/mellanox.com_v1alpha1_macvlannetwork_cr.yaml

With NVIDIA Kubernetes IPAM

apiVersion: mellanox.com/v1alpha1
kind: MacvlanNetwork
metadata:
  name: example-macvlannetwork
spec:
  networkNamespace: "default"
  master: "ens2f0"
  mode: "bridge"
  mtu: 1500
  ipam: |
    {
      "type": "nv-ipam",
      "poolName": "my-pool"
    }

This example can be found at: example/crs/mellanox.com_v1alpha1_macvlannetwork_cr-nvidia-ipam.yaml

HostDeviceNetwork CRD

This CRD defines a HostDevice secondary network. It is translated by the Operator to a NetworkAttachmentDefinition instance as defined in k8snetworkplumbingwg/multi-net-spec.

HostDeviceNetwork spec:

HostDeviceNetwork CRD Spec includes the following fields:

  • networkNamespace: Namespace for NetworkAttachmentDefinition related to this HostDeviceNetwork CRD.
  • resourceName: Host device resource pool.
  • ipam: IPAM configuration to be used for this network.
Example for HostDeviceNetwork resource:

In the example below we deploy a HostDeviceNetwork CRD instance that uses the "hostdev" resource pool. The Operator will create a corresponding NetworkAttachmentDefinition for this host-device network in the default namespace.

apiVersion: mellanox.com/v1alpha1
kind: HostDeviceNetwork
metadata:
  name: example-hostdevice-network
spec:
  networkNamespace: "default"
  resourceName: "hostdev"
  ipam: |
    {
      "type": "whereabouts",
      "datastore": "kubernetes",
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
      },
      "range": "192.168.3.225/28",
      "exclude": [
       "192.168.3.229/30",
       "192.168.3.236/32"
      ],
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "info"
    }

This example can be found at: example/crs/mellanox.com_v1alpha1_hostdevicenetwork_cr.yaml

IPoIBNetwork CRD

This CRD defines an IPoIBNetwork secondary network. It is translated by the Operator to a NetworkAttachmentDefinition instance as defined in k8snetworkplumbingwg/multi-net-spec.

IPoIBNetwork spec:

IPoIBNetwork CRD Spec includes the following fields:

  • networkNamespace: Namespace for NetworkAttachmentDefinition related to this IPoIBNetwork CRD.
  • master: Name of the host interface to enslave.
  • ipam: IPAM configuration to be used for this network.
Example for IPoIBNetwork resource:

In the example below we deploy an IPoIBNetwork CRD instance with "ibs3f1" as the host interface. The Operator will create a corresponding NetworkAttachmentDefinition for this IPoIB network in the default namespace.

apiVersion: mellanox.com/v1alpha1
kind: IPoIBNetwork
metadata:
  name: example-ipoibnetwork
spec:
  networkNamespace: "default"
  master: "ibs3f1"
  ipam: |
    {
      "type": "whereabouts",
      "datastore": "kubernetes",
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
      },
      "range": "192.168.5.225/28",
      "exclude": [
       "192.168.6.229/30",
       "192.168.6.236/32"
      ],
      "log_file" : "/var/log/whereabouts.log",
      "log_level" : "info",
      "gateway": "192.168.6.1"
    }

This example can be found at: example/crs/mellanox.com_v1alpha1_ipoibnetwork_cr.yaml

Pod Security Policy

NVIDIA Network Operator supports Pod Security Policies. When a NicClusterPolicy is created with psp.enabled=True, a privileged PSP is created and applied to all of the network operator's pods. This requires the PodSecurityPolicy admission controller to be enabled in the cluster.
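
A minimal sketch of enabling this in the custom resource, assuming the psp.enabled setting mentioned above maps to a psp section in the NicClusterPolicy spec (verify the exact field layout against the NicClusterPolicy CRD):

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  # Assumed layout of the Pod Security Policy toggle (psp.enabled)
  psp:
    enabled: true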

System Requirements

  • RDMA capable hardware: Mellanox ConnectX-5 NIC or newer.
  • NVIDIA GPU and driver supporting GPUDirect, e.g. Quadro RTX 6000/8000, Tesla T4 or Tesla V100 (required for GPUDirect only).
  • Operating Systems: Ubuntu 20.04 LTS

NOTE: As more driver containers are built, the operator will be able to support additional platforms.

NOTE: ConnectX-6 Lx is not supported.

Tested Network Adapters

The following Network Adapters have been tested with NVIDIA Network Operator:

  • ConnectX-5
  • ConnectX-6 Dx

Compatibility Notes

  • NVIDIA Network Operator is compatible with NVIDIA GPU Operator v1.5.2 and above
  • Starting from v465, the NVIDIA GPU driver includes a built-in nvidia_peermem module, which replaces the nv_peer_mem module. The NVIDIA GPU Operator manages nvidia_peermem module loading.

Deployment Example

Deployment of NVIDIA Network Operator consists of:

  • Deploying NVIDIA Network Operator CRDs found under ./config/crd/bases:
    • mellanox.com_nicclusterpolicies_crd.yaml
    • mellanox.com_macvlan_crds.yaml
    • k8s.cni.cncf.io-networkattachmentdefinitions-crd.yaml
  • Deploying network operator resources found under ./deploy/ e.g. operator namespace, role, role binding, service account and the NVIDIA Network Operator daemonset
  • Defining and deploying a NICClusterPolicy custom resource. Example can be found under ./example/crs/mellanox.com_v1alpha1_nicclusterpolicy_cr.yaml
  • Defining and deploying a MacvlanNetwork custom resource. Example can be found under ./example/crs/mellanox.com_v1alpha1_macvlannetwork_cr.yaml

A deployment example can be found under the example folder.

Docker image

The Network Operator uses an Alpine base image by default. To build the Network Operator with another base image, pass the BASE_IMAGE build argument:

docker build -t network-operator \
--build-arg BASE_IMAGE=registry.access.redhat.com/ubi8/ubi-minimal:latest \
.

Driver Containers

Driver containers are essentially containers that have or yield kernel modules compatible with the underlying kernel. An initialization script loads the modules when the container is run (in privileged mode) making them available to the kernel.

While this approach may seem odd, it provides a way to deliver drivers to immutable systems.

Mellanox OFED container

The Mellanox OFED driver container supports customization of its behaviour via environment variables. This is regarded as advanced functionality and should generally not be needed.

Check MOFED Driver Container Environment Variables for details.
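
As a rough illustration, such variables would be passed through the ofedDriver section of the NicClusterPolicy. This is a sketch only: the env list shown is an assumed Kubernetes-style field and EXAMPLE_VARIABLE is a hypothetical placeholder, so consult the linked document for the exact field name and the variables the container actually honours:

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  ofedDriver:
    image: mofed
    repository: nvcr.io/nvidia/mellanox
    version: 23.04-0.5.3.3.1
    # Assumed field: a Kubernetes-style env list on the ofedDriver section.
    # EXAMPLE_VARIABLE is a hypothetical placeholder, not a real MOFED variable.
    env:
    - name: EXAMPLE_VARIABLE
      value: "example-value"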

Upgrade

Check Upgrade section in Helm Chart documentation for details.

Externally Provided Configurations For Network Operator Sub-Components

In most cases, the Network Operator is deployed together with the related configurations for the various sub-components it deploys, e.g. the NVIDIA Kubernetes IPAM plugin, the RDMA shared device plugin or the SR-IOV network device plugin.

Configurations are specified either via Helm values when installing the NVIDIA Network Operator, or directly in the NicClusterPolicy CR. These configurations eventually trigger the creation of a ConfigMap object in Kubernetes.

As an example, NVIDIA K8s IPAM plugin configuration is specified either via:

Helm values:

deployCR: true
nvIpam:
  deploy: true
  image: nvidia-k8s-ipam
  repository: ghcr.io/mellanox
  version: v0.0.3
  config: |-
    {
    "pools":  {
      "rdma-pool": {"subnet": "192.168.0.0/16", "perNodeBlockSize": 100, "gateway": "192.168.0.1"}
      }
    }

NicClusterPolicy CR:

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  nvIpam:
    image: nvidia-k8s-ipam
    repository: ghcr.io/mellanox
    version: v0.0.3
    config: '{
      "pools":  {
        "my-pool": {"subnet": "192.168.0.0/16", "perNodeBlockSize": 100, "gateway": "192.168.0.1"}
      }
    }'

The configuration is then processed by the operator, which eventually renders and creates a ConfigMap named nvidia-k8s-ipam-config in the namespace the operator was deployed in. It contains the configuration for the NVIDIA Kubernetes IPAM plugin.

For some advanced use cases, it is desirable to provide such configurations at a later time (e.g. if the network configuration is not known at Network Operator deployment time).

To support this, it is possible to explicitly set such configuration to null in the Helm values, or to omit the config field of the relevant component when creating the NicClusterPolicy CR. This prevents the Network Operator from creating the corresponding ConfigMaps, allowing users to provide their own.

Example (omitting nvidia k8s ipam config):

Helm values:

deployCR: true
nvIpam:
  deploy: true
  image: nvidia-k8s-ipam
  repository: ghcr.io/mellanox
  version: v0.0.3
  config: null

NicClusterPolicy CR:

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  nvIpam:
    image: nvidia-k8s-ipam
    repository: ghcr.io/mellanox
    version: v0.0.3

Note: It is the user's responsibility to delete any existing configurations (ConfigMaps) that were already created by the Network Operator, as well as to delete their own configurations when they are no longer required.
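
For reference, a user-supplied configuration could look like the sketch below. The ConfigMap name is taken from the description above; the namespace and the data key are assumptions, so inspect an operator-rendered ConfigMap for the authoritative format before relying on this:

apiVersion: v1
kind: ConfigMap
metadata:
  # Name taken from the description above; the namespace is an assumption
  # (use the namespace the operator was deployed in).
  name: nvidia-k8s-ipam-config
  namespace: nvidia-network-operator
data:
  # The data key is an assumption; check an operator-rendered ConfigMap.
  config: |
    {
      "pools": {
        "my-pool": {"subnet": "192.168.0.0/16", "perNodeBlockSize": 100, "gateway": "192.168.0.1"}
      }
    }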
