
docker-topo

Docker network topology builder


Supported images

  • Arista cEOS-lab
  • Arista vEOS-lab
  • Arista CVP
  • All vrnetlab images - experimental support, currently only tested with CSR1k, vMX and XRv.
  • User-defined docker images

Installation

With Python virtualenv (recommended)

python3 -m pip install virtualenv
python3 -m virtualenv testdir; cd testdir
source bin/activate 
pip install git+https://github.com/networkop/docker-topo.git

Without virtualenv

python3 -m pip install --upgrade --user git+https://github.com/networkop/docker-topo.git

Note: Python 2.x is not supported

Usage

# docker-topo -h
usage: docker-topo [-h] [-d] [--create | --destroy] [-s] [-a] topology

Tool to create cEOS topologies

positional arguments:
  topology       Topology file

optional arguments:
  -h, --help     show this help message and exit
  -d, --debug    Enable Debug

Actions:
  Create or destroy topology

  --create       Create topology
  --destroy      Destroy topology

Save:
  Save or archive the topology

  -s, --save     Save topology configs
  -a, --archive  Archive topology file and configs

Topology file

A topology file is a YAML file describing how docker containers are to be interconnected. This information is stored in the links variable, which contains a list of links. Each link is described by a unique set of connected interfaces. There are several versions of the topology file format.

Topology file v1

This version is considered legacy and is documented here.

Topology file v2

Each link in a links array is a dictionary with the following format:

VERSION: 2
links:
  - endpoints:
      - "Device-A:Interface-2" 
      - "Device-B:Interface-1"
  - driver: macvlan
    driver_opts: 
      parent: wlp58s0
    endpoints: ["Device-A:Interface-1", "Device-B:Interface-2"]

Each link dictionary supports the following elements:

  • endpoints - the only mandatory element, contains a list of endpoints to be connected to a link.
  • driver - defines the link driver to be used. Currently supported drivers are veth, bridge and macvlan. When a driver is not specified, the default bridge driver is used. The following limitations apply:
    • the macvlan driver requires a mandatory driver_opts object described below
    • the veth driver talks directly to netlink (no libnetwork involved) and makes changes to network namespaces
  • driver_opts - optional object containing driver options as required by Docker's libnetwork. Currently only used for macvlan's parent interface definition.

Each link endpoint is encoded as "DeviceName:InterfaceName:IPPrefix" with the following constraints:

  • DeviceName determines which docker image is going to be used by (case-insensitive) matching of the following strings:
    • host - alpine-host image is going to be used
    • cvp - cvp image is going to be used
    • veos - Arista vEOS image built according to the procedure described here
    • vmx - Juniper vMX image built with vrnetlab
    • csr - Cisco CSR1000v image built with vrnetlab
    • xrv - Cisco IOS XRv image built with vrnetlab
    • For anything else, the Arista cEOS image will be used
  • InterfaceName must match the exact name of the interface you expect to see inside a container. For example, if you expect to connect a link to DeviceA's interface eth0, the endpoint definition should be "DeviceA:eth0"
  • IPPrefix - optional parameter that works ONLY for alpine-host devices and will attempt to configure the provided IP prefix inside the container (see the sketch below).
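
For illustration, here is a minimal sketch of a link that uses the optional IPPrefix on an alpine-host endpoint (the device and interface names are hypothetical):

VERSION: 2
links:
  - endpoints:
      - "Host-1:eth1:192.168.10.1/24"
      - "Leaf-1:eth1"

Host-1 matches the host string, so the alpine-host image is used and 192.168.10.1/24 is configured on eth1 inside the container; Leaf-1 does not match any of the well-known names, so it gets the default cEOS image.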

Bridge vs veth driver caveats

Both the bridge and veth drivers have their own set of caveats. Keep them in mind when choosing a driver:

Feature                 bridge                           veth
multipoint links        supported                        not supported
sudo privileges         not required                     required
docker modifications    requires patched docker daemon   uses standard docker daemon
L2 multicast            only LLDP                        supported

You can mix both bridge and veth drivers in the same topology; however, make sure that bridge driver links always come first, followed by the veth links. For example:

VERSION: 2
driver: veth

links:
  - endpoints: ["Leaf1:eth1", "Leaf2:eth1"]
    driver: 'bridge'
  - endpoints: ["Leaf1:eth2", "Leaf2:eth2"]
  - endpoints: ["Leaf1:eth3", "Leaf2:eth3"]

Note: You also need to explicitly specify the default eth0 interface for every endpoint, since docker-topo does not create this interface automatically, whereas a normal docker create does. See the sketch below.
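
As a sketch of what this looks like in practice (the extra eth0 link and its placement are illustrative, not a prescribed layout):

VERSION: 2
driver: veth
links:
  - endpoints: ["Leaf1:eth0", "Leaf2:eth0"] # default eth0 interfaces declared explicitly
    driver: 'bridge'
  - endpoints: ["Leaf1:eth1", "Leaf2:eth1"]
    driver: 'bridge'
  - endpoints: ["Leaf1:eth2", "Leaf2:eth2"]
  - endpoints: ["Leaf1:eth3", "Leaf2:eth3"]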

macOS / OSX Support

Pyroute2 supports BSD as of 0.5.2, but the veth driver will not work in topology files.

Pyroute2 runs natively on Linux and emulates some limited subset of RTNL netlink API on BSD systems on top of PF_ROUTE notifications and standard system tools.

(Optional) Global variables

Along with the mandatory links array, there are a number of options that can be specified to override some of the default settings. Below is the list of options with their default values:

VERSION: 1  # Topology file version. Accepts [1|2]
CEOS_IMAGE: ceos:latest # cEOS docker image name
CONF_DIR: './config' # Config directory to store cEOS startup configuration files
PUBLISH_BASE: 8000 # Publish cEOS ports starting from this number
OOB_PREFIX: '192.168.100.0/24' # Only used when link contains CVP. This prefix is assigned to CVP's eth1
PREFIX: 'CEOS-LAB' # This defaults to the topology filename (without the .yml extension)
driver: None

All of the capitalised global variables can also be provided as environment variables with the following priority:

  1. Global variables defined in a topology file
  2. Global variables from environment variables
  3. Defaults
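
For example, a quick sketch of overriding one of these variables from the shell (the image tag is illustrative); note that a value set inside the topology file itself would still take precedence over the environment variable:

export CEOS_IMAGE=ceosimage:latest
docker-topo --create topology.yml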

The final driver variable can be used to specify the default link driver for ALL links at once. This can be used, for example, to create all links with the non-default veth driver:

VERSION: 2
driver: veth
links:
  - endpoints: ["host1:eth1", "host2:eth1"]
  - endpoints: ["host1:eth2", "host3:eth1"]

There should be several examples in the ./topo-extra-files/examples directory

(Optional) Exposing arbitrary ports

By default, PUBLISH_BASE will expose the internal HTTPS (443/tcp) port of a container. It is possible to expose any number of internal ports for each container by defining PUBLISH_BASE in the following way:

PUBLISH_BASE:
  443/tcp: None # Will expose inside 443 to a random outside port
  22/tcp: 2000 # All containers will get their ports exposed starting from outside port 2000
  161/tcp: [127.0.0.1, 1600] # Similar to the above but only exposes ports on the defined local IP address

Note: the topology file must have at least one interface of type bridge in order for PUBLISH_BASE to work.
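
To tie these together, here is a hedged sketch (device names and the single link are illustrative) that combines the dictionary form of PUBLISH_BASE with a default-driver (bridge) link, so port publishing has a bridge interface to work with:

VERSION: 2
PUBLISH_BASE:
  443/tcp: None # random outside ports
  22/tcp: 2000  # outside ports allocated from 2000 upwards
links:
  - endpoints: ["cEOS-1:eth1", "cEOS-2:eth1"] # default bridge driver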

(Optional) Saving and archiving network topologies

By default, docker-topo will pick up any files located in the CONF_DIR and, if the filename matches the PREFIX_DEVICENAME, will mount it inside the container as /mnt/flash/startup-config.

When the topology is running, there's a way to easily save the output of "show run" from each device inside the CONF_DIR to make it available on the next reboot:

$ docker-topo -s topology.yml
Config directory exists, existing files may be overwritten. Continue? [y/n]:y
INFO:__main__:All configs saved in ./config

The archive option creates a tar.gz file containing the CONF_DIR directory and the topology YAML file:

$ docker-topo -a topology.yml
INFO:__main__:Archive file topo.tar.gz created
$ tar -tvf topo.tar.gz 
drwxr-xr-x root/root         0 2018-09-14 11:53 config/
-rw-r--r-- null/null       660 2018-09-11 15:08 topology.yml

(Optional) User-defined docker images

It is possible to create a topology with arbitrary docker images. One such example is the openstack topology. Whenever a CUSTOM_IMAGE dictionary is present, any device name that did not match one of the well-known images (e.g. cEOS, vEOS, Host, VMX, CSR etc.) will be matched against the keys of this dictionary and, if a match is found, the corresponding value will be used as the image. So, in case the topology file has:

CUSTOM_IMAGE:
  search_key: docker_image

The docker-topo image-matching logic will check whether search_key in device_name.lower() and, if True, will build a Generic device type with self.image == docker_image.
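
A minimal sketch of this behaviour (the search key, docker image and device names are hypothetical):

VERSION: 2
CUSTOM_IMAGE:
  nginx: nginx:latest
links:
  - endpoints: ["nginx-1:eth1", "Leaf-1:eth1"]

Here nginx-1 does not match any of the well-known device names but does contain the nginx key, so the container is started from the nginx:latest image, while Leaf-1 falls through to the default cEOS image.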

Example 1 - Creating a 2-node topology interconnected directly with veth links (without config)

+------+             +------+
|      |et1+-----+et2|      |
|cEOS 1|             |cEOS 2|
|      |et2+-----+et1|      |
+------+             +------+

sudo docker-topo --create topo-extra-files/examples/v2/2-node.yml

Example 2 - Creating a 3-node topology using the default docker bridge driver (with config)

+------+             +------+
|cEOS 1|et1+-----+et2|cEOS 2|
+------+             +------+
   et2                  et1
    +                    +
    |      +------+      |
    +--+et1|cEOS 3|et2+--+
           +------+

mkdir config
echo "hostname cEOS-1" > ./config/3-node_cEOS-1
echo "hostname cEOS-2" > ./config/3-node_cEOS-2
echo "hostname cEOS-3" > ./config/3-node_cEOS-3
docker-topo --create topo-extra-files/examples/v1-legacy/3-node.yml

List and connect to devices

# docker ps -a 
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                     PORTS                   NAMES
2315373f8741        ceosimage:latest    "/sbin/init"             About a minute ago   Up About a minute          0.0.0.0:9002->443/tcp   3-node_cEOS-3
e427def01f3a        ceosimage:latest    "/sbin/init"             About a minute ago   Up About a minute          0.0.0.0:9001->443/tcp   3-node_cEOS-2
f1a2ac8a904f        ceosimage:latest    "/sbin/init"             About a minute ago   Up About a minute          0.0.0.0:9000->443/tcp   3-node_cEOS-1


# docker exec -it 3-node_cEOS-1 Cli
cEOS-1>

Destroy a topology

docker-topo --destroy topo-extra-files/examples/3-node.yml

Troubleshooting

  • If you get the following error, try renaming the topology file to a name shorter than 15 characters

    pyroute2.netlink.exceptions.NetlinkError: (34, 'Numerical result out of range')
    
  • CVP can't connect to cEOS devices - make sure that CVP is attached with at least two interfaces. The first one is always for external access and the second one is always for device management
