Virter

Virter is a command line tool for simple creation and cloning of virtual machines.

Virter supports VMs running standard general purpose distributions such as CentOS and Ubuntu. It is especially useful for development and testing of projects which cannot use containers due to kernel dependencies, such as DRBD and LINSTOR.

Quick Start

First install and set up libvirt. Then download one of the releases. Virter is packaged as a single binary, so just put it into /usr/local/bin and you are ready to use Virter:

virter image pull alma-8 # would also be auto-pulled in the next step
virter vm run --name alma-8-hello --id 100 --wait-ssh alma-8
virter vm ssh alma-8-hello
virter vm rm alma-8-hello

In the above example, the Virter "ID" is an index into the IP range of the libvirt network definition. For a typical 192.x.y.z/24 network definition, the ID corresponds to z, so 100 in this case.
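As an illustration, assuming the stock libvirt default network (which typically uses 192.168.122.0/24), the ID maps directly to the last octet of the VM's address:

--id 100  ->  192.168.122.100
--id 42   ->  192.168.122.42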

Depending on your host distribution and libvirt installation, you may need to perform some additional setup steps; these are covered in the Installation Details section below.

Usage

For usage information, just run virter help.

Additional options that may be helpful when using virter vm run (a combined invocation is sketched after this list):

  • List all available images: virter image ls --available

  • Use a provisioning file: --provision /root/alma8.toml

  • Add additional disk(s): --disk "name=disk1,size=20GiB,format=qcow2,bus=virtio"

  • Add a bridged interface: --nic "type=bridge,source=br0,mac=1a:2b:3c:4d:5e:01"

  • Add a NAT'ed interface: --nic "type=network,source=default,mac=1a:2b:3c:4d:5e:01"
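As a sketch, these flags can be combined in a single invocation; the VM name, ID, provisioning file path and MAC address below are placeholders to adapt to your environment:

virter vm run --name alma-8-test --id 101 --wait-ssh \
    --provision /root/alma8.toml \
    --disk "name=disk1,size=20GiB,format=qcow2,bus=virtio" \
    --nic "type=network,source=default,mac=1a:2b:3c:4d:5e:01" \
    alma-8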

Other examples are provided in the examples directory. See the README files for the individual examples.

There is also additional documentation for the provisioning feature, which is useful for defining new images. These images can then be used to start clusters of cloned VMs.

Installation Details

Virter requires:

  • A running libvirt daemon on the host where it is run
  • A container runtime, when container provisioning is used. Currently, Virter supports docker and podman.

Configuration is read by default from ~/.config/virter/virter.toml.

When starting Virter for the first time, a default configuration file will be generated, including documentation about the various flags.

Container runtime

Select the container runtime by setting container.provider to either docker or podman.
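In ~/.config/virter/virter.toml this might look like the following (a minimal sketch assuming the dotted container.provider key maps onto a [container] table, as is conventional in TOML):

[container]
provider = "podman" # or "docker"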

podman

Virter communicates with podman via its REST API. Make sure the API socket is available.

This may be done by one of the following (a quick verification sketch follows the list):

  • Starting podman via systemd: systemctl --user start podman.socket (use systemctl --user enable --now podman.socket to make this permanent)
  • Starting podman manually: podman system service
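To verify that the socket is reachable, something along these lines should work (assuming a rootless, per-user setup):

systemctl --user status podman.socket
podman --remote info # talks to the API socket instead of spawning its own service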

Network domain

If you require DNS resolution from your VMs to return correct FQDNs, add the domain to your libvirt network definition:

<network>
  ...
  <domain name='test'/>
  ...
</network>

By default, Virter uses the libvirt network named default.
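One way to add the domain element to that default network is via standard virsh commands (a sketch; the change only takes effect once the network is restarted):

virsh net-edit default # add the <domain name='test'/> element
virsh net-destroy default # stop the network
virsh net-start default # start it again with the updated definition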

Check out doc/networks.md for more details on VM networking.

Connecting with ssh

SSH can be configured for convenient access to virtual machines created by Virter. See doc/ssh.md for details.

DHCP Leases

Libvirt can behave unexpectedly when MAC or IP addresses are reused while there is still an active DHCP lease for them. This can result in a new VM being assigned a random IP instead of the IP corresponding to its ID.

To work around this, Virter tries to execute the dhcp_release utility in order to release the DHCP lease from libvirt's DHCP server when a VM is removed. This utility has to be run by the root user, so Virter executes it using sudo.

If execution fails (for example because the utility is not installed or the sudo rules are not set up correctly), the error is ignored by Virter.

So, to make Virter work more reliably, especially when you are running lots of VMs in a short amount of time, you should install the dhcp_release utility (usually packaged as dnsmasq-utils). Additionally, you should make sure that your user can run dhcp_release as root, for example by using a sudo rule like this:

%libvirt ALL=(ALL) NOPASSWD: /usr/bin/dhcp_release

This allows all users in the group libvirt to run the dhcp_release utility without being prompted for a password.
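As noted above, dhcp_release usually ships in the dnsmasq-utils package; installing it might look like this (package names and repositories can differ between distributions):

sudo dnf install dnsmasq-utils # RHEL/AlmaLinux/Fedora
sudo apt install dnsmasq-utils # Debian/Ubuntu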

Console logs

The --console argument to virter vm run causes serial output from the VM to be saved to a file. This file is created with the current user as the owner. However, it is written to by libvirt, so it needs to be located on a filesystem to which libvirt can write. NFS mounts generally cannot be used due to root_squash.

In addition, when the VM generates a lot of output, this can trigger virtlogd to roll the log file over, which creates a file owned by root (assuming virtlogd is running as root). To prevent this, increase max_size in /etc/libvirt/virtlogd.conf.
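For example, raising the rollover threshold to roughly 10 MiB could look like this in /etc/libvirt/virtlogd.conf (the value is in bytes and purely illustrative):

max_size = 10485760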

libvirt storage pool

Virter requires a libvirt storage pool for its images and VM volumes. By default, it expects a pool named default. Some libvirt distributions configure this pool automatically. Some, such as the standard packages on Ubuntu Focal, do not. If the pool does not exist, create it like this:

virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
virsh pool-autostart default
virsh pool-start default
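Whether the pool exists and is running can then be checked with standard virsh commands:

virsh pool-list --all
virsh pool-info default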

AppArmor

On some distributions, AppArmor denies access to /var/lib/libvirt/images by default. This leads to messages in dmesg along the lines of:

[ 4274.237593] audit: type=1400 audit(1598348576.161:102): apparmor="DENIED" operation="open" profile="libvirt-d84ef9d7-a7ad-4388-bd5d-cfc3a3db28a6" name="/var/lib/libvirt/images/centos-8" pid=14918 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=64055 ouid=64055

This can be circumvented by overriding the AppArmor abstraction for that directory:

cat <<EOF >> /etc/apparmor.d/local/abstractions/libvirt-qemu
/var/lib/libvirt/images/* rwk,
# required for QEMU accessing UEFI nvram variables
/usr/share/OVMF/* rk,
owner /var/lib/libvirt/qemu/nvram/*_VARS.fd rwk,
owner /var/lib/libvirt/qemu/nvram/*_VARS.ms.fd rwk,
EOF
systemctl restart apparmor.service
systemctl restart libvirtd.service

Architecture

Virter connects to the libvirt daemon for all the heavy lifting. It supplies bootstrap configuration to the VMs using cloud-init volumes, so that the hostname is set and SSH access is possible.

Building from source

If you want to test the latest unstable version of Virter, you can build the git version from source:

git clone https://github.com/LINBIT/virter
cd virter
go build .

Comparison to other tools

virsh

Virter is good for starting and cloning cloud-init based VMs. virsh is useful for more detailed libvirt management. They work well together.

virt-install

virt-install is built for images that use conventional installers. Virter uses cloud-init, making it simpler to use and quicker to start a fresh VM.

Running VMs in AWS/GCP/OpenNebula

Virter is local to a single host, making snapshot/restore/clone operations very efficient. Virter could be thought of as cloud provisioning for your local machine.

Vagrant

Virter and Vagrant have essentially the same goal. Virter is more tightly integrated with the Linux virtualization stack, resulting in better snapshot/restore/clone support.

Multipass

Virter and Multipass have similar goals, but Multipass is Ubuntu specific.

Docker

Virter is like Docker for VMs. The user experience of the tools is generally similar. Docker containers share the host kernel, whereas Virter starts VMs with their own kernel.

Kata Containers

Virter starts VMs running a variety of Linux distributions, whereas Kata Containers uses a specific guest that then runs containers.

Weave Ignite

Ignite has very strong requirements on the guest, so it cannot be used for running standard distributions.

Development

Virter is a standard Go project using modules. Go 1.13+ is supported.

More Repositories

1. linstor-server: High Performance Software-Defined Block Storage for container, cloud and virtualisation. Fully integrated with Docker, Kubernetes, Openstack, Proxmox etc. (Java, 972 stars)
2. drbd: LINBIT DRBD kernel module (C, 577 stars)
3. csync2: file synchronization tool using librsync and current state databases (C, 145 stars)
4. drbd-utils: DRBD userspace utilities (for 9.x, 8.4, 8.3) (C, 79 stars)
5. windrbd: DRBD driver for Windows (C, 51 stars)
6. drbdtop: CLI management tool for DRBD. Like top, but for DRBD resources. (Go, 48 stars)
7. k8s-await-election: Start the main process of a pod only if elected via Kubernetes leader election. While this was developed for LINSTOR, it may prove useful for other use cases. (Go, 47 stars)
8. linbit-documentation: Official DRBD documentation (Makefile, 36 stars)
9. thin-send-recv: zfs send and zfs recv alike for the LVM thin world (C, 35 stars)
10. drbd-reactor: Monitors DRBD resources via plugins (Rust, 31 stars)
11. linstor-proxmox: Integration plugin bridging LINSTOR to Proxmox VE (Perl, 31 stars)
12. linstor-gateway: Manages Highly-Available iSCSI targets, NVMe-oF targets, and NFS exports via LINSTOR (Go, 29 stars)
13. linstor-client: Python client for LINSTOR (Python, 22 stars)
14. drbd-8.4: LINBIT DRBD-8.4 (deprecated; use https://github.com/LINBIT/drbd instead) (C, 18 stars)
15. linstor-gui: HTML5 GUI frontend for LINSTOR (TypeScript, 17 stars)
16. linstor-api-py: LINSTOR Python API (Python, 15 stars)
17. drbdmanage: Management system for DRBD9 (Python, 13 stars)
18. linstor-flexvolume (Go, 12 stars)
19. linstor-external-provisioner (Go, 11 stars)
20. golinstor: golang bindings for LINSTOR (Go, 9 stars)
21. openstack-cinder: Openstack Cinder, with DRBD driver included (Python, 8 stars)
22. linstor-docker-volume-go: A Go based version of the LINSTOR Docker volume plugin (Go, 8 stars)
23. linstor-ansible: Ansible playbook for quickly deploying a LINSTOR storage cluster (Jinja, 8 stars)
24. drbd-flexvolume: DRBD flexvolume plugin for Kubernetes (Go, 8 stars)
25. vmshed: schedules tests with virter (Go, 7 stars)
26. addon-linstor: OpenNebula driver to manage and access storage managed by LINSTOR (Python, 7 stars)
27. drbd-flex-provision: Kubernetes external provisioner for DRBD (Go, 7 stars)
28. linstor-common: Code shared between LINSTOR client and LINSTOR server (Python, 7 stars)
29. drbdmanage-proxmox: Integration plugin bridging DRBD Manage to Proxmox VE (Perl, 6 stars)
30. drbdmanage-docker-volume: Docker volume plugin to manage DRBD SDS volumes through Docker (Python, 6 stars)
31. linstor-operator-builder: Builds the linstor-operator from the piraeus-operator (Smarty, 5 stars)
32. gosshclient: A higher level SSH client in Go that allows for interactive SSH sessions and executing scripts. This started in Virter, but is a useful package on its own. (Go, 5 stars)
33. cloud-init-for-windows: A minimal version of cloud-init for Windows (Shell, 5 stars)
34. saas: Spatch As A Service (Go, 4 stars)
35. lbtest: Execute tests efficiently and concurrently in many VMs (Shell, 3 stars)
36. charmed-linstor: Juju charm for deploying LINBIT SDS / Piraeus-Datastore on a Kubernetes cluster (Python, 3 stars)
37. generate-cat-file (C, 3 stars)
38. libdrbd-perl: A Perl library that allows interaction with DRBD resources and their objects (volumes, connections, options). It allows for resource file generation and provides wrappers around low level DRBD commands (drbdadm, drbdsetup, drbdmeta). (Perl, 3 stars)
39. linstor-docker-volume: Docker volume plugin for LINSTOR (Python, 2 stars)
40. gocorosync: Go bindings to interact with Corosync (Go, 2 stars)
41. prestomanifesto: create multi architecture Docker registries (Go, 2 stars)
42. containerapi: Go bindings to manage containers in a runtime agnostic way (Docker, podman) (Go, 2 stars)
43. drbd-headers: DRBD headers used by userspace utils and all kernel components (C, 2 stars)
44. linstor-wait-until (Go, 2 stars)
45. drbd-8.3: LINBIT DRBD-8.3 (historical) (C, 2 stars)
46. drbd-8.0: LINBIT DRBD-8.0 (historical) (C, 2 stars)
47. vagrant-cluster: Vagrant and shell scripts that ease DRBD9/drbdmanage testing (Shell, 2 stars)
48. talks: public talks such as webinars and conference slides (TSQL, 2 stars)
49. linstor-csi-builder: Builds LINBIT's version of the LINSTOR CSI driver (Makefile, 2 stars)
50. dnfjson: A wrapper around libdnf that produces JSON output (Python, 1 star)
51. drbd-kernel-compat: DRBD kernel backwards compatibility tests and wrappers (C, 1 star)
52. wdrbd9: DRBD driver for Windows (1 star)
53. drbd-0.7: LINBIT DRBD-0.7 (historical) (C, 1 star)
54. Root-on-DRBD: Example scripts for the Root-on-DRBD TechGuide (Shell, 1 star)
55. libtcr: TCR library (C, 1 star)
56. agentx-rs: RFC conformant AgentX library implementing all PDU types (Rust, 1 star)
57. bestdrbdmodule: web service to find the best matching kernel module for RHEL7+ distributions (Go, 1 star)
58. linstor-api-java: LINSTOR Java API (Java, 1 star)