Libvirt VM

This role configures and creates (or destroys) VMs on a KVM hypervisor.

Requirements

The host should have Virtualization Technology (VT) enabled and should be preconfigured with libvirt/KVM.

Role Variables

  • libvirt_vm_default_console_log_dir: The default directory in which to store VM console logs, if a VM-specific log file path is not given. Default is "/var/log/libvirt-consoles". (A sketch showing how these role-level defaults can be overridden appears after this list.)

  • libvirt_vm_default_uuid_deterministic: Whether UUID should be calculated by hashing the VM name. If not, the UUID is randomly generated by libvirt when the VM is defined. Default is False.

  • libvirt_vm_image_cache_path: The directory in which to cache downloaded images. Default is "/tmp/".

  • libvirt_volume_default_images_path: Directory in which instance images are stored. Default is '/var/lib/libvirt/images'.

  • libvirt_volume_default_type: The type of backing volume instances use. Default is volume. Options include block, file, network and volume.

  • libvirt_volume_default_format: Format for volumes created by the role. Default is qcow2. Options include raw, qcow2, vmdk. See man virsh for the full range.

  • libvirt_volume_default_device: Controls how the device appears in the guest OS. Default is disk. Options include cdrom and disk.

  • libvirt_vm_engine: Virtualisation engine. If not set, the role will attempt to auto-detect the optimal engine to use.

  • libvirt_vm_emulator: Path to the emulator binary. If not set, the role will attempt to auto-detect the correct emulator to use.

  • libvirt_cpu_mode_default: The default CPU mode if libvirt_cpu_mode or vm.cpu_mode is undefined.

  • libvirt_vm_arch: CPU architecture. Default is x86_64.

  • libvirt_vm_uri: Override the libvirt connection URI. See the libvirt docs for more details.

  • libvirt_vm_virsh_default_env: Variables contained within this dictionary are added to the environment used when executing virsh commands.

  • libvirt_vm_clock_offset: If defined, the instance's clock offset is set to the provided value. When undefined, sync is set to localtime.

  • libvirt_vm_trust_guest_rx_filters: Whether to trust guest receive filters. This gets mapped to the trustGuestRxFilters attribute of VM interfaces. Default is false.

  • libvirt_vms: list of VMs to be created/destroyed. Each one may have the following attributes:

    • state: set to present to create or absent to destroy the VM. Defaults to present.

    • name: the name to assign to the VM.

    • uuid: the UUID to manually assign to the VM. If specified, neither uuid_deterministic nor libvirt_vm_default_uuid_deterministic is used.

    • uuid_deterministic: overrides the default set in libvirt_vm_default_uuid_deterministic.

    • memory_mb: the memory to assign to the VM, in megabytes.

    • vcpus: the number of VCPU cores to assign to the VM.

    • machine: Virtual machine type. Default is None if libvirt_vm_engine is kvm, otherwise pc-1.0.

    • cpu_mode: Virtual machine CPU mode. Default is host-passthrough if libvirt_vm_engine is kvm, otherwise host-model. Can be set to none to not configure a cpu mode.

    • clock_offset: Overrides the default set in libvirt_vm_clock_offset.

    • enable_vnc: If true, enables VNC listening on localhost for use with VirtManager and similar tools.

    • enable_spice: If true, enables SPICE listening for use with Virtual Machine Manager and similar tools.

    • volumes: a list of volumes to attach to the VM. Each volume is defined with the following dict:

      • type: The type of backing volume the instance uses. All options for libvirt_volume_default_type are valid here. Default is libvirt_volume_default_type.
      • pool: Name or UUID of the storage pool from which the volume should be allocated. Required when type is volume.
      • name: Name to associate with the volume being created; for file type volumes, include an extension if you would like volumes created with one.
      • file_path: Where the image of file type volumes should be placed. Defaults to libvirt_volume_default_images_path.
      • device: Controls how the device appears in the guest OS. All options for libvirt_volume_default_device are valid here. Default is libvirt_volume_default_device.
      • capacity: Volume capacity. Can be suffixed with k, M, G, T, P or E when type is network, or MB, GB, TB, etc. when type is disk. Required when type is disk or network.
      • auth: Authentication details, should they be required. If auth is required, username, type, and uuid or usage will need to be supplied. uuid and usage should not both be supplied.
      • source: Where the remote volume comes from when type is network. protocol, name and hosts_list should be supplied. port is optional.
      • format: Format of the volume. All options for libvirt_volume_default_format are valid here. Default is libvirt_volume_default_format.
      • image: (optional) a URL to an image with which the volume is initialised (full copy).
      • checksum: (optional) checksum of the image to avoid download when it's not necessary.
      • backing_image: (optional) name of the backing volume, which is assumed to already be in the same pool (copy-on-write).
      • image and backing_image are mutually exclusive options.
      • target: (optional) Manually influence the type and order of volumes.
      • dev: (optional) Block device path when type is block.
      • remote_src: (optional) When type is file or block, specifies whether image points to a remote file (true) or a file local to the host that launched the playbook (false). Defaults to true.
    • usb_devices: a list of USB devices to present to the VM from the host.

      Each usb device is defined with the following dict:

      • vendor: The vendor id of the USB device.
      • product: The product id of the USB device.

      Note: libvirt will raise an error if the VM is provisioned while the USB device is not attached.

      To obtain the vendor ID and product ID of a USB device, run lsusb -v as root (or via sudo) on the host with the device plugged in. The example below shows an attached SanDisk USB memory stick with vendor ID 0x0781 and product ID 0x5567:

      lsusb -v | grep -A4 -i sandisk
      
        idVendor           0x0781 SanDisk Corp.
        idProduct          0x5567 Cruzer Blade
        bcdDevice            1.00
        iManufacturer           1 
        iProduct                2 
      
    • interfaces: a list of network interfaces to attach to the VM. Each network interface is defined with the following dict:

      • type: The type of the interface. Possible values:

        • network: Attaches the interface to a named Libvirt virtual network. This is the default value.
        • direct: Directly attaches the interface to one of the host's physical interfaces, using the macvtap driver.
      • network: Name of the network to which an interface should be attached. Must be specified if and only if the interface type is network.

      • mac: "Hardware" address of the virtual instance; if absent, one is created.

      • source: A dict defining the host interface to which this VM interface should be attached. Must be specified if and only if the interface type is direct. Includes the following attributes:

        • dev: The name of the host interface to which this VM interface should be attached.
        • mode: options include vepa, bridge, private and passthrough. See man virsh for more details. Default is vepa.
      • trust_guest_rx_filters: Whether to trust guest receive filters. This gets mapped to the trustGuestRxFilters attribute of VM interfaces. Default is libvirt_vm_trust_guest_rx_filters.

      • model: The name of the interface model, e.g. e1000 or ne2k_pci; if undefined, it defaults to virtio.

      • alias: An optional interface alias. This can be used to tie specific network configuration to persistent network devices via name. The user-defined alias is always prefixed with ua- to be compliant (aliases without the ua- prefix are ignored by libvirt). If undefined, it defaults to the libvirt-managed vnetX.

    • console_log_enabled: if true, log console output to a file at the path specified by console_log_path, instead of to a PTY. If false, direct terminal output to a PTY at serial port 0. Default is false.

    • console_log_path: Path to console log file. Default is {{ libvirt_vm_default_console_log_dir }}/{{ name }}-console.log.

    • start: Whether to immediately start the VM after defining it. Default is true.

    • autostart: Whether to start the VM when the host starts up. Default is true.

    • boot_firmware: Can be one of bios or efi. Defaults to bios.

    • xml_file: Optionally supply a modified XML template. Base customisations on the default vm.xml.j2 template so that the Jinja expressions the role expects are included.
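
The role-level defaults above can be overridden wherever role variables are normally set, for example in group_vars or on role inclusion. A minimal sketch, with illustrative values (the log directory, images path and VM definition are hypothetical):

---
- name: Create VMs with overridden role defaults
  hosts: hypervisor
  vars:
    libvirt_vm_default_console_log_dir: '/var/log/vm-consoles'  # hypothetical path
    libvirt_volume_default_images_path: '/data/libvirt/images'  # hypothetical path
    libvirt_vm_clock_offset: 'utc'
  roles:
    - role: stackhpc.libvirt-vm
      libvirt_vms:
        - name: 'vm0'  # hypothetical VM
          memory_mb: 512
          vcpus: 1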

N.B. the following variables are deprecated: libvirt_vm_state, libvirt_vm_name, libvirt_vm_memory_mb, libvirt_vm_vcpus, libvirt_vm_engine, libvirt_vm_machine, libvirt_vm_cpu_mode, libvirt_vm_volumes, libvirt_vm_interfaces and libvirt_vm_console_log_path. If the variable libvirt_vms is left unset, its default value will be a singleton list containing a VM specification using these deprecated variables.
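
For reference, a play using the deprecated single-VM variables would look roughly like the sketch below; the values are illustrative. New configurations should define libvirt_vms instead.

---
- name: Create a VM using the deprecated variables
  hosts: hypervisor
  roles:
    - role: stackhpc.libvirt-vm
      libvirt_vm_state: present
      libvirt_vm_name: 'legacy-vm'
      libvirt_vm_memory_mb: 512
      libvirt_vm_vcpus: 1
      libvirt_vm_volumes:
        - name: 'legacy-data'
          device: 'disk'
          format: 'qcow2'
          capacity: '10GB'
          pool: 'default'  # hypothetical pool name
      libvirt_vm_interfaces:
        - network: 'default'  # hypothetical libvirt network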

Dependencies

If using qcow2 format drives, qemu-img (from the qemu-utils package) is required.
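
On hosts where it is missing, the dependency can be installed before applying the role; a minimal sketch assuming a Debian-family hypervisor (substitute your distribution's own package manager and package name elsewhere):

---
- name: Install role dependencies
  hosts: hypervisor
  become: true
  tasks:
    - name: Ensure qemu-img is available
      ansible.builtin.apt:
        name: qemu-utils
        state: present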

Example Playbook

---
- name: Create VMs
  hosts: hypervisor
  roles:
    - role: stackhpc.libvirt-vm
      libvirt_vms:
        - state: present
          name: 'vm1'
          memory_mb: 512
          vcpus: 2
          volumes:
            - name: 'data1'
              device: 'disk'
              format: 'qcow2'
              capacity: '400GB'
              pool: 'my-pool'
            - name: 'debian-10.2.0-amd64-netinst.iso'
              type: 'file'
              device: 'cdrom'
              format: 'raw'
              target: 'hda'  # first device on ide bus
            - name: 'networkfs'
              type: 'network'
              format: 'raw'
              capacity: '50G'
              auth:
                username: 'admin'
                type: 'ceph'
                usage: 'rbd-pool'
              source:
                protocol: 'rbd'
                name: 'rbd/volume'
                hosts_list:
                  - 'mon1.example.org'
                  - 'mon2.example.org'
                  - 'mon3.example.org'
            - type: 'block'
              format: 'raw'
              dev: '/dev/sda'

          interfaces:
            - network: 'br-datacentre'
          
          usb_devices:
            - vendor: '0x0781'
              product: '0x5567'

        - state: present
          name: 'vm2'
          memory_mb: 1024
          vcpus: 1
          volumes:
            - name: 'data2'
              device: 'disk'
              format: 'qcow2'
              capacity: '200GB'
              pool: 'my-pool'
            - name: 'filestore'
              type: 'file'
              file_path: '/srv/cloud/images'
              capacity: '900GB'
          interfaces:
            - type: 'direct'
              source:
                dev: 'eth123'
                mode: 'private'
            - type: 'bridge'
              source:
                dev: 'br-datacentre'
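
To remove VMs defined this way, reapply the same specification with state set to absent; a minimal sketch reusing vm1 from the example above:

---
- name: Destroy VMs
  hosts: hypervisor
  roles:
    - role: stackhpc.libvirt-vm
      libvirt_vms:
        - state: absent
          name: 'vm1'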
