framework for emulating devices in userspace

libvfio-user

vfio-user is a framework that allows implementing PCI devices in userspace. Clients (such as qemu) talk the vfio-user protocol over a UNIX socket to a server. This library, libvfio-user, provides an API for implementing such servers.

vfio-user example block diagram

VFIO is a kernel facility for providing secure access to PCI devices in userspace (including pass-through to a VM). With vfio-user, instead of talking to the kernel, all interactions are done in userspace, without requiring any kernel component; the kernel VFIO implementation is not used at all for a vfio-user device.

Put another way, vfio-user is to VFIO as vhost-user is to vhost.

The vfio-user protocol is intentionally modelled after the VFIO ioctl() interface, and shares many of its definitions. However, there is not an exact equivalence: for example, IOMMU groups are not represented in vfio-user.

There are many different purposes you might put this library to, such as prototyping novel devices, testing frameworks, implementing alternatives to qemu's device emulation, adapting a device class to work over a network, and so on.

The library abstracts most of the complexity around representing the device. Applications using libvfio-user provide a description of the device (e.g. region and IRQ information) and a set of callbacks which are invoked by libvfio-user when those regions are accessed.
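As a rough sketch of how this fits together (based on the in-tree samples; prototypes may differ between versions, so the header libvfio-user.h is authoritative, and the socket path and BAR size below are arbitrary), a minimal server creates a context, describes a BAR with an access callback, and then lets the library service requests:

#define _GNU_SOURCE
#include <err.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <linux/pci_regs.h>

#include "libvfio-user.h"

#define BAR0_SIZE 4096

static char bar0[BAR0_SIZE];            /* device register backing store */

/* Invoked by libvfio-user whenever the client reads or writes BAR0. */
static ssize_t
bar0_access(vfu_ctx_t *vfu_ctx, char *buf, size_t count,
            loff_t offset, bool is_write)
{
    if (offset < 0 || (size_t)offset + count > sizeof(bar0)) {
        return -1;
    }
    if (is_write) {
        memcpy(bar0 + offset, buf, count);
    } else {
        memcpy(buf, bar0 + offset, count);
    }
    return count;
}

int main(void)
{
    vfu_ctx_t *ctx = vfu_create_ctx(VFU_TRANS_SOCK, "/tmp/my-device.sock",
                                    0, NULL, VFU_DEV_TYPE_PCI);

    if (ctx == NULL ||
        vfu_pci_init(ctx, VFU_PCI_TYPE_CONVENTIONAL,
                     PCI_HEADER_TYPE_NORMAL, 0) < 0 ||
        vfu_setup_region(ctx, VFU_PCI_DEV_BAR0_REGION_IDX, BAR0_SIZE,
                         &bar0_access, VFU_REGION_FLAG_RW, NULL, 0, -1, 0) < 0 ||
        vfu_realize_ctx(ctx) < 0 ||
        vfu_attach_ctx(ctx) < 0) {
        errx(EXIT_FAILURE, "failed to set up vfio-user device");
    }

    /* Service vfio-user requests until the client disconnects;
     * region accesses end up in bar0_access(). */
    while (vfu_run_ctx(ctx) >= 0) {
        ;
    }
    vfu_destroy_ctx(ctx);
    return 0;
}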

Memory Mapping the Device

The device driver can allow parts of the virtual device to be memory mapped by the virtual machine (e.g. the PCI BARs). The business logic needs to implement the mmap callback and reply to the request with the memory address whose backing pages are then used to satisfy the original mmap() call; more details here.
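For illustration only (assuming Linux memfd_create() for the backing pages; the vfu_setup_region() parameter order and the sparse-area encoding below follow the in-tree samples, so check libvfio-user.h for the authoritative prototype), a server could make a BAR mappable by handing the library a file descriptor together with the mappable areas within the region:

#define _GNU_SOURCE
#include <stdbool.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

#include "libvfio-user.h"

#define BAR1_SIZE (64 * 1024)

/* Callback for BAR1 accesses that arrive as explicit vfio-user messages
 * rather than through the client's direct mapping. */
static ssize_t
bar1_access(vfu_ctx_t *vfu_ctx, char *buf, size_t count,
            loff_t offset, bool is_write)
{
    /* A real device would read/write its backing memory here. */
    return count;
}

static int
setup_mappable_bar1(vfu_ctx_t *ctx)
{
    /* Anonymous memory backing BAR1; these pages end up satisfying the
     * client's mmap() of the region. */
    int fd = memfd_create("bar1", 0);

    if (fd < 0 || ftruncate(fd, BAR1_SIZE) < 0) {
        return -1;
    }

    /* Advertise the whole region as one mappable (sparse) area:
     * iov_base is the offset within the region, iov_len its length. */
    struct iovec mmap_areas[] = {
        { .iov_base = (void *)0, .iov_len = BAR1_SIZE },
    };

    return vfu_setup_region(ctx, VFU_PCI_DEV_BAR1_REGION_IDX, BAR1_SIZE,
                            &bar1_access, VFU_REGION_FLAG_RW,
                            mmap_areas, 1, fd, 0);
}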

Interrupts

Interrupts are implemented via eventfds passed from the client and registered with the library. libvfio-user consumers can then trigger interrupts by writing to the eventfd.
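A minimal sketch (assuming the usual library helpers; vfu_setup_device_nr_irqs() declares the IRQs at setup time and vfu_irq_trigger() performs the eventfd write for you):

#include "libvfio-user.h"

/* At setup time: declare that the device exposes a single INTx line.
 * The client later registers an eventfd for it over the protocol. */
static int
setup_irqs(vfu_ctx_t *ctx)
{
    return vfu_setup_device_nr_irqs(ctx, VFU_DEV_INTX_IRQ, 1);
}

/* At runtime: raise the interrupt. Under the hood this is an 8-byte
 * write of a non-zero value to the eventfd the client registered. */
static int
raise_intx(vfu_ctx_t *ctx)
{
    return vfu_irq_trigger(ctx, 0);     /* subindex 0: the only INTx line */
}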

Building libvfio-user

Build requirements:

  • meson (v0.53.0 or above)
  • apt install libjson-c-dev libcmocka-dev or
  • yum install json-c-devel libcmocka-devel

The kernel headers are necessary because VFIO structs and defines are reused.

To build:

meson build
ninja -C build

Finally, build your program and link it with libvfio-user.so.
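For example, assuming the library and header have been installed somewhere your toolchain searches (e.g. via ninja -C build install), and that a hypothetical my_server.c uses the API sketched above:

cc -o my_server my_server.c -lvfio-user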

Supported features

With the client support found in cloud-hypervisor or the in-development qemu support, most guest VM use cases will work. See below for some details on how to try this out.

However, guests with an IOMMU (vIOMMU) will not currently work: the number of DMA regions is strictly limited, and there are also issues with some server implementations such as SPDK's virtual NVMe controller.

Currently, libvfio-user has explicit support for PCI devices only. In addition, only PCI endpoints are supported (no bridges etc.).

API

The API is currently documented via the libvfio-user header file, along with some additional documentation.

The library (and the protocol) are actively under development, and should not yet be considered a stable API or interface.

The API is not thread safe, but individual vfu_ctx_t handles can be used separately by each thread: that is, there is no global library state.
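For instance (a hypothetical sketch, reusing the setup calls from the example above; the socket paths are arbitrary), each thread can own and run its own context:

#include <pthread.h>

#include "libvfio-user.h"

/* Each thread creates, attaches and runs its own vfu_ctx_t; nothing is
 * shared across threads, matching the library's thread-safety model. */
static void *
serve_one_device(void *arg)
{
    const char *sock_path = arg;        /* one UNIX socket per device */

    vfu_ctx_t *ctx = vfu_create_ctx(VFU_TRANS_SOCK, sock_path, 0,
                                     NULL, VFU_DEV_TYPE_PCI);
    if (ctx == NULL) {
        return NULL;
    }
    /* ... vfu_pci_init(), vfu_setup_region(), vfu_realize_ctx() as above ... */
    if (vfu_attach_ctx(ctx) == 0) {
        while (vfu_run_ctx(ctx) >= 0) {
            ;
        }
    }
    vfu_destroy_ctx(ctx);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    /* Two devices, two contexts, two threads: no shared library state. */
    pthread_create(&t0, NULL, serve_one_device, "/tmp/dev0.sock");
    pthread_create(&t1, NULL, serve_one_device, "/tmp/dev1.sock");
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}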

Mailing List & Chat

libvfio-user development is discussed on the libvfio-user-devel mailing list. Subscribe here: https://lists.gnu.org/mailman/listinfo/libvfio-user-devel.

We are on Slack at libvfio-user.slack.com (invite link), and on IRC in #qemu on OFTC.

Contributing

Contributions are welcome; please file an issue or open a PR. Anything substantial is worth discussing with us first.

Please make sure to mark any commits with Signed-off-by (git commit -s), which signals agreement with the Developer Certificate of Origin v1.1.

Running make pre-push will run the same checks as GitHub CI. After merging, a Coverity scan is also run.

See Testing for details on how the library is tested.

Examples

The samples directory contains various libvfio-user examples.

lspci

lspci implements an example of how to dump the PCI header of a libvfio-user device and examine it with lspci(8):

# lspci -vv -F <(build/samples/lspci)
00:00.0 Non-VGA unclassified device: Device 0000:0000
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Region 0: I/O ports at <unassigned> [disabled]
        Region 1: I/O ports at <unassigned> [disabled]
        Region 2: I/O ports at <unassigned> [disabled]
        Region 3: I/O ports at <unassigned> [disabled]
        Region 4: I/O ports at <unassigned> [disabled]
        Region 5: I/O ports at <unassigned> [disabled]
        Capabilities: [40] Power Management version 0
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-

The above sample implements a very simple PCI device that supports the Power Management PCI capability. The sample can be trivially modified to change the PCI configuration space header and add more PCI capabilities.

Client/Server Implementation

The client/server sample implements a basic client/server model in which a few simple tasks are performed.

The server implements a device that can be programmed to trigger interrupts (INTx) to the client. This is done by writing the desired time in seconds since Epoch to BAR0. The server then triggers an eventfd-based IRQ and then a message-based one (in order to demonstrate how it's done when passing of file descriptors isn't possible/desirable). The device also works as memory storage: BAR1 can be freely written to/read from by the host.

Since this is a completely made-up device, there's no kernel driver (yet). The client sample implements a client that knows how to drive this particular device (a role that would normally be filled by QEMU + guest VM + kernel driver).

The client exercises all commands in the vfio-user protocol, and then proceeds to perform live migration. The client spawns the destination server (this would be normally done by libvirt) and then migrates the device state, before switching entirely to the destination server. We re-use the source client instead of spawning a destination one as this is something libvirt/QEMU would normally do.

To spice things up, the client programs the source server to trigger an interrupt and then migrates to the destination server; the programmed interrupt is delivered by the destination server. Also, while the device is being live migrated, the client spawns a thread that constantly writes to BAR1 in a tight loop. This thread emulates the guest VM accessing the device while the main thread (what would normally be QEMU) is driving the migration.

Start the source server as follows (pick whatever you like for /tmp/vfio-user.sock):

rm -f /tmp/vfio-user.sock* ; build/samples/server -v /tmp/vfio-user.sock

And then the client:

build/samples/client /tmp/vfio-user.sock

After a couple of seconds the client will start live migration. The source server will exit and the destination server will start; watch the client terminal for the destination server's messages.

gpio

A gpio server implements a very simple GPIO device that can be used with a Linux VM.

Start the gpio server process:

rm /tmp/vfio-user.sock
./build/samples/gpio-pci-idio-16 -v /tmp/vfio-user.sock &

Next, build qemu and start a VM, as described below.

Log in to your guest VM. You'll probably need to build the gpio-pci-idio-16 kernel module yourself - it's part of the standard Linux kernel, but not usually built and shipped on x86.

Once built, you should be able to load the module and observe the emulated GPIO device's pins:

insmod gpio-pci-idio-16.ko
cat /sys/class/gpio/gpiochip480/base > /sys/class/gpio/export
for ((i=0;i<12;i++)); do cat /sys/class/gpio/OUT0/value; done

shadow_ioeventfd_server

shadow_ioeventfd_server.c and shadow_ioeventfd_speed_test.c are used to demonstrate the benefits of shadow ioeventfds; see ioregionfd for more information.

Other usage notes

qemu

vfio-user client support is not yet merged into qemu. Instead, download and build this branch of qemu.

Create a Linux install image, or use a pre-made one.

Then, presuming you have a libvfio-user server listening on the UNIX socket /tmp/vfio-user.sock, you can start your guest VM with something like this:

./x86_64-softmmu/qemu-system-x86_64 -mem-prealloc -m 256 \
    -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/gpio,share=yes,size=256M \
    -numa node,memdev=ram-node0 \
    -kernel ~/vmlinuz -initrd ~/initrd -nographic \
    -append "console=ttyS0 root=/dev/sda1 single" \
    -hda ~/bionic-server-cloudimg-amd64-0.raw \
    -device vfio-user-pci,socket=/tmp/vfio-user.sock

SPDK

SPDK uses libvfio-user to implement a virtual NVMe controller: see docs/spdk.md for more details.

libvirt

You can configure vfio-user devices in a libvirt domain configuration:

  1. Add xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' to the domain element.

  2. Enable sharing of the guest's RAM:

<memoryBacking>
  <source type='file'/>
  <access mode='shared'/>
</memoryBacking>

  3. Pass the vfio-user device:

<qemu:commandline>
  <qemu:arg value='-device'/>
  <qemu:arg value='vfio-user-pci,socket=/var/run/vfio-user.sock,x-enable-migration=on'/>
</qemu:commandline>

History

This project was formerly known as "muser", short for "Mediated Userspace Device". It implemented a proof-of-concept VFIO mediated device in userspace. Normally, VFIO mdev devices require a kernel module; muser implemented a small kernel module that forwarded onto userspace. The old kernel-module-based implementation can be found in the kmod branch.
