GPU Passthrough from Arch Linux
Issues
If you have any issues with these steps, please hit me up and I will try to fix them!
Combines these sources:
- https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
- https://passthroughpo.st/quick-dirty-arch-passthrough-guide/
When to do this:
When you want to play Windows 10 video games from your Arch box because:
- wangblows and you have access to a Windows 10 iso
- you don't want to read mountains of text because you just want to play gaems.
Required downloads:
- a Windows 10 installation iso
Link: here
Direct Download: here
- virtio drivers for Windows 10
Link: here
Direct Download: here
Disclaimer:
Most of this stuff is in the Arch Linux guide linked at the top; read more of that if any of this is confusing or something goes terribly wrong. This is my rig:
PCI passthrough via OVMF (GPU)
Initialization
- Make sure that you have already enabled IOMMU via AMD-Vi or Intel VT-d in your motherboard's BIOS. Hit F10, Del, or whatever the key is for your motherboard during BIOS initialization at the beginning of startup, then enable either VT-d if you have an Intel CPU or AMD-Vi if you have an AMD CPU.
- edit /etc/default/grub and add intel_iommu=on (Intel) or amd_iommu=on (AMD) to GRUB_CMDLINE_LINUX_DEFAULT
$ sudo nvim /etc/default/grub
For Intel:
GRUB_CMDLINE_LINUX_DEFAULT="quiet ... intel_iommu=on"
For AMD:
GRUB_CMDLINE_LINUX_DEFAULT="quiet ... amd_iommu=on"
- re-configure your grub:
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
- reboot
$ sudo reboot now
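After the reboot, confirm that the kernel actually enabled the IOMMU before moving on (the exact message wording varies between kernels and between Intel and AMD):
$ sudo dmesg | grep -i -e DMAR -e IOMMU
You should see lines indicating that an IOMMU was found and enabled; if not, re-check the BIOS setting and the kernel parameter above.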
Isolating the GPU
One of the first things you will want to do is isolate your GPU. The goal is to prevent the Linux kernel from loading drivers that would take control of the GPU. Because of this, it is necessary to have two GPUs installed and functional in your system: one will be used for interacting with your Linux host (just like normal), and the other will be passed through to your Windows guest. In the past, this had to be achieved using a driver called pci-stub. While that is still possible, it is older and holds no advantage over its successor, vfio-pci.
- find the device IDs of the GPU that will be passed through by running lspci
$ lspci -nn
and look through the output until you find your desired GPU and its audio device; in this case they are:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 980] [10de:13c0] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
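Before binding anything, it's also worth confirming that the GPU and its audio function live in their own IOMMU group. A small script to list the groups, adapted from the Arch wiki linked at the top:
#!/bin/bash
# list every IOMMU group and the PCI devices inside it
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
If the GPU shares its group with anything other than its own functions (or a PCI bridge), passthrough gets more involved; see the IOMMU groups section of the Arch wiki before continuing.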
Configuring vfio-pci and Regenerating your Initramfs
Next, we need to instruct vfio-pci to target the device in question through the ID numbers gathered above.
- edit the /etc/modprobe.d/vfio.conf file (create it if it doesn't exist) and add the following line with your IDs from the step above:
options vfio-pci ids=10de:13c0,10de:0fbb
Next, we will need to ensure that vfio-pci is loaded before other graphics drivers.
- edit /etc/mkinitcpio.conf. At the very top of the file you should see a section titled MODULES, and towards the bottom of that section the uncommented line MODULES=. Add the following, in this order, before any other graphics drivers (nouveau, radeon, nvidia, etc.) which may be listed: vfio vfio_iommu_type1 vfio_pci vfio_virqfd. The line should look like the following:
MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd nouveau"
In the same file, also make sure modconf is present in the HOOKS line (it is there by default); don't replace the existing hooks, just confirm it's included:
HOOKS="... modconf ..."
- rebuild your initramfs (use the preset that matches your kernel, e.g. linux-lts if that's what you run):
$ sudo mkinitcpio -p linux
- reboot
$ sudo reboot now
Checking whether it worked
- check pci devices:
$ lspci -nnk
Find your GPU and ensure that under “Kernel driver in use:” vfio-pci is displayed:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 980] [10de:13c0] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] GM204 [GeForce GTX 980] [1462:3177]
Kernel driver in use: vfio-pci
Kernel modules: nouveau
01:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
Subsystem: Micro-Star International Co., Ltd. [MSI] GM204 High Definition Audio Controller [1462:3177]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
- ???
- profit
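If you'd rather not scan the whole lspci output, a couple of quick checks (these use the example IDs from earlier; substitute your own):
$ lspci -nnk -d 10de:13c0 | grep "Kernel driver in use"
$ sudo dmesg | grep -i vfio
The first should report vfio-pci; the second should show vfio-pci claiming your device IDs at boot.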
Configuring OVMF and Running libvirt
- install libvirt, virt-manager, ovmf, and qemu (these are all available in the official repositories). OVMF is an open-source UEFI firmware designed for KVM and QEMU virtual machines. ovmf may be omitted if your hardware does not support it, or if you would prefer to use SeaBIOS; however, configuring it is very simple and typically worth the effort.
sudo pacman -S libvirt virt-manager ovmf qemu
- edit /etc/libvirt/qemu.conf and add the path to your OVMF firmware image:
nvram = ["/usr/share/ovmf/ovmf_code_x64.bin:/usr/share/ovmf/ovmf_vars_x64.bin"]
(the exact filenames depend on your ovmf package version; check what actually lives under /usr/share/ovmf/ and adjust the paths to match)
- start and enable both libvirtd and its logger, virtlogd.socket, in systemd (if you use a different init system, substitute its commands for the systemctl ones below)
$ sudo systemctl start libvirtd.service
$ sudo systemctl start virtlogd.socket
$ sudo systemctl enable libvirtd.service
$ sudo systemctl enable virtlogd.socket
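Optionally, sanity-check that libvirt is up and can see the isolated GPU (01:00.0 is the example address from earlier; adjust for your card):
$ sudo systemctl status libvirtd.service
$ sudo virsh nodedev-list --cap pci | grep 0000_01_00
The second command should list entries like pci_0000_01_00_0 and pci_0000_01_00_1.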
With libvirt running, and your GPU bound, you are now prepared to open up virt-manager and begin configuring your virtual machine.
virt-manager, a GUI for managing virtual machines
setting up virt-manager
virt-manager has a fairly comprehensive and intuitive GUI, so you should have little trouble getting your Windows guest up and running.
- install virt-manager (skip this if you already installed it above)
$ sudo pacman -S virt-manager
- add yourself to the libvirt group (replace vanities with your username), then log out and back in so the group change takes effect
$ sudo usermod -a -G libvirt vanities
- launch virt-manager
$ virt-manager &
- when the VM creation wizard asks you to name your VM (the final step before clicking "Finish"), check the "Customize before install" checkbox.
- in the "Overview" section, set your firmware to "UEFI". If the option is grayed out, make sure that you have correctly specified the location of your firmware in /etc/libvirt/qemu.conf and restart libvirtd.service by running
sudo systemctl restart libvirtd
- in the "CPUs" section, change your CPU model to "host-passthrough". If it is not in the list, you will have to type it by hand. This ensures that your CPU is detected properly, since it makes libvirt expose your CPU capabilities exactly as they are instead of only those it recognizes (which is the default behavior, meant to make CPU behavior easier to reproduce). Without it, some applications may complain about your CPU being of an unknown model.
- go into "Add Hardware" and add a Controller for SCSI drives of the "VirtIO SCSI" model.
- then change the default IDE disk to a SCSI disk, which will bind to said controller.
a. Windows VMs will not recognize those drives by default, so you need the virtio driver ISO from the downloads at the top; add a SATA CD-ROM storage device linking to that ISO, otherwise you will not be able to get Windows to recognize the disk during the installation process.
- make sure there is another SATA CD-ROM device that is handling your Windows 10 iso from the links at the top.
- set up your GPU: navigate to the "Add Hardware" section and, in the PCI Host Device tab, add both the GPU and its audio device that you isolated earlier. A sketch after this list shows roughly how the CPU mode and the passed-through PCI devices end up looking in the domain XML.
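For reference, a minimal sketch of how those choices typically appear in the domain XML (viewable later with virsh edit). This assumes the example GPU at 01:00.0/01:00.1 from earlier; your PCI addresses will differ, and virt-manager writes all of this for you:
<cpu mode='host-passthrough'/>
<!-- passed-through GPU function, inside the <devices> section -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<!-- the GPU's HDMI audio function -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>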
installing windows
- after configuring your VM, test it by pressing the play button and begin installing Windows
You may get dropped into the EFI shell; just type exit to go to the BIOS screen.
From the BIOS screen, select and enter the Boot Manager.
Lastly, pick one of the DVD-ROM entries from that menu.
- from here you should see Windows 10 booting up; now we need to load the virtio-scsi drivers
When you get to Windows Setup, click Custom: Install Windows only (advanced)
You should notice that our SCSI hard drive hasn't been detected yet, so click Load driver
Select the correct CD-ROM, labeled virtio-win-XXXXX
Finally, select the amd64 architecture
- your SCSI hard drive device should now show up and you should be able to continue the Windows 10 install
Performance Tuning
Check out my virsh xml file
CPU pinning
CPU topology
- check your cpu topology by running
lscpu -e
the CPU and CORE columns show which logical CPUs are hyperthreading siblings of the same physical core; you'll need that for pinning
editing virsh
edit the VM definition by running something similar, with your desired editor and VM name:
sudo EDITOR=nvim virsh edit win10
if this doesn't work, check your VM name:
sudo virsh list
your virsh config file should look something like this if your cpu is like mine; otherwise refer to the arch guide: cpu-pinning guide
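A minimal sketch of what the pinning can look like, assuming a hypothetical 4-core/8-thread host where lscpu -e reports threads 1/5, 2/6, and 3/7 as sibling pairs on cores 1-3 (core 0 is left to the host); the cpuset values must come from your own lscpu -e output:
<vcpu placement='static'>6</vcpu>
<cputune>
  <!-- pin each guest vcpu to a host thread, keeping sibling pairs together -->
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='7'/>
</cputune>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='3' threads='2'/>
</cpu>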
enabling hugepages
- edit /etc/default/grub
$ sudo nvim /etc/default/grub
- add hugepages=2048 to GRUB_CMDLINE_LINUX_DEFAULT (at the default 2 MiB hugepage size that reserves about 4 GiB; size it to your VM's memory)
your final grub should look something like this, keeping the IOMMU option from earlier:
GRUB_CMDLINE_LINUX_DEFAULT="quiet ... intel_iommu=on hugepages=2048"
- re-configure your grub:
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
- reboot and test it out
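Note that reserving hugepages on the kernel command line is only half of it; the guest also has to be told to use them. A minimal sketch of the relevant piece of the domain XML (edited via virsh edit as above), assuming the VM's memory fits within the hugepages you reserved:
<memoryBacking>
  <hugepages/>
</memoryBacking>
You can check how many hugepages are reserved and still free with:
$ grep Huge /proc/meminfo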