This guide presents a clear, step-by-step approach to help intermediate Linux users set up a single GPU passthrough virtual machine. It is subject to change.
This guide is by no means comprised of my work alone. I strongly encourage you to check out the resources section at the very bottom of this page.
From here on, I will assume you have an intermediate-to-advanced understanding of your own system.
Hardware requirements (click me)
Your hardware must be capable of a few things before continuing. If your CPU, GPU, or motherboard is incapable of the following, you may not be able to proceed.
- Your CPU must support hardware virtualization.
- Intel CPUs: check Intel's website to see if your CPU supports VT-d.
- AMD CPUs: All AMD CPUs released after 2011 support virtualization, except for the K10 series (2007–2013), which lacks IOMMU. If using a K10 CPU, ensure your motherboard has a dedicated IOMMU.
- Your motherboard must support IOMMU.
- Ideally, your guest GPU ROM supports UEFI.
- It's likely it does, but check this list to be sure.
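A quick way to confirm CPU-side virtualization support is to look for the relevant flags in /proc/cpuinfo. The sketch below is illustrative (the `check_virt_flags` helper is my own, not a standard tool); note that the `vmx` flag indicates VT-x, not VT-d — VT-d/IOMMU support is verified later via dmesg.

```shell
# Minimal sketch: classify a CPU flags string from /proc/cpuinfo.
# "vmx" indicates Intel VT-x; "svm" indicates AMD-V.
check_virt_flags() {
    case " $1 " in
        *" vmx "*) echo "Intel VT-x supported" ;;
        *" svm "*) echo "AMD-V supported" ;;
        *)         echo "no hardware virtualization flags found" ;;
    esac
}

# On a real system, feed it your actual flags line:
#   check_virt_flags "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
check_virt_flags "fpu vme svm sse2"
```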
Software requirements
- A desktop environment or window manager.
- Although everything in this guide can be done from the CLI, virt-manager is fully featured and makes the process far less painful.
Necessary packages, sorted by distribution:
Gentoo Linux
emerge -av qemu virt-manager libvirt ebtables dnsmasq
Arch Linux
pacman -S qemu libvirt edk2-ovmf virt-manager dnsmasq ebtables
Fedora
dnf install @virtualization
Ubuntu
apt install qemu-kvm qemu-utils libvirt-daemon-system libvirt-clients bridge-utils virt-manager ovmf
Enable the "libvirtd" service using your init system of choice:
systemd
systemctl enable libvirtd
runit
ln -s /etc/sv/libvirtd /var/service/   # service directory paths vary by distribution
OpenRC
rc-update add libvirtd default
IOMMU is a nonspecific name for Intel VT-d and AMD-Vi. VT-d should not be confused with VT-x. See https://en.wikipedia.org/wiki/X86_virtualization.
Enable Intel VT-d or AMD-Vi in your BIOS. If you can't find the setting, check your motherboard manufacturer's website; it may be labeled differently.
After enabling virtualization support in your BIOS, set the required kernel parameter depending on your CPU:
For systems using GRUB
Edit your GRUB configuration
# nvim /etc/default/grub

Intel:
GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on iommu=pt ..."

AMD:
The kernel should automatically detect IOMMU hardware support and enable it. Continue to "Verify IOMMU is Enabled".
Regenerate your grub config to apply the changes.
grub-mkconfig -o /boot/grub/grub.cfg
For systems using systemd-boot
Edit your boot entry.
# nvim /boot/loader/entries/*.conf

Intel:
options root=UUID=... intel_iommu=on ...

AMD:
The kernel should automatically detect IOMMU hardware support and enable it. Continue to "Verify IOMMU is Enabled".
Afterwards, reboot your system for the changes to take effect.
Check if IOMMU has been properly enabled using dmesg:
dmesg | grep -i -e DMAR -e IOMMU
Look for lines indicating that IOMMU is enabled; the output should look something like this:
- Intel:
[ 3.141592] Intel-IOMMU: enabled
- AMD:
[ 3.141592] AMD-Vi: AMD IOMMUv2 loaded and initialized
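You can also check sysfs: when the IOMMU is active, the kernel exposes one directory per group under /sys/kernel/iommu_groups. The `count_iommu_groups` helper below is my own illustration, parameterized so its logic is easy to follow:

```shell
# If the IOMMU is active, /sys/kernel/iommu_groups contains one
# directory per group; zero directories suggests it is disabled.
count_iommu_groups() {
    find "$1" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l
}

groups=$(count_iommu_groups /sys/kernel/iommu_groups)
if [ "$groups" -gt 0 ]; then
    echo "IOMMU enabled: $groups group(s)"
else
    echo "IOMMU appears to be disabled"
fi
```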
Once IOMMU is enabled, you'll need to verify your PCI devices are properly grouped. Each device in your target PCI device's group must be passed through to the guest together.
For example, if my GPU's VGA controller is in IOMMU group 4, I must pass every other device in group 4 to the virtual machine along with it. Typically, one IOMMU group will contain all GPU-related PCI devices.
Run the following; it will output how your PCI devices are mapped to IOMMU groups. You'll need this information later.
#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
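Once you have the script's output, you can narrow it down to the GPU-related entries. For clarity, the filter below runs on canned sample output (the PCI addresses and device names here are made up); on your machine, pipe the script's real output through the same grep:

```shell
# Filter IOMMU group listings down to GPU-related devices.
# Sample output is hardcoded for illustration only.
sample='IOMMU Group 4:
	01:00.0 VGA compatible controller [0300]: Example GPU [abcd:1234]
	01:00.1 Audio device [0403]: Example GPU HDMI Audio [abcd:5678]
IOMMU Group 5:
	02:00.0 Ethernet controller [0200]: Example NIC [abcd:9abc]'

echo "$sample" | grep -Ei 'vga|3d|audio'
```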
Add yourself to the libvirt, kvm, and input groups:
usermod -aG libvirt,kvm,input your-username
Being part of these groups will allow you to run your virtual machine(s) without root.
Make sure to log out and back in again for the changes to take effect.
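After logging back in, you can confirm the membership took effect by checking the output of `id -nG`. The `check_groups` helper below is only an illustration of that check, not a standard command:

```shell
# Verify that every required group appears in a space-separated list.
check_groups() {
    local have="$1"; shift
    local g
    for g in "$@"; do
        case " $have " in
            *" $g "*) ;;
            *) echo "missing: $g"; return 1 ;;
        esac
    done
    echo "all required groups present"
}

# Real usage after re-logging in:
#   check_groups "$(id -nG)" libvirt kvm input
check_groups "wheel libvirt kvm input" libvirt kvm input
```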
If your guest isn't Windows 10/11, ignore Windows specific instructions.
Start virt-manager
and create a new QEMU/KVM virtual machine. Follow the on-screen instructions up until step four.
- (Windows) Download the Fedora VirtIO guest drivers.
- Either the "Stable" or "Latest" ISO builds will work.
- During step four, uncheck the box labeled "Enable storage for this virtual machine".
  - Why? If you opt to enable storage during this step, virt-manager will create a drive using an emulated SATA controller. This guide opts to use a VirtIO drive instead, as it offers superior performance.
- On the final step, check the box labeled Customize before install.
- Click finish to proceed to customization.
- In the "overview" section of the newly opened window:
- Change the firmware to UEFI.
- Set the chipset to Q35.
  - Why? Q35 is preferable for GPU passthrough because it provides a native PCIe bus. i440FX may still work for your specific use case.
- In the "CPUs" tab, check the box labeled "Copy host CPU configuration (host-passthrough)".
- Staying in the "CPUs" tab, expand the "Topology" section and configure your vCPU topology to your liking.
- Click "Add Hardware" and add a new storage device; the size is up to you. Make sure the bus type for the drive is set to "VirtIO."
- (Windows) Add another storage device. Select "Select or create custom storage", click "Manage", then select the VirtIO guest driver ISO you downloaded earlier.
- Set the device type to "CDROM", and the bus type to "SATA" so that the Windows installer recognizes it.
- Click "Begin Installation" in the top left and install your guest OS.
- (Windows) The installer will not be able to detect the VirtIO disk we're installing to, as we have not yet loaded the VirtIO drivers.
- When available, select "Load Driver" and load the driver contained within "your-fedora-iso/amd64/win*."
Congratulations, you should now have a functioning virtual machine without any PCI devices passed through. Shut down the VM and move on to the next section.
Before continuing, you should clone your current virtual machine. In the next steps, we will be removing all devices that allow you to interface with the machine from within your desktop environment/window manager. If you end up needing to perform maintenance, it's beneficial to have a way to boot into your VM without having to unbind or rebind your GPU's drivers repeatedly.
- Right-click on your newly created VM.
- Click "Clone..."
- Name the cloned machine to your preference.
- Deselect the virtual disk associated with the original VM during the cloning process.
- This ensures that the cloned machine does not create a new virtual disk but shares the original.
(read me) Removing Unnecessary Virtual Integration Devices
Although removing these devices is technically optional, leaving them attached can lead to unintended behavior in your virtual machine. For example, if the QXL video adapter remains bound, the guest operating system will recognize it as the primary display. As a result, your other monitor(s) may either remain blank, be designated as non-primary, or both.
In your primary virtual machine's configuration, you should remove the following:
- Every spice-related device.
  - Help! I'm getting "error: Spice audio is not supported without spice graphics."
    To resolve this, you will need to edit your VM's XML by hand. In a terminal, type `virsh edit your-vm`. Search for every instance of `spice` and remove the associated line/section.
- The QXL video adapter
- The emulated mouse and keyboard
- Instead, directly pass through your Host's mouse and keyboard.
- Navigate to Add Hardware > USB Host Device > Your mouse
- Do the same for your USB keyboard.
- The USB tablet.
Now you may add the PCI devices you want to pass through to your virtual machine.
- Click "Add Hardware".
- Select the "PCI Host Device" tab.
- Within this tab, select a device within your GPU's IOMMU group.
- Click "Finish"
Repeat these steps until every device in your GPU's IOMMU group is passed through to the Guest.
Passing through audio from the guest to the host can be accomplished using most audio servers. I won't list them all here; it would be needlessly verbose. As mentioned above, if you're using PulseAudio, PipeWire + JACK, or Scream, check out the Arch Wiki.
PipeWire
Add the following to the devices section in your virtual machine's XML:

$ virsh edit vm-name

<devices>
  ...
  <audio id="1" type="pipewire" runtimeDir="/run/user/1000">
    <input name="qemuinput"/>
    <output name="qemuoutput"/>
  </audio>
</devices>
(Change the runtimeDir value from 1000 to your desktop user's UID.)
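If you're unsure of your UID, it can be derived directly; this prints the exact runtimeDir value to use:

```shell
# Print the runtime directory for the current user; use this value
# for the runtimeDir attribute in the <audio> element.
printf '/run/user/%s\n' "$(id -u)"
```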
To resolve the common "Failed to initialize PW context" error, you can modify the QEMU configuration to run as your user:

nvim /etc/libvirt/qemu.conf

user = "example"
Libvirt hooks allow custom scripts to be executed when a specific action occurs on the host. In our case, special start and stop scripts will be triggered when our guest machine is started or stopped. For more information, check out this PassthroughPOST article and the VFIO-Tools GitHub page.
To begin, create the required directories and files:
Create directories and files
These commands will create the required directories and scripts and make them executable. Replace every instance of your-vm-name with your virtual machine's name.
Create the primary QEMU hook.
mkdir -p /etc/libvirt/hooks && touch /etc/libvirt/hooks/qemu && chmod +x "$_"
Create your start script.
mkdir -p /etc/libvirt/hooks/qemu.d/your-vm-name/prepare/begin && touch /etc/libvirt/hooks/qemu.d/your-vm-name/prepare/begin/start.sh && chmod +x "$_"
Create your stop script.
mkdir -p /etc/libvirt/hooks/qemu.d/your-vm-name/release/end && touch /etc/libvirt/hooks/qemu.d/your-vm-name/release/end/stop.sh && chmod +x "$_"
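The commands above produce the layout shown below. To make the structure concrete (and safely runnable), this sketch recreates it under a temporary prefix instead of /etc/libvirt:

```shell
# Recreate the hook layout under a scratch prefix for illustration.
# For real use, the prefix is /etc/libvirt and the scripts need content.
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/hooks/qemu.d/your-vm-name/prepare/begin" \
         "$PREFIX/hooks/qemu.d/your-vm-name/release/end"
touch "$PREFIX/hooks/qemu" \
      "$PREFIX/hooks/qemu.d/your-vm-name/prepare/begin/start.sh" \
      "$PREFIX/hooks/qemu.d/your-vm-name/release/end/stop.sh"
chmod +x "$PREFIX/hooks/qemu" \
         "$PREFIX/hooks/qemu.d/your-vm-name/prepare/begin/start.sh" \
         "$PREFIX/hooks/qemu.d/your-vm-name/release/end/stop.sh"
find "$PREFIX/hooks" -type f | sort
```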
Paste the following into the newly created 'qemu' script
/etc/libvirt/hooks/qemu
#!/bin/bash
GUEST_NAME="$1"
HOOK_NAME="$2"
STATE_NAME="$3"
BASEDIR="$(dirname "$0")"
HOOKPATH="$BASEDIR/qemu.d/$GUEST_NAME/$HOOK_NAME/$STATE_NAME"
set -e # If a script exits with an error, we should too.
if [ -f "$HOOKPATH" ]; then
"$HOOKPATH" "$@"
elif [ -d "$HOOKPATH" ]; then
while read -r file; do
"$file" "$@"
done <<<"$(find -L "$HOOKPATH" -maxdepth 1 -type f -executable -print)"
fi
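To see how the dispatcher behaves, here is a self-contained simulation in a temporary directory: libvirt invokes the hook as `qemu <guest> <hook> <state> ...`, and the dispatcher runs every executable file in the matching qemu.d subdirectory. The guest name `win10` is just an example:

```shell
# Simulate the hook dispatch in a temp dir.
dir=$(mktemp -d)
mkdir -p "$dir/qemu.d/win10/prepare/begin"

# A trimmed-down copy of the dispatcher above.
cat > "$dir/qemu" <<'EOF'
#!/bin/bash
BASEDIR="$(dirname "$0")"
HOOKPATH="$BASEDIR/qemu.d/$1/$2/$3"
if [ -d "$HOOKPATH" ]; then
    find -L "$HOOKPATH" -maxdepth 1 -type f -executable -exec {} "$@" \;
fi
EOF

# A stand-in start script that just reports the guest name.
printf '#!/bin/sh\necho "preparing guest: $1"\n' \
    > "$dir/qemu.d/win10/prepare/begin/start.sh"

chmod +x "$dir/qemu" "$dir/qemu.d/win10/prepare/begin/start.sh"
"$dir/qemu" win10 prepare begin   # libvirt passes these args itself
```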
QEMU will automatically detach PCI devices passed to the guest. Typically, it is not necessary to manually bind and unbind vfio drivers using modprobe, or to detach and attach PCI devices via virsh, as recommended in many other guides. I have found that misuse of such methods can often lead to script hangs. If those approaches are necessary and work for you, that's amazing; however, I cannot endorse these methods for everyone.
Do not copy your start/stop scripts blindly. I highly recommend writing your own according to your needs. The most common use for hooks (for single GPU passthrough machines, at least) is to automatically start and stop your display manager and to unbind/rebind the proprietary NVIDIA drivers.
Example start script

/etc/libvirt/hooks/qemu.d/your-vm-name/prepare/begin/start.sh
#!/bin/bash
systemctl isolate multi-user.target
modprobe -r nvidia_drm
modprobe -r nvidia_modeset
modprobe -r nvidia_uvm
modprobe -r nvidia
Example stop script

/etc/libvirt/hooks/qemu.d/your-vm-name/release/end/stop.sh
#!/bin/bash
modprobe nvidia_drm
modprobe nvidia_modeset
modprobe nvidia_uvm
modprobe nvidia
systemctl isolate graphical.target
P.S. If you're wondering what you should include in your hooks, I have one suggestion: less.
Congratulations! You should now have a functional QEMU/KVM virtual machine, complete with PCI passthrough. If you think you can improve this guide in any way, I encourage you to open a pull request. For support, you're free to open an issue here or view the list of resources below.
Without the assistance of the individuals listed below, wikis, and guides, this would not have been possible. There's a wealth of knowledge out there; go seek it out.
- ⭐ The VFIO subreddit and wiki
- ⭐ The VFIO Discord
- ⭐ The VFIO blogspot how-to
- ⭐ The Arch Linux wiki
- The Gentoo wiki
- Libvirt documentation
- QaidVoid's single GPU passthrough guide.
- Special thanks to this guide, as it inspired me to make this in the first place.
- joeknock90's single GPU passthrough guide
- This Google doc