Let’s take a look at how to pass through an Nvidia GeForce graphics adapter to a Windows 10 virtual machine running on a KVM host (CentOS 8 in this example).
IOMMU support must be enabled both in the KVM host BIOS/UEFI settings and on the kernel command line. Check that intel_iommu=on is present in /etc/default/grub; if not, add it to the GRUB_CMDLINE_LINUX line (on AMD CPUs, use amd_iommu=on instead):
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet intel_iommu=on"
And update the configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg
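The edit above can also be scripted. A minimal sketch, run here against a sample copy in /tmp (the helper name is hypothetical; on the host you would point it at /etc/default/grub):

```shell
# Sketch: append intel_iommu=on to GRUB_CMDLINE_LINUX if it is missing.
add_iommu_flag() {
  local f="$1"
  grep -q 'intel_iommu=on' "$f" || \
    sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 intel_iommu=on"/' "$f"
}

# Demo on a sample copy (on the host, point this at /etc/default/grub):
printf 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > /tmp/grub.demo
add_iommu_flag /tmp/grub.demo
cat /tmp/grub.demo
# → GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet intel_iommu=on"
```

Remember to run grub2-mkconfig afterwards, as shown above.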
Disable the nouveau driver, which can cause problems with vfio-pci binding to the GPU.
grubby --update-kernel=ALL --args="rd.driver.blacklist=nouveau nouveau.modeset=0"
mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
dracut /boot/initramfs-$(uname -r).img $(uname -r)
echo 'blacklist nouveau' > /etc/modprobe.d/nouveau-blacklist.conf
Get a list of available PCI devices on the KVM host:
lspci -nn | grep -i nvidia
Each video card consists of several components. For example:
82:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti Rev. A] [10de:1e07] (rev a1)
82:00.1 Audio device [0403]: NVIDIA Corporation TU102 High Definition Audio Controller [10de:10f7] (rev a1)
82:00.2 USB controller [0c03]: NVIDIA Corporation TU102 USB 3.1 Host Controller [10de:1ad6] (rev a1)
82:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU102 USB Type-C UCSI Controller [10de:1ad7] (rev a1)
Copy the VID:PID of the devices you want to passthrough to the VM.
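Extracting the IDs can be automated. A small sketch, demonstrated on the sample output above (on a real host you would pipe in `lspci -nn | grep -i nvidia` instead of the hard-coded sample):

```shell
# Sketch: pull the [vendor:device] IDs out of lspci -nn output and join
# them into the comma-separated list that vfio-pci expects.
lspci_sample='82:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti Rev. A] [10de:1e07] (rev a1)
82:00.1 Audio device [0403]: NVIDIA Corporation TU102 High Definition Audio Controller [10de:10f7] (rev a1)'

ids=$(printf '%s\n' "$lspci_sample" \
  | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' \
  | tr -d '[]' | paste -sd, -)
echo "options vfio-pci ids=$ids"
# → options vfio-pci ids=10de:1e07,10de:10f7
```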
Create the /etc/modprobe.d/vfio.conf file and add a single line listing the device IDs to bind to vfio-pci (comma-separated, no spaces):
options vfio-pci ids=10de:1e07,10de:10f7,10de:1ad6,10de:1ad7
Run the command:
echo 'vfio-pci' > /etc/modules-load.d/vfio-pci.conf
And reboot the KVM host.
Check that your graphics card is now bound to vfio-pci:
lspci -vs 82:00.0
The output should contain the lines:
Kernel driver in use: vfio-pci
Kernel modules: nouveau
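Note that vfio requires every device in an IOMMU group to be passed through together, so it is worth checking which group the GPU landed in. A minimal sketch (the SYS_ROOT variable and the fake tree under /tmp are demonstration-only assumptions; on a real host the function reads /sys directly):

```shell
# Sketch: report the IOMMU group of a PCI device via sysfs.
iommu_group_of() {
  local addr="$1" root="${SYS_ROOT:-/sys}"
  basename "$(readlink -f "$root/bus/pci/devices/$addr/iommu_group")"
}

# Demo against a fake sysfs tree; on the host you would simply call
#   iommu_group_of 0000:82:00.0
SYS_ROOT=/tmp/fakesys
mkdir -p "$SYS_ROOT/bus/pci/devices/0000:82:00.0" \
         "$SYS_ROOT/kernel/iommu_groups/42"
ln -sfn "$SYS_ROOT/kernel/iommu_groups/42" \
        "$SYS_ROOT/bus/pci/devices/0000:82:00.0/iommu_group"
iommu_group_of 0000:82:00.0
# → 42
```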
In my example, I created a clean Windows 10 VM (with UEFI firmware) on the KVM host and shut it down. Now you need to edit the XML configuration file of the KVM virtual machine (for example, with virsh edit <vm_name>).
Inside the <hyperv></hyperv> tags, add:
<vendor_id state='on' value='111111111'/>
Inside the <features></features> tags, add:
<kvm>
<hidden state='on'/>
</kvm>
And at the end of the configuration file, inside the <devices> section (just before the closing </devices> and </domain> tags), add the following configuration:
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0' bus='0x82' slot='0x0' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0' bus='0x82' slot='0x0' function='0x1'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0' bus='0x82' slot='0x0' function='0x2'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0' bus='0x82' slot='0x0' function='0x3'/>
</source>
</hostdev>
In this example, we want to pass through all functions of GPU device 82:00 to the VM. It exposes four PCI functions, and we have added them all to the XML file:
- VGA controller (82:00.0)
- Audio device (82:00.1)
- USB 3.1 Host Controller (82:00.2)
- USB Type-C UCSI Controller (82:00.3)
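The four repetitive <hostdev> blocks above can be generated with a short loop rather than typed by hand. A sketch (bus and slot values come from the example device 82:00; paste the output into the VM definition):

```shell
# Sketch: emit one <hostdev> entry per PCI function of the GPU.
bus='0x82'; slot='0x0'
hostdev_xml=''
for fn in 0x0 0x1 0x2 0x3; do
  hostdev_xml="$hostdev_xml<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0' bus='$bus' slot='$slot' function='$fn'/>
  </source>
</hostdev>
"
done
printf '%s' "$hostdev_xml"
```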
Save the XML file and start the VM. Windows should detect the new graphics device and you can install the Nvidia GPU drivers.