GPU Passthrough with GTX 1060 and RTX 3080 on AMD CPU

Hello garuda community.

So, I tried to do GPU passthrough with my RTX 3080 on the guest (Windows 10) and my GTX 1060 on the host.
After many tries, I only managed it on openSUSE Leap 15.3 with a script I found (here: https://www.youtube.com/watch?v=Nu2bHV8mA6c), but not on Garuda Linux Dr460nized.
On Garuda Linux, I can't get my RTX 3080 bound to the "vfio-pci" driver (either at boot, or after boot with scripts).

Here is my garuda-inxi :

https://forum.garudalinux.org/t/ledge-pc-garuda-inxi/18482?u=ledge

(this Garuda install is completely clean of any GPU passthrough attempts)

And the tutorials/guides/forum threads I found and tried:

An idea that won't work:

  • forbidding the NVIDIA driver from loading at boot, because I wouldn't get any display, since both of my GPUs are NVIDIA

I hope I haven't forgotten anything. Thank you for reading my post.

Have you tried the scripts and failed? How?
https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#Script_variants


I had already tried some of these scripts, but I tried them again, along with the default way, to show you what output I get.

Default way:
guest :

09:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA102 [GeForce RTX 3080 Lite Hash Rate] [10de:2216] (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8822]
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
    09:00.1 Audio device [0403]: NVIDIA Corporation GA102 High Definition Audio Controller [10de:1aef] (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8822]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
host:
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1)
        Subsystem: eVga.com. Corp. Device [3842:6267]
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
    08:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
        Subsystem: eVga.com. Corp. Device [3842:6267]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel

Passthrough all GPUs but the boot GPU:
guest :

09:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA102 [GeForce RTX 3080 Lite Hash Rate] [10de:2216] (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8822]
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
    09:00.1 Audio device [0403]: NVIDIA Corporation GA102 High Definition Audio Controller [10de:1aef] (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8822]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
host :
    08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1)
         Subsystem: eVga.com. Corp. Device [3842:6267]
         Kernel driver in use: nvidia
         Kernel modules: nouveau, nvidia_drm, nvidia
    08:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
         Subsystem: eVga.com. Corp. Device [3842:6267]
         Kernel driver in use: snd_hda_intel
         Kernel modules: snd_hda_intel

Passthrough selected GPU :
guest :

09:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA102 [GeForce RTX 3080 Lite Hash Rate] [10de:2216] (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8822]
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
    09:00.1 Audio device [0403]: NVIDIA Corporation GA102 High Definition Audio Controller [10de:1aef] (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8822]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
host :
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1)
        Subsystem: eVga.com. Corp. Device [3842:6267]
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
    08:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
        Subsystem: eVga.com. Corp. Device [3842:6267]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel

Passthrough the IOMMU group based on the GPU:
guest :

09:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA102 [GeForce RTX 3080 Lite Hash Rate] [10de:2216] (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8822]
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
    09:00.1 Audio device [0403]: NVIDIA Corporation GA102 High Definition Audio Controller [10de:1aef] (rev a1)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8822]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
host :
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1)
        Subsystem: eVga.com. Corp. Device [3842:6267]
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
    08:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
        Subsystem: eVga.com. Corp. Device [3842:6267]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel

Here is what I did:

  1. I added iommu=pt to GRUB_CMDLINE_LINUX in /etc/default/grub and rebuilt grub with sudo grub-mkconfig -o /boot/grub/grub.cfg
  2. After rebooting, sudo dmesg | grep -i -e IOMMU gives:
 [    0.000000] Command line: BOOT_IMAGE=/@/boot/vmlinuz-linux-zen root=UUID=0932f3fb-ebe7-4906-a6eb-2969d12d2e64 rw rootflags=subvol=@ iommu=pt quiet quiet splash rd.udev.log_priority=3 vt.global_cursor_default=0 loglevel=3
[    0.033115] Kernel command line: BOOT_IMAGE=/@/boot/vmlinuz-linux-zen root=UUID=0932f3fb-ebe7-4906-a6eb-2969d12d2e64 rw rootflags=subvol=@ iommu=pt quiet quiet splash rd.udev.log_priority=3 vt.global_cursor_default=0 loglevel=3
[    0.560339] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.580546] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.580575] pci 0000:00:01.0: Adding to iommu group 0
[    0.580585] pci 0000:00:01.1: Adding to iommu group 1
[    0.580594] pci 0000:00:01.2: Adding to iommu group 2
[    0.580602] pci 0000:00:02.0: Adding to iommu group 3
[    0.580616] pci 0000:00:03.0: Adding to iommu group 4
[    0.580625] pci 0000:00:03.1: Adding to iommu group 5
[    0.580634] pci 0000:00:03.2: Adding to iommu group 6
[    0.580643] pci 0000:00:04.0: Adding to iommu group 7
[    0.580651] pci 0000:00:05.0: Adding to iommu group 8
[    0.580663] pci 0000:00:07.0: Adding to iommu group 9
[    0.580671] pci 0000:00:07.1: Adding to iommu group 10
[    0.580682] pci 0000:00:08.0: Adding to iommu group 11
[    0.580691] pci 0000:00:08.1: Adding to iommu group 12
[    0.580703] pci 0000:00:14.0: Adding to iommu group 13
[    0.580710] pci 0000:00:14.3: Adding to iommu group 13
[    0.580735] pci 0000:00:18.0: Adding to iommu group 14
[    0.580742] pci 0000:00:18.1: Adding to iommu group 14
[    0.580748] pci 0000:00:18.2: Adding to iommu group 14
[    0.580754] pci 0000:00:18.3: Adding to iommu group 14
[    0.580761] pci 0000:00:18.4: Adding to iommu group 14
[    0.580767] pci 0000:00:18.5: Adding to iommu group 14
[    0.580774] pci 0000:00:18.6: Adding to iommu group 14
[    0.580780] pci 0000:00:18.7: Adding to iommu group 14
[    0.580790] pci 0000:01:00.0: Adding to iommu group 15
[    0.580800] pci 0000:02:00.0: Adding to iommu group 16
[    0.580855] pci 0000:03:05.0: Adding to iommu group 17
[    0.580879] pci 0000:03:08.0: Adding to iommu group 18
[    0.580904] pci 0000:03:09.0: Adding to iommu group 19
[    0.580928] pci 0000:03:0a.0: Adding to iommu group 20
[    0.580981] pci 0000:04:00.0: Adding to iommu group 21
[    0.580990] pci 0000:05:00.0: Adding to iommu group 18
[    0.580999] pci 0000:05:00.1: Adding to iommu group 18
[    0.581007] pci 0000:05:00.3: Adding to iommu group 18
[    0.581016] pci 0000:06:00.0: Adding to iommu group 19
[    0.581025] pci 0000:07:00.0: Adding to iommu group 20
[    0.581045] pci 0000:08:00.0: Adding to iommu group 22
[    0.581061] pci 0000:08:00.1: Adding to iommu group 22
[    0.581090] pci 0000:09:00.0: Adding to iommu group 23
[    0.581107] pci 0000:09:00.1: Adding to iommu group 23
[    0.581116] pci 0000:0a:00.0: Adding to iommu group 24
[    0.581128] pci 0000:0b:00.0: Adding to iommu group 25
[    0.581140] pci 0000:0b:00.1: Adding to iommu group 26
[    0.581152] pci 0000:0b:00.3: Adding to iommu group 27
[    0.581164] pci 0000:0b:00.4: Adding to iommu group 28
[    0.581606] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.581788] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[    0.727848] AMD-Vi: AMD IOMMUv2 loaded and initialized
  3. I edited /etc/mkinitcpio.conf to add vfio_pci vfio vfio_iommu_type1 vfio_virqfd to the MODULES array before nvidia
    result: MODULES=(crc32c vfio_pci vfio vfio_iommu_type1 vfio_virqfd nvidia nvidia_modeset nvidia_uvm nvidia_drm)
    - modconf is already in the HOOKS array
    - then I regenerated the initramfs with mkinitcpio -P and rebooted

  4. a) binding vfio-pci via device ID:
    - I added options vfio-pci ids=10de:2216,10de:1aef to /etc/modprobe.d/vfio.conf
    - then I regenerated the initramfs and rebooted
    - the result is at the start of this message
    b) (I reverted procedure 4.a before doing this one) Special procedures, script variants:
    - I pasted the script into a new file at /usr/local/bin/vfio-pci-override.sh
    - I edited /etc/mkinitcpio.conf to add /usr/local/bin/vfio-pci-override.sh to the FILES array
    - I edited /etc/modprobe.d/vfio.conf to add install vfio-pci /usr/local/bin/vfio-pci-override.sh
    - then I regenerated the initramfs and rebooted
    - results at the start of this message
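To avoid re-reading lspci output after each attempt, the driver-binding check can be scripted. Below is a minimal sketch (the function name and the second "base" parameter are my own inventions; the default path is the standard sysfs location) that reports which kernel driver is currently bound to a PCI device:

```shell
#!/bin/sh
# Hypothetical helper: report the kernel driver bound to a PCI device by
# resolving the "driver" symlink under sysfs. The second argument exists
# only so the function can be exercised against a fake tree; on a real
# system, call it with just the PCI address.
bound_driver() {
    dev="$1"
    base="${2:-/sys/bus/pci/devices}"
    link="$base/$dev/driver"
    if [ -e "$link" ]; then
        basename "$(readlink -f "$link")"
    else
        echo "none"
    fi
}

# Example on a real system: bound_driver 0000:09:00.0
# This should print "vfio-pci" once the RTX 3080 is correctly claimed,
# and "nvidia" while the passthrough setup is not taking effect.
```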

This might be a long shot, but I do see they released an update for your BIOS last month. If your configuration is correct and it is still not working, it might be worth a shot. ROG Strix X570-F Gaming | ROG Strix | Gaming Motherboards|ROG - Republic of Gamers|ROG USA


This might be a long shot, but I do see they released an update for your BIOS last month. If your configuration is correct and it is still not working, it might be worth a shot. ROG Strix X570-F Gaming | ROG Strix | Gaming Motherboards|ROG - Republic of Gamers|ROG USA

Updating the BIOS is not a bad idea, but I don't think it will resolve my problem. If I had been unable to do GPU passthrough on any distro, it could be the solution, but I was able to do it on openSUSE Leap 15.3.

I was doing more research on my problem, and sudo dmesg | grep -i vfio returns:

[    1.631690] VFIO - User Level meta-driver version: 0.3
[    1.641719] vfio_pci: add [10de:2216[ffffffff:ffffffff]] class 0x000000/00000000
[    1.641724] vfio_pci: add [10de:1c03[ffffffff:ffffffff]] class 0x000000/00000000
[ 1155.443122] Modules linked in: snd_seq_dummy snd_hrtimer snd_seq snd_seq_device qrtr joydev mousedev intel_rapl_msr intel_rapl_common eeepc_wmi asus_wmi snd_hda_codec_realtek edac_mce_amd sparse_keymap snd_hda_codec_generic ledtrig_audio snd_hda_codec_hdmi platform_profile video kvm_amd snd_hda_intel snd_intel_dspcfg vfat fat snd_intel_sdw_acpi rfkill wmi_bmof kvm snd_hda_codec mxm_wmi snd_hda_core snd_hwdep crct10dif_pclmul snd_pcm crc32_pclmul ghash_clmulni_intel snd_timer sp5100_tco aesni_intel usbhid crypto_simd ccp cryptd rapl i2c_piix4 k10temp rng_core igb snd dca soundcore mac_hid wmi pinctrl_amd acpi_cpufreq uinput ipmi_devintf ipmi_msghandler sg fuse crypto_user zram bpf_preload ip_tables x_tables btrfs blake2b_generic libcrc32c crc32c_generic xor raid6_pq crc32c_intel xhci_pci xhci_pci_renesas nvidia_uvm(POE) vfio_pci vfio_pci_core irqbypass vfio_virqfd vfio_iommu_type1 vfio nvidia_drm(POE) nvidia_modeset(POE) nvidia(POE)

The most important part is that the nvidia_uvm module loads before all the vfio modules,
but in my /etc/mkinitcpio.conf this module should load after vfio:
MODULES=(crc32c vfio_pci vfio vfio_iommu_type1 vfio_virqfd nvidia nvidia_modeset nvidia_uvm nvidia_drm)
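Reading modprobe.d(5), there is a softdep directive that declares a load-order dependency between modules. A sketch of what I could try in /etc/modprobe.d/vfio.conf (untested on my side, and I'm not sure it takes effect for modules baked into the initramfs):

```
# Bind the RTX 3080 (VGA + audio function) to vfio-pci by device ID
options vfio-pci ids=10de:2216,10de:1aef

# Ask modprobe to load vfio-pci before the nvidia modules
softdep nvidia pre: vfio-pci
softdep nvidia_drm pre: vfio-pci
softdep nvidia_uvm pre: vfio-pci
```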

Do you guys have any solutions?

It looks like you may be able to use a well-constructed sed command to reorganize the line. I see an example of this technique here, and it looks like the same person posted their question again here.

I do not consider the Linus Tech Tips forum or Reddit to be reliable sources for advice, but it does look like a relatively simple(ish) solution that worked. :man_shrugging:
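As a rough illustration of the idea (the module order in the sample string is hypothetical; you would adapt the pattern to your actual line), a sed command that moves the vfio modules to the front of a MODULES array might look like:

```shell
# Hypothetical example: move the vfio modules ahead of the nvidia ones in a
# mkinitcpio MODULES line. Demonstrated on a sample string here; against the
# real file it would be run with: sudo sed -i -E '...' /etc/mkinitcpio.conf
line='MODULES=(crc32c nvidia nvidia_modeset nvidia_uvm nvidia_drm vfio_pci vfio vfio_iommu_type1 vfio_virqfd)'
echo "$line" | sed -E 's/\((.*) (vfio_pci vfio vfio_iommu_type1 vfio_virqfd)\)/(\2 \1)/'
# -> MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd crc32c nvidia nvidia_modeset nvidia_uvm nvidia_drm)
```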


I don't understand why you recommend I use a sed command; my line is already set correctly.

:face_with_raised_eyebrow:

In a previous comment you mentioned it is not in the correct order:

That was the reason I suggested a possible method for changing the order of the modules.

1 Like

I said that the module nvidia_uvm seems to load at boot before all the vfio modules, but in my file it is meant to load after the vfio modules.

No more ideas?

I think you should try out some other kernels, starting with LTS.

What kernel is the working SUSE install using?


I'll try other kernels and report back here if it works.
When I was using SUSE, the kernel was Linux 5.3.18.

Oh wow--I thought you had meant recently. That must have been years ago!


I tried SUSE's current 2021 release, so that wasn't so far in the past.
I tried the linux-zen (default) and linux-lts kernels (I get stuck on "loading linux-hardened..." with the hardened one), and neither of them makes the passthrough work.

@BluishHumility I described earlier what I did for the GPU passthrough, but you didn't comment on that. Did I do it right?

@Ledge I honestly have no idea; I have never used Nvidia hardware and have very limited experience with the elaborate configuration required to get it to behave as expected.

OK, thank you for trying to help.

I edited /etc/mkinitcpio.conf to add vfio_pci vfio vfio_iommu_type1 vfio_virqfd to the MODULES array before nvidia
result: MODULES=(crc32c vfio_pci vfio vfio_iommu_type1 vfio_virqfd nvidia nvidia_modeset nvidia_uvm nvidia_drm)
- modconf is already in the HOOKS array
- then I regenerated the initramfs with mkinitcpio -P and rebooted

a) binding vfio-pci via device ID:
- I added options vfio-pci ids=10de:2216,10de:1aef to /etc/modprobe.d/vfio.conf
- then I regenerated the initramfs and rebooted
- the result is at the start of this message
b) (I reverted procedure 4.a before doing this one) Special procedures, script variants:
- I pasted the script into a new file at /usr/local/bin/vfio-pci-override.sh
- I edited /etc/mkinitcpio.conf to add /usr/local/bin/vfio-pci-override.sh to the FILES array
- I edited /etc/modprobe.d/vfio.conf to add install vfio-pci /usr/local/bin/vfio-pci-override.sh
- then I regenerated the initramfs and rebooted
- results at the start of this message

Do you have any idea where I went wrong?

I have zero experience with GPU/hardware passthrough. From general experience I can offer some pointers, but you'll have to look them up and do your homework.

AFAIK and IIRC…

  • It is possible to disable one of the two cards with an Xorg conf file, with one Device section per GPU and the relevant PCI ID set in each section. There is an option to disable a device, so use it for the one you want to pass through.
  • Modules/drivers normally have several options, which you can find with modinfo <module-name>. Maybe one or more of those options can help in your case. Details on how and when to use them are usually found in the vendor's documentation (web, manuals, or a web search). You can activate those options either in a conf file in /etc/modprobe.d/*.conf or /etc/modules-load.d/*.conf (I don't remember which one, or maybe both work), or with kernel command-line parameters.
  • When testing several combinations of kernel parameters, you don't necessarily need to make them permanent in /etc/default/grub and update grub each time; just edit the grub menu entry at boot.
  • Sometimes you need to step back for a while and clear your mind, since persistent wrong assumptions can block you from finding that tiny little solution you are looking for.
  • Ask people who have actual experience with passthrough methods. They have probably made most, if not all, of the same mistakes and can give wise advice.
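As a sketch of the first pointer (the BusID below matches the GTX 1060 at 08:00.0 from the lspci output earlier in the thread; the file name is made up, and the exact layout would need checking against the Xorg documentation), an Xorg conf that pins the display server to the host GPU might look something like:

```
# Hypothetical /etc/X11/xorg.conf.d/10-host-gpu.conf:
# give Xorg an explicit Device section for the host GPU only, so it does
# not try to initialize the card meant for passthrough.
Section "Device"
    Identifier "HostGPU"
    Driver     "nvidia"
    BusID      "PCI:8:0:0"
EndSection
```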

:person_shrugging:


@FGD @Cannabis Would you guys be willing to take a read through and see if anything jumps out at you?