**VIDEO GUIDE** How to easily pass through an Nvidia GPU as primary without dumping your own vbios!



On 16/04/2017 at 5:27 PM, eric.frederich said:

 

Sorry, been away on vacation. I switched to SeaBIOS and it seemed to work. I installed Linux Mint and have both monitors working now.

Of course, now it's Easter and I need to leave the house for the day, so no time to play with it.

 

I did it with both the graphics and HDMI audio since I heard you have to have everything within the IOMMU group mapped in. I'd like to try to use my onboard audio since my monitors don't have a line-out for the HDMI. Will this stop working if I don't have the HDMI audio mapped in? Is it possible to have both onboard and HDMI audio mapped into the VM? The configuration page is a drop-down list so I can only select one, but maybe it can be done by editing the XML itself?

 

Yes, you can have two sound cards passed through.

Just click the plus on the template next to the sound card and you can add the second card.
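
If you'd rather do it by editing the XML directly, each passed-through audio device is just another hostdev entry. Here is a minimal sketch with placeholder PCI addresses (check your real ones under Tools > System Devices), showing the GPU's HDMI audio function and the onboard audio controller side by side:

<!-- GPU HDMI audio function; 01:00.1 is a placeholder address -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>
<!-- Onboard audio controller; 00:1f.3 is a placeholder address -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1f' function='0x3'/>
  </source>
</hostdev>

If you leave out the guest-side address elements, libvirt will assign them automatically.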

 

  • 2 months later...

I got it working! Yay! With an EVGA 1050 Ti I just got.

 

I have one really dumb question, though. It's one of those things that I don't see anyone ever address, because it's so simple that everyone assumes everyone knows the answer, but I just can't figure it out...

 

I'm running a Windows 10 VM with the above Nvidia card that works just great with Splashtop Desktop. However, I want the hardware itself - the HDMI port on the 1050 - to feed my receiver and then my TV as the Windows 10 device (not the unRAID device). Is that possible, or are you forced into a thin client for some reason?

14 hours ago, Henry Thomas said:

I'm running a Windows 10 VM with the above Nvidia card that works just great with Splashtop Desktop. However, I want the hardware itself - the HDMI port on the 1050 - to feed my receiver and then my TV as the Windows 10 device (not the unRAID device). Is that possible, or are you forced into a thin client for some reason?

Hi Henry,

Yep, just plug the HDMI into your TV/receiver and the picture and sound will go through the HDMI.

So you can game, watch videos, etc. on the TV. :)

9 hours ago, gridrunner said:

Yep, just plug the HDMI into your TV/receiver and the picture and sound will go through the HDMI. So you can game, watch videos, etc. on the TV. :)

 

Thanks - I had blank screens for the past week setting this up; when I activated Windows, that flipped the video on (unless it was related to something else going on through that process). Now I have audio issues: it sounds like the speakers are underwater and people are blowing bubbles when they talk, and the sound makes the video run at half speed as well. This happened when I passed through a USB Logitech receiver for my wireless mouse/keyboard.

  • 3 months later...

I am making progress. I added VNC as the first GPU and the Nvidia card as the second. I made the manual changes in the XML to add the third-party bios.

 

RDC is working now, which is great. However, I can no longer output from the Nvidia card to an external monitor. Also, when accessing Device Manager (through RDC), the Nvidia card shows error 43 (Windows has stopped this device because it has reported problems).

 

Appreciate your help!


And yet again, it's me with some more progress.

 

I realized that it is not feasible to set up VNC as primary and the Nvidia card as secondary, so I dropped this idea.

 

I am now back to running the KVM with the Nvidia card as primary. I even succeeded in getting RDC to work. The issue was that I had set a static IP, and somehow I cannot set a static IP once I have my GPU passed through. I tried everything, including setting the DHCP-assigned IP as a static IP, but nothing is working.

Could it be that static IP assignment for some reason does not work when passing through a single GPU?

  • 2 weeks later...
On 07/11/2017 at 7:17 PM, steve1977 said:

Quick follow-up: what are KVM net drivers? All works well when the GPU is set to VNC.



By 'KVM net drivers' I mean the ethernet virtio driver for the KVM network adaptor.

Please post the XML from the VM in which you are having problems with the ethernet adaptor when the GPU is passed through.
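
For reference, the virtio NIC appears in the VM XML as an interface block like the sketch below (the MAC is a placeholder and br0 is unRAID's usual default bridge); the Windows guest needs the network driver from the virtio-win ISO before it can use it:

<interface type='bridge'>
  <mac address='52:54:00:xx:xx:xx'/>  <!-- placeholder MAC -->
  <source bridge='br0'/>              <!-- default unRAID bridge -->
  <model type='virtio'/>              <!-- paravirtual NIC; needs the virtio-win driver in the guest -->
</interface>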


I'm wondering if someone can help me get this working.

 

Setup:

  • Core i7 7820X
  • AsRock X299 Killer SLI/ac motherboard
  • GTX 960 in slot 1, SATA controller in slot 2
  • ROM config set up
  • Guest: Windows 10 OVMF

Symptom: The machine boots okay, with the console going to my display. When I start the VM, my display goes black but I never see the "TianoCore" logo. Eventually the monitor goes to sleep. I can RDP into the VM, and Device Manager says "Windows has stopped this device because it has reported problems. (Code 43)" I tried removing and re-adding the device, and deleting the installed drivers. Rebooting the host restores my console display.

 

I tried using a vbios dumped from unRAID, one dumped from GPU-Z, and several downloaded from TechPowerUp (all edited with a hex editor). Unexpectedly, the ones dumped from the card didn't require the hex edit. Also surprisingly, the ones from GPU-Z and from unRAID differed (?).
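
For anyone following along, the sysfs method for dumping a vbios from the host is short - a minimal sketch, assuming the GPU sits at 0000:17:00.0 (substitute your own address from lspci) and that the card is idle while you dump:

#!/bin/bash
# Dump the video BIOS of the GPU at this PCI address via sysfs.
GPU=0000:17:00.0                # placeholder - use your card's address from lspci

cd /sys/bus/pci/devices/$GPU || exit 1
echo 1 > rom                    # enable reads from the ROM file
cat rom > /tmp/vbios.rom        # copy the ROM contents out
echo 0 > rom                    # disable ROM access again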

 

I also tried switching to SeaBIOS, with no luck.

 

Last night, when I was playing with passthrough on both Windows VMs, I could have sworn I got the slot 1 card to work, briefly. (I need to get an i9 CPU for x16 speeds on slots 1 and 3.) Today, trying to get just slot 1 working, I'm having no luck.

 

My UEFI BIOS doesn't have an option to select a video card, as far as I can tell. Otherwise I'd buy a cheap single-slot card and boot off of that. And I can't give up slot 1 for that card, since I need both slots 1 and 3 for my two passthrough VMs...

 

Attaching my CPU-Z report and my XML.

windows.xml

cpuz.png


Yay! I figured it out! I think what was happening was that when I followed the instructions to dump the VBIOS in unRAID, I got confused about which GPU I was dumping versus which I was trying to pass through from the first slot. I grabbed an old card, put it into slot 1, followed the remaining instructions exactly, and it worked!

On 11/9/2017 at 3:52 AM, gridrunner said:

By 'KVM net drivers' I mean the ethernet virtio driver for the KVM network adaptor. Please post the XML from the VM in which you are having problems with the ethernet adaptor when the GPU is passed through.

 

In the meantime, I have decided to buy a second GPU to make things easier. I have placed it in a second PCIe slot and assigned it to my Win 10 VM. Unfortunately, this is also not working; I only get a black screen. I have opened a separate thread. Let me share, though, the info you asked for.

 

Can you see anything in my XML file that I may be doing wrong here? Please see the XML file below:

 

https://pastebin.com/nrRdSqVA

On 11/11/2017 at 3:17 AM, steve1977 said:

In the meantime, I have decided to buy a second GPU to make things easier. I have placed it in a second PCIe slot and assigned it to my Win 10 VM. Unfortunately, this is also not working; I only get a black screen. Can you see anything in my XML file that I may be doing wrong? https://pastebin.com/nrRdSqVA

Hi Steve,

Looking at your XML, it seems fine. The part where the GPU is passed through is here:

<hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x17' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x17' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</hostdev>

You are passing through both the sound part and the graphics part of the GPU, which is correct.

Your XML has no vbios passed through; however, I guess that you don't want to pass one, as you now have two GPUs.

One thing to note, though: even though you have two GPUs, if the primary card is an Nvidia and you want to pass through that card, it will still need a vbios.
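
For reference, attaching a vbios is one extra <rom> line inside the hostdev for the graphics function - a minimal sketch, with a placeholder path to wherever the ROM file is stored:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x17' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/vbios/yourcard.rom'/>  <!-- placeholder path -->
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>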

 

I can't tell which GPU is in the XML above, or its IOMMU groups, etc.

The lspci output that you have posted is from inside a VM, so it doesn't give any info about your unRAID server:

 

[root@localhost ~]# lspci -v
00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
        Subsystem: Red Hat, Inc QEMU Virtual Machine
        Flags: bus master, fast devsel, latency 0

Please go to the unRAID webui, then Tools > System Devices, and copy and paste the PCI devices and IOMMU groups.
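
The same information can also be pulled from the host shell - a minimal sketch that walks sysfs and prints each device with its IOMMU group (it assumes nothing unRAID-specific beyond a standard sysfs layout):

#!/bin/bash
# List every PCI device grouped by IOMMU group.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev#/sys/kernel/iommu_groups/}    # strip the sysfs prefix...
    group=${group%%/*}                        # ...leaving just the group number
    printf 'IOMMU group %s: ' "$group"
    lspci -nns "${dev##*/}"                   # describe the device at this address
done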

Thanks :)


Thanks for your help. Please see the requested information below.

 

https://pastebin.com/YpWmqJra

 

In the meantime, I have succeeded in passing through the GPU to a second Win10 VM with an OVMF bios. I still cannot pass it through to my first Win10 VM with SeaBIOS. I'd still like to use the first VM with SeaBIOS, so your help would still be much appreciated if you can see anything that prevents the SeaBIOS VM from working.

 

On a separate note, I cannot run both of my Win10 VMs in parallel over RDC. Maybe you have some thoughts?

 

 

And my ultimate objective in all of the above is to get Gamestream to work. Even with the OVMF VM (where passthrough works), I cannot get it to work. I saw in another thread that you succeeded in getting it running? Is this still the case, and do you have any advice for me?

 

Big thanks again for all your help in the forum, and also for your YouTube channel, which is very well done!

  • 4 months later...

Sorry to resurrect an older thread, but I've followed the steps described in the video and I cannot get my primary card passed through. When I do, I get a log full of this:

2018-03-19T22:23:26.840789Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x201230, 0x0,8) failed: Device or resource busy
2018-03-19T22:23:26.840795Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x201238, 0x0,8) failed: Device or resource busy
2018-03-19T22:23:26.840801Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x202220, 0x0,8) failed: Device or resource busy
2018-03-19T22:23:26.840807Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x202228, 0x0,8) failed: Device or resource busy
2018-03-19T22:23:26.840812Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x202230, 0x0,8) failed: Device or resource busy
2018-03-19T22:23:26.840819Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x202238, 0x0,8) failed: Device or resource busy
2018-03-19T22:23:26.840824Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x203220, 0x98989800989898,8) failed: Device or resource busy
2018-03-19T22:23:26.840831Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x203228, 0x98989800989898,8) failed: Device or resource busy
2018-03-19T22:23:26.840836Z qemu-system-x86_64: vfio_region_write(0000:17:00.0:region1+0x203230, 0x98989800989898,8) failed: Device or resource busy

Interestingly enough, when I have the bios dump in the VM, it displays this:

IMG_20180319_183332.jpg

This is just the log unRAID shows when it's booting, magnified and corrupted. The static grows over time, and if I leave the VM on too long, unRAID crashes. It's very strange.

I'm using unRAID 6.5, and here's my XML:

<domain type='kvm'>
  <name>Windows 10 Bottom 17</name>
  <uuid>c47e62c7-a0bb-991c-bbed-854a4780a41c</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='10'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='12'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='14'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.11'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/c47e62c7-a0bb-991c-bbed-854a4780a41c_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='3' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 Bottom 17/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Win10_1709_English_x64 (DL1).iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Win10_1709_English_x64 (DL1).iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:64:a9:ff'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x17' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/disk1/isos/1080ti.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x17' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>

I have tried using SeaBIOS, but the VM refuses to do anything and spits out initialization errors.

 

EDIT: As always, mere minutes after posting this I found my problem: efifb was grabbing my primary card, so I made a script using this workaround and put it in my crontab to run at boot.
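
For anyone hitting the same "Device or resource busy" wall: the usual workaround is to unbind the EFI framebuffer from the card before starting the VM. A minimal sketch of such a script (efi-framebuffer.0 is the common device name, but check /sys/bus/platform/drivers/efi-framebuffer/ on your own host):

#!/bin/bash
# Release the host's EFI framebuffer so vfio can claim the primary GPU.
if [ -e /sys/bus/platform/drivers/efi-framebuffer/efi-framebuffer.0 ]; then
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
fi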

  • 5 months later...

So I'm trying this for the first time and hit a snag - well, two. One is that I cannot see, in the VM that I created, how to get to the XML in the drop-down; I must have something off? The other is that when I name the file 1050ti.dump it does not show up in the Graphics ROM BIOS entry line, but if I name it 1050ti.rom it will? I have my box running on the iGPU, and the 1050 Ti is in the second PCIe x16 slot; my dual-port NIC is in slot 1 (x16). Any ideas what I'm missing?

 

 

XML.png

14 hours ago, mrbilky said:

I cannot see, in the VM that I created, how to get to the XML in the drop-down... and when I name the file 1050ti.dump it does not show up in the Graphics ROM BIOS entry line, but if I name it 1050ti.rom it will?

 

Hi @mrbilky

How to get to the VM XML has changed since I made that video. We no longer get there from the drop-down. Nowadays we just click Edit to bring up the VM template manager, and in the top right-hand corner you will see a toggle that says Form View. Just change that to XML View, and from there you can edit the XML for that VM.

 

formview.png

 

It is fine to rename any vbios from .dump to .rom; it's just that the template manager looks for the .rom extension, so to be seen in the VM manager each vbios needs the .rom extension.
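
In other words, a plain rename is all it takes - for example, from the unRAID shell (the path here is just an example):

# Rename the dump so the VM template manager can see it.
mv /mnt/user/isos/1050ti.dump /mnt/user/isos/1050ti.rom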

 

Seeing as you have an iGPU, you shouldn't really need to pass through the vbios. Make sure that in the motherboard BIOS the iGPU is set as the primary display out (it should NOT be set to PCIe). When unRAID boots, you should see the console text coming from the iGPU output. If you see the console text coming from the GTX 1050 Ti output, then that card is set as primary in the BIOS. (unRAID will output from the primary GPU only, for its console.)

So just check all of your settings in the motherboard BIOS, as you shouldn't really need to pass a vbios when you have both an iGPU and a dedicated card.

Hope this helps!

  • 4 months later...

@SpaceInvaderOne I'm having a problem with GPU passthrough since I upgraded to 6.6.x. I got it to go away by enabling MSI interrupts for a couple of weeks, but I pushed my luck and updated the Nvidia drivers, and it came back. The problem is that within 15 minutes the Windows 10 VM will freeze the unRAID host. If I remove the GPU passthrough, it all works fine. I repeated all the steps, including rolling back the driver to the 2018-12-03 version and trying again, and I cannot get it to work. I didn't have to use a ROM dump. My GPU passthrough worked for the last 2 or 3 years without a problem until 6.6.x came out; it doesn't work with any 6.6.x version. Seeing as you seem to be the expert on GPU passthrough, I'm desperately asking for your help. Thank you in advance.

 

Here is the bug report that I closed, thinking the problem was fixed once and for all. It has everything I did to fix it. unRAID was stumped; we didn't know at first that the VM was causing the problem...
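
For context, enabling MSI interrupts for a passed-through GPU (and its HDMI audio function) is normally done inside the Windows guest via the registry - a sketch of the key involved, where the device path under PCI depends on your specific card and is left as a placeholder:

Windows Registry Editor Version 5.00

; <your-gpu-device-path> is a placeholder - find the card's full
; VEN_xxxx&DEV_xxxx... key under HKLM\SYSTEM\CurrentControlSet\Enum\PCI
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<your-gpu-device-path>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001

The guest needs a reboot after the change, and the setting can be reset by driver updates, which may be why the problem came back after updating the Nvidia drivers.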

 

 

