gridrunner

VIDEO GUIDE *** How to pass through an NVIDIA GPU as primary or only GPU in unRAID


On 01/08/2017 at 7:46 PM, Matoking said:

I was pointed towards this thread when I had trouble isolating my 1070 for PCI passthrough.

 

Long story short, I tried dumping my vBIOS as instructed in the video, but couldn't (the `cat` command printed I/O errors instead). Instead, I resorted to dumping the full vBIOS under Windows and using a hex editor to splice the relevant part of the ROM into a new file, using some of the partial vBIOS files uploaded here as samples. This finally allowed me to pass the GPU through to the Windows VM!

 

---

 

Anyway, I wrote a Python script that should automate this process (you give it a full ROM from techPowerUp or one you dumped using nvflash under Windows), and it should create a patched ROM that you can use to make GPU passthrough work.

 

I passed a few ROMs I downloaded from techPowerUp through the script and compared them to what you guys uploaded here, and so far the Pascal vBIOS files appeared to match, bit for bit. Still, I can't stress enough that this script is based on guesswork, so it may end up bricking your GPU if you're unlucky. It does a few rudimentary sanity checks, but I would recommend dumping the partial ROM yourself if you can. For those who are pulling their hair out over not being able to do that, though, this may be a lifesaver.

 

https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher
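For readers who want to see the shape of the trick before running a script against their card, here is a rough command-line sketch of the same header-stripping idea. This is an illustration pieced together from the description above, not Matoking's code; the offset shown is hypothetical and must be read from the grep output for your own dump.

# Every PCI expansion ROM image begins with the signature bytes 55 aa.
# Print the byte offset of each occurrence in the full GPU-Z/nvflash dump:
LC_ALL=C grep -abo $'\x55\xaa' full.rom

# Suppose the usable image starts at offset 1536 (hypothetical; pick the
# occurrence that follows the NVIDIA-specific header in YOUR dump, and
# beware of false matches inside ordinary data):
dd if=full.rom of=patched.rom bs=1 skip=1536

# Sanity check: the patched file must now begin with 55 aa.
xxd -l 2 patched.rom

Comparing the result byte for byte against a known-good dump for the same card, as Matoking did, is the only real safety net here.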

 

 

Great work! I have linked this in the OP. :)


I was also pointed here after running into issues installing an EVGA GTX 1080 Ti. I also subscribe to your YouTube channel; amazing work, and thank you for all the help you have already given me!

 

It seems that the problem I have encountered may be related to the vBIOS; then again, I haven't attempted a dump, because my BIOS offers the ability to boot from the onboard VGA port, so I don't think that is necessary... In case it's relevant, my motherboard is an Asus Z9PA-D8. HVM and IOMMU are both enabled according to unRAID's 'Info' tab, and the card is the only PCI device in its IOMMU group (other than the NVIDIA audio device, which is in the same group).
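For anyone who wants to double-check the same thing on their own box, the usual way to print the groups from the unRAID console is a loop over sysfs like the one below; it is generic, not specific to this board.

# List every IOMMU group and the PCI devices inside it:
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done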

 

After installing an Ubuntu VM with VNC (per your introduction-to-unRAID-VMs video) and then enabling the discrete card after the install, the GRUB bootloader displays and I'm able to navigate its options successfully. To my novice mind, this seems to indicate that the GPU passthrough is working, right? But as soon as I make a selection to boot Ubuntu, the screen freezes on that slightly off-black Ubuntu loading-screen color and becomes unresponsive. Even a 'force stop' of the VM doesn't clear/reset the screen. If the VM is force-stopped and then started again, I am able to view and interact with the GRUB bootloader, but as soon as I try to boot into Ubuntu, the screen goes blank.

 

Any ideas or suggestions on how to fix this?

 

 

Edited by entegral

On 10/08/2017 at 1:36 AM, entegral said:

I was also pointed here after running into issues installing an EVGA GTX 1080 Ti. I also subscribe to your YouTube channel; amazing work, and thank you for all the help you have already given me!

 

It seems that the problem I have encountered may be related to the vBIOS; then again, I haven't attempted a dump, because my BIOS offers the ability to boot from the onboard VGA port, so I don't think that is necessary... In case it's relevant, my motherboard is an Asus Z9PA-D8. HVM and IOMMU are both enabled according to unRAID's 'Info' tab, and the card is the only PCI device in its IOMMU group (other than the NVIDIA audio device, which is in the same group).

 

After installing an Ubuntu VM with VNC (per your introduction-to-unRAID-VMs video) and then enabling the discrete card after the install, the GRUB bootloader displays and I'm able to navigate its options successfully. To my novice mind, this seems to indicate that the GPU passthrough is working, right? But as soon as I make a selection to boot Ubuntu, the screen freezes on that slightly off-black Ubuntu loading-screen color and becomes unresponsive. Even a 'force stop' of the VM doesn't clear/reset the screen. If the VM is force-stopped and then started again, I am able to view and interact with the GRUB bootloader, but as soon as I try to boot into Ubuntu, the screen goes blank.

 

Any ideas or suggestions on how to fix this?

 

 

Hi @entegral, yes, if you can see the GRUB bootloader then GPU passthrough is working. When setting up an Ubuntu VM from the template, it defaults to BIOS type OVMF.

I would use BIOS type SeaBIOS for Ubuntu. So make a new Ubuntu VM, and when creating it toggle the advanced view in the top right of the template; then you can choose the BIOS type and select SeaBIOS. Give this a try :)
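For reference, the difference shows up in the <os> block of the VM's XML. The snippets below are only a sketch; the machine type and firmware paths are examples of what a 6.3-era unRAID install typically generates, and may differ on yours.

<!-- SeaBIOS VM: plain BIOS boot, no loader element -->
<os>
  <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
</os>

<!-- OVMF VM: a pflash loader pointing at the UEFI firmware -->
<os>
  <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/example_VARS-pure-efi.fd</nvram>
</os>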

On 8/1/2017 at 1:46 PM, Matoking said:

I was pointed towards this thread when I had trouble isolating my 1070 for PCI passthrough.

 

Long story short, I tried dumping my vBIOS as instructed in the video, but couldn't (the `cat` command printed I/O errors instead). Instead, I resorted to dumping the full vBIOS under Windows and using a hex editor to splice the relevant part of the ROM into a new file, using some of the partial vBIOS files uploaded here as samples. This finally allowed me to pass the GPU through to the Windows VM!

 

---

 

Anyway, I wrote a Python script that should automate this process (you give it a full ROM from techPowerUp or one you dumped using nvflash under Windows), and it should create a patched ROM that you can use to make GPU passthrough work.

 

I passed a few ROMs I downloaded from techPowerUp through the script and compared them to what you guys uploaded here, and so far the Pascal vBIOS files appeared to match, bit for bit. Still, I can't stress enough that this script is based on guesswork, so it may end up bricking your GPU if you're unlucky. It does a few rudimentary sanity checks, but I would recommend dumping the partial ROM yourself if you can. For those who are pulling their hair out over not being able to do that, though, this may be a lifesaver.

 

https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher

 

This looks pretty cool - I looked at the GitHub repo, but I am really dumb with stuff like this. Can you explain the steps you used in Windows to create the patched BIOS?

 

I have an EVGA GTX 1050 Ti (https://www.techpowerup.com/gpudb/b3905/evga-gtx-1050-ti-sc-acx-2-0) that I am trying to pass through to my Win 10 VM. Thanks in advance.

2 hours ago, ice pube said:

This looks pretty cool - I looked at the GitHub repo, but I am really dumb with stuff like this. Can you explain the steps you used in Windows to create the patched BIOS?

 

I have an EVGA GTX 1050 Ti (https://www.techpowerup.com/gpudb/b3905/evga-gtx-1050-ti-sc-acx-2-0) that I am trying to pass through to my Win 10 VM. Thanks in advance.

 

I have an EVGA 1050 Ti SC card. I found a vBIOS on techPowerUp, but it said it was untested, and it didn't work (the bottom half of the screen looked fine, but the upper half was all screwed up).

 

So I pulled my own using GPU-Z: I installed the card in my old Windows box, ran GPU-Z, and extracted the vBIOS. Then I edited it with HxD to remove the NVIDIA header, put it on my server, added a reference to it in my VM XML, and it works perfectly.
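If you go the hex-editor route, one quick sanity check on the edited file (the filename here is just an example) is:

# A valid PCI expansion ROM must begin with the signature bytes 55 aa:
xxd -l 2 edited_vbios.rom    # expect: 00000000: 55aa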


Seems to work. No Code 43 error in the guest OS's Device Manager.

I followed the instructions from the second video, but I didn't try it with a monitor connected, just with a VNC remote connection.

I had tried to pass through the GPU before with another Linux-based distribution, but in that case it didn't work, or I didn't succeed.

 

Host:

OS: unRAID 6.3.5

System: Dell PowerEdge T20; CPU: Xeon E3-1225 v3 (with integrated GPU)

 

Guest:

OS: Win 8.1 Pro x64

GPU: GTX 1050 Ti (4 GB), NVIDIA driver: 376.09

 

Edited by Dorin


Hi all!

First, many thanks to gridrunner for the great tutorial on the first page :)

I downloaded the unRAID 6.3.5 trial to experiment with GPU passthrough on a Dell Precision T5600 (C600 chipset, 64 GB DDR3, dual Xeon E5-2620, GTX 770).

I get Error Code 43 in my Win10 VM after successfully installing the NVIDIA drivers.

Do you know if it's supposed to work with my hardware?

Do I need a newer motherboard?

Thanks :)


I need help setting up my 1050 Ti on my laptop for GPU passthrough with QEMU. I am using Revenge OS Arch Linux.

On 10/5/2017 at 10:50 AM, ren88 said:

I need help setting up my 1050 Ti on my laptop for GPU passthrough with QEMU. I am using Revenge OS Arch Linux.

 

Are you using unRAID?

On 05/10/2017 at 11:22 AM, Dual_Shock said:

Hi all!

First, many thanks to gridrunner for the great tutorial on the first page :)

I downloaded the unRAID 6.3.5 trial to experiment with GPU passthrough on a Dell Precision T5600 (C600 chipset, 64 GB DDR3, dual Xeon E5-2620, GTX 770).

I get Error Code 43 in my Win10 VM after successfully installing the NVIDIA drivers.

Do you know if it's supposed to work with my hardware?

Do I need a newer motherboard?

Thanks :)

It finally works for me with a GTX 970 instead of my GTX 770!!! And I didn't even need to put the dumped BIOS in the XML...

However, the performance is very poor. On the Unigine Heaven benchmark in DX11, I am at 20 FPS average... :( (normally 60-80)


I am not a gamer, but this is not typical of the VM slowdowns I have read about; I'd expect reductions of maybe 20% or so. So there might still be something not quite right in your config. If this is the sole video card, you might try the ROM file in the XML. It could also be that you are not giving the VM enough cores or memory, or not allocating matching cores and their hyper-thread siblings properly.

Review carefully and experiment, and you might find something that pumps up the video performance.



Thanks for your help.

I have tried with 4 cores, 4 GB = 26 FPS average.

I have tried with 24 cores, 16 GB = 27 FPS average.

:(

I will test with the ROM BIOS included in the XML.


@gridrunner may have other ideas. A reduction in gaming performance from >60 to 26 FPS is not typical.


@Dual_Shock, please post your XML, IOMMU groups, and your CPU thread pairings so we can see :)

Definitely try passing through the vBIOS. Your 770 probably didn't work because its vBIOS didn't support EFI, so it would only work using SeaBIOS and not OVMF. Passing through a 770 vBIOS that does support EFI will make the card start with an EFI BIOS and work; you could also flash the card, but it's much easier to use the rom file in the XML.

Check in your BIOS settings that your primary GPU is the onboard one if you have that option, and make sure that multi-monitor is off.

Also, don't mix cores from across your two CPUs.
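To make the thread-pairing advice concrete: pinning is done in the <cputune> block of the VM's XML. The cpuset numbers below are pure examples; look up your own core/hyper-thread pairings (unRAID shows them on the VM template page) before copying anything.

<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- Each vCPU pinned to a core and its HT sibling,
       all from the same physical CPU (example numbering only) -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='14'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='15'/>
</cputune>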


I followed the instructions. Hope I didn't brick anything.

 

I have a GTX 1050, which is in my primary PCIe slot. I dumped the BIOS using the command line. I didn't move the card to a secondary slot, which I hope was OK? Everything actually worked and I succeeded in dumping the BIOS. The only thing that didn't work was binding the card again: I get an error message that the card doesn't exist. I had initially unbound it.
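For context, the sysfs dump procedure being referred to boils down to the following (reconstructed from the commands quoted later in this thread; the output path is only an example, and the card must be unbound from its driver first, hence the unbind step):

cd /sys/bus/pci/devices/0000:65:00.0/
echo 1 > rom                    # make the ROM readable
cat rom > /tmp/gtx1050.dump     # copy the vBIOS out
echo 0 > rom                    # make it unreadable again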

 

Everything seems to still be working, but I am worried that I bricked something by not binding the card again?


I'm getting Error Code 43 with the latest unRAID beta release. The drivers install just fine, but I get that error code.

On 10/31/2017 at 7:35 PM, steve1977 said:

I followed the instructions. Hope I didn't brick anything.

 

I have a GTX 1050, which is in my primary PCIe slot. I dumped the BIOS using the command line. I didn't move the card to a secondary slot, which I hope was OK? Everything actually worked and I succeeded in dumping the BIOS. The only thing that didn't work was binding the card again: I get an error message that the card doesn't exist. I had initially unbound it.

 

Any thoughts on the above? My GPU (a GTX 1050 in the primary slot) is no longer bound and I don't know how to bind it again. I had unbound it to dump the BIOS, but then failed to bind it again. Any thoughts on how to do so? Thanks in advance!


Hope to get this sorted out. Let me provide some more information.

 

Context on the hardware: only one GPU (GTX 1050), installed in the primary PCIe slot; the GPU is used by unRAID and not assigned to a VM.

 

Below is what `lspci -v` gives me for the GPU. You will notice that the kernel driver is not in use (this was different before I first unbound it).

 

https://pastebin.com/8XFap1JA

 

I followed the comments to bind the card again. See the error message below:

 

root@Tower:~# cd /sys/bus/pci/devices/0000:65:00.0/
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo 1 > rom
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo 0 > rom
root@Tower:/sys/bus/pci/devices/0000:65:00.0# echo "0000:65:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
-bash: echo: write error: No such device
 

And some more info from Tools > System Devices, in case this helps with troubleshooting:

 

IOMMU group 36
    [10de:1c81] 65:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1)
    [10de:0fb9] 65:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

 

 

How can I "bind" the GPU again? And what happened when I "successfully" unbound my card?
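A general note for readers who hit the same "No such device" error: the write to /sys/bus/pci/drivers/vfio-pci/bind only succeeds after vfio-pci has been told to accept that device, via its new_id or driver_override interface. A generic way to hand the card back to the kernel instead, untested on this particular system, is a PCI remove/rescan:

# Drop the device from the PCI tree, then rescan the bus; the kernel
# rediscovers the card and binds whichever driver normally claims it.
echo 1 > /sys/bus/pci/devices/0000:65:00.0/remove
echo 1 > /sys/bus/pci/rescan

None of these sysfs changes persist across a reboot, so a reboot also puts the binding back to normal.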

On 1/11/2017 at 2:09 PM, gridrunner said:

So, was it working fine before 6.4.0-rc9f, or is this the first time you have tried?

 

For some reason Hyper-V was enabled and it didn't work, not even after disabling it. I created a new VM with Hyper-V disabled from the start, and that one worked.

Weird.


I'm switching the card in my primary slot and I have

 

<alias name='hostdev0'/>

above the address line in my XML. Do I leave this in? The other VM that I previously had in the primary slot didn't have this line:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/disks/sm961/system/gt730bios.dump'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>

Thanks

