Creating Linux Mint VM with nVidia GPU?



Could someone point me to a guide for installing a Linux Mint VM when you have an nVidia GPU, please? I am hitting the "black screen on boot" issue that seems to plague nVidia cards.

 

If I pass the nVidia GPU to the VM I don't even get the grub menu to add the nomodeset parameter. If I use the VNC GPU then I do get grub and can get into the desktop and install, but I cannot add the nVidia drivers.
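
To be clear about what I mean by the two setups, this is roughly how they differ in the VM's XML - a minimal sketch only, the element values are illustrative rather than my exact template output:

    <!-- VNC GPU: emulated display served over VNC -->
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl'/>
    </video>

    <!-- discrete nVidia GPU handed to the guest via vfio; the source address is the card's host PCI address from the IOMMU listing -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>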

 

I am stumped.

Link to comment

I have erased that VM and started again.

 

1. When I assign the VNC GPU, I can complete the installation properly but of course have no nvidia drivers available for the discrete GPU.

 

2. If I assign the discrete nvidia GPU and try to create a new VM, I see the BIOS splash, get the grub menu and can press 'e' to add the nomodeset parameter to the options. Regardless of whether I try the main or compatibility boot, if I press 'ctrl-x' to proceed with the boot, I just get a solid black screen with no blinking cursor. Pressing ctrl-alt-del will reboot the session and bring me back to the BIOS then grub screen. Nothing I have tried will allow me to proceed.

 

Any help please?

 

EDIT: Adding nvidia.modeset=0 to the options in the grub menu gets past the solid black screen, shows the small Mint logo with dots for a few seconds, then gives a blank screen with a blinking cursor. It doesn't get past that though.

Edited by DanielCoffey
Link to comment

I have the processor's iGPU enabled in the BIOS. unRAID boots off that and I can see the unRAID log output fine from that HDMI connection. The 780 Ti is in the first PCIe x16 slot and is not assigned to unRAID. I simply make it available to the VM on the template screen along with its sound card.

 

It all works fine in my Win10 VM (which is stopped at the moment). It is just the Linux Mint VM which is giving the typical Mint/nvidia black screen boot problems that are usually resolved by adding the nomodeset parameter at the grub boot screen.

 

I simply cannot get past the black screen when I give the 780ti to Mint in the Linux template. I can see output from grub - I can edit the grub options... I just can't get the Mint installer to load through the 780ti.

Link to comment

I am not sure, but I believe that card is Kepler architecture?

Maybe because it's older, it doesn't fully support all the states needed for a proper passthrough on some OSes?

I would suggest extracting the ROM for the card, or getting one from TechPowerUp, and then specifying the ROM file in the XML of the VM.
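
A minimal sketch of what that looks like in the GPU's <hostdev> entry - the path and filename here are just placeholders, and the host address should be whatever your card shows in your IOMMU listing:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <!-- host PCI address of the GPU (check your IOMMU groups) -->
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <!-- placeholder path for the extracted or downloaded vBIOS file -->
      <rom file='/mnt/user/domains/vbios/gtx780ti.rom'/>
    </hostdev>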

You might have better success...

 

Just to clarify, I am able to pass through 10xx cards to Mint 17 and 18 VMs without needing that nomodeset parameter...

I tried both: the second PCIe slot (no ROM required) and the first PCIe slot (with the ROM extracted and the file specified in the XML),

but I don't have an integrated GPU - I'm on the X99 platform.

 

Link to comment

You may be right, but I have just had a total computer failure... an FF POST error the moment unRAID starts off the onboard graphics... so it means a shopping trip anyway.

 

I will have to drain and tear down the entire PC since it is all water cooled with rigid tubing, but I may be able to isolate the faulty component. The GPU is well out of warranty and has had a hard life, being highly overclocked since day one. The motherboard, CPU and RAM are still in warranty.

 

I will report back once I make any progress.

Link to comment

It might be that the only solution for that 780 Ti card is the ROM file. I have my doubts that it is a long-term solution anyway - it could be that the VM will start once, then subsequent restarts will not boot any more... and a reboot (or shutdown + start) of the unRAID box will be required.

 

Try first with a 10xx card from a friend, or a 750 Ti - that one is Maxwell, I believe, which will probably work better.

 

good luck

Link to comment

I had a delay while I tested to find the source of unexplained reboots (power delivery to the motherboard in the end) and then decided to go shopping. I now have a 1050 Ti for low power Linux use and a 1080 Ti for the Win10 gaming. I have not recreated the Linux VM yet as I am still adding components back into the array (new PSU to be fitted tomorrow) so I don't know if it will automatically detect the 10xx GPU.

Link to comment

Hmm... this is really beginning to get frustrating. I am still getting the blank screen even with the new 1080 Ti as the only GPU, with or without nomodeset, nouveau.modeset=0 or grub_gfxmode=1024x768x16.

 

I have the card in its own IOMMU group, so my next step is to obtain the ROM and give the VM the ROM details. Fingers crossed that works, otherwise it is off to Ubuntu. Heck, even Windows gets it right!

 

IOMMU group 0
	[8086:591f] 00:00.0 Host bridge: Intel Corporation Intel Kaby Lake Host Bridge (rev 05)
IOMMU group 1
	[8086:1901] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 05)
IOMMU group 2
	[8086:1905] 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) (rev 05)
IOMMU group 3
	[8086:5912] 00:02.0 VGA compatible controller: Intel Corporation HD Graphics 630 (rev 04)
IOMMU group 4
	[8086:a2af] 00:14.0 USB controller: Intel Corporation 200 Series PCH USB 3.0 xHCI Controller
IOMMU group 5
	[8086:a282] 00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
IOMMU group 6
	[8086:a2e7] 00:1b.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #17 (rev f0)
IOMMU group 7
	[8086:a2eb] 00:1b.4 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #21 (rev f0)
IOMMU group 8
	[8086:a290] 00:1c.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #1 (rev f0)
IOMMU group 9
	[8086:a294] 00:1c.4 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #5 (rev f0)
IOMMU group 10
	[8086:a296] 00:1c.6 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #7 (rev f0)
IOMMU group 11
	[8086:a298] 00:1d.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #9 (rev f0)
IOMMU group 12
	[8086:a2c5] 00:1f.0 ISA bridge: Intel Corporation 200 Series PCH LPC Controller (Z270)
	[8086:a2a1] 00:1f.2 Memory controller: Intel Corporation 200 Series PCH PMC
	[8086:a2a3] 00:1f.4 SMBus: Intel Corporation 200 Series PCH SMBus Controller
IOMMU group 13
	[8086:15b8] 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V
IOMMU group 14
	[10de:1b06] 01:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
	[10de:10ef] 01:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
IOMMU group 15
	[1b73:1100] 02:00.0 USB controller: Fresco Logic FL1100 USB 3.0 Host Controller (rev 10)
IOMMU group 16
	[144d:a804] 04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
IOMMU group 17
	[1b21:2142] 06:00.0 USB controller: ASMedia Technology Inc. Device 2142
IOMMU group 18
	[1b21:2142] 07:00.0 USB controller: ASMedia Technology Inc. Device 2142

 

Edited by DanielCoffey
Link to comment

I have managed to get it to work in SeaBIOS rather than OVMF but need to understand the consequences.

 

1. Passing the 1080 Ti ROM in the VM definition (as per gridrunner's video) - no change, I get the grub menu then a blank screen

 

2. Recreate the VM with SeaBIOS. This worked in that the initial boot seemed slower but I got the Mint splash icon instead of the grub menu. I then ended up on a Mint desktop in emulated GPU mode at 640x480. I was able to complete the install, update the nVidia drivers and get a proper resolution.

 

What are the key differences between SeaBIOS and OVMF? How can I get the grub menu when in SeaBIOS? Can I switch that VM to OVMF now I have managed to create the vdisk?
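
From what I can see, the visible difference in the VM definition is the <os> block: an OVMF VM carries a UEFI firmware loader and an NVRAM file, while a SeaBIOS VM simply omits them and boots legacy-style. A sketch using the stock unRAID OVMF paths (treat the machine type and paths as illustrative):

    <!-- OVMF: UEFI firmware plus per-VM NVRAM -->
    <os>
      <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
      <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
      <nvram>/etc/libvirt/qemu/nvram/xx_VARS-pure-efi.fd</nvram>
    </os>

    <!-- SeaBIOS: no loader or nvram lines, the default legacy BIOS is used -->
    <os>
      <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
    </os>

Presumably a vdisk installed under SeaBIOS boots legacy-style with no EFI system partition, so simply pointing an OVMF VM at the same vdisk may not work.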

 

I did try creating a second OVMF Mint VM and pointing it to the vdisk from the SeaBIOS one but got thrown into a shell rather than Grub.

Link to comment

Argh! Tearing my hair out with this... now a fresh Ubuntu VM won't start properly when I assign the 1080 Ti to it. I could swear I had it working off the 1050 Ti a couple of days ago but then deleted the VM and tried to install Mint when I took out the 1050 Ti and put in the 1080 Ti.

 

The reason for all the swapping is that the motherboard is a consumer board - Asus Maximus IX Formula - and I am short of both accessible PCIe slots and also PCIe lanes. My CPU cooler blocks the first x1 slot tucked behind the primary graphics card. I have the 1080 Ti in the first slot (2.5-width air cooler), the USB PCIe card in the second GPU slot, a gap with an unused x1 slot then the NVMe drive in the last x4 slot on an ASUS Hyper M.2 x4. That last x4 slot disables the adjacent x1 slot when used. Poop! I can't put the 1050 Ti in as well because that would force the USB card into the last x4 slot and the NVMe stick would have to go on one of the onboard M.2 slots, the only accessible one of which would disable two SATA ports... which I need for the array and cache.

 

Anyway, given the ASUS Maximus IX Formula, an unassigned 1080 Ti, USB card and NVMe card, proper IOMMU groups and unRAID booted off the iGPU, here is the issue...

 

VM : Windows 10, OVMF, 1080 Ti - perfect boot, desktop displayed as expected.

 

VM : Linux Mint, OVMF, 1080 Ti - grub visible. Even with any one of nomodeset, nouveau.modeset=0, xforcevesa or grub_gfxmode=1024x768x16, all I get is a black screen, no cursor. Passing the 1080 Ti ROM to the VM makes no change.

VM : Linux Mint, SeaBIOS, 1080 Ti - no grub, slow boot, desktop displayed as expected.

 

Tried changing unRAID to use UEFI boot mode... no change with the Linux Mint OVMF VM. Left it as UEFI.

 

VM : Ubuntu, OVMF, 1080 Ti - grub visible. Either a black screen if I try nomodeset, nouveau.modeset=0 or xforcevesa, or garbled, unmoving pink "snow" with five black blocks across the top. Passing the 1080 Ti ROM to the VM makes no change.

VM : Ubuntu, SeaBIOS, 1080 Ti - SeaBIOS failed with No Bootable Device. Unable to resolve.

 

Reset unRAID back to Legacy boot mode. Restarted server. No change with Ubuntu in SeaBIOS, still no bootable device.

 

I am at my wits' end here. I NEED to get a Linux VM working and do not understand why it just is not playing ball.

 

Link to comment

Any chance of unblocking that 1st PCIe slot by rotating the CPU cooler?

Another option is to use a PCIe riser extension cable for that blocked x1 slot - but it depends on the case layout, whether you have a spare opening at the bottom to bring that USB card out beyond the motherboard footprint... Maybe have it hanging in the case and plug in devices that you are not swapping frequently (e.g. keyboard, controller, etc.).

 

All this adds to the rabbit hole you're in... :)

But I suggested these because the GPUs should stay in the two designated GPU PCIe slots - they are connected to the CPU's PCIe lanes (and not the lanes from the chipset).

 

Try without the USB card, to get your GPU cards and VMs configured as you want. If that works, then clearly you have to find a way to plug that USB card into that first x1 slot... (just note that device IDs might change, so a reconfiguration of the VMs' XML might be required after removing/adding the USB card - see the sketch below...)
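
To illustrate which part would need updating - in each passed-through device's <hostdev> entry it is the host-side <source> address that can shift when PCIe cards are added or removed; the guest-side address underneath can usually stay as it is. The values here are just an example:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <!-- host PCI address: this is what can change when cards are added or removed -->
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
      <!-- guest-side address assigned in the VM definition -->
      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
    </hostdev>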

 

I will check how my Mint VM is configured. It might be that I'm also not seeing a grub menu, just the green dots loading...

 

 

Link to comment

Thanks for the response.

 

Yes, I need the USB card - none of the onboard rear-facing USB ports support reset, which is why I got it. The cooler isn't any smaller the other way round - it is a Noctua NH-D15. I should have bought the NH-D15S but didn't know about the issue until this one arrived, and I have fitted it now. There is just a couple of mm too little clearance for the USB card in slot 1, so when I get a server board I will check the clearance again.

 

I could certainly consider a PCIe x1 riser cable but again once I am on a server board it will be a non-issue. The one I have my eye on has an exposed NVMe slot which will free a PCIe slot. I will then be able to fit the 1080 Ti (2.5 slots) and its associated USB card, the 1050 Ti (2 slots) and its USB card (not yet purchased) and that will probably be all I need to fit. This board either locks the NVMe away under a plastic shroud (no good for cooling) or sticks it up in the air (and kills off two SATA ports).

 

The two GPUs can certainly be swapped round on this board. When there is only one, it gets x16 but as soon as you fit the second they both drop to x8/x8 - the perils of consumer boards of course - not enough lanes to go round. I may try the 1050 Ti and see if Mint or Ubuntu will install with it fitted. If it does, there will likely be something different in the 1080 Ti BIOS which Linux doesn't like.

Link to comment

Having swapped the 1080 Ti for the 1050 Ti I have spotted the following...

 

1. unRAID booted off the iGPU, 1080 Ti unused in slot 1

 

Linux Mint 18.2 OVMF - blank screen after GRUB. Nomodeset, nouveau.modeset=0, nvidia.modeset=0 no effect

Ubuntu 16.04 OVMF - pink garbled screen after GRUB every time

 

2. unRAID booted off iGPU, 1050 Ti unused in slot 1

 

Linux Mint 18.2 OVMF - blank screen after GRUB. Nomodeset, nouveau.modeset=0, nvidia.modeset=0 no effect

Ubuntu 16.04 OVMF - perfect boot, Ubuntu installs correctly first time with no extra parameters needed in GRUB

 

There is clearly a difference between the 1080 Ti and 1050 Ti as far as Ubuntu is concerned but Mint likes neither of them. I wonder if it is the ASUS Maximus IX Formula incorrectly handling the virtualisation or if the ASUS 1080 Ti STRIX card is not playing nicely with virtualisation?

 

Windows 10 doesn't seem to mind either card, apart from the fact that you have to install it with VNC then switch to the discrete GPU once the install is finished, as explained by gridrunner.

Link to comment

Totally understand the situation.

The D15S is asymmetrical and might allow for better compatibility (but it might run into the top fan of the case, etc...)

But with the D15... is there no option to take off the outer fan? It would only increase temps by 2-3 °C.

 

 

I checked now - I also don't see the OVMF TianoCore splash logo, nor the grub menu, when booting Linux Mint 18.2 - I just get the dots...

Then the Mint logo with the loading dots, then a blank screen for 1-2 seconds, and finally the login screen - with a 1060. But this does not bother me currently.

 

Also give it a try with the ROM given explicitly to the GPU, even if it's not required - maybe a downloaded one will help you more than the "who knows what" customised BIOS ASUS might have put in there...

 

good luck...

Link to comment

While the D15 does block the very first PCIe x1 slot, it is not because of the fans. The issue is the space between the back of the cooler and the case. I could fit a very short x1 card in there but my USB card is just a couple of mm too long.

 

I could also rotate the cooler 90 degrees to gain some space but I do not have a vented side panel. It is oriented front to back at the moment with an acrylic side panel in the CaseLabs S8 and I would like to keep it that way for now.

 

Please could you have a look at the XML for your Linux Mint VM and show it to me because I would like to see how it is structured differently from what I am trying to use. Also which version of unRAID are you using and is it in Legacy or EFI boot mode?

Link to comment

I'm using the latest stable, 6.3.5.

 

Not sure about your question - is that a setting in the UEFI BIOS or some configuration in unRAID?

If BIOS, I think I'm using Legacy OS (non-UEFI) - honestly I am a bit confused about this and not sure if/how it matters.

 

Here's the XML. Just note it's passing through a secondary 1060 GPU (so no ROM file). Also, I'm passing through an onboard ASMedia USB controller (bus 0x07).

 

<domain type='kvm'>
  <name>MintCin</name>
  <uuid>xx</uuid>
  <description></description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
    <emulatorpin cpuset='0,6'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/xx_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/Samsung_SSD_750_EVO_500GB_xx/MintCin/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </controller>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/lindrive'/>
      <target dir='lindrive'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='xx'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>


 

Link to comment

Thanks for posting the XML.

 

The only significant difference I can see is that I have the GPU described as...

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/domains/vbios/asus1050ti.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>

and you have...

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </hostdev>

The difference is that my guest-side address lines use a different bus for each passed-through PCIe device (0x02/0x03 and so on) with the same slot (0x00), while yours use the same bus (0x02) with a different slot for each device (0x05/0x06 and so on). I have no idea where the template picks those up from, but it is the only significant difference I can find.

 

I have tried passing the ROM. I am on OVMF and Q35-2.10. I just get the black screen.

 

Ubuntu on the other hand (with the same GRUB options) sees the card and boots perfectly to the desktop. I have no idea what Mint is doing differently.

Link to comment
