amelius

  1. So, I just switched motherboards to a Gigabyte Aorus Extreme X399 from an ROG Zenith Extreme. One of the things I like about this new board is that instead of needing a PCIe expansion card for 10GbE, it has it built into the motherboard. Both of these boards use Aquantia for the 10 gigabit port. On the new board, plugging into the 10GbE port doesn't seem to do anything, while using either of the two 1GbE ports works fine. When checking from the terminal, I observed the following:

         root@Atlas:~# lshw -class network
           *-network
                description: Ethernet interface
                product: I210 Gigabit Network Connection
                vendor: Intel Corporation
                physical id: 0
                bus info: pci@0000:03:00.0
                logical name: eth0
                version: 03
                serial: e0:d5:5e:e0:dc:42
                size: 1Gbit/s
                capacity: 1Gbit/s
                width: 32 bits
                clock: 33MHz
                capabilities: pm msi msix pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
                configuration: autonegotiation=on broadcast=yes driver=igb driverversion=5.4.0-k duplex=full firmware=3.11, 0x80000469 ip=192.168.2.107 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
                resources: irq:39 memory:c0e00000-c0efffff ioport:2000(size=32) memory:c0f00000-c0f03fff memory:c0d00000-c0dfffff
           *-network UNCLAIMED
                description: Network controller
                product: Wireless 8265 / 8275
                vendor: Intel Corporation
                physical id: 0
                bus info: pci@0000:04:00.0
                version: 78
                width: 64 bits
                clock: 33MHz
                capabilities: pm msi pciexpress cap_list
                configuration: latency=0
                resources: memory:c1100000-c1101fff
           *-network DISABLED
                description: Ethernet interface
                product: I210 Gigabit Network Connection
                vendor: Intel Corporation
                physical id: 0
                bus info: pci@0000:05:00.0
                logical name: eth1
                version: 03
                serial: e0:d5:5e:e0:dc:44
                capacity: 1Gbit/s
                width: 32 bits
                clock: 33MHz
                capabilities: pm msi msix pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
                configuration: autonegotiation=on broadcast=yes driver=igb driverversion=5.4.0-k firmware=3.11, 0x80000469 ip=192.168.2.87 latency=0 link=no multicast=yes port=twisted pair
                resources: irq:24 memory:c0b00000-c0bfffff ioport:1000(size=32) memory:c0c00000-c0c03fff memory:c0a00000-c0afffff
           *-network UNCLAIMED
                description: Ethernet controller
                product: AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion]
                vendor: Aquantia Corp.
                physical id: 0
                bus info: pci@0000:07:00.0
                version: 02
                width: 64 bits
                clock: 33MHz
                capabilities: pciexpress pm msix msi vpd bus_master cap_list
                configuration: latency=0
                resources: memory:c0840000-c084ffff memory:c0850000-c0850fff memory:c0400000-c07fffff memory:c0800000-c083ffff
           *-network:0 DISABLED
                description: Ethernet interface
                physical id: 1
                logical name: gretap0
                capabilities: ethernet physical
                configuration: broadcast=yes multicast=yes
           *-network:1 DISABLED
                description: Ethernet interface
                physical id: 2
                logical name: erspan0
                capabilities: ethernet physical
                configuration: broadcast=yes multicast=yes
           *-network:2 DISABLED
                description: Ethernet interface
                physical id: 3
                logical name: bond0
                serial: 9e:ed:8b:10:af:a8
                capabilities: ethernet physical
                configuration: autonegotiation=off broadcast=yes driver=bonding driverversion=3.7.1 firmware=2 link=no master=yes multicast=yes

     So it looks like the adapter in question is the Aquantia AQC107 listed as "*-network UNCLAIMED".
     I'm not entirely sure how to fix this. General Linux forum posts suggest that a driver is missing and recommend fixing it with sudo apt-get install linux-backports-modules-jaunty, but that won't work on Unraid because it isn't a Debian-based distribution. What's odd to me is that a very similar card, just an external one (even the same brand), worked fine. I'd like to figure out how to get this card claimed and working properly. I suspect some sort of driver issue, but that's where I'm stuck; I'm not sure where to go from here. Thank you, appreciate the help!
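     For anyone hitting the same thing: a quick way to check whether the Unraid kernel even ships the Aquantia driver is below. This is only a sketch from the Unraid shell; it assumes the NIC is still at 07:00.0 as in the lshw output above, and that the relevant in-tree kernel module is named atlantic (the upstream driver for the AQC107/AQtion chips).

         # Show which kernel driver (if any) is bound to the Aquantia NIC
         lspci -nnk -s 07:00.0

         # Check whether the atlantic module was built for the running kernel
         find /lib/modules/$(uname -r) -name 'atlantic*'

         # If the module file exists but isn't loaded, try loading it and re-check
         modprobe atlantic && dmesg | tail -n 20
         lshw -class network | grep -A 4 AQC107

     If the find comes back empty, the driver simply isn't in that kernel build, which would explain the UNCLAIMED state; that would take a newer Unraid release (or a custom-built module), not an apt package, to fix.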
  2. So, I added

         [TimeMachine]
         path = /mnt/user/TimeMachine
         ea support = Yes
         vfs objects = catia fruit streams_xattr
         fruit:encoding = native
         fruit:locking = none
         fruit:metadata = netatalk
         fruit:resource = file
         fruit:time machine = yes
         fruit:time machine max size = 2.0 T

     to my "Samba Extra Configuration" under SMB settings, and cat /etc/avahi/services/smb.service shows:

         <?xml version='1.0' standalone='no'?><!--*-nxml-*-->
         <!DOCTYPE service-group SYSTEM 'avahi-service.dtd'>
         <!-- Generated settings: -->
         <service-group>
           <name replace-wildcards='yes'>%h</name>
           <service>
             <type>_smb._tcp</type>
             <port>445</port>
           </service>
           <service>
             <type>_device-info._tcp</type>
             <port>0</port>
             <txt-record>model=Xserve</txt-record>
           </service>
           <service>
             <type>_adisk._tcp</type>
             <port>0</port>
             <txt-record>dk1=adVN=TimeMachine,adVF=0x82</txt-record>
           </service>
         </service-group>

     Except I have the following two issues. When the block

         <service>
           <type>_adisk._tcp</type>
           <port>0</port>
           <txt-record>dk1=adVN=TimeMachine,adVF=0x82</txt-record>
         </service>

     is added to that file, the other shares disappear over SMB and I can no longer access them. The TimeMachine share appears as an option for Time Machine, but when connecting to it, it fails to add, saying it doesn't support the right features. When this segment is removed, the share doesn't show up in Time Machine, but everything else works fine.

     I'm perfectly fine writing a script to fix it if Unraid overwrites the stuff that makes this work. Can you guys tell me how you got it working? @limetech @gfjardim
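     In case it helps with the script idea above, here's a minimal sketch of a check-and-repair script. It assumes the _adisk record belongs in /etc/avahi/services/smb.service exactly as shown above, and that Unraid's avahi init script lives at /etc/rc.d/rc.avahidaemon; both paths may need adjusting for your install.

         #!/bin/bash
         # Re-add the Time Machine advertisement if it has been overwritten, then restart avahi.
         SVC=/etc/avahi/services/smb.service

         if ! grep -q '_adisk._tcp' "$SVC"; then
             # Insert the _adisk service block just before the closing tag
             sed -i 's|</service-group>|  <service>\n    <type>_adisk._tcp</type>\n    <port>0</port>\n    <txt-record>dk1=adVN=TimeMachine,adVF=0x82</txt-record>\n  </service>\n</service-group>|' "$SVC"
             /etc/rc.d/rc.avahidaemon restart   # assumed location of Unraid's avahi init script
         fi

     Note this only covers the Bonjour advertisement side; the disappearing-shares problem above may need changes on the Samba side instead.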
  3. I can't seem to get the system temperature plugin working. I installed Nerd Tools and got Perl, installed the plugin, detected drivers, and loaded drivers. However, every dropdown only has the one option, "Not Used", and rescanning doesn't help. The driver I loaded was lm78. My CPU is a TR 1950X and my mobo is an Asus ROG Zenith Extreme X399. Any ideas? I saw this thread about having to compile drivers or something? https://github.com/groeck/lm-sensors/issues/16
  4. Hi, I'm having trouble with this. I installed Perl via NerdPack, installed the plugin, and clicked detect; it found the lm78 driver, but then it could not detect any temperature/fan sensors. None of the dropdowns have any options in them, and I'm not sure what to do. I saw this, and I'm wondering if I need to update lm_sensors and install it87. Can someone advise me on how to do that, since I can't seem to find any package manager or even GCC in Unraid? Thanks!
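     For what it's worth, the same probing can be done from the command line, outside the plugin, which at least shows whether the kernel has a driver for this board's sensor chip at all. A rough sketch, assuming lm_sensors is installed via NerdPack and that (as the thread linked in the post above suggests) the Zenith Extreme's Super I/O chip wants the it87 module rather than lm78:

         # Let lm_sensors probe for supported chips; it prints suggested modules at the end
         sensors-detect

         # Try the it87 module instead of lm78, then see what readings show up
         modprobe -r lm78 2>/dev/null
         modprobe it87
         sensors

     If modprobe it87 fails or sensors still shows nothing, the stock kernel's driver probably doesn't support this particular Super I/O chip yet, which would match the GitHub issue linked above.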
  5. Update: Unraid can now pass through GPUs properly on Threadripper as well.
  6. I have no idea if or when it changed, but as long as you set the property I listed above, it works fine. I have three 1080 Tis and a Titan V passed through, all working. The only caveat is that if you don't shut down a VM gracefully (you power it off instead of shutting down), the GPU associated with that VM won't work until you reboot the whole host.
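     If these are libvirt/KVM VMs on Unraid, the graceful-versus-forced distinction above maps onto the following virsh commands (just a sketch; the VM name is a placeholder):

         # Graceful shutdown: sends an ACPI shutdown request so the guest can release the GPU cleanly
         virsh shutdown Windows10-VM

         # Forced power-off: equivalent to pulling the plug; after this the passed-through GPU
         # typically stays unusable until the host is rebooted
         virsh destroy Windows10-VM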
  7. Yeah, Threadripper still has issues with KVM-based virtualization. So far, the only thing I've seen that works is ESXi, which uses a different virtualization system entirely. I've heard that the Windows hypervisor also works, but I've not had a reason to test that.
  8. I haven't really bothered with temperature monitoring at all yet. It supposedly might require extra drivers (I'm not really sure), but I don't really care; I have a custom watercooling loop with 600 W more thermal dissipation capacity than the balls-to-the-wall TDP my system components can generate overclocked. As for hypervisor.cpuid.v0 = FALSE disabling performance enhancements, maybe it does, but when I benchmarked it against another rig with a 1080 Ti in it, the performance difference was pretty negligible (and even then, the one that won out simply had a slightly higher overclock anyway). I also tested it in a game and saw maybe a 2 fps difference at 4K and 1440p. I would say that puts any potential performance hit squarely in the "entirely imperceptible" category.
  9. Tried that; it didn't help. I looked around, and it seems this is an issue with KVM virtualization, and that the only hypervisors that work are a) the Windows hypervisor and b) ESXi. I've tested ESXi, and it's working well for me. If you want to make use of your system rather than wait for fixes for this issue, you might want to give ESXi a shot.
  10. Idk where you heard that. Sure, that's what their site *claims*, but in reality it's totally not an issue; you just need to set hypervisor.cpuid.v0 = FALSE and it's all fine. Also, ESXi handles that D3 issue no problem, since you can configure how it powers PCI devices on and off. (Tip: if you want to avoid rebooting between uses of the same GPU, make sure you don't do a forced shutdown on a VM that has a GPU passed through; only a proper shutdown will make it available again to the same (or another) VM without a reboot of the host.) If you want a guide that outlines passthrough in ESXi, https://www.reddit.com/r/Amd/comments/72ula0/tr1950x_gtx_1060_passthrough_with_esxi/ has a rough outline that works perfectly. I tested it with my configuration and had no issues getting up and running on both Windows 10 and Ubuntu 16.04, with a GTX 1080 Ti passed through to each (though on Ubuntu I ran into an annoying login loop I've run into before, but that's just Xorg and the Nvidia drivers not playing nice, not an issue with the passthrough).
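     For reference, hypervisor.cpuid.v0 = "FALSE" is a per-VM setting that ends up in the VM's .vmx file; it can also be added from the ESXi host client as an advanced configuration parameter. A rough sketch from the ESXi shell, with the datastore path and VM id as placeholders (power the VM off first):

         # Append the flag to the VM's .vmx if it isn't there already
         VMX=/vmfs/volumes/datastore1/Win10/Win10.vmx
         grep -q 'hypervisor.cpuid.v0' "$VMX" || echo 'hypervisor.cpuid.v0 = "FALSE"' >> "$VMX"

         # Make ESXi re-read the config (get the numeric VM id from: vim-cmd vmsvc/getallvms)
         vim-cmd vmsvc/reload 1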
  11. So after struggling to get things working with Unraid and looking around some forums, I ended up pivoting to ESXi, which has no problem with GPU passthrough (though the configuration is a pain) but a shocking amount of difficulty passing through USB devices (also surmountable). The only downside is the lack of convenient software RAID support.
  12. Hi, I've been trying to get my GPU(s) to properly pass through to my VMs, and I keep encountering two weird things.

     1) If I don't reboot between VM startups, I get a weird error: "internal error: Unknown PCI header type '127'"

     2) More problematically, "vfio: Unable to power on device, stuck in D3" appears in the logs whenever I boot up a VM with GPU passthrough. The GPU doesn't get passed through, nothing shows up on the screens, and if I check the output over VNC, the GPU doesn't appear in Device Manager on Windows, and on Ubuntu the whole OS seems to hang at login.

     System specs:
     Threadripper 1950X
     Asus ROG Zenith Extreme motherboard
     64 GB DDR4-3000 memory
     3x Samsung 960 Evo (this is my array)
     2x GTX 1080 Ti Founders Edition (what I'm trying to pass through, one to a Windows 10 VM, one to an Ubuntu 16.04 VM)

     I've thus far tried blacklisting the GPUs and manually specifying the ROM dump. Both VMs use OVMF and Q35, and both work fine when only VNC is specified as the graphics adapter. I've also tried hiding KVM from the guest, to avoid the Nvidia issue where the GPUs don't work if they detect KVM, but I'm not sure if I did that right. VM XML files are attached.

     Syslinux config:

         default menu.c32
         menu title Lime Technology, Inc.
         prompt 0
         timeout 50
         label unRAID OS
           menu default
           kernel /bzimage
           append iommu=pt vfio-pci.ids=10de:1b06 initrd=/bzroot
         label unRAID OS GUI Mode
           kernel /bzimage
           append initrd=/bzroot,/bzroot-gui
         label unRAID OS Safe Mode (no plugins, no GUI)
           kernel /bzimage
           append initrd=/bzroot unraidsafemode
         label unRAID OS GUI Safe Mode (no plugins)
           kernel /bzimage
           append initrd=/bzroot,/bzroot-gui unraidsafemode
         label Memtest86+
           kernel /memtest

     So, anyone got any ideas?

     ubuntuvm.xml
     windowsvm.xml
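     One thing worth checking with a setup like this (a sketch only, not a definitive fix): whether vfio-pci actually claimed both functions of each card. vfio-pci.ids=10de:1b06 matches only the GPU itself; a 1080 Ti's HDMI audio function has its own device ID (commonly 10de:10ef, but verify with lspci) and usually needs to be bound to vfio-pci as well, and odd IOMMU grouping can also produce errors like these.

         # List every Nvidia function with its IDs and the driver currently bound to it
         lspci -nnk | grep -i -A 3 nvidia

         # If the audio function (e.g. 10de:10ef -- confirm against the output above) is still on
         # snd_hda_intel, add it to the append line alongside the GPU, e.g.:
         #   append iommu=pt vfio-pci.ids=10de:1b06,10de:10ef initrd=/bzroot

         # Show how the devices are split across IOMMU groups
         for d in /sys/kernel/iommu_groups/*/devices/*; do
             n=${d#*/iommu_groups/}; n=${n%%/*}
             printf 'IOMMU group %s: %s\n' "$n" "${d##*/}"
         done | sort -V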
  13. Is it possible to boot an Unraid VM from an existing partition while maintaining the functionality of the partition (to allow it to still be booted directly)? And is it possible to change PCIe device passthrough without recreating the VMs?
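     Not a definitive answer, but as a sketch of how both of these usually look with libvirt on Unraid (the disk path and VM name below are placeholders): an existing physical disk can be handed to a VM as a raw block device, and the passthrough hardware of an existing VM can be edited in place.

         # Attach an existing physical disk (by stable id) to a defined VM as a raw device;
         # an OVMF guest can then boot whatever OS is already installed on it
         virsh attach-disk Windows10-VM /dev/disk/by-id/ata-EXAMPLE-DISK vdb --config

         # Change which PCIe devices are passed through by editing the VM definition
         # (the <hostdev> entries) directly, without recreating the VM
         virsh edit Windows10-VM

     Whether the same install still boots on bare metal afterwards depends on the guest OS's drivers and bootloader, so treat this as a starting point rather than a guarantee.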