Krzaku

Members
  • Posts: 60
  • Joined
  • Last visited

Krzaku's Achievements

Rank: Rookie (2/14)
Reputation: 5

  1. Does this plugin handle stopping Docker containers differently from the Unraid GUI? I have an issue with Deluge: when it is started by the backup plugin, it comes up in a bugged state where the web GUI is broken, and I have to restart it manually from the Unraid GUI. Stopping/starting it manually through the Unraid GUI does not cause this issue. Any ideas what might be causing it?
  2. Lmao, the "Add SSL Certificate" button in the center isn't the same as the one in the upper right corner. The one in the center, which I was clicking, takes you straight to Let's Encrypt; only the one in the corner has the dropdown.
  3. @Foxglove when I click "Add SSL Certificate" I don't get an option to add a custom certificate; it goes straight to Let's Encrypt.
  4. Why is there no option to add a custom SSL certificate? The only choices are to not use SSL at all or to use the built-in Let's Encrypt. Nginx Proxy Manager supposedly supports custom certificates.
  5. It happened again recently; unfortunately I have nothing more than a picture of the error. After this happened, CPU usage was more or less frozen in the following state:
  6. Unraid has been acting very unreliably, hanging to the point where I have to restart the system forcibly. When it hangs, most of the time the screen is blank (screensaver?), the keyboard is unresponsive, and there is no way to connect to the server, and since Unraid doesn't save its logs anywhere persistent, the logs from those crashes are lost. Just today a VM hung (screen stuck on the last frame), but the server itself was left (semi) running. Semi, because I still couldn't shut down cleanly. I tried force stopping the VM, but that hung as well. After a few minutes and a page refresh it appeared to have stopped (not fully: the process was gone, but memory usage was still high with no process showing usage above 1%, so the VM process must have been left in some zombie-like state), so I tried rebooting the server, but that hung as well on "stopping services", at which point the server became even more unresponsive: the web UI barely responded, loading only half of the pages, and SSH was not working. These kinds of issues have been going on for over a year; half of the parity checks in the log were triggered by Unraid becoming unresponsive. Anyway, back to the point. This time I managed to capture some more information in the form of dmesg logs. They mention KVM and MMU, but that's about as far as I can understand them. I'm passing my GPU and NVMe drives to the VM. dmesg: https://pastebin.com/YXseGUWp vm definition: https://pastebin.com/PvZXmuCr kernel params (see the annotated breakdown after this list): append pcie_acs_override=downstream,multifunction iommu=pt isolcpus=1-6,9-14 initrd=/bzroot vfio-pci.ids=8086:a370,8087:0aaa,10de:1e87,10de:10f8,10de:1ad8,10de:1ad9,1033:0194,144d:a808,1987:5012
  7. All I can say is that it was working before, the only variable being the RAM amount. So you're saying that even if I isolate the first 2 cores, Unraid will still use them for its own work?
  8. I have resolved the issue by skipping the first 2 threads when passing CPUs to the VM, so now I am passing the middle 12 threads (see the thread-sibling check after this list). I still don't understand how this was causing the issue, though.
  9. As a test, I stubbed all of the devices passed to the VM anyway, which did not fix the issue: pcie_acs_override=downstream,multifunction iommu=pt vfio-pci.ids=8086:a370,8087:0aaa,10de:1e87,10de:10f8,10de:1ad8,10de:1ad9,1033:0194,144d:a804,1987:5012 isolcpus=0-5,8-13
  10. That is almost all correct, except that I'm only stubbing the WiFi card and not any other device; might this be the issue? These are my kernel params: pcie_acs_override=downstream,multifunction iommu=pt vfio-pci.ids=8086:a370 isolcpus=0-5,8-13
  11. After a few test runs it would seem that no single device causes this. The VM only ever booted (to the EFI shell) after I removed ALL devices, including the USB controller, both NVMe drives, the WiFi card, and the GPU (I tried a few combinations). Also, the magic number with which the VM boots seems to be 9 cores, not 8. And yes, when it does not boot, there are no errors in the VM logs and it doesn't stop until I force stop it.
  12. I had 32GB of RAM and 12 cores assigned to my Win 10 VM. I added another 32GB (exact same brand and model), and now that same VM won't boot with more than 8 cores, regardless of the amount of RAM. If I add more cores, the VM won't boot and the first HT core is pinned at 100% while the others sit at 0%. I set the CPU isolation correctly. I included the diagnostics; can anyone help?
  13. Bump. This is pretty annoying. I always forget to restart it manually, and then I go to work and I cannot access my server... Is there a way to restart it through the command line? I could then use user.scripts to run that script on array start (a sketch of such a script follows this list).
  14. Trying this results in an infinite boot loop for me (booting from a non-SM2263 device). Also, the method of binding the drive does nothing either (I used the vfio-pci.ids kernel param, but I assume it works the same? The drive disappeared from the available drives after rebooting but was available in the VM). Where can I download rc7 to test whether it works with that? It's a new drive I just bought, so I don't know if it worked before. The drive is an Intel 660p 1TB. @EDIT I returned the drive; there's no point hacking around it in software when there are alternatives. A pity someone as big as Intel put out such a dud.
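
The kernel append line quoted in post 6 packs several passthrough-related settings into one line. Below is the same line broken out with comments purely for readability; this is an annotation sketch, and in /boot/syslinux/syslinux.cfg the parameters must stay on a single append line, since syslinux allows neither line breaks nor inline comments there.

    append
      pcie_acs_override=downstream,multifunction   # split IOMMU groups so devices can be isolated for passthrough
      iommu=pt                                     # run the IOMMU in passthrough mode for host-owned devices
      isolcpus=1-6,9-14                            # keep these threads away from the Unraid scheduler so the VM has them to itself
      initrd=/bzroot                               # Unraid's standard initramfs
      vfio-pci.ids=8086:a370,8087:0aaa,10de:1e87,10de:10f8,10de:1ad8,10de:1ad9,1033:0194,144d:a808,1987:5012
                                                   # bind these vendor:device IDs to vfio-pci at boot; per posts 6, 10 and 11
                                                   # they cover the WiFi card, the GPU with its companion functions,
                                                   # a USB controller and the two NVMe drives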
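
Post 8's fix (skipping the first two threads) amounts to leaving the first physical core and its hyper-thread sibling to the host. Which logical CPUs are siblings depends on how the CPU enumerates its threads, so the commands below are an illustrative check rather than anything taken from the original posts:

    lscpu -e=CPU,CORE        # list each logical CPU together with its physical core
    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
                             # hyper-thread siblings of CPU 0, e.g. "0,8" or "0,1" depending on enumeration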
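
For post 13, the user.scripts plugin runs ordinary bash scripts, so a restart-on-array-start script can be a one-liner. The thread doesn't say what exactly needs restarting, so the container name below is a placeholder rather than something from the original posts; assuming the thing to restart is a Docker container, a minimal sketch would be:

    #!/bin/bash
    # Hypothetical user script: restart a Docker container when the array starts.
    # "my-container" is a placeholder name; substitute the real container.
    docker restart my-container

The same command also works from an SSH session for a one-off restart.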