All Activity


  1. Past hour
  2. No, not yet. I will recheck in the next few days.
  3. Am I reading this right that you have to manually update the library through the terminal every time you add new content? The library will not automatically update when new downloads are added?
  4. Just found a solution in case the problem shows up again: apparently NPM has trouble with ".certbot.lock" files: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2881
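     A minimal sketch of that workaround, assuming the Nginx Proxy Manager letsencrypt data is mapped to ./letsencrypt on the host and the container is named nginx-proxy-manager (both names are assumptions; adjust to your own setup):
       #!/bin/bash
       # Stop the NPM container so certbot is not running while cleaning up
       docker stop nginx-proxy-manager   # container name is an assumption
       # Remove stale certbot lock files that can block certificate renewals
       find ./letsencrypt -name '.certbot.lock' -print -delete
       # Start the container again
       docker start nginx-proxy-manager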
  5. Thanks. I ended up manually deleting the folder using Krusader. I was able to delete the folder (containing the sqlite database) on my cache drive, but I was unable to move the audiobookshelf folder from appdata to the trash; it didn't give me any errors, but it also didn't seem to do anything, despite refreshing and trying a few times. Reinstalling did seem to work and let me create a user and password, so mission success!
  7. My drives aren't all detected when I start the array after moving to a new machine (new, but the same board). I put the drives into a hard-drive toaster and all of them are detected normally on my regular laptop. I've provided diagnostics and an image of what the array looks like (apologies for the quality). Would anyone be able to help me figure out what's going on? I have a backup of most of the data. Any input is greatly appreciated. starforge-diagnostics-20240411-0454.zip
  8. [Backblaze_Personal_Backup] Well, in addition to the slow upload speeds that I and (mostly) other Unraid users running the BPB docker have been seeing and trying to mitigate for at least the past 2-3 months, I've had a new issue since yesterday afternoon: the BB UI just keeps showing that everything is up to date, and when I hit the 'Backup Now' button it scans and finds no changes, even though I know there have been file changes in my source folder/'drive'. While BB was uploading for me over the past couple of weeks at, at most, about 20 Mbps (per Glances; BB's own numbers under Settings->Performance are about 0.62-0.63 Mbits/sec max) with the number of threads maxed at 100, it was at least uploading something; now it's not even doing that, and it isn't showing an obvious error or anything. I really don't want to get conspiratorial, but this, along with the speed issues that primarily appear to be affecting Unraid users of the backblaze-personal-wine-container docker since about January-February, really brings one of Ian Fleming's more famous quotes to mind ("Once is happenstance. Twice is coincidence. Three times is enemy action."). It may very well be ongoing bugs in the BB application and/or the docker, and hopefully it'll be fixed, but I do wonder if BB has finally "fixed the glitch" on what have to be some of their heaviest users.
  9. Today
  10. Well spotted!!! I don't know why it had been set to 25... I changed it to 24 and everything works. Thanks a lot for the help.
  11. I believe so... I just got an AMD Ryzen 7 5800X today and it gave me that same error. I was using an old i5-3570K before, and the temps were showing.
  12. Here are my schedule settings. I'm not sure how to get into the flash drive logs - thanks again for your help.
  13. Thank you all in advance for your time reading this post. The short version of the story is that I have had this server configuration since 2012. I have updated a component here and there, but each was a straight swap looking for the cheapest upgrade. I am now looking to update/upgrade to something more current.
     What I have:
     • 24-bay server case
     • 3 PCI SATA expansion cards (LSI SAS 2008)
     • CPU: Intel Xeon E31225 @ 3.10GHz
     • RAM: 32GB DDR3
     • Board: X9SCL-II/X9SCM-II
     • Unraid is running on top of ESXi (why? because this was the setup I followed when I initially created it so long ago)
       ○ 1 cache drive
       ○ 2 parity drives (6TB each)
       ○ Mix of 3-6TB XFS-formatted drives, all filled to the brim
     Nothing has to stay if it is a blocker to what I want.
     What I want:
     • Upgrade to 2x 10TB+ parity drives to shrink the array (straightforward enough)
     • If possible, dump the 3 PCI cards and still connect all drives
       ○ Recommend any boards?
     • Speed up transcoding for 4K videos
       ○ CPU? GPU? RAM?
     • Utilize high-speed transfer (set up for 2GB Ethernet)
     • Run gaming servers for my nephew (he plays a lot of Minecraft)
     • Virtual machines
       ○ I have a few on ESXi (Windows and Linux) and want to migrate them over to Unraid
     • Configured for power efficiency (idling drives as much as possible to save power)
     I would like the community's recommendations on hardware options so I can rebuild for the long term. My budget is something reasonable, middle of the road, to achieve what I want.
  14. Just a quick follow-up with some more information on what was eating up my CPU. Just seems to be the qemu process running the guest VM, Still looking for help if anyone has any ideas. Process: /usr/bin/qemu-system-x86_64 -name guest=Test_VM,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-6-Test_VM/master-key.aes"} -blockdev {"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} -blockdev {"driver":"file","filename":"/etc/libvirt/qemu/nvram/b802d3d5-a531-df66-286e-a275849dfcc4_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"} -machine pc-q35-7.2,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format -accel kvm -cpu host,migratable=on,topoext=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff,hv-vendor-id=none,host-cache-info=on,l3-cache=off -m 16384 -object {"qom-type":"memory-backend-ram","id":"pc.ram","size":17179869184,"host-nodes":[0],"policy":"bind"} -overcommit mem-lock=off -smp 10,sockets=1,dies=1,cores=5,threads=2 -object {"qom-type":"iothread","id":"iothread1"} -uuid b802d3d5-a531-df66-286e-a275849dfcc4 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=43,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device {"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"} -device {"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"} -device {"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"} -device {"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"} -device {"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x2.0x4"} -device {"driver":"pcie-root-port","port":8,"chassis":6,"id":"pci.6","bus":"pcie.0","multifunction":true,"addr":"0x1"} -device {"driver":"pcie-root-port","port":9,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x1"} -device {"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pcie.0","addr":"0x7"} -device {"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.2","addr":"0x0"} -blockdev {"driver":"file","filename":"/mnt/vmdisk/domains/Test_VM/vdisk1.img","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage","backing":null} -device {"driver":"virtio-blk-pci","bus":"pci.3","addr":"0x0","drive":"libvirt-2-format","id":"virtio-disk2","bootindex":1,"write-cache":"on"} -blockdev {"driver":"file","filename":"/mnt/vmdisk/domains/Test_VM/vdisk2.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null} -device 
{"driver":"virtio-blk-pci","bus":"pci.4","addr":"0x0","drive":"libvirt-1-format","id":"virtio-disk3","write-cache":"on"} -netdev tap,fd=44,vhost=on,vhostfd=46,id=hostnet0 -device {"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:14:49:cd","bus":"pci.1","addr":"0x0"} -chardev pty,id=charserial0 -device {"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0} -chardev socket,id=charchannel0,fd=42,server=on,wait=off -device {"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"} -device {"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"} -audiodev {"id":"audio1","driver":"none"} -device {"driver":"vfio-pci","host":"0000:42:00.0","id":"hostdev0","bus":"pci.5","addr":"0x0"} -device {"driver":"vfio-pci","host":"0000:42:00.1","id":"hostdev1","bus":"pci.6","addr":"0x0"} -device {"driver":"vfio-pci","host":"0000:44:00.3","id":"hostdev2","bus":"pci.7","addr":"0x0"} -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on htop_process.txt
  15. Ran out of free ports, so I recently added an Intel RES2SV240 SAS-2 expander to my LSI 9207-8i in order to add a new 18TB drive to the array as Parity 2 and move the current 14TB Parity 2 drive to a data drive. Dual-link SAS connection from the expander to the HBA and new SAS-to-SATA breakout cables; the expander is plugged into a PCI slot for power. After setting everything up and booting the system, all drives were recognized, but one drive was disabled and there were CRC errors. I changed out the SATA breakout cables and checked all connections, and then two drives were disabled. SMART tests completed without error for both drives. I ended up running the filesystem check on both drives with -L. The drives are now mountable, but their contents are emulated. When attempting to rebuild the drives from parity, the webGUI eventually becomes unavailable and the server no longer appears to be on the network. I also have a PiKVM hooked up to the server, and the Unraid console there is locked up as well. I let it run for the expected rebuild time hoping it would recover, but it did not, and I eventually did an unclean shutdown. Replaced the PSU with a brand-new larger unit, thinking the drives did not have enough power. I have since removed the new hard drive and SAS expander and returned to the original data cables, with the same issue happening during rebuild. Attempted various combinations of booting in safe mode and running the rebuild in maintenance mode or regular mode. Sometimes the rebuild will run for many hours before the system locks up, and sometimes it happens in less than 30 minutes. Ran memtest with no errors after one pass. Not sure what to do now. Diagnostics attached.
     Unraid version: 6.12.10
     CPU: Intel i5-9600K
     Motherboard: ASRock Z390 Extreme4
     RAM: 64GB G.Skill F4-3200C16-16GUK (4x 16GB) DDR4-3200
     PSU: Seasonic TX-1300
     Cache: 2x Samsung 2TB 870 EVO (zfs)
     HDDs: Various size/age shucked WD (xfs)
     HBA: LSI 9207-8i
     apollo-diagnostics-20240417-1916.zip
  16. Could this docker be run on a server that doesn't have an RTL-SDR installed? I.e., I have my main computer that I work on, and then my Unraid server, which is stored in a server closet. My HT is in my main room, which I hook up to my UHF/VHF antenna from time to time. Could I connect my antenna cable to the RTL-SDR on my system and send the data to the docker while my main system is online?
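     A minimal sketch of one common way to do this, assuming the software inside the container can consume an rtl_tcp network stream instead of a local USB dongle (that capability is an assumption about this particular docker):
       #!/bin/bash
       # On the machine that physically has the RTL-SDR plugged in:
       # serve raw I/Q samples over the network with rtl_tcp
       rtl_tcp -a 0.0.0.0 -p 1234
       # On the server side, point the SDR application at <main-pc-ip>:1234
       # instead of a local device (the exact setting depends on the software in the container)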
  17. One observation, and I'm not sure if it's just me: the CPU metrics seem to spike and fall more often with these powersave settings. Is this normal?
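     A quick, hedged way to check whether those spikes line up with the governor ramping the clocks up and down (standard Linux cpufreq sysfs paths; nothing Unraid-specific is assumed):
       #!/bin/bash
       # Show the active scaling governor for each core
       cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
       # Watch the current core clocks for a while to see how quickly they ramp
       watch -n 1 "grep MHz /proc/cpuinfo"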
  18. Hello McLean, my rough guess is that your hardware is unsuitable. At least that is what is being said here. But so that the community can help, you should attach the diagnostics. Regards
  19. I went to interact with a docker web application and noticed it wasn't running. Logged into Unraid to find that the array wasn't started. I had rebooted it a day or two ago. I ended up starting the array, and it started a parity check. I looked at the uptime and it said 1 day 13 hours, which seems weird because that would have been around 6am Monday; I definitely don't remember rebooting it at that time. It seemed like the parity check started fine, and then all of a sudden it said disk 5 had issues. I see in the notifications it says 3 disks with read errors. Before I started the array, all the disks were showing green in the status. Now disk 5 is showing as disabled. It wouldn't let me download diagnostics via the UI; I had to generate them via the CLI and transfer them to another NAS device, but they are attached here. The parity check is paused. I assume I need to cancel it; I'm not sure what to do after that. I tried looking through the diagnostics and noticed several of the disks are missing SMART reports. palazzo-diagnostics-20240417-2000.zip
  20. In the picture I've just started the parity check, but in reality it gets hotter than that. My Unraid server keeps crashing partway through every parity check since I added 3 new drives to it. I'm guessing it's because of the heat my drives go through (I don't have any fans). Could I add fans on the bottom pushing air up, stick them on with 3M tape or something, and add a little 90mm exhaust fan? More importantly, would that cool the drives? What other options do I have that would cost as little as possible?
  21. Turn on your proxy ("magic") and change the DNS: primary DNS 223.5.5.5, secondary DNS 1.1.1.1 or 8.8.8.8.
  22. 1. Create a shutdown script
     Open the User Scripts plugin and create a new script, for example named "Shutdown Sequence". In the script, add the following to control the shutdown order:
       #!/bin/bash
       # Stop all Docker containers
       docker stop $(docker ps -a -q)
       # Wait 30 seconds
       sleep 30
       # Shut down all virtual machines
       for vm in $(virsh list --name --state-running); do
         virsh shutdown $vm
       done
       # Wait until all VMs have shut down completely
       while [ ! -z "$(virsh list --name --state-running)" ]; do
         sleep 10
       done
     2. Create a startup script
     Create another script, for example named "Startup Sequence". In the script, add the following to control the startup order:
       #!/bin/bash
       # Start all virtual machines
       for vm in $(virsh list --name --state-shutoff); do
         virsh start $vm
       done
       # Wait for the VMs to boot
       sleep 60
       # Start the Docker service (not needed if the system already starts it automatically)
       # systemctl start docker
       # Start all Docker containers
       docker start $(docker ps -a -q)
     The above is for reference only; adjust the delay times to your own needs.
  23. I have had to hard-reboot my server 3 times in the last 2-3 days. It just stops working, not even pingable, although it is still powered on. I have the syslog mirrored to flash, but I couldn't find the log. It is a dual-Xeon server with an ASUS motherboard; I can provide more information if needed. I have attached the diagnostics. If anyone can give me insight, that would be great! raid-diagnostics-20240417-2107.zip
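     A minimal sketch of where to look for the mirrored syslog after a crash; /boot/logs is the usual location when "Mirror syslog to flash" is enabled, but treat the path as an assumption and adjust if yours differs:
       #!/bin/bash
       # The flash drive is mounted at /boot; mirrored syslogs normally land in /boot/logs
       ls -l /boot/logs/
       # Look at the tail end of the mirrored log from just before the lockup
       tail -n 200 /boot/logs/syslog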
  24. It feels like a network issue; try resetting the DNS.
  25. Thank you for the suggestion. I will review how to do this and give it a try.
  26. Sorry, I guess I was not really clear. Before I wiped it, I had 2 drives out of the array; only 1 of them showed in Unassigned Devices (the plugin), but both showed up fine when the plugin was uninstalled. When I set up my Unraid back in 2020 or so, the first thing I did was install most of your tools, on the same hardware, and it showed all devices just fine. I only noticed this with the upgrade to 6.12 when I was trying to troubleshoot other issues. Still, the important part is: without the plugin, Unassigned Devices shows them all; with the plugin, it shows a random selection of devices. If I add a drive to an array or pool, the Unassigned Devices plugin will flash in/out and sometimes display a few more and sometimes a few less, and this happens regardless of the array start/stop state.
     EDIT: Further testing. Added a flash drive to the array (zfs pool workaround) and started the array; the drives in the UD plugin changed, but still not all of them were shown. Stopped it, added a regular HDD, started it, and the drives in the UD plugin changed again. Stopped, added a second drive, and again the drives shown in the plugin changed. Added two more and they changed again; now I at least see one of my SSDs, but not both. "I don't believe UD is keeping up with the Unraid changes in this condition because UD thinks some disks are still assigned to the array" - this is from a nuked state; the drives were securely wiped and have not been set up at all, the USB was also securely erased, and nothing I can find is left from the prior installation.
     EDIT2: Drunk testing. Apparently, hitting the "refresh disks and configuration" button for the plugin adds the disks back one at a time with each refresh, AND it survives a reboot... Diag attached in case it tells you anything helpful for fixing the plugin. Gotta set it up for use now; if the problem persists in the future, I will circle back. TY TY for the great tools making Unraid greater! drunkenUDdiscovery-diagnostics-20240417-2354.zip
  27. I guess I misunderstood what you were doing. Your first screenshot is the stock Unassigned Devices display without the UD plugin installed. Your second screenshot is with the UD plugin installed. The devices are being discovered properly. You have not set up any array devices, and of course your array is not started. You are in a weird state, and I'm not sure that UD is detecting unassigned devices correctly. Unraid dynamically sets the unassigned devices for UD by assigning the 'devX' designation. I don't believe UD is keeping up with the Unraid changes in this condition because UD thinks some disks are still assigned to the array. This is not a situation I've tested before - i.e., no array disks assigned. Uninstall the UD plugin and set up your array. Once you've done that, start the array and then install UD to manage the unassigned disks. Post back here if you still see issues after doing that.