ars92

Members

  • Posts: 286
  • Joined
  • Last visited
  • Converted
  • Gender: Undisclosed

ars92's Achievements

Contributor (5/14)

Reputation: 6

  1. Hi there, I would like to check how stable your system is now on Unraid 6.12.6, which should fully support Ryzen 7000. I was thinking of getting a Threadripper 7960X to replace my 1920X, but I think I'm going to have to go with a regular Ryzen instead, considering the limited motherboard options and general availability of Threadripper parts this time around.
  2. Thanks JorgeB as always. Will keep an eye on the disk in case anything funky happens in the future.
  3. Thanks @JorgeB, I've attached the SMART report for the specific disk. I ran a short self-test (which showed as passed) right before downloading the report (a hedged sketch of the equivalent smartctl commands is included after this post list). WDC_WD80EFAX-68KNBN0_VDH6GBBK-20231119-1818.txt
  4. So I was using my NAS and doing some file operations from Windows when trying to delete a specific folder caused a lockup on any transfer. I restarted the laptop and tried the same thing, but this time even the server stopped responding on the webGUI. After a hard reset, starting the array didn't work, as it was stuck on mounting disk3. I restarted the server in maintenance mode and followed another Unraid forum post recommending to run xfs_repair; attached is what I got after running -vL through the webGUI check (I had to use -L as it forced me to empty the log, and I went ahead since that was recommended in the other post too; a sketch of the sequence is included after this post list). After that I started the array and everything is fine, but I now have a lost+found folder (13.69 MB) with files I can't recognize. Should I be worried that the disk may be failing? Or, since the FS repair worked, did I only have a filesystem issue rather than a failing disk? Should I do/check anything else? xfs_repair_disk3.txt
  5. Awesome stuff @VRx! I'm just wondering: since I need to use RPZ (Response Policy Zone) to rewrite certain domains to different IPs, do I need to do this manually (create the rpz zone file and add it to named.conf)? I don't see such a function through webmin. (A hedged RPZ config sketch is included after this post list.)
  6. Thanks JorgeB for the prompt reply. Sure enough, today the SSD has turned read-only; at least, I'm not able to start any of my VMs anymore. I managed to copy files out of the VMs two days back (since some of the vdisks couldn't be copied out in their entirety), and the appdata backup was already there thanks to CA Backup (thanks Squid for this!!). Planning to get a pair of Crucial P5 Plus drives, since the 5-year warranty on my two Evo Plus drives ended two months ago in June....lol. The SN700 seems fun but is way too expensive in my country for some reason.....

     Update: I've gotten the replacement SSD and got the disk replaced without doing any reassigning, etc., since the old drives have nothing useful on them anymore. Everything looks good and the docker service is back up, but this is worrying me a bit. I have set up scrub and balance to run monthly now, just in case it helps in the future (I will set up the script suggested by JorgeB soon), but when I run "perform full balance" the page refreshes almost immediately (maybe because there's nothing on the disks) and the recommendation message doesn't go away. I then tried running the below CLI command and got the below output, but the GUI still shows the same message. Should I just ignore this? (A hedged sketch of a manual full balance is included after this post list.)
  7. Hey all, first time having a potential hardware issue, so I apologize if I missed certain prerequisite details. My server has been running just fine for a long time; I even replaced a disk a few days back by upgrading parity and using the old parity disk as a data drive, and everything has been great even after that. But last night docker started acting weird, and I noticed errors in my cache pool when I downloaded the SMART report. I ran a memtest, which went on for 13 hours over 4 passes, and it passed, so I guess the 4 RAM sticks are fine. The vdisks can't be copied into my array past a certain point, so I guess they are corrupted. Good thing my appdata backs up every month using CA Backup. I just want to understand whether my M.2 drives are going bad and I should purchase new ones, or whether I should try my luck by reformatting and reusing them. Attached are the diagnostics taken after running a scrub (which couldn't correct any of the errors), the output of a command JorgeB recommends running (it shows a whole lot of errors!), and the SMART reports from both cache disks. I can't do any cable checks since they are M.2 drives connected directly to the motherboard. I have a third drive used as an unassigned disk, which seems to be fine for now (an ADATA drive bought a few years after these two). (A sketch of checking and resetting the btrfs device stats is included after this post list.)

     From the scrub page:

     UUID: d7811189-42b8-4d37-a4d0-dae7ee9e73f6
     Scrub started: Tue Aug 15 21:03:35 2023
     Status: aborted
     Duration: 0:09:38
     Total to scrub: 478.01GiB
     Rate: 833.38MiB/s
     Error summary: read=135
       Corrected: 0
       Uncorrectable: 135
       Unverified: 0

     root@Tower:~# btrfs dev stats /mnt/cache
     [/dev/nvme0n1p1].write_io_errs 0
     [/dev/nvme0n1p1].read_io_errs 130
     [/dev/nvme0n1p1].flush_io_errs 0
     [/dev/nvme0n1p1].corruption_errs 2
     [/dev/nvme0n1p1].generation_errs 0
     [/dev/nvme2n1p1].write_io_errs 354329
     [/dev/nvme2n1p1].read_io_errs 339856
     [/dev/nvme2n1p1].flush_io_errs 1334
     [/dev/nvme2n1p1].corruption_errs 2806
     [/dev/nvme2n1p1].generation_errs 0

     tower-diagnostics-20230815-2115.zip tower-smart-20230815-2130.zip tower-smart-20230815-2131.zip
  8. Container is showing as "not available" for me while other dockers are fine. Anyone else with the same issue?
  9. I don't know if this will help those of you who are trying to use an Nvidia GPU on an OS newer than High Sierra, but the below worked for me for each and every release (up to Monterey) the moment I made the change. Before that I always got a black screen, which blocked me from even trying to install OCLP to get graphics acceleration working with my 1070 Ti.

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
       </source>
       <rom file='/mnt/user/Downloads/Softwares/Galax1070ti.rom'/>
       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
     </hostdev>

     The bus value in the guest-side <address> element (0x03 above) had always been 0x00 for me, but I noticed it was 0x03 by default on my High Sierra VM and 0x00 on anything newer. The moment I changed it, lo and behold, I got the nice, pixelated UI you get when drivers aren't installed. OCLP works perfectly after that. This guide was needed too, of course: How to Enable NVIDIA WebDrivers on macOS Big Sur and Monterey
  10. For me the longer boot started at 6.8.3 and wasn't there in 6.7.0 and below. But it's not that big of a deal, as it's perhaps 10 seconds or so longer.
  11. Thankful that this docker exists. I was about to get down and dirty trying to enable secured connections to Emby and Jellyfin, which have been unsecured and accessible through the WAN for a long time (crazy, I know!). That looked kind of complicated, so I went and looked into reverse proxies; I'm a network engineer by day, so I deal with this daily, but with enterprise solutions. Nginx and the others still seemed pretty involved, and then this popped up through CA. I set up both services through separate DDNS entries which sync to the CNAME on my router's DNS, so manual IP updates aren't needed; I may still need to refresh my hostname every 30 days, though, since it's a free account. This barely took a few minutes to set up, thanks again!!!
  12. Love how well this has been working for the past couple of months. Thanks SpaceInvaderOne for creating an easy way for Unraid users to enjoy macOS!
  13. It should work great according to this site - Intel GPUs | GPU Buyers Guide (dortania.github.io). Your issue seems to be something outside of macOS/the bootloader, though. Try to make it work with Windows first and then see if you need to change anything to make it work.
  14. Happy to report that my GTX 1070 Ti is working pretty well on Big Sur using OCLP. I'll soon try Monterey to see if the same can be achieved. I mainly used the elitemacx86 instructions, but I also had to edit certain PCI slot values in the XML generated by Unraid/Macinabox, or else I was constantly stuck on the UEFI screen. Basically, I made them the same as what I have working in High Sierra, and it came right up. Love how my monitor's USB hub works on the USB controller that is passed through to the VM, while the same doesn't work on High Sierra (it disables itself right after I choose my boot drive, which signals that something in OC doesn't support it, I guess, but it's not a big deal as I think I'm going to get rid of High Sierra soon since Big Sur is working great!). I'm trying to get Bluetooth to work, though, and I can see the hardware in Hackintool but not in System Report. Still trying to figure out how I can get this to work, but something tells me it's not going to work since I don't see Broadcom under Vendor Name here.... any thoughts?
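
A minimal sketch of the SMART short self-test mentioned in post 3, run from the command line; /dev/sdX is a placeholder for the actual disk, and the webGUI attributes page shows the same data:

    smartctl -t short /dev/sdX      # start a short self-test (finishes in a couple of minutes)
    smartctl -l selftest /dev/sdX   # check the self-test log once it completes
    smartctl -a /dev/sdX            # dump the full SMART report, as in the attached .txt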
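
A sketch of the xfs_repair sequence described in post 4, assuming the array is started in maintenance mode and disk3 shows up as /dev/md3 (on newer Unraid releases the device may be /dev/md3p1 instead); the webGUI check on the drive's page runs the same tool:

    xfs_repair -nv /dev/md3   # dry run: report problems without changing anything
    xfs_repair -v /dev/md3    # attempt the repair
    xfs_repair -vL /dev/md3   # only if it refuses to run because of a dirty log; -L zeroes (empties) the log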
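
On the RPZ question in post 5: assuming, as that post suggests, that webmin has no built-in RPZ form, creating the zone by hand is the usual route. A minimal sketch in which the zone name "rpz.local", the file path, and the example records are all placeholders. First the named.conf side:

    options {
        ...
        response-policy { zone "rpz.local"; };
    };

    zone "rpz.local" {
        type master;
        file "/etc/bind/db.rpz.local";
    };

and then the zone file itself, rewriting a domain (and its subdomains) to a local IP:

    $TTL 60
    @   IN SOA localhost. root.localhost. ( 1 3600 900 604800 60 )
        IN NS  localhost.
    example.com    IN A 192.168.1.50
    *.example.com  IN A 192.168.1.50

Reload named afterwards (for example with rndc reload) for the policy to take effect.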
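
On the balance question in post 6, a hedged sketch of what a manual full balance and its status check typically look like (the original command and output weren't shown, and the /mnt/cache mount point is an assumption); on a nearly empty pool the balance completes almost instantly, which would match the page refreshing right away:

    btrfs balance start --full-balance /mnt/cache   # rewrite every block group on the pool
    btrfs balance status /mnt/cache                 # confirm it is running / has finished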
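
Finally, on the error counters in post 7: the values reported by btrfs dev stats are cumulative and only reset when explicitly zeroed, so once the failing drive is out they can be cleared to give a clean baseline. A small sketch, again assuming the pool is mounted at /mnt/cache:

    btrfs device stats -z /mnt/cache   # print the current counters, then reset them to zero
    btrfs scrub start /mnt/cache       # re-scrub; any new errors would point at the replacement hardware
    btrfs scrub status /mnt/cache      # check scrub progress and the error summary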