ZipsServer

  1. Sorry for taking so long to respond. The parity build, and then work, got in the way. I almost made the mistake of using non-matching PSU cables. It's kind of stupid that the cables aren't made to the same specs... The 10TB drives are not appearing at all. I can hear them spin up when I plug power in, but otherwise they are not identified in the BIOS or as /dev/sd* devices. I also tried powering both 10TB drives from the original power supply they were working with (so it is not a 3.3V issue). So for whatever reason, both 10TB drives started to error and then just died. I purchased them at the same time in 2020 and they are just a few months past their warranty... These are my first drives to fail. I still have some 3TB REDs from 2013 doing 300k reads/writes a day.
  2. Hi, it's me again. So I bought a new HBA (LSI 9211-8i), new SFF-8087 cables, and a new 650W PSU just to power the HDDs. All the new gear appears to work well. However, the previously problematic 10TB drives are no longer appearing at all; they seem to be completely dead. I have tried them on both PSUs and both HBAs. Are there any other ways to confirm they are completely dead? While waiting for the new gear to arrive I managed to transfer all the data from the 10TB disks to other disks in the array, then removed them from the system and left them sitting on a shelf. EDIT: The good news is that I am building parity with no issues. mastertower-diagnostics-20220917-1641.zip
  3. @trurl I actually just swapped drives around in the cage that seems to be problematic. Only disk2 is unmountable now. I am going to try moving disk2 out of that cage... maybe two or more slots in that cage are bad... or maybe I need to buy some compressed air? But to your original question: yes, I think that if the actual superblock is bad then I can use fsck to restore one of the backups. However, it seems like if the hardware problem is resolved then I might not have to worry about that. (I had similar issues with my external drives when the USB header was loose.) mastertower-diagnostics-20220902-2225.zip
  4. Update: I am now having disk errors with both disk2 and disk3. The log said they had bad superblocks and were unmountable, so the issue now exists even with the parity drive completely removed. I just ordered a new LSI 9211-8i with SAS breakout cables, but it sounds like I should also purchase a new PSU? Any recommendations for powering 10-15 SATA drives? Most PSUs don't even seem to come with cabling for that many drives... EDIT: I would also accept recommendations on how to move away from the ICY DOCK cages (if they are the problem) to something quieter. I have this Xigmatek case
  5. Just realized I have been an Unraid user for 10 years now... @trurl I just looked back at my Newegg history: it is a Rosewill 550W 80+ Gold purchased back in 2012, along with the mobo and other components... might be time to replace some parts? @kizer No, I am only using the standard power cables that came with the PSU. The drive cage could be suspect; I am using the ICY DOCK MB974SP (also purchased in 2012).
  6. @JorgeB Correct, I connected disk3 to onboard SATA with a different generic SATA cable. The power cable was the same; I have not moved power cables around. disk3 and the other drives with errors are all in the same hot-swap cage, so their power cables are connected through the cage. @itimpi I don't have any brownout or similar problems when I spin up all the drives at once, so I doubt the power rating is the problem. It is an 8+ year old system that has been running 24/7 most of the time, so maybe the PSU is going bad? Although it is connected to a battery backup with a power conditioner on it.
  7. I swapped the parity disk and disk1 between hot-swap cages. Now disk1, disk2, and disk3 are erroring. All three are in hot-swap cage 1, which is connected to the HBA card on port/connector 0 via a SAS breakout cable. Maybe the SAS cable randomly went bad? I could order a new HBA with new SAS cables; I probably need to do this anyway so I can get my external drives properly added to the array. Any other things to check before deciding to buy new equipment? mastertower-diagnostics-20220831-2028.zip
  8. I have not replaced any cables, I did not swap the power cables, nor have I used different SAS cables to connect the drives to the HBA. However, I did swap the drives around in the hot-swap cages at the very beginning, before I posted in the forum, which would have swapped both the SATA and power connections. And then I also connected the drives straight to the MB as requested, which was the most recent configuration. The fact that there are only errors when I try to add a parity disk still doesn't make sense to me. Is there any way to check the HBA or MB for problems?
  9. @JorgeB Same behavior this time, but I think I got the diags before it spammed the syslog too much. The array runs perfectly normally without the parity disk, and I have now copied all data from disk2/disk3 onto other disks in the array with no issues. Googling some of the errors returns this, which suggests these problems come from bad SATA cables... but I am not sure how to interpret that in the context of these errors only happening when I add a parity drive... mastertower-diagnostics-20220830-2152.zip
  10. Not sure what happened, but somehow the disks were unmounted and the array stopped. mastertower-diagnostics-20220829-2208.zip
  11. Switched disk2 and disk3 to the SATA connections on the mobo. There seem to be problems preventing the disks from unmounting and the array from stopping (failed command: READ FPDMA QUEUED). Diags attached; probably going to have to do a hard shutdown. mastertower-diagnostics-20220829-2204.zip
  12. Thanks everyone. Last night I ran an rsync command to copy all the contents from disk3 to another disk (disk8). That completed with zero errors, so it does seem to be something weird with adding the parity disk. I will update the LSI firmware and then retry adding parity. EDIT: Updated the LSI firmware (wow, that was easier than the first time I did it years ago, thanks JorgeB!) but I am still running into the same issues with disk2 and disk3 when adding parity. Diags attached. mastertower-diagnostics-20220829-2146.zip
  13. I am going to keep the parity disk out of the mix for the moment, move all the data off disk2 and disk3, reformat both of them, and then try adding parity back in. Any thoughts or insight on this series of events or on the plan?
  14. Yes, I know external disks are not recommended for the array or pools; it is an unfortunate stopgap measure at the moment. However, I am running all of those external pools in btrfs single-disk mode, so there is no RAID. I tried adding a parity disk back to the setup, but disk2 and disk3 are still erroring out when trying to build parity. I have attached new diags. This makes no sense to me, since the SMART tests showed no problems and the disks do not error when there is no parity disk. mastertower-diagnostics-20220828-2025.zip
  15. Tried moving them from disk3 to disk9. I used "rsync -av --remove-source-files /mnt/disk3/folder-path /mnt/disk9/" which I entirely regret now. disk3 was not disabled at that time, but there were I/O errors which is why I was trying to move those files off. rsync started to give errors that it couldn't copy the files and said something like "will try again". It is also embarrassing to admit that I was running the array without a parity drive because I had issues adding one a month or so ago. I forget the exact issues that prevented me from adding the parity.