Seonac

Members
  • Posts: 24
  • Joined
  • Last visited


Seonac's Achievements

Noob (1/14)

Reputation: 2

  1. One thing that isn't clear to me... maybe I missed seeing it stated or listed somewhere. Is parity still valid during a Scatter/Move operation? I have a disk that may fail soon and don't have a spare, but I have plenty of space within the array to move the data to, so I was going to shrink the array. I realize that I will have to create a new config and rebuild parity after removing the disk. I started a Scatter/Move operation following SpaceInvaderOne's instructions and realized after starting the operation that I'm not sure whether parity is maintained during the move.
  2. Thank you very much, JorgeB. I was able to try the card in another computer and it was dead. I was able to get an M.2 SATA adapter with a JMB585 chip today and the array is going again. Thanks very much for all of the information you provided in the Recommended Controllers for Unraid thread! Very helpful in finding a new card that wasn't overkill for my system 😁
  3. I moved all of the old hardware, motherboard included, to the new case. There is no old server or motherboard to test the card on. The card had worked without issue since I bought it about 3 years ago… until now…
  4. Unraid 6.9.2. HBA: LSI 9207-8i, 6 of 8 drives connected.

     I switched all of my hardware to a much-needed new case. After booting Unraid, the array configuration was invalid and none of the disks connected to the LSI card were detected. The cache pool and the parity drive, which are connected to the motherboard SATA ports, were detected. I tried reseating the card and disconnected and reseated all cables at both ends. I also tried switching the card to another PCIe slot on the motherboard. During at least a couple of the boots I saw the following error:

     mpt2sas_cm0: _base_spin_on_doorbell_int: failed due to timeout count

     as well as other failures related to mpt2sas, which is what prompted me to reseat the card and try another PCIe slot.

     After changing PCIe slots the card was detected and Unraid showed the configuration was valid. When I tried to start the array, it never actually started… really hoping the data on the drives is ok… Unraid didn't show any errors that I saw; it just sat at "Starting Array" for an extremely LONG time. I should've grabbed a syslog or diagnostics but didn't even think of it. I shut down and checked the seating of the card again, and now the card is not showing up at all… I did grab diagnostics from this boot; the bundle is attached.

     I tried to boot into the BIOS for the card but never saw an option for it come up; I tried CTRL+G during boot but the Unraid boot menu just eventually came up. I bought the card 3 years ago off Amazon and it was already flashed to IT mode, so it's possible the BIOS was wiped/removed? Am I right in thinking that the card is dead, or is there something else going on in the logs? Unfortunately I don't have another board to test it on. mrnas-diagnostics-20221219-2220.zip
  5. Thank you very much! That explains it! I had never run into that until now.
  6. Thank you, JorgeB! As expected, the shares did come back after a reboot. I'm planning on moving the hardware into a new case today; paranoia and worry that something was wrong got me LOL
  7. Using Unraid 6.9.2. All of my user shares disappeared! I was browsing shares and all of a sudden I was disconnected and could no longer access any of the shares over the network. I logged into the webgui expecting to see at least a disabled disk, but there aren't any red-ball disks and the user share section is empty. I ran a short SMART test for all disks, including the disks in the cache pool; all completed without error. I've downloaded the diagnostics and attached them. So far I have only stopped all Dockers and the one VM I run. I have not rebooted yet; wanted to wait in case th… Any help would be greatly appreciated! mrnas-diagnostics-20221218-0943.zip
  8. Thank you! I started the array and was able to browse the emulated contents of disk6. I stopped the array and replaced disk6 with a new precleared drive already in the server and started the rebuild process.
  9. I don't see the disk listed anywhere in the WebUI; I also checked System Devices under Tools. The drive is connected to an LSI SAS card, so I didn't expect it to show in the BIOS. I physically swapped the 'Unassigned Device' drive and the drive that was disk6 to rule out a bad cable or connector; the 'Unassigned Device' drive was detected, but the drive that was assigned to disk6 was still not detected. Should/Can I start the array with the disk missing? I wasn't sure if that would unassign the slot, making a drive rebuild impossible. Array Operations shows 'Stopped. Configuration valid.' Array Devices shows the disk as 'Device is missing (Disabled), contents emulated'. Here's a new diagnostics just in case it's useful. mrnas-diagnostics-20220813-1744.zip
  10. Powered down, checked all connections and here's a new diagnostics. mrnas-diagnostics-20220813-1636.zip
  11. I rebooted after getting the diagnostics. So the Diagnostics are pre-reboot and the screenshots are post-reboot.
  12. Thanks, trurl! I noticed that the SMART report for disk6 was missing and just assumed it was missing because the disk was disabled and couldn't be accessed by Unraid. Here's the screenshot of Unassigned Devices; it's a precleared drive. Here's the screenshot of Array Devices; the only drive available in the dropdown list for disk6 is the same drive in the unassigned disks.
  13. This morning I received a notification from my Unraid server (v6.9.2) that Disk 6 was disabled and 'in error state'. I've been very lucky and have not had to deal with any drive failures in many, many years; I've replaced drives over the last few years due to age, usually around 4-5 years, rather than failures, as they were not NAS drives. I prefer the proactive approach, but probably due to paranoia more than anything LOL

      I was going to run a File System Check, following the instructions in the manual (Drive_shows_as_unmountable). Steps I've followed so far:
      - Ran the Diagnostic Tool
      - Stopped the array
      - Powered down the server
      - Checked cables in the drive trays and to the LSI card
      - Powered the server back on (I have auto mount set to off)

      I was going to start in 'Maintenance Mode', but the disk now shows as 'Unassigned' and the drive is no longer listed; I assume because it is no longer considered a valid drive for assignment? I have not started the array, in maintenance mode or otherwise. Looking for some direction before I do anything further. Do I leave the disk as 'Unassigned', start the array in maintenance mode, run the File System Check, and repair the file system if needed? Or should I just replace the drive and start the rebuild? The drive in question was next to be replaced, so I already have a replacement drive pre-cleared and ready.

      The first thing I did was run the diagnostic tool; results attached. Maybe someone could take a look? I've also attached the post-reboot syslog in case it might be helpful. I appreciate any help and advice! mrnas-diagnostics-20220813-1149.zip mrnas-syslog-20220813-1349.zip
  14. Will do. The new PSU is already ordered and en route; I should receive it tomorrow and hopefully will get it installed tomorrow night. Got a great deal on a single-rail 650W with 54A on the 12V rail: Corsair HX650, $119.99 CAD + $25 USD rebate. http://www.newegg.ca/Product/Product.aspx?Item=N82E16817139012
  15. The parity rebuild did complete successfully, but during the rebuild disk 5 showed 31 errors. I assume these would be read errors, since nothing was being written to the array during the rebuild. After the rebuild was complete, a NOCORRECT parity check completed successfully: Parity Valid, last checked on Wed Jul 10 18:12:50 2013 ADT, finding 0 errors. However, there are now 138 errors showing for disk 5. I know that ANY errors during a rebuild are bad, but I'm just not sure exactly what this means. Is the disk bad / does it have corrupt sectors? Is this caused by the power issues experienced earlier? I've attached a SMART report for the disk in question and a new syslog.

      In the log there are quite a few of these lines, which would've occurred during the rebuild:

      Jul 10 02:15:42 MR_NAS kernel: md: disk5 read error
      Jul 10 02:15:42 MR_NAS kernel: handle_stripe read error: 1974783248/5, count: 1
      Jul 10 02:15:42 MR_NAS kernel: md: parity incorrect: 1974783248

      Later there are these, which would've occurred during the parity check:

      Jul 10 12:01:07 MR_NAS kernel: md: disk5 read error
      Jul 10 12:01:07 MR_NAS kernel: handle_stripe read error: 1974783248/5, count: 1

      FYI, I've also ordered a new PSU: Corsair HX650 - http://www.newegg.ca/Product/Product.aspx?Item=N82E16817139012 smart.txt syslog.txt
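The read-error lines quoted in #15 can be triaged by tallying distinct failing sectors per disk: the same sector failing in both the rebuild and the later parity check points at a bad sector on the platter rather than a loose cable. A rough sketch (the sample lines are copied from the post; a real run would read the full syslog file):

```python
import re
from collections import defaultdict

# Sample syslog excerpt from the post above.
syslog = """\
Jul 10 02:15:42 MR_NAS kernel: md: disk5 read error
Jul 10 02:15:42 MR_NAS kernel: handle_stripe read error: 1974783248/5, count: 1
Jul 10 12:01:07 MR_NAS kernel: md: disk5 read error
Jul 10 12:01:07 MR_NAS kernel: handle_stripe read error: 1974783248/5, count: 1
"""

# "handle_stripe read error: SECTOR/DISK" -> collect sectors per disk number.
errors = defaultdict(set)
for m in re.finditer(r"handle_stripe read error: (\d+)/(\d+)", syslog):
    sector, disk = m.groups()
    errors[int(disk)].add(int(sector))

for disk, sectors in sorted(errors.items()):
    print(f"disk{disk}: {len(sectors)} distinct sector(s): {sorted(sectors)}")
# -> disk5: 1 distinct sector(s): [1974783248]
```

In this excerpt it is the same sector both times, which is what a pending/unreadable sector looks like; the SMART report's Reallocated and Current_Pending_Sector counts would confirm it.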
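For the dying-HBA symptoms in #4, a quick way to triage a saved syslog is to count the distinct mpt2sas failure messages. A sketch, assuming lines shaped like the one quoted in the post (the second sample message below is invented for illustration, not taken from the diagnostics):

```python
import re
from collections import Counter

# Sample lines; in practice, read the syslog out of the diagnostics zip.
syslog = """\
Dec 19 22:10:01 MR_NAS kernel: mpt2sas_cm0: _base_spin_on_doorbell_int: failed due to timeout count
Dec 19 22:10:02 MR_NAS kernel: mpt2sas_cm0: _base_get_ioc_facts: handshake failed
Dec 19 22:10:05 MR_NAS kernel: mpt2sas_cm0: _base_spin_on_doorbell_int: failed due to timeout count
"""

# Tally each distinct mpt2sas failure by the driver function that reported it.
pattern = re.compile(r"mpt2sas_\w+: (\w+):")
counts = Counter(m.group(1) for line in syslog.splitlines()
                 if (m := pattern.search(line)))

for func, n in counts.most_common():
    print(f"{n}x {func}")
```

Repeated doorbell/handshake timeouts across reseats and different PCIe slots, followed by the card vanishing entirely, is consistent with controller hardware failure rather than a BIOS issue.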
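On the question in #1 of whether parity survives a Scatter/Move: Unraid's single parity is the byte-wise XOR of all data disks, and every array write updates parity as part of the same operation, so ordinary file moves between data disks should leave parity valid throughout. A toy model of that invariant (byte arrays standing in for disks; not Unraid's actual driver code):

```python
from functools import reduce

DISK_SIZE = 8  # bytes per toy "disk"

def xor_parity(disks):
    """Parity byte i is the XOR of byte i across all data disks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

# Three toy data disks plus a parity disk.
disks = [bytearray(b"AAAAAAAA"), bytearray(b"\0" * 8), bytearray(b"\0" * 8)]
parity = bytearray(xor_parity(disks))

def write(disk_idx, offset, data):
    """Every write also updates parity (fold out the old byte, fold in the
    new one), so parity never goes stale between operations."""
    for i, byte in enumerate(data):
        pos = offset + i
        parity[pos] ^= disks[disk_idx][pos] ^ byte
        disks[disk_idx][pos] = byte

# "Move" the data from disk 0 to disk 2: copy it, then zero the source.
write(2, 0, bytes(disks[0]))
write(0, 0, b"\0" * DISK_SIZE)

assert bytes(parity) == xor_parity(disks)  # parity still valid after the move
```

The caveat from the thread still applies: once a disk is removed and a new config is created, parity must be rebuilt, because the set of disks the XOR covers has changed.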