Bolle

  1. Interesting, looks a lot like the issues I was having. I sent my motherboard back and I am waiting for a replacement to arrive.
  2. Time for a little update... still no luck. I still have 'random' disks dropping out, apparently independent of the cage and port they're mounted in. I've changed all the SATA cables, but no luck. The two common hardware elements in all my swapping around are the PSU and the motherboard. The cache disk running on the PCIe SATA card appears to be stable, so power appears to be ok and my main suspect is the motherboard. I've returned it (it's still under warranty, I believe) to my shop. They have diagnosed it as faulty and returned it to Asus (though I have my doubts whether they actually tested it or just went on my fault description). A replacement/repair should be back soon. I hope this solves my problems! It has cost me way too much time and frustration so far...
  3. Well, it has been driving me crazy!!! It has been quite frustrating, shuffling disks, cables, drive bays etc... it was hard to see any logic in it, and I'm still not sure I'm seeing it correctly. My main suspect now is the power connectors. It looks like the molex plugs of my Seasonic PSU might have quite large female openings, causing a bad connection with the Norco cages. Putting an extender cable in between seems to solve this. However, the extender is probably of too small a gauge to deliver clean power to 5 disks in a cage: with 5 disks I'm getting all sorts of errors and missing disks (due to COMRESET errors; see the syslog grep sketch after this list). The PSU itself is a Seasonic S12ii 520 Bronze, so it should be sufficient for the 7 disks I have in my server in total.
  4. Little update: I've now tried the following configuration (SATA port on mainboard - cage - disk - drive):
     4 - 1 - disk3 - EARS
     5 - 1 - empty - empty
     6 - 2 - disk4 - EARX
     This has been stable for almost 4 hrs now and the rebuild of disk4 is 40% done (see the rebuild-progress sketch after this list). There were some hiccups on bootup, although funnily enough these relate to ata4, which seems to be port 3 and disk2, which have never given issues so far... I can't seem to find any pattern or consistency in the errors I'm having...
  5. After a long weekend away, I'm back home working on getting my unRAID setup working again... On rebooting the server this morning, I again got a lot of errors, most like the ones mentioned earlier. I also got a COMRESET error, meaning that disk4 (my new EARX disk) wasn't being seen, so I couldn't rebuild the array. Unfortunately I didn't save the syslog from this event. So I started troubleshooting, or so I hoped. My initial config was (SATA port on mainboard - cage - disk - drive):
     4 - 1 - disk3 - EARS
     5 - 1 - disk4 - EARX (new)
     6 - 1 - empty - empty
     This was the same configuration as I have tried so far, with the only remark (as mentioned in an earlier post) that after the reiserfsck came out ok, I started rebuilding the array with the new EARX disk instead of the old EARS that was previously giving me trouble. This because I trusted that disk more and wanted a 'spare' copy of the data on the old EARS. As mentioned earlier, I got a lot of errors and stopped the rebuild. Swapping cables didn't clear the errors. To remove the cage from the equation, I tried the following configuration:
     4 - 2 - disk3 - EARS
     5 - 2 - disk4 - EARX (new)
     6 - 2 - empty - empty
     This again gave some errors, with disk4 not recognized (not visible in devices, probably due to COMRESET errors). Then I tried:
     4 - 2 - disk3 - EARS
     5 - 2 - disk4 - EARS (old)
     6 - 2 - empty - empty
     This led to errors on both disk3 and disk4 (both disabled, COMRESET errors). I then swapped cables and tried:
     4 - 2 - disk3 - EARS
     5 - 2 - disk4 - EARX
     6 - 2 - unassigned - EARS (old)
     Disk4 was now recognized, albeit with CRC errors and link resets. unRAID started rebuilding the array, with multiple errors and resets, but I got write errors on disk4 after approx 7% of the rebuild. The syslog is attached. Disk4 is still not recognized, see the SMART report (see also the smartctl sketch after this list): As far as I can see:
     *When disks are recognized they seem ok, see earlier SMART reports.
     *Changing cables does not solve it.
     *Changing cages does not solve it.
     *Other disks (parity, disk1 and disk2) in the same cages seem ok, so it does not seem to be a power issue.
     Seeing that disk3 and disk4 were originally used in the old Sharkoon cage with the suspect connector, I assumed the disks might have been 'fried' by it. However, with the brand-new EARX disk I also get errors. Based on the errors and the variables tried, I assume the SATA ports on the mainboard originally connected to the suspect Sharkoon cage were somehow damaged? The only consistent factor in my experimentation is that I have used these same three ports. Any other theories out there? I could use some help. syslog-2012-12-18.txt
  6. Actually, I believe ata5, the one reporting the most errors, is connected to disk3. So that would be one of the drives also used in the suspect Sharkoon cage (see the port-mapping sketch after this list for how to check this).
  7. Full syslog (6.6 MB): https://dl.dropbox.com/u/9280177/syslog-2012-12-13-2.txt
  8. After looking at the syslog again, it is mostly the errors on ata5, but once in a while also: Something is not well somewhere in my box... (See the error-tally sketch after this list.)
  9. I stopped the data rebuild. The unmenu main page shows: Does the parity update mean that both my data on disk4 and the parity are now bad?
  10. I was away for 2 hours. When I left it, it was fine, with approx 500 min left on the rebuild; now there are all these errors and nearly 9000 min remaining. Better to stop it and troubleshoot the connection first. I replaced the cable and it is a new disk, so that leaves the motherboard as the only suspect?
  11. Spoke too soon: it's now repeating the following error...
  12. Thanks for the help Joe. I think many on the forum owe you a few beers for all your efforts! What I've done now:
     *I removed all the tie wraps holding my SATA cables together (I just read about that in another post on the forum as well). And they made my case look so clean and tidy...
     *I replaced the SATA cables on SATA ports 4 and 5.
     *I put the 'trouble' disk back in; it showed as a new disk (blue ball) with the message 'recon_disk, array stopped'. That means I have to rebuild the disk, if I'm correct?
     *As I have a new, pre-cleared 2TB (same size) disk ready (the one meant to be added as disk5 later on), I decided to use this one as the replacement disk instead of the one originally used for disk4.
     *I'm now rebuilding disk4 using the new disk.
     Looking at the syslog (see attached), I've had a few hiccups on one SATA link: and But we're now 25 min further, the rebuild is going, and I see no related SATA entries in the syslog any more, so all seems well... If my current array is back, protected, and stable, my next step is to run a pre-clear on the old disk previously used as disk4 (see the preclear sketch after this list). If it comes out ok I can use it; if I have issues I'll try to get a new one under warranty. Then it's onwards with step 10 and further of my strategy, with the change that I will use either the old, re-pre-cleared disk (or a new one if swapped under warranty) as disk5. syslog-2012-12-13-1.txt
  13. So, if I add up the conclusions for the 'trouble' disk:
     *The SMART report looks ok: no pending or re-allocated sectors.
     *reiserfsck on the actual physical disk shows an unreadable block.
     *reiserfsck on the emulated disk shows no problems.
     That should mean I can just reinsert the disk into the server and rebuild the data? The SMART firmware on the disk will re-allocate any bad sectors? Or do I need a reiserfsck --rebuild-tree or --fix-fixable on the emulated disk to be sure? (See the reiserfsck sketch after this list.)
  14. BTW, the PSU is a Seasonic S12ii 520W Bronze, so I believe that shouldn't be an issue. Certainly not with only 4 drives installed at the moment...
  15. Result of the reiserfsck on the 'emulated' disk4:
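
A minimal way to spot the COMRESET errors from posts 3 and 5 is to grep the syslog for them. This sketch assumes the log lives at /var/log/syslog, which may differ on an unRAID box:

    # Show every COMRESET-related line, including the ata port it occurred on
    grep -i comreset /var/log/syslog

    # Cast a wider net: kernel ATA messages mentioning resets or CRC errors
    grep -iE 'ata[0-9]+.*(comreset|crc|reset)' /var/log/syslog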
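
To watch a rebuild like the one in post 4 from a shell instead of the web UI, the md driver's status file can be polled. unRAID's modified md driver also exposes /proc/mdstat, though its format differs from stock Linux md, so treat this as a sketch:

    # Re-read the rebuild/parity-sync status every 10 seconds
    watch -n 10 cat /proc/mdstat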
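
The SMART reports referenced in post 5 can be pulled on the command line with smartctl (from smartmontools); /dev/sdX is a placeholder for the actual device:

    # Full SMART report: identity, health, attributes and error log
    smartctl -a /dev/sdX

    # Just the attributes that matter most for a suspect disk
    smartctl -A /dev/sdX | grep -iE 'realloc|pending|crc'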
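
Post 6 guesses at which drive sits behind ata5. One way to verify the ataN-to-/dev/sdX mapping, assuming a kernel whose sysfs device paths include the ata port number (most do):

    # The symlink targets show which ata port each sdX hangs off
    ls -l /sys/block/sd*/device

    # Or search the kernel log for what was attached on ata5
    dmesg | grep -i 'ata5\.'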
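
To put a number on "mostly errors on ata5" from post 8, the mentions of each ata port in the syslog can be tallied (same /var/log/syslog assumption as above):

    # Count how often each ata port is mentioned, worst offender first
    grep -oE 'ata[0-9]+' /var/log/syslog | sort | uniq -c | sort -rn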
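
The pre-clear in post 12 is commonly done with Joe L.'s preclear_disk.sh script. Flags vary between versions of the script, so check its built-in help first; a minimal invocation looks like this, with /dev/sdX again a placeholder:

    # Run a full preclear cycle on the old disk
    # WARNING: this destroys all data on the target disk
    preclear_disk.sh /dev/sdX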
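
For the reiserfsck runs discussed in posts 13 and 15: on unRAID, diskN is exposed as the emulated device /dev/mdN, so disk4 is /dev/md4. Running the check against the md device rather than the raw disk keeps parity in sync:

    # Read-only consistency check of the emulated disk4
    reiserfsck --check /dev/md4

    # Only if --check explicitly recommends these, and in this order:
    # reiserfsck --fix-fixable /dev/md4
    # reiserfsck --rebuild-tree /dev/md4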