sdumas

Members · 58 posts

  1. PS - I downloaded LinuxReader and tried to access each of these disks separately, outside of unRAID. Neither one can be accessed - they are truly dead.
  2. Disk5 has died... it is no longer accessible. I am now in a situation where two disks have died two days apart... I am screwed.
  3. Crap - disk5 just failed and disappeared from the list... Now I have two failed drives... Here is the syslog... syslog.txt
  4. Hi everyone, I am running 5.0 Beta13 - I have been running it forever and it was working fine until a few days ago. One of the drives went "missing". I decided to replace the drive and rebuild. So far so good. It is a 2TB drive - the rebuild went well until about 68% (1.36TB) rebuilt. Now it is sloooooow as hell (34.45KB/sec). I noticed that during the rebuild a second drive came up with LOTS of errors (475597 errors) - so I guess I have another drive that decided to go bye-bye. I am not sure that I can rebuild the original drive, since the other one with errors seems to slow the rebuild down to an impossible timeframe (304,305 minutes remaining). Is there a way to rebuild two drives? (My thinking is no - see the parity sketch after this list - but...) Any brilliant ideas to help me here? FWIW - I only store data on particular drives - the data is not spread across multiple drives. Thanks!
  5. I have been running 5.0 Beta 13 for a long while with no issues. Are there any reasons for me to upgrade to the latest RC (notwithstanding the slow write issue with 4GB)? Does it have better performance, or anything I should be aware (or afraid) of? Thanks!
  6. I was just wondering if the number of drives affects the speed of the parity build and the copy process. I have an Asus P5B mobo with the SuperMicro SAS MV8 controller and 12 drives in a Norco box. My 3TB parity drive is on the mobo SATA port. I have always had performance issues (or perceived performance issues) where a copy of files (even internally with NC) is never higher than 20-30MB/s. The parity build, on the other hand, goes to 70-80MB/s. I am not using User Shares and always copy directly to the drives (\\tower\disk1... etc.). I built another machine for testing (Asus mobo and only internal drives - yes, I know that's generic - too lazy to open the box and look for the exact model... but it's comparable to the other one) and it has only three drives. When I build parity, I hit 110-120MB/s. When I copy files I get 30-40MB/s. It looks to me like the number of drives does have an effect on copy speed and parity build. Am I right in my assumption, or are there other parameters to take into consideration (see the write-speed estimate after this list)? PS - I am using 5.0 Beta-13 (works well for now and is stable...), NICs are confirmed gigabit, no user shares, 4GB of RAM on both boxes - no apparent errors in the logs. Thanks for commenting on this.
  7. OK guys and girls, I am trying to figure out performance issues on my unRAID box. I have been running Beta13 for a while. I had issues at some point where I lost drives (red-balled), went through hell and back - the system is good now. It's running in a nice Norco 4020 and has 11 drives in it - a mix of onboard SATA and an 8-port SAS controller. I am getting slow write speeds over the network - 10-15 Mbps - I could understand that - it's the network... But I am using MC over a PuTTY connection, copying files between drives, and I still get slow speeds... why??? 14 Mbps... (See the copy-timing sketch after this list.) Look at the attached screenshot of MC. I'll post the log in the next post.
  8. SOLVED. Playing around with many variables - disks, controllers, backplanes, cables, motherboard, power supply - I could eliminate most of them, and it would seem that the power supply made the whole difference. I had a 650W unit (Corsair T650W), and even with the number of drives I had (11), it would seem that switching to a 1000W PSU sorted things out. There are still some issues in my mind that are kind of unanswered, but overall it now works and I am happy. As an aside, I also played in the BIOS. Before, I had issues with write performance - 6 to 10 MB/sec - I always banged my head on the wall over this, and finally I had the "Duh" moment. My drives were set to Compatibility Mode (IDE) - I changed that to AHCI and, surprisingly enough, I now get 70MB/sec writes (see the driver check after this list)... Duh.
  9. Oh thy woes and horrors... :'( I had to leave this alone for a couple of days, as I was thinking (again) of ways to destroy the whole thing... Time to refrain from doing something bad. I redid the DDRescue - forward and backward - and it recovered (or so it seems) all the data minus 57 kb. Pretty good, I think. I ran reiserfsck --check /dev/sda1 and got the "superblock cannot be found" error again. It suggested a rebuild - I did the rebuild - it could not rebuild - it suggested rebuild-tree, so I did the rebuild-tree. That took a while (8 hours)... (I don't have the beginning of this log.)
        ######## Pass 1 ########
        Looking for allocable blocks .. Finished
        Flushing .. finished
        0 leaves read
        0 inserted
        ######## Pass 2 ########
        Flushing .. finished
        No reiserfs metadata found. If you are sure that you had the reiserfs on this partition, then the start of the partition might be changed or all data were wiped out. The start of the partition may get changed by a partitioner if you had used one. Then you probably rebuilt the superblock as there was no one. Zero the block of 64K offset from the start of the partition (a new super block you have just built) and try to move the start of the partition a few cylinders aside and check if debugreiserfsck /dev/xxx detects a reiserfs super block. If it does this is likely to be the right super block version
        Aborted
     ... Now what? I am a little lost and discouraged. The data is not that important (it's more inconvenient than anything), but I would love to find a way to recover it - because if this happened on another drive that was critical, I would be in big trouble. I still have the original "bad drive", and I can DDrescue it again if need be (see the superblock-scan sketch after this list). Thanks for all the help so far!!!!!!
  10. Thanks, will do - that will take a while - it will probably take another 6 hours to complete the first pass. I'll start the reverse retry later tonight! Thanks again!
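
Parity sketch (re: post 4): with single parity, the parity disk stores only the byte-wise XOR of all data disks, so exactly one missing disk can be reconstructed; losing two disks leaves two unknowns and only one equation. A minimal, self-contained Python sketch of the idea - the tiny byte strings below stand in for whole drives and are purely hypothetical:

    from functools import reduce

    def xor_blocks(*blocks):
        """Byte-wise XOR of equally sized blocks."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    # Three hypothetical data "disks".
    disk1 = bytes([0x10, 0x22, 0x35, 0x47])
    disk2 = bytes([0xA0, 0x0B, 0xC3, 0x1D])
    disk3 = bytes([0x55, 0x66, 0x77, 0x88])

    # Parity is the XOR of every data disk.
    parity = xor_blocks(disk1, disk2, disk3)

    # One disk lost: XOR of parity with the survivors rebuilds it exactly.
    assert xor_blocks(parity, disk1, disk3) == disk2

    # Two disks lost: parity ^ disk1 only yields (disk2 XOR disk3) -- one
    # equation, two unknowns -- so neither lost disk can be recovered alone.
    assert xor_blocks(parity, disk1) == xor_blocks(disk2, disk3)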
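
Write-speed estimate (re: post 6): a parity build only streams every drive sequentially in parallel, so it runs near raw platter speed and is mostly limited by the slowest drive and any shared controller bandwidth. A normal write, by contrast, has to read the old data block and the old parity block and then write both back (read-modify-write), which on spinning disks costs roughly an extra platter rotation per chunk regardless of how many drives are in the array. The figures below (7200 rpm, 512 KiB chunks, 80 MB/s raw speed) are illustrative assumptions only, not measurements from this system:

    # Toy read-modify-write model for writes to a single-parity array.
    rpm = 7200
    rotation_s = 60.0 / rpm            # ~8.3 ms waiting for the same sector to come around again
    chunk_bytes = 512 * 1024           # assumed amount written per read-modify-write cycle
    raw_mb_s = 80.0                    # assumed sequential speed of one drive

    transfer_s = chunk_bytes / (raw_mb_s * 1e6)   # time to actually move one chunk
    cycle_s = 2 * transfer_s + rotation_s         # read old blocks, wait a revolution, write new blocks

    effective_mb_s = chunk_bytes / cycle_s / 1e6
    print(f"raw sequential speed: {raw_mb_s:.0f} MB/s")
    print(f"parity-protected write (toy model): {effective_mb_s:.1f} MB/s")

With those assumptions the toy model lands in the 20-30 MB/s range reported above, which is why copies run so much slower than parity builds on both boxes.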
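
Copy-timing sketch (re: post 7): timing a direct disk-to-disk copy from the console takes Samba, the network, and mc's own display out of the picture. The paths below are assumptions (standard unRAID disk mounts) and the source file is hypothetical - point SRC at any existing multi-gigabyte file so the page cache does not dominate the result:

    import os, time

    SRC = "/mnt/disk1/speedtest.bin"    # hypothetical large existing file on one data disk
    DST = "/mnt/disk2/speedtest.copy"   # destination on a different data disk

    chunk = 1024 * 1024                 # copy in 1 MiB chunks
    copied = 0
    start = time.time()
    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
            copied += len(buf)
        dst.flush()
        os.fsync(dst.fileno())          # make sure the data has really hit the disk
    elapsed = time.time() - start
    print(f"{copied / 1e6:.0f} MB in {elapsed:.1f} s -> {copied / elapsed / 1e6:.1f} MB/s")
    os.remove(DST)

If this raw copy is also in the 10-15 MB/s range, the bottleneck is on the disk/controller side rather than the network share.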
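
Driver check (re: post 8): one quick way to confirm that the controller really came up in AHCI mode after the BIOS change is to look at which low-level driver each SATA/SCSI host is bound to. This reads standard Linux sysfs attributes and should work from the unRAID console, though the exact host numbering will differ per system:

    import glob

    # "ahci" means native AHCI; "ata_piix"/"pata_*" usually indicate IDE/compatibility mode.
    for path in sorted(glob.glob("/sys/class/scsi_host/host*/proc_name")):
        host = path.split("/")[4]
        with open(path) as f:
            driver = f.read().strip()
        print(f"{host}: {driver}")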
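
Superblock-scan sketch (re: post 9): the rebuild-tree message above suggests the partition start may have moved, so the real superblock (normally 64 KiB into the partition, containing a magic string such as "ReIsEr2Fs") could be sitting at an unexpected offset in the rescued image. The sketch below scans a ddrescue image for the ReiserFS magic strings to see where candidate superblocks live; "image.dd" is a hypothetical file name, and any hit should still be verified with the reiserfs tools before changing anything:

    # Scan a rescued disk image for ReiserFS superblock magic strings.
    MAGICS = (b"ReIsErFs", b"ReIsEr2Fs", b"ReIsEr3Fs")   # 3.5, 3.6 and relocated-journal formats
    IMAGE = "image.dd"                                   # hypothetical ddrescue output image
    CHUNK = 64 * 1024 * 1024
    OVERLAP = 9                                          # one byte short of the longest magic,
                                                         # so boundary hits are found exactly once
    offset = 0
    tail = b""
    with open(IMAGE, "rb") as img:
        while True:
            data = img.read(CHUNK)
            if not data:
                break
            buf = tail + data
            for magic in MAGICS:
                pos = buf.find(magic)
                while pos != -1:
                    print(f"{magic.decode()} at byte offset {offset - len(tail) + pos}")
                    pos = buf.find(magic, pos + 1)
            tail = buf[-OVERLAP:]
            offset += len(data)

A hit well away from the expected 64 KiB mark would support the tool's hint that the start of the partition has shifted.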