luvmich

Members
  • Posts: 31

  1. I get a constant 100MB/s when I turn on turbo writes. Not the easiest thing to enable, but if you log in to an SSH or console session and type "mdcmd set md_write_method 1" it will turn on turbo writes. Change the "1" to a "0" or reboot to turn off turbo writes. What turbo writes does is spin up all the disks to calculate parity during writes (not really energy efficient). Anyway, I only turn on turbo writes when transferring large amounts of data. Normal writes only spin up the disk being written to and the parity disk.
  2. The other thing you can do is turn on "turbo writes". Type this in at a console or SSH session: "mdcmd set md_write_method 1". That speeds my writes up to 100MB/s. This way you can keep your parity and still move stuff to the new box.
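The mdcmd toggle from the posts above can be wrapped in a small helper. This is just a sketch: `turbo_write` is my own name for it, and it assumes `mdcmd` is on the PATH, as it is in an unRAID console or SSH session.

```shell
# Toggle unRAID "turbo writes" using the mdcmd calls from the posts above.
turbo_write() {
  case "$1" in
    on)  mdcmd set md_write_method 1 ;;   # spins up all disks; faster writes
    off) mdcmd set md_write_method 0 ;;   # default read/modify/write method
    *)   echo "usage: turbo_write on|off" >&2; return 1 ;;
  esac
}
```

Run `turbo_write on` before a large transfer and `turbo_write off` afterwards; a reboot also restores the default.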
  3. I said it years ago... nothing will be done! I have tried the old firmware and the newest firmware. My system with a SAS2LP only parity checks at 40MB/s and parity rebuilds at 130MB/s. My other system with two SASLP checks and rebuilds at 75MB/s. Both unRAID v5 and v6 give the same results. My system with the SAS2LP had the webGUI lockup problem, while the system with the two SASLP had no webGUI lockup problems (6.1.2 fixed that problem). My system with the SAS2LP also had the IOMMU/HVM loss-of-share/lockup problem, and my system with two SASLP worked fine. Besides the SAS cards, the only differences between the two systems are the amount of memory and that the SAS2LP system has IPMI. Neither system has any plug-ins.
  4. Nothing has been done and nothing has changed. Just upgraded my server to 6.0beta15 and parity checks are at 50MB/s or less. Parity rebuild is 110MB/s.
  5. Hate to burst the bubble on Intel and AMD. It is a Linux driver issue of sorts. My two systems are similar except for the controller cards. System 1 uses the onboard SATA and an AOC-SAS2LP-MV8. System 2 uses only the two AOC-SASLP-MV8s. Anyway, System 1 rebuilds at 130MB/s and parity checks at 40MB/s. System 2 rebuilds at 70MB/s and parity checks at 70MB/s. System 2's controller cards are maxed out, hence the constant 70MB/s. I have not upgraded System 1 to v6 yet, so I don't know if this situation has been fixed, but every v5 gave the same results on the two systems. I tried bare-bones installs of v5 on System 1 and I would always get the same 40MB/s parity checks. I borrowed an AOC-SASLP-MV8 and put it back in System 1; the parity checks went back to 70MB/s again. Made no sense. I'm just waiting for v6 final :) before I put it on System 1.
  6. Had this problem once, back when I used unRAID 4.7. Every parity check would result in a correction. Ran SMART checks until I was blue in the face. Ran memory checks for hours. Even ran file system checks that would find errors and correct them. Not sure if you have run them yet to see your results. This all happened about two years ago. Since I had ruled out the drives, I upgraded my hardware. I had a Sempron 140 in a consumer mobo with cheap controller cards (a bunch of two-porters). I'm still using all the same drives today and I have not had a problem since I upgraded to my Supermicro mobo. Knocking on wood :)
  7. I set mine in my \unraid\flash\config\network.cfg:
     _____________________
     # Generated network settings
     USE_DHCP=yes
     IPADDR=
     NETMASK=
     GATEWAY=
     MTU=9000
     _____________________
     The "MTU=9000" sets jumbo frames every time it boots up. Just make sure your network switch supports jumbo frames, or you will not get anything if the packet is over 1500 bytes. It is pretty straightforward and it really helps when I read from the unRAID box. I get 70-80MB/s reads and 30-40MB/s writes. Without jumbo frames my reads top out at 40 and writes top out at 30.
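One way to check that jumbo frames actually work end to end after setting MTU=9000 is an unfragmentable ping. A sketch, where `<server-ip>` is a placeholder and the arithmetic subtracts the 20-byte IP header and 8-byte ICMP header from the MTU:

```shell
# Largest unfragmented ICMP payload for a 9000-byte MTU:
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
MTU=9000
PAYLOAD=$(( MTU - 28 ))
# -M do forbids fragmentation, so the ping fails if any hop drops jumbo frames.
echo "ping -M do -s ${PAYLOAD} <server-ip>"
```

If that ping comes back, every hop between the client and the server passed a full-size jumbo frame; if the switch is stuck at 1500, it will time out or report a "message too long" error.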
  8. Thanks for testing the cards, looks like I might have to start saving my pennies.
  9. Thank you. I was hoping someone would say what you just said. I have two servers and my other server is the same except it has 2 SASLP-MV8 and it parity checks at 75MB/s. Looks like I will not be upgrading that server with 2 SAS2LP-MV8 any time soon.
  10. Here is my log during the parity check. It is a real boring read. I'm guessing it is a driver issue with the SAS2LP-MV8. The avg parity check speed is 57MB/s. Might just stick the SASLP-MV8 back in for now.
      Feb 11 05:37:30 ACNAS01 kernel: mdcmd (71): check CORRECT
      Feb 11 05:37:30 ACNAS01 kernel: md: recovery thread woken up ...
      Feb 11 05:37:30 ACNAS01 kernel: md: recovery thread checking parity...
      Feb 11 05:37:30 ACNAS01 kernel: md: using 1536k window, over a total of 2930266532 blocks.
      Feb 11 10:59:37 ACNAS01 kernel: mdcmd (72): clear
      Feb 11 13:47:45 ACNAS01 kernel: mdcmd (73): clear
      Feb 11 18:09:00 ACNAS01 kernel: mdcmd (74): spindown 1
      Feb 11 18:09:01 ACNAS01 kernel: mdcmd (75): spindown 2
      Feb 11 18:09:01 ACNAS01 kernel: mdcmd (76): spindown 3
      Feb 11 18:09:02 ACNAS01 kernel: mdcmd (77): spindown 5
      Feb 11 18:12:12 ACNAS01 kernel: mdcmd (78): spindown 6
      Feb 11 18:12:13 ACNAS01 kernel: mdcmd (79): spindown 7
      Feb 11 18:12:14 ACNAS01 kernel: mdcmd (80): spindown 8
      Feb 11 18:12:14 ACNAS01 kernel: mdcmd (81): spindown 9
      Feb 11 18:12:14 ACNAS01 kernel: mdcmd (82): spindown 10
      Feb 11 18:12:15 ACNAS01 kernel: mdcmd (83): spindown 11
      Feb 11 18:12:15 ACNAS01 kernel: mdcmd (84): spindown 12
      Feb 11 18:12:16 ACNAS01 kernel: mdcmd (85): spindown 13
      Feb 11 20:12:56 ACNAS01 kernel: md: sync done. time=52526sec
      Feb 11 20:12:56 ACNAS01 kernel: md: recovery thread sync completion status: 0
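The 57MB/s average quoted in that post can be cross-checked from two lines of the log, assuming md reports the array size in 1 KiB blocks (which matches the 3TB parity drive):

```shell
# From the log: "over a total of 2930266532 blocks" and "sync done. time=52526sec".
BLOCKS=2930266532      # 1 KiB blocks covered by the check
DURATION=52526         # seconds the sync ran
MBPS=$(( BLOCKS * 1024 / DURATION / 1000000 ))
echo "average parity check speed: ${MBPS} MB/s"
```

That works out to 57 MB/s, which lands right on the figure the post reports.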
  11. The part that doesn't make sense is that the parity rebuild did what it was supposed to do (started at 112MB/s) and the parity check didn't (started at 50MB/s).
  12. Once past 2TB the parity check went to 112MB/s and ended at 80MB/s. The mobo has 2 PCIe v2.0 8x slots and 1 PCIe v2.0 4x in an 8x slot. The old SASLP-MV8 was 4x and the SAS2LP-MV8 is 8x. The point is that the SASLP-MV8 slowed down when I filled all 8 ports: it is a 4x card that maxes out at 75MB/s with 8 drives attached. I changed the card out for the SAS2LP-MV8 (8x) and I stayed at 50MB/s for the whole first 2TB of the parity check. The reason I changed the parity drive is that it showed errors. I don't have a backplane, just direct cable connections. I might have to check the cables once more and maybe change them out. After I replaced the parity drive I did a rebuild of the parity drive. The rebuild started out at 110MB/s; at 2TB it was 60MB/s; then from 2-3TB it was 112-80MB/s. Once the array was protected and up and running I did a parity check. The check stayed at 50MB/s for the first 2TB and then went up to 112MB/s. It makes no sense to me. The parity check just reads from all drives, while the parity rebuild reads from 13 and writes to 1.
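The 75MB/s ceiling blamed on the 4x card above can be roughed out. This is only a back-of-the-envelope sketch, and it assumes the SASLP-MV8 runs a PCIe 1.x x4 link (roughly 250MB/s of payload per lane after 8b/10b encoding); the post itself does not state the link generation:

```shell
# Back-of-the-envelope per-drive bandwidth on a shared PCIe 1.x x4 link.
LANES=4
MB_PER_LANE=250        # approximate PCIe 1.x payload bandwidth per lane
DRIVES=8
CEILING=$(( LANES * MB_PER_LANE / DRIVES ))
echo "theoretical per-drive ceiling: ${CEILING} MB/s"
```

Controller and protocol overhead eat into that 125MB/s figure, so a sustained 70-75MB/s per drive across 8 drives is consistent with a saturated 4x link.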
  13. I decided to post a picture of the parity check. I hit clear statistics, and after a while I hit refresh and noticed the reads off each disk were different. The first 6 disks are running off the onboard controller. The last 8 are running off the SAS2LP-MV8. Is this normal?
  14. It all started when I added the remaining 4 drives to my SASLP-MV8. Before this I was using the 6 ports on the mobo and 4 ports on the SASLP-MV8, running unRAID 4.7. With only 4 drives on the SASLP-MV8, parity checks started at 100MB/s and ended in the 70MB/s range. All 2TB EARS or EARX drives. After adding the remaining 4 drives to the SASLP-MV8, my parity checks went down to a constant 70MB/s. So to me that looked like a bandwidth restriction on the 4x PCIe bus, so I went out and got the SAS2LP-MV8 and upgraded unRAID to 5RC11. The parity check then dropped to less than 50MB/s. I checked the SMART status on the drives and noticed two "current pending sector" counts on the parity drive (2TB EARS). Ordered a 3TB WD Red and pre-cleared it. Added it to the array, and the rebuild of the parity drive started at 108MB/s; at the end of 2TB it was 60MB/s, and it averaged 75MB/s overall. Sounds like the drive was the slowdown and I have fixed my bottleneck? Nope: this morning I started a parity check to make sure the parity rebuild was correct, and it is checking at less than 50MB/s again. I am running unMENU with very few packages installed (APC, PHP, SMART history). I ran the user script to check hard drive speeds and they all show 100+MB/s. I do have the SAS2LP-MV8 installed in an 8x PCIe slot. I'm guessing this is why unRAID is still an RC. Any suggestion on how to restore my parity check speed?
      parity  WDC_WD30EFRX-68AX9N0_WD-WMC1T1589815 (sda)  2930266532  30°C  3 TB
      disk1   WDC_WD20EARS-00MVWB0_WD-WCAZA1105446 (sdb)  1953514552  30°C  2 TB
      disk2   WDC_WD20EARS-00MVWB0_WD-WCAZA1136222 (sdc)  1953514552  30°C  2 TB
      disk3   WDC_WD20EARS-00MVWB0_WD-WMAZA1092316 (sdd)  1953514552  29°C  2 TB
      disk4   WDC_WD20EARS-00MVWB0_WD-WMAZA3680791 (sde)  1953514552  30°C  2 TB
      disk5   WDC_WD20EARS-00MVWB0_WD-WMAZA1058936 (sdf)  1953514552  34°C  2 TB
      disk6   WDC_WD20EARS-00MVWB0_WD-WCAZA2643619 (sdn)  1953514552  30°C  2 TB
      disk7   WDC_WD20EARS-00S8B1_WD-WCAVY5678574 (sdm)   1953514552  35°C  2 TB
      disk8   WDC_WD20EARS-00MVWB0_WD-WMAZA4276078 (sdl)  1953514552  35°C  2 TB
      disk9   WDC_WD20EARX-00PASB0_WD-WMAZA5701378 (sdo)  1953514552  27°C  2 TB
      disk10  WDC_WD20EARX-00ZUDB0_WD-WCC1H0740724 (sdk)  1953514552  28°C  2 TB
      disk11  WDC_WD20EARX-00ZUDB0_WD-WCC1H0748053 (sdj)  1953514552  29°C  2 TB
      disk12  WDC_WD20EARX-00AZ6B0_WD-WCC070165700 (sdi)  1953514552  32°C  2 TB
      disk13  WDC_WD20EARS-00MVWB0_WD-WMAZA3792065 (sdh)  1953514552  29°C  2 TB
      flash   JD_FireFly - 2 GB 1.5 GB 763 232 0
      syslog-2013-02-11.txt