johnodon

WD EADS/EARS vs. EARX


Hoping someone can help me with a small mystery...

 

I currently have 8 drives...parity + 6 data + cache.  My data drives are a mixed lot:

 

1x WDC_WD20EADS

2x WDC_WD20EARS

1x SAMSUNG_HD204UI

1x WDC_WD15EADS

1x WDC_WD20EARX

 

The way I understand it, the EADS/EARS are 2nd-generation SATA II drives and the EARX is a 3rd-generation SATA III drive.  I decided to perform a little write-speed test to each disk share (to take the cache drive out of the equation) and this is what I saw:

 

WDC_WD20EADS  ---  26MB/s

WDC_WD20EARS  ---  30MB/s

SAMSUNG_HD204UI -- 33MB/s

WDC_WD15EADS ----- 26MB/s

WDC_WD20EARX  ----  15MB/s

 

Can anyone tell me why the EARX is so slow compared to the rest?  Since this drive is a newer generation, I assumed it should be as fast or maybe even faster.  Is this a known issue?  Are the drives just manufactured that way (cost cutting)?  I just added the EARX drive a few weeks ago so it has ~1.6TB of available space.  I can't imagine that would be an issue but figured I would throw it out there.

 

TIA!

 

John

 


And I was so excited that WDC replaced my RMA'd EARS with EARX :(

 

Do you still have any EARS?  Can you perform a few tests like I did above?

 

John


I believe the EARX is an Advanced Format drive, so it needs to be aligned when the partition table is created.

If I remember correctly, the EARS had a jumper to automatically re-align the drive.  The EARX may not.

What I do remember reading is that you lose a lot of performance (as demonstrated) if the drive is not aligned correctly.

 

Just some food for thought.
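If you want to check alignment directly, here is a sketch (not an unRAID-specific tool): an Advanced Format drive has 4096-byte physical sectors exposed as eight 512-byte logical sectors, so a partition is aligned when its start sector (from, e.g., `fdisk -lu`) is divisible by 8.

```shell
# check_align: given a partition start sector (in 512-byte units), report
# whether it lands on a 4096-byte physical-sector boundary.
check_align() {
  if [ $(( $1 % 8 )) -eq 0 ]; then
    echo "sector $1: aligned"
  else
    echo "sector $1: MISALIGNED"
  fi
}

check_align 63   # the classic DOS/MBR default start sector
check_align 64   # a 4K-aligned start
```

Sector 63, the old MBR default, straddles physical sectors and forces the drive into a read-modify-write on every 4K write, which matches the kind of slowdown described above.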

 


I have both, and the EARX are a bit faster than EARS.  If yours are not, I'd check the alignment, controller, and cables.


Thanks guys.

 

I did preclear the drive so I don't think alignment is the issue.  This is the info from the disk settings.  Are there any commands I can run in a telnet session that will provide more info?

 

 

disk6 Settings

--------------------------------------------------------------------------------

Partition 1 size:  1953514552 KB (K=1024)

Partition format:  MBR: 4K-aligned

File system type:  reiserfs

 

At some point I will also try moving the drive to a different bay and see if that makes a difference.

 

John


OK...performed an internal write test to an EARS drive and the EARX drive...

 

WD20EARS

root@unRAID:~# dd if=/dev/zero of=/mnt/disk2/test.dd count=8192000

8192000+0 records in

8192000+0 records out

4194304000 bytes (4.2 GB) copied, 217.262 s, 19.3 MB/s

 

WD20EARX

root@unRAID:~# dd if=/dev/zero of=/mnt/disk6/test.dd count=8192000

8192000+0 records in

8192000+0 records out

4194304000 bytes (4.2 GB) copied, 229.623 s, 18.3 MB/s

 


You have something VERY wrong with your system.  I get better than that with 10-year-old IDE crap drives.  WD EARS and EARX drives should be 5 times faster than that.


Are those benchmarks to the array drives or to the drives outside of the array?

The parity will slow them down a lot; however, the speed should be in the upper 20s to mid 30s on the array (at the very least).

 

 



 

The array was online so the data was being written to both parity and the data drive.  How do I perform the same test without parity being enabled?  Do I stop the array, set parity = none and then start the array again?

 

Besides the parity issue, do I have some type of driver or architecture issue?  I am using a server class MB although it is a bit old.  The specs of the MB can be seen here:  http://lime-technology.com/forum/index.php?topic=10798.0

 

As far as controllers...2x AOC-SASLP-MV8.

 

I have also attached my syslog from a fresh boot if any of you kind souls would like to look at it for me.  :)

 

TIA!!!

 

John

 

 

syslog.txt



Is the parity drive on the motherboard ports?

If so, Is it shared with other data drives?

 

 


After looking at the block diagram for that board, I might try putting the parity drive on the controller that is connected to the PCIe x8 slot.

That will give it full bandwidth.  I know my ICH9R has a max bandwidth of 384MB/s, which is faster than x1.  I don't know what the bandwidth of the ICH7R is, but it can't be more than x4, since there is also a PCIe x4 hanging off the same chipset, along with two network interfaces each consuming an x1 link.

 

http://www.tyan.com/support_download_manuals.aspx?model=S.S5160


Thanks for the info weebo!  I'll move the parity drive to the card in the x8 slot when I get home and run another set of tests!

 

Thanks again for the help!

 

John



Try something like this:

dd if=/dev/zero of=/mnt/disk6/test.dd count=8192000 bs=65536

 

You were writing with a block size of 512 bytes (fairly inefficient).

 

You should see a huge improvement in throughput when you set the block size larger.
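To make the difference concrete, here is a minimal side-by-side sketch (the /tmp path is illustrative; on the array you would target /mnt/diskN as above).  Both commands write the same 64 MiB; only the number of write calls differs:

```shell
# Same total size (67,108,864 bytes), very different write-call counts:
dd if=/dev/zero of=/tmp/bs-test.dd bs=512 count=131072   # 131,072 writes of 512 B each
dd if=/dev/zero of=/tmp/bs-test.dd bs=64k count=1k       # 1,024 writes of 64 KiB each
```

The second form usually reports a much higher MB/s because the per-call overhead is paid 128 times less often.  Delete /tmp/bs-test.dd when done.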



Why spend an hour+  test-writing 500+ GB ? :)

 

Try ....  bs=64k count=8k oflag=direct

 



Oops...  I did not think to reduce the "count" when I increased the block size...  Guess it would take a while... 8)



But mine is no prize either ... the oflag=direct messes up. It was an attempt to eliminate the system-buffer (write-behind) effect. (I never do a write test when a read test [on the appropriate /dev/xxx, or /mnt/xxx] would suffice. Does that fit here?)

 

If a write-test is called for, change the above to

... bs=64k count=8k conv=fdatasync

(Have your cake; and eat it :) -- short and accurate)
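A sketch of why the fdatasync flag matters (illustrative /tmp path; the only change between the two runs is the final flush):

```shell
# Buffered: dd can return as soon as the data is in the page cache, so the
# reported MB/s may mostly be measuring RAM, not the disk.
dd if=/dev/zero of=/tmp/sync-test.dd bs=64k count=1k

# conv=fdatasync: dd flushes the file's data to disk before printing its
# summary line, so the timing covers the whole write.
dd if=/dev/zero of=/tmp/sync-test.dd bs=64k count=1k conv=fdatasync
```

On a system with plenty of free RAM the first number can be wildly optimistic for a 64 MiB file; the second is the one worth comparing between drives.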

 

Aside: Maybe one of you youngsters can enlighten me (I was using Unix before there was a dd command) -- but since when does dd use Marketing Megabytes in its summary line? (And with no way to request otherwise!) This is sacrilege. Maybe it is a "new feature" of linux's dd.
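For what it's worth, it is decimal versus binary units: dd's summary uses SI megabytes (10^6 bytes), while the binary figure most of us expect is mebibytes (2^20 bytes).  The arithmetic for the 536,870,912-byte file in the tests above:

```shell
BYTES=536870912   # bytes written by the bs=64k count=8k tests above
# SI megabytes: integer part, then two decimal places
echo "SI:     $(( BYTES / 1000000 )).$(( (BYTES % 1000000) / 10000 )) MB"
# Binary mebibytes (2^20 = 1048576)
echo "binary: $(( BYTES / 1048576 )) MiB"
```

So the same file is 536.87 "marketing" MB (which dd rounds to 537 MB) and exactly 512 MiB.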

 


Here is my script to write then read 10GB.

 

 


#!/bin/bash


if [ -z "$1" ] 
   then echo "Usage: $0 outputfilename"
        exit
fi


if [ -f "$1" ]
   then echo "removing: $1"
        rm -vf $1
        sync
fi


bs=1024
count=10000000    # bs * count = 10,240,000,000 bytes (~10 GB)


total=$(( $bs * $count))
echo 3 > /proc/sys/vm/drop_caches
echo "writing $total bytes to: $1"
touch $1;rm -f $1
dd if=/dev/zero bs=$bs count=$count of=$1 &
BGPID=$!


trap "kill $BGPID; rm -vf '$1'; exit" INT HUP QUIT TERM EXIT


sleep 5
while ps --no-heading -fp $BGPID >/dev/null 
do kill -USR1 $BGPID
   sleep 5
done


trap "rm -vf '$1'; exit" INT HUP QUIT TERM EXIT


echo "write complete, syncing"
sync
echo 3 > /proc/sys/vm/drop_caches 
echo "reading from: $1"
dd if=$1 bs=$bs count=$count of=/dev/null
echo "removing: $1"
rm -vf $1

 

 

 

 

You want to write to the array to determine maximum throughput to the array with parity.

After that you want to read from the drive to determine what your maximum read speed could possibly be.

The caches are flushed before the write and again before the read-back to remove any skew from previously cached data, but you do want caching to come into play during the test itself because that's a real-world, useful number.

 

 

When there is a question about parity interference, you can drop parity temporarily, or run the same test on a new drive before it joins the array.

10GB is chosen because many people have 8GB of RAM.  4.7GB is a standard DVD and some are 9GB.

 

 


Thanks for all of the input guys!  I still have not had time to move the drives around but I did run the tests (I think I did it right).  I also ran it against the cache drive to see the difference...

 

WD20EARS

root@unRAID:~# dd if=/dev/zero of=/mnt/disk2/test.dd count=8192000 bs=64k count=8k conv=fdatasync

8192+0 records in

8192+0 records out

536870912 bytes (537 MB) copied, 24.6937 s, 21.7 MB/s

 

WD20EARX

root@unRAID:~# dd if=/dev/zero of=/mnt/disk6/test.dd count=8192000 bs=64k count=8k conv=fdatasync

8192+0 records in

8192+0 records out

536870912 bytes (537 MB) copied, 27.0628 s, 19.8 MB/s

 

CACHE

root@unRAID:~# dd if=/dev/zero of=/mnt/cache/test.dd count=8192000 bs=64k count=8k conv=fdatasync

8192+0 records in

8192+0 records out

536870912 bytes (537 MB) copied, 5.68581 s, 94.4 MB/s

 

I still want to try Weebo's suggestion of moving the parity drive off of the onboard SATA controller.  I am 99.99% sure that I put the parity drive there because I thought it would be faster for some reason.  The funny thing is (and I have yet to confirm) that I think I put the cache drive on an onboard port also.

 

John


If parity and cache are the only drives on the onboard ports, then you should be OK.

If there are data drives on the onboard ports, they will compete for bandwidth.

 

Remember every write operation is

 

read data block / read parity block / XOR / write data block / write parity block.
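That sequence is why only two reads are needed: the new parity can be computed from the old parity and the old data, without touching any other disk.  A toy sketch with made-up byte values:

```shell
# new_parity = old_parity XOR old_data XOR new_data
# (one byte here; the array applies this across every block it rewrites)
old_data=$(( 0x5A )); new_data=$(( 0x3C )); old_parity=$(( 0xF0 ))
new_parity=$(( old_parity ^ old_data ^ new_data ))
printf 'updated parity byte: 0x%02X\n' "$new_parity"
```

XOR-ing out the old data and XOR-ing in the new leaves parity consistent with the untouched disks, and it is also why every array write costs two reads plus two writes.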

 

If your cache is on the internal controller and you are at ~95MB/s, then you are where you should be, depending on drive model.

 


Well guys...

 

Before I even started to mess with moving drives around, I decided it was as good a time as any to upgrade some hardware.  Being the cheapass that I am :) I focused on open box and clearance items at Microcenter.  Here is what I got for a total of $161:

 

MB:  ASUS P8Z68-V LX ($59):  http://www.asus.com/Motherboards/Intel_Socket_1155/P8Z68V_LX/

CPU:  Intel Core i3 2120 LGA1155 3.30 GHz Boxed Processor ($89)

RAM:  Micro Center 4GB DDR3-1333 (PC3-10666) CL9 Dual Channel Desktop Memory Kit (Two 2GB Memory Modules) ($13)

 

So, my main question is this...

 

Which drives should go on which controller?  All drives on the AOC-SASLP-MV8s?  Parity/cache on the SATA 6Gb/s ports on the MB?

 

Thanks for all of the help!!!

 

John


Well, new hardware installed and I have to say...I am not seeing much of a difference.  :(  The only things left from the original box are the Norco 4220 case itself (I mention this due to the backplanes), the PSU, the cables and the AOC-SASLP-MV8 cards (each in an x16 slot).

 

Here are the results of some tests that I ran to each disk (same command as above) with and without parity enabled:

 

Disk    Make/Model                Location        Parity Enabled (MB/s)   Parity Disabled (MB/s)   Speed Increase
Parity  Hitachi HDS722020ALA330   SATA2 MB port   N/A                     N/A                      N/A
1       WD WD20EADS               SATA2 MB port   28.0                    56.8                     102.9%
2       WD WD20EARS               SATA2 MB port   20.6                    40.3                     95.6%
3       WD WD20EARS               SASLP-MV8 #1    22.4                    59.9                     167.4%
4       Samsung HD204UI           SASLP-MV8 #1    19.2                    32.2                     67.7%
5       WD WD15EADS               SASLP-MV8 #1    23.9                    61.2                     156.1%
6       WD WD20EARX               SASLP-MV8 #1    16.9                    65.0                     284.6%
Cache   Seagate ST31000528AS      SATA2 MB port   95.7                    97.1                     1.5%

 

And yes...I didn't even realize it, but I'm not running a single disk off of the 2nd card.

 

Do you guys think that the cables could be the issue?  BTW...check out the speed difference for the EARX drive with parity enabled/disabled.

 

This is the breakout cable that I am using to go from one backplane to the 4 SATA2 connections on the MB:  http://www.newegg.com/Product/Product.aspx?Item=N82E16816116097

 

This is the cable that I am using to go from the backplanes to the ports on the AOC-SASLP-MV8 cards:  http://www.newegg.com/Product/Product.aspx?Item=N82E16816133034

 

I am at a complete loss now!  :S

 

John

 


What did you use to derive these test numbers?

 

dd if=/dev/zero of=/mnt/disk2/test.dd count=8192000 bs=64k count=8k conv=fdatasync

 

Changed the disk# for each test, of course.



Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.