Drive performance testing (version 2.6.5) for UNRAID 5 through 6.4



So I recently updated and ran this read speed test again. See attached for plots. I'm happy with the results, even though my two older 2TB drives are pretty slow. The REALLY slow one (disk5) currently has no data on it. When I get to the point of filling up my array, I will upgrade my parity drive and replace this 2TB drive with the current 4TB parity.

 

However, I have a question: most of my drives that contain data have read speeds over 100MB/sec. However, if I try to COPY a file from one share to another using Midnight Commander or the Krusader docker, I am rarely able to exceed a measly 40MB/sec write speed. This is USING the SSD cache drive. I figure the write/copy speed would be limited by the read speed of the slowest drive involved in the process, but for whatever reason I can never even come close to those lightning-fast write speeds to the cache. Any ideas?
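(As a sanity check, a raw dd write, bypassing the page cache, should show what the SSD itself can do, independent of any copy tool. A rough sketch only; /mnt/cache is the usual unRAID mount point, adjust if yours differs:)

# write 1GB straight to the cache drive, bypassing the page cache
dd if=/dev/zero of=/mnt/cache/speedtest.tmp bs=1M count=1024 oflag=direct
rm /mnt/cache/speedtest.tmp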

diskspeed.html

Link to comment
3 minutes ago, Smitty2k1 said:

However, I have a question: most of my drives that contain data have read speeds over 100MB/sec. However, if I try to COPY a file from one share to another using Midnight Commander or the Krusader docker, I am rarely able to exceed a measly 40MB/sec write speed. This is USING the SSD cache drive. I figure the write/copy speed would be limited by the read speed of the slowest drive involved in the process, but for whatever reason I can never even come close to those lightning-fast write speeds to the cache. Any ideas?

That seems like the sort of speed you would get if you were reading from and writing to the parity array. Are you saying you are copying from one User share to another User share, and that the destination User share is configured so the file is written to cache first? Are you sure it is written to cache and not to an array disk?
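One way to verify (the share name below is a placeholder, substitute your own):

# if the copy landed on the cache, the file shows up under /mnt/cache
ls -lh /mnt/cache/YourDestinationShare/
# and not (yet) under the individual array disks
ls -lh /mnt/disk*/YourDestinationShare/ 2>/dev/null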

Link to comment

I am copying a file from one user share to a different user share, not overwriting any files. The source files are on the array (not the cache); the destination writes to the cache (confirmed by viewing the cache files through the unRAID GUI after the copy).

 

I used to have an old Atom CPU, so I always attributed the poor performance to that. However, I've been running a Xeon for a while now and still get the same speeds.

 

I've checked by copying a file from an SSD in a Windows PC to the unRAID cache (SSD) over a gigabit network, and it saturates the gig ethernet. Therefore I assumed it was a slow read speed from the array disks, but this script is telling me otherwise.

Link to comment
49 minutes ago, Smitty2k1 said:

I am copying a file from one user share to a different user share, not overwriting any files. The source files are on the array (not the cache); the destination writes to the cache (confirmed by viewing the cache files through the unRAID GUI after the copy).

 

I used to have an old Atom CPU, so I always attributed the poor performance to that. However, I've been running a Xeon for a while now and still get the same speeds.

 

I've checked by copying a file from an SSD in a Windows PC to the unRAID cache (SSD) over a gigabit network, and it saturates the gig ethernet. Therefore I assumed it was a slow read speed from the array disks, but this script is telling me otherwise.

Do you use the Cache Dirs plugin?

Link to comment
  • 3 weeks later...

So, here's a weird thing.

 

Been using the plugin. It won't push the disks over 7-10MB/s, and it pegs one CPU at 100%.

 

 

[Attachment: chart.png]

 

[Attachments: Screen Shot 2017-05-06 at 7.59.06 PM.png, Screen Shot 2017-05-06 at 7.59.19 PM.png]

 

 

 

I tried uninstalling and reinstalling the plugin. No dice. I know it's not a hardware problem, because a quick non-correcting parity check shows:

 

[Attachment: Screen Shot 2017-05-06 at 8.03.22 PM.png]

 

 

Thoughts? It works flawlessly on my other server, which is nearly identical to this one except for the processors, RAM, and disks. This did work when I first installed the plugin a few days ago.

 

Link to comment
8 minutes ago, 1812 said:

So, here's a weird thing.

 

Been using the plugin. It won't push the disks over 7-10MB/s, and it pegs one CPU at 100%.

 

 

[Attachment: chart.png]

 

[Attachments: Screen Shot 2017-05-06 at 7.59.06 PM.png, Screen Shot 2017-05-06 at 7.59.19 PM.png]

 

 

 

I tried uninstalling and reinstalling the plugin. No dice. I know it's not a hardware problem, because a quick non-correcting parity check shows:

 

[Attachment: Screen Shot 2017-05-06 at 8.03.22 PM.png]

 

 

Thoughts? It works flawlessly on my other server, which is nearly identical to this one except for the processors, RAM, and disks. This did work when I first installed the plugin a few days ago.

 

Strange.  

 

Does the standalone script work fine?

Edited by Squid
Link to comment
9 minutes ago, Squid said:

Strange.  

 

Does the standalone script work fine?

 

 

Appears so, but I don't know what the warning is for:

 

diskspeed.sh for UNRAID, version 2.6.4
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV

Warning: Files in the array are open. Please refer to /tmp/lsof.txt for a list
/dev/sdb (Cache): 268 MB/sec avg

 

Link to comment
2 hours ago, 1812 said:

 

 

Appears so, but I don't know what the warning is for:

 


diskspeed.sh for UNRAID, version 2.6.4
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV

Warning: Files in the array are open. Please refer to /tmp/lsof.txt for a list
/dev/sdb (Cache): 268 MB/sec avg

 

The warning is exactly what it says.  There are open files.  Open files may affect the testing results.  
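(If you want to see for yourself what is holding files open, look at the /tmp/lsof.txt the script writes, or run lsof against the mount points yourself; given a mount point, lsof lists everything open on that filesystem. Paths assumed:)

# list files held open on the array disks and the cache
lsof /mnt/disk* /mnt/cache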

 

99% of the script is identical between the plugin and the bare script. The main differences are some changes made prior to testing, just to get the raw script to work under a different environment. At this time I have no clue why you're seeing such a wide disparity between the two versions (I'm certainly not), as they are nearly identical, but I'll think about it...

Link to comment
8 hours ago, Squid said:

The warning is exactly what it says.  There are open files.  Open files may affect the testing results.  

 

99% of the script is identical between the plugin and the bare script. The main differences are some changes made prior to testing, just to get the raw script to work under a different environment. At this time I have no clue why you're seeing such a wide disparity between the two versions (I'm certainly not), as they are nearly identical, but I'll think about it...

 

Something else interesting I noticed this morning: I ran the test on an SSD with no open file activity, which still produced a result of around 7MB/s. So after the first segment finished, I pressed cancel. The page refreshed to show the test as not running. I then opened the stats page, where it still showed disk activity at 7MB/s, cycling through the different read intervals that were specified. I opened another tab to the dashboard and could also see one CPU thread pegged at 100%. So despite cancelling, it still continued on with the test.

 

The log only showed this:

 

May 7 07:44:33 Brahms1 emhttp: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin checkall
May 7 07:45:41 Brahms1 root: kill 12110
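(For anyone hitting the same thing: the lingering test can be stopped by hand. A rough sketch, assuming the worker still shows up under the script name in ps, which may not match your system:)

# find the background test process(es) still hammering the disk
ps aux | grep -i diskspeed
# then kill the offending PID, e.g.
kill <PID>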

 

Link to comment
  • 2 weeks later...

I wonder if it would be possible to modify the script to optionally run each disk's test simultaneously; it might provide insight into controller bottlenecks.

So all selected disks would run their 0% test, then all would run the 10% test, etc. I haven't looked into the script to see if the loops are set up in a way that could easily be modified to do it this way, and obviously each individual test would need to be forked. Conceptually, something like the sketch below.
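(A rough sketch of the idea only, not the script's actual loop structure; the device list and the dd-based reads are assumptions:)

# fire off the same read on every selected disk at once, then wait
DISKS="/dev/sdb /dev/sdc /dev/sdd"    # assumed device list
for PCT in 0 10 20 30 40 50 60 70 80 90; do
    for DEV in $DISKS; do
        SIZE=$(blockdev --getsize64 $DEV)        # disk size in bytes
        SKIP=$(( SIZE * PCT / 100 / 1048576 ))   # offset in 1MB blocks
        # read 1GB at PCT% of the disk, forked into the background
        dd if=$DEV of=/dev/null bs=1M count=1024 skip=$SKIP iflag=direct &
    done
    wait    # all disks finish this test point before the next one starts
done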

Edited by CraziFuzzy
Link to comment
  • 3 weeks later...
25 minutes ago, jbartlett said:

The Plugin version doesn't hang up the GUI while running if you're running 6.4 or higher due to 6.4 no longer being 100% single threaded for web calls.

Even under 6.3 it won't hang the GUI while it's running... Everything is done in the background.

Link to comment

I have an Areca ARC-1231ML whose attached drives unRAID on its own is unable to identify. I have the Dynamix SCSI Devices plugin installed, which allows the drives to be properly identified in the web GUI. Would it be possible to check for and make use of that translation if your script gets an "Unable to determine" value for the drive ID?

[Attachment: Screenshot_1.jpg]

Link to comment
31 minutes ago, chaosratt said:

So neither the plugin nor the bare script seems to be generating graphs for me now. I get the small table with the disk names, but the area where the graph should be is empty.

If you run the stand-alone script with the -l (log) option, you can send that along and I'll be able to debug what's happening.
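(Something like this; the path is wherever you keep the script:)

# -l writes a log of the run for debugging
bash ./diskspeed.sh -l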

Link to comment
On 6/8/2017 at 10:11 AM, interwebtech said:

I have an Areca ARC-1231ML whose attached drives unRAID on its own is unable to identify. I have the Dynamix SCSI Devices plugin installed, which allows the drives to be properly identified in the web GUI. Would it be possible to check for and make use of that translation if your script gets an "Unable to determine" value for the drive ID?

[Attachment: Screenshot_1.jpg]

Does the command "lsscsi -i" reveal the information?
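(For example; the udevadm fallback is just a suggestion if lsscsi alone doesn't show it:)

# list SCSI devices; -i adds the udev-derived device identifier
lsscsi -i

# udev properties for a single device (replace sdb with the drive in question)
udevadm info --query=property --name=/dev/sdb | grep -E 'ID_MODEL|ID_SERIAL'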

Link to comment
