[SOLVED] HELP: Upgraded Parity - Now Super Slow Writes To Array


DZMM


Blimey, this is awful.

 

I'm trying to move my old TV recordings from my cache to the array to free up space and I'm getting 1.5MB/s, and that's using turbo write!!!  I must have a duff drive, as I can't be the only person whose setup causes problems like this, especially when I've stopped almost all other activity.

 

Are there any additional tests I can do to prove a fault with the drive?  The retailer won't do no-quibble exchanges, so maybe I could try an RMA with Seagate?
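(For reference, one common way to gather evidence for an RMA is the drive's SMART data plus an extended self-test from the unRAID console, which ships with smartctl. A rough sketch only, with /dev/sdX standing in for whichever device node unRAID shows for the suspect drive:

smartctl -a /dev/sdX        # current SMART attributes and error log
smartctl -t long /dev/sdX   # start an extended self-test; re-run "smartctl -a" once it finishes

Reallocated or pending sector counts, or a failed self-test, are the sort of evidence a manufacturer RMA usually wants to see.)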

 

[Screenshot: unRAID Main page (http://172.30.12.2/Main) showing the array drives]

highlander-diagnostics-20171020-1025.zip

Link to comment

This might not help you, but I notice you have Deluge writing torrents to your parity protected array. I did this when I originally installed my Deluge docker and experienced terrible performance. The WebUI would freeze, no access to docker pages, SMB shares slow to load, etc. I changed my /data path in Deluge to point to a UD mount outside the parity protected array and all those problems disappeared. Now I just set up Sonarr/Radarr to copy the completed torrents to a parity protected share and delete the torrent once seeding is complete. Might be something you could test and see if it helps.
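(As an illustration only - the exact paths depend on your setup - the change is just re-pointing the container's /data host path in the Deluge docker template from a parity-protected share to an Unassigned Devices mount, e.g.:

  /data  ->  /mnt/user/downloads    (old: parity-protected array share)
  /data  ->  /mnt/disks/torrents    (new: UD mount outside the array)

/mnt/disks/ is where the Unassigned Devices plugin mounts drives; "downloads" and "torrents" here are just made-up example names.)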

Link to comment
24 minutes ago, wgstarks said:

This might not help you, but I notice you have Deluge writing torrents to your parity protected array. I did this when I originally installed my Deluge docker and experienced terrible performance. The WebUI would freeze, no access to docker pages, SMB shares slow to load, etc. I changed my /data path in Deluge to point to a UD mount outside the parity protected array and all those problems disappeared. Now I just set up Sonarr/Radarr to copy the completed torrents to a parity protected share and delete the torrent once seeding is complete. Might be something you could test and see if it helps.

I'm moving the majority of my writes to my cache, including Deluge, to bypass the parity drive as a temporary fix, but they're not the problem.  The unRAID team got in touch and asked me to do some transfers and send the diagnostics.  I did two transfers after a reboot, with all dockers that could cause problems (like Deluge) stopped - one transfer of 52GB between two drives took 1hr 21 mins at 11MB/s!!!

 

 

highlander-diagnostics-20171020-1343.zip

Link to comment

Latest update, in case anyone else is having similar problems.  The limetech team have been helping, and the change below got me up to around 35MB/s without turbo write and about 55MB/s with turbo when moving files.

 

That's not near the 75-100MB/s other users seem to be getting, but hopefully the limetech team can squeeze a bit more out of my drive.

 

Quote

I have another thing for you to try, if it's not too much trouble.  Your motherboard has two SATA controllers totaling 10 SATA ports.  Your current configuration, and the location of each physical connection from the motherboard to each device, looks like this:

 

4-Port "black ports" SATA controller

  ata1:  disk2  [sdb] 5TB, TOSHIBA HDWE150, Z5O6KDQHF57D

  ata2:  disk3  [sdc] 5TB, TOSHIBA HDWE150, Z5O5K4MRF57D

  ata3:  disk1  [sdd] 2TB, Hitachi HDS722020ALA330, JK1170YAHTAXWP

  ata4:  parity [sde] 8TB, ST8000AS0022-1WL17Z, Z840QTHH

 

6-Port SATA controller

  ata5:  cache  [sdf] 500GB, HFS500G32TND-N1A2A, FJ68N448010508H0N

  ata6:  cache2 [sdg] 250GB, HFS250G32TND-N1A2A, FJ69N40661080954I

  ata7:  cache3 [sdh] 250GB, Crucial_CT250MX200SSD1, 15260FDF29F6

  ata8:  disk5  [sdi] 6TB, TOSHIBA HDWE160, 968BK4EEF56D

  ata9:  disk6  [sdj] 6TB, TOSHIBA HDWE160, 969DK08KF56D

  ata10: disk4  [sdk] 6TB, TOSHIBA HDWE160, 86H5K3T2F56D

 

 

Could you reroute the connections to look like this?

 

4-Port "black ports" SATA controller

  ata1:  cache  [sdb] 500GB, HFS500G32TND-N1A2A, FJ68N448010508H0N

  ata2:  cache2 [sdc] 250GB, HFS250G32TND-N1A2A, FJ69N40661080954I

  ata3:  cache3 [sdd] 250GB, Crucial_CT250MX200SSD1, 15260FDF29F6

  ata4:  disk1  [sde] 2TB, Hitachi HDS722020ALA330, JK1170YAHTAXWP

 

6-Port SATA controller

  ata5:  parity [sdf] 8TB, ST8000AS0022-1WL17Z, Z840QTHH

  ata6:  disk2  [sdg] 5TB, TOSHIBA HDWE150, Z5O6KDQHF57D

  ata7:  disk3  [sdh] 5TB, TOSHIBA HDWE150, Z5O5K4MRF57D

  ata8:  disk5  [sdi] 6TB, TOSHIBA HDWE160, 968BK4EEF56D

  ata9:  disk6  [sdj] 6TB, TOSHIBA HDWE160, 969DK08KF56D

  ata10: disk4  [sdk] 6TB, TOSHIBA HDWE160, 86H5K3T2F56D

 

 

The ordering isn't important, as long as the smaller devices are all connected to the 4-port SATA controller (the bottom-most black ports).  You won't have to change anything in unRAID's web interface; it'll automatically detect the drives and assign them the same diskX/parity slots as before.

 

Link to comment

If you're running without parity, there shouldn't be any difference between normal and turbo write.

 

If you're running with parity and moving files between array drives, then those speeds don't seem unreasonable, since moving involves copying files to the destination and then deleting from the source. So both drives are written to, and both writes also update parity.
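(As a rough sanity check on the numbers reported above: 52GB in 1hr 21mins works out to about 52 × 1024 MB ÷ 4860 s ≈ 11MB/s of net progress, and because the move is reading the source disk while writing the destination disk and parity, the underlying drives are doing several times that much raw I/O.)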

Link to comment
1 hour ago, trurl said:

 

If you're running with parity and moving files between array drives, then those speeds don't seem unreasonable, since moving involves copying files to the destination and then deleting from the source. So both drives are written to, and both writes also update parity.

Ahh, that makes sense - the higher speeds I've seen quoted must have been for writes to the array, not disk-to-disk transfers.

 

I'm finally a happy bunny.  I was getting frustrated with the slowness this was causing.

Link to comment

Final update.  @trurl I've just done a cache-to-array transfer to eliminate the double write, and I got 95MB/s for a 14GB transfer with turbo write on.  That's with all my other disk activity going on at the same time, so I think I would easily have got over 100MB/s if I'd shut all my other dockers down etc.  Happy days!

 

I would never have thought I'd have problems with my two motherboard SATA controllers.  I'm a bit worried about what will happen when I add a PCIe controller into the mix, as I've almost maxed out my onboard connectors.  That won't happen for a while though, as at the next upgrade I'm going to remove the two 250GB cache pool SSDs and replace them with one 500GB SSD to free up a port.
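(For anyone else doing the same re-shuffle, a quick way to double-check which ATA port each drive ended up on from the console - a rough sketch, and the port numbers will obviously differ on other boards:

for d in /sys/block/sd*; do echo "$(basename $d): $(readlink $d | grep -o 'ata[0-9]*' | head -1)"; done

Each sdX device's sysfs path contains the ataN port it hangs off, which should line up with the layout in the limetech list above.)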

Link to comment
  • 2 months later...

Second update: adding a new note as I've found another source of the slow speeds.

 

In my unRAID settings my server address is set to 172.30.12.2, and I have several VLANs set up à la 6.4.  What I noticed yesterday is that when I'm on my main Windows 10 VM on VLAN50 (172.35.12.x), the unRAID server is accessible at 172.35.12.2 as well as 172.30.12.2 - not sure how or why.

 

Anyway, if I connect to my server via SSH from VLAN50 to 172.35.12.2 instead of 172.30.12.2, I get the expected transfer speeds, e.g. 50MB/s+ when copying between drives.  I was also having a problem with PuTTY disconnecting after a few minutes when connected to 172.30.12.2, which is now fixed if I connect to 172.35.12.2.

 

@bonienl @limetech where has the 172.35.12.2 address come from?  I've changed some of my other VLAN50 devices (e.g. Kodi boxes) to point to 172.35.12.2 rather than 172.30.12.2, and again this has fixed intermittent disconnect problems.

[Screenshot: unRAID Network Settings page]

Edited by DZMM
Link to comment

Well, your network settings show the 172.35.12.2 IP. And yes, the current docker network support requires unRAID to be up on all interfaces your dockers might be interested in using. An unseen side effect is that the server can be reached over SSH, SMB or NFS on all of those IPs, so an even bigger attack surface. And your Windows VM connecting to the IP on its local VLAN is going to be more performant than connecting to one on a different VLAN.
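(If you want to see exactly which addresses the server has ended up with, listing the IPv4 address on every interface from the console is a quick check, nothing unRAID-specific:

ip -4 addr

Any VLAN interface shown there with an address is one the server can be reached on.)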

Link to comment

RFC1918 defines the private 172 address range as 172.16.xxx.yyy up to 172.31.xxx.yyy.

Addresses like 172.35.xxx.yyy are public addresses and are in use 'somewhere'; you may not be able to reach certain internet sites if they happen to be on those public addresses.
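(For reference, RFC1918 reserves three blocks for private use: 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. 172.35.x.x falls just outside the 172.16.0.0 to 172.31.255.255 block, so it is routable public address space that may be allocated to someone else on the internet.)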

 

When you start working with different network segments, you need to realize that IP addresses in different networks can only reach each other via your router. This may be something you want, if your router acts as a firewall and determines what is and isn't allowed between networks. Devices in the same network always talk directly to each other, without the router involved.

 

Link to comment
6 minutes ago, bonienl said:

RFC1918 defines the private 172 address range as 172.16.xxx.yyy up to 172.31.xxx.yyy.

Addresses like 172.35.xxx.yyy are public addresses and are in use 'somewhere'; you may not be able to reach certain internet sites if they happen to be on those public addresses.

 

 

 

Duh - when I created my VLANs I didn't even check that they were valid private addresses.  Will fix this - thanks.

 

6 minutes ago, bonienl said:

When you start working with different network segments, you need to realize that IP addresses in different networks can only reach each other via your router. This may be something you want, if your router acts as a firewall and determines what is and isn't allowed between networks. Devices in the same network always talk directly to each other, without the router involved.

 

I have this all working, although one of my VLANs couldn't reach the unRAID server; I think fixing the ranges above might do the trick.

Link to comment
