eschultz

Version 6.3.0-rc5 Release Notes


Hmm, well my PuTTY is set to UTF-8... but maybe I'm carrying those "hidden files" from long ago...

When I had this problem it was always from downloads, in other words, files/folders that were named by someone else on some other system.


When I had this problem it was always from downloads, in other words, files/folders that were named by someone else on some other system.

 

Well yeah, I hadn't thought about that. You know, nowadays the internet is so fast that it's quicker to download than to rip my own. ;) And hey, gotta use that bandwidth for something!


After updating to rc5 I noticed a problem with my Ubuntu Server VM (16.04 LTS). A share I have on my cache drive (a 2-disk RAID 1 cache pool), which is mounted in the VM, is now not writable. I discovered this when SABnzbd running within the VM started throwing error messages when trying to create directories within the share. I checked the permissions and they are OK. I tried creating another share on the cache drive, same result. I scrubbed the cache pool with no errors. All other shares on the array (not on the cache drive) are behaving normally. Any suggestions?


Check the permissions and/or ownership for the /mnt/cache mount point directory itself?

 

The permissions are the same as for the array share mount points. Another strange issue: when I open VNC to get to a console, it asks for a password. When I start the VM without the cache drive mounted, it doesn't. Could these two issues be related? Strange. :-\
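For reference, checking (and if needed resetting) the ownership and permissions on the cache mount point itself from the unRAID console would look something like this (sharename is just a placeholder; nobody:users and 777 are what the standard array share mount points use, so adjust as appropriate):

ls -ld /mnt/cache /mnt/cache/sharename    # compare against /mnt/disk1 etc.
chown nobody:users /mnt/cache/sharename   # example reset of the owner
chmod 777 /mnt/cache/sharename            # example reset of the permissions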


After updating to rc5 I noticed a problem with my Ubuntu Server VM (16.04 LTS). A share I have on my cache drive (a 2-disk RAID 1 cache pool), which is mounted in the VM, is now not writable. I discovered this when SABnzbd running within the VM started throwing error messages when trying to create directories within the share. I checked the permissions and they are OK. I tried creating another share on the cache drive, same result. I scrubbed the cache pool with no errors. All other shares on the array (not on the cache drive) are behaving normally. Any suggestions?

 

To clarify, you are saying that you are getting errors when trying to write to a cache-only share from an Ubuntu VM running on unRAID?

 

My first question would be whether you can write to that same share from another machine on your network. Second, could you please attach your system diagnostics to a reply on this topic so we can examine further?


 

To clarify, you are saying that you are getting errors when trying to write to a cache-only share from an Ubuntu VM running on unRAID?

 

Yes.

 

My first question would be whether you can write to that same share from another machine on your network. Second, could you please attach your system diagnostics to a reply on this topic so we can examine further?

 

Yes, no problem from other machines.

 

Diagnostics attached.

tower-diagnostics-20161130-1417.zip


I'm having 2 issues:

 

1st - NVMe temperature reporting stopped working on the 6.3.0-rc releases; it worked on 6.2.4 with both my devices (Samsung 950 Pro and Toshiba/OCZ RD400).

 

2nd - on rc5 only, I can't copy bzimage and bzroot to the flash share. I can create folders and copy other files, but not those two: the old file is deleted but I get a strange Windows error. If I go back to rc4 or earlier it works again.

Screenshot_2016-12-01_17_12_06.png


I am amazed your VMs even run without the cache mounted, since the VM settings (/etc/libvirt) live in a disk image on the cache volume.


 

 

2nd - on rc5 only, I can't copy bzimage and bzroot to the flash share. I can create folders and copy other files, but not those two: the old file is deleted but I get a strange Windows error. If I go back to rc4 or earlier it works again.

 

Same... can't copy the bzimage and bzroot files to the flash share in rc5. Can create folders and delete.


I'm getting over 1GB/s writing to unRAID now, but still only 500MB/s down... but it is no longer choppy.

 

I'm seeing the same lower read speeds, and I've been trying various tweaks without success. I can write to unRAID at >1GB/s when it's caching to RAM and at ~800MB/s sustained to an NVMe cache SSD, but read speed from the same device, using user or disk shares, tops out at ~500MB/s.

 

Tom, are there any tunables we/you could try?

 

This isn't related to this release, but since it's the one I'm using now I wanted to give an update, in the hope that it may help other 10GbE users.

 

Since Direct_IO was implemented I could get 1GB/s uploading to unRAID, but "only" 500-600MB/s downloading. I finally had some time to test and found the reason. I feel a little foolish, because it was the low MTU setting: despite knowing that jumbo frames are recommended for 10GbE, I never thought to try them before, since my problem was with download speed only.

 

Now with SMB 2.0.2, MTU set to 9000 and Direct_IO enabled (when copying from a user share), I can get 1GB/s download from unRAID when copying from the NVMe cache device, so I'm a happy camper. 

mtu.png
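For anyone who wants to check or trial this from the console before changing it in the unRAID network settings, something like the following works (eth0 is just an example interface name, and an MTU set this way does not survive a reboot):

ip link show eth0 | grep -o 'mtu [0-9]*'   # show the current MTU
ip link set eth0 mtu 9000                  # trial jumbo frames until the next reboot
ping -M do -s 8972 -c 4 192.168.1.10       # verify the path passes 9000-byte frames (example IP; 8972 = 9000 minus IP/ICMP headers)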


Forgot to add that in some circumstances Direct_IO can provide better performance even for gigabit users (only when copying from a user share). I've found that some disks won't deliver full gigabit speed with Direct_IO disabled; some examples below (with SMB set to 2.0.2 due to the current issues with SMB3).

 

disk1 - Toshiba DT01ACA100 (7200rpm)

disk2 - Seagate ST1000DM003 (7200rpm)

disk3 - Samsung HD204UI (5400rpm)

disk4 - WD Purple WD10PURX (5400rpm)

Direct_IO.png
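For reference, pinning the protocol like that is a single Samba setting; on unRAID it goes in the extra Samba configuration (Settings -> SMB), which as far as I know ends up in /boot/config/smb-extra.conf:

server max protocol = SMB2_02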


 

 

2nd - on rc5 only, I can't copy bzimage and bzroot to the flash share. I can create folders and copy other files, but not those two: the old file is deleted but I get a strange Windows error. If I go back to rc4 or earlier it works again.

 

Same... can't copy the bzimage and bzroot files to the flash share in rc5. Can create folders and delete.

The vfs_fruit additions are the culprit; we're fixing this for rc6.
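In the meantime, a workaround sketch that skips SMB entirely is to copy the two files over SSH (Tower is an example hostname; /boot is where the flash drive is mounted on the server):

scp bzimage bzroot root@Tower:/boot/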


Forgot to add that in some circumstances Direct_IO can provide better performance even for gigabit users (only when copying from a user share). I've found that some disks won't deliver full gigabit speed with Direct_IO disabled; some examples below (with SMB set to 2.0.2 due to the current issues with SMB3).

 

disk1 - Toshiba DT01ACA100 (7200rpm)

disk2 - Seagate ST1000DM003 (7200rpm)

disk3 - Samsung HD204UI (5400rpm)

disk4 - WD Purple WD10PURX (5400rpm)

So you're saying you can exceed 1Gbit by almost 1.5x with Direct_IO on? 200MB/s is about 1.6Gbit...


So you're saying you can exceed 1Gbit by almost 1.5x with Direct_IO on? 200MB/s is about 1.6Gbit...

 

That's not possible; I'm using 10GbE. But some disks, especially the WD, can't reach max gigabit speed (~114MB/s) without Direct_IO enabled (when copying from a user share).


That's not possible; I'm using 10GbE. But some disks, especially the WD, can't reach max gigabit speed (~114MB/s) without Direct_IO enabled (when copying from a user share).

OK, thanks, got it.

Another option to overcome some network limitations is traffic compression.

I'm using it with rsync when doing offsite backups - it almost doubles my 100Mbit internet connection speed.

I've never tested it on a LAN. Is traffic compression available for plain Samba/NFS?
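For reference, the rsync compression mentioned above is just the -z flag; a sketch of an offsite run (host and paths are examples only):

rsync -avz --compress-level=6 -e ssh /mnt/user/Backups/ user@offsite.example.com:/backups/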


I'm not sure how this could have happened:

 

root@Lapulapu:~# ls -la /mnt
/bin/ls: cannot access '/mnt/user': Transport endpoint is not connected
total 16
drwxr-xr-x 12 root   root  240 Nov 26 04:30 ./
drwxr-xr-x 18 root   root  400 Dec  3 13:31 ../
drwxrwxrwx  1 nobody users  40 Nov 26 04:30 cache/
drwxrwxrwx  3 nobody users  22 Nov 26 04:30 disk1/
drwxrwxrwx  3 nobody users  22 Nov 26 04:30 disk2/
drwxrwxrwx  3 nobody users  22 Nov 26 04:30 disk3/
drwxrwxrwx  2 nobody users   6 Nov 26 04:30 disk4/
drwxrwxrwx  2 nobody users   6 Nov 26 04:30 disk5/
drwxrwxrwx  5 nobody users  66 Nov 26 04:30 disk6/
drwxrwxrwx  4 nobody users  80 Nov 26 04:31 disks/
d???  ? ?      ?       ?            ? user/
drwxrwxrwx  1 nobody users  22 Nov 26 04:30 user0/
root@Lapulapu:~# 

 

I discovered it when Plex (the LimeTech container) complained that my media were offline. I stopped the array and restarted it, but /mnt/user was neither removed nor recreated. So I grabbed diagnostics (attached) and rebooted, which fixed the problem. I don't see anything untoward in the syslog.

 

lapulapu-diagnostics-20161203-2347.zip
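For what it's worth, when /mnt/user dies like that the user-share FUSE mount (shfs, if I remember the process name correctly) has usually crashed and left a stale endpoint; a few checks that can confirm it before resorting to a reboot:

ls -ld /mnt/user          # reports "Transport endpoint is not connected" when the FUSE process is gone
mount | grep /mnt/user    # see whether the user-share entry is still listed in the mount table
ps aux | grep -i shfs     # check whether the user-share process is still running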


So you're saying you can exceed 1Gbit by almost 1.5x with Direct_IO on? 200MB/s is about 1.6Gbit...

 

That's not possible; I'm using 10GbE. But some disks, especially the WD, can't reach max gigabit speed (~114MB/s) without Direct_IO enabled (when copying from a user share).

I wonder if this is something LimeTech could enable by default in the future? Or is there some downside to it?

 

Sent from my LG-H990 using Tapatalk

 

 


So you're saying you can exceed 1Gbit by almost 1.5x with Direct_IO on? 200MB/s is about 1.6Gbit...

 

That's not possible; I'm using 10GbE. But some disks, especially the WD, can't reach max gigabit speed (~114MB/s) without Direct_IO enabled (when copying from a user share).

I wonder if this is something LimeTech could enable by default in the future? Or is there some downside to it?

 

Sent from my LG-H990 using Tapatalk

 

I have enabled it on all my servers. Most of them are gigabit, but the speed difference is very noticeable, especially with WD disks for some reason. I can't find any downside, but different hardware may give different results.


 

I have enabled it on all my servers. Most of them are gigabit, but the speed difference is very noticeable, especially with WD disks for some reason. I can't find any downside, but different hardware may give different results.

 

So, how does one enable Direct_IO?


 

I have enabled it on all my servers. Most of them are gigabit, but the speed difference is very noticeable, especially with WD disks for some reason. I can't find any downside, but different hardware may give different results.

 

So, how does one enable Direct_IO?

 

Settings -> Global Share Settings

 

Note this only improves performance when using user shares (I don't notice any impact when using disk shares). The write speed improvement should only be noticeable with 10GbE and SSD/NVMe devices; read speed can improve significantly with HDDs (it varies with HDD maker/model, at least in my tests).
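For anyone who wants to compare before/after from a Linux client, a quick and dirty read test against a mounted share looks something like this (mount point and file name are examples; use a file bigger than the client's RAM, or drop the page cache between runs so the numbers aren't skewed):

dd if=/mnt/unraid/Movies/bigfile.mkv of=/dev/null bs=1M status=progress   # sequential read from the mounted share
sync; echo 3 > /proc/sys/vm/drop_caches                                   # clear the client's page cache between runs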
