unRAID Server Release 6.2.0-beta20 Available



First time using 6.2-beta? Please read the Original 6.2-beta Announcement Post first.

 

IMPORTANT

- While every effort has been made to ensure no data loss, THIS IS BETA SOFTWARE... use at your own risk.

- Your server must have access to the Internet to use the unRAID 6.2 beta.

- Posts in this thread should be to report bugs and comment on features ONLY.

 

HOW TO REPORT A BUG

Think you've found a bug or other defect in the beta?  Ask yourself these questions before posting about it here:

- Have I verified the bug still occurs with all my plugins disabled (booting into safe mode)?

- Can I recreate the bug consistently, and have I documented the steps to do so?

- Have I downloaded my diagnostics from the unRAID webGui after the bug occurred, but before I rebooted the system?

Do not post about a bug unless you can confidently answer "Yes" to all three of those questions.  Once you can, be sure to follow these guidelines, but make sure to post as a reply in this thread, not as a new topic under defect reports (we track bug reports for beta/rc releases independently of the stable release).
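If the webGui is unreachable after the bug hits, recent unRAID builds also include a console command that produces the same diagnostics archive (worth verifying it exists on your build before relying on it):

diagnostics
# writes an archive such as /boot/logs/tower-diagnostics-YYYYMMDD-HHMM.zip, which you can attach to your reply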

 

Installing and updating the beta

If you are currently running a previous 6.2-beta release, clicking 'Check for Updates' on the Plugins page is the preferred way to upgrade.

 

Alternatively, navigate to Plugins/Install Plugin, copy this URL into the box, and click Install:

https://raw.githubusercontent.com/limetech/unRAIDServer-6.2/master/unRAIDServer.plg

 

You may also download the release and perform a fresh install.

 

A note from the developers

More bug fixes.  In particular, we squashed a bug where Windows 10 VMs running multimedia applications caused host CPUs to peg at near 100%.  This one was a doozy, and we had a -beta20 all ready to go that fixed the issue by reverting to the Linux 4.1.x kernel.  (We figured out the problem was introduced by some change in the kernel 4.3 merge window, but kernel 4.2.x is deprecated.)  Not happy with this compromise, and not wanting to wait for the KVM developers to acknowledge and fix the issue, our own Eric Schultz took the plunge and started "bisecting" the 4.3-rc1 release to find the patch that was the culprit.  It took something like 16 kernel builds to isolate the problem, and the fix turned out to be a genuine one-line change in a configuration file (/etc/modprobe.d/kvm.conf)!  A big thank you to Eric for his hard work on this!
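For the curious, that one-line change (listed in the changelog below as "Add halt_poll_ns=0 to kvm.conf") is a standard module option; assuming the usual modprobe syntax, the file ends up looking something like this:

# /etc/modprobe.d/kvm.conf
options kvm halt_poll_ns=0

Setting halt_poll_ns to 0 disables KVM's halt-polling, the short busy-wait the host does whenever a guest vCPU halts; with Windows 10's high-rate multimedia timers that polling adds up to the near-100% host CPU usage described above.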

 

unRAID Server OS Change Log
===========================

THIS IS BETA SOFTWARE
---------------------

While every effort has been made to ensure no data loss, **use at your own risk!**

Version 6.2-beta20 2016-03-25
-----------------------------

Base distro:

- aaa_elflibs: version 14.2 
- ethtool: version 4.5
- glibc-zoneinfo: version 2016b
- lvm2: version 2.02.147
- mc: version 4.8.16
- pciutils: version 3.4.1
- pkgtools: version 14.2
- procps-ng: version 3.3.11
- mozilla-firefox: version 45.0.1
- harfbuzz: version 1.2.4
- utempter: version 1.1.6

Linux kernel:

- version 4.4.6
- added missing firmware: ast_dp501_fw.bin
- use out-of-tree drivers:
  - Intel 10Gbit Ethernet driver ixgbe: version 4.3.13
  - Intel 10Gbit Ethernet driver ixgbevf: version 3.1.2
  - Highpoint Rocket r750: version 1.2.4

Management:

- Add halt_poll_ns=0 to kvm.conf - eliminates high CPU overhead in Windows 10 [kudos to Eric S. for this!]
- Fix auto-start array
- Fix upgrade process erroneous reference of /boot/config/domains.cfg to /boot/config/domain.cfg
- Quiet extraneous nfs start messages.
- When necessary to query keyserver, poll up to 45 seconds for a connection.

webGui:

- docker: Add Template Authoring Mode. 
- docker: Add the ability to keep templates in sync with the author's modifications
- docker: Fix: wrong variable name prevents config creation
- docker: Set default port mode to TCP and path mode to RW 
- dynamix: Introduce context-sensitive help
- dynamix: Get rid of SMART db update in monitor
- Do not activate context-sensitive help functionality for Docker and VMs pages yet
    
Version 6.2-beta19 2016-03-17
-----------------------------

Base distro:

- fix NFS mounts and warnings about missing IPv6
- removed obsolete 'apmd' and 'portmap' packages
- acpid: version 2.0.26
- docker: version 1.10.3
- cryptsetup: version 1.7.1
- grep: version 2.24
- gtk+3: version 3.18.9
- htop: version 2.0.1
- libdrm: version 2.4.67
- libnl3: version 3.2.27
- lvm2: version 2.02.145
- mozilla-firefox: version 45.0 (console GUI mode)
- mpfr: version 3.1.4
- nettle: version 3.2
- openssh: version 7.2p2
- p11-kit: version 0.23.2
- pciutils: version 3.4.1
- rpcbind: version 0.2.3
- samba: version 4.3.6
- xorg-server: version 1.18.2

Linux kernel:

- version 4.4.5
- added config options:
  - AMD_IOMMU_V2: AMD IOMMU Version 2 driver
  - INTEL_IOMMU_SVM: Support for Shared Virtual Memory with Intel IOMMU
  - SCSI_HPSA: HP Smart Array SCSI driver [per customer request for testing, may be removed]
- unraid: Correct sync start/end timestamps.
- unraid: Fix device spindown bug.
- unraid: Fix NEW_ARRAY case of Q not set invalid.
- unraid: Refinement in 'invalidslot' handling

Management:

- Certain mount errors can actually leave device mounted, so un-mount if any error detected.
- Change 'color' status of non-present parity devices from 'red-off' to 'grey-off'.
- correctly handle dual-parity "trust parity" flag.
- fix 'bash' error in /etc/rc.d/rc.6 (shutdown) script.
- Fix disks_mounted event generated after svcs_started.
- Get rid of "Identify" operation.
- Incorporate gfjardim suggestion to mark /mnt "shared" for better Docker integration.
- upgrade process now copies/upgrades bzroot-gui and syslinux/syslinux.cfg

webGui:

- docker: always show the 'Add another Path, Port or Variable' button
- docker: export ports as variable if Network is set to host
- docker: fix: 'WebUI' context menu item is now hidden when the web UI link is empty
- docker: removed 'Dry Run' button on create/edit container page
- docker: update pop-in dialogs to look better when using the Dynamix black theme
- Do not display unassigned parity devices when array is Started.
- fix context menu to escape non-safe css selector characters
- fix: when a disk rebuild completed, the notification incorrectly reported status "Canceled"
- vm manager: Fix cdrom bus type to use SATA when machine type is Q35

Version 6.2-beta18 2016-03-11
-----------------------------

Changes vs. unRAID Server OS 6.1.9.

Base distro:

- switch to 'slackware64-current' base packages
- avahi: version 0.6.32
- beep: version 1.3
- docker: version 1.10.2
- eudev: version 3.1.5a: support NVMe
- fuse: version 2.9.5
- irqbalance: version 1.1.0
- jemalloc: version 4.0.4
- libestr: version 0.1.10
- liblogging: version 1.0.5
- libusb: version 1.0.20
- libvirt: version 1.3.1
- lshw: version B.02.17 svn2588
- lz4: version r133
- mozilla-firefox: version 44.0.2 (console GUI mode)
- netatalk: version 3.1.8
- numactl: version 2.0.11
- php: version 5.6.19
- qemu: version 2.5.0
- rsyslog: version 8.16.0
- samba:
  - version: 4.3.5
  - enable asynchronous I/O in /etc/samba/smb.conf
  - remove 'max protocol = SMB3' from /etc/samba/smb.conf (automatic negotiation chooses the appropriate protocol)
- spice: version 0.12.6
- xorg-server: version 1.18.1
- yajl: version 2.1.0

Linux kernel:

- version 4.4.4
- default iommu to passthrough (iommu=pt)
- kvm: enabled nested virtualization
- unraid: array PQ support (dual-parity)

Management:

- Trial key now supports 6 devices, validates with limetech keyserver
- Pro key supports max 30 array devices, unlimited attached devices
- add 10Gb ethernet tuning in /etc/sysctl.conf
- add tunable: md_write_method (so-called "turbo write")
- array PQ support (dual-parity)
- do not auto-start parity operation when Starting array in Maintenance mode
- libvirt image file handling
- stop md/unraid driver cleanly upon system poweroff/reset
- support NVMe storage devices assignable to array and cache/pool
- support USB storage devices assignable to array and cache/pool
- system shares handling
- misc other improvements and bug fixes

webGui:

- all fixes and enhancements from 6.1.9
- added hardware profile page
- added service status labels to docker and vm manager settings pages
- docker: revamped docker container edit page (thanks gfjardim!)
- docker: now using docker v2 index/repos
- docker: updating a stopped container will keep it stopped upon completion
- dynamix-6.2: version 2016-03-11
- reverse the negative logic in docker and libvirt image fsck confirmation
- support user specified network MTU value
- vm manager: usb3 controller support, improved usb device sorting and display
- vm manager: integrated virtio driver iso downloader
- vm manager: support nvidia with hyper-v for windows guests
- vm manager: added auto option for vdisk location
- misc other improvements and bug fixes


- dynamix: Get rid of SMART db update in monitor

 

Can we fix it instead?  The underlying problem is pretty simple:

  https://lime-technology.com/forum/index.php?topic=47386.0

Debian got rid of it entirely due to "security concerns":

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=804299

 

I think this is something that is more properly added as a plugin, but open to other opinions.  We'll see about refreshing that header file for each release.


also my cpu cores are in separate fields now. before and after pics attached. is that new??

 

I think that's because none of your cores are hyperthreaded, so there are no thread siblings. What does the output of the following command show?

cat /sys/devices/system/cpu/*/topology/thread_siblings_list | sort -nu

 

 

See here: http://lime-technology.com/forum/index.php?topic=47261.msg452222#msg452222

 

root@Media:~# cat /sys/devices/system/cpu/*/topology/thread_siblings_list | sort -nu                        
0
1
2
3

 

I'll take a look at that


also my cpu cores are in separate fields now. before and after pics attached. is that new??

 

Yes, it's now shown as "thread pairs", two per row, instead of sequentially listing each core.  If you don't have hyperthreading (or have disabled it) then you'll see one core per row; otherwise you'll see the two threads that physically make up a CPU core per row.


also my cpu cores are in separate fields now. before and after pics attached. is that new??

 

Yes, it's now shown as "thread pairs", two per row, instead of sequentially listing each core.  If you don't have hyperthreading (or have disabled it) then you'll see one core per row; otherwise you'll see the two threads that physically make up a CPU core per row.

 

I believe my CPU doesn't support hyperthreading. I'll have so much open space now lol :'(


root@Media:~# cat /sys/devices/system/cpu/*/topology/thread_siblings_list | sort -nu                        
0
1
2
3

 

I'll take a look at that

 

Basically, for those with hyperthreaded systems (I really didn't think there were any systems without hyperthreading these days), it pairs the real core with its hyperthreaded sibling so you get a better idea of what's happening on your system. It also helps immensely with CPU pinning if you're running Dockers or VMs.

 

Previously it was too difficult to really know what was going on in your system, or why running a VM with 2 CPU cores of 0 and 1 would severely impact a VM with 2 CPU cores of 2 and 3. The typical reason is that CPUs 0 and 2 are likely threads of the same physical core, and CPUs 1 and 3 threads of the other physical core. To have a better Docker/VM experience, one would need to set up the first VM to use CPU cores 0 and 2, and the other VM to use CPU cores 1 and 3.
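As a concrete illustration (this assumes a hypothetical 2-core/4-thread host; the numbering varies by CPU), the same sysfs query quoted earlier in the thread makes the pairing visible:

cat /sys/devices/system/cpu/*/topology/thread_siblings_list | sort -nu
0,2
1,3

Each output line is one physical core, so pinning one VM to CPUs 0 and 2 and another to CPUs 1 and 3 keeps each VM on its own physical core.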


Basically, for those with hyperthreaded systems (I really didn't think there were any systems without hyperthreading these days), it pairs the real core with its hyperthreaded sibling so you get a better idea of what's happening on your system. It also helps immensely with CPU pinning if you're running Dockers or VMs.

 

Previously it was too difficult to really know what was going on in your system, or why running a VM with 2 CPU cores of 0 and 1 would severely impact a VM with 2 CPU cores of 2 and 3. The typical reason is that CPUs 0 and 2 are likely threads of the same physical core, and CPUs 1 and 3 threads of the other physical core. To have a better Docker/VM experience, one would need to set up the first VM to use CPU cores 0 and 2, and the other VM to use CPU cores 1 and 3.

 

all i read was "haha you don't have hyperthreading" LOL

but i get it. it makes sense for you hyperthreading people >_<


Yes, it's now shown as "thread pairs", two per row, instead of sequentially listing each core.  If you don't have hyperthreading (or have disabled it) then you'll see one core per row; otherwise you'll see the two threads that physically make up a CPU core per row.

Or, in the case of AMD processors, the Core Pairs on each package.

 

Which is all well and good, but my display doesn't show any of the speeds for any core on the dashboard

 

server_a-diagnostics-20160326-2015.zip


- dynamix: Get rid of SMART db update in monitor

 

Can we fix it instead?  The underlying problem is pretty simple:

  https://lime-technology.com/forum/index.php?topic=47386.0

Debian got rid of it entirely due to "security concerns":

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=804299

 

I think this is something that is more properly added as a plugin, but open to other opinions.  We'll see about refreshing that header file for each release.

 

Thanks Tom.  If you include the latest header file in each release that will really cut down on the need to run the update.  Right now I run it because the file included in 6.1.9 doesn't know about my 4TB Seagate NAS drives, even though they have been out for several years.

 

I do think it would make sense to fix the /usr/sbin/update-smart-drivedb script anyway, so that people who get shiny new hard drives between releases can update smartmontools to recognize them. But unRAID doesn't need to run the script automatically.
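For reference, running the script by hand amounts to the following (it needs Internet access; where the refreshed drivedb.h lands depends on how smartmontools was built, commonly under /usr/share/smartmontools/):

/usr/sbin/update-smart-drivedb
# on success it fetches and installs the latest drivedb.h so smartctl recognizes newer drives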


I have some issues with the beta and a VM.

In 6.1.9 I had a Windows 2012 R2 VM (running in BIOS mode, since I could not manage to create partitions in UEFI mode); it was running from one disk image, placed on Disk 1.

 

[...]

 

When I tried to access the data drive in the VM now, the VM hung.

I could not shut it down, and "Force Stop" did not work either. I just got this error:

"Execution error ... Failed to terminate process 16228 with SIGKILL: Device or resource busy"

 

Is this a bug in the 6.2 beta, or is it because I have one VM running on two different disk images, placed on different disks?

I will try to re-install it using just one disk image, placed on the cache drive.

Sounds exactly like THIS.

No solution or reason why, but it seems there are issues while running VMs with disks on the array.

 

More like a "bug" with vdisks placed on the array that crashes VMs when they start writing something to them.

If you place your OS vdisk on the array, you probably won't get far past the login, if you even make it that far.

 

Cache-only VMs do not show these issues. Workarounds would be to copy everything to the cache, run the vdisk outside of the array, or roll back.

 

This also seems to be fixed.

I can run VMs on the array again. At least one; I have to copy everything back to be sure.


I'm having an issue with the Unassigned Devices plugin when mounting a remote NFS share from another server.  The remote share mounts properly and is shared over NFS on the unRAID server properly, but I get recurring log messages about exporting the NFS share.

 

I'm doing something here that is a bit circular, and I am wondering about the wisdom of doing this.  Let's say that UD on the unRAID server mounts an NFS share from a NAS server locally on the unRAID server.  I mount the remote NFS share and re-share it on the unRAID server using both SMB and NFS (if enabled on unRAID).  So the remote NFS NAS share now ends up being re-shared with NFS on the unRAID server.
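To make the circular setup concrete, the mount UD performs is roughly equivalent to the following (the NAS hostname and export path here are made up for illustration):

mount -t nfs nas.local:/Public /mnt/disks/MediaServer_Public
# /etc/exports (shown below) then tries to re-export /mnt/disks/MediaServer_Public over NFS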

 

Anyway here are the log entries I am getting.

 

Mar 27 07:05:12 Tower rpc.mountd[18611]: authenticated mount request from 192.168.1.3:950 for /mnt/user/Public (/mnt/user/Public)
Mar 27 07:06:01 Tower root: exportfs: /mnt/disks/MediaServer_Public does not support NFS export
Mar 27 07:06:42 Tower root: exportfs: /mnt/disks/MediaServer_Public does not support NFS export
Mar 27 07:07:01 Tower root: exportfs: /mnt/disks/MediaServer_Public does not support NFS export
Mar 27 07:07:12 Tower root: exportfs: /mnt/disks/MediaServer_Public does not support NFS export
Mar 27 07:07:23 Tower root: exportfs: /mnt/disks/MediaServer_Public does not support NFS export
Mar 27 07:08:01 Tower root: exportfs: /mnt/disks/MediaServer_Public does not support NFS export

 

The /etc/exports file:

# See exports(5) for a description.
# This file contains a list of all directories exported to other computers.
# It is used by rpc.nfsd and rpc.mountd.
"/mnt/disks/MediaServer_Public" -async,no_subtree_check,fsid=200 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
"/mnt/user/Computer Backups" -async,no_subtree_check,fsid=103 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
"/mnt/user/Public" -async,no_subtree_check,fsid=100 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
"/mnt/user/iTunes" -async,no_subtree_check,fsid=101 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

 

This does not happen on 6.1.9.

 

Diagnostics attached.

tower-diagnostics-20160327-0707.zip


 

- When necessary to query keyserver, poll up to 45 seconds for a connection.

 

 

Does this mean that when booting unRAID, the key check now allows 45 seconds for the array to come online and VMs to start (i.e. a virtualised pfSense) before the key check disables the system for an invalid key?

This topic is now closed to further replies.