unRAID Server Release 6.2.0-beta18 Available



Hi jonp, firstly congrats on getting this out, awesome job! Do you know if the tweak to put iptable_mangle support back in got included in this release?

Hi bin,

 

Sorry for not replying to the PMs on the subject; I had forwarded them internally for review. I believe this was taken care of in 6.2, but if you could confirm for me, that'd be great.

 

No probs, I knew you guys were flat out working on 6.2. I don't have a test rig at the mo, so unfortunately I won't be able to test until 6.2 goes final, but I will let you know ASAP. I'm assuming 6.1.9 is going to be the last 6.1.x release, right? If not, it would be really cool if you could include mangle support in the last release of the 6.1 series.

 

Link to comment

NFS isn't working for me.

The system share wasn't created for me when running the upgrade.

I had to set the location manually and got Docker running fine,

but I'm still unable to get the VM tab to enable.

It's enabled, but the status says stopped.

 

EDIT:

libvirt wasn't working on the unassigned drive;

I just had to move it back to the cache,

and the VM is working now.
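For anyone else missing the system share: in 6.2 the Docker and libvirt images live under the new system share by default. From memory (verify your actual paths under Settings rather than trusting these), the defaults are:

/mnt/user/system/docker/docker.img
/mnt/user/system/libvirt/libvirt.img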

Link to comment

Just did an update via the plugin, and when the system rebooted the bzroot-gui file was not present on the USB drive, so I could not boot into GUI mode. The GUI said I was on the 6.2-beta18 release, so the other files had been updated successfully; not sure what went wrong, as no error message was shown.

 

Corrected it by copying it across from the ZIP download version of the release.
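For anyone else hitting this, a minimal sketch of the fix (assuming the release ZIP was downloaded to /tmp and the flash drive is mounted at /boot, as it is on a running unRAID box; the exact ZIP filename may differ):

cd /tmp
unzip unRAIDServer-6.2.0-beta18-x86_64.zip bzroot-gui   # extract just the missing file
cp bzroot-gui /boot/                                    # copy it onto the flash drive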

 

P.S. Really like having the GUI mode available. Being able to use the attached monitor to also run a VM is a boon, as my system does not support hardware pass-through and so cannot drive a GPU directly from a VM. This means the unRAID box can now act as a lightweight desktop, whereas before it was effectively only useful as a headless box.

Link to comment

NFS appears to be failing.  Log entries:

 

Mar 11 23:49:08 Tower rpc.statd[8155]: Version 1.3.3 starting

Mar 11 23:49:08 Tower sm-notify[8156]: Version 1.3.3 starting

Mar 11 23:49:08 Tower sm-notify[8156]: Already notifying clients; Exiting!

Mar 11 23:49:08 Tower rpc.statd[8155]: failed to create RPC listeners, exiting

Mar 11 23:49:08 Tower root: Starting NFS server daemons:

Mar 11 23:49:08 Tower root:  /usr/sbin/exportfs -r

Mar 11 23:49:08 Tower root:  /usr/sbin/rpc.nfsd 8

Mar 11 23:49:08 Tower root: rpc.nfsd: address family AF_INET6 not supported by protocol TCP

Mar 11 23:49:08 Tower root: rpc.nfsd: unable to set any sockets for nfsd

Mar 11 23:49:08 Tower root:  /usr/sbin/rpc.mountd

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower rpc.mountd[8166]: mountd: No V2 or V3 listeners created!

 

I had the same issue; NFS wouldn't start. Followed this article and commented out two lines in /etc/netconfig, which got NFS started as a temporary workaround:

 

https://www.novell.com/support/kb/doc.php?id=7011354
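For reference, the workaround in that article amounts to commenting out the IPv6 transport entries in /etc/netconfig so the NFS daemons stop trying to open udp6/tcp6 sockets. Roughly like this (a sketch; the exact contents of the file on unRAID may differ):

udp        tpi_clts      v     inet     udp     -       -
tcp        tpi_cots_ord  v     inet     tcp     -       -
#udp6      tpi_clts      v     inet6    udp     -       -
#tcp6      tpi_cots_ord  v     inet6    tcp     -       -
rawip      tpi_raw       -     inet     -       -       -
local      tpi_cots_ord  -     loopback -       -       -
unix       tpi_cots_ord  -     loopback -       -       -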

Link to comment

Took the plunge..

 

Updated the system, disabling all VMs and Dockers beforehand.

 

The system did not autostart.

 

I enabled my second parity disk (it had been precleared and was waiting for this very moment).

 

Started the array.

 

I had no VM tab.

 

In VM settings, VMs were still enabled.

 

I disabled and re-enabled them, which made the tab appear (guessing this is due to the Dynamix webGUI).

 

I did the pre-startup actions for the VMs (edit, change video to QXL).

 

I have three VMs; none of them started. All primary disks were no longer allocated, so I set each to manual, browsed to the primary disk, and saved/updated. This made them work again.

 

One of my VMs had two disks attached; it also did not start, but when set to manual its primary disk was found again by itself, so I did not have to browse to it.

 

Now on to the Dockers...

Link to comment

NFS appears to be failing.  Log entries:

 

Mar 11 23:49:08 Tower rpc.statd[8155]: Version 1.3.3 starting

Mar 11 23:49:08 Tower sm-notify[8156]: Version 1.3.3 starting

Mar 11 23:49:08 Tower sm-notify[8156]: Already notifying clients; Exiting!

Mar 11 23:49:08 Tower rpc.statd[8155]: failed to create RPC listeners, exiting

Mar 11 23:49:08 Tower root: Starting NFS server daemons:

Mar 11 23:49:08 Tower root:  /usr/sbin/exportfs -r

Mar 11 23:49:08 Tower root:  /usr/sbin/rpc.nfsd 8

Mar 11 23:49:08 Tower root: rpc.nfsd: address family AF_INET6 not supported by protocol TCP

Mar 11 23:49:08 Tower root: rpc.nfsd: unable to set any sockets for nfsd

Mar 11 23:49:08 Tower root:  /usr/sbin/rpc.mountd

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for udp6

Mar 11 23:49:08 Tower root: rpc.mountd: svc_tli_create: could not open connection for tcp6

Mar 11 23:49:08 Tower rpc.mountd[8166]: mountd: No V2 or V3 listeners created!

 

I had the same issue; NFS wouldn't start. Followed this article and commented out two lines in /etc/netconfig, which got NFS started as a temporary workaround:

 

https://www.novell.com/support/kb/doc.php?id=7011354

Scratch that. I thought the daemon had started, but rpc.statd won't restart...
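If anyone else is debugging the same thing, a quick way to see what statd is actually doing (standard nfs-utils paths; starting it by hand is only meant to surface the error, not as a proper fix):

ps aux | grep '[r]pc.statd'    # check whether statd is actually running
/usr/sbin/rpc.statd            # try starting it by hand...
tail -n 20 /var/log/syslog     # ...then check the log for the failure reason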

Link to comment

Took the plunge..

 

Updated the system, disabling all VMs and Dockers beforehand.

 

The system did not autostart.

 

I enabled my second parity disk (it had been precleared and was waiting for this very moment).

 

Started the array.

 

I had no VM tab.

 

In VM settings, VMs were still enabled.

 

I disabled and re-enabled them, which made the tab appear (guessing this is due to the Dynamix webGUI).

 

I did the pre-startup actions for the VMs (edit, change video to QXL).

 

I have three VMs; none of them started. All primary disks were no longer allocated, so I set each to manual, browsed to the primary disk, and saved/updated. This made them work again.

 

One of my VMs had two disks attached; it also did not start, but when set to manual its primary disk was found again by itself, so I did not have to browse to it.

 

Now on to the Dockers...

 

All Dockers appear to need an update... kind of weird.

 

needo/Couchpotato, upgrade/start:    worked (took a long time to start again)

needo/Deluge, upgrade/start:    worked

aptalca/dolphin, upgrade/start:    worked

needo/sabnzbd, upgrade/start:    worked

needo/sickrage, upgrade/start:    worked (took a long time to start again)

gfjardim/transmission, upgrade/start:    worked

gfjardim/crashplan, upgrade/start:    worked
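For what it's worth, each "upgrade/start" above should boil down to roughly the following under the hood (container name hypothetical; the webGUI recreates the container from its saved template after pulling):

docker pull needo/sabnzbd                  # fetch the updated image
docker stop SABnzbd && docker rm SABnzbd   # remove the old container
# ...then recreate the container from the saved template in the webGUI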

 

 

Link to comment

I don't like the way the Add Config button for Dockers is hidden in the advanced view, and the button name itself is too vague.

 

The Add Config screen itself is also rather confusing.

 

I foresee the next hot topic in Docker support being "how do I add another volume", etc., especially for things like Plex, where people will want to add differing media types.
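For context, "adding another volume" corresponds to one more -v host:container mapping on the underlying docker run. Something like this (image name and paths purely illustrative, not the actual template):

docker run -d --name=plex \
  --net=host \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Movies:/movies \
  -v /mnt/user/TV:/tv \
  needo/plex

The Add Config screen is effectively asking the user to build one of those -v (or -p/-e) mappings field by field.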

Link to comment

 

I don't like the way the Add Config button for Dockers is hidden in the advanced view, and the button name itself is too vague.

 

The Add Config screen itself is also rather confusing.

 

I foresee the next hot topic in Docker support being "how do I add another volume", etc., especially for things like Plex, where people will want to add differing media types.

+1

Link to comment

I was also missing the GUI boot image after the plugin upgrade.

 

Dockers also needed updating, and they took forever.

 

The new GUI boot is great; it's awesome being able to make changes at my desk with a full-size keyboard and mouse instead of squinting at a laptop!

 

How did you fix the missing GUI boot image?

Link to comment

I was also missing the GUI boot image after the plugin upgrade.

 

Dockers also needed updating, and they took forever.

 

The new GUI boot is great; it's awesome being able to make changes at my desk with a full-size keyboard and mouse instead of squinting at a laptop!

 

How did you fix the missing GUI boot image?

 

Download the v6.2-beta18 ZIP manually and copy bzroot-gui across by hand, I'd guess..

Link to comment

I was also missing the GUI boot image after the plugin upgrade.

 

Dockers also needed updating, and they took forever.

 

The new GUI boot is great; it's awesome being able to make changes at my desk with a full-size keyboard and mouse instead of squinting at a laptop!

 

How did you fix the missing GUI boot image?

 

Download the v6.2-beta18 ZIP manually and copy bzroot-gui across by hand, I'd guess..

 

Yes.

Link to comment

I have a Windows VM with two vdisks allocated (one for the system and one for data). They are in .vdi format, although I suspect that is not relevant. If I simply use the VM without doing anything, it works fine. When I follow the documented update procedure and use the Edit option on the VM, one of the vdisk entries gets removed. As it is the first one (the system disk), the VM is effectively useless after this!

Link to comment

This is what I experienced also... I did not try it without the edits, though...

After doing the Edit (and manually adding back the disk that was removed), I then start getting the message

Warning: libvirt_domain_xml_xpath(): namespace warning : xmlns: URI unraid is not absolute in /usr/local/emhttp/plugins/dynamix.vm.manager/classes/libvirt.php on line 936 Warning: libvirt_domain_xml_xpath():
and the list of VMs is only correct up to the one I have just edited, then goes wrong from that one onward. Not sure if it is relevant, but I have the images hosted on an SSD that is external to the array.

 

Not sure how to get around this other than reverting to 6.1.9 and repeating the update to 6.2. Anybody got any ideas?
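In case it helps with debugging, the warning suggests the VM manager's PHP layer is tripping over the custom namespace in the edited domain XML. The raw definition libvirt holds can be inspected with virsh, which ships alongside unRAID's libvirt (substitute your own VM name for the quoted one):

virsh list --all                        # confirm which domains libvirt knows about
virsh dumpxml "Windows 10" | head -30   # inspect the edited domain's XML, namespace included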

Link to comment

Important: Your server will require internet access upon boot in order to validate with the LimeTech key server.

 

Is this a one-time validation, OR is this new version crippled unless it phones home at EVERY reboot???

HOPEFULLY this will only apply to the beta and evaluation versions. If I can't start my server because my internet happens to be down... >:( >:(

 

Has there been any word on this? I block outside network access for my unRAID box at the firewall (and won't be changing that). I thought our keys were tied to the USB drive; what is there to authenticate?

Link to comment

Important: Your server will require internet access upon boot in order to validate with the LimeTech key server.

 

Is this a one-time validation, OR is this new version crippled unless it phones home at EVERY reboot???

HOPEFULLY this will only apply to the beta and evaluation versions. If I can't start my server because my internet happens to be down... >:( >:(

 

Has there been any word on this? I block outside network access for my unRAID box at the firewall. I thought our keys were tied to the USB drive; what is there to authenticate?

 

Either there is a keygen out there, or they are trying to prevent one from being made now that they are gaining speed with new customers. However, if this isn't implemented properly and unRAID won't start (won't start the array/services, I assume), then that is a huge dealbreaker, and I will use OMV or a combination of something else. unRAID is nice and pretty and makes things easy, but I won't let a bad thunderstorm, one that makes my UPS shut my servers down and knocks out my internet at the same time (it has happened more than once), hold my data hostage.

 

Edit: I work for a fairly well-known company with millions of installations of our flagship software. We also implemented a key server, but the software is more relaxed if it has already authenticated before: if it has, it grants an additional grace period for purchased features before it gets upset that it can't phone home. That would be an acceptable solution here; a sketch of the idea follows. Also, what happens if their key server goes down? Ours has, both through our own mistakes and through our hosting having issues. I hope they have thought this through.
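Purely illustrative pseudologic for that grace-period idea, not how unRAID actually validates (validate_with_keyserver and the state-file path are made up):

LAST_OK=$(cat /boot/config/.last_validation 2>/dev/null || echo 0)
NOW=$(date +%s)
GRACE=$((14 * 24 * 3600))                    # e.g. a 14-day offline grace window
if validate_with_keyserver; then             # hypothetical: true when the key server answers
    date +%s > /boot/config/.last_validation # remember the last successful check
elif [ $((NOW - LAST_OK)) -lt $GRACE ]; then
    echo "key server unreachable; still inside the grace period"
else
    echo "validation required; refusing to start the array"
    exit 1
fi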

Link to comment