unRAID Server Release 6.0-beta4-x86_64 Available



Yes, you're right, I meant adding a drive and letting unRAID clear it. I've not tried using the preclear script.

Can't post a log now; I could do it later today, but here is a quick screenshot I took this morning: http://i.imgur.com/rLcZRyd.jpg

4 GB RAM and no add-ons.

I would suggest you get your array back up without the new disk. Then use the preclear script on the new drive. Maybe not related to your problem, but it is a safe way to make sure your new disk is good enough to put into your array.
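In case it helps, a minimal example of running it from the console (the script path and flags here are assumptions based on the usual preclear setup, and sdX is a placeholder; preclearing wipes the drive, so triple-check the device name first):

# Hypothetical invocation of the preclear script from the flash drive
/boot/preclear_disk.sh -l          # list drives that look like candidates for preclearing
/boot/preclear_disk.sh /dev/sdX    # then preclear the new drive (destroys its contents)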

Question for the community: is screen not part of the standard install of 6.0 beta 4?

 

I couldn't find it, so I tried to install it manually, and I'm getting the following error when I try to run it.

 

root@Tower:/boot/config/plugins# upgradepkg --install-new screen-4.0.3-x86_64-4.txz

+==============================================================================
| Installing new package ./screen-4.0.3-x86_64-4.txz
+==============================================================================

Verifying package screen-4.0.3-x86_64-4.txz.
Installing package screen-4.0.3-x86_64-4.txz:
PACKAGE DESCRIPTION:
# screen (screen manager with VT100/ANSI terminal emulation)
#
# Screen is a full-screen window manager that multiplexes a physical
# terminal between several processes (typically interactive shells).
# Each virtual terminal provides the functions of a DEC VT100 terminal
# and several control functions from the ISO 6429 (ECMA 48, ANSI X3.64)
# and ISO 2022 standards (e.g. insert/delete line and support for
# multiple character sets).  There is a scrollback history buffer for
# each virtual terminal and a copy-and-paste mechanism that allows
# moving text regions between windows.
#
Executing install script for screen-4.0.3-x86_64-4.txz.
Package screen-4.0.3-x86_64-4.txz installed.

root@Tower:/boot/config/plugins# screen
screen: error while loading shared libraries: libutempter.so.0: cannot open shared object file: No such file or directory
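A quick way to confirm what's missing, as a hedged sketch (the utempter package name/version below is an assumption; grab whichever matching Slackware package your build uses):

# Flag any shared libraries screen wants that the linker can't find
ldd /usr/bin/screen | grep "not found"

# If libutempter.so.0 is the only gap, installing the matching utempter
# package (name/version assumed here) should satisfy it:
upgradepkg --install-new utempter-1.1.6-x86_64-2.txz
ldconfig    # refresh the linker cache, then try 'screen' again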

 


I've been getting some weird errors with this beta version when trying to rename some files. The error message is copied below, along with a syslog.

 

I suspect I've got some permissions problems, so I'm going to run the new permissions script, but I thought I'd read that it's not recommended once you're already running version 5 or higher, which I have been since I first installed unRAID.

 

I'm using the Xen version and have an Arch VM running, along with SABnzbd and SickBeard; but I thought they were isolated from unRAID, so I'm still not sure what's going on.

 

When this error happens, I lose all file system activity in every program, and my laptop is basically useless until it's done trying to rename the files, which takes 5-10 minutes.

error.jpg

syslog.txt


Good morning everyone,

 

So I installed 6.0 beta 4 two days ago, and both mornings since, I've found the system completely unresponsive.

 

It was a clean install (freshly formatted USB) on a system that had been running reliably on 5.0.5.

 

The web interface doesn't respond, trying to connect over telnet or ssh is unsuccessful, and a network scan does not find an IP address for unRAID.

 

The only thing I can do is log into the IPMI console and power cycle the box.

 

I can't (or don't know how to) get a log of what happened overnight, so the only thing I did this morning was grab a screenshot of the "monitor" over the IPMI console, which I have attached.

 

Your help would be appreciated, either in fixing this or in giving Tom more feedback so he can understand what is happening.

 

Thank you!

Screen_Shot_2014-04-25_at_7_17.56_AM.png


I am having problems with the mover. If I manually start it, it works fine, emptying the cache disk. However, once the cache disk is empty, the mover will no longer work. To get it working again, I have to stop and restart the array. It does not appear to work on its automatic schedule at all.

 

I am not sure whether this is a software or hardware problem, as I have had this problem before in 5.0 (including RC and beta). For information, I am running an ASUS P5Q Deluxe / Celeron E1500 / 4 GB RAM with 2 x Supermicro AOC-SASLP-MV8 and 16 drives, including parity and cache.


Quote: "I've been getting some weird errors with this beta version when trying to rename some files. [...]"

A few quick thoughts (without looking at the syslog): 

 

As you know, a 'rename' is a form of 'move', which normally recognizes that the source location is the same as the destination and so just modifies the file entry with the new name. If, however, there is a problem determining that the locations are identical, then a physical move is started: copy everything, then delete from the source. That obviously takes much longer, which corresponds with the long delay you are seeing. I don't have any ideas as to why it would mistake the source and destination locations here, though. The fact that it indicates it needs 493MB to make the move does seem to indicate a true physical move, not a rename.
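If you want to see which of the two is happening, here's a rough sketch (the paths are hypothetical, and strace has to be available on the box doing the move):

# Trace the rename syscalls behind a 'mv' to see if it's a true rename
strace -f -e trace=rename,renameat mv /mnt/user/media/old.mkv /mnt/user/media/new.mkv
# Same-filesystem move: rename()/renameat() returns 0 almost instantly.
# Cross-device move: the call fails with EXDEV and mv falls back to a full
# copy of the data followed by deleting the source.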

 

Since the share 'media' appears to be 23TB, it must be spread across multiple disks, so the free space of 4.2TB is the sum of the individual free spaces. It looks like it is trying to move to a destination with much less space, either another array drive within the share or the cache drive. Check those for enough space. And remember that new file operations on a ReiserFS drive that is almost full are very inefficient and very slow.

 

No real answers here, but I hope this may point you in the right direction.


Quote: "So I installed 6.0 beta 4 two days ago, and both mornings since, I've found the system completely unresponsive. [...]"

It may or may not be a memory problem, but it's the easiest thing to check, so try running an overnight memtest on the system. Memory usage is different in 64-bit, so it's possible that some part of memory is being accessed now under 64-bit v6 that was inaccessible under 32-bit v5.

 

More likely, I think, it's a 64-bit incompatibility somewhere. Make sure you don't have any 32-bit-only add-ons, plugins, or packages running or installing. One thing that runs overnight is the mover; can you test it and see if it runs manually? A syslog (captured before the crash!) may point out something incorrect under v6. And of course, try booting in Safe Mode and testing.
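If the crash keeps eating the log before you can save it, one workaround is to snapshot the syslog to flash on a loop; a minimal sketch, assuming a stock unRAID layout:

# Copy the syslog to the flash drive every 5 minutes so the latest copy
# survives a hard crash (adjust the interval/paths to taste)
mkdir -p /boot/logs
while true; do
  cp /var/log/syslog /boot/logs/syslog-latest.txt
  sleep 300
done &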


Quote: "I am having problems with the mover. If I manually start it, it works fine, emptying the cache disk. [...]"

Since it happens under multiple releases, it would be better to move this to the General Support forum.  If you can, zip and attach syslogs from various releases, that include the mover working.


Quote: "As you know, a 'rename' is a form of 'move', which normally recognizes that the source location is the same as the destination and so just modifies the file entry with the new name. [...]"

Those are good points. Now that you mention it, most of my drives are VERY full. Four disks have 0 KB available, four have under 300 KB available, and the other two are newly added and are the only real source of available space.

 

Device   Temp.  Size  Used    Free     Errors
Parity   33 °C  4 TB  -       -        0
Disk 1   35 °C  3 TB  3 TB    16.4 KB  0
Disk 2   36 °C  3 TB  3 TB    291 KB   0
Disk 3   33 °C  3 TB  3 TB    0 B      0
Disk 4   33 °C  2 TB  2 TB    4.10 KB  0
Disk 5   32 °C  3 TB  3 TB    0 B      0
Disk 6   22 °C  1 TB  1 TB    0 B      799
Disk 7   21 °C  1 TB  204 GB  796 GB   0
Disk 8   30 °C  3 TB  3 TB    49.2 KB  0
Disk 9   32 °C  2 TB  2 TB    0 B      0
Disk 10  29 °C  4 TB  529 GB  3.47 TB  0

 

I'm currently running a parity check due to having to hard-boot the machine, but when that finishes, I'll move a few hundred gigs off all the full drives and see if that helps.

 

With that said, I really do feel like this is a bug in unRAID. If it's trying to 'move' or 'rename' a file onto the same disk but that disk is full, it should know to use another disk with free space. I have no disk grouping set up, so there is no reason not to use the available space instead of giving me errors.

 

Either way, I appreciate the response, and I think moving some files around should resolve this issue.

 

Thanks again.



I'm not sure whether to post here or in the Xen area.

 

My unRAID server suddenly became unreachable through the network.

 

I'm running 6.0-beta4 on the hardware detailed in my .sig.  I'm running three virtual machines - two of IronicBadger's ArchVM pre-rolled images, and another running WinXP.  This XP machine was installed new yesterday.

 

One of the ArchVM machines is running a number of services - MySQL (MariaDB), Deluged, minidlna, LogitechMediaServer.

 

Dom0 is running a few plugins: apcupsd, tftp-hpa, fan_speed, dovecot and mpop.

 

The other two VMs are virtually dormant.  I did have a vnc connection open to the WinXP machine.  The second ArchVM has xfce loaded with tigervnc server - there were no connections to this.

 

I have been running beta4 for more than two weeks, and the first ArchVM for almost as long.  tftp-hpa has been running for more than a week -  and the other plugins have been running since day 1 with beta4, and for many months on v5.0.

 

I did have ssh sessions open to Dom0 and the two ArchVMs.

 

There was an XBMC media server playing a video from a user share.

 

I was playing with the Dom0, via ssh, trying to enable the session name to appear in the title bar of my Gnome terminal.

 

All of a sudden, everything stopped - XBMC stopped playing, my Squeezeboxen blanked, my deluge transfers froze etc.  I was unable to access any shares from my Ubuntu desktop, and all the ssh sessions froze.  I couldn't get a response from the Tower emhttp interface.  Tower would respond to pings, but not any of the VMs.

 

I went to ipmi and was able to capture the attached screen image. I was aware that fan_speed was still active because I could hear the fans speeding up - the drives must have been getting warm(er).

 

I didn't have the presence of mind to attempt any interaction via ipmi!  :-[

 

I did a soft reset from ipmi and everything came back up as normal, with a parity check running.

 

Does anyone have any idea what may have caused this?

 

Just had a complete crash again. Dom0 responds to ping, but that's all: no SSH, no webui, and no access to any domU. It's the exact same error as peterb (see post#167 for a better screenshot); please see his post above. I googled the output from the console and got a pretty close match; it looks like the issue is fixed in kernel 3.12-rc1 or later. Tom, please take a look here: http://lists.xen.org/archives/html/xen-devel/2013-10/msg00585.html. This is a pretty serious bug, as it crashes the entire server. Screenshot:

 

http://tinypic.com/r/w20ow6/8
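For anyone comparing against that thread, it's worth checking what you're actually running (xl should be present on the Xen builds; the 3.12-rc1 figure is from the mailing-list post above):

uname -r                      # dom0 kernel version
xl info | grep xen_version    # Xen hypervisor version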

 


Quote: "It may or may not be a memory problem, but it's the easiest thing to check, so try running an overnight memtest on the system. [...]"

Thanks RobJ.

 

It occurred to me during the day that the only thing that is different during the night is the mover. So, I will disable the mover tonight, run the mover script manually, and see if I can recreate the problem.

 

If the mover is the problem, then that's an easy temporary fix.
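For reference, a minimal sketch of running the mover by hand while watching the log (the mover path below is the stock unRAID location as far as I know; adjust if yours differs):

tail -f /var/log/syslog &     # watch for errors while the mover runs
/usr/local/sbin/mover         # invoke the mover manually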

 

I have tried invoking the mover manually, and so far, after a few minutes, it seems to be a part of the problem. The minute I invoked the mover, the web interface became unresponsive and unRAID stopped responding to telnet or ssh (really, telnet seems to respond but won't let me enter a username and password, while ssh simply does not respond).

 

When I try to access a share remotely, it appears to be accessible, but it gives me an error when I actually try to open it.

 

The log is attached as a PDF.

 

I hope this helps.

 

In the meantime, I'll disable the mover and the cache drive and wait for a fix or the next beta.

 

Thanks again.

FileServer_syslog.pdf


Quote: "Just had a complete crash again. Dom0 responds to ping, but that's all: no SSH, no webui, and no access to any domU. [...]"

Hopefully we can get 3.14 on the next beta.

 

Sent from my SM-N9005 using Tapatalk

 

 


This happened to me also. I built a new server last week with an i5 and 17 GB of RAM, with 8 GB assigned to unRAID and 8 GB to an Ubuntu VM, and I haven't had a problem since.

 

In short, for me anyway, I chalked it up to running out of memory, as I only had 4 GB shared between the two.
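For what it's worth, dom0's memory can be pinned on the Xen boot line so the VMs can't starve it; a hypothetical syslinux.cfg fragment (the surrounding menu entry varies by release, only the dom0_mem option itself is standard Xen, and the sizes are examples only):

# dom0_mem pins the RAM reserved for unRAID (dom0); the rest stays free for domUs
append /xen dom0_mem=8192M,max:8192M --- /bzimage --- /bzroot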

 

Kryspy

 

Sent from my Nexus 5 using Tapatalk

 

 


So, my problems continue.

 

I disabled the mover to see if it was the cause of the problem. While the server did not become completely unresponsive, I woke this morning to a new unpleasant surprise.

 

At some point during the night, the server had rebooted and was unable to find two disks, so the array had not restarted. I did a reboot and the server found all the disks in the array, but it wanted to do a parity check since it was an unclean shutdown.

 

My log is attached.

 

I think I will revert back to 5 for now, which is a real disappointment because I was really looking forward to moving Sickbeard and SABnzbd to a VM.

 

Hope this helps me or Tom find a solution to the problem.

Apr_26_04_crash_log.txt


Quote: "I have tried invoking the mover manually, and so far, after a few minutes, it seems to be a part of the problem. [...]"

Your syslog excerpt seems to show clearly that the General Protection faults occurred because of the mover, with the first occurring only 10 seconds after it started. Initially the rsync module was corrupted, then later nmbd (part of Samba, which provides your network access to your shares) was corrupted. I still believe memory is a suspect (test it overnight), but it could also be an incompatible library somewhere (wait for a later release), a hardware fault (hmmm...), or a BIOS fault (check for a BIOS update).
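For anyone digging through their own logs, the faults are easy to pull out of a saved syslog text file, e.g.:

grep -n -B2 -A4 -i "general protection" syslog.txt   # each fault plus a little context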

 

Just a request: we always prefer syslogs as complete as possible, and in their original text form, unmodified in any way. A PDF was fine for this short excerpt, but plain text files are easier for me to compare with similar syslogs.


Quote: "Your syslog excerpt seems to show clearly that the General Protection faults occurred because of the mover, with the first occurring only 10 seconds after it started. [...]"

Thank you!

 

I'm on the road for a bit, so I'll just roll back to 5.0.5 until I'm back home with enough time to "play" with this. I will run a memory test as well.

 

I'm still trying to sort out how to "capture" the log. I use the web GUI, but there is likely a better way, isn't there? <half grin> The latest one (you'll see my problems continue) I copied from the web GUI and saved in a text file. I hope that helps, but I'd welcome any other suggestions.

 

Thanks again.

 


Quote: "So, my problems continue. [...] At some point during the night, the server had rebooted and was unable to find two disks, so the array had not restarted. [...]"

The syslog was from the reboot; it does not show a crash, but it does show more disk problems than were visible to you. Four drives had problems starting: one (Disk 12) was very slow and very late to be set up; another (Disk 6) set up, but its partition table was unreadable; and the other two (Disk 4 and Disk 13) could not be read at all. Glad you rebooted! It acted as if there wasn't enough power to spin them up fast enough, but that is just conjecture and may not be correct.
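A couple of quick read-only follow-up checks on a drive like Disk 6 might be worthwhile; a hedged sketch (sdf is a placeholder, use the real device name from the syslog):

fdisk -l /dev/sdf                 # is the partition table readable now?
smartctl -a /dev/sdf | head -40   # SMART identity/health, if smartctl is on your build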

 

For now, v5 is probably a better choice.  I don't know what Tom can do here though, if your memory and other hardware and BIOS are fine.  Perhaps a future Linux release will prove better.

 

As to syslogs, our older advice was Capturing your syslog.  Our newer advice is here.  I always recommend zipping the syslog text file, as it is highly compressible.
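A minimal console sketch of that capture-and-compress step, assuming stock paths (gzip is used here since it is always present; a zip tool may not be on every build):

LOG=/boot/syslog-$(date +%Y%m%d-%H%M).txt
cp /var/log/syslog "$LOG"   # snapshot the full syslog to the flash drive
gzip "$LOG"                 # produces e.g. syslog-20140426-0730.txt.gz, ready to attach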


Quote: "The syslog was from the reboot; it does not show a crash, but it does show more disk problems than were visible to you. [...]"

Thanks again, RobJ!

 


Working on integrating a newer kernel, but -beta5 will be released without it instead, in order to address the 'heartbleed' bug, which might affect some unRAID users (probably < 0.1% of you).

 

That's great news about the newer kernel; fingers crossed it fixes the crash I'm seeing. Hopefully it will be included in v6b6?


Quote: "I would suggest you get your array back up without the new disk. Then use the preclear script on the new drive. [...]"

Thanks.

I tried the preclear script, but it is failing in the 2nd step with a "Kernel panic - not syncing: Fatal exception in interrupt" error. I have created a new thread for this: http://lime-technology.com/forum/index.php?topic=33105.0

 

