SOLVED - VM Crashing since update to 6.2.3


Recommended Posts

Marking this as solved: the Chrome browser was refreshing an open tab, which caused the array stop/VM shutdown.

 

 

 

 

My VMs have been crashing at random since upgrading to 6.2.3, and I am getting this in my VM logs (the pid varies):

 

qemu-system-x86_64: terminating on signal 15 from pid 11336

 

Does anyone know what this means?  Full diagnostics are attached.  I am going to run memtest to see what that turns up.

 

Here is the syslog from around that time (12:31 on 11/6/2016):

 

Nov  6 11:28:38 Tower kernel: mdcmd (43): spindown 6
Nov  6 11:28:54 Tower kernel: mdcmd (44): spindown 4
Nov  6 11:29:49 Tower kernel: mdcmd (45): spindown 1
Nov  6 11:30:08 Tower kernel: mdcmd (46): spindown 3
Nov  6 11:52:12 Tower kernel: mdcmd (47): spindown 0
Nov  6 12:31:08 Tower emhttp: Spinning up all drives...
Nov  6 12:31:08 Tower kernel: mdcmd (48): nocheck 
Nov  6 12:31:08 Tower kernel: md: nocheck_array: check not active
Nov  6 12:31:08 Tower kernel: mdcmd (49): spinup 0
Nov  6 12:31:08 Tower emhttp: shcmd (1281): /usr/sbin/hdparm -S0 /dev/sdf &> /dev/null
Nov  6 12:31:08 Tower kernel: mdcmd (50): spinup 1
Nov  6 12:31:08 Tower kernel: mdcmd (51): spinup 2
Nov  6 12:31:08 Tower kernel: mdcmd (52): spinup 3
Nov  6 12:31:08 Tower kernel: mdcmd (53): spinup 4
Nov  6 12:31:08 Tower kernel: mdcmd (54): spinup 5
Nov  6 12:31:08 Tower kernel: mdcmd (55): spinup 6
Nov  6 12:31:17 Tower emhttp: Stopping services...
Nov  6 12:31:17 Tower cache_dirs: Stopping cache_dirs process 5219
Nov  6 12:31:17 Tower unassigned.devices: Unmounting Devices...
Nov  6 12:31:17 Tower emhttp: 
Nov  6 12:31:17 Tower emhttp: 
Nov  6 12:31:18 Tower emhttp: shcmd (1283): /etc/rc.d/rc.libvirt stop |& logger
Nov  6 12:31:21 Tower root: Domain d13cb820-b946-f513-afc6-960d2061f381 is being shutdown
Nov  6 12:31:21 Tower root: 
Nov  6 12:31:27 Tower kernel: usb 3-9.1.4.7: reset low-speed USB device number 7 using xhci_hcd
Nov  6 12:31:27 Tower kernel: usb 3-9.1.4.7: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
Nov  6 12:31:27 Tower kernel: usb 3-9.1.4.5: reset low-speed USB device number 6 using xhci_hcd
Nov  6 12:31:28 Tower kernel: usb 3-9.1.4.5: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
Nov  6 12:31:28 Tower kernel: usb 3-9.1.4.5: ep 0x82 - rounding interval to 1024 microframes, ep desc says 2040 microframes
Nov  6 12:31:28 Tower kernel: br0: port 2(vnet0) entered disabled state
Nov  6 12:31:28 Tower kernel: device vnet0 left promiscuous mode
Nov  6 12:31:28 Tower kernel: br0: port 2(vnet0) entered disabled state
Nov  6 12:31:28 Tower kernel: input: Logitech USB Keyboard as /devices/pci0000:00/0000:00:14.0/usb3/3-9/3-9.1/3-9.1.4/3-9.1.4.5/3-9.1.4.5:1.0/0003:046D:C31C.0007/input/input13
Nov  6 12:31:28 Tower kernel: hid-generic 0003:046D:C31C.0007: input,hidraw0: USB HID v1.10 Keyboard [Logitech USB Keyboard] on usb-0000:00:14.0-9.1.4.5/input0
Nov  6 12:31:28 Tower kernel: input: Logitech USB Keyboard as /devices/pci0000:00/0000:00:14.0/usb3/3-9/3-9.1/3-9.1.4/3-9.1.4.5/3-9.1.4.5:1.1/0003:046D:C31C.0008/input/input14
Nov  6 12:31:28 Tower kernel: hid-generic 0003:046D:C31C.0008: input,hidraw1: USB HID v1.10 Device [Logitech USB Keyboard] on usb-0000:00:14.0-9.1.4.5/input1
Nov  6 12:31:28 Tower kernel: input: Logitech USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:14.0/usb3/3-9/3-9.1/3-9.1.4/3-9.1.4.7/3-9.1.4.7:1.0/0003:046D:C01D.0009/input/input15
Nov  6 12:31:28 Tower kernel: hid-generic 0003:046D:C01D.0009: input,hidraw2: USB HID v1.10 Mouse [Logitech USB-PS/2 Optical Mouse] on usb-0000:00:14.0-9.1.4.7/input0
Nov  6 12:31:29 Tower kernel: vgaarb: device changed decodes: PCI:0000:09:00.0,olddecodes=io+mem,decodes=io+mem:owns=none
Nov  6 12:31:32 Tower root: Domain 50b73fd1-eaa0-77c8-2f16-5cad767f79d4 is being shutdown
Nov  6 12:31:32 Tower root: 
Nov  6 12:31:34 Tower kernel: br0: port 3(vnet1) entered disabled state
Nov  6 12:31:34 Tower kernel: device vnet1 left promiscuous mode
Nov  6 12:31:34 Tower kernel: br0: port 3(vnet1) entered disabled state
Nov  6 12:31:36 Tower kernel: vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=io+mem:owns=none
Nov  6 12:31:36 Tower root: Waiting on VMs to shutdown..
Nov  6 12:31:36 Tower root: Stopping libvirtd...
Nov  6 12:31:36 Tower dnsmasq[11771]: exiting on receipt of SIGTERM
Nov  6 12:31:36 Tower kernel: device virbr0-nic left promiscuous mode
Nov  6 12:31:36 Tower kernel: virbr0: port 1(virbr0-nic) entered disabled state
Nov  6 12:31:36 Tower avahi-daemon[3766]: Interface virbr0.IPv4 no longer relevant for mDNS.
Nov  6 12:31:36 Tower avahi-daemon[3766]: Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
Nov  6 12:31:36 Tower avahi-daemon[3766]: Withdrawing address record for 192.168.122.1 on virbr0.
Nov  6 12:31:36 Tower root: Network a4007147-6d28-4b27-8a73-0b1a1672c02b destroyed
Nov  6 12:31:36 Tower root: 
Nov  6 12:31:39 Tower emhttp: shcmd (1284): umount /etc/libvirt
Nov  6 12:31:39 Tower emhttp: shcmd (1286): /etc/rc.d/rc.docker stop |& logger
Nov  6 12:31:39 Tower root: stopping docker ...
Nov  6 12:31:43 Tower root: 18576e020f2e
Nov  6 12:31:45 Tower ntpd[1674]: Deleting interface #6 as0t0, 172.27.224.1#123, interface stats: received=0, sent=0, dropped=0, active_time=5677 secs
Nov  6 12:31:47 Tower root: 2d71f23078a7
Nov  6 12:31:50 Tower kernel: veth2603fef: renamed from eth0
Nov  6 12:31:50 Tower kernel: docker0: port 6(veth202afd5) entered disabled state
Nov  6 12:31:50 Tower kernel: docker0: port 6(veth202afd5) entered disabled state
Nov  6 12:31:50 Tower kernel: device veth202afd5 left promiscuous mode
Nov  6 12:31:50 Tower kernel: docker0: port 6(veth202afd5) entered disabled state
Nov  6 12:31:50 Tower root: 730d327aad7a
Nov  6 12:31:54 Tower kernel: veth5dc01ce: renamed from eth0
Nov  6 12:31:54 Tower kernel: docker0: port 5(veth41c2e41) entered disabled state
Nov  6 12:31:54 Tower kernel: docker0: port 5(veth41c2e41) entered disabled state
Nov  6 12:31:54 Tower kernel: device veth41c2e41 left promiscuous mode
Nov  6 12:31:54 Tower kernel: docker0: port 5(veth41c2e41) entered disabled state
Nov  6 12:31:54 Tower root: 07831e4d28b4
Nov  6 12:31:54 Tower kernel: vetha2c9752: renamed from eth0
Nov  6 12:31:54 Tower kernel: docker0: port 7(veth797030a) entered disabled state
Nov  6 12:31:54 Tower kernel: docker0: port 7(veth797030a) entered disabled state
Nov  6 12:31:54 Tower kernel: device veth797030a left promiscuous mode
Nov  6 12:31:54 Tower kernel: docker0: port 7(veth797030a) entered disabled state
Nov  6 12:31:54 Tower root: 8d7fba88a276
Nov  6 12:31:55 Tower kernel: veth813e985: renamed from eth0
Nov  6 12:31:55 Tower kernel: docker0: port 2(vethd4d9246) entered disabled state
Nov  6 12:31:55 Tower kernel: docker0: port 2(vethd4d9246) entered disabled state
Nov  6 12:31:55 Tower kernel: device vethd4d9246 left promiscuous mode
Nov  6 12:31:55 Tower kernel: docker0: port 2(vethd4d9246) entered disabled state
Nov  6 12:31:55 Tower root: 2018e5eb6489
Nov  6 12:31:59 Tower root: 1e752c6d3bbe
Nov  6 12:32:00 Tower kernel: vethc37c7d6: renamed from eth0
Nov  6 12:32:00 Tower kernel: docker0: port 1(vethde42fb8) entered disabled state
Nov  6 12:32:00 Tower kernel: docker0: port 1(vethde42fb8) entered disabled state
Nov  6 12:32:00 Tower kernel: device vethde42fb8 left promiscuous mode
Nov  6 12:32:00 Tower kernel: docker0: port 1(vethde42fb8) entered disabled state
Nov  6 12:32:00 Tower root: 4d1379a89045
Nov  6 12:32:01 Tower root: fb9e962fa926
Nov  6 12:32:01 Tower kernel: vethe222581: renamed from eth0
Nov  6 12:32:01 Tower kernel: docker0: port 4(veth7103e15) entered disabled state
Nov  6 12:32:01 Tower kernel: docker0: port 4(veth7103e15) entered disabled state
Nov  6 12:32:01 Tower kernel: device veth7103e15 left promiscuous mode
Nov  6 12:32:01 Tower kernel: docker0: port 4(veth7103e15) entered disabled state
Nov  6 12:32:01 Tower root: 2c736fef2ae9
Nov  6 12:32:01 Tower kernel: vethfea3cbb: renamed from eth0
Nov  6 12:32:01 Tower kernel: docker0: port 3(veth7b9d444) entered disabled state
Nov  6 12:32:01 Tower kernel: docker0: port 3(veth7b9d444) entered disabled state
Nov  6 12:32:01 Tower kernel: device veth7b9d444 left promiscuous mode
Nov  6 12:32:01 Tower kernel: docker0: port 3(veth7b9d444) entered disabled state
Nov  6 12:32:01 Tower root: df582c070cea
Nov  6 12:32:02 Tower avahi-daemon[3766]: Interface docker0.IPv4 no longer relevant for mDNS.
Nov  6 12:32:02 Tower avahi-daemon[3766]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Nov  6 12:32:02 Tower avahi-daemon[3766]: Withdrawing address record for 172.17.0.1 on docker0.
Nov  6 12:32:02 Tower emhttp: shcmd (1287): umount /var/lib/docker |& logger
Nov  6 12:32:02 Tower emhttp: shcmd (1288): /etc/rc.d/rc.samba stop |& logger
Nov  6 12:32:02 Tower emhttp: shcmd (1289): rm -f /etc/avahi/services/smb.service
Nov  6 12:32:02 Tower avahi-daemon[3766]: Files changed, reloading.
Nov  6 12:32:02 Tower avahi-daemon[3766]: Service group file /services/smb.service vanished, removing services.
Nov  6 12:32:02 Tower emhttp: shcmd (1292): /etc/rc.d/rc.avahidaemon stop |& logger
Nov  6 12:32:02 Tower root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
Nov  6 12:32:02 Tower avahi-daemon[3766]: Got SIGTERM, quitting.
Nov  6 12:32:02 Tower avahi-dnsconfd[3775]: read(): EOF
Nov  6 12:32:02 Tower avahi-daemon[3766]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.0.50.
Nov  6 12:32:02 Tower avahi-daemon[3766]: avahi-daemon 0.6.32 exiting.
Nov  6 12:32:02 Tower emhttp: shcmd (1293): /etc/rc.d/rc.avahidnsconfd stop |& logger
Nov  6 12:32:02 Tower root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
Nov  6 12:32:02 Tower emhttp: Sync filesystems...
Nov  6 12:32:02 Tower emhttp: shcmd (1294): sync
Nov  6 12:32:04 Tower ntpd[1674]: Deleting interface #5 docker0, 172.17.0.1#123, interface stats: received=0, sent=0, dropped=0, active_time=5707 secs
Nov  6 12:32:05 Tower emhttp: shcmd (1295): set -o pipefail ; umount /mnt/user |& logger
Nov  6 12:32:05 Tower emhttp: shcmd (1296): rmdir /mnt/user |& logger
Nov  6 12:32:05 Tower emhttp: shcmd (1297): set -o pipefail ; umount /mnt/user0 |& logger
Nov  6 12:32:05 Tower emhttp: shcmd (1298): rmdir /mnt/user0 |& logger
Nov  6 12:32:05 Tower emhttp: shcmd (1299): rm -f /boot/config/plugins/dynamix/mover.cron
Nov  6 12:32:05 Tower emhttp: shcmd (1300): /usr/local/sbin/update_cron &> /dev/null
Nov  6 12:32:05 Tower emhttp: Unmounting disks...
Nov  6 12:32:05 Tower emhttp: shcmd (1301): umount /mnt/disk1 |& logger
Nov  6 12:32:05 Tower kernel: XFS (md1): Unmounting Filesystem
Nov  6 12:32:05 Tower emhttp: shcmd (1302): rmdir /mnt/disk1 |& logger
Nov  6 12:32:05 Tower emhttp: shcmd (1303): umount /mnt/disk2 |& logger
Nov  6 12:32:06 Tower kernel: XFS (md2): Unmounting Filesystem
Nov  6 12:32:06 Tower emhttp: shcmd (1304): rmdir /mnt/disk2 |& logger
Nov  6 12:32:06 Tower emhttp: shcmd (1305): umount /mnt/disk3 |& logger
Nov  6 12:32:06 Tower kernel: XFS (md3): Unmounting Filesystem
Nov  6 12:32:06 Tower emhttp: shcmd (1306): rmdir /mnt/disk3 |& logger
Nov  6 12:32:06 Tower emhttp: shcmd (1307): umount /mnt/disk4 |& logger
Nov  6 12:32:06 Tower kernel: XFS (md4): Unmounting Filesystem
Nov  6 12:32:07 Tower emhttp: shcmd (1308): rmdir /mnt/disk4 |& logger
Nov  6 12:32:07 Tower emhttp: shcmd (1309): umount /mnt/disk5 |& logger
Nov  6 12:32:07 Tower kernel: XFS (md5): Unmounting Filesystem
Nov  6 12:32:07 Tower emhttp: shcmd (1310): rmdir /mnt/disk5 |& logger
Nov  6 12:32:07 Tower emhttp: shcmd (1311): umount /mnt/disk6 |& logger
Nov  6 12:32:07 Tower kernel: XFS (md6): Unmounting Filesystem
Nov  6 12:32:07 Tower emhttp: shcmd (1312): rmdir /mnt/disk6 |& logger
Nov  6 12:32:07 Tower emhttp: shcmd (1313): umount /mnt/cache |& logger
Nov  6 12:32:08 Tower emhttp: shcmd (1314): rmdir /mnt/cache |& logger
Nov  6 12:32:08 Tower kernel: mdcmd (56): stop 
Nov  6 12:32:08 Tower kernel: md1: stopping
Nov  6 12:32:08 Tower kernel: md2: stopping
Nov  6 12:32:08 Tower kernel: md3: stopping
Nov  6 12:32:08 Tower kernel: md4: stopping
Nov  6 12:32:08 Tower kernel: md5: stopping
Nov  6 12:32:08 Tower kernel: md6: stopping
Nov  6 12:32:09 Tower emhttp: shcmd (1315): rmmod md-mod |& logger
Nov  6 12:32:09 Tower kernel: md: unRAID driver removed
Nov  6 12:32:09 Tower emhttp: shcmd (1316): modprobe md-mod super=/boot/config/super.dat |& logger
Nov  6 12:32:09 Tower kernel: md: unRAID driver 2.6.8 installed
Nov  6 12:32:09 Tower emhttp: Pro key detected, GUID: 05DC-A701-0802-054128260907 FILE: /boot/config/Pro1.key
Nov  6 12:32:09 Tower emhttp: Device inventory:
Nov  6 12:32:09 Tower emhttp: shcmd (1317): udevadm settle
Nov  6 12:32:09 Tower emhttp: LEXAR_JD_FIREFLY_67CC0802054128260907-0:0 (sda) 1966048
Nov  6 12:32:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4NEUA5L20 (sdb) 2930266532
Nov  6 12:32:09 Tower emhttp: WDC_WD1003FZEX-00MK2A0_WD-WCC3F2VKVHYE (sdc) 976762552
Nov  6 12:32:09 Tower emhttp: WDC_WD20EFRX-68EUZN0_WD-WMC4M1062491 (sdd) 1953514552
Nov  6 12:32:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WMC4N0M6V0HC (sde) 2930266532
Nov  6 12:32:09 Tower emhttp: Samsung_SSD_850_EVO_500GB_S2RANX0H543209Z (sdf) 488386552
Nov  6 12:32:09 Tower emhttp: WDC_WD1001FALS-00J7B0_WD-WMATV0360335 (sdg) 976762552
Nov  6 12:32:09 Tower emhttp: WDC_WD20EARX-008FB0_WD-WCAZAF142676 (sdh) 1953514552
Nov  6 12:32:09 Tower emhttp: ST3000VN000-1HJ166_W6A0WEG0 (sdi) 2930266532
Nov  6 12:32:09 Tower emhttp: WDC_WD10EARX-00N0YB0_WD-WMC0T1042974 (sdj) 976762552

tower-diagnostics-20161106-1238.zip

Link to comment

Everything I see above and in the diagnostics looks completely normal (unless I missed something).  Did you not stop the array at 12:31:08?  The VMs were then correctly sent a SIGTERM (signal 15) between 12:31:18 and 12:31:36, which explains the 'terminating on signal 15' message you saw.
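
As a side note, signal 15 is SIGTERM, the standard "please exit cleanly" request. A minimal Python sketch (purely illustrative, nothing unRAID actually ships) showing the number-to-name mapping and how a long-running process like QEMU typically handles it:

```python
import signal

# Signal 15 is SIGTERM: the polite shutdown request libvirt sends to each
# qemu-system-x86_64 process when the array is stopped.
print(signal.Signals(15).name)  # -> "SIGTERM"

# A long-running process usually installs a handler so it can exit
# gracefully instead of being killed outright (SIGKILL, signal 9).
def on_terminate(signum, frame):
    print(f"terminating on signal {signum}")

signal.signal(signal.SIGTERM, on_terminate)
```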

 

It doesn't explain your mention of random crashes, but there's no evidence of crashes here that I can see.

Link to comment

Rob,

 

I did not stop the array.  I opened Chrome in the VM, and while I was typing a Google search, the VM screen went blank.  I went to the Tower webGUI on my phone and the array was stopped.  I then restarted the array.  I never stopped the array; as a matter of fact, I didn't even have the webGUI loaded.

 

I also have a diagnostic that I posted in the 6.2.3 release thread: this happened on the reboot after installing 6.2.3, and I repeated this process of starting the array 2 or 3 times.

 

I did recently increase the RAM in my unRAID box from 16 to 32 GB, which is why I said I am running memtest.  It is still running: 8 hours in and no errors.

Link to comment

As another example: I rebooted the server, and the VM autostarted and immediately crashed.  By crashed I mean it started, I could see the desktop, and then it immediately said it was shutting down.  I then went to the webGUI on my phone and the array was stopped.

 

I am including the diagnostics; they should be minimal since this was just a boot-up and autostart.

 

As a follow-up on memtest: it ran for over 24 hours with no errors.

 

Dan

tower-diagnostics-20161107-1400.zip

Link to comment

I did not stop the array.  I opened Chrome in the VM, and while I was typing a Google search, the VM screen went blank.  I went to the Tower webGUI on my phone and the array was stopped.  I then restarted the array.  I never stopped the array; as a matter of fact, I didn't even have the webGUI loaded.

 

I also have a diagnostic that I posted in the 6.2.3 release thread: this happened on the reboot after installing 6.2.3, and I repeated this process of starting the array 2 or 3 times.

 

It is very clear in your syslog that a 'stop array' event was issued, so this is not a memory issue (in this case).  Something is issuing 'stop array' events, which result in stopping the array and all containers and VMs.  Your last post's example of a VM crash doesn't sound like a crash either, just that 'something' almost immediately sent a stop-array signal.

 

I looked to see if you had installed the new Ransomware plugin, because that's the only add-on I know of that issues 'stop array' events.  But you don't have it, so I'm out of ideas for now...  Try running in Safe Mode, just to eliminate plugin effects.

Link to comment

RobJ,

 

Thanks for sticking with me on this.  As you can imagine, going from having no issues to a system that does not work is frustrating.

 

I updated to 6.2.4; no change.

Booted in Safe Mode: same thing, the stop command is issued and the VM shuts down immediately.

 

Attached is a diagnostic for 6.2.4 booted in Safe Mode.  For reference, after the VM stopped, I did start the array back up.

 

I know the old powerdown plugin issued a command when you hit/held the power button, but I no longer have that plugin since it was deprecated with 6.2.  Is there anything else that could cause the shutdown command?  I will have to search for this ransomware plugin, as I have never heard of it.

 

Any other suggestions?

 

Again, thanks for staying with me on this.

 

Dan

 


tower-diagnostics-20161107-1749.zip

Link to comment

Maybe, just maybe, this calls for a special one-off release for you, one that logs far more detail about special events like something shutting the array down: the exact cause of the shutdown, the interface that actually triggered it, the IP address of any machine commanding it, and so on.
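
If such a build existed, the useful part would just be richer logging around the stop-array path. A rough Python sketch of the idea (hypothetical; the real webGui is not written in Python, and the field names here are invented for illustration):

```python
import logging

# Hypothetical event logger: record who asked for the array to stop and from
# where. A real build would presumably write this into the syslog instead.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def log_stop_array_request(client_ip, user_agent, referer):
    """Record the origin of a stop-array request for later forensics."""
    logging.info("stop-array requested: ip=%s agent=%r referer=%r",
                 client_ip, user_agent, referer)

# Example call with made-up values:
log_stop_array_request("192.168.0.123", "Mozilla/5.0 ...", "/Main")
```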

Link to comment

Possibly Resolved...please let me know if this is a plausible explanation.

 

I realized that the VM was running fine when I ran other software.  The VM shut down when Chrome was opened and I started browsing.  I noticed that I had a second unRAID tab open; right after opening Chrome, I clicked this tab, and it showed the unRAID "system is going down for a reboot" message (my text may not be exact).  This was the tab that had been open from when I rebooted unRAID for the 6.2.3 update.

 

My Chrome settings in the "On Startup" section are set to "Continue where you left off".  I am thinking this may have been sending the stop-array signal?  Not sure, but now that I have closed all tabs, my VMs have been running for 30 minutes or so.

 

I hope this makes sense.  I will come back and mark this thread as solved if my system stays up for 24 hours.

 

Fingers crossed!

 

Dan

 

 

Link to comment

Yes, I had the very same issue when I mistakenly refreshed a tab that was still sitting on the shutdown page, rather than letting it reload itself after the server came back up. It immediately loaded the authenticated shutdown script, which then tried to shut down the array and the machine.

 

It does sound quite plausible that you had invoked the shutdown page from within the VM, and that it then re-triggered a shutdown on every restore of your Chrome session.

 

Future suggestion for anyone listening: do not command your unRAID server to shut down from within a VM until a fix for this issue has been released.

 

Suggested fix for the unRAID developers: for shutdown/reboot, and any other web interface page that performs an action like this and then sits there as a status page, do the following:

 

Option 1) Tie a random session key to the URL that commands the shutdown. It can remain compatible with scripts that command it externally by accepting a missing session-key GET parameter as valid, while treating a wrong session key as an error and refusing to perform the action. The web interface itself always passes a session key, so cached instances of the tab do not inadvertently reboot the machine again when they are reloaded.
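
A minimal sketch of that check in Python (hypothetical; unRAID's actual webGui works differently and these names are made up): the handler keeps acting on requests that carry no key at all, but refuses a request carrying a stale or wrong key, which is exactly what a restored browser tab would send.

```python
import secrets

SESSION_KEY = secrets.token_hex(16)  # generated when the webGui session starts

def stop_array():
    print("stopping array (stub for the real action)")

def handle_stop_request(params):
    """params: dict of GET parameters sent with the stop-array URL."""
    key = params.get("session_key")
    if key is not None and key != SESSION_KEY:
        # A tab restored from an old session carries the previous key:
        # treat it as an error instead of stopping the array again.
        return "403 stale or invalid session key"
    # No key (external script) or the current key (live webGui page): proceed.
    stop_array()
    return "200 array stopping"
```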

 

Option 2) Have the URLs that handle these kinds of actions immediately redirect to another page, combined with using a POST method to trigger the originating script. That way the browser caches the target of the redirect as the current tab contents, and any attempt to navigate back to the action page should also ask the user whether they want to resubmit. Combine this with the above, using a different session key each time a command is invoked, and you end up with a script that won't re-activate if the same form contents are resubmitted to it. Keep it as a separate API from anything that needs an immediate GET to simply fire it off.
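
And a sketch of the Post/Redirect/Get half, using Flask purely for illustration (unRAID's webGui does not use Flask, and the route names are invented): the action only runs on a POST, and the browser is immediately redirected, so the page that gets cached and later restored is the harmless status page rather than the action URL.

```python
from flask import Flask, redirect, request, url_for

app = Flask(__name__)

@app.route("/status")
def status():
    return "Array status page"  # safe to refresh or restore at any time

@app.route("/stop-array", methods=["POST"])
def stop_array():
    print(f"stop requested by {request.remote_addr}")  # audit trail
    # ... perform the actual stop here ...
    # 303 See Other tells the browser to follow up with a GET to /status,
    # so a session restore or back-navigation never silently replays the POST.
    return redirect(url_for("status"), code=303)
```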

Link to comment

You should file a defect report, or else LT won't notice it.

 

Regarding the issue: I have not had this problem shutting down or restarting unRAID from inside my VM with Chrome and "Continue where you left off" active.

 

As kode54 says, it might be that your tab refreshed for some reason; it might be a setting in Chrome.

Link to comment

Possibly Resolved...please let me know if this is a plausible explanation.

 

I agree with kode54; this sounds very plausible.  Long ago, possibly during the v4 days, we had the same issue: shutting down the system under specific circumstances and browser configurations, and leaving the window open, could cause the same thing - shutting down the server unexpectedly when that window was next used (refreshed?).  That was fixed at the time, and the solution for this one is probably similar.  I can't remember exactly, but it may have been a GET vs. POST thing, plus changing the last state of the page so it can't repeat.

 

By the way, kode54 is becoming very useful around here!  I think we should keep him!  ;)

Link to comment

Sorry, I'm taken!

 

I kid, but thanks. I've been around other places handling technical issues, like Hydrogen Audio (foobar2000, the rest of the forum...) and also some console emulation communities. I've been an avid user of Windows, macOS, and Linux, for many years now, and I seem to be able to pick up information quickly. I also try to resolve questions quickly, but sometimes I take for granted that not a whole lot of people have an eidetic memory. I haven't been tested for this, but I can easily recall the most obscure details from everything including various media with little triggering, which kind of spoils re-reading/re-watching for me.

 

I only just joined and started using this software, and already feel fortunate to have met so many interesting people, and helped a few of them. I also count myself fortunate to have not run into any of the issues I see regularly, while still running into a few of my own. (It was fun using 7za and cpio to extract files from the bzroot to restore permissions on a clobbered initrd without having to reboot the machine. I also had fun yesterday when I accidentally attached my unRAID flash drive to my Windows 10 VM instead of the same-vendor same-model same-capacity USB drive that was plugged into a front panel port. I managed to attach the correct drive the second time, but had to reboot the machine to remount the USB properly. Hmm, I probably could have remounted that by hand without rebooting, if I had felt like it...)

Link to comment
