unRAID Server Release 6.2.0-beta21 Available



Connect a monitor and keyboard, boot in GUI mode, and see if that locks up.  If it doesn't, it isn't a GUI issue and is probably a networking issue.

 

EDIT: From the diagnostics, it looks like disk6 has a problem.  You are getting kernel panics when that disk mounts; the file system appears to be corrupt.

 

May  8 21:46:16 media emhttp: shcmd (723): mkdir -p /mnt/disk6
May  8 21:46:16 media emhttp: shcmd (724): set -o pipefail ; mount -t auto -o noatime,nodiratime /dev/md6 /mnt/disk6 |& logger
May  8 21:46:16 media kernel: XFS (md6): Mounting V5 Filesystem
May  8 21:46:16 media kernel: XFS (md6): Starting recovery (logdev: internal)
May  8 21:46:16 media kernel: XFS (md6): _xfs_buf_find: Block out of range: block 0x8e8e05f28, EOFS 0x1d1c0be48 
May  8 21:46:16 media kernel: ------------[ cut here ]------------
May  8 21:46:16 media kernel: WARNING: CPU: 1 PID: 7528 at fs/xfs/xfs_buf.c:472 _xfs_buf_find+0x7f/0x28c()
May  8 21:46:16 media kernel: Modules linked in: md_mod x86_pkg_temp_thermal igb coretemp i2c_i801 kvm_intel kvm e1000e mvsas ptp ahci libsas libahci i2c_algo_bit pps_core scsi_transport_sas [last unloaded: md_mod]
May  8 21:46:16 media kernel: CPU: 1 PID: 7528 Comm: mount Not tainted 4.4.6-unRAID #1
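
For reference, this is roughly the repair sequence for a damaged XFS disk in this situation (a sketch; the device name is taken from the log above, and it should only be run with the array started in Maintenance mode):

# Check the file system first without modifying anything:
xfs_repair -n /dev/md6
# If problems are reported, run the actual repair:
xfs_repair /dev/md6
# Only if it refuses to run because of a dirty log, zero the log as a
# last resort (this can discard the most recent in-flight transactions):
xfs_repair -L /dev/md6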

 

Yes, I can get to the GUI now remotely (once I disabled plugins and set the array and Dockers not to autostart)... I can't seem to get the full "GUI Mode" to work at all from the physical server with a monitor/keyboard/mouse connected; it's not even an option when I boot from my flash drive.  This entry is in my flash settings on the Syslinux Configuration tab, but no such option appears at startup (on the monitor connected to the server):

label unRAID OS GUI Mode
  menu default
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui

 

*EDIT* I was able to hit Tab and append ",/bzroot-gui" to the main unRAID entry and the GUI started... odd that it didn't show up in the list as it should.
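
Reconstructed from the syslinux entry above, the tab-edited boot line would have looked something like this (a sketch, not a capture):

bzimage initrd=/bzroot,/bzroot-gui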

 

I also ran an xfs_repair -L on disk6 after this log was created this morning.  Attached is a current diagnostics zip from the remote GUI, with the array started in Maintenance mode only... I assume if I try to mount the array normally it will hang again.

 

The issue now is that when I start the array, everything grinds to a halt again and the GUI becomes unresponsive... this is the last bit of the latest diagnostics zip, and it seems to be happening dozens of times per second:

 

May  9 10:45:24 media kernel: swapper/0: page allocation failure: order:0, mode:0x2080020
May  9 10:45:24 media kernel: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.4.6-unRAID #1
May  9 10:45:24 media kernel: Hardware name: Supermicro X10SAE/X10SAE, BIOS 3.0 05/20/2015
May  9 10:45:24 media kernel: 0000000000000000 ffff88041dc03c28 ffffffff813688da 0000000000000000
May  9 10:45:24 media kernel: 0000000000000000 ffff88041dc03cc0 ffffffff810bc9b0 ffffffff818b0e38
May  9 10:45:24 media kernel: ffff88041dff9b00 ffffffffffffffff ffffffff008b0680 0000000000000000
May  9 10:45:24 media kernel: Call Trace:
May  9 10:45:24 media kernel: <IRQ>  [<ffffffff813688da>] dump_stack+0x61/0x7e
May  9 10:45:24 media kernel: [<ffffffff810bc9b0>] warn_alloc_failed+0x10f/0x127
May  9 10:45:24 media kernel: [<ffffffff810bf9c7>] __alloc_pages_nodemask+0x870/0x8ca
May  9 10:45:24 media kernel: [<ffffffff814333a9>] ? device_has_rmrr+0x5a/0x63
May  9 10:45:24 media kernel: [<ffffffff810bfabd>] __alloc_page_frag+0x9c/0x15f
May  9 10:45:24 media kernel: [<ffffffff8152e310>] __napi_alloc_skb+0x61/0xc1
May  9 10:45:24 media kernel: [<ffffffffa053e92a>] igb_poll+0x441/0xc06 [igb]
May  9 10:45:24 media kernel: [<ffffffff815390ac>] net_rx_action+0xd8/0x226
May  9 10:45:24 media kernel: [<ffffffff8104d4c0>] __do_softirq+0xc3/0x1b6
May  9 10:45:24 media kernel: [<ffffffff8104d73d>] irq_exit+0x3d/0x82
May  9 10:45:24 media kernel: [<ffffffff8100db9a>] do_IRQ+0xaa/0xc2
May  9 10:45:24 media kernel: [<ffffffff8161ab42>] common_interrupt+0x82/0x82
May  9 10:45:24 media kernel: <EOI>  [<ffffffff815041b7>] ? cpuidle_enter_state+0xf0/0x148
May  9 10:45:24 media kernel: [<ffffffff81504170>] ? cpuidle_enter_state+0xa9/0x148
May  9 10:45:24 media kernel: [<ffffffff81504231>] cpuidle_enter+0x12/0x14
May  9 10:45:24 media kernel: [<ffffffff81076247>] call_cpuidle+0x4e/0x50
May  9 10:45:24 media kernel: [<ffffffff810763cf>] cpu_startup_entry+0x186/0x1fd
May  9 10:45:24 media kernel: [<ffffffff8160fbdd>] rest_init+0x84/0x87
May  9 10:45:24 media kernel: [<ffffffff818eaec0>] start_kernel+0x3f7/0x404
May  9 10:45:24 media kernel: [<ffffffff818ea120>] ? early_idt_handler_array+0x120/0x120
May  9 10:45:24 media kernel: [<ffffffff818ea339>] x86_64_start_reservations+0x2a/0x2c
May  9 10:45:24 media kernel: [<ffffffff818ea421>] x86_64_start_kernel+0xe6/0xf3
May  9 10:45:24 media kernel: Mem-Info:
May  9 10:45:24 media kernel: active_anon:468687 inactive_anon:4711 isolated_anon:0
May  9 10:45:24 media kernel: active_file:443016 inactive_file:3009187 isolated_file:32
May  9 10:45:24 media kernel: unevictable:0 dirty:64349 writeback:152019 unstable:0
May  9 10:45:24 media kernel: slab_reclaimable:51705 slab_unreclaimable:30682
May  9 10:45:24 media kernel: mapped:51722 shmem:85744 pagetables:5236 bounce:0
May  9 10:45:24 media kernel: free:17874 free_pcp:104 free_cma:0
May  9 10:45:24 media kernel: Node 0 DMA free:15580kB min:12kB low:12kB high:16kB active_anon:304kB inactive_anon:16kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15984kB managed:15900kB mlocked:0kB dirty:0kB writeback:0kB mapped:32kB shmem:320kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
May  9 10:45:24 media kernel: lowmem_reserve[]: 0 3512 16022 16022
May  9 10:45:24 media kernel: Node 0 DMA32 free:51276kB min:3524kB low:4404kB high:5284kB active_anon:572120kB inactive_anon:3316kB active_file:391584kB inactive_file:2440188kB unevictable:0kB isolated(anon):0kB isolated(file):128kB present:3607096kB managed:3597428kB mlocked:0kB dirty:61208kB writeback:129236kB mapped:48616kB shmem:74916kB slab_reclaimable:44168kB slab_unreclaimable:26384kB kernel_stack:3376kB pagetables:5800kB unstable:0kB bounce:0kB free_pcp:144kB local_pcp:120kB free_cma:0kB writeback_tmp:0kB pages_scanned:44 all_unreclaimable? no
May  9 10:45:24 media kernel: lowmem_reserve[]: 0 0 12510 12510
May  9 10:45:24 media kernel: Node 0 Normal free:4640kB min:12564kB low:15704kB high:18844kB active_anon:1302324kB inactive_anon:15512kB active_file:1380480kB inactive_file:9596560kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13074432kB managed:12810880kB mlocked:0kB dirty:196188kB writeback:478840kB mapped:158240kB shmem:267740kB slab_reclaimable:162652kB slab_unreclaimable:96344kB kernel_stack:11968kB pagetables:15144kB unstable:0kB bounce:0kB free_pcp:272kB local_pcp:140kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
May  9 10:45:24 media kernel: lowmem_reserve[]: 0 0 0 0
May  9 10:45:24 media kernel: Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 0*32kB 3*64kB (UM) 2*128kB (UM) 1*256kB (U) 1*512kB (M) 2*1024kB (UM) 2*2048kB (UM) 2*4096kB (M) = 15580kB
May  9 10:45:24 media kernel: Node 0 DMA32: 499*4kB (ME) 306*8kB (UME) 807*16kB (UME) 358*32kB (UME) 93*64kB (UME) 23*128kB (UME) 7*256kB (ME) 1*512kB (E) 7*1024kB (M) 2*2048kB (M) 0*4096kB = 51276kB
May  9 10:45:24 media kernel: Node 0 Normal: 324*4kB (M) 140*8kB (UME) 89*16kB (UME) 35*32kB (M) 3*64kB (M) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5152kB
May  9 10:45:24 media kernel: 3537970 total pagecache pages
May  9 10:45:24 media kernel: 0 pages in swap cache
May  9 10:45:24 media kernel: Swap cache stats: add 0, delete 0, find 0/0
May  9 10:45:24 media kernel: Free swap  = 0kB
May  9 10:45:24 media kernel: Total swap = 0kB
May  9 10:45:24 media kernel: 4174378 pages RAM
May  9 10:45:24 media kernel: 0 pages HighMem/MovableOnly
May  9 10:45:24 media kernel: 68326 pages reserved
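
That trace is an atomic (order:0) allocation failure in the igb network receive path while the page cache is saturated with dirty/writeback pages from the rebuild. A common mitigation (an assumption on my part, not something this thread confirms) is to raise the kernel's free-memory floor so atomic allocations have headroom:

# Reserve more memory for atomic allocations (the value is a judgment call):
sysctl -w vm.min_free_kbytes=131072
# Optionally start writeback earlier so dirty pages don't pile up:
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10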

 

Well, I started the array and it's doing a data rebuild on disk6 now; it has been running about an hour and is about 9% done... we'll see what happens, but at least it's a step in the right direction.


So I updated from 6.1.7 to 6.1.9 and my Windows VM stopped booting.

So I thought I might as well upgrade to 6.2.0-beta21 and see what happens.

Well, it got worse. When my VM freezes at the Windows loading screen, the rest of the NAS just... dies. I can't access the flash disk etc. and I have to reboot to get it back.

 

Here's my XML:

 

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Windows</name>
  <uuid>50d3fa99-2a1c-88cf-21a3-d58d88d08c39</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/virtualmachines/Windows/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:48:8f:00'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Windows.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x0800'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
  </qemu:commandline>
</domain>
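
If you're hand-editing XML like this, it can be safer to round-trip it through virsh, which validates the definition (a sketch; the domain name matches the <name> element above):

# Back up the current definition:
virsh dumpxml Windows > /tmp/Windows.xml
# After editing, redefine the domain; virsh rejects malformed XML:
virsh define /tmp/Windows.xml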

 

If I edit the XML after it's crashed, this appears at the top of the page:

 

Warning: file_put_contents(/boot/config/domain.cfg): failed to open stream: Input/output error in /usr/local/emhttp/plugins/dynamix.vm.manager/classes/libvirt_helpers.php on line 375

 

And the log just has this:

 

May 10 23:44:53 NAS shfs/user: shfs_read: read: (5) Input/output error
May 10 23:44:53 NAS shfs/user: shfs_read: read: (5) Input/output error
May 10 23:44:53 NAS shfs/user: shfs_read: read: (5) Input/output error
May 10 23:44:53 NAS shfs/user: shfs_read: read: (5) Input/output error

 

More log:

 

May 11 00:26:56 NAS kernel: ata3.00: failed command: WRITE FPDMA QUEUED
May 11 00:26:56 NAS kernel: ata3.00: cmd 61/18:e0:40:6e:cc/00:00:08:00:00/40 tag 28 ncq 12288 out
May 11 00:26:56 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:26:56 NAS kernel: ata3.00: status: { DRDY }
May 11 00:26:56 NAS kernel: ata3.00: failed command: WRITE FPDMA QUEUED
May 11 00:26:56 NAS kernel: ata3.00: cmd 61/48:e8:78:6e:cc/00:00:08:00:00/40 tag 29 ncq 36864 out
May 11 00:26:56 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:26:56 NAS kernel: ata3.00: status: { DRDY }
May 11 00:26:56 NAS kernel: ata3.00: failed command: WRITE FPDMA QUEUED
May 11 00:26:56 NAS kernel: ata3.00: cmd 61/18:f0:c0:6e:cc/00:00:08:00:00/40 tag 30 ncq 12288 out
May 11 00:26:56 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:26:56 NAS kernel: ata3.00: status: { DRDY }
May 11 00:26:56 NAS kernel: ata3: hard resetting link
May 11 00:26:56 NAS kernel: ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:26:56 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:27:01 NAS kernel: ata4.00: qc timeout (cmd 0xec)
May 11 00:27:01 NAS kernel: ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4)
May 11 00:27:01 NAS kernel: ata4.00: revalidation failed (errno=-5)
May 11 00:27:01 NAS kernel: ata4: hard resetting link
May 11 00:27:01 NAS kernel: ata3.00: qc timeout (cmd 0xec)
May 11 00:27:01 NAS kernel: ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
May 11 00:27:01 NAS kernel: ata3.00: revalidation failed (errno=-5)
May 11 00:27:01 NAS kernel: ata3: hard resetting link
May 11 00:27:02 NAS kernel: ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:27:02 NAS kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:27:12 NAS kernel: ata4.00: qc timeout (cmd 0xec)
May 11 00:27:12 NAS kernel: ata4.00: failed to IDENTIFY (I/O error, err_mask=0x4)
May 11 00:27:12 NAS kernel: ata4.00: revalidation failed (errno=-5)
May 11 00:27:12 NAS kernel: ata4: limiting SATA link speed to 3.0 Gbps
May 11 00:27:12 NAS kernel: ata4: hard resetting link
May 11 00:27:12 NAS kernel: ata3.00: qc timeout (cmd 0xec)
May 11 00:27:12 NAS kernel: ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
May 11 00:27:12 NAS kernel: ata3.00: revalidation failed (errno=-5)
May 11 00:27:12 NAS kernel: ata3: limiting SATA link speed to 3.0 Gbps
May 11 00:27:12 NAS kernel: ata3: hard resetting link
May 11 00:27:12 NAS kernel: ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
May 11 00:27:12 NAS kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
May 11 00:27:13 NAS kernel: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
May 11 00:27:13 NAS kernel: ata2.00: failed command: READ DMA EXT
May 11 00:27:13 NAS kernel: ata2.00: cmd 25/00:08:18:29:54/00:00:57:00:00/e0 tag 21 dma 4096 in
May 11 00:27:13 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:27:13 NAS kernel: ata2.00: status: { DRDY }
May 11 00:27:13 NAS kernel: ata2: hard resetting link
May 11 00:27:13 NAS kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
May 11 00:27:13 NAS kernel: ata1.00: failed command: READ DMA EXT
May 11 00:27:13 NAS kernel: ata1.00: cmd 25/00:08:18:29:54/00:00:57:00:00/e0 tag 17 dma 4096 in
May 11 00:27:13 NAS kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 11 00:27:13 NAS kernel: ata1.00: status: { DRDY }
May 11 00:27:13 NAS kernel: ata1: hard resetting link
May 11 00:27:13 NAS kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
May 11 00:27:13 NAS kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

(quoting the ATA timeout log above)

 

Looks like a cable or controller problem.

 

(quoting the same ATA timeout log)

 

It looks like a controller problem.  All 4 drives attached to ata1 through ata4 have stopped responding, yet are maintaining good SATA links.  Since it's extremely improbable that all 4 drives crashed simultaneously, it has to be the controller common to them.
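
One way to confirm they share a controller is to resolve each ATA port to its PCI device through sysfs (a sketch, using the port numbers from the log):

# Each ata_port's device link resolves into the PCI path of its controller:
for p in ata1 ata2 ata3 ata4; do
  echo "$p -> $(readlink -f /sys/class/ata_port/$p/device)"
done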

 

Since this appears to be a support issue, please start a topic in General Support (V6) and include your complete Diagnostics zip.


Been using it for a week, but I have an odd problem:

 

All of my Dockers say they have an update available, but when I go to update them they don't connect and ultimately fail. The Docker containers themselves have full network connectivity, and when I SSH to the unRAID host I can ping github etc. just fine. Any thoughts? Using mostly linuxserver.io Docker images.

 

EDIT: It appears the Docker engine does not like jumbo frames. I have 2 NICs bonded, and the NIC interfaces and the bond0 interface all have MTU 9000 set. This appears to be the bug.

 

I'm seeing the same issue: the virtual interfaces won't take jumbo frames even when all ethX interfaces and bond0 are at 9000, causing new container installs or updates of older containers to fail.  I also noticed that the virbr0 for VMs is stuck at 1500.  I tried changing the rc.d daemon settings to force --mtu=9000, still no joy; ifconfig did show the change on the virtual interfaces, but adding containers still failed.  I had to revert to 1500 frames to get it working again.
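
A quick way to see which interfaces actually carry the jumbo MTU and which are stuck at 1500 (a sketch; bond0 is the interface being reverted above):

# Print the MTU of every interface, including docker0 and virbr0:
for i in /sys/class/net/*; do
  echo "$(basename $i): $(cat $i/mtu)"
done
# The workaround described above, dropping back to standard frames:
ip link set bond0 mtu 1500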

 


Not sure if this is related to running the beta or not.

 

Came home yesterday and all of my drives said they were missing. Rebooted the machine and they all came back. Left it running overnight and then switched it off again this morning.

 

When I came home this evening, started it up again and once again all of the disks were missing. Rebooted, and all but one came back (disk 7 was missing). Rebooted again and all the drives were back again. A few minutes after starting the array I received the message that Disk 7 returned to normal operation (sdi).

 

There may have been a power cut yesterday (I can't be 100% certain, as the server was running and the uptime on the web GUI said it had been up for 7 days).

 

Anyone got any ideas what the issue might be?

 

My thinking was that if there was a power cut it may have borked some hardware - the motherboard has an onboard LSI controller in IT mode and the Supermicro case has a SAS expander backplane - either of which may have been affected.

 

I've run a short test against the hard drives (with no issues reported), but my feeling is that it isn't related to the hard drives.

 

I've attached the diagnostics from the most recent reboots.

 

Any help appreciated.

unraid-diagnostics-20160512-2106.zip

unraid-diagnostics-20160512-2116.zip


(quoting the missing-drives post above)

Thank you for the diagnostics for both cases, drives found and drives missing.  There's only one difference between the two cases, and that is an error returned by the LSI card (or the expander) with an ioc_status of 'scsi data underrun'.  An online search found nothing useful.  In both cases there are 11 drives and 2 enclosures.  In the good case, the 11 drives are registered first, then the 2 enclosures.  In the error case, only the 2 enclosures are registered, and each reports "ioc_status(scsi data underrun)".  Then 11 devices are found, but instead of their drive names and identities the message "attempting task abort!" appears - not an encouraging message!

 

I can only blame this on the card or expander at this point.  The only advice I have, therefore, is to check for firmware updates, and possibly a BIOS update.

 

Because this is almost certainly a support issue, it might be best to continue it in the General Support board.


Been running 6.2.0-beta21 for a week now.

 

All working well. The dual parity upgrade was easy. NVMe pool working just fine. No crashes. Dockers / VMs working well.

 

The only change, as I have posted before, is the reported CPU speeds: in 6.1.9 the CPU showed a max speed of 3.6 GHz, now the reported max speed is 3.8 GHz (my CPU turbos up to 4.0 GHz). The speeds also display differently, e.g. 3758 MHz instead of 3800 MHz.

I never see it hit 4000 MHz (or go above 3800 MHz).

 



OK, so I am having another weird issue appear. I am trying to transfer files from an unRAID share to a USB drive connected to one of my VMs; the USB drive is attached via a PCIe USB controller that is passed through directly.

 

After transferring around 200 GB, all of Samba locks up and becomes inaccessible - I lose access to the unRAID GUI as well, but everything else works: VMs, SSH, etc.

I managed to get powerdown to run, which fetched the log, but it did not actually manage to shut down the system; it just sat there.

 

I have checked my logs and can see nothing wrong or reported incorrectly. I have tried this transfer using both of my VMs on the system, but it seems to fail on both after 200 GB.

Can anyone see any issues in my logs? At the moment everything seems to be working solidly except the actual NAS part (I can write as much as I like but get issues with reads).

 

I am unsure if this is also occurring because my VMs seem to be leaking memory heavily: on a fresh boot my system has 9 GB of memory left; after 7 days a 12 GB VM jumps to 19 GB of memory usage and the system has less than 1 GB free.

The issue I am having at present can happen after an hour, however.

 

Regards,

Jamie

 

Edit: This issue does seem to be beta-related, as I was able to transfer over 4 TB of data to backup USB drives on 6.1.9.

archangel-diagnostics-20160515-1642.zip

(quoting the USB transfer post above)

 

Ah, sounds like the dreaded unexplainable hang that has been reported several times already in this thread.

It seems dAigo and myself have had it, and now you (and a few others).

The devs unfortunately cannot reproduce the issue, so it's pretty much an either-you-have-it-or-you-don't kind of situation. :(

 

Next time it hangs, can you try pinging your server from another PC?

What I have found in my case is that even though everything appears unresponsive, the server still returns pings (and things that run without querying the array, e.g. the top command in a console, continue to run without any issue). It appears to be array-related.

 

Also, would you mind sharing your configuration?

(quoting the reply above)

 

It seems so. My config should be in the diagnostics, I think?

I can ping, SSH, and get htop; it's almost as if the array just stops dead - Dockers etc. stop, but my VMs, which are on the array (cache drive), still work as normal.

(quoting the USB transfer post above)

Stop Array.  Go to Settings --> Disk Settings and change the num_stripes tunable from 1028 to 8192.  Save.  Start Array.

 

That should work around the deadlock / unresponsive web UI issues during heavy IO for now.
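
If you want to verify the change took effect, the md driver tunables can be read from a console (assuming unRAID's mdcmd helper is present, as on earlier 6.x releases):

# Show the current stripe-buffer setting, before and after the change:
mdcmd status | grep -i stripes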


Possible minor UI bug...? 

 

I just completed a drive rebuild on my backup server last night (running 6.2.0-beta21).  The UI reported this as a successfully completed parity check (which it clearly isn't).  It would probably be good if the parity status report said something like "disk rebuild completed, parity check recommended" after such a rebuild.

 

Thanks.

(quoting the CPU speed post above)

Disable the intel_pstate driver.


 

Just in case you don't know how to do that ;)

Alter your syslinux setup (go to the Main tab in unRAID, click on the Flash device, and scroll down) and add "intel_pstate=disable" to the append line of the first block of text, like this:

...
label unRAID OS
  menu default
  kernel /bzimage
  append intel_pstate=disable initrd=/bzroot
...
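
After rebooting, you can confirm which frequency driver is active (standard sysfs paths; acpi-cpufreq typically takes over, though that's an assumption):

# Should no longer report intel_pstate:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
# Current governor for the same core:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor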


Thanks, what does this do?

I did a little research - it won't make a difference? It's just the way it's reported? Does this happen to everyone upgrading to 6.2? (New to Linux, still learning.)

edit: Doing some more research, I think this will solve my issue. Still interested in what it does, but it seems to be a driver thing with control over the CPU? I upgraded my server from a Q6600 to this machine and then upgraded to 6.2 - that might have changed the driver?

 

Cheers

 

 


Thanks, that seems to have worked; however, it won't go past 3.6 GHz (won't turbo). Since this is not a beta issue I will post elsewhere, but I'm curious whether the beta version updated the driver - is the fact that I moved from a Q6600 to this machine before the beta why I never encountered this on 6.1.9?


 

Just run this:

 

cat /proc/cpuinfo |egrep -i mhz

 

SSH in, or use a monitor and keyboard hooked up to your unRAID server :)

 

It'll give you a readout of frequencies that the raw system is reading :)

Make sure you're doing something to push the system too :P Spin up a VM or something so the CPU is actually doing something :)

 

I.e. if you're just idling, the CPU will never need to hit 4GHz ;)
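
Something like this keeps a core busy while you watch the clocks (a sketch; any CPU-bound load will do):

# CPU-bound load on one core, in the background:
yes > /dev/null &
# Re-read the per-core frequencies once a second:
watch -n1 'grep -i mhz /proc/cpuinfo'
# Kill the load generator when you're done:
kill %1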


OK, I disabled Turbo and the max CPU is 3600 MHz; with it enabled it's 3601 MHz.

CPU-Z in the Windows VM with Turbo on shows 3.9 GHz+ when running Prime on all cores. With Turbo off it maxes at 3600 MHz.

So I guess the Dashboard won't report it?

 

Anyway, it seems to be resolved, I guess.


 

Seems odd! ;)

You definitely have the Xeon E3-1275 v5 and not, say, a 1245 somehow? :)


 

Your findings agree with my previous results (with an i7-4790S) and seem normal. The dashboard never showed turbo frequencies, but turbo worked as expected.

However, if you don't have issues with the CPU idling, I don't recommend adding intel_pstate=disable.

For some users the pstate driver is the only way the CPU will throttle down; without it the CPU sticks at max frequency, producing extra heat and power draw (my i7-4790S did exactly this).

The pstate driver is better suited to handle the throttling of your CPU than the other "generic" (if I can call it that) modes.

Since your end result still isn't reported exactly correctly (taking the turbo frequency into account), I'd remove it.

 

----

 

On another note: I had most of my USB devices disappear yesterday. It was odd; the only thing that fixed it was a reboot of the server.

Removing/reinserting did not register the device on the System Devices page.

There were approximately 5 devices missing; however my UPS, keyboard, and a wireless receiver were still shown.

Supposedly 4 of my rear USB ports are from an ASMedia chip; the others are from the chipset.

I suppose it is possible (though I haven't looked) that the missing devices are plugged into the ASMedia, and it failed and reinitialized on reboot. Don't know, haven't checked specifically.

The ports on the ASMedia will not separate onto a different bus from the others regardless of what I do (I have tried all the options).
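
A quick way to see which root hub, and therefore which host controller (chipset vs. ASMedia), each device sits behind (a sketch with standard tools; bus 3 is taken from the log below):

# Tree of buses, ports and attached devices with their drivers:
lsusb -t
# Resolve a root hub to the PCI address of its controller:
readlink -f /sys/bus/usb/devices/usb3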

 

I have a syslog, but no diagnostics from this particular occurrence.

I've always had some odd "USB rounding interval XX" errors which I cannot get rid of, despite too much time spent trying.

I had never seen this prior to yesterday, and it repeats a LOT:

May 14 12:33:04 Server kernel: INFO: rcu_preempt detected stalls on CPUs/tasks:
May 14 12:33:04 Server kernel: 	4-...: (0 ticks this GP) idle=4e8/0/0 softirq=50883399/50883399 fqs=0 
May 14 12:33:04 Server kernel: 	(detected by 5, t=60002 jiffies, g=17816713, c=17816712, q=16930111)
May 14 12:33:04 Server kernel: Task dump for CPU 4:
May 14 12:33:04 Server kernel: swapper/4       R  running task        0     0      1 0x00200000
May 14 12:33:04 Server kernel: ffffffff8150415f 0000000200000001 ffff8808bfc9f598 ffffffff81888a40
May 14 12:33:04 Server kernel: ffff88089c368000 ffff88089c364000 ffff88089c364000 ffff88089c367ed8
May 14 12:33:04 Server kernel: ffffffff81504231 ffff88089c367ef0 ffffffff81076247 0000000000000002
May 14 12:33:04 Server kernel: Call Trace:
May 14 12:33:04 Server kernel: [<ffffffff8150415f>] ? cpuidle_enter_state+0x98/0x148
May 14 12:33:04 Server kernel: [<ffffffff81504231>] ? cpuidle_enter+0x12/0x14
May 14 12:33:04 Server kernel: [<ffffffff81076247>] ? call_cpuidle+0x4e/0x50
May 14 12:33:04 Server kernel: [<ffffffff810763cf>] ? cpu_startup_entry+0x186/0x1fd
May 14 12:33:04 Server kernel: [<ffffffff81033cbf>] ? start_secondary+0xf4/0xf7
May 14 12:33:04 Server kernel: rcu_preempt kthread starved for 60002 jiffies! g17816713 c17816712 f0x0 s3 ->state=0x1
May 14 12:35:37 Server kernel: INFO: rcu_preempt detected stalls on CPUs/tasks:
May 14 12:35:37 Server kernel: 	4-...: (0 ticks this GP) idle=374/0/0 softirq=50883399/50883399 fqs=0 
May 14 12:35:37 Server kernel: 	(detected by 10, t=60002 jiffies, g=17816714, c=17816713, q=16926814)
May 14 12:35:37 Server kernel: Task dump for CPU 4:
May 14 12:35:37 Server kernel: swapper/4       R  running task        0     0      1 0x00200000
May 14 12:35:37 Server kernel: ffffffff8150415f 0000000100000001 ffff8808bfc9f598 ffffffff81888a40
May 14 12:35:37 Server kernel: ffff88089c368000 ffff88089c364000 ffff88089c364000 ffff88089c367ed8
May 14 12:35:37 Server kernel: ffffffff81504231 ffff88089c367ef0 ffffffff81076247 0000000000000001

 

What the heck are "jiffies"? Because my server is apparently "starved" for them...  ;D

rcu_preempt kthread starved for 180005 jiffies!

 

Edit:

Actually it lists the issue here; however, I'm not sure why this happened... I'm still blaming the need for more jiffies (backstory: it is a nickname my friend calls me, close to my name of Jeff, so yes, I find that humorous).  ;)

May 15 16:37:06 Server kernel: usb usb3-port9: disabled by hub (EMI?), re-enabling...
May 15 16:37:06 Server kernel: usb 3-9: USB disconnect, device number 4
May 15 16:37:06 Server kernel: usb 3-9.1: USB disconnect, device number 6
May 15 16:37:06 Server kernel: usb 3-9.1.3: USB disconnect, device number 8
May 15 16:37:06 Server kernel: usb 3-9.2: USB disconnect, device number 7
May 15 16:37:06 Server kernel: usb 3-9.3: USB disconnect, device number 9
May 15 16:37:07 Server kernel: usb usb3-port9: Cannot enable. Maybe the USB cable is bad?
May 15 16:37:08 Server kernel: usb usb3-port9: Cannot enable. Maybe the USB cable is bad?
May 15 16:37:09 Server kernel: usb usb3-port9: Cannot enable. Maybe the USB cable is bad?
May 15 16:37:10 Server kernel: usb usb3-port9: Cannot enable. Maybe the USB cable is bad?
May 15 16:37:10 Server kernel: usb usb3-port9: unable to enumerate USB device

server-syslog-20160515-2035.zip
