Nicktdot

Members
  • Posts: 41

  1. I'm very glad to know it works. I was researching the error / mask 00000001/0000e000 in the message and found out it had to do with the PCI end device not responding to an ASPM command. So while turning off AER masks the problem by not logging the errors, it doesn't solve the actual PCI errors. I then started going down the rabbit hole of what ASPM is all about ( https://en.wikipedia.org/wiki/Active_State_Power_Management ) and saw there is a kernel boot flag to turn off the feature. I don't think we need it anyway, seeing as my server is running 24h/day and never goes to sleep mode, and I figured disabling the unused feature might help avoid the error altogether. I'll check my own server next time it reboots!
  2. Could you try pcie_aspm=off? This seems to disable the power-management mode that is throwing the error. I've put it in my config for the next time I reboot (see the syslinux.cfg sketch after this list).
  3. Let me know how it works out for you. I have 1 of 4 SK Hynix NVMe drives on an Asus HYPER M.2 X16 GEN 4 card throwing this error constantly.
  4. It appears there's a schema upgrade screwup when moving to the latest Plex. I'm told on the Plex website that it's due to a corrupted DB, but I get the same behavior with DB backups as well. It appears I'm not alone in this boat. See: https://forums.plex.tv/t/loading-libraries-fails-error-got-exception-from-request-handler-bad-cast/795400
  5. I updated to the latest Plex Server 1.27 ( 1.27.0.5849-99e933842 ) and the libraries were gone when starting. I checked for permission issues, but that wasn't the case. The log files are now filled with these messages (libc++ errors?) whenever a transaction occurs: got exception from request handler: std::bad_cast. I reverted back to Plex Server 1.26 ( PlexMediaServer-1.26.0.5715-8cf78dab ) and the issue disappears. Same filesystem, same Plex database. Example output:
     Jun 02, 2022 10:55:21.399 [0x7f7e37163b38] DEBUG - [com.plexapp.system] HTTP reply status 200, with 0 bytes of content.
     Jun 02, 2022 10:55:21.399 [0x7f7e37d1eb38] DEBUG - Completed: [127.0.0.1:38308] 200 GET /system/messaging/clear_events/com.plexapp.agents.fanarttv (4 live) GZIP 7ms 280 bytes
     Jun 02, 2022 10:55:21.472 [0x7f7e36433b38] ERROR - Got exception from request handler: std::bad_cast
     Seems like every transaction gets this error, or:
     Jun 02, 2022 10:55:21.611 [0x7f7e37140b38] ERROR - Got exception from request handler: Cannot convert data to std::tm.
  6. I upgraded to 6.9.0 last night after running 180+ days on the previous release. I've been experiencing constant crashes across different CPUs since the upgrade. My max uptime has been about 5 hours.
     [ 1758.031275] ------------[ cut here ]------------
     [ 1758.031286] WARNING: CPU: 5 PID: 519 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6
     [ 1758.031287] Modules linked in: tun veth macvlan xt_nat xt_MASQUERADE iptable_nat nf_nat nfsd lockd grace sunrpc md_mod xfs hwmon_vid ipmi_devintf ip6table_filter ip6_tables iptable_filter ip_tables bonding igb i2c_algo_bit i40e sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl mpt3sas i2c_i801 intel_cstate ahci i2c_smbus i2c_core intel_uncore nvme libahci raid_class scsi_transport_sas nvme_core wmi button [last unloaded: i2c_algo_bit]
     [ 1758.031343] CPU: 5 PID: 519 Comm: kworker/5:1 Not tainted 5.10.19-Unraid #1
     [ 1758.031345] Hardware name: Supermicro X10SRA/X10SRA, BIOS 2.1a 10/24/2018
     [ 1758.031352] Workqueue: events macvlan_process_broadcast [macvlan]
     [ 1758.031357] RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6
     [ 1758.031360] Code: e8 64 f9 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 d5 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 5d f3 ff ff e8 30 f6 ff ff e9 22 01
     [ 1758.031362] RSP: 0018:ffffc90000304d38 EFLAGS: 00010202
     [ 1758.031365] RAX: 0000000000000188 RBX: 0000000000003bd9 RCX: 0000000009abba5f
     [ 1758.031367] RDX: 0000000000000000 RSI: 0000000000000232 RDI: ffffffff8200a7a4
     [ 1758.031369] RBP: ffff888586659540 R08: 0000000061fe0175 R09: ffff888103c5d800
     [ 1758.031371] R10: 0000000000000158 R11: ffff8885d3cc1e00 R12: 000000000000fa32
     [ 1758.031373] R13: ffffffff8210db40 R14: 0000000000003bd9 R15: 0000000000000000
     [ 1758.031375] FS:  0000000000000000(0000) GS:ffff88903f340000(0000) knlGS:0000000000000000
     [ 1758.031377] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [ 1758.031379] CR2: 000014eba5200000 CR3: 000000000200c004 CR4: 00000000003706e0
     [ 1758.031381] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     [ 1758.031383] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     [ 1758.031384] Call Trace:
     [ 1758.031387]  <IRQ>
     [ 1758.031393]  nf_conntrack_confirm+0x2f/0x36
     [ 1758.031422]  nf_hook_slow+0x39/0x8e
     [ 1758.031429]  nf_hook.constprop.0+0xb1/0xd8
     [ 1758.031434]  ? ip_protocol_deliver_rcu+0xfe/0xfe
     [ 1758.031437]  ip_local_deliver+0x49/0x75
     [ 1758.031441]  ip_sabotage_in+0x43/0x4d
     [ 1758.031445]  nf_hook_slow+0x39/0x8e
     [ 1758.031449]  nf_hook.constprop.0+0xb1/0xd8
     [ 1758.031453]  ? l3mdev_l3_rcv.constprop.0+0x50/0x50
     [ 1758.031456]  ip_rcv+0x41/0x61
     [ 1758.031464]  __netif_receive_skb_one_core+0x74/0x95
     [ 1758.031474]  process_backlog+0xa3/0x13b
     [ 1758.031482]  net_rx_action+0xf4/0x29d
     [ 1758.031489]  __do_softirq+0xc4/0x1c2
     [ 1758.031495]  asm_call_irq_on_stack+0x12/0x20
     [ 1758.031500]  </IRQ>
     [ 1758.031507]  do_softirq_own_stack+0x2c/0x39
     [ 1758.031518]  do_softirq+0x3a/0x44
     [ 1758.031524]  netif_rx_ni+0x1c/0x22
     [ 1758.031530]  macvlan_broadcast+0x10e/0x13c [macvlan]
     [ 1758.031540]  macvlan_process_broadcast+0xf8/0x143 [macvlan]
     [ 1758.031548]  process_one_work+0x13c/0x1d5
     [ 1758.031554]  worker_thread+0x18b/0x22f
     [ 1758.031559]  ? process_scheduled_works+0x27/0x27
     [ 1758.031564]  kthread+0xe5/0xea
     [ 1758.031567]  ? __kthread_bind_mask+0x57/0x57
     [ 1758.031571]  ret_from_fork+0x22/0x30
     [ 1758.031575] ---[ end trace 485f3428373b5ba8 ]---
  7. Currently 193TB in a Chenbro NR40700 enclosure converted into a JBOD box, attached to an LSI 9206-16e. 4x 960GB NVMe SSDs in a BTRFS cache, plus misc disk mounts for DB, Plex, Docker images, etc.
  8. Interesting. I'm running the upcoming Skylake Purley Xeon. I guess that's what they call the Xeon E5 v5 in the note; however, the nomenclature for this upcoming CPU has changed. The current chip has this CPU info: and judging from the CPU instruction set flags ( http://i.imgur.com/o6Y8LWp.png ), it definitely supports HyperThreading (ht), so it looks like it's affected by the bug (a quick way to check the flag is sketched after this list). I've hammered the box pretty hard but have not encountered any stability issues. Maybe I should run unRAID on it for a bit.
  9. Thanks!!! (Make sure the crc32 kernel module is included; I believe the kernel currently used in 6.3.1 requires it. A quick check is sketched after this list.)
  10. BUMP. Seriously, enable this in the kernel config and provide it as a module. It's not a big deal, and it helps those of us who use it: make menuconfig -> File systems -> F2FS -> [M] (the corresponding .config line is sketched after this list). I don't know why it has taken two years of asking for this.
  11. I run something similar. Go for it!
  12. Could you try another flash drive, just to see if it boots? With two of you having virtually the same problem, I can't believe it is cockpit error! The issue is the Cruzer Fit; I have the same thing. Here's how to manually test it: when it scans for the UNRAID label, remove and reinsert the USB key. You'll see it get picked up right away and it'll boot normally. That's obviously not a long-term solution, but it will let you boot manually.
  13. Sounds more like a Tu-95! I swapped out the fans with Noctuas, so now it's super quiet.
  14. Saw this earlier in the week and ordered one to see. My existing system has a higher-end CPU, so I swapped out the board and upgraded to this chassis from a Norco RPC4224, but it's still a pretty good deal for a 48-drive full system. It's a Chenbro NR40700, which has 2 integrated 24-bay expanders in the drive backplane: http://www.chenbro.com/en-global/products/RackmountChassis/4U_Chassis/NR40700 The systems are complete with an LSI-9211-8i, Xeon X3450 and 32GB RAM, so for a full system the asking price is pretty good. See link below: http://www.ebay.com/itm/Chenbro-48-Bay-Top-Loader-4U-Chassis-w-Rail-Kit-Drive-Brackets-COMPLETE-SYSTEM-/252334824504 Cheers
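
A rough sketch of how the pcie_aspm=off flag from posts 1 and 2 could be applied on an Unraid box, assuming the stock /boot/syslinux/syslinux.cfg layout (the label and append line on a given system may differ, so edit the existing entry rather than copying this verbatim):

    # /boot/syslinux/syslinux.cfg -- add pcie_aspm=off to the default boot entry's append line
    label Unraid OS
      menu default
      kernel /bzimage
      append pcie_aspm=off initrd=/bzroot

After the next reboot, cat /proc/cmdline should show the flag, and dmesg | grep -i aspm can be used to check whether ASPM was actually disabled.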
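For post 8, a generic way (not taken from the original thread) to confirm the HyperThreading flag and the logical-vs-physical core split on any Linux box:

    # Does /proc/cpuinfo advertise the 'ht' flag?
    grep -m1 -w ht /proc/cpuinfo >/dev/null && echo "ht flag present"
    # Threads per core > 1 also indicates HyperThreading is active
    lscpu | grep -E 'Thread\(s\) per core|Core\(s\) per socket'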
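For the crc32 note in post 9, a couple of generic commands (again, not from the original posts) to verify the module is available on the running kernel:

    lsmod | grep crc32                           # is a crc32 module already loaded?
    modprobe crc32c && echo "crc32c loads OK"    # fails if the module was not built for this kernel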
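The menuconfig path in post 10 corresponds to the CONFIG_F2FS_FS option. A minimal sketch of building it as a module from a kernel tree matching the running kernel (this is the standard upstream flow, not Unraid's own build scripts):

    # .config fragment: build F2FS as a loadable module
    CONFIG_F2FS_FS=m

    # rebuild and install modules, then load and verify
    make olddefconfig && make modules && make modules_install
    modprobe f2fs && grep f2fs /proc/filesystems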