joelones

Members

  • Posts: 532
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


joelones's Achievements

  • Enthusiast (6/14)
  • Reputation: 4
  • Community Answers: 1

  1. I'm still on 6.12.3 and would like to upgrade, but I'm facing multiple issues when trying 6.12.6. Any guidance would be appreciated.
     - The Intel GPU (CometLake-S GT2 [UHD Graphics 630]) fails to load at boot, so the Jellyfin docker fails to start, since it is used for GPU transcoding.
     - My Windows 10 VM with a passed-through NVIDIA Quadro fails to start.
     All is fine on 6.12.3. Thoughts? clarabell-diagnostics-20231206-1504.zip
  2. I'm trying out the new UniFi Network Application container and getting a Tomcat 404 error using a custom bridge setup. The previous container works well with this setup. Can anyone please advise? (Unraid 6.12.3)
  3. Just tried it again today; same issue, same call trace.
  4. No, I reverted back to 6.12.3, which is working fine. I would add it as the following (see the consolidated sketch after this list):
     mkdir -p /boot/config/modprobe.d
     echo "options i915 enable_fbc=1 enable_guc=3" > /boot/config/modprobe.d/i915.conf
     mkdir -p /boot/config/modprobe.d
     echo "options i915 enable_dc=0" > /boot/config/modprobe.d/i915.conf
     What does this do? On 6.12.3 there is no need to add the modprobe lines; it just works. Odd.
  5. Apparently `/dev/dri/*`, which I pass into the Jellyfin docker, does not exist:
     root@clarabell:/boot/config# intel_gpu_top -L
     No GPU devices found
     Tried this, but modprobe just hangs:
     mkdir -p /boot/config/modprobe.d
     echo "options i915 enable_fbc=1 enable_guc=3" > /boot/config/modprobe.d/i915.conf
     Here are the diagnostics:
     root@clarabell:~# lsmod | grep intel
     intel_rapl_msr         16384  0
     intel_rapl_common      24576  1 intel_rapl_msr
     intel_powerclamp       16384  0
     kvm_intel             282624  4
     iosf_mbi               20480  2 i915,intel_rapl_common
     kvm                   983040  1 kvm_intel
     crc32c_intel           24576  2
     ghash_clmulni_intel    16384  0
     aesni_intel           393216  0
     crypto_simd            16384  1 aesni_intel
     cryptd                 24576  2 crypto_simd,ghash_clmulni_intel
     intel_cstate           20480  0
     intel_gtt              24576  1 i915
     agpgart                40960  2 intel_gtt,ttm
     intel_uncore          200704  0
     intel_pmc_core         49152  0
     clarabell-diagnostics-20230912-1149.zip
  6. Are other users having problems loading the Intel Quick Sync drivers with 6.12.4? I seem to have to revert to 6.12.3 to get Quick Sync working in my container.
  7. Just installed 6.12.4 and can't start Jellyfin due to what I think are missing Intel drivers. Is there a package/app for this now with 6.12.4? 6.12.3 didn't have this problem.
  8. It seems a power cycle brought back the NIC's activity LEDs; not sure what happened. It could be that I'm on an old (currently overloaded) UPS, so maybe power issues. I hope it's not hardware.
  9. Here's a picture of the NICs: the right one is passed through to pfSense and the left one is suddenly giving me a problem.
  10. You're right, that didn't help. Here's my zip. Did my quad NIC just die? I see activity on the quad's LEDs, but only the green LED is on. I still have the onboard NIC, which it now thinks is eth0...
  11. Before doing that: I only see a network-rules.cfg.old and don't see another. The file seems to have the valid network settings. I copied it to network-rules.cfg and will try a reboot (the restore steps are sketched after this list).
  12. Can someone please help? I'm having a major crisis. I just updated to 6.12.4 and the settings for my quad NIC are gone; I can no longer see the other eth1-eth3 interfaces. I tried downgrading to 6.12.3 with the same issue. Please help.
  13. I'm currently using a second Intel quad NIC to allocate separate VLANs to a couple of my dockers. Basically, port 1 is untagged (host), and ports 2 and 3 are configured as separate VLANs (attached image), and I use these bridges for certain dockers. I believe I did this because I had problems with inter-docker networking when other dockers were bridged (to the host) on the same parent interface. Could I do away with this NIC and use the onboard Intel NIC on the motherboard, setting up VLANs on a parent interface, or will I then have macvlan problems such as inter-docker communication with the unRAID box and VLANs? (A VLAN/macvlan sketch is shown after this list.)
  14. Hello, although this is hardware related, I'm posting my question here as I will need to upgrade an older AMD system to an Intel 11th gen based system, and I'm not sure how easy it is to migrate to a new motherboard + CPU combo (while keeping the disks and HBA). Will unRAID be smart enough to work with the existing configuration?
     Requirements:
     - As many PCI-E slots as possible without going to server motherboards: GPU, HBA, and possibly two Intel quad NICs (one for a VM and one for VLANs allocated to dockers).
     - Run a maximum of two VMs: one for pfSense (passing in an Intel quad NIC), and another for a basic Windows box for slicing models for my 3D printers.
     - Intel iGPU using Quick Sync for Jellyfin hardware acceleration.
     This is what I have come up with. Thoughts?
     Proposed hardware:
     - ASUS Z590-A Prime
     - 11700K or 11600K, or even an i7-10700
     - G.SKILL Ripjaws V Series 32GB (2 x 16GB) 288-Pin DDR4 3200 (PC4 25600)
     - I already have a Rosewill RSV-L4500U server case, but I take it I will need a new low-profile heatsink for the CPU? The one I'm using for the old AMD 8350 surely might not work, right?
     - Corsair TX750 from the existing box (assuming this will still work?)
     Questions:
     - Do the hardware specs seem OK for what I need?
     - Is migrating to a new motherboard + CPU combo easy?
     - I'm using a second Intel quad NIC to allocate separate VLANs to a couple of my dockers. Could I do away with the second NIC and use the onboard Intel NIC on the motherboard and set up VLANs? Will I have macvlan problems like inter-docker communication with the unRAID box and VLANs?
  15. Hi, I'm just curious if anyone with strong networking skills can explain the following setup: why it works one way and not the other. I have a quad NIC: one port (untagged, 3.x subnet, VLAN 1) and a second port (VLAN 10) as below, both connected to an old Cisco switch, and my docker network is set up as follows. Concerning my question: as I said, eth1 is connected to the Cisco switch, and if I set the port mode to access and allow VLAN 10, I am not able to ping anything on br1.10. However, if I set it as a trunk port allowing VLAN 1 and VLAN 10, it works. I would not expect to need a trunk port, since I clearly only want VLAN 10 traffic flowing. Perhaps I'm confused or something. Thanks.
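
A minimal, consolidated sketch of the i915 modprobe config discussed in items 1, 4 and 5 above. It assumes the standard Unraid flash path /boot/config/modprobe.d is read at boot; the enable_fbc/enable_guc/enable_dc values are simply the ones quoted in those posts, not a recommendation. Note that the two separate echo commands in item 4 both redirect with > into the same i915.conf, so the second overwrites the first; putting everything on one options line (or appending with >>) keeps them all.

    # consolidate the quoted options into one i915.conf on the flash drive
    mkdir -p /boot/config/modprobe.d
    echo "options i915 enable_fbc=1 enable_guc=3 enable_dc=0" > /boot/config/modprobe.d/i915.conf

    # after a reboot, verify the driver actually bound to the iGPU
    lsmod | grep i915     # module loaded?
    ls -l /dev/dri        # card*/renderD* nodes present to pass into the Jellyfin container?
    intel_gpu_top -L      # should list the UHD 630 instead of "No GPU devices found"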
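
For item 11, a short sketch of restoring the NIC assignment rules from the leftover backup; the filename and flash location are the ones mentioned in that post, and this assumes the .old file really does contain the correct MAC-to-ethX mappings.

    # restore the interface naming rules from the surviving backup
    cd /boot/config
    cp network-rules.cfg.old network-rules.cfg
    # then reboot so eth0-eth3 are assigned according to the restored rules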
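
For items 13 and 15, a hypothetical sketch of what the onboard-NIC alternative would look like at the Linux/Docker level: a VLAN sub-interface on the parent NIC plus a macvlan network for selected containers. The interface name (eth0), subnet, gateway and network name (vlan10net) are placeholders rather than values from the posts, and on Unraid these networks are normally created through the GUI rather than by hand.

    # tag VLAN 10 on the onboard NIC (assumed here to be eth0)
    ip link add link eth0 name eth0.10 type vlan id 10
    ip link set eth0.10 up

    # macvlan Docker network on that VLAN so containers get their own
    # layer-2 presence on VLAN 10 (addresses are placeholders)
    docker network create -d macvlan \
      --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
      -o parent=eth0.10 vlan10net

One caveat: with macvlan, the host cannot reach these containers directly over the same parent interface by default, which lines up with the host/inter-docker communication concerns in items 13 and 14. The switch port carrying the tagged traffic also has to permit VLAN 10, which touches on the access-vs-trunk question in item 15.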