jakea333

Members
  • Posts

    85
  • Joined

  • Last visited

  • Gender
    Undisclosed


jakea333's Achievements

Rookie (2/14)

Reputation: 6

  1. You'll lose the IPMI display while Unraid is booting. For me, there's not much to "see" or interact with there once Unraid is booted; my IPMI usage is focused on accessing the BIOS and verifying that the boot process starts correctly. I'm not using the main CLI via IPMI, so not much is lost. I can confirm that it seems to fix the issue for me on 6.12.6. I've also upgraded to 6.12.8 with the same bugs present and the same fix still in place. This problem seems limited to the mATX variant (I can see at least 3 confirmed cases in this thread); the ATX version doesn't seem to be reporting the same problem. I'll be interested to see how @Daniel15 fares with your upgrade, as you seem to have a better understanding of this process than I do.
  2. Glad it worked out. I am curious as to the root cause as well, as other boards with the Aspeed BMC don't seem to suffer in the same way. It's beyond my ability to troubleshoot, but I know that something changed between 6.12.4 and 6.12.6 that introduced this bug for me. Maybe someone else can identify the specific fix that's needed. I'm planning to leave it blacklisted and check after each Unraid release. Hopefully it's fixed in time with kernel updates.
  3. I've had issues with the W680M board and iGPU passthrough that appear to be related to the BMC, and I wasn't able to resolve them with the BIOS changes mentioned in this thread. They weren't present on Unraid 6.12.4, but began when I attempted to update to 6.12.6. Thanks to JorgeB in the 6.12.6 announcement thread, blacklisting the ast driver allows the iGPU to work again: echo "blacklist ast" > /boot/config/modprobe.d/ast.conf (a slightly expanded sketch is appended after this list). You'll lose the BMC display during Unraid startup, but I don't generally use it at that point anyway.
  4. Apologies for the long delay, but I wanted to let you know that this does resolve the issue for me. Thanks for your guidance! It will drop the BMC connection during Unraid start, but that's not a big issue for me. I primarily use it for BIOS access and non-boot troubleshooting.
  5. Similar to mrhanderson, I made another attempt at 6.12.6 from a working 6.12.4 with the 13th Gen Intel CPU & W680 board. This time I captured the diagnostics I'm attaching. The system doesn't seem to fully start, never reaching the command-line login prompt (while watching via IPMI). However, the GUI comes up successfully and I'm able to interact with the system. The array starts (although I had Docker disabled this time), but I can see the same iGPU access issues. It also hangs on every shutdown and forces a hard power down. I also booted the system from a new, cleanly installed USB. That boot reached the login prompt fully; with no plugins installed, I didn't check the iGPU issues. However, the system still hung during shutdown on the clean USB and I had to manually power off again. I captured the diagnostics for that boot as well, though I'm not sure if it's helpful. For now, I've also rolled back to 6.12.4 with everything fully functional again. tower-diagnostics-20231231-0829.zip tower-diagnostics-20231231-0931_clean install.zip
  6. Going to third this type of failure with a W680 board (ASUS Pro WS W680M-ACE SE) and a 13th Gen Intel CPU (i5-13500T). Unfortunately, I didn't capture a diagnostics package; hopefully it can be resolved from your case here. Same inability to cleanly reboot/shutdown (it hangs at the end), though the hard restarts didn't kick off parity checks. Only the onboard Intel iGPU (no discrete GPU). I'm not new to Unraid, but my system was recently migrated to this new hardware from a very stable 4th Gen Intel system. Rolled back to version 6.12.4 and all is happy again. First-ever issue with an update.
  7. I've never had any luck with this plugin keeping itself up to date. I have the setting "Automatically protect new and modified files:" set to "Enabled", but it never seems to work correctly. So far I've just manually rebuilt every few days, which works fine, but I'd really prefer the near real-time protection. I think it might be related to the inotifywait component. When I look at the config I see:

      root@Tower:~# cat /etc/inotifywait.conf
      cmd="a"
      method="-md5"
      exclude="(Domain_Backups/|Podcasts/)"
      disks=""

    Is that "disks" parameter supposed to contain all of my array disks? I've also intermittently seen what I believe is a related error in my syslog:

      Apr 14 20:06:08 Tower inotifywait[22757]: Failed to watch /mnt/disk5; upper limit on inotify watches reached!
      Apr 14 20:06:08 Tower inotifywait[22757]: Please increase the amount of inotify watches allowed per user via `/proc/sys/fs/inotify

    I went to that location and increased the watches from the default 524288 to 1524288 as a rough test, then reapplied the settings in File Integrity, but saw no change. Disk5 contains a backup of my Plex library, which has a large number of small files. My guess is that it exceeds the default number of watches, but my quick change didn't seem to help. The share it's stored in is excluded, but I don't believe that matters to the disk watches. Any suggestions for what I might try next? I haven't found much in the way of logs to help parse what's going on, so I'm a little out of my depth.

    *** UPDATE: It seems I spoke too soon. Since raising the maximum number of watches, my files have been staying up to date. I added a line to my Go file to set the maximum to ~2 million each time unRAID boots (a sketch of that is appended after this list); I figure that's enough that I won't have to worry about it. inotifywait uses ~225 MB of RAM on my system, but that seems a small price to pay for the functionality of this plug-in. I still see disks="" in the config, but the plug-in now appears to be functioning correctly.
  8. I generated the same error while trying to set up a new external backup drive and testing the mount/unmount script I had assigned. Each time, I could remount only once, and only after rebooting the server. As these were simply backup disks I had just formatted, I went with a different FS. Given the prevalence of XFS in UnRAID, perhaps Rob's fix should be implemented in this plugin?
  9. In the GUI, you can go to the Shares tab and click "Compute" under the Size category; I believe that only works for top-level shares, however. Any folder inside a share can easily be checked from Windows via right-click > Properties. You can also use "du" from the command line at any level (a quick example is appended after this list).
  10. Thanks, picked up a single drive to pair with my current parity drive. If you've got an Amex card, there's a $25-back-on-$200 offer at Newegg that brings it down to $214 (not to mention the bonus warranty year and ShopRunner 2-day shipping benefits those cards usually provide).
  11. So, I've corrected my issue. I'm still not convinced this is due to gfjardim's plugins, but uninstalling them temporarily did resolve my problem. Basically, I uninstalled both the Preclear and Unassigned Devices plugins, rebooted the server (cache did not auto populate), assigned the drive, started the array, stopped the array, unassigned the drive, started and stopped the array again, then finally assigned the drive again. After that my cache has persisted through power cycles, even once I reinstalled these plugins. That may not be the most efficient way to do it, but I was essentially trying to emulate the process needed for unRAID to "forget" a disk you wanted to rebuild onto. I'd done something similar with the plugins installed and the cache drive never persisted through reboots.
  12. Did you find a fix for this? I've seen the same symptoms. I noticed my cache drive was no longer persistent before adding this plugin (I recently added the Preclear plugin and then noticed the issue when I rebooted to add a new drive). I didn't change the physical cache drive, but I did unassign it to do a secure erase approximately a month ago, and I probably hadn't rebooted the server between reassigning the drive after the secure erase and the recent power down to install the new drive. I'm assuming this isn't related to gfjardim's excellent plugins, but I'm curious whether you've corrected the problem. I've removed the flash drive and checked it in a Windows box (no issues found) and unassigned/reassigned the cache drive multiple times with no change.
  13. Nothing fancy on my end, just plug and play. I picked it up from BPlus via Amazon. I only had a half-size mini PCIe slot to play with, so I made sure to find one of the shorter cards. That left my single PCIe slot available for a graphics card I could pass through to a VM.
  14. I don't believe UnRAID supports wireless cards. I'm using the Z87E-ITX board as well and it's been very solid. I tossed the wireless card into an old laptop and use the mini PCIe slot for a dual SATA card.
  15. jakea333

    plex talk

    A single i7 or Xeon is more than adequate. A few things you should consider before making decisions: what quality (bitrate) is your media, how many simultaneous transcoded streams do you realistically expect to see, and how large are your media files? I think a lot of people overestimate the amount of processing power they need for Plex. I have an i5-4570 that hums along nicely with my setup - a Docker Plex Media Server on UnRAID that serves 12-14 individuals locally and remotely, plus a Xen-based Windows 8.1 VM that functions primarily as a Plex Home Theater front-end.

    Quality is important, as the general recommendation is ~1000 PassMark points per 1080p transcode, so a Haswell i5 will support 7 or so simultaneous transcodes. That said, my media is usually at a lower quality. My largest files are only 4-5 GB and the average file size is probably closer to 750 MB, which usually translates to bitrates around 1000 - 5000 kbps. At that size, only my largest files are even transcoded during remote playback, and those likely require less than the 1000 PassMark recommendation anyway. Even sharing with that many people, I never see more than 6-7 watching at the same time, and usually only 1-2 of those remote connections are transcoded. Now, if you have nothing but 25 GB Blu-ray rips at 20+ Mbps bitrates, your needs may change - especially if every single remote stream has to be transcoded down to ~3 Mbps.

    As for Plex's RAM usage, it doesn't seem to use much unless you set the transcode directory in the Plex settings to the /tmp directory on UnRAID (which is stored in RAM). That's where the temporary transcoded files are stored; you can just as easily point it at a regular location on an HDD/SSD. The thing to keep in mind is that Plex doesn't delete the "chunks" as it goes along - it builds up the entire file as it transcodes and doesn't delete it until the viewer closes the stream. If you've got large files being transcoded, this can take up a lot of space. I transcode to the /tmp directory on my system because I never really see more than 2-3 transcodes taking up 3-5 GB, depending on where each is in playback. So if you choose to use the /tmp directory you'll need adequate RAM; I believe it defaults to half the amount of RAM UnRAID has available, and I have 16 GB installed, so /tmp is ~8 GB (a quick way to check this is sketched after this list).

    My point is just that the simple act of using Plex doesn't necessitate some super beefy system. What matters is the files you'll be serving and how they'll be accessed. Obscene amounts of RAM aren't necessary - there is minimal advantage to transcoding to RAM over an SSD unless you've just got the extra resources anyway. For remote streams, most people find the bottleneck to be their upload speed rather than a CPU limitation. If you've got something like Google Fiber where that's not the case, just tell your remote users to change their settings to allow Direct Play so you don't tax the CPU anyway.
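
Sketch for post 3: the blacklist command quoted there, written out as it would go on the Unraid flash drive. The mkdir and the post-reboot lsmod check are my own additions for illustration, not part of the original fix.

    # Persist the blacklist on the flash drive so it survives reboots
    mkdir -p /boot/config/modprobe.d
    echo "blacklist ast" > /boot/config/modprobe.d/ast.conf
    # After the next reboot, confirm the ast framebuffer driver stayed unloaded
    lsmod | grep ast || echo "ast not loaded"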
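
Sketch for post 7: raising the inotify watch limit at boot. The original post only says a line was added to the Go file to allow ~2 million watches; the exact value and the sysctl alternative below are assumptions on my part.

    # Line added to /boot/config/go (assumed form): raise the per-user inotify watch limit at boot
    echo 2097152 > /proc/sys/fs/inotify/max_user_watches

    # One-off check and change from the console
    cat /proc/sys/fs/inotify/max_user_watches
    sysctl -w fs.inotify.max_user_watches=2097152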
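
Sketch for post 9: checking folder sizes from the command line with du. The share and folder names are placeholders.

    # Summarize the size of one folder inside a share (human-readable)
    du -sh "/mnt/user/Movies/Some Folder"
    # Or show the size of every top-level folder in a share
    du -sh /mnt/user/Movies/*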
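
Sketch for post 15: a quick way to see how much headroom the RAM-backed /tmp has before pointing Plex's transcode directory at it. These commands are my own illustration; the half-of-installed-RAM default is as described in the post.

    # /tmp on Unraid lives in RAM (tmpfs), sized at roughly half of installed RAM by default
    df -h /tmp
    # Rough check of how much space in-flight transcode "chunks" are using
    # (the exact subdirectory depends on your Plex transcoder temporary directory setting)
    du -sh /tmp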