bombz

Members
  • Posts: 613
  • Joined
  • Last visited
  • Gender: Undisclosed


bombz's Achievements

  • Enthusiast (6/14)
  • Reputation: 12


Community Answers

  1. Thanks for this info. Confirmed on one of my instances that the plugin has removed/uninstalled itself from the plugin list. I will confirm on the rest when I have maintenance time to update the OS. Thank you, devs and community!
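
If anyone wants to verify the same thing from a terminal, a minimal check, assuming the stock /boot/config/plugins location where each installed plugin keeps a .plg file:

```bash
# Each installed plugin leaves a .plg file here; if the plugin in question
# is gone from this listing, it really did remove itself.
ls /boot/config/plugins/*.plg
```
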
  2. I saw Update Assistant report 6.12.10 when I was on 6.12.8. When updating the OS, it went to 6.12.9; once rebooted, 6.12.10 was available. It must be a staged update, as I didn't see 6.12.10 posted on the releases page here. Thanks for this thread and the community feedback.
  3. It would be awesome if the Unraid Docker section implemented a link to the changelogs for containers... perhaps some day. https://github.com/KDE/krusader/blob/master/ChangeLog I assume this is the most recent changelog?
  4. Hello, I appreciate this post/patch. After every OS update I run through the logs on both servers, and saw this earmarked in the log:
     root: Fix Common Problems: Warning: Docker Patch Not Installed
     which led me here to investigate further. Thank you, devs, members, and everyone involved in the Unraid community!
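
For anyone chasing the same warning, the Fix Common Problems entries can be pulled straight out of the syslog; a minimal sketch, assuming the standard /var/log/syslog location:

```bash
# Show the most recent Fix Common Problems entries without scrolling the GUI log.
grep -i "fix common problems" /var/log/syslog | tail -n 20
```
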
  5. Hello, Ah, now I see what you mean. My apologies; I had never had to perform a name change on a UD mount point before. Both disks are now successfully mounted. Thank you again for your prompt feedback and assistance, much appreciated!
  6. Hello, Thank you for the quick reply. I am not sure what you mean. Here are the two disks: if I try to mount (sdh) I receive the prompt stated above, but if I mount (sdi) the mount is successful and (sdh) changes to 'reboot'. Would you mind clarifying? Thank you.
  7. Hello, I am attempting to mount a device with UD and am unsure why it won't mount. It is a disk I pulled from the array, formatted as XFS, and the plan is to preclear it. I am running into the following error:
     Jan 20 14:13:27 unassigned.devices: Error: Device '/dev/sdh1' mount point 'WDC_WD40_EFRX' - name is reserved, used in the array or a pool, or by an unassigned device.
     Jan 20 14:13:27 unassigned.devices: Disk with serial 'WDC_WD40_EFRX_ATMM221U3000000001-0:0', mountpoint 'WDC_WD40_EFRX' cannot be mounted.
     I was able to mount other disks; however, this one is not mounting. Any ideas?
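
The error says another mount already claims the name 'WDC_WD40_EFRX'. A rough way to hunt down the collision, assuming UD mounts surface under /mnt/disks and UD's saved settings live under /boot/config/plugins/unassigned.devices (both are my assumptions about the stock layout):

```bash
# Array, pool, and UD mount points all appear under /mnt; look for the
# name the error complains about.
ls /mnt /mnt/disks
# Check whether UD's saved config still references that mount point name.
grep -r "WDC_WD40_EFRX" /boot/config/plugins/unassigned.devices/ 2>/dev/null
```
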
  8. 10-4, seems better now. I had to pick an off-peak time to reboot; I was considering rebooting sooner, and thought to add some info for the community just in case :-) Output after the reboot:
     0       /var/log/pwfail
     8.0K    /var/log/unraid-api
     0       /var/log/preclear
     0       /var/log/swtpm/libvirt/qemu
     0       /var/log/swtpm/libvirt
     0       /var/log/swtpm
     0       /var/log/samba/cores/rpcd_winreg
     0       /var/log/samba/cores/rpcd_classic
     0       /var/log/samba/cores/rpcd_lsad
     0       /var/log/samba/cores/samba-dcerpcd
     0       /var/log/samba/cores/winbindd
     0       /var/log/samba/cores/nmbd
     0       /var/log/samba/cores/smbd
     0       /var/log/samba/cores
     36K     /var/log/samba
     0       /var/log/plugins
     0       /var/log/pkgtools/removed_uninstall_scripts
     4.0K    /var/log/pkgtools/removed_scripts
     4.0K    /var/log/pkgtools/removed_packages
     8.0K    /var/log/pkgtools
     4.0K    /var/log/nginx
     0       /var/log/nfsd
     0       /var/log/libvirt/qemu
     0       /var/log/libvirt/ch
     0       /var/log/libvirt
     428K    /var/log
     Thank you again for your assistance!
  9. Hello, Here is the output of the command:
     0       /var/log/pwfail
     127M    /var/log/unraid-api
     0       /var/log/preclear
     0       /var/log/swtpm/libvirt/qemu
     0       /var/log/swtpm/libvirt
     0       /var/log/swtpm
     0       /var/log/samba/cores/rpcd_winreg
     0       /var/log/samba/cores/rpcd_classic
     0       /var/log/samba/cores/rpcd_lsad
     0       /var/log/samba/cores/samba-dcerpcd
     0       /var/log/samba/cores/winbindd
     0       /var/log/samba/cores/nmbd
     0       /var/log/samba/cores/smbd
     0       /var/log/samba/cores
     1.1M    /var/log/samba
     0       /var/log/plugins
     0       /var/log/pkgtools/removed_uninstall_scripts
     4.0K    /var/log/pkgtools/removed_scripts
     12K     /var/log/pkgtools/removed_packages
     16K     /var/log/pkgtools
     8.0K    /var/log/nginx
     0       /var/log/nfsd
     0       /var/log/libvirt/qemu
     0       /var/log/libvirt/ch
     0       /var/log/libvirt
     128M    /var/log
     I appreciate your feedback; hope it helps with a resolution. Thank you.
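
From that output, /var/log/unraid-api (127M) is what filled the 128M of log space. A quick sketch for ranking offenders and reclaiming the space; the truncate target is my assumption about where the unraid-api log files live:

```bash
# Rank /var/log contents by size, largest first, to spot the offender.
du -h /var/log | sort -rh | head -n 10
# Truncating (rather than deleting) frees the space even while the
# service still holds the files open.
truncate -s 0 /var/log/unraid-api/*.log
```
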
  10. Hello, I had installed all plugin updates earlier today, before I saw this concern happen... 'connect' was one of them. I saw in the 'connect' release notes, before pushing the plugin update, that there were changes to 'connect' due to community feedback. Once updated, I noticed the system log went to 100%.
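
For context on what that 100% means, a minimal check, relying on the fact that Unraid keeps /var/log on a small RAM-backed tmpfs:

```bash
# If this shows 100% use, the tmpfs holding the logs is full; no data
# disk is involved.
df -h /var/log
```
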
  11. Hello, I seem to also be having a concern with the GUI system log showing 100%. I have been attempting to figure out where the concern may be, and have posted diagnostics to assist. Thank you. unraid-diagnostics-20240113-1058.zip
  12. Saw this concern today: random Docker containers are showing 'not available' when attempting to update, after updating to the latest CA. Currently running v6.12.4. Is there a workaround for this concern, or has it perhaps been patched in the latest OS, 6.12.6?
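
While waiting on a fix, one way to confirm whether an update actually exists, bypassing the GUI status entirely (the image name below is only an example; substitute your container's repository):

```bash
IMAGE="linuxserver/plex"   # example image; use your container's repository
# Show the digest of the local image, then pull: a no-op pull means the
# tag was already current despite the 'not available' status in the GUI.
docker image inspect --format '{{index .RepoDigests 0}}' "$IMAGE"
docker pull "$IMAGE"
```
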
  13. Thanks for the info here. I was looking to update my current HTPC, which streams from the Plex server (direct play). My challenge was sorting through all the new hardware out today that would allow the HTPC (client) to play 4K HEVC / AV1 without any concerns. I only need to upgrade the motherboard and CPU. I stumbled across these and would like to get your thoughts:
     Intel Core i3-13100 Desktop Processor, 4 cores (4 P-cores + 0 E-cores), 12MB Cache, up to 4.5 GHz
     MSI PRO B760M-P DDR4 (supports 12th/13th Gen Intel processors, LGA 1700, DDR4, PCIe 4.0, M.2, 2.5Gbps LAN, USB 3.2 Gen2, mATX)
     I was curious whether the Intel UHD Graphics 730 would handle 4K playback from Plex, or if I would require more horsepower from, say, a:
     GeForce GT 710 2GB DDR3 PCI-E 2.0 DL-DVI VGA HDMI Passive Cooled Single Slot Low Profile Graphics Card (ZT-71302-20L)
     Looking forward to hearing your feedback. Thank you.
  14. Unraid OS > 6.12.4. Also reporting the concern, and wanted to share: it seems (for me) to be related to Docker when a container is restarted.
     shfs: shfs: ../lib/fuse.c:1450: unlink_node: Assertion `node->nlookup > 1' failed.
     This would kill all my shares, and the restarted Docker container in question would not start. Upon reboot the issue still persisted. My resolution, until posting about this concern, was to restore the flash backup to get things operational again. Reading user posts here, it was recommended to disable NFS shares, which I have done today; I found one share had it turned on. I have NOT yet changed the following setting, which was also recommended:
     Settings > Global Share Settings > Tunable (support Hard Links): no
     Going to see if disabling NFS shares helps with this concern. Perhaps it will be resolved in an upcoming release? Planning to move to 6.12.6 based on this: 'This release includes bug fixes and an important patch release of OpenZFS. All users are encouraged to upgrade.' Perhaps that will assist with this bug? I am not 100% sure; I am going on what the wonderful community here has posted and crossing my fingers. Thanks for all the feedback as always, everyone!
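
A quick way to double-check that no other share still has NFS export enabled, assuming the per-share flags live in the .cfg files under /boot/config/shares (key names can vary by release):

```bash
# Print every NFS-related line from the per-share configs; any share still
# exporting over NFS should show up here.
grep -i "nfs" /boot/config/shares/*.cfg
```
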
  15. Hello, You rock, man. Thanks for all the feedback; I really appreciate your time clarifying these concerns!