docgyver Posted August 4, 2023 (Author)

On 6/18/2023 at 7:01 AM, Alex R. Berg said:

There's a bug in the detection of whether WORK_DIR is on persistent storage: it always says "persistent" unless it's on 'ramfs' (the first match in the array). This code fixes it by adding 'tmpfs' to the array and not breaking on the first mismatch (in the .plg file):

    denyhosts_datacheck() {
        array=( ramfs proc tempfs sysfs tmpfs )
        fs=$( stat -f -c '%T' "$WORK_DIR" )
        if [ "$fs" = "msdos" ]; then
            echo "<p style=\"color:red;\"><b>WARNING:</b> Your WORK_DIR is located on your flash drive. This can decrease the life span of your flash device!</p>"
        else
            found=0
            for i in "${array[@]}"; do
                if [[ "$i" == "$fs" ]]; then
                    echo "<p style=\"color:red;\"><b>WARNING:</b> Your WORK_DIR is not persistent and WILL NOT survive a reboot. The WORK_DIR maintains a running history of past DenyHosts entries and ideally should be maintained across reboots. Please locate your WORK_DIR on persistent storage, e.g. a cache/array disk.</p>"
                    found=1
                    break
                fi
            done
            if (( ! found )); then
                echo "<p style=\"color:green;\">WORK_DIR located on persistent storage. Your data will persist after a reboot :-)</p>"
            fi
        fi
    }

I'm not really sure whether it's a good idea to put it on /boot, due to spamming writes on USB, and I would also prefer it not to depend on /mnt being available. So I suspect the best option would be copying to/from /boot on start/stop or mount, or something like that. What do others do to persist the data, and what is the data? I would be fine with moving the deny-lists to /boot, as I expect those are not written frequently, unless of course I'm spammed from an unlimited number of IPv6 addresses (if that can happen...).

Thanks. I've updated for tmpfs and "any match". I put my WORK_DIR on the same cache array I use for the Docker disk image and Docker application "config"; that is the "usual" place for this sort of persistent data. I forget what array triggers (stop/start/...) exist, but there might be a practical way to use an in-memory filesystem that persists the data to /boot "at the last moment".
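For the copy-to/from-/boot idea, a minimal sketch of such start/stop hooks might look like the following. All paths and hook names here are assumptions, not what the plugin actually does; writes to /boot should stay infrequent (stop/shutdown only) to spare the flash device:

```shell
# Sketch only: persist a RAM-backed WORK_DIR across reboots by copying it
# to flash on array stop and restoring it on array start.
# Both directory arguments are hypothetical and passed in by the caller.

persist_work_dir() {   # usage: persist_work_dir <work_dir> <flash_backup_dir>
    mkdir -p "$2"
    cp -a "$1/." "$2/"             # copy contents, preserving attributes
}

restore_work_dir() {   # usage: restore_work_dir <flash_backup_dir> <work_dir>
    mkdir -p "$2"
    if [ -d "$1" ]; then           # nothing to restore on first boot
        cp -a "$1/." "$2/"
    fi
}

# Example wiring (hypothetical paths):
#   on array stop:  persist_work_dir /var/lib/denyhosts /boot/config/plugins/denyhosts/work_dir
#   on array start: restore_work_dir /boot/config/plugins/denyhosts/work_dir /var/lib/denyhosts
```

Wired into stop/start events this keeps the live WORK_DIR off flash while still surviving a clean reboot; an unclean shutdown would lose only the entries since the last stop.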
Johann Posted August 16, 2023

I'm not sure if this is possible, but I use Termius and it doesn't support adding an SFTP path yet. Would it be possible to implement a default SFTP path into the plugin, so it can be set from the GUI? https://support.termius.com/hc/en-us/articles/4403215978393-How-do-I-set-a-default-SFTP-path- Thank you!
docgyver Posted August 17, 2023 (Author)

@Johann Do you need unRaid to have a default SFTP path on an inbound connection, or a particular unRaid user to have a default SFTP path when connecting from unRaid out to another server? High-level thinking here as I await your response...

Changing unRaid's inbound SFTP default could have unintended consequences for other apps, and the more dials and buttons we add to the plugin, the more risk that someone will mess things up. If inbound is the direction you intend, I'd be more inclined to figure out how to change the user's home directory, e.g. have a manually created "passwd" file in the /boot/config/plugins/ssh/<user> folder and use that to change the home directory.

Outbound is much easier though. I could add support for an ".ssh/config" file to be copied to the /home/<user>/.ssh folder from /boot.

doc..
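The outbound case described above could be as simple as the sketch below. The idea of sourcing the file from a per-user folder on /boot follows the plugin's existing layout, but the exact paths here are assumptions; the chmod calls matter because ssh refuses to use a config file with loose permissions:

```shell
# Sketch: install a user's ssh client config from flash into their live
# home directory. Source/destination layout is hypothetical.
install_ssh_config() {   # usage: install_ssh_config <flash_config_file> <home_dir>
    local src="$1" home="$2"
    [ -f "$src" ] || return 0            # nothing to install for this user
    mkdir -p "$home/.ssh"
    cp "$src" "$home/.ssh/config"
    chmod 700 "$home/.ssh"
    chmod 600 "$home/.ssh/config"        # ssh rejects configs others can write
}

# Hypothetical call, once per user at plugin start:
#   install_ssh_config /boot/config/plugins/ssh/alice/config /home/alice
```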
cayuga Posted October 25, 2023

Hi there,

I have recently updated my unRaid server from 6.11.5 to 6.12.4, and I am now running into a pretty unique issue: every time a new SSH session is created, a new cgroup is created, and unfortunately this cgroup does not get cleaned up when the session is closed. I use SSH to run some Icinga monitoring checks on the host, so this adds up pretty fast, reaching the cgroup limit of 65535 within a few weeks. When the limit is reached, I cannot start new Docker containers:

    docker: error response from daemon: failed to create shim task: oci runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: mkdir /sys/fs/cgroup/docker/XXX: no space left on device: unknown.

Restarting the SSH daemon does not appear to help; I have to restart the server to clear the defunct cgroups.

The number of cgroups can be viewed with cat /proc/cgroups:

    #subsys_name  hierarchy  num_cgroups  enabled
    cpuset        0          52           1
    cpu           0          52           1
    cpuacct       0          52           1
    blkio         0          52           1
    memory        0          52           1
    devices       0          52           1
    freezer       0          52           1
    net_cls       0          52           1
    perf_event    0          52           1
    net_prio      0          52           1
    hugetlb       0          52           1
    pids          0          52           1

All these cgroups appear as /sys/fs/cgroup/cXX:

    /sys/fs/cgroup# ls
    c1/   c15/  c20/  c4/  cgroup.controllers      cgroup.threads         io.stat
    c10/  c16/  c21/  c5/  cgroup.max.depth        cpu.stat               machine/
    c11/  c17/  c22/  c6/  cgroup.max.descendants  cpuset.cpus.effective  memory.numa_stat
    c12/  c18/  c23/  c7/  cgroup.procs            cpuset.mems.effective  memory.reclaim
    c13/  c19/  c24/  c8/  cgroup.stat             docker/                memory.stat
    c14/  c2/   c3/   c9/  cgroup.subtree_control  elogind/

I have found one similar issue on the interwebs from 6 years ago: https://stackoverflow.com/questions/45690117/ubuntu-server-every-ssh-connect-creates-non-deleted-cgroup, which notes: "It seems that the problem only occurs on servers having docker installed."

It appears to be a pretty fringe issue, and I do not have any other servers to try and reproduce this. I don't know if this is the right place for this issue; I can imagine this may go way deeper into OpenSSH/Docker itself than just the plugin. Any help is appreciated.
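To watch the leak grow without reading /proc/cgroups, a small helper like the one below counts the per-session c<N> directories. The cgroup root is parameterized here only so it can be pointed at a scratch directory; on the affected system it would default to /sys/fs/cgroup:

```shell
# Count leaked per-session cgroup directories (c1, c2, ...) under a
# cgroup v2 root. Directory naming matches the listing shown above.
count_session_cgroups() {
    local root="${1:-/sys/fs/cgroup}" n=0 d
    for d in "$root"/c[0-9]*; do
        [ -d "$d" ] && n=$((n + 1))      # skip the literal glob when no match
    done
    echo "$n"
}
```

Running this periodically (or from a monitoring check) would show whether the count keeps climbing toward the 65535 limit after SSH sessions close.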
SimonMisc Posted October 27, 2023

Hey there, I found this plugin while playing around with reverse-proxying SSH traffic to a specific subdomain, and thought it would be a nice supplement to existing security to have password attempts blocked on the server side. I disabled root login access, and last night, in a moment of poor judgement, I attempted to log in to the machine without changing this setting. Now my WireGuard IP is likely on some deny list, as the server is refusing connections. What's the proper procedure for undoing this? And did you ever flesh out a whitelist method for known-safe local IPs, like you mentioned considering a few pages back in this thread? I'd be very thankful for your assistance in this matter!
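The usual DenyHosts recovery for a self-ban is: stop the DenyHosts service (from the console or GUI terminal), remove the banned IP from hosts.deny and from the WORK_DIR tracking files, then restart the service. A hedged sketch follows; the file names come from the standard DenyHosts layout, but your WORK_DIR path and exact file set may differ, so treat both as assumptions:

```shell
# Sketch: purge one IP from hosts.deny and the DenyHosts work files.
# Stop DenyHosts before running this, restart it afterwards.
unban_ip() {   # usage: unban_ip <ip> <hosts_deny_file> <work_dir>
    local ip="$1" deny="$2" work="$3" f
    local pat="${ip//./\\.}"             # escape dots so sed matches literally
    sed -i "/$pat/d" "$deny"
    # standard DenyHosts tracking files; skip any that are absent
    for f in hosts hosts-restricted hosts-root hosts-valid users-hosts; do
        [ -f "$work/$f" ] && sed -i "/$pat/d" "$work/$f"
    done
    return 0
}
```

If the IP is only removed from hosts.deny and not from the work files, DenyHosts tends to re-ban it on the next sync, which is why the loop over WORK_DIR matters.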
libook Posted November 23, 2023

To share how I use a User Script to open up user SSH access. I wish it were possible in the future for the plugin to replace this script completely.

    #!/bin/bash
    # Permit user to access via ssh
    cat /etc/ssh/sshd_config | sed -e s/"AllowUsers root"/"AllowUsers root libook"/ > /etc/ssh/sshd_config.1 && mv /etc/ssh/sshd_config.1 /etc/ssh/sshd_config
    # Give user shell
    chsh -s /bin/zsh libook
    # Set user home
    usermod -d /mnt/user/home/libook libook
    # Make /etc/profile for multi-user
    cat /etc/zprofile | sed -e s/"export HOME=\/root"/"if [[ -z \$HOME ]]; then\n export HOME=\/root\nfi"/ > /etc/zprofile.1 && mv /etc/zprofile.1 /etc/zprofile
    # Add user to sudoers.d
    echo "libook ALL=(ALL:ALL) ALL" > /tmp/libook && chmod 440 /tmp/libook && mv /tmp/libook /etc/sudoers.d/libook
    # Reload SSHd
    /etc/rc.d/rc.sshd reload
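One fragile spot in a script like the one above is the AllowUsers edit: re-running it after the line already reads "AllowUsers root libook" does nothing useful, and adding a second user would need a different pattern each time. A hedged, idempotent variant (the sshd_config path is passed in so it can be exercised against a copy; the function name is hypothetical):

```shell
# Append a user to the "AllowUsers root" line only if not already present,
# so the script is safe to re-run on every boot.
add_allow_user() {   # usage: add_allow_user <user> <sshd_config_path>
    local user="$1" cfg="$2"
    # already allowed? then do nothing (idempotent)
    grep -q "^AllowUsers .*\b$user\b" "$cfg" && return 0
    sed -i "s/^AllowUsers root\$/AllowUsers root $user/" "$cfg"
}
```

After editing, a `sshd -t` check before reloading would catch a botched config rather than locking out future sessions.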
TobiasKWF Posted January 11

On 10/25/2023 at 2:05 PM, cayuga said:
[quoted post about per-session cgroups leaking until the 65535 limit is hit, trimmed]

Got the same problem on version 6.12.6. Only a reboot, or manually deleting with cgdelete, helps. The strange problem happens when using SSH: /sys/fs/cgroup gets a new directory (c5, c6, ... c65300) every time an SSH connection starts, and there is no auto-delete or cleanup after closing SSH.
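As a stopgap between reboots, leaked session cgroups that no longer hold any process can be listed and then removed (on cgroup v2, an empty cgroup directory can simply be rmdir'd, which is what cgdelete does under the hood). This is a workaround sketch, not a fix for the underlying sshd/elogind leak; the root path is parameterized only so the logic can be tested against a scratch directory:

```shell
# List per-session cgroups (c<N>) whose cgroup.procs is empty, i.e. that
# hold no live process and are therefore safe to remove.
find_stale_session_cgroups() {
    local root="${1:-/sys/fs/cgroup}" d
    for d in "$root"/c[0-9]*; do
        [ -d "$d" ] || continue          # glob may match nothing
        [ -s "$d/cgroup.procs" ] || echo "$d"
    done
}

# Actual removal (run as root on the live system):
#   find_stale_session_cgroups | xargs -r rmdir
```

Scheduled hourly, this keeps the count well away from the 65535 ceiling until the leak itself is fixed.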
andber Posted January 22

Same here on Unraid 6.12.6. (Screenshots of the cgroup directories on the drive and of the log were attached; the session cgroup persists after the SSH session closes.) After some weeks Unraid runs into the error "elogind-daemon Failed to create cgroup xyz No space left on device", and when that error happens it could be that I can no longer start (certain) Docker containers. But that's just a guess; unfortunately, I no longer have an Unraid installation in this state. I have restarted Unraid and I am now at c78.
andber Posted January 22

1 hour ago, andber said:
[previous post quoted, trimmed]

Ok, I have tried it again. I get the "no space left on device" warning starting from c65514:

    Jan 22 13:37:52 unraid2 sshd[30614]: Starting session: command for root from 172.17.0.11 port 36846 id 0
    Jan 22 13:37:52 unraid2 elogind-daemon[1504]: Failed to create cgroup c65514: No space left on device
    Jan 22 13:37:52 unraid2 sshd[30694]: pam_elogind(sshd:session): Failed to create session: No space left on device

And I can confirm the Docker effect: a container that was not running while the log was still error-free can no longer be started after the error appears; before that, it starts without problems. Finally: I am not a Linux professional, just an interested person.
tommek83 Posted January 31

Same issue here. Is there a solution to this problem?