SSH and Denyhosts updated for v6.1



On 6/18/2023 at 7:01 AM, Alex R. Berg said:

There's a bug in the detection of whether WORK_DIR is on persistent storage: it always says "persistent" unless the filesystem is 'ramfs' (the first match in the array).

This code fixes it: it adds 'tmpfs' to the array and no longer bails out on the first mismatch (in the .plg file).
 

denyhosts_datacheck()
{
  # Filesystem types that indicate non-persistent (in-RAM) storage.
  # 'tempfs' is kept from the original list; 'tmpfs' is the new entry.
  array=( ramfs proc tempfs sysfs tmpfs )
  fs=$( stat -f -c '%T' "$WORK_DIR" )
  if [ "$fs" = "msdos" ]; then
    echo "<p style=\"color:red;\"><b>WARNING:</b> Your WORK_DIR is located on your flash drive. This can decrease the life span of your flash device!</p>"
  else
    found=0
    for i in "${array[@]}"
    do
      if [ "$i" = "$fs" ]; then
        echo "<p style=\"color:red;\"><b>WARNING:</b> Your WORK_DIR is not persistent and WILL NOT survive a reboot. The WORK_DIR maintains a running history of past DenyHosts entries and ideally should be maintained across reboots. Please locate your WORK_DIR on persistent storage, e.g. a cache/array disk.</p>"
        found=1
        break
      fi
    done
    if (( ! found )); then
      echo "<p style=\"color:green;\">WORK_DIR located on persistent storage. Your data will persist after a reboot :-)</p>"
    fi
  fi
}
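
For anyone who wants to sanity-check the detection by hand, the probe the function relies on can be run directly; a quick example (the mount points are just examples, adjust to your system):

for d in /boot /mnt/cache /var/log; do
  # stat -f reports the filesystem type name; vfat shows up as "msdos"
  printf '%-12s %s\n' "$d" "$(stat -f -c '%T' "$d")"
done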



I'm not really sure whether it's a good idea to put it on /boot, due to the write wear on the USB flash drive, and I would also prefer it not to depend on /mnt being available. So I suspect the best option would be copying to/from /boot on start/stop, or on mount, or something like that. What do others do to persist the data, and what is the data? I would be fine with moving the deny lists to /boot, as I expect those are not written frequently, unless of course I'm being spammed from an unlimited number of IPv6 addresses... (if that can happen...)

Thanks. I've updated for tmpfs and "any match".

 

I put my WORK_DIR on the same cache array I use for the docker disk image and docker application "config". That is the "usual" place for this sort of persistent data. I forget what array triggers (stop/start/...) exist, but there might be a practical way to use an in-memory filesystem that persists the data to /boot "at the last moment".
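
Sketching the "last moment" idea in case someone wants to experiment: keep the WORK_DIR in RAM, restore it from flash when the array comes up, and save it back when the array stops. Untested, and every path below is an assumption:

#!/bin/bash
# Sketch: keep DenyHosts data in RAM, restore it from flash at array start
# and save it back at array stop. All paths here are assumptions.
WORK_DIR=/var/lib/denyhosts                              # hypothetical in-RAM location
FLASH_BACKUP=/boot/config/plugins/denyhosts/workdir.tgz  # hypothetical backup file

case "$1" in
  restore)   # call this from an array-start hook
    mkdir -p "$WORK_DIR"
    [ -f "$FLASH_BACKUP" ] && tar -xzf "$FLASH_BACKUP" -C "$WORK_DIR"
    ;;
  save)      # call this from an array-stop hook
    tar -czf "$FLASH_BACKUP" -C "$WORK_DIR" .
    ;;
esac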

  • 2 weeks later...

@Johann Do you need unRaid to have a default SFTP path on an inbound connection, or a particular unRaid user to have a default SFTP path when connecting from unRaid out to another server?

High-level thinking here while I await your response...

Changing unRaid's inbound default SFTP path could have unintended consequences for other apps, and every dial and button we add to the plugin increases the risk that someone will mess things up. If the inbound direction is what you intend, I'd be more inclined to figure out how to change the user's home directory, e.g. have a manually created "passwd" file in the /boot/config/plugins/ssh/<user> folder and use that to change the home directory.
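
For reference, such an override would presumably just be a standard passwd(5) line with the home field changed; a made-up example (name, UID, and path are invented):

# /boot/config/plugins/ssh/johann/passwd (hypothetical file)
# format: name:passwd:UID:GID:gecos:home:shell
johann:x:1005:100:Johann:/mnt/user/sftp/johann:/bin/bash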

Outbound is much easier, though. I could add support for an ".ssh/config" file to be copied from /boot to the /home/<user>/.ssh folder.
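
If that lands, I imagine the copy step would look something like this (a rough sketch; the user name and flash path are assumptions, not anything the plugin does today):

# Sketch of the outbound idea: copy a per-user ssh client config from flash
# into the user's live home. All paths here are assumptions, not plugin API.
U=johann                                   # example user
SRC="/boot/config/plugins/ssh/$U/config"   # assumed flash location
DST="/home/$U/.ssh/config"
if [ -f "$SRC" ]; then
  install -d -m 700 -o "$U" "/home/$U/.ssh"
  install -m 600 -o "$U" "$SRC" "$DST"     # ssh requires tight permissions
fi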

 

doc..

  • 2 months later...

Hi there,

 

I have recently updated my Unraid server (from 6.11.5) to 6.12.4, and I am now running into a pretty unusual issue. Every time a new SSH session is created, a new cgroup is created. Unfortunately this new cgroup does not get cleaned up when the session is closed.

 

I use SSH to run some Icinga monitoring checks on the host. This adds up pretty fast, reaching the cgroup limit of 65535 within a few weeks. When the limit is reached, I cannot start new Docker containers:

 

docker: error response from daemon: failed to create shim task: oci runtime create failed: runc create failed: unable to start container process: unable to apply cgroup configuration: mkdir /sys/fs/cgroup/docker/XXX: no space left on device: unknown.

 

Restarting the SSH daemon does not appear to help; I have to restart the server to clear the defunct cgroups.

 

The number of cgroups can be viewed using:

 

cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	0	52	1
cpu	0	52	1
cpuacct	0	52	1
blkio	0	52	1
memory	0	52	1
devices	0	52	1
freezer	0	52	1
net_cls	0	52	1
perf_event	0	52	1
net_prio	0	52	1
hugetlb	0	52	1
pids	0	52	1

 

All these cgroups appear as /sys/fs/cgroup/cXX:

 

/sys/fs/cgroup# ls
c1/   c15/  c20/  c4/  cgroup.controllers      cgroup.threads         io.stat
c10/  c16/  c21/  c5/  cgroup.max.depth        cpu.stat               machine/
c11/  c17/  c22/  c6/  cgroup.max.descendants  cpuset.cpus.effective  memory.numa_stat
c12/  c18/  c23/  c7/  cgroup.procs            cpuset.mems.effective  memory.reclaim
c13/  c19/  c24/  c8/  cgroup.stat             docker/                memory.stat
c14/  c2/   c3/   c9/  cgroup.subtree_control  elogind/
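
Since 6.12 uses the unified (v2) cgroup hierarchy, the totals can also be read from the root cgroup (the glob matches the session groups above):

# total live and dying cgroups under the v2 root
grep -E '^nr_(dying_)?descendants' /sys/fs/cgroup/cgroup.stat

# count just the leaked per-session groups
ls -d /sys/fs/cgroup/c[0-9]* 2>/dev/null | wc -l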

 

I have found one similar issue on the interwebs from 6 years ago:

https://stackoverflow.com/questions/45690117/ubuntu-server-every-ssh-connect-creates-non-deleted-cgroup

 

Quote

It seems that the problem only occurs on servers having docker installed.

 

It appears to be a pretty fringe issue. I do not have any other servers to try to reproduce this on. I don't know if this is the right place for this issue; I can imagine it may go deeper into OpenSSH/Docker itself than just the plugin.

 

Any help is appreciated.

Hey there, I found this plugin while playing around with reverse-proxying SSH traffic to a specific subdomain, and thought it would be a nice supplement to existing security to have password attempts blocked on the server side. I disabled root login access, and last night, in a moment of poor judgement, I attempted to log in on the machine without remembering that setting. Now my WireGuard IP is likely on some deny list, as the server is refusing connections. What's the proper procedure for undoing this? And did you ever flesh out a whitelist method for known-safe local IPs, as you mentioned considering a few pages back in this thread?

 

I'd be very thankful for your assistance in this matter!
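
From what I can find, the stock DenyHosts recipe for un-banning an IP is roughly the following; is this still right for the plugin? (The WORK_DIR path and init-script location below are my guesses.)

#!/bin/bash
# Un-ban an IP per the upstream DenyHosts FAQ: stop the daemon, strip the
# IP from hosts.deny and the WORK_DIR history files, then restart.
IP=10.253.0.2                            # example address
WORK_DIR=/mnt/cache/appdata/denyhosts    # guess; use your actual WORK_DIR

/etc/rc.d/rc.denyhosts stop              # guess at the init script path
# note: dots in $IP act as regex wildcards here; fine for a one-off cleanup
sed -i "/$IP/d" /etc/hosts.deny
for f in hosts hosts-restricted hosts-root hosts-valid users-hosts; do
  [ -f "$WORK_DIR/$f" ] && sed -i "/$IP/d" "$WORK_DIR/$f"
done
/etc/rc.d/rc.denyhosts start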

  • 4 weeks later...

To share how I use a User Script to open up SSH access for an additional user. I wish it were possible in the future for the plugin to replace this script completely.

 

#!/bin/bash

# Permit user to access via ssh (anchored so a re-run does not append twice)
sed -e 's/^AllowUsers root$/AllowUsers root libook/' /etc/ssh/sshd_config > /etc/ssh/sshd_config.1 && mv /etc/ssh/sshd_config.1 /etc/ssh/sshd_config

# Give the user a shell
chsh -s /bin/zsh libook

# Set the user's home directory
usermod -d /mnt/user/home/libook libook

# Make /etc/zprofile multi-user: only force HOME=/root when HOME is unset
sed -e 's|export HOME=/root|if [[ -z $HOME ]]; then\n  export HOME=/root\nfi|' /etc/zprofile > /etc/zprofile.1 && mv /etc/zprofile.1 /etc/zprofile

# Add the user to sudoers.d (write elsewhere first so a partial file is never picked up)
echo "libook ALL=(ALL:ALL) ALL" > /tmp/libook && chmod 440 /tmp/libook && mv /tmp/libook /etc/sudoers.d/libook

# Reload sshd so the AllowUsers change takes effect
/etc/rc.d/rc.sshd reload
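
After it runs, a quick way to check that the edits landed:

grep '^AllowUsers' /etc/ssh/sshd_config   # should now list root and libook
getent passwd libook                      # home and shell should reflect the changes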

 

 

  • 1 month later...
On 10/25/2023 at 2:05 PM, cayuga said:

Every time a new SSH session is created, a new cgroup is created. Unfortunately this new cgroup does not get cleaned up when the session is closed. [...] Any help is appreciated.


Got the same problem on version 6.12.6.

Only a reboot or manually deleting the groups with cgdelete helps... strange problem that happens when using SSH.

/sys/fs/cgroup gets a new directory (c5, c6, ... c65300) every time an SSH connection starts. No automatic deletion or cleanup after the SSH session closes.
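
A sweep like this should also work instead of cgdelete, since on cgroup v2 an empty group can simply be rmdir'ed (a sketch, assuming the leaked c<N> groups hold no processes):

#!/bin/bash
# Remove leaked, empty SSH session cgroups. rmdir on a populated group just
# fails with EBUSY, so the emptiness check is belt and braces.
for d in /sys/fs/cgroup/c[0-9]*; do
  [ -d "$d" ] || continue
  if [ -z "$(cat "$d/cgroup.procs" 2>/dev/null)" ]; then
    rmdir "$d" 2>/dev/null
  fi
done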

  • 2 weeks later...
On 1/11/2024 at 10:10 AM, TobiasKWF said:

 

Got the same problem on version 6.12.6. [...] No automatic deletion or cleanup after the SSH session closes.

Same here, Unraid 6.12.6.
On the drive:

[screenshot: /sys/fs/cgroup directory listing]

In the log:

[screenshot: syslog excerpt]

but the session persists...
After some weeks Unraid runs into the error "elogind-daemon Failed to create cgroup xyz: No space left on device", and when that happens it may be that I can no longer start (certain) Docker containers. But that's just a guess; unfortunately, I no longer have an Unraid installation in this state.
I have restarted Unraid and I am now at c78 :)

1 hour ago, andber said:

Same here, Unraid 6.12.6. [...] I have restarted Unraid and I am now at c78 :)



OK, I have tried it again. I get the "No space left on device" warning starting at c65514:

Jan 22 13:37:52 unraid2 sshd[30614]: Starting session: command for root from 172.17.0.11 port 36846 id 0
Jan 22 13:37:52 unraid2 elogind-daemon[1504]: Failed to create cgroup c65514: No space left on device
Jan 22 13:37:52 unraid2 sshd[30694]: pam_elogind(sshd:session): Failed to create session: No space left on device



And I can confirm this with the Docker containers: a container that was not running before the error appeared in the log can no longer be started once the error appears. Before that, it starts without problems.


Finally: I am not a Linux professional, just an interested person...
