[Plugin] CA Fix Common Problems


Recommended Posts

 

  • Check for files/directories owned by root on user share

 

Maybe I'm missing something here... What exactly is the issue with this?  I just created a new share owned by root, group root, and I can browse, add, delete, and modify files all day long within it over SMB
Link to comment

Hi Squid,

 

Just a quick note to say that if the array is stopped, the 2016.05.13 version of the plugin throws a few errors:

 

Warning: scandir(/mnt/user): failed to open dir: No such file or directory in /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php on line 702

Warning: scandir(): (errno 2): No such file or directory in /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php on line 702

Warning: array_diff(): Argument #1 is not an array in /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php on line 702

Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php on line 704

 

Link to comment

Hi Squid,

 

Just a quick note to say that if the array is stopped, the 2016.05.13 version of the plugin throws a few errors:

 

Warning: scandir(/mnt/user): failed to open dir: No such file or directory in /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php on line 702

Warning: scandir(): (errno 2): No such file or directory in /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php on line 702

Warning: array_diff(): Argument #1 is not an array in /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php on line 702

Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php on line 704

Actually just found that myself 5 minutes ago  ;)
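
For illustration, a minimal guard of the kind that would silence those warnings; this is only a sketch, not the plugin's actual code, and the function name is made up:

<?php
// Hypothetical sketch: bail out of the user-share scan when the array is
// stopped, so scandir() is never called on a missing /mnt/user.
function scanUserShares(string $base = '/mnt/user'): array {
    if (!is_dir($base)) {
        return [];                       // array stopped: nothing to scan
    }
    $entries = scandir($base);
    // array_diff() strips the "." and ".." entries scandir() returns.
    return $entries === false ? [] : array_diff($entries, ['.', '..']);
}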
Link to comment

FYI: I got the warning about having /user/ in a docker mapping instead of /cache/. Changed all 3 of mine (plexEmail, PlexPy & PMS) and they all restarted without issue and displayed all their respective stuffz... EXCEPT Plex Media Server. With that mapping for appdata, I was unable to add new movies; the process of getting the cover & info from content agents would not find any matches. I tried falling back a version, but no help. Resetting the appdata path to /user/ fixed it, and it matched films up on the first refresh. I suspect user permissions may be causing it to fail silently (did not look at the PMS logs), as I am using Needo's docker. Just an FYI in case someone else runs across this.

Link to comment

As I've grown older, I've come to realize that pleasing everyone is impossible, but pissing everyone off is a piece of cake

 

- Removed checks for control characters in filenames

- Added checks for dockers not running in the same network mode as the author intended

- Added checks for HPA *

- Added checks for illegal suffixes on cache floor settings

- Added checks for cache floor larger than the cache drive

- Added checks for cache free space less than the cache floor

- Added check for array started

- Added check for flash drive getting full

- Fixed false positive for implied array-only files sitting on the cache drive

- Fixed false positive if a docker template had no default ports

 

* If an HPA is found, it will generate an error when it's on the parity drive.  If it's on any other drive, it will only generate an "other comment", since IMO it's not that big a deal to worry about (and not even worth the time to fix).  Parity, on the other hand, is a big deal.
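
As a rough sketch of that severity split (the function and device names here are hypothetical, not the plugin's actual code):

<?php
// Hypothetical sketch of the severity logic described above: an HPA on the
// parity drive is an error; anywhere else it's only an "other comment".
function classifyHPA(string $device, string $parityDevice): string {
    return $device === $parityDevice
        ? 'error'           // HPA on parity shrinks it and can break rebuilds
        : 'other comment';  // on a data drive it's not worth worrying about
}

echo classifyHPA('/dev/sdb', '/dev/sdb') . "\n"; // error
echo classifyHPA('/dev/sdc', '/dev/sdb') . "\n"; // other comment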

Link to comment

FYI: I got the warning about having /user/ in a docker mapping instead of /cache/. Changed all 3 of mine (plexEmail, PlexPy & PMS) and they all restarted without issue and displayed all their respective stuffz... EXCEPT Plex Media Server. With that mapping for appdata, I was unable to add new movies; the process of getting the cover & info from content agents would not find any matches. I tried falling back a version, but no help. Resetting the appdata path to /user/ fixed it, and it matched films up on the first refresh. I suspect user permissions may be causing it to fail silently (did not look at the PMS logs), as I am using Needo's docker. Just an FYI in case someone else runs across this.

Here's the problem with /user

 

Paths in /user do NOT support the symlinks that many (if not a majority of) docker apps create.  There are many documented cases of apps not working at all if they are set to /user, but set them to /cache (or diskX) and they will work perfectly.  In this case, I guess it's because the appdata already existed in /user with invalid symlinks, and once the symlinks started working the way they were supposed to, it threw Plex for a loop.

 

I've got this down as a warning because I anticipated a situation like yours (and I've tried to set up the warnings so that you can safely choose not to receive notifications on them).  But going forward, set up any new installs as /cache, as this is the proper way to do it and it will minimize any/all issues with containers (and the strange little bugs that sometimes pop up with containers, which no one has a clue about, could possibly be traced back to the symlink issue).
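
For what it's worth, a check along these lines might look like the sketch below; the template directory and XML layout are assumptions based on typical dockerMan templates, not the plugin's actual parsing code:

<?php
// Hypothetical sketch: flag volume mappings whose host path runs through
// the /mnt/user FUSE mount (where symlinks misbehave) rather than
// /mnt/cache or /mnt/diskX.
function findUserMappings(string $templateDir): array {
    $flagged = [];
    foreach (glob("$templateDir/*.xml") ?: [] as $file) {
        $xml = @simplexml_load_file($file);
        if ($xml === false) continue;
        // dockerMan templates typically keep host paths in <Config Type="Path">.
        foreach ($xml->xpath('//Config[@Type="Path"]') ?: [] as $cfg) {
            $host = trim((string) $cfg);
            if (strpos($host, '/mnt/user/') === 0) {
                $flagged[] = [basename($file), $host];
            }
        }
    }
    return $flagged;
}

foreach (findUserMappings('/boot/config/plugins/dockerMan/templates-user') as [$tpl, $host]) {
    echo "warning: $tpl maps $host through /mnt/user\n";
}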

 

Link to comment

 

  • Check for files/directories owned by root on user share

 

Maybe I'm missing something here... What exactly is the issue with this?  I just created a new share owned by root, group root, and I can browse, add, delete, and modify files all day long within it over SMB

I suspect that you have the permissions set to allow 'world' access, so that anyone can access the files/folders?  People frequently have problems if they copy/move existing files that do not have world access using 'mc' while running as root, and then find they have no access via their shares.  Note also that Samba does not allow a private/secure share to be accessed over the network using the 'root' user.
Link to comment

I suspect that you have the permissions set to allow 'world' access, so that anyone can access the files/folders?

Created the folder (root:root).  Implied permissions (and as reported by Shares) are public.  No problem accessing files within it, modifying the files within, deleting, etc. over SMB.

Changed the share to Private (and left root:root).  Could do the exact same things.

 

Even created a file owned by root:root and could still do everything to it over SMB.

 

I do realize that some docker apps (CP, Sonarr) by default move files to the array using a different user (can't remember which), but in my tests, root:root is perfectly acceptable  (or maybe I'm just special  8) )

 

Link to comment

I suspect that you have the permissions set to allow 'world' access, so that anyone can access the files/folders?

Created the folder (root:root).  Implied permissions (and as reported by Shares) are public.  No problem accessing files within it, modifying the files within, deleting, etc. over SMB.

Changed the share to Private (and left root:root).  Could do the exact same things.

 

Even created a file owned by root:root and could still do everything to it over SMB.

 

I do realize that some docker apps (CP, Sonarr) by default move files to the array using a different user (can't remember which), but in my tests, root:root is perfectly acceptable  (or maybe I'm just special  8) )

Yes, but have you looked at the Linux-level permissions?  If you switch off 'world' access (which, if set, means anyone can access the file) at the Linux level, then I think you will find access fails at the share level.
Link to comment

I suspect that you have the permissions set to allow 'world' access, so that anyone can access the files/folders?

Created the folder (root:root).  Implied permissions (and as reported by Shares) are public.  No problem accessing files within it, modifying the files within, deleting, etc. over SMB.

Changed the share to Private (and left root:root).  Could do the exact same things.

 

Even created a file owned by root:root and could still do everything to it over SMB.

 

I do realize that some docker apps (CP, Sonarr) by default move files to the array using a different user (can't remember which), but in my tests, root:root is perfectly acceptable  (or maybe I'm just special  8) )

Yes, but have you looked at the Linux-level permissions?  If you switch off 'world' access (which, if set, means anyone can access the file) at the Linux level, then I think you will find access fails at the share level.

OK, that's a different story.  Now we're down to permissions and not ownership/groups.  (Sorry for being daft -> not a Linux guy, so when you originally said world permissions I thought you were talking about share permissions, not a Linux permission of 770.)

 

And with permissions set to 770 you are indeed correct - the share becomes inaccessible.  Now that I know what the problem is, I can incorporate checks.
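
A minimal sketch of what such a check could look like, using standard PHP directory iteration; the share path is a placeholder and this is not the plugin's actual implementation:

<?php
// Hypothetical sketch: find files/directories on a share that are owned by
// root AND lack the world-read bit; those are the ones that become
// inaccessible over SMB for ordinary share users.
function findRootLockedPaths(string $share): array {
    $hits = [];
    $iter = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($share, FilesystemIterator::SKIP_DOTS),
        RecursiveIteratorIterator::SELF_FIRST
    );
    foreach ($iter as $info) {
        $worldRead = fileperms($info->getPathname()) & 0004;  // "other" read bit
        if ($info->getOwner() === 0 && !$worldRead) {         // uid 0 == root
            $hits[] = $info->getPathname();
        }
    }
    return $hits;
}

// '/mnt/user/MyShare' is a placeholder share name.
foreach (findRootLockedPaths('/mnt/user/MyShare') as $path) {
    echo "warning: $path is root-owned with no world access\n";
}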

 

Thx

 

Link to comment

Stuck at a kiddie birthday party, so not much is going to happen today, but I had a thought and I'm looking for comments.

 

Random reboot detection / unclean shutdown detection.

 

Optional, for random reset issues: automatic syslog capture (to the docker image, to save wear and tear).  Unsure if I want to do a tail or a copy every minute, due to possible corruption issues.
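
A rough sketch of the copy-every-minute variant; the destination path is a placeholder, and whether it should live in the docker image or on the flash is exactly the open question above:

<?php
// Hypothetical sketch: snapshot the tail of the syslog somewhere that
// survives a hard reset.  Path and line count are placeholders.
function snapshotSyslog(string $dest, int $lines = 500): void {
    $log = @file('/var/log/syslog', FILE_IGNORE_NEW_LINES);
    if ($log === false) {
        return;                                   // syslog unreadable, skip
    }
    $tail = array_slice($log, -$lines);           // keep only the newest entries
    file_put_contents($dest, implode("\n", $tail) . "\n");
}

snapshotSyslog('/boot/logs/syslog-capture.txt'); // e.g. from a per-minute cron job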

 

Sent from my LG-D852 using Tapatalk

 

 

Link to comment

FYI: I got the warning about having /user/ in a docker mapping instead of /cache/. Changed all 3 of mine (plexEmail, PlexPy & PMS) and they all restarted without issue and displayed all their respective stuffz... EXCEPT Plex Media Server. With that mapping for appdata, I was unable to add new movies; the process of getting the cover & info from content agents would not find any matches. I tried falling back a version, but no help. Resetting the appdata path to /user/ fixed it, and it matched films up on the first refresh. I suspect user permissions may be causing it to fail silently (did not look at the PMS logs), as I am using Needo's docker. Just an FYI in case someone else runs across this.

Here's the problem with /user

 

Paths in /user do NOT support the symlinks that many (if not a majority of) docker apps create.  There are many documented cases of apps not working at all if they are set to /user, but set them to /cache (or diskX) and they will work perfectly.  In this case, I guess it's because the appdata already existed in /user with invalid symlinks, and once the symlinks started working the way they were supposed to, it threw Plex for a loop.

 

I've got this down as a warning because I anticipated a situation like yours (and I've tried to set up the warnings so that you can safely choose not to receive notifications on them).  But going forward, set up any new installs as /cache, as this is the proper way to do it and it will minimize any/all issues with containers (and the strange little bugs that sometimes pop up with containers, which no one has a clue about, could possibly be traced back to the symlink issue).

 

After careful consideration of your advice (and finding that "watched" status was not sticking on shows, and the Channels section claimed to no longer exist), I zapped the Needo docker and installed the Linuxserver.io version using /cache/appdata in the path. Did not experience any issues with stalled permission updates as I did in a previous attempt to switch. Might be a placebo effect, but methinks it runs smoother :P

Link to comment

Updated to 5.13a and ran a new scan. It detects an HPA on all disks (1, 2, 9-11) connected to my Areca ARC-1231ML, including the 8TB RAID0 volume being used for parity. Pretty sure this is a false positive; as I understand it, unRAID is not able to get hardware specifics from disks connected to the Areca.

 

 

Link to comment

As I've grown older, I've come to realize that pleasing everyone is impossible, but pissing everyone off is a piece of cake

 

I've also found that at times you just have to say screw it.  It's my utility and this is the way I'm going to do it; if you want it done differently, go write your own.  Politely, of course ;D

Link to comment

As I've grown older, I've come to realize that pleasing everyone is impossible, but pissing everyone off is a piece of cake

 

I've also found that at times you just have to say screw it.  It's my utility and this is the way I'm going to do it; if you want it done differently, go write your own.  Politely, of course ;D

It was a general statement.  Nothing at all to do with this plugin...    I find it so boring to post update dates

 

Sent from my LG-D852 using Tapatalk

 

Link to comment

Updated to 5.13a and ran a new scan. It detects an HPA on all disks (1, 2, 9-11) connected to my Areca ARC-1231ML, including the 8TB RAID0 volume being used for parity. Pretty sure this is a false positive; as I understand it, unRAID is not able to get hardware specifics from disks connected to the Areca.

What does
hdparm -N /dev/sdX

return, where sdX is one of those drives, so that I know how to exclude them?

Link to comment

Updated to 5.13a and ran a new scan. It detects an HPA on all disks (1, 2, 9-11) connected to my Areca ARC-1231ML, including the 8TB RAID0 volume being used for parity. Pretty sure this is a false positive; as I understand it, unRAID is not able to get hardware specifics from disks connected to the Areca.

What does
hdparm -N /dev/sdX

return, where sdX is one of those drives, so that I know how to exclude them?

 

This is a retail 2TB Seagate drive (ST2000DM001 - 2 TB (sdf))

 

Linux 4.1.18-unRAID.
root@Tower:~# hdparm -N /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
max sectors   = 0/1, HPA is enabled
root@Tower:~#

Link to comment

Updated to 5.13a and ran a new scan. It detects an HPA on all disks (1, 2, 9-11) connected to my Areca ARC-1231ML, including the 8TB RAID0 volume being used for parity. Pretty sure this is a false positive; as I understand it, unRAID is not able to get hardware specifics from disks connected to the Areca.

What does
hdparm -N /dev/sdX

return, where sdX is one of those drives, so that I know how to exclude them?

 

This is a retail 2TB Seagate drive (ST2000DM001 - 2 TB (sdf))

 

Linux 4.1.18-unRAID.
root@Tower:~# hdparm -N /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
max sectors   = 0/1, HPA is enabled
root@Tower:~#

fixed next rev
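
Presumably the fix amounts to treating hdparm output like the above as "no answer"; a hedged sketch of that idea (not the plugin's actual code):

<?php
// Hypothetical sketch: only report an HPA when hdparm -N gives a sane
// answer.  Drives behind controllers like the Areca return SG_IO sense
// errors and a bogus "max sectors = 0/1" line, as in the output above.
function hasRealHPA(string $device): bool {
    $out = (string) shell_exec('hdparm -N ' . escapeshellarg($device) . ' 2>&1');
    if (strpos($out, 'bad/missing sense data') !== false) {
        return false;                              // controller hides the drive
    }
    if (!preg_match('/max sectors\s*=\s*(\d+)\/(\d+)/', $out, $m)) {
        return false;                              // no readable sector counts
    }
    [, $current, $native] = $m;
    if ((int) $native <= 1) {
        return false;                              // "0/1" style bogus reading
    }
    // Fewer visible sectors than native sectors means a real HPA.
    return (int) $current < (int) $native;
}

var_dump(hasRealHPA('/dev/sdf')); // false for the Areca output quoted above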
Link to comment

I'll stop wearing black when they make a darker colour

 

- Fixed false positive HPA warnings when using Areca controllers (hopefully)

- Added the ability to ignore errors

 

Every error / warning now has an Ignore button next to it.  If you're 100% sure that you don't want to be bugged about it again, then just ignore it.  This does not ignore all errors of the class, but rather that specific error.

 

All the ignore really does is prevent any notifications from being sent because of that error.  Ignored issues will still display on the GUI screen under the ignored errors section (and be logged to the syslog if found).  To re-enable notifications for one, you can re-add the specific error, or re-add all of them.
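
Mechanically, a per-error ignore list could be as simple as the sketch below; the JSON path and key scheme are assumptions, not the plugin's actual storage format:

<?php
// Hypothetical sketch: key each ignored issue by a hash of its exact text,
// so other errors of the same class still notify.  Path is a placeholder.
const IGNORE_FILE = '/boot/config/plugins/fix.common.problems/ignored.json';

function loadIgnores(): array {
    return is_file(IGNORE_FILE)
        ? (json_decode(file_get_contents(IGNORE_FILE), true) ?: [])
        : [];
}

function ignoreError(string $message): void {
    $ignores = loadIgnores();
    $ignores[md5($message)] = $message;            // remember the exact issue
    file_put_contents(IGNORE_FILE, json_encode($ignores, JSON_PRETTY_PRINT));
}

function shouldNotify(string $message): bool {
    // Ignored issues are still displayed and logged, just never notified on.
    return !array_key_exists(md5($message), loadIgnores());
}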

 

 

 

Now, a question for the uber-linux geeks out there:

 

Is it possible to redirect (or tee) stderr for the locally attached console from any random bash shell to a file?  My online searches aren't yielding any help for it, and all of my attempts thus far have had no effect.

 

If that's impossible, how can I redirect (or tee) all stderr output appearing on the locally attached console, from the console itself?

 

The reason I want to do this is that I want to add an option for random reboot / crash detection and offer up a tail of the syslog to the flash.  However, in many of the cases where this has been deemed necessary, the syslog yields no useful information about the crash, and I'm hoping that the local display does.  But that information is lost on a reboot, and who among us sits in front of the monitor with a camera waiting for a crash?

Link to comment

Issues that are set to be ignored still put entries into the log.  If they are ignored, there should not be any log entries.

I disagree there, simply because they are still issues that the user has chosen to ignore for whatever reason.  And if they are not put into the syslog, then any assistance the user is seeking via the forums will not have all of the relevant information.

 

e.g.: a user is having issues accessing shares, but has chosen to ignore the fact that myShare and MyShare coexist at the same time.

 

I just don't want people completely turning off notifications simply because they have a valid use case for (or completely disagree with) those warnings.

 

If you still think that I'm wrong here, I'll make it optional to log the ignored issues.

Link to comment

Just a small little thing: when you remove a warning, it should say "ReAdd warning" instead of "ReAdd error", and the header should be "Ignored Errors/Warnings" instead of "Ignored Errors" :)

You're probably right there...  It was just a ton easier to handle it that way at the time  8)
Link to comment
