Dynamix File Integrity plugin


bonienl


I don't see why that matters though. Of course different algorithms produce different hashes.  But the user doesn't have to change algorithms when they change hardware.  They can, but they don't have to. It won't be the fastest choice from that point on, but right now it isn't the fastest choice either.  So I see that as a wash, with the benefit of having the fastest choice on first use and the option to start over if it is important enough on new hardware.


As to point number 2, that depends on how many disks and how many cores are available. I have far more cores available than disks: a factor of 8 to 1. A choice of a multithreaded algorithm could drastically help in my situation. My system has 32 cores and 4 data disks.

 

Interesting point you make BRiT.

 

Let me reconsider single-core and multi-core support!
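
One way to use many cores without a multithreaded algorithm is to run one single-threaded hashing process per disk. A rough sketch (the "disks" below are stand-in temp directories; on unRAID they would be /mnt/disk1, /mnt/disk2, and so on):

```shell
# One hashing process per "disk", run in parallel. The disk
# directories here are stand-in temp dirs created for the demo;
# on unRAID they would be /mnt/disk1, /mnt/disk2, etc.
work=$(mktemp -d)
for d in disk1 disk2; do
    mkdir -p "$work/$d"
    echo "data-$d" > "$work/$d/file.txt"
done
for d in "$work"/disk*; do
    # one background process per disk, writing its own hash list
    ( find "$d" -type f -exec md5sum {} + > "$d.hashes" ) &
done
wait
cat "$work"/disk*.hashes
```

With 4 data disks this keeps 4 cores busy, which may be enough in practice since each hashing process is often limited by disk read speed anyway.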

 


Can someone clarify something for me regarding the way this plugin works.

 

I assume that from a 'client' PoV (i.e. my Mac), none of the extended attributes that contain the checksum are visible, so my backup software isn't going to have a fit and see all the data as changed?

 

Short version: The checksum info is only visible to the server, not to clients?

 

I always previously wanted the checksums to go to a file, but I suppose if I move data around, the attributes follow the files and thus the checksums would never be lost?

 

Thanks!


That's correct.  For a separate file containing the checksums you would either have to generate them on the client using something like corz for Windows, or use the checksum plugin (development on that ceased a long time ago, but it still works for me).
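
For anyone curious what that looks like at the filesystem level, here is a minimal sketch of storing and reading a checksum as an extended attribute. The attribute name user.hash is made up for illustration (the plugin's real attribute name may differ), and it assumes setfattr/getfattr are installed:

```shell
# Store a BLAKE2 checksum in an extended attribute and read it back.
# "user.hash" is an illustrative attribute name, not the plugin's.
f=$(mktemp)
echo "some data" > "$f"
hash=$(b2sum "$f" | awk '{print $1}')
setfattr -n user.hash -v "$hash" "$f" 2>/dev/null || echo "xattrs not supported here"
getfattr -n user.hash --only-values "$f" 2>/dev/null
rm -f "$f"
```

On an xattr-capable filesystem the attribute travels with the file when it is moved within the array, and network clients normally never see it.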

 


I'm getting what appear to be a lot of false positives when checking for modified checksums.  The files being identified as corrupted are always .nfo files within TV and Movie user shares that are accessed over the network from a Windows machine running Kodi.  If I look at the .nfo files, they appear intact and uncorrupted.  Kodi reads them fine, and they appear to be accurate, though it's entirely possible they've changed since the last time a verification occurred.

 

I do have "Automatically protect new and modified files" enabled.  They're just metadata I don't really care about and can easily regenerate, so I could add them to the ignore list, but I'm kind of curious what's going on.  Any thoughts?


A false positive usually happens because an application changes the content of a file without updating its file modification time. The plugin treats this as corruption.

 

In general files which change frequently should be excluded from checking.
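
The situation described above can be reproduced by hand: change a file's content, then put the old modification time back. A sketch using GNU coreutils:

```shell
# Change a file's content but restore its old mtime -- the plugin
# would then see a hash mismatch with no newer timestamp to explain
# it, and report the file as corrupted.
f=$(mktemp)
echo "original" > "$f"
before=$(stat -c %Y "$f")
echo "modified" > "$f"
touch -d "@$before" "$f"    # put the old mtime back
after=$(stat -c %Y "$f")
echo "mtime unchanged: $([ "$before" = "$after" ] && echo yes || echo no)"
```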

Hmm.  That seems like it shouldn't happen, but I'm not familiar enough with the file access behavior of the two apps that use these files to know for sure.  I don't think they're changing often (I seem to get 3-4 of these errors every 2 weeks at verification time), but I don't really care that much about them, so I just added them to the ignore list.

 

Thanks for the help.


A false positive usually happens because an application changes the content of a file without updating its file modification time. The plugin treats this as corruption.

 

In general files which change frequently should be excluded from checking.

 

I can confirm that both Cobian backup and my wife's apple accessed file edits trigger this issue.


Is it expected that the "Build up to date" row almost always has a few disks with O's?  It says that means there's an open operation, but it isn't really clear what could trigger that.  When I manually force updates on all disks, they all turn into green checkmarks, but by the next morning two or three are O's again.

 

The setting to automatically update the hashes when files are added/modified is enabled.  Not sure what I could be doing wrong here.


There are 2 pictures in my post... Is there a way to verify the plugin is working and doing its job? What do the O's and X's represent on the control page?

 

On the settings page you have enabled the "automatic protection" and created a "verification schedule", so yes it is all enabled and working.

 

Did you see the help text on the Tools page? It explains the O's and X's; or is the explanation not clear?

 

Remember that it is not required to use the file integrity tools; the utility works fine without them. But in case you want to keep hash files and do manual checking, these tools can be used as needed. With one exception:

 

To get the initial hash keys stored in the extended attributes of your existing files, you need to run the Build command for each disk.


I've read through a lot of pages at the beginning and end of this thread, and I can't find a relevant answer to my questions.

 

1) Is there any reason to use one hashing method over another? e.g. Blake2 vs MD5?

 

2) How can I benchmark my system with the different available methods to determine which is fastest?

 

 


So there's a lot here I still don't understand.  I have the plugin set up, I think, to ignore all .nfo files and files in the CrashPlan folder.  I have it configured as follows, and have the verification schedule set to run a few drives every week.

 

If I run the Build manually, I can get all the checkmarks green.  The next morning, the "Build up to date" status for some drives shows orange circles, as in this screenshot, even though no verification was run.  It's not clear how "Build up to date" could change from green to orange.  All the help says is that it's an "Open operation", with no indication of what that means.  Automatically protect new files is enabled.

 

Then when a verification runs, I always get some warnings about mismatches or corruption in the folders and filetypes I've specifically set as ignored.

 

Any idea what's going on here?  This doesn't make sense to me.

[Screenshot attachment: Screen_Shot_2016-10-04_at_6_47.50_PM.png]


I've read through a lot of pages at the beginning and end of this thread, and I can't find a relevant answer to my questions.

 

1) Is there any reason to use one hashing method over another? e.g. Blake2 vs MD5?

 

2) How can I benchmark my system with the different available methods to determine which is fastest?

 

Any of the methods will do the job just fine. If compatibility with other tools is required then MD5 is the safest choice.

 

A simple way to test is to copy a file to your /tmp folder, so it stays in RAM, and run:

time md5sum /tmp/testfile
time b2sum /tmp/testfile
time sha256sum /tmp/testfile
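
A slightly more self-contained version of the same benchmark, generating a throwaway test file and timing each algorithm in a loop (the 32 MiB size is arbitrary):

```shell
# Time each hash command over the same file. /tmp is assumed to be
# RAM-backed (tmpfs), so disk speed does not skew the comparison.
testfile=$(mktemp)
dd if=/dev/urandom of="$testfile" bs=1M count=32 2>/dev/null
for cmd in md5sum sha256sum b2sum; do
    start=$(date +%s%N)
    "$cmd" "$testfile" > /dev/null
    end=$(date +%s%N)
    echo "$cmd: $(( (end - start) / 1000000 )) ms"
done
rm -f "$testfile"
```

Run it a few times and take the best result, since the first pass may include cache warm-up.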

 


So there's a lot here I still don't understand.  I have the plugin set up, I think, to ignore all .nfo files and files in the CrashPlan folder.  I have it configured as follows, and have the verification schedule set to run a few drives every week.

 

If I run the Build manually, I can get all the checkmarks green.  The next morning, the "Build up to date" status for some drives shows orange circles, as in this screenshot, even though no verification was run.  It's not clear how "Build up to date" could change from green to orange.  All the help says is that it's an "Open operation", with no indication of what that means.  Automatically protect new files is enabled.

 

Then when a verification runs, I always get some warnings about mismatches or corruption in the folders and filetypes I've specifically set as ignored.

 

Any idea what's going on here?  This doesn't make sense to me.

 

The Open status indicates that files were found which do not have a hash value in their extended attributes. This usually happens with files which change frequently and in bulk. Therefore it is recommended to exclude files and folders which are constantly updated. The purpose of this tool is to find "silent" corruption: files which are not touched over a longer period and may get changed due to disk anomalies.

 

Can you post a screenshot of the settings page?

 


Excluded Files and Folders do not seem to be working for me.

 

I have two of my shares listed in the excluded folders section (using the nice checkboxes to select my shares). However, I keep getting warnings that files in those shares (but accessed via disk1/disk2 instead of the user share) have a hash mismatch.  The files are CrashPlan backups, so they change very frequently and there is not much point in me checksumming them.

 

How do I get it to actually honor my ignored folders?

 

EDIT: Added screenshot of settings

[Screenshot attachment: Settings.png]


This can happen when the exclusion was added at a later stage and files already have a hash value stored in the extended attributes.

 

Go to Tools -> File Integrity and select the disks on which the excluded files are present. Next use Clear to remove the obsolete attributes.
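
For reference, the Clear step can be imitated by hand with setfattr -x. This sketch uses temp files and a made-up attribute name (user.hash); list the plugin's actual attribute names first with getfattr -d <file> before removing anything real:

```shell
# Set a hash attribute on some files, then strip it from the whole
# tree -- roughly what Clear does. "user.hash" is illustrative only.
dir=$(mktemp -d)
touch "$dir/a.dat" "$dir/b.dat"
for f in "$dir"/*.dat; do
    setfattr -n user.hash -v deadbeef "$f" 2>/dev/null
done
# remove the attribute from every file under the directory
find "$dir" -type f -exec setfattr -x user.hash {} + 2>/dev/null
left=$(getfattr -R -d "$dir" 2>/dev/null | grep -c user.hash || true)
echo "attributes remaining: ${left:-0}"
```

On a real array you would point find at the excluded folder on the relevant disk instead of a temp directory.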

 


This can happen when the exclusion was added at a later stage and files already have a hash value stored in the extended attributes.

 

Go to Tools -> File Integrity and select the disks on which the excluded files are present. Next use Clear to remove the obsolete attributes.

Pretty sure this is also why my exclusions were being ignored and I kept getting warnings, as noted in the post above.  So the verify operation ignores the exclusions; they are obeyed only during the hash generation step.  If I may, it might be worth adding a note about that to the help.

 

I made the changes mentioned and will keep an eye on it this week.


I still get file integrity warnings for files I have excluded.

I had removed/deleted the files and reinstalled the plugin, but I can't get it right.

Also, very often the build for disk2 shows as not up to date, even though I have enabled

"Automatically protect new and modified files"

Lastly, I have excluded .tonidodb files, but they were still checked during the scheduled verification:

 

Event: unRAID file corruption
Subject: Notice [TOWER] - bunker verify command
Description: Found 3 files with BLAKE2 hash key corruption
Importance: alert

BLAKE2 hash key mismatch, /mnt/disk1/Photos/2003 - 2014 ?????/2008 - 2009 ?????/xxxxxxxxxxxxxxxxx/.tonidodb is corrupted
BLAKE2 hash key mismatch, /mnt/disk1/Photos/2003 - 2014 ?????/2012 - 2013 ?????/xxxxxxxxxxxxxxxxxx/.tonidodb is corrupted
BLAKE2 hash key mismatch, /mnt/disk1/Photos/2003 - 2014 ?????/2012 ???/xxxxxxxxxxxxxxxxxx/.tonidodb is corrupted
BLAKE2 hash key mismatch (updated), /mnt/disk1/TV Series/xxxxxxxxxxxxxxxxx/xxxxxxxxxxxx - S01E10 - xxxxxxHDTV-1080p.nfo was modified

 

Pics

https://www.dropbox.com/sh/9jb8b7yb79fkjsk/AACE1EA6A9ltaHSagcIVm1dPa?dl=0

 

 

Any ideas?

 

 

