themaxxz

Members
  • Posts

    42

  • Gender
    Male

themaxxz's Achievements

Rookie (2/14)

Reputation: 0

  1. I suspect that adding the 'missingok' option to /etc/logrotate.d/cache_dirs will prevent the error. The permissions of this file could also be changed to 644.
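     For reference, a minimal sketch of what the entry could look like with 'missingok' added. The log path and the other rotation options below are assumptions for illustration, not the plugin's actual file:

     ```
     # /etc/logrotate.d/cache_dirs -- hypothetical sketch, not the file shipped by the plugin
     /var/log/cache_dirs.log {
         missingok      # skip this entry silently if the log file does not exist
         notifempty     # do not rotate an empty log
         weekly
         rotate 4
         compress
     }
     ```

     And to address the permissions warning: chmod 644 /etc/logrotate.d/cache_dirs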
  2. Just some follow-up. The full SMART test completed OK and did not report anything, but I went ahead with the replacement anyway. I just put in a new Seagate Exos 5E8, which is now rebuilding.
  3. Hi, Could you please add a button 'SMART xerror log' to the 'Self-Test' page for hard disks, to show the xerror log as well? The command seems to be 'smartctl -l xerror <device>'. Reasoning: the regular 'error' log is often empty and may give a false sense that the hard drive is still good, while the information displayed in the xerror log can indicate that the hard drive is starting to fail. See also https://www.smartmontools.org/ticket/34 E.g. the xerror log can show 'Error: UNC at LBA' entries while the error log shows nothing. See https://techoverflow.net/2016/07/25/how-to-interpret-smartctl-messages-like-error-unc-at-lba/
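     In the meantime, both logs can be read from the console; the device name below is just an example:

     ```
     # Classic ATA error log -- often empty even on a drive that is starting to fail
     smartctl -l error /dev/sdb

     # Extended comprehensive error log -- may show 'Error: UNC at LBA ...' entries
     smartctl -l xerror /dev/sdb
     ```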
  4. Ok, it seems that this log information is printed by the 'smartctl -l xerror' command. Perhaps 'SMART xerror log' could be added to the Unraid 'Self-test' UI menu for the hard drives, as the xerror log seems to contain valuable information not always shown in the 'error' log.
  5. Hi, Disk8 reported 448 errors, but the disk did not drop from the array. As I just read another post saying that some SMART values for WD disks should always be zero, I guess the disk should be replaced. I'm currently running a long SMART test; the last long test was run 3 months ago. How often should a long test be run? Only 2 days ago I ran a parity check, which resulted in 0 errors. I have attached the SMART report; could somebody confirm the diagnosis of a failing disk? unraid-smart-20180917-1855.zip
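     For completeness, the commands I'm running from the console (the device name is an example, and the attribute list is my understanding of the usual "should stay at zero" values, not an official WD statement):

     ```
     smartctl -t long /dev/sdh      # start an extended (long) self-test; it runs in the background
     smartctl -l selftest /dev/sdh  # show progress and the results of past self-tests
     smartctl -A /dev/sdh           # attributes; 5, 196, 197 and 198 should ideally stay at 0
     ```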
  6. Hi, I'd like to report a 'bug' of some sort. After skimming this thread, I thought this plugin could be useful, so I installed it today on my Unraid server (6.5.3). I decided to start slow and perform a disk-by-disk build. I first did a build on disk13, which only had 2 files. After that completed, I started a build on disk3 with ~500 files, which is currently still ongoing (1 bunker process running, related to disk3). I suddenly noticed the UI displayed a green check for all but one disk (disk9) under 'Build up-to-date'. I tracked down that this status is populated from the file /boot/config/plugins/dynamix.file.integrity/disks.ini. After manually correcting this file (removing the disks that were not built), the status in the UI reflects the correct situation. While trying to understand how this could happen, I was able to reproduce the issue by clicking on the 'hash files' link on the Tools/Integrity page. After clicking this link, all disks except disk9 were re-added to the disks.ini file. What should the correct format of this disks.ini file be?
  7. I just checked my server (6.5.3) and I also noticed the state was stopped. I started it again using the same procedure. (cache_dirs version: 2.2.0j)
  8. Where do you configure these status emails? Is it the "Array status notification" option? From the help: "Start a periodic array health check (preventive maintenance) and notify the user the result of this check." Can anybody provide some more details on what this means? Will this perform a parity check? Thanks.
  9. Hi, The link to the 'bug report' https://forums.unraid.net/forum/66-bug-reports/ appears to be broken, resulting in a 404.
  10. I recently did this with an Ubuntu live CD using the setblocksize program. As I'm not sure about the rules for posting external links, google for unsupported-sector-size-520.html
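      For anyone finding this later: an alternative to setblocksize is sg_format from the sg3_utils package. This is a different tool than the one I used, so treat it as a suggestion rather than the exact procedure:

      ```
      # Reformat a SAS/SCSI drive from 520-byte to 512-byte sectors.
      # WARNING: this wipes the entire drive and can take many hours.
      sg_format --format --size=512 /dev/sgX
      ```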
  11. No, the UPS never ran out of power. My trigger condition is 5 minutes on battery. The rest of the network stayed up and was still up when power returned after 30 minutes or so. The server just completed a parity check (which I initiated) and everything checks out. I just didn't know about the shutdown time-out settings that wait for a graceful shutdown.
  12. Ok, so it was an unclean shutdown in this case, most likely due to having too short a timeout set. Thanks.
  13. Thanks, indeed that is where I found the diagnostics file containing the syslog.txt from which I quoted the earlier log lines. But it's not clear to me what the logging for a clean versus an unclean shutdown should look like.