Micaiah12 Posted March 19, 2017

Hey all, I'm starting to have really weird issues with my unRAID server. It started when I was in one of my torrent dockers and things were getting flagged as "file not found". I thought one of the other dockers had moved the files, but when I went to the URL of that other docker it wouldn't load, so I restarted the letsencrypt docker; it kept returning a server error. I stopped the array and rebooted, and now when I start the array I can see the disks but none of the shares. Docker is turned off because it can't find the docker.img file. After another reboot I have the array in maintenance mode. Diagnostics are attached; hopefully someone has some suggestions. Thanks!

tower-diagnostics-20170318-1707.zip
Squid Posted March 19, 2017

Start the array, and then post another set of diagnostics.
Micaiah12 Posted March 19, 2017

Sorry, my bad. tower-diagnostics-20170318-1717.zip
Squid Posted March 19, 2017 Share Posted March 19, 2017 (edited) There's corruption on the cache drive. You need to https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems on what is currently sde1 Edited March 19, 2017 by Squid Quote Link to comment
Micaiah12 Posted March 19, 2017

Will do boss, I will let you know how it goes.
Micaiah12 Posted March 19, 2017

I ran xfs_repair on the cache drive and it exited with this error:

Sorry, could not find valid secondary superblock
Exiting now.

Any ideas?
Squid Posted March 19, 2017 Share Posted March 19, 2017 What command did you run? Or did you do it through the GUI? Wait for @johnnie.black He's the expert on corruption. (Not that he's corrupt himself though) Quote Link to comment
Micaiah12 Posted March 19, 2017

Lol, my bad. I ran xfs_repair on /dev/sde, but since it's the cache it needs to be run against the partition: xfs_repair -v /dev/sde1. Apparently that was in the footnotes.
Micaiah12 Posted March 19, 2017

Here is the console output after running that command:

        - scan filesystem freespace and inode maps...
freeblk count 5 != flcount 6 in ag 3
agi unlinked bucket 57 is 202361 in ag 3 (inode=201528953)
sb_icount 4544, counted 13824
sb_ifree 349, counted 243
sb_fdblocks 17397874, counted 8314928
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
correcting nblocks for inode 201528953, was 145361 - counted 145233
correcting nextents for inode 201528953, was 2365 - counted 2363
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 1
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 201528953, moving to lost+found
Phase 7 - verify and correct link counts...
Maximum metadata LSN (51:65442) is ahead of log (1:2).
Format log to cycle 54.

        XFS_REPAIR Summary    Sat Mar 18 17:51:38 2017

Phase           Start           End             Duration
Phase 1:        03/18 17:51:33  03/18 17:51:33
Phase 2:        03/18 17:51:33  03/18 17:51:34  1 second
Phase 3:        03/18 17:51:34  03/18 17:51:35  1 second
Phase 4:        03/18 17:51:35  03/18 17:51:35
Phase 5:        03/18 17:51:35  03/18 17:51:35
Phase 6:        03/18 17:51:35  03/18 17:51:36  1 second
Phase 7:        03/18 17:51:36  03/18 17:51:36

Total run time: 3 seconds
done
Micaiah12 Posted March 19, 2017

After the xfs_repair it looks like it created a lost+found folder. I took the array out of maintenance mode, started it up normally, and it looks like all the shares are there. I am about to verify the data. Any idea what could have caused that, and any way to keep it from happening again? Thanks.
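A quick way to sort out what xfs_repair recovered: entries in lost+found are named by inode number (e.g. 201528953 from the output above), so `file` can help guess what each one actually is. A minimal sketch, assuming the cache is mounted at /mnt/cache; the helper function name is just for illustration.

```shell
#!/bin/sh
# Inspect a lost+found directory produced by xfs_repair.
# Recovered entries are named by inode number, so use `file`
# to guess what each one is before sorting them back out.
inspect_lost_found() {
    dir="$1"
    for f in "$dir"/*; do
        [ -e "$f" ] || continue   # skip cleanly if the directory is empty
        file "$f"
    done
}

# On this system the cache is mounted at /mnt/cache:
inspect_lost_found /mnt/cache/lost+found
```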
Squid Posted March 19, 2017 Share Posted March 19, 2017 #1 cause of corruption on any computer system would be unexpected power downs (ie: power failure) - get a UPS much farther down the list would be cabling to the cache drive bad cache drive - smart looks good on the cache drive Quote Link to comment
Micaiah12 Posted March 19, 2017

We did have a power outage this morning; the whole town was down for a few hours. I do have a UPS on it, but come to think of it, I don't remember if I set the server up to use it. That may be something I need to check. Thanks though!
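One way to check is from the console: unRAID ships with apcupsd, and if the daemon is configured and talking to the UPS, `apcaccess` will report its state. A read-only sketch (the grep pattern just pulls out the fields most worth checking):

```shell
# Full status report from the UPS daemon; fails with a connection
# error if apcupsd isn't running or isn't set up to use the UPS.
apcaccess status

# Just the essentials: online/on-battery state, battery charge,
# and estimated runtime remaining.
apcaccess status | grep -E '^(STATUS|BCHARGE|TIMELEFT)'
```

You can also confirm the shutdown behavior under Settings → UPS Settings in the webGUI so the server powers down cleanly before the battery runs out.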