FreeMan Posted April 22, 2017 (edited)

I have a current pending sector on my cache drive, and it's caused all drives to be mounted read-only. It's probably about time to replace that little Samsung SpinPoint, but I'd like to be able to write to my system before I start doing that. Also, if someone would kindly point me to the directions for replacing a cache drive (with Docker configs), I'm sure I'll be making use of them sooner rather than later.

Diagnostics & SMART reports attached: nas-smart-20170422-1648.zip, nas-diagnostics-20170422-1541.zip
FreeMan Posted April 22, 2017

TYVM, trurl. Opened in a new tab and instructions copied off in case I close it in a fit of forgetfulness. Now to patiently await recommendations on dealing with the pending sector. Or, is replacement the best/only solution?
trurl Posted April 22, 2017

> replacement the best/only solution?

After you replace the cache, you can preclear the old disk to try to get it to reallocate the pending sector, since writing to the sector forces the drive to attempt the remap.
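For context on why that works: a drive only remaps a pending sector when something writes to it. A minimal sketch of the idea, assuming the old disk shows up as /dev/sdX and is already out of the array; the real preclear plugin does far more (multiple passes, pre/post SMART comparison), and note this wipes the disk:

```shell
#!/bin/bash
# Hypothetical sketch of what a preclear write pass accomplishes; NOT the
# preclear plugin itself. DESTRUCTIVE: this wipes the whole disk.
# The device name is an assumption - substitute your actual old cache disk.
DISK="${1:-/dev/sdX}"

# Record pending/reallocated counts before the pass (SMART attributes 5 and 197)
smartctl -A "$DISK" | awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $10}'

# Write zeros across every sector; a pending sector gets remapped on write
dd if=/dev/zero of="$DISK" bs=1M status=progress

# Re-check: Current_Pending_Sector should drop to 0 if the remap succeeded
smartctl -A "$DISK" | awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $10}'
```

If the pending count drops to zero and the reallocated count stays sane, the disk may be usable for non-critical duty; a climbing reallocated count means retire it.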
FreeMan Posted April 22, 2017

Will changing all my shares to not use the cache enable write status on them? If not, how do I go about doing so?
trurl Posted April 22, 2017

> Will changing all my shares to not use the cache enable write status on them? If not, how do I go about doing so?

Please read the entire post I linked all the way to the end and see if you can understand what it is doing and why. Your question is irrelevant.
trurl Posted April 22, 2017

OK, I understand what you are asking. Because the current cache disk is corrupt, it has been made read-only to prevent further corruption. The user shares that are set to use cache are effectively read-only because the cache disk is.

It's possible the procedure will not work to get everything off cache and then back on again, because some of the files on cache may not be readable due to corruption. In any case, if you replace cache, the new disk's filesystem will not be corrupt, so it won't be read-only. Whether all the files can be saved from the original cache remains to be seen.
FreeMan Posted April 22, 2017

Whew! I was just about to post what might have been considered slightly ranty. Thanks for saving me!!! (There was an apology at the end...)

A) It will be tomorrow at best before I get a replacement cache drive, and B) ALL of my disks are currently mounted read-only, so the mover cannot move files off the cache.

I would like to get all the rest of the disks re-mounted read/write, then run the mover to get everything off the cache drive in case it decides to completely give up, and because I've got other files that I need to put on my server. I'll take the non-cached write performance hit for now to get them on there. What do I need to do to enable read/write on all disks (other than the cache, which I probably really want to be read-only at the moment)?

I am running CA Backup, so I hope that all my Docker configs will be OK. There are files in the CAAppdataBackup directory on the cache drive, so the most recent backup could be gone, but I don't change things very often...
FreeMan Posted April 22, 2017

Actually, I just realized I got my son a 240GB PNY CS1311 SSD for Christmas that he's never installed in his machine. (He's also got memory and a new sound card sitting on his desk from Christmas. He's going to be a CS major when he starts college in the fall, but shows almost no interest in computers - pray for him!)

I will get this drive installed this evening (unless this model is strongly recommended against for unRAID use), but that still leaves me with the inability to write to any drive in the system.
JorgeB Posted April 22, 2017

Start the array in maintenance mode and run reiserfsck on the cache disk; this may or may not work due to the pending sector(s).
FreeMan Posted April 22, 2017

Thanks, johnnie. For confirmation, from the FAQ it looks like I want to run:

    btrfs check --repair /dev/sdX1

where sdX1 = my cache drive. To be sure I understand what's going on (instead of just blindly following instructions without learning anything): I want the --repair option because that should force btrfs to move the data from the pending sector to somewhere else on the disk, correct?
JorgeB Posted April 22, 2017

Your cache disk is reiserfs, so you need to run reiserfsck:

    reiserfsck --check /dev/sdk1
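For anyone scripting this later, reiserfsck prints the literal line "No corruptions found" when the read-only check passes, so you can branch on it. A hedged sketch (the device name is this thread's cache partition; --yes skips reiserfsck's interactive confirmation, and the array must be in maintenance mode):

```shell
#!/bin/bash
# Sketch: run the read-only reiserfs check and branch on its success marker.
# /dev/sdk1 is the cache partition in this thread; adjust for your system.
out=$(reiserfsck --check --yes /dev/sdk1 2>&1)

if echo "$out" | grep -q 'No corruptions found'; then
    echo "cache filesystem is clean"
else
    echo "corruption reported; do not remount read-write yet"
fi
```

Only if --check reports fixable problems would you escalate to --fix-fixable, and --rebuild-tree is a last resort taken after imaging the disk.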
FreeMan Posted April 22, 2017

Wow... thanks. That's why I double check things.
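The mix-up above (btrfs check vs reiserfsck) is easy to avoid: confirm what filesystem the partition actually carries before choosing a repair tool. A small sketch; the fstype helper is hypothetical, but blkid itself is the standard way to query this:

```shell
#!/bin/bash
# Hypothetical helper: pull the TYPE= value out of blkid's output so a script
# can pick the matching fsck tool instead of guessing.
fstype() { blkid "$1" | sed -n 's/.*TYPE="\([^"]*\)".*/\1/p'; }

case "$(fstype /dev/sdk1)" in
    reiserfs) echo "use reiserfsck --check" ;;
    btrfs)    echo "use btrfs check (read-only) first" ;;
    xfs)      echo "use xfs_repair -n first" ;;
    *)        echo "unknown filesystem; stop and investigate" ;;
esac
```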
FreeMan Posted April 22, 2017

And the results are in:

    No corruptions found
    There are on the filesystem:
        Leaves 38319
        Internal nodes 239
        Directories 1821
        Other files 35050
        Data block pointers 34211785 (9110923 of them are zero)
        Safe links 1

Full log attached: reiserfsck of Cache disk output.txt

Next step?
JorgeB Posted April 22, 2017

Start the array normally and start moving data off the cache. Begin with the most important stuff, as it can go read-only again.
FreeMan Posted April 22, 2017 (edited)

Sigh... I figured there was nothing critical on the cache drive, so... just invoke the mover! Easy peasy, right? Bad call. It's read-only again and, of course, I can't stop the array because the mover is still running. Yes, I know, I ignored instructions...

    root 19154     1 0 18:46 ?     00:00:00 /bin/bash /usr/local/sbin/mover
    root 20442 17537 0 19:28 pts/0 00:00:00 grep mover

Can I "kill -9 19154" to kill the mover without doing terminal damage? If so, I'll follow up with another run of reiserfsck --check, then I'll carefully move one. file. at. a. time. until everything's moved off.
JorgeB Posted April 22, 2017

There's a new command; type:

    mover stop

Don't know if it will work in the current conditions.
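A way to sidestep the kill -9 question entirely is to test whether the mover is actually still running before acting. A small sketch; the bracketed character class is a common trick to keep pgrep -f from matching its own invocation, and mover stop only exists on newer unRAID releases:

```shell
#!/bin/bash
# Check for a running mover before trying to stop the array. The [m] in the
# pattern stops pgrep -f from matching this script's own command line.
if pgrep -f '/usr/local/sbin/[m]over' >/dev/null; then
    mover stop   # newer unRAID builds; older releases have no stop subcommand
else
    echo "mover not running"
fi
```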
FreeMan Posted April 22, 2017

Money!

    shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
    mover stopped

That was much easier, and I'm sure cleaner... Thanks! I promise to follow instructions this time, but at least I learned something!
FreeMan Posted April 22, 2017

Interesting that the results are notably different:

    ###########
    reiserfsck --check started at Sat Apr 22 19:43:50 2017
    ###########
    Replaying journal: Done.
    Reiserfs journal '/dev/sdk1' in blocks [18..8211]: 0 transactions replayed
    Checking internal tree.. finished
    Comparing bitmaps..finished
    Checking Semantic tree: finished
    No corruptions found
    There are on the filesystem:
        Leaves 28737
        Internal nodes 181
        Directories 1810
        Other files 35026
        Data block pointers 24520941 (5557935 of them are zero)
        Safe links 0
    ###########
    reiserfsck finished at Sat Apr 22 19:45:36 2017
    ###########

Carefully, patiently, slowly upward and onward.
FreeMan Posted April 23, 2017

Any known concerns with that PNY disk? Also, I presume the best bet would be to put the disk on a SATA3 port on the MoBo. The controllers are a scattered mix of cheapies, so I'd think the MoBo ports would be the best.
FreeMan Posted April 23, 2017

What is the current preferred format for cache drives? I've only got the one at this time, and I'm not really planning on a second, but that may change. I believe reiserfs is not recommended for anything at this point, and that btrfs is recommended for cache, especially if one will be moving to multiple cache drives. Is this correct?
trurl Posted April 23, 2017

> btrfs is recommended for cache, especially if one will be moving to multiple cache drives. Is this correct?

btrfs is the only choice for cache pools; XFS will only allow a single cache. It's not that hard to switch using the procedure already linked.
BRiT Posted April 23, 2017

XFS is preferred all around unless you're using multiple drives in the cache pool; then your only options are to use Unassigned Devices to manage things yourself, or use btrfs for the cache pool.
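To see which situation applies on a given box, it helps to check what the cache is actually mounted as today. A tiny sketch; /mnt/cache is unRAID's standard cache mount point, and the columns shown are standard df -T output:

```shell
# Print the device and filesystem type backing the cache mount.
# NR==2 skips df's header row; $1 is the device, $2 the filesystem type.
df -T /mnt/cache | awk 'NR==2 {print $1, $2}'
```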
FreeMan Posted April 23, 2017

Thank you, gents, crisis averted! I may throw the old drive back in and try a preclear pass or two to see if the pending sector clears up. That would be an interesting pairing: the old 250GB SpinPoint alongside the SSD for my cache.