Drewster727 Posted May 22, 2017

I've currently got 2x 6TB WD Red drives as my parity disks (dual parity). I recently purchased 2x 8TB HGST Deskstar drives to replace them, so that I can start adding 8TB drives to my array. I've never had to rebuild dual parity before, let alone replace the disks. I assume it's exactly the same process as for a single disk. In other words, my plan to upgrade them is:

1. Preclear the new 8TB disks (already done)
2. Stop the array
3. Shut down the server
4. Swap the current parity disks with the new ones
5. Boot up the server
6. Re-assign the parity slots to the new drives
7. Start the array and just let it rebuild

Is this the correct procedure for dual parity rebuilds? Thanks!
JonathanM Posted May 22, 2017

Should be fine. If you are risk averse, I'd do it one at a time; that way you are still protected from a data drive failure during the whole procedure. You are physically removing the old parity drives for the procedure, correct?
Drewster727 (Author) Posted May 22, 2017

Well, the only reason I wasn't considering doing them one at a time is that parity checks run slowly and cause performance issues with my array during the sync, which I'm trying to avoid. Question: if I do it one at a time, is unRAID smart enough to rebuild parity from the existing parity disk, or does it still have to read the entire array during the sync process?
JorgeB Posted May 22, 2017

Just now, Drewster727 said: "if I do it one at a time, is unRAID smart enough to rebuild parity from the existing parity disk or does it still have to read from the entire array during the sync process"

The two parity disks are calculated differently from each other, so one can't be rebuilt by reading the other; rebuilding either parity disk requires reading all the data disks.
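[Editor's note: to illustrate why the two parity disks differ, in unRAID's dual-parity scheme the first parity (P) is a plain byte-wise XOR across the data disks, while the second (Q) uses a different, Reed-Solomon-style calculation, so neither can be derived from the other. A minimal Python sketch of P parity only, using made-up two-byte blocks:]

```python
from functools import reduce

def xor_parity(blocks):
    """P parity: byte-wise XOR of the same block position across every disk."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical blocks from three data disks (values invented for the example):
data = [b"\x0f\xf0", b"\x33\x33", b"\x55\xaa"]
p = xor_parity(data)

# A lost disk's block is recovered by XOR-ing P with all surviving blocks --
# which is exactly why every data disk must be read during a parity sync.
recovered = xor_parity([p] + data[1:])
assert recovered == data[0]
```

Q parity involves multiplying each disk's data by a per-disk coefficient in a Galois field, so it shares no bytes with P and cannot be reconstructed from it.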
Drewster727 (Author) Posted May 22, 2017

1 minute ago, johnnie.black said: "Parity disks are different from each other, it can't read the other one, it needs to read all disks."

Ok, I figured that was probably the case. I may get risky and just do a full rebuild on both of them at once, to minimize the time I'm putting pressure on the array.
Vr2Io Posted May 22, 2017

If you want the old parity drives (2x 6TB) to remain valid as a fallback, you should run the array in maintenance mode and rebuild both new drives at the same time. The drawback is that the file systems won't be available during the rebuild, but you keep two-drive failure protection.
bonienl Posted May 22, 2017

As long as you ensure no writes to the array, you can replace both at the same time and keep the 'old' drives as a backup in case they're needed.
Drewster727 (Author) Posted May 22, 2017

Ok, so just to clarify:

1. Stop the array
2. Start it in maintenance mode (this ensures no writes?)
3. Swap the parity disks in the GUI
4. Let it rebuild
5. Once complete, exit maintenance mode

If anything fails, pop the old 6TB parity disks back in to resolve the issue. Is this correct?
Vr2Io Posted May 22, 2017

4 minutes ago, Drewster727 said: "Is this correct?"

Yes. Better still:

1. Preclear the new 8TB disks (already done)
2. Stop the array
3. Shut down the server
4. Swap the current parity disks with the new ones
5. Boot up the server
6. Re-assign the parity slots to the new drives
7. Start the array in maintenance mode and just let it rebuild
Drewster727 (Author) Posted May 22, 2017

Thanks guys! That's what I will do.
Vr2Io Posted May 22, 2017

One important step I'd suggest: run a parity sync (check) once before the change, because otherwise you don't know whether your current parity is valid. That's what I do.
Leon_CC Posted March 26, 2018

Question: will it make much difference? I am planning to go from 1 parity disk to 2x 10TB drives as parity, using the steps @Benson listed before. I am running unRAID OS version 6.5. Any idea how long it will take to get the 2x 10TB drives done? It took 22h 30min to go from a 4TB single parity to a single 8TB parity drive with only 500GB of data.

The new setup is going to have 26TB of space before I move over Plex with 24.21TB of data. Then I am taking 2x of the IronWolf 10TB drives and making them the new parity drives, and adding the remaining 4x 10TB drives to the array, giving me 36TB more storage, for a total of 62TB in the end.

1. Sync the array once before the change
2. Stop the array
3. Shut down the server
4. Swap the current parity disks with the new ones
5. Boot up the server
6. Re-assign the parity slots to the new drives
7. Start the array in maintenance mode and just let it rebuild

Thank you in advance for all the helpful info in this post.
Vr2Io Posted March 26, 2018

3 hours ago, Leon_CC said: "question will it make much difference i am planing to go from 1 parity disk to 2x 10tb drives as parity."

No difference, but please disable the array's "auto start" before the change.

3 hours ago, Leon_CC said: "any idea how long it will take to do this to get 2x 10 tb drives done."

It depends on the performance of all the existing disks and the new disks (e.g. 7200rpm is faster than 5400rpm, and different models at the same capacity can differ a lot). It also depends on whether the system has a bottleneck: if the controller bandwidth hits its ceiling, 10 disks might be limited to 100MB/s each, while 5 disks could each run at a full 200MB/s. On a well-built system with 7200rpm 10TB drives, e.g. ST10000DM0004 (not DM004), I would expect the rebuild to finish in under 19 hours.

3 hours ago, Leon_CC said: "22h 30 min to go from a 4 tb single parity to a single 8 tb parity drive with only 500gb of data"

That's a bit slow. Is that the actual finish time, or an estimate shown during the rebuild? By the way, the amount of data makes no difference, because a rebuild is a block-level operation unrelated to the filesystem.

3 hours ago, Leon_CC said: "adding the remaining 4x 10tb drives to the array giving me 36 tb more storage."

Do you mean replacing four 1TB disks with four 10TB disks via two rounds of rebuilding? I think there may be something you can do to save a lot of time, but it may be risky.
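[Editor's note: the controller-bottleneck point above can be sketched with simple arithmetic. The figures below assume a hypothetical controller with a 1000 MB/s shared ceiling and drives that each sustain 200 MB/s, matching the 100 MB/s vs. 200 MB/s example in the post:]

```python
def per_disk_speed(controller_mb_s, disk_count, disk_max_mb_s):
    """During a parity sync all disks are read in parallel, so each disk
    gets an equal share of the controller's bandwidth, capped at the
    drive's own maximum sustained rate."""
    return min(controller_mb_s / disk_count, disk_max_mb_s)

# Hypothetical 1000 MB/s controller, 200 MB/s drives:
print(per_disk_speed(1000, 10, 200))  # 10 disks: controller-limited
print(per_disk_speed(1000, 5, 200))   # 5 disks: disk-limited
```

The sync as a whole runs at the speed of the slowest disk, so a single drive pushed below its native rate by a saturated controller slows the entire rebuild.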
Leon_CC Posted March 26, 2018

That's the current system: my old NAS, an RN516. I am dumping all the data to the unRAID server, then adding 4x of these drives to the array and using the other 2x as the dual parity setup.
HellDiverUK Posted March 26, 2018

I just rebuilt my parity with two 8TB WD Reds (well, white-label WD80EZZX, which is basically a Red), and I got:

Parity is valid
Last checked on Mon 26 Mar 2018 11:34:45 AM BST (today), finding 0 errors.
Duration: 17 hours, 58 minutes, 58 seconds. Average speed: 123.6 MB/sec

I wouldn't expect a 10TB to run much more than 20-22 hours, depending on the speed of your other drives. If you've an array full of old 2TB Greens, then expect things to be slower. My array is an elderly 4TB Seagate Desktop plus 2x 8TB Seagate Archive v2, so none of the drives are quick units. The machine is a Ryzen R5-1600 running on an X370 board.
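[Editor's note: the reported duration lines up with simple capacity-over-speed arithmetic; since a rebuild reads every block, the drive's fill level is irrelevant. A quick sanity check in Python:]

```python
def rebuild_hours(capacity_tb, avg_mb_per_s):
    """Estimated parity rebuild time: total capacity / average throughput.
    Uses decimal units (1 TB = 1e12 bytes, 1 MB = 1e6 bytes), as drive
    vendors and unRAID's speed readout do."""
    return capacity_tb * 1e12 / (avg_mb_per_s * 1e6) / 3600

# 8 TB at the 123.6 MB/s average reported above: about 18 hours,
# matching the quoted 17h 58m.
print(round(rebuild_hours(8, 123.6), 1))
# A 10 TB drive at the same average speed: roughly 22.5 hours,
# consistent with the 20-22 hour guess.
print(round(rebuild_hours(10, 123.6), 1))
```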
Vr2Io Posted March 26, 2018

7 hours ago, Leon_CC said: "thats the current systems my old Nas an RN516 I am dumping all the data to the unRaid server then adding 4x of these drives to the array and using the other 2x as the dual parity setup."

That makes it clearer. It's a big job: migrating to unRAID, moving all the data out and back in. For your case, I would propose a somewhat different approach (if your plan is an 11+2 disk array instead of 7+2).

Your old NAS is RAID-5, so once you pull 2 disks out, all the data on those 10TB disks is already invalid. So I would do this (protection is still maintained by the old 8TB parity disk over the 2TB/4TB data disks):

1. Copy all data to unRAID
2. Disable array auto start
3. Shut down unRAID
4. Pull out the 8TB parity disk
5. Plug in all the 10TB disks
6. Start unRAID and do a new disk config, retaining the data disks
7. Start the array in maintenance mode
8. Stop the array and assign the 10TB disks as 2 parity and 4 data disks
9. Just let unRAID build the new parity
10. If all is normal, start the array in normal mode and let unRAID format those four 10TB data disks
11. Finished

But if possible, I would suggest adding a 4TB disk as a second parity before moving in the 10TB disks, so that you have two-disk protection the whole time. (I'm not sure whether unRAID allows different-size parity drives.)
Leon_CC Posted March 26, 2018

I was thinking of getting an extra 8TB before I do the migration anyway, after more reading I have been doing, just for the backup. And yes, they have to be the same size, from what I was reading.

In the end I would have an extra 18TB in the total, with a setup of 2x 10TB parity drives and an array of 4x 10TB + 2x 8TB + 6x 4TB + 1x 2TB drives, for a total of 40+16+24+2 = 82TB, with 2x 10TB parity drives (aka 12+2). And thank you again for the info, @Benson.
Vr2Io Posted March 26, 2018

Sounds good to add the extra 8TB parity during the migration. The capacity increase is substantial. But those 2x 8TB drives need to be added after the migration completes. Would you consider not adding them all, i.e. a 10+2 array instead of 12+2? That way 2 disks (e.g. 2x 4TB) could be kept as spare drives.
Leon_CC Posted April 13, 2018

Those are the results of the preclear of one of the 10TB drives.
Vr2Io Posted April 14, 2018

High-performance drive: each phase completed in less than 17 hours. But still in the preclear stage after 2+ weeks!