Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)



3 hours ago, Squid said:

Only if using a single parity drive. With dual parity drives, parity disk 2 will be incorrect.

True. The wiki addresses that in the notes before the steps are enumerated:

 

  • You have one Parity drive, not two! If you have dual parity, this procedure will invalidate the second parity drive. You can still use this procedure, but you might as well unassign the second parity drive until you have finished converting all of the drives you wish to convert. Then you can reassign it and let it rebuild. Why? The first parity drive does not care about drive positioning within the array, so we can freely swap the drives around. The second parity drive *does* care, so no drives can move or be swapped, at all!
Link to comment
6 hours ago, trurl said:

True. The wiki addresses that in the notes before the steps are enumerated:

 

  • You have one Parity drive, not two! If you have dual parity, this procedure will invalidate the second parity drive. You can still use this procedure, but you might as well unassign the second parity drive until you have finished converting all of the drives you wish to convert. Then you can reassign it and let it rebuild. Why? The first parity drive does not care about drive positioning within the array, so we can freely swap the drives around. The second parity drive *does* care, so no drives can move or be swapped, at all!

Thanks! I knew it had to be for single parity but I wasn't sure if changing the drive size would matter for keeping parity valid.

Link to comment
On 3/21/2017 at 4:49 PM, Switchblade said:

 

Thanks Rob! I was planning to use this process to convert my server and it would be killer to have instructions from you. It's hard to keep track of everything in this thread.

 

Hi Rob, when do you think you will be able to add a general unBALANCE-based method?

 

I'm looking to start my conversion next weekend - 8 data drives.

 

Link to comment

I think it was inevitable that I would end up here. I have recently upgraded from 4.7 to 6.3.2. Now the challenge is to get all my drives converted to XFS. 

My configuration:

Disk 1   new 2TB XFS     blank; this will stay as a 2TB drive     0% used

Disk 2   new 4TB XFS     copied from Disk 1; no new data is on the drive     49% used

Disk 3   new 4TB RFS     was formatted XFS but I tried a rebuild instead of a copy; no new data is on the drive (I still have the original Disk 3, just removed from the array)     47% used

Disk 4   old 2TB RFS     keep on a 2TB drive until I replace it with a 4TB drive     95% used

Disk 5   old 2TB RFS     keep on a 2TB drive until I replace it with a 4TB drive     78% used

Disk 6   new 2TB RFS     keep on a 2TB drive until I replace it with a 4TB drive     88% used

 

Parity   new 4TB

Cache    new 1TB BTRFS   if this could be XFS I would prefer to have everything in the same format     0% used

 

I have 2 open slots available for drives in my ICY Dock trayless bays.

Additional drives:

I have the original 2TB Disk 3, untouched

I have the 2TB parity drive

I have the original 2TB Disk 2 (needs to be precleared and formatted for use)

 

I have complete backups of each drive on NTFS drives, so I am not worried about losing anything. I am looking for the most efficient way to change each data drive to XFS and, if advised to do so, convert the cache drive.

 

The server is in what I call maintenance mode, so I will not make any changes to the data on any drive, but I will move data wherever it needs to go. When I compared copy vs rebuild on the 4TB drives, they took about the same length of time.

 

Disk 1 could be used as a temporary disk, as it holds a different share than all the other drives.

1. What is the best method to convert to XFS?

2. Should I convert the cache drive too? 

3. I have fumbled around trying to preclear and format previously used drives as XFS. What is the best method to do this?

For example, Disk 3 reverted to RFS (after I had formatted it XFS) because I used the rebuild method. Do I need to preclear the drive again? What steps do I need to take to convert any of my old drives to XFS and begin replacing the RFS drives?

4. I would like to learn how to use the rsync command to move the files, if that is advisable. It appears to be quite useful once one learns how to use it properly.

5. If I need to mount one of the NTFS drives and copy from it, what is the best method and how do I do it?

 

Geezzz, I need to go back to school. I think they teach this stuff in first grade. It would be so embarrassing trying to sit in one of those little chairs with my knees next to my ears.

 

 

Link to comment
33 minutes ago, tunetyme said:

Cache    new 1TB BTRFS   if this could be XFS I would prefer to have everything in the same format     0% used

If you ever want to do a cache pool, BTRFS is your only choice. If you will only have a single cache disk, you can change it to XFS if you want.

 

34 minutes ago, tunetyme said:

The server is in what I call maintenance mode

Perhaps a confusing thing to call it since Maintenance Mode is already a thing.

 

36 minutes ago, tunetyme said:

3. I have fumbled around trying to preclear and format previously used drives as XFS. What is the best method to do this?

For example, Disk 3 reverted to RFS (after I had formatted it XFS) because I used the rebuild method. Do I need to preclear the drive again? What steps do I need to take to convert any of my old drives to XFS and begin replacing the RFS drives?

The only time unRAID requires a clear disk is when you are adding it to a new data slot in an array that already has valid parity. A clear disk is all zeros, so it has no effect on that valid parity. People often use preclear to test a new disk even when they don't require a clear disk. Unless you really think these previously used disks need further testing, I would say preclearing them adds needless complication and a whole lot more time to the process.

 

40 minutes ago, tunetyme said:

5. If I need to mount one of the NTFS drives and copy from it, what is the best method and how do I do it?

Unassigned Devices plugin.
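
If you'd rather do it from the command line, here is a minimal sketch (assuming the ntfs-3g driver is available, and with /dev/sdX1 as a placeholder for the actual NTFS partition):

# Mount the NTFS partition read-only at a temporary mount point:
mkdir -p /mnt/ntfs
mount -t ntfs-3g -o ro /dev/sdX1 /mnt/ntfs

# When the copying is done:
umount /mnt/ntfs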

 

And, first things last

42 minutes ago, tunetyme said:

I think it was inevitable that I would end up here. I have recently upgraded from 4.7 to 6.3.2. Now the challenge is to get all my drives converted to XFS.

Have you read this sticky?

 

Link to comment
1 minute ago, mtruffa said:

What happened is that when swapping the drives I forgot to hit "Parity is Valid". I did not want to stop it after the parity reconstruction started; I was afraid that something would go wrong.

 

 

One thing has nothing to do with the other, but I'm not sure I understand what you're doing. You'll need to explain in more detail what you did and what you're seeing.

Link to comment

The GUI is down now, so here is the info from PuTTY.

login as: root
root@Tower's password:
Last login: Mon Mar 27 00:41:16 2017 from 192.168.1.151
Linux 4.9.10-unRAID.
root@Tower:~# df /mnt/disk6/
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/md6       1953454928 1490251416 463203512  77% /mnt/disk6
root@Tower:~# df /mnt/disk17/
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/md17      1952560688 1643476600 309084088  85% /mnt/disk17
root@Tower:~#
 

I did the rsync -avPX /mnt/disk6/ /mnt/disk17/ and as you can see it has copied a lot more than what is on the source drive (disk6), and there is still some to go, judging by which file it is on now.

 

I made sure the drive was empty before starting. If I quit the operation, reformat the drive, and restart, will it pick up where it left off or will it start from the beginning?

Link to comment
48 minutes ago, mtruffa said:

If I quit the operation, reformat the drive, and restart, will it pick up where it left off or will it start from the beginning?

 

If you mean cancel the rsync, yes, you can re-format and start again, but if the destination was really empty the result should be the same.
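
For what it's worth, if you don't re-format, re-running the exact same command effectively resumes: with -a, rsync skips files that already match on size and timestamp, so only the remainder gets copied.

# Re-running the same command picks up roughly where the cancelled run left off:
rsync -avPX /mnt/disk6/ /mnt/disk17/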

Link to comment
10 hours ago, tunetyme said:

3. I have fumbled around trying to preclear and format previously used drives as XFS. What is the best method to do this?

For example, Disk 3 reverted to RFS (after I had formatted it XFS) because I used the rebuild method. Do I need to preclear the drive again? What steps do I need to take to convert any of my old drives to XFS and begin replacing the RFS drives?

Thought I would add a few thoughts to this part. The most important is that if you do wind up using one of these disks in a new data slot, it will need to be cleared so parity will remain valid. unRAID will clear a disk you add to a new slot if it isn't already marked as clear, but I would recommend using preclear for this, since that gets it done before adding and so takes place independently of anything else you have going on, like other conversions.

 

Also, when you formatted disk3 to XFS and the rebuild then made it RFS again, that was not really a "conversion". unRAID didn't reformat it to RFS or anything like that; it just rebuilt the previous content, and the previous content was the RFS filesystem. Rebuilds and other parity operations don't even recognize filesystems; it is all just a bunch of bits.

Link to comment

Then why is more being copied than what is on the original disk? What I might do is stop it, because the destination disk is at 95% full and it looks like there is still more to go. I will then reformat the drive, try a disk that has much less data, and see if it does the same thing.

Link to comment
5 minutes ago, mtruffa said:

Then why is more being copied than what is on the original disk? What I might do is stop it, because the destination disk is at 95% full and it looks like there is still more to go. I will then reformat the drive, try a disk that has much less data, and see if it does the same thing.

 

There can be a small difference when changing filesystems, but never like that. The only reason I can think of is that there are sparse files (like vdisks) on the source disk.

 

ETA: On the next run, add --sparse to the rsync command. It won't hurt, and if there really are sparse files they will use the same space on the destination.
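
A rough sketch using the paths from earlier in the thread; comparing allocated size against apparent size is one way to confirm the sparse files first:

# If the apparent size is much larger than the allocated size, there are sparse files:
du -sh /mnt/disk6
du -sh --apparent-size /mnt/disk6

# Re-run the copy preserving sparseness:
rsync -avPX --sparse /mnt/disk6/ /mnt/disk17/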

Edited by johnnie.black
Link to comment
On 3/26/2017 at 2:10 PM, Switchblade said:

 

Hi Rob, when do you think you will be able to add a general unBALANCE-based method?

 

I'm looking to start my conversion next weekend - 8 data drives.

 

Uh oh!  Now I'm feeling pressured!   ;)

 

As you've probably noticed, I'm easily and constantly side-tracked!  And I have a bunch of little projects I'm either working on, or wanted to work on, plus other projects I don't want to work on, but my relatives do want me working on!  I'll try to put a priority on it though.

But the first draft won't be a step by step, but rather a summary of what has to be done.  I have some reservations about the use of unBALANCE, the more I've thought about it.  It's doable, but there are special issues that can come up, and I don't know what happens then.  I first need to create a post in the unBALANCE thread with some questions I have, as to what happens in certain cases.  The only simple case (I think!) is the case where the user doesn't use includes or excludes, all shares exist on all drives.  Any other case is going to have extra issues and steps.

Link to comment
2 hours ago, RobJ said:

 

Uh oh!  Now I'm feeling pressured!   ;)

 

As you've probably noticed, I'm easily and constantly side-tracked!  And I have a bunch of little projects I'm either working on, or wanted to work on, plus other projects I don't want to work on, but my relatives do want me working on!  I'll try to put a priority on it though.

But the first draft won't be a step by step, but rather a summary of what has to be done.  I have some reservations about the use of unBALANCE, the more I've thought about it.  It's doable, but there are special issues that can come up, and I don't know what happens then.  I first need to create a post in the unBALANCE thread with some questions I have, as to what happens in certain cases.  The only simple case (I think!) is the case where the user doesn't use includes or excludes, all shares exist on all drives.  Any other case is going to have extra issues and steps.

 

LOL, no pressure at all. I'm still going to try it, but I would feel better if I had the step-by-step from you. :)  My shares are on all data drives, with only one exception - Time Machine backups go to a specific disk. I have eight 6TB drives to convert, and I'm more about what is easier, even if it takes longer to get the job done.

 

Thanks!

Link to comment
On 3/27/2017 at 10:01 AM, trurl said:

Thought I would add a few thoughts to this part. The most important is that if you do wind up using one of these disks in a new data slot, it will need to be cleared so parity will remain valid. unRAID will clear a disk you add to a new slot if it isn't already marked as clear, but I would recommend using preclear for this, since that gets it done before adding and so takes place independently of anything else you have going on, like other conversions.

 

Also, when you formatted disk3 to XFS and the rebuild then made it RFS again, that was not really a "conversion". unRAID didn't reformat it to RFS or anything like that; it just rebuilt the previous content, and the previous content was the RFS filesystem. Rebuilds and other parity operations don't even recognize filesystems; it is all just a bunch of bits.

 

Trurl:

My 2 cents is that there must be a way to streamline this process. I followed the directions in the wiki reference step by step. It finished the rsync sometime late last night; I moved the drives around as directed, and now it says "All data on the parity drive will be erased when array is started". Every possible combination of configurations ended up with the same message. Considering all the time it takes to do this one disk at a time, I would suggest making a backup copy of each disk using rsync, then preclearing all the data drives again and formatting them XFS. In my case it would save about 3-4 days per disk.

 

Another option would be to buy a new disk that is the same size or larger. Preclear it and format the drive XFS. Unassign the disk being replaced. Put the new disk in the slot. Somehow prevent the rebuild, then use rsync to copy from the old RFS disk to the new XFS one. No offense, but this is a convoluted process, because now I have to preclear the old RFS drive and then format it, and I find that I don't get the format button; I reboot and use trial and error until it pops up. I have no idea what I've done or not done to get this accomplished.

 

At this point, since I have backups of all my drives in NTFS format, I think the best way to deal with this is to unassign the parity drive and go through the lengthy process of doing one drive at a time, then rebuild parity. If a file is damaged, I have a backup, or I can get the original back out and rip it again. I don't see all this as mission-critical stuff, and frankly it shouldn't be this hard and time consuming. It has taken me 1+ days to preclear and format a 2TB drive and 1.75 days to copy 1.7TB of data using rsync. I have 2 more drives to do, one 4TB (double the time) and one 2TB. At least I can preclear both of them at the same time. Thankfully, I have Icy Dock data drive bays. I can't begin to imagine what people do when they have to climb inside their case every time.

 

BTW, I think it is a significant flaw that when you use New Config the drive format goes to auto. It is a major source of frustration to go through and change everything back.

 

It seems to me that there should be a program that allows you to use the same drives in the same slots: add one disk that files are copied to, copy the files, then reformat the drive (hopefully this can be done without preclearing the drive again) and move the files back. Repeat until all disks are converted. At that point rebuild parity, or if there is a way to maintain parity then do so. The only downside is you have to copy the files twice, and at least on my rig that takes a long time.

 

Well, in another week or two this nightmare will be over (I hope). I won't have to mess with this again until it is time to install the other two 4TB drives in a few months.

 

 

Link to comment
1 hour ago, tunetyme said:

My 2 cents is that there must be a way to streamline this process.

If you don't care which numbered slot your data (shares) are on, you can forego the new config and just round robin from largest to smallest drive. Start by copying and verifying the content of your largest drive to any of the other array drives by whatever method you want. When you are sure all the data on the largest drive is duplicated properly, stop the array, change the drive format from RFS to XFS, then start the array and format the drive. From then on it's just a matter of copying all the data from the next largest RFS drive to the newly emptied XFS drive, lather rinse repeat until done. No need for new configs, preclearing, anything. If you feel a need to change slot numbers after you are done, you can do a new config once and put the drives where you want them.

 

If you have defined drive slot inclusions or exclusions for your shares, those will need to be updated after you are done, as well as any legacy stuff that references /mnt/diskX instead of properly referring to /mnt/user locations.

 

The only reason the long convoluted process is there is to preserve the disk slot number allocation for that specific data.

 

Note: I do NOT recommend moving the data from the RFS to the XFS disk, because a move involves a delete cycle after the copy. It will take WAY longer to do it that way, as RFS can take AGES to delete data. Much faster to simply copy the data, then format the RFS disk when you are sure the copy is complete (verification recommended).
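
A minimal sketch of one round of that loop, assuming disk2 is the largest RFS drive and disk1 is the freshly formatted XFS target (the disk numbers are placeholders):

# Copy everything from the RFS drive to the empty XFS drive:
rsync -avPX /mnt/disk2/ /mnt/disk1/

# Verify with a checksum-based dry run; any file it lists differs between the two:
rsync -nrcv /mnt/disk2/ /mnt/disk1/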

Link to comment

As I said earlier in this thread:

On 3/20/2017 at 10:00 AM, trurl said:

If you know how parity works and how formatting works and how user shares work you can pretty much adlib it. The main idea is that changing the filesystem on a disk will format it, so its contents must be moved or it will be lost. Swapping disks, for example, is only important if you have something configured to use specific disks.

When I did this there wasn't a wiki for it. I didn't even bother with rsync and verification; I just used mc to move disk to disk. I never needed New Config, parity rebuild, or preclear. But not everyone will have the spare capacity, or the other details specific to my system. And it still took a few days, since that is what is needed to move TBs of data.

 

It perhaps doesn't need to be nearly as complex as the wiki procedure for most people, but the specifics of the process will depend on the specifics of their system. The wiki is trying to give a process that will work for all possible systems. And it may even be that some people will have something about their system that nobody thought of and is not covered by the wiki.

 

All of which maybe makes it understandable that Limetech's limited resources haven't gotten involved in this. It is only done once for a given system, and not done at all for new users. So there is nothing built-in and foolproof to do it.

Link to comment
1 hour ago, trurl said:

It perhaps doesn't need to be nearly as complex as the wiki procedure for most people, but the specifics of the process will depend on the specifics of their system. The wiki is trying to give a process that will work for all possible systems. And it may even be that some people will have something about their system that nobody thought of and is not covered by the wiki.

 

Sorry about the venting. I tried following the guide to the letter, and when I exchanged the 4TB drive for the 2TB and parity was no longer valid, it put me over the edge. I realize (or hope) this will be the only time I have to go through this process. I only have two more data drives to change out of 6, and then I think I would like to change the cache drive to XFS. I still think the New Config command could be improved by keeping each disk's format identified. I am not sure "auto" is beneficial.

 

I am preclearing 2 previously written-to data drives now, so that will take a couple of days. I should have done the conversion as I replaced the drives and spread the task out over a longer period of time.

 

2 hours ago, jonathanm said:

If you don't care which numbered slot your data (shares) are on, you can forego the new config and just round robin from largest to smallest drive. Start by copying and verifying the content of your largest drive to any of the other array drives by whatever method you want. When you are sure all the data on the largest drive is duplicated properly, stop the array, change the drive format from RFS to XFS, then start the array and format the drive. From then on it's just a matter of copying all the data from the next largest RFS drive to the newly emptied XFS drive, lather rinse repeat until done. No need for new configs, preclearing, anything. If you feel a need to change slot numbers after you are done, you can do a new config once and put the drives where you want them.

 

If you have defined drive slot inclusions or exclusions for your shares, those will need to be updated after you are done, as well as any legacy stuff that references /mnt/diskX instead of properly referring to /mnt/user locations.

 

The only reason the long convoluted process is there is to preserve the disk slot number allocation for that specific data.

 

I won't move it. On the plus side, I have wanted to get some exposure to the rsync command. I have seen a number of references to it and I would like to find an in-depth tutorial on its use. While I sometimes complain about the process, I am frustrated with my lack of knowledge more than anything else. I have wanted to rearrange my disks for a number of years, so I am going to take this opportunity to do it.

 

While I have an easy time setting up disks for preclearing, I seem to have difficulty getting the format button to show up. I have had it show up on occasion, but it wouldn't execute the command when I clicked on it. I can't tell you how I got there, as I was trying everything.

 

I did not know about the legacy stuff on /mnt/diskX vs /mnt/user locations. Is there something in the wiki? Are there any other legacy issues I am unaware of, since I have jumped from 4.7 to 6.3.2?

 

Thanks for your help.

Link to comment
6 minutes ago, tunetyme said:

I did not know about the legacy stuff on /mnt/diskX vs /mnt/user locations. Is there something in the wiki? Are there any other legacy issues I am unaware of, since I have jumped from 4.7 to 6.3.2?

Not sure what you are referring to here. Not a lot has changed about User Shares from 4.7 except some of the Use Cache settings. Were you not using User Shares before?

Link to comment
1 minute ago, tunetyme said:

I did not know about the legacy stuff on /mnt/diskX vs /mnt/user locations. Is there something in the wiki? Are there any other legacy issues I am unaware of, since I have jumped from 4.7 to 6.3.2?

The preference to use /mnt/user (shares only) instead of /mnt/diskX (individual disks) is an ongoing thing since the "user share copy bug" became a better-known issue. If you know how unRAID generates user shares and understand why copying between /mnt/diskX/share and /mnt/user/share can cause data loss, it's easy to avoid. For less savvy users, it's easier just to tell them to ignore the individual disks and let unRAID work its magic with user shares.
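
To illustrate with a hypothetical share named Movies (all paths here are placeholders):

# DANGEROUS: the user share path can resolve to the very same file on disk1,
# so the copy can end up truncating the file it is reading:
cp /mnt/disk1/Movies/film.mkv /mnt/user/Movies/film.mkv

# Safe: stay on one level, either disk to disk...
cp /mnt/disk1/Movies/film.mkv /mnt/disk2/Movies/film.mkv
# ...or entirely within user shares:
cp /mnt/user/Movies/film.mkv /mnt/user/Backups/film.mkv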

 

As far as what you may have missed in the time between 4.7 and 6.3.2, there is no way for me to know what you have and haven't learned so far. So much has changed with the addition of notifications, Dockers, and VMs.

 

One major point is that plugins are less and less supported for applications; pretty much, if you can do the task with a Docker, you shouldn't be using a plugin.

Link to comment
6 hours ago, tunetyme said:

My 2 cents is that there must be a way to streamline this process. I followed the directions in the wiki reference step by step. It finished the rsync sometime late last night; I moved the drives around as directed, and now it says "All data on the parity drive will be erased when array is started". Every possible combination of configurations ended up with the same message. Considering all the time it takes to do this one disk at a time, I would suggest making a backup copy of each disk using rsync, then preclearing all the data drives again and formatting them XFS. In my case it would save about 3-4 days per disk.

 

tunetyme, you started off by saying that you followed the wiki step by step, but take a look again at step 16.  Somehow you missed that one.  I do apologize for the instructions seeming convoluted to you, but the step was there.  I looked at each of the ways you were suggesting to do it, and I have to be honest, they not only will take twice or 3 times as long, they also seem more convoluted to me, when you add in all the little details needed.  I still believe (and it's just my opinion!) that if you want the easiest and fastest way to do it, AND want to preserve parity and User Share configuration, then the wiki method is the best one.  Obviously I need someone else to write it though!  Except to prepare the initial disk, there is absolutely no clearing done, no Preclearing done, no parity builds done, and no file is copied more than ONE time ever.

 

I'll add more words to step 16 to display the message you saw ("All data on the parity drive will be erased when array is started"), then tell you to ignore it and click the checkbox to indicate "Parity is already valid".  Perhaps that will make it clearer?  After the copying of a drive is done, it only takes a few minutes before you can start copying the next drive - stop the array, New Config with Retain:All, swap the drive assignments, correct the file system formats, optional start and stop of the array to check it, change the file format of the cleared drive to XFS, start the array and allow it to be formatted, and you're ready to start copying again.

 

Here's a summary of the wiki method:

- Steps 1 - 7 are just prep, figuring out a strategy, and preparing the initial drive.  Plus, I recommend a parity check so you don't run into drive problems during the process, and because if parity is not good, there's no point in preserving it.

- Steps 8 - 9 are copying, with optional additional verification.

- Steps 10 - 18 are just the few minutes spent swapping the drives and formatting for the next copy.

- Step 19 just tells you to loop back to Step 8 to start copying again.

At the end, I do tell you there are a few redundant steps in there, but I prefer having them because it seems safer that way. But overall, there are really just 3 steps - prep, copy, swap - then repeat. But I really do welcome improvements and suggestions for simplification, or even full rewrites.

 

I'd like to add a summary at the top of the various possible methods.  I think if a user read the summary first, like the one above, they would be less likely to feel it's convoluted.  Plus, if it suddenly did start to feel convoluted or wrong, then they would know they had gone off the track somewhere.

 

There is a faster method, if you have a lot of data.  Unassign the parity drive, turn off User Shares, and skip the swapping.  That will make the copying faster (no parity drive), but you will still have to allow a day or 2 afterward to rebuild parity.  And while you won't have to worry about the complications of messed up inclusions and exclusions and file duplication during the process, you will still have to locate where everything is afterward, and correct all of the inclusions.

 

The advantage of the wiki method is the array always stays the same except for brief intervals (a few minutes each), both before you start, and during the process, and after you're done.  The only difference is that each logical drive is now a different physical drive.  Parity was always preserved, and so was your User Share configuration, and except for those brief intervals normal operation was fine.  If you had a second parity drive, it would need to be rebuilt, but that is true for almost all methods.

 

7 hours ago, tunetyme said:

BTW, I think it is a significant flaw that when you use New Config the drive format goes to auto. It is a major source of frustration to go through and change everything back.

 

This would be a good feature request, and I agree with that.  It's not really a flaw, as New Config is basically resetting the array back to nothing.  So it's as if it has never seen the disks you may then assign to it, which is also why it MUST present the message that parity will be rebuilt when you use New Config.  It assumes this is a new array, new disks it has never seen, and a new parity drive.  The Retain feature is essentially brand new for us, but modifies the New Config to reset the array config but retain some or all of the previous assignments.  What we need now is for the Retain feature to also retain the existing file system info for each drive.  This would save steps for us, and avoid some risk and confusion.

 

I'm sorry if I sound defensive about what I've written.  I do welcome improvement.  jonathanm has been pointing out one of the constraining elements of my method, and I want to comment on that, and other things, like the problems of unBALANCE, but in another post.

Link to comment
