Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)



3 hours ago, tunetyme said:

Sorry for the delay

tree.txt

OK, that indicates you have a folder named "disk3" on disk3, which is what we thought, but I wanted to make sure it was a folder and not a file named "disk3". And none of your other disks have a folder named "disk3" on them. The "disk3" folders in user and user0 are just a reflection of the "disk3" folder on your disks.

 

So, what was the issue you had trying to get into that folder in mc?

 

What do you get at the command line with this?

ls -lah /mnt/disk3/disk3

 

1 hour ago, tunetyme said:

It is empty.

 

The problem is that I can't delete the folder.

 

The folder did not show up on disk3 until I rebooted the server.

When you say it is empty, is that based on the command I asked you to enter, or is it based on some other view you have of the folder?

 

How are you trying to delete the folder? You can't delete a User Share from Windows, for example. That would be true whether you are working with unRAID or some other network share on some other computer.

 

But you should be able to delete it from mc or the unRAID webUI.

 


Hey guys,

 

I need some help converting my last reiserfs disk.  All the rest have gone fine, but this one is behaving oddly.  I'm copying Disk 8 (4TB) to Disk 13 (4TB) using the rsync command from the wiki page.  As you can see, it has already copied more data than the array shows the original drive using.  I tried this once already and it filled up my 4TB drive and wasn't done yet.  The main page tells me the drive is only using 2.8TB, but it fills up a 4TB drive trying to do the copy?  What's going on?  Below is a pic and enclosed are the diagnostics.

 

 

Array.JPG

tower-diagnostics-20170404-2052.zip

7 hours ago, dougnliz said:

 I'm copying Disk 8 (4TB) to Disk 13 (4TB) using the rsync command in the wiki page.  As you can see the it's already copied more data than the array shows the original drive having

 

Any vdisks or other sparse files on disk8? If you're not sure, add --sparse to the rsync command; it won't hurt, and if there really are sparse files they will use the same space on the destination.
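To illustrate the point (the demo below uses throwaway directories rather than real disk paths, so it can be tried safely): adding --sparse to a mirror command such as `rsync -av --sparse /mnt/disk8/ /mnt/disk13/` keeps sparse regions sparse on the destination. A minimal sketch:

```shell
# Demo of why --sparse matters: a sparse file's apparent size can far
# exceed what it actually occupies on disk. Without --sparse, rsync
# writes out the full apparent size on the destination.
src=$(mktemp -d); dst=$(mktemp -d)
truncate -s 100M "$src/vdisk.img"                 # 100 MB apparent, ~0 allocated
rsync -a --sparse "$src/" "$dst/"
echo "apparent:  $(stat -c %s "$dst/vdisk.img") bytes"
echo "allocated: $(( $(stat -c %b "$dst/vdisk.img") * 512 )) bytes"
rm -rf "$src" "$dst"
```

On the destination the allocated size stays near zero, while without --sparse it would balloon to the full 100 MB.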

 

If there aren't any sparse files, run reiserfsck on disk8 (after upgrading to v6.3.3, as there are known problems with the version included in all the other v6.3 releases).


Thanks for the info.  I'm trying the --sparse option first, but I really don't think that's the issue.  I certainly didn't put any sparse files there, but I suppose some process could have.

 

I have a feeling it goes back to some of the corruption issues I had last year with the SAS2LP card and the reiserfsck is going to be the solution, but we'll see.

 

Thanks again,

 

Doug


Well, the rsync --sparse run finished.  The old drive has 2.8TB used and the new one has 2.9TB used.  The rsync command said it copied 4.2TB of data.  Afterwards I updated to 6.3.3 and ran reiserfsck on the old drive; it checked out fine, no corruption.
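One way to sanity-check a mismatch like this is to compare apparent size against allocated size; a large gap points at sparse files. A sketch below (on the server you would point `du` at /mnt/disk8 rather than a throwaway directory):

```shell
# Compare apparent size vs. allocated size for a directory tree.
# A large gap between the two readings indicates sparse files.
d=$(mktemp -d)
truncate -s 50M "$d/sparse.bin"            # sparse: 50 MB apparent, ~0 on disk
head -c 1M /dev/zero > "$d/real.bin"       # fully allocated 1 MB
du -sh --apparent-size "$d"                # counts apparent sizes (~51M)
du -sh "$d"                                # counts allocated blocks (~1M)
rm -rf "$d"
```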

 

I'm not sure what to do next.  Do I just go ahead and finish my conversion?

 

Thanks,

 

Doug


I am happy to report that I have finally finished converting all my disks to XFS format. This thread has been of great use, along with buying new 8TB drives. I did not use the recommended rsync command but opted for the unbalance plugin. My transfer speeds were in the neighborhood of 55-65 MB/s, all while keeping a valid parity, so I am happy. 

Thank you all for the help

On 4/11/2017 at 7:25 AM, Harro said:

I am happy to report that I have finally finished converting all my disks to XFS format. […]

 

Glad it all worked out.  I'm ready to convert mine, but I'm not sure which procedure to follow.  I am leaning towards the unbalance approach, like you did.

 

Now that you are done, can you share the steps you used?  Rob said he is planning to write this up, but hasn't yet; perhaps he can leverage what you write. :)

 

9 hours ago, Switchblade said:

 

Glad it all worked out. […] Now that you are done, can you share the steps you used?

 

 

 

I started with the drive with the least amount of data on it and used unbalance to spread that data out among the other drives. I chose the share folders to move on that drive. I do not use disk shares, only user shares. With that said, I set the setting in unbalance to 50% instead of the 450 MB size. I ran the calculation and could see where all the files would go and to which drives. Happy with that, I hit the move button. It was taking about 4-5 hours for each TB to move, depending on how much activity was going on with the server (e.g. Plex or Kodi streaming). I would normally start my move at night and by morning it was complete.

 

Once the move in unbalance was done, I went to the Main array tab and looked at the disk I had just emptied to make sure no files were left. My user share folders were still on the disk but nothing inside of them. Make sure to check them, since at one point the mover had moved files into those share folders.

Once satisfied that everything was empty and only the user share folders remained, I stopped the array, set the file system on the empty disk to XFS, and restarted the array. Once the array was started, the Fix Common Problems plugin pops up a message in red; ignore that. Down by the stop/start button for the array you will see the empty disk on the left side and a format box. Check the format box and the disk will now be formatted to XFS. It takes a few minutes. Once done, I re-ran the Fix Common Problems plugin and made sure all was good. Now you have an empty disk formatted in XFS, and you can use unbalance again to continue until you are done with your array. Parity was valid throughout the whole process.
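A command-line double-check of the "make sure no files were left" step (my suggestion, not part of the procedure above; the demo uses a throwaway directory, while on the server you would run `find /mnt/diskN -type f` against the emptied disk):

```shell
# Empty share folders are fine; any remaining *files* mean the move is
# not complete. Empty output from `find -type f` means safe to format.
d=$(mktemp -d)
mkdir -p "$d/Movies" "$d/TV"               # leftover (empty) share folders
leftovers=$(find "$d" -type f)
[ -z "$leftovers" ] && echo "no files left - safe to format"
rm -rf "$d"
```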

 

My conversion took roughly 2 weeks to finish 17 drives. I did not stay at it all the time though.

 

Good luck.

Edited by Harro
  • 2 weeks later...

hey guys - I have purchased a new 8TB drive to ultimately become my second parity drive.  I read that it is best to add the second parity drive after the file system conversion.  That means I have a new spare drive that I could use to help with the conversion.  I will need this drive freed up at the end, again to be my second parity drive.  I have eight 6TB drives to convert and my current parity is 8TB.  I'm using about half of the 48TB, so I have lots of free space to move stuff around and I don't care where my data ends up.  I only have one specific disk share, for TimeMachine, and I can just delete and recreate that later.

 

I was going back and forth about which process to use, considering the new extra drive and how much time this will take.  I think I saw the answer somewhere but wasn't able to find it, so here goes: can't I just Exclude the first disk to convert from all shares and use the Mover to empty the drive I want to convert?  I'm not sure if Mover will move the data off or if I have to do that manually, but that would prevent future writes.  I could then reformat the disk and Include it again at the end.  Why does that sound too easy?  I must be missing something.

 

Otherwise, I think I will follow the "Mirror each disk with rsync, preserving parity" procedure, except I would rather use mc than rsync at the command line.  Any reason I shouldn't use mc for this?

 

 

6 minutes ago, Switchblade said:

hey guys - I have purchased a new 8tb drive to ultimately become my second parity drive. […] can't I just Exclude the first disk to convert from all shares and use the Mover to empty the drive I want to convert? […] Any reason I shouldn't use MC for this?

 

 

Mover will not help with this, unless you have a very large cache disk. Mover only moves from cache to array user shares or array user shares to cache. It will not move from array disk to array disk.

 

I used mc when I did my conversion. I'm not sure if you can get it to do the verify that makes people want to use rsync instead.
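For reference (my addition; mc has no built-in equivalent that I know of): the verify that makes people prefer rsync can also be run after the fact, as a checksum dry run. Any file it lists differs between source and destination; empty output means the copy matches.

```shell
# Verify a finished copy by checksum: -n (dry run) + -c (checksum)
# lists files whose contents differ, without changing anything.
src=$(mktemp -d); dst=$(mktemp -d)
echo same > "$src/a"; echo same > "$dst/a"
echo old  > "$src/b"; echo new  > "$dst/b"       # deliberate mismatch
rsync -rnc --out-format='%n' "$src/" "$dst/"     # lists b, not a
rm -rf "$src" "$dst"
```

On the server the two paths would be the old and new disk mounts, e.g. `rsync -rnc --out-format='%n' /mnt/disk8/ /mnt/disk13/`.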

 

I probably wouldn't bother with adding the new disk to the array, since you said you have enough space already. If you add that disk and then later remove it to use it as parity2, you will have to do a New Config and rebuild both parity disks. Might as well go ahead and make it parity2 before you start.

 

There really isn't any magic here. You just need to move everything off a drive so you can format it, then repeat for other drives. If you have enough space on other drives to allow you to empty a drive then you don't need another drive.

 

If you can empty your largest disk by moving its files to other disks, then just do the conversion in order from largest to smallest. That way you will have enough room after the first conversion to move any other drive's files without having to use multiple disks to hold the files for the next drive's conversion.


I am in the process of converting the file system format on my media server (the specs are below).  I normally use the reconstruct write (aka turbo write) method of writing to the array, but I decided to start with the read/modify/write (default) method, as I thought the extra head movement on the source drive could slow things down to the point where the default method might be faster.  I converted the first two disks using read/modify/write, following the mirroring procedure with rsync.  Here is the rsync summary of the transfer information for the second disk:

 1,449,822,135,370 Bytes      42,192,398.57 Bytes/sec

That resulted in a transfer that took more than 10 hours.  While this was going on, I researched the question of which method might be faster.  There was speculation that reconstruct write would be faster, but still slower than the speeds normally seen from that writing method.  But I could not find any real numbers, so I decided to try the reconstruct write method on the next disk.  The amount of data transferred was greater, but the time was only about eight hours.  Below is the rsync summary for that disk:

 1,704,756,941,912  Bytes      65,036,030.45 Bytes/Sec

The transfer speed was approximately 54% faster than with the default read/modify/write method!  I did observe one thing: the write speed appeared to become slower as the disk filled up.  I assume the writes at that point were moving to the inner tracks of the disk. 
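The 54% figure follows directly from the two rsync averages quoted above:

```shell
# Speed-up of reconstruct write (65,036,030.45 B/s) over the default
# read/modify/write (42,192,398.57 B/s), from the rsync summaries above.
awk 'BEGIN { printf "%.0f%% faster\n", (65036030.45 / 42192398.57 - 1) * 100 }'
# prints: 54% faster
```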

 

So if you are trying to speed up your conversion process, have a look at using reconstruct write.  The disadvantages that I can see are (1) all of the disks have to be spun up for the entire conversion process instead of just three, and (2) the speed gain may not be as great for arrays with older, smaller-capacity disks.

 

EDIT:  Another data point for reconstruct write (aka. turbo write):

 2,254,789,286,418 Bytes       72,300,171.31 Bytes/sec

Total time for data transfer of 2.27 TB of data was less than 8-1/2 hours. 

Edited by Frank1940
  • 1 month later...

I'm about to start my conversion process (RFS to XFS).

 

I thought 6.4 would bring some tool to help with this, but that isn't the case :)

 

I have a 4TB empty disk currently being precleared. 

 

I will go for the "Share based, no inclusions, preserving parity" method

 

- Move all data from a source rfs disk to the empty xfs disk

- Once the source rfs disk is empty, format it with xfs

- It becomes the new empty xfs disk

- Repeat from the top

 

The data will end up on a different disk, but I won't need to swap disks around.
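One iteration of the loop above might be sketched as follows (a sketch only, not the wiki procedure verbatim; the demo runs on throwaway directories standing in for /mnt/diskN paths, so it can be tried safely):

```shell
# One iteration of the rotate-the-empty-disk conversion. src stands in
# for the full RFS disk, dst for the empty XFS disk.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/Movies"; echo data > "$src/Movies/film.mkv"
rsync -a --remove-source-files "$src/" "$dst/"   # move files, leave dirs
find "$src" -type f | wc -l                      # 0 => source is empty
rm -rf "$src" "$dst"
```

On the real array the next step would be stopping the array, changing the emptied disk's file system to XFS, and formatting it; a plain copy without --remove-source-files also works, since the format wipes the source anyway.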

 

I'll monitor how long it takes. If it's unbearably long (I expect it to be, as experienced by Harro), I'll probably test migrating one disk with the "Share based, no inclusions, no parity" method to feel the difference in data transfer times. If it makes sense, I'll consider moving forward despite the dreaded no-parity feeling :) 

 

5 hours ago, jbrodriguez said:

I'm about to start my conversion process (RFS to XFS). […] I will go for the "Share based, no inclusions, preserving parity" method […]

 

 

Use Turbo Write mode on the copies; it should make them considerably faster with parity enabled. Moving that much data is not going to be quick, but it can be done over a period of a week or so. I would not recommend breaking parity. The whole idea of the process is to preserve it!

