Re: Format XFS on replacement drive / Convert from RFS to XFS (discussion only)



Would transferring a full drive to an empty drive via rsync take about the same time as a parity check (write phase) on said drive size? 

 

As there are different factors between different setups, I figured if the answer to this question is yes, I'd have a good idea of how long migrating a drive will take me.  Ideally I'd like to do a drive every night while sleeping, but my guess is it will take much longer than that.

Link to comment

Would transferring a full drive to an empty drive via rsync take about the same time as a parity check (write phase) on said drive size? 

 

As there are different factors between different setups, I figured if the answer to this question is yes, I'd have a good idea of how long migrating a drive will take me.  Ideally I'd like to do a drive every night while sleeping, but my guess is it will take much longer than that.

I would expect it to take longer, as on a parity-protected array each write actually involves two reads and two writes.  You can speed things up considerably by running without a parity disk while doing this and then regenerating parity afterwards, but doing that leaves you susceptible to data loss on a disk failure.
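The two-reads-two-writes figure comes from the read-modify-write parity update: the old data and old parity are read, the new parity is computed by XOR, and the new data and new parity are written back. A toy sketch of the arithmetic (the hex byte values are made up for illustration):

```shell
# Read-modify-write parity update for one byte position.
# Overwriting D_OLD with D_NEW requires reading the old data byte and
# the old parity byte (two reads), then writing the new data byte and
# the new parity byte (two writes).
P_OLD=0xB6   # current parity byte (made-up value)
D_OLD=0x3C   # byte currently on the data disk (made-up value)
D_NEW=0x5A   # byte being written (made-up value)
# new parity = old parity XOR old data XOR new data
P_NEW=$(( P_OLD ^ D_OLD ^ D_NEW ))
printf 'new parity byte: 0x%02X\n' "$P_NEW"
```

Because every array write pays this double round trip, copying into a parity-protected array runs well below the raw sequential speed of the drives.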
Link to comment

Does anyone know whether the file usage should be similar after a transfer RFS & XFS using the following?

 

rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/

 

I'm seeing a 3TB drive with ~2.25TB filled in RFS become 2.6TB on XFS.

Also: 2TB drive with ~1.5TB filled in RFS become 1.9TB filled on XFS.

 

And sorry, another question...  How does cp & rsync handle hard links?

Link to comment

RFS and XFS do use different amounts of space - sometimes XFS uses a bit more but usually it is less.

 

You can use Windows properties from the Samba share and get an exact number of bytes used by the files. These should match if you copy but don't move/remove the source files.
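If you would rather check from the console than from Windows, `du` with apparent-size reporting gives the same exact byte totals; plain `du` (allocated blocks) will differ between RFS and XFS even when the content is identical. A minimal sketch using a temporary file (on the server you would point it at the real mounts, e.g. `du -sb /mnt/diskX /mnt/diskY`):

```shell
# Compare exact content bytes of an original and its copy; these should
# match even when two filesystems allocate different block counts.
SRC=$(mktemp -d); DST=$(mktemp -d)
head -c 1048576 /dev/zero > "$SRC/movie.mkv"   # 1 MiB sample file
cp -a "$SRC/movie.mkv" "$DST/movie.mkv"
# -b reports apparent size (bytes of content), not allocated blocks
SRC_BYTES=$(du -b "$SRC/movie.mkv" | cut -f1)
DST_BYTES=$(du -b "$DST/movie.mkv" | cut -f1)
echo "original: $SRC_BYTES bytes, copy: $DST_BYTES bytes"
rm -rf "$SRC" "$DST"
```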

Link to comment

Thanks bjp999 & gundamguy.  Sorry for the late reply...  This explains a lot. 

 

It turns out that a good deal of my space was consumed by files being copied over twice due to hard links.  I've been organizing my movies two different ways, by title and by genre, using hard links (not symlinks, because when I used symlinks Plex indexed both directories despite me only pointing it at one).

 

After reading the man pages, it appears that I definitely should be using the -H flag and perhaps even the -X.  Phew!  I thought I would have to do some file deletions before rsync-ing first  :D.

 

thanks again for the help!

-alex 

Link to comment

I am moving from Unraid 5 to 6 as well, and have very limited hard drive space to use as temporary storage.  Unfortunately I don't have a drive equal to the largest one in the array (2TB).  I only have 2 x 1TB external drives.  Is it possible to do an rsync command that will only do 1TB worth of data, or will I have to manually split and copy the directories over?  Is this the best way forward given my situation?

 

I want to start from fresh in 6.  Once data is copied off a drive and put in to the new array, I will reformat as XFS.  What's the best way to build the new server, should I add drives one-by-one or add them all and then calculate parity?

 

Thanks guys.

 

Link to comment

I am moving from Unraid 5 to 6 as well, and have very limited hard drive space to use as temporary storage.  Unfortunately I don't have a drive equal to the largest one in the array (2TB).  I only have 2 x 1TB external drives.  Is it possible to do an rsync command that will only do 1TB worth of data, or will I have to manually split and copy the directories over?  Is this the best way forward given my situation?

 

I want to start from fresh in 6.  Once data is copied off a drive and put in to the new array, I will reformat as XFS.  What's the best way to build the new server, should I add drives one-by-one or add them all and then calculate parity?

 

Thanks guys.

 

I think in that case you have to manually split and copy the directories over. I do not think there is any way to tell rsync to copy only a set amount of data.

Link to comment

rsync -av --progress --remove-source-files /mnt/disk1/ /mnt/disk6/

I'm currently running this process but it doesn't seem to be removing the source files off of disk1 after they're transferred to disk6. Is that something it does as each file is copied or does it wait until all the files have transferred and then do a compare before deleting all the files in succession?

 

Edit: Nevermind, it's deleting the files off disk1 now, it just seems to be on a ~10 minute delay after the file has been transferred.
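For what it's worth, `--remove-source-files` deletes each file from the source as soon as that file has finished transferring; it never removes directories, so an empty folder skeleton is left behind. A self-contained demonstration (temp dirs stand in for the real disks):

```shell
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir "$SRC/season1"
echo ep1 > "$SRC/season1/ep1.mkv"
# each file is removed from the source after its transfer completes;
# the directories themselves are left behind, empty
rsync -a --remove-source-files "$SRC/" "$DST/"
FILES_LEFT=$(find "$SRC" -type f | wc -l)
DIRS_LEFT=$(find "$SRC" -mindepth 1 -type d | wc -l)
echo "source files left: $FILES_LEFT, empty dirs left: $DIRS_LEFT"
rm -rf "$SRC" "$DST"
```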

Link to comment

RFS and XFS do use different amounts of space - sometimes XFS uses a bit more but usually it is less.

 

You can use Windows properties from the Samba share and get an exact number of bytes used by the files. These should match if you copy but don't move/remove the source files.

 

I just finished converting 9 data drives from RFS to XFS. I saved 1.5 to 2.5GB on every disk. Most disks were media files but 2 of them had hundreds of thousands of small files. I think the space savings are just down to file system differences.

 

Data transfer to and from the array is much more consistent now. The transfer rate is probably about the same, but the consistency makes it seem faster. I'm glad I took the time to do this; it was worth it.

 

A near full 3TB drive (50GB free) took about 19 hours. This was pretty consistent too. I used this rsync method:

 

rsync -av --progress --remove-source-files /mnt/disk6/ /mnt/disk5/
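For a rough sanity check on that figure, moving about 2950 GB in 19 hours works out to an average in the low-40s of MB/s (decimal units, and assuming the drive really held ~2950 GB):

```shell
DATA_GB=2950   # ~3TB drive with 50GB free (assumed figure)
HOURS=19
# decimal units: 1 GB = 1000 MB; integer arithmetic, so this truncates
MB_PER_SEC=$(( DATA_GB * 1000 / (HOURS * 3600) ))
echo "average rate: ~${MB_PER_SEC} MB/s"
```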

 

Thanks to everyone who helped with their ideas on this conversion.

 

Gary

Link to comment

RFS and XFS do use different amounts of space - sometimes XFS uses a bit more but usually it is less.

 

You can use Windows properties from the Samba share and get an exact number of bytes used by the files. These should match if you copy but don't move/remove the source files.

 

I just finished converting 9 data drives from RFS to XFS. I saved 1.5 to 2.5GB on every disk. Most disks were media files but 2 of them had hundreds of thousands of small files. I think the space savings are just down to file system differences.

 

Data transfer to and from the array is much more consistent now. The transfer rate is probably about the same, but the consistency makes it seem faster. I'm glad I took the time to do this; it was worth it.

 

A near full 3TB drive (50GB free) took about 19 hours. This was pretty consistent too. I used this rsync method:

 

rsync -av --progress --remove-source-files /mnt/disk6/ /mnt/disk5/

 

Thanks to everyone who helped with their ideas on this conversion.

 

Gary

 

Ok, so on the server console you simply type that command?, where in your example, you are moving the files from disk6 to disk5?

Link to comment

I did it through a telnet session and used screen just in case I lost my telnet connection. If you have a monitor and keyboard attached directly to your unRaid server, screen isn't needed.

 

There is info on using screen in the wiki here:

 

http://lime-technology.com/wiki/index.php/Configuration_Tutorial#Install_Screen_without_using_UnMENU

 

Yes, in that example code, I was moving files from disk6 to disk5.

 

Gary

Link to comment

I currently have 14 disks + Parity disk + cache disk

 

15 disks (14 data + parity) are installed in three 5-in-3 IcyDock enclosures and the cache disk is internal to the server

 

I have just purchased and precleared a new hard disk to allow me to change the whole array to XFS, at the moment the extra disk is loose wired to the motherboard

 

I have worked out a plan to copy data between disks, but realise that this is going to leave me with 16 disks when complete, when I can only house 15 disks

 

When I get the very last disk, does this have to be part of the array, as I will be formatting it completely with XFS?

 

Can I avoid installing the last disk?

 

Also, I am confused about the following items:

 

9 - If there are any comparison failures, stop here and ask for help

 

10 - Now is the good time to move the files in the "t" directory to the root on [dest]. I do this with cut and paste from Windows explorer.

 

11 - Stop the array (no need to delete anything from the [source])

 

At point 9 I have source and destination disks with the same data on them. Shouldn't the files on the source disk be deleted before the files are moved from the "t" directory to the root on the destination disk? Otherwise I am going to have two disks with the same data, which I guess will confuse unRAID?

Link to comment

My server is maxed out on 2tb drives (parity + 9 data + cache) and don't have the space to add a spare drive to help with the FS migration. My cache drive houses my docker apps etc so curious on how I could proceed?

 

unRaid 6.0-beta14b.

 

If you don't have 2TB free on your server, you could always copy 2TB of the data to a USB device temporarily during the conversion

Link to comment

My server is maxed out on 2tb drives (parity + 9 data + cache) and don't have the space to add a spare drive to help with the FS migration. My cache drive houses my docker apps etc so curious on how I could proceed?

 

unRaid 6.0-beta14b.

You do not mention how much spare space you have on the array.  If you have enough, then it is possible that moving data between drives could free up a drive to start the migration.

 

If you do not have the free space then maybe now is not the right time to do the migration?  You might want to wait until you intend to install larger drives (which would give you the space you need for the migration).

 

As was mentioned the alternative is to temporarily store files external to the unRAID system to free up the required space.

Link to comment

My server is maxed out on 2tb drives (parity + 9 data + cache) and don't have the space to add a spare drive to help with the FS migration. My cache drive houses my docker apps etc so curious on how I could proceed?

 

unRaid 6.0-beta14b.

You do not mention how much spare space you have on the array.  If you have enough, then it is possible that moving data between drives could free up a drive to start the migration.

 

If you do not have the free space then maybe now is not the right time to do the migration?  You might want to wait until you intend to install larger drives (which would give you the space you need for the migration).

 

As was mentioned the alternative is to temporarily store files external to the unRAID system to free up the required space.

The current drives don't have enough free space to swallow another drive (not even close). I have a new 4tb to replace a 2tb data drive but am currently waiting on the parity expansion to finish. I guess what I'm confused about is whether, if I replaced an existing data drive with a larger one and rebuilt it, it would still have the original FS on it. Think I need to do some more reading :)

Link to comment

The current drives don't have enough free space to swallow another drive (not even close). I have a new 4tb to replace a 2tb data drive but am currently waiting on the parity expansion to finish. I guess what I'm confused about is whether, if I replaced an existing data drive with a larger one and rebuilt it, it would still have the original FS on it. Think I need to do some more reading :)

Yes - if you replace a drive with a larger one then the rebuild that takes place gives you the same file system - just with more space.  It does not allow you to change the file system type at the same time - as a few who thought it might have found to their cost :)
Link to comment

Thanks so much for the original how to, and the subsequent suggestions.

 

I had been on beta 6 since it came out. Everything worked and I stayed with it. But just ran out of space and needed to add a new drive. I thought this would be a good time to upgrade to beta 14b and convert all my drives to XFS while adding the new drive.

 

I am using rsync -av without the --remove-source-files option the first time, and will add that option on a second pass, as that is supposed to be much quicker and carry less of a chance of getting a red ball or corrupt files.

 

I did notice something strange, though. While transferring the big files, some files transfer at 29-30MB/s and some transfer at 55MB/s. Nothing in between. I don't get why there is a difference. See the attached screenshot. I guess about two thirds of the big files go at 29 and about a third go at 55. Any ideas? I am really curious.

 

Thanks

 

PS. I am transferring from a 2TB drive that was almost full (55GB free) to a 3TB drive. My parity is a 3TB drive and all other existing drives are 2TB or smaller.

[Attachment: Capture.JPG - screenshot of the transfer rates]

Link to comment

Quick question.

 

Have managed to copy a drive using rsync.

I wanted to have a quick look to make sure everything is gone.

I shared the drive and looked at the share in Windows (sorry, just a novice Linux guy), and everything is still there.

The main tab in unraid says I only have 44MB used so unraid believes everything is gone.

How do I verify before I format the drive?

 

Thanks

Link to comment

Quick question.

 

Have managed to copy a drive using rsync.

I wanted to have a quick look to make sure everything is gone.

I shared the drive and looked at the share in Windows (sorry, just a novice Linux guy), and everything is still there.

The main tab in unraid says I only have 44MB used so unraid believes everything is gone.

How do I verify before I format the drive?

 

Thanks

 

Did you dig down below the top level into the folders where the data you wanted to copy was stored? I think --remove-source-files does not delete folders, just files, so you should have a bunch of empty folders left over. I suspect it only appeared that the data was still there because the (now empty) folders were still there.

 

If that's not it we can try something else to verify.
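To make that concrete, counting regular files tells you whether anything real is left, and `find -empty -delete` clears the leftover folder skeleton (which --remove-source-files does not touch). A self-contained sketch (a temp dir stands in for the emptied source disk):

```shell
SRC=$(mktemp -d)
mkdir -p "$SRC/Movies/Action"   # leftover empty folder skeleton
FILES_LEFT=$(find "$SRC" -type f | wc -l)
echo "regular files remaining: $FILES_LEFT"
# -delete implies depth-first traversal, so nested empty dirs go in one pass
find "$SRC" -mindepth 1 -type d -empty -delete
DIRS_LEFT=$(find "$SRC" -mindepth 1 -type d | wc -l)
echo "directories remaining: $DIRS_LEFT"
rm -rf "$SRC"
```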

Link to comment

Let's just say I read through all 5 pages of questions and help, and my speed reading was awesomely fast, but I failed to catch the part about adding a new drive that formats as XFS when all of the other drives are ReiserFS - can they coexist?

Yes - you can have any mix of the supported file systems.
Link to comment
