Potential RFS Problem during conversion to XFS


kennelm


I'm currently on the latest unRAID v6 release, and I'm working through the conversion of my ReiserFS (RFS) drives to XFS.  These drives were originally built on v4.4, IIRC. 

 

Anyway, I got 3 of 4 drives converted successfully, but the 4th drive is giving me weirdness.  It's a 1TB drive with about 700GB allocated per the unRAID WebGui.  When I rsync-ed its content to a 1TB swap drive, the copy ran out of space.  I'm thinking there might be a problem with the file system, so I'm planning to do a check and possibly a repair.  So I started reading this:

 

https://wiki.lime-technology.com/index.php?title=Check_Disk_Filesystems

 

My question is this:  If the RFS drives were originally built on v4.4, and now I've upgraded through v5 and on to the latest v6.x, which guidance do I follow?  Should I just use the v6 WebGui as described in the link?  Or do I need to drop down to the section entitled "Drives Formatted with ReiserFS using UnRAID v4?"    The wiki is a tad ambiguous on this. 
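
For what it's worth, if I end up doing the check from the command line rather than the WebGui, my understanding is that a read-only check would look something like this, run with the array started in maintenance mode (the /dev/md1 device name is my assumption for disk1):

reiserfsck --check /dev/md1    # read-only check; only move on to repair options if this reports problems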

 

Larry


Do you base the amount of data on the file system's global usage information - basically "full size minus free"? Or have you measured actual usage on the command line with "du"?
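
For example, comparing these two on the source disk would show whether the file system's accounting and the sum of the actual files agree (paths are just an example):

df -k /mnt/disk1      # usage as reported by the file system
du -skx /mnt/disk1    # usage summed over the actual files, in KB, staying on this one file system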

 

Another thing - might you have run any programs that created soft or hard links? Those can trick the copy program into either expanding links into full files or into a traversal loop.
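
Something along these lines would show whether any exist (paths again just an example):

find /mnt/disk1 -type l -ls              # list symbolic links
find /mnt/disk1 -type f -links +1 -ls    # list regular files with more than one hard link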


PWM,

 

I ran "df -k", and the source drive shows around 700GB allocated.  I did not run du.  Because of that, I was puzzled as to why the target drive was getting more data than the source!

 

I didn't think to check for hard/soft links that could be pulling in other files located elsewhere.  I guess that could explain it if rsync follows the links.  I thought rsync's default behavior was not to follow a softlink but rather to copy the softlink itself.
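
From the rsync man page as I read it, -a implies -l, so symlinks should be copied as symlinks, but hard links are only preserved with -H; without it, every hard-linked file is written out as a separate full copy on the target, which could inflate the destination. If I redo the copy I may try something like this (untested sketch):

rsync -avPXH /mnt/disk1/ /mnt/disk5/    # -H preserves hard links instead of duplicating their data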

 

Thanks for the tip.

 

Larry


OK, I ran a file system check on the RFS disk and no problems were found.   I also searched for soft links (find . -type l -ls) and found none, so links don't appear to explain how more data came out than appeared to be there.  

 

Per the wiki, I ran this rsync command to transfer the data from the original 1TB RFS disk1 to a much larger (and empty) XFS disk5: 

 

rsync -avPX /mnt/disk1/ /mnt/disk5/

 

Here is the df -k output following the rsync:

Filesystem      1K-blocks       Used            Available      Use% Mounted on

/dev/md1        976732736  742195824  234536912  76% /mnt/disk1

/dev/md5       2928835740 1266835888 1661999852  44% /mnt/disk5

 

Does anyone have any idea how disk1 could produce enough data to result in the allocation shown for disk5?   Somehow, roughly 740GB turned into nearly 1.3TB.  Could this be because the file systems are different?    
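
To narrow things down, I'll probably compare usage per top-level directory on the two disks to see where the extra space went; something like this should do it (paths assumed from the mount points above):

du -sm /mnt/disk1/*    # per-directory usage on the source, in MB
du -sm /mnt/disk5/*    # same on the target, for comparison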

 

Larry

