Help! Disk1 data completely gone



Somehow, one of my data drives lost all of its data! I theorize it was either a parity check problem or xfs_repair.

 

Drive Setup:

  • Parity 1
  • Parity 2
  • Disk 1
  • Disk 5

 

Here are the steps that I took to get to this state:

  • Turned on NAS the other day
  • Disk 1 was showing as unmountable
  • Restarted, same problem
  • Searched forums and found xfs_repair /dev/md1
  • It complained about the log file
  • Ran xfs_repair -vL /dev/md1
  • Drive mounts but shows that it's empty
  • I look through my other data drives to see if data is there
  • I go back to Disk 1, and new folders are appearing there as I navigate Disk5 via SMB
  • I immediately stop the Array, Shutdown (don't want to cause more data loss)
  • Yes, I freaked out and forgot to capture the syslog. Why doesn't unRAID do this automatically at shutdown?
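On the syslog question: unRAID keeps /var/log in RAM, so it's gone after a reboot unless you save it somewhere persistent first. A minimal sketch of a helper that copies it to the flash drive; the paths are assumptions based on unRAID defaults (/boot is the flash drive and survives reboots):

```shell
# Sketch of a "save the syslog before shutdown" helper, since unRAID keeps
# /var/log in RAM and loses it on reboot. Paths are assumptions based on
# unRAID defaults: /boot is the flash drive and persists across reboots.
SYSLOG=${SYSLOG:-/var/log/syslog}
DEST=${DEST:-/boot/logs}

save_syslog() {
    mkdir -p "$DEST"
    cp "$SYSLOG" "$DEST/syslog-$(date +%Y%m%d-%H%M%S).txt"
}

# Call save_syslog manually (or from a shutdown script) before powering off.
```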

 

  • Start back up for another attempt
  • Disk1's ID wasn't matching the old ID, so unRAID complained the disk was missing (probably a port multiplier problem)
  • Emulated drive doesn't even show up now

 

  • After reading the forums for a while, I decided to try mounting Disk1 at /mnt/recovery just in case something was wonky.
  • Drive was showing up as /dev/sdb
  • $ mount -o nouuid /dev/sdb1 /mnt/recovery (tried many variations and this was the only one that worked)
  • Mount point still displays a couple of the empty folders from earlier.

 

  • I shut down again and pop in a UFS Explorer Emergency Recovery USB stick
  • Scan the drive's XFS inodes and it shows a couple of the new empty folders
  • I do a raw scan using Intelliraw (or whatever) and can see a couple hundred files (gif, txt, etc.).
  • At this point I'm assuming my files are there, but the inode log is somehow butchered.


Notes: 

  • I've known for a while that my unRAID setup is far from optimal. For example, I've been using a Port Multiplier without SMART for my Data drives. I had plans to look into alternatives when life wasn't so busy.
     
  • I theorize that xfs_repair lost my data. It's been more than a month (with weekly scheduled parity checks) since I've actually looked through my data. The NAS generally acts as a read-only media server, with other folders being work-related backups and personal projects (which I am most desperately trying to salvage).
     
  • I understand that vital unRAID data should be backed up off-site. Funny story and great timing: I used to have an Amazon Cloud Drive account which backed up my NAS, but they recently stopped offering unlimited capacity and now max out at $60 per 1 TB. I've been looking at alternatives, but life got in the way. Meanwhile, all of my data was deleted from Amazon since I closed the account.
     
  • unRAID version: 6.3.5
  • Attached is the Diagnostics file with Disk1 in the proper slot but showing improper ID
  • Attached is the Hardware Profile XML file
  • Attached is the syslog showing an XFS corruption warning for Disk1 and a mount error:
      mount: wrong fs type, bad option, bad superblock on /dev/md1,
     
  • Similar experience from another unRAID user here:
    https://forums.lime-technology.com/topic/51819-solved-disk-disappeared-then-reappeared-empty-how-i-recovered-my-data-xfs/

 

I'm under the assumption that my best course of action right now is to wait for a new HDD to come in the mail, stop the array, replace Disk 1 with the new HDD, restart the array, and hope that my dual parity still has the data. Before I do that, I thought I'd let the pros help me out here so I don't dig myself a bigger hole.

I truly appreciate any help moving forward.
 

nas-diagnostics-20170721-1005.zip

nas-syslog-20170721-1008.zip

HardwareProfile.xml

Edited by bugs181
Link to comment

Difficult to know where to start here... you should have asked for help much earlier.

 

I'm not clear on all the steps you did or if your parity is still valid.

 

7 hours ago, bugs181 said:

I'm under the assumption that my best course of action right now is to wait for a new HDD to come in the mail, disable array, replace the Disk 1 with the new HDD, re-enabled Array, and hope that my Dual Parity still has the data.

 

Not usually; parity can't fix filesystem corruption. But since I'm not sure what you did, you may try this: disk1 is currently disabled and unmountable, but you don't need to rebuild it to see if there's any fixable data there. Just start the array in maintenance mode and run xfs_repair on md1 again; whatever data comes up (or doesn't) is the same you'll get after rebuilding.
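The suggested sequence could be sketched like this; it's a non-destructive dry run by default (DEV and the dry-run wrapper are illustrative, not unRAID-specific tooling), and assumes the array is started in maintenance mode so /dev/md1 exists but is not mounted:

```shell
# Sketch of the suggested check, assuming the array is in maintenance mode
# so md1 exists but is not mounted. DRY_RUN=1 (the default here) only prints
# the commands; set DRY_RUN=0 to actually run them.
DRY_RUN=${DRY_RUN:-1}
DEV=${DEV:-/dev/md1}

run() { echo "+ $*"; [ "$DRY_RUN" = "1" ] || "$@"; }

run xfs_repair -n "$DEV"   # -n: check only, report problems, change nothing
run xfs_repair "$DEV"      # actual repair, only if the -n output looks sane
```

Running the read-only `-n` pass first shows what xfs_repair intends to change before committing to it.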

 

P.S. Is there a reason you're using 2 USB disks when you still have 4 available SATA ports? Although supported, USB disks are not recommended as array disks; in this case, besides SMART not working, the serial number for those disks is not correctly displayed, and that can cause a number of problems, like the disk not being correctly identified.

Link to comment
38 minutes ago, johnnie.black said:

just start the array in maintenance mode and run xfs_repair on md1 again, whatever data comes up (or doesn't) it's the same you'll get after rebuilding.

 

First, I appreciate you taking the time to respond. I'll attempt your solution, though I'm unsure it'll help this situation because I've already run xfs_repair before, and I theorize that's how the XFS inodes originally got corrupted.


As to your second point: I assume the data is still there on the parity but could be wrong. I think xfs_repair butchered the inodes before a parity sync was able to take place (although I can't confirm at the moment). If that's the case, wouldn't the parity still be correct? And would a rebuild successfully bring that data back?

 

38 minutes ago, johnnie.black said:

P.S. Is there a reason you're using 2 USB disks when you still have 4 available SATA ports?


The reason is that, for this particular case, the data drives are connected to a SATA 3 backplane which then connects to a port multiplier over USB 3 (routed to an internal USB header on the motherboard). I had also planned on adding an SSD cache routed directly to a SATA port, plus more data drives. This particular case supports up to 5 hot-swappable data drives, but there are only 4 SATA ports total. I'm using them as follows: 2 for parity and 1 for the SSD cache, which leaves room for only 1 data drive. Or at most, 2 data drives and one parity, with one cache SSD. It's simply a matter of not enough ports (future expandability of 8 drives total).
 

Edit: Attempted your solution and it's still showing blank, with the exception of a couple of new folders, as described in the first post. Whatever the case, if absolutely necessary, I can use data recovery tools to access the deleted data on that drive. The problem is that the XFS metadata (filenames and directory structure) appears to be lost, so it would be an absolutely horrific mess to sort through that many files without filename and directory information. The raw data is surely there though, as discoverable using data recovery tools.

Here's what I did:
- Booted unRAID

- Stopped Array
- Started array in maintenance mode
- $ xfs_repair /dev/md1

- $ mount /dev/md1 /mnt/recovery

- $ ls /mnt/recovery
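One thing that might be worth checking after the steps above: when xfs_repair recovers files whose directory entries were lost, it usually reconnects them under a lost+found directory at the root of the filesystem, named by inode number. A quick sketch (MNT is an assumption matching the mount point used above):

```shell
# Sketch: after xfs_repair, files whose directory entries were lost are
# usually reconnected under lost+found at the filesystem root, named by
# inode number. MNT is an assumption matching the mount point used above.
MNT=${MNT:-/mnt/recovery}

list_orphans() {
    if [ -d "$MNT/lost+found" ]; then
        find "$MNT/lost+found" -type f
    else
        echo "no lost+found under $MNT"
    fi
}
```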
 

Edited by bugs181
Added recovery attempt steps
Link to comment
6 hours ago, bugs181 said:

I think the xfs_repair butchered the inodes before a Parity sync was able to take place (although can't confirm at the moment). If that's the case, wouldn't the Parity still be correct? And a rebuild successfully bring that data back?

 

I've never seen xfs_repair delete an entire disk (though it's certainly possible), but parity is real-time: unless something is not working as it should, or you did something to the disk outside the array, it will always reflect the current data on that disk. And since the current disk1 (md1) is being emulated, whatever is there is the result of current parity.
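The "parity is real-time" point can be seen with a toy single-parity XOR sketch (dual parity adds a second, different calculation, but the principle is the same): parity is recomputed on every write, so it always encodes the current contents of the data disks, not an older snapshot.

```shell
# Toy single-parity XOR sketch: parity always encodes CURRENT disk contents.
d1=12; d2=7                # "contents" of disk1 and disk2
p=$((d1 ^ d2))             # parity written alongside the data

d1=0                       # something (e.g. a repair) zeroes disk1...
p=$((d1 ^ d2))             # ...and parity is updated on that very write

rebuilt=$((p ^ d2))        # rebuilding disk1 from parity + disk2
echo "$rebuilt"            # gives back the zeroed data (0), not the old 12
```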

 

6 hours ago, bugs181 said:

Attempted your solution and it's still showing blank with the exception of a couple new folders, as described in first post. Whatever the case, if absolutely necessary, I can use data recovery tools to access the deleted data on that drive.

 

If xfs_repair can't fix the emulated disk, your only option is to use data recovery software, on the original disk or on the emulated/rebuilt disk.

 

Link to comment

I can't help you with the system recovery (you're in excellent hands with johnnie.black anyway), but I can vouch for CrashPlan as a backup solution. It's $60/year for a personal, single-machine account with unlimited disk space. I back up all my Win machines to my server and include that backup path in the paths that CP is pushing to the cloud. Backup speeds are fast, they're currently holding 1.2 million files at 3.3GB disk space for me, and I've made test and real recoveries quickly and with no issues.

 

Once you get the immediate issues resolved (or maybe even sooner!) you may want to look into it. I'm using gfjardim's docker to run mine, though there are others.

Link to comment
