Somehow, one of my data drives lost all of its data! I suspect it was either a parity-check problem or xfs_repair.
Drive Setup:
Parity 1
Parity 2
Disk 1
Disk 5
Here are the steps that I took to get to this state:
Turned on NAS the other day
Disk 1 was showing as unmountable
Restarted, same problem
Searched forums and found xfs_repair /dev/md1
It complained about the log file
Ran xfs_repair -vL /dev/md1
Drive mounts but shows that it's empty
I look through my other data drives to see if data is there
I go back to Disk 1, and new folders are appearing there as I navigate Disk 5 via SMB
I immediately stop the array and shut down (don't want to cause more data loss)
Yes, I freaked out and forgot to capture the syslog. Why doesn't unRAID capture it automatically at shutdown? Why isn't this a feature?
Start back up for another attempt
Disk 1's ID wasn't matching the old ID, so unRAID complained the disk was missing (probably a port multiplier problem)
Emulated drive doesn't even show up now
After reading the forums for a while I decide to try mounting Disk 1 at /mnt/recovery just in case something was wonky.
Drive was showing up as /dev/sdb
$ mount -o nouuid /dev/sdb1 /mnt/recovery (tried many variations and this was the only one that worked)
Mount point still displays a couple of the empty folders from earlier.
I shut down again and pop in a UFS Explorer Emergency Recovery USB stick
Scan the drive's XFS inodes and it shows a couple of the new empty folders
I do a raw scan using IntelliRAW (or whatever it's called) and can see a couple hundred files (gif, txt, etc.).
At this point I'm assuming my files are there, but the inode log is somehow butchered.
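In hindsight, the steps above could have been done in a far less destructive order. Here's a sketch of what I'd try first next time, using the device paths from my setup (/dev/md1 for the array device, /mnt/disk5 as a scratch location, both assumptions you'd adjust); the guard makes it a no-op on a machine without that device:

```shell
#!/bin/sh
# Sketch of a safer diagnosis order for an unmountable XFS disk.
# Device and scratch paths are from my setup; adjust to yours.
DEV=/dev/md1
IMG=/mnt/disk5/disk1-metadump.img

if [ -b "$DEV" ]; then
    # 1. Dry run first: -n reports problems without writing anything.
    xfs_repair -n "$DEV"

    # 2. Back up the filesystem metadata before any destructive repair,
    #    so the pre-repair inode state can still be inspected later.
    xfs_metadump "$DEV" "$IMG"

    # 3. Try a read-only mount that skips log replay (norecovery),
    #    so nothing is written to the disk while inspecting it.
    mount -o ro,norecovery,nouuid "$DEV" /mnt/recovery

    # Only after all of the above would I reach for xfs_repair -L,
    # which zeroes the log and can orphan recently-written inodes.
else
    echo "device $DEV not present; skipping"
fi
```

The key point being that `-L` is the last resort, not the second command you run.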
Notes:
I've known for a while that my unRAID setup is far from optimal. For example, I've been using a Port Multiplier without SMART for my Data drives. I had plans to look into alternatives when life wasn't so busy.
I suspect that xfs_repair lost my data. It's been more than a month (with weekly scheduled parity checks) since I last looked through my data. It generally acts as a read-only media NAS, with other folders being work-related backups and personal projects (which I am most desperately trying to salvage).
I understand that vital unRAID data should be backed up off-site. Funny story and great timing: I used to have an Amazon Cloud Drive account which backed up my NAS, but they recently stopped offering unlimited capacity and now max out at $60 per 1TB. I'd been looking at alternatives, but life got in the way. So meanwhile, all of my data was deleted from Amazon when I closed the account.
unRAID version: 6.3.5
Attached is the Diagnostics file with Disk1 in the proper slot but showing improper ID
Attached is the Hardware Profile XML file
Attached is the syslog showing an XFS corruption warning for Disk1 and a mount error:
mount: wrong fs type, bad option, bad superblock on /dev/md1,
Similar experience from another unRAID user here: https://forums.lime-technology.com/topic/51819-solved-disk-disappeared-then-reappeared-empty-how-i-recovered-my-data-xfs/
I'm under the assumption that my best course of action right now is to wait for a new HDD to come in the mail, stop the array, replace Disk 1 with the new HDD, re-enable the array, and hope that my dual parity still has the data. Before I do that, I thought I'd let the pros help me out here so I don't dig myself a bigger hole.
I truly appreciate any help moving forward.
nas-diagnostics-20170721-1005.zip
nas-syslog-20170721-1008.zip
HardwareProfile.xml