bugs181

  1. First, I appreciate you taking the time to respond. I'll attempt your solution, but I'm unsure whether it will help this situation, because I've already run xfs_repair before and I theorize that's how the XFS inodes originally got corrupted.

     As to your second point: I assume the data is still there on the parity, but I could be wrong. I think xfs_repair butchered the inodes before a parity sync was able to take place (though I can't confirm that at the moment). If that's the case, wouldn't the parity still be correct, and wouldn't a rebuild bring that data back?

     The reason is that, in this particular case, the data drives are connected to a SATA 3 backplane, which then connects to a USB 3 port multiplier (routed to an internal USB header on the motherboard). I had also planned on adding an SSD cache wired directly to a SATA port, plus more data drives. This case supports up to 5 hot-swappable data drives, but there are only 4 SATA ports total. I'm using them as follows: 2 for parity and 1 for the SSD cache, which leaves room for only 1 data drive. At most I could run 2 data drives and 1 parity drive, with 1 cache SSD. It's simply a matter of not having enough ports (I want future expandability to 8 drives total).

     Edit: I attempted your solution and the drive is still showing blank, with the exception of a couple of new folders, as described in my first post.

     Whatever the case, if absolutely necessary, I can use data recovery tools to access the deleted data on that drive. The problem is that XFS file information is stored in the inodes, so it would be an absolutely horrific mess to sort through that many files without filename and directory information. The raw data is surely still there, though, as discoverable with data recovery tools.

     Here's what I did (a more cautious version of this sequence is sketched after these posts):
       - Booted unRAID
       - Stopped the array
       - Started the array in maintenance mode
       - $ xfs_repair /dev/md1
       - $ mount /dev/md1 /mnt/recovery
       - $ ls /mnt/recovery
  2. Somehow, one of my data drives lost all of its data! I theorize it was either a parity check problem or xfs_repair.

     Drive setup:
       - Parity 1
       - Parity 2
       - Disk 1
       - Disk 5

     Here are the steps that I took to get to this state:
       - Turned on the NAS the other day; Disk 1 was showing as unmountable
       - Restarted, same problem
       - Searched the forums and found xfs_repair /dev/md1; it complained about the log file
       - Ran xfs_repair -vL /dev/md1
       - The drive mounts but shows as empty
       - I looked through my other data drives to see if the data was there
       - I went back to Disk 1, and new folders were appearing there as I navigated Disk 5 via SMB
       - I immediately stopped the array and shut down (didn't want to cause more data loss)
       - Yes, I freaked out and forgot to capture the syslog. Why doesn't unRAID do this automatically at shutdown? Why isn't this a feature of unRAID? (A manual workaround is sketched after these posts.)
       - Started back up for another attempt
       - Disk 1's ID wasn't matching the old ID, so unRAID complained the disk was missing (probably a port multiplier problem)
       - The emulated drive doesn't even show up now
       - After reading the forums for a while, I decided to try mounting Disk 1 at /mnt/recovery just in case something was wonky. The drive was showing up as /dev/sdb:
         $ mount -o nouuid /dev/sdb1 /mnt/recovery
         (I tried many variations and this was the only one that worked)
       - The mount point still displays a couple of the empty folders from earlier
       - I shut down again and popped in a UFS Explorer Emergency Recovery USB stick
       - Scanned the drive's XFS inodes; it shows a couple of the new empty folders
       - I did a raw scan using IntelliRAW (or whatever it's called) and can see a couple hundred files (gif, txt, etc.)
       - At this point I'm assuming my files are there, but the inode log is somehow butchered

     Notes:

     I've known for a while that my unRAID setup is far from optimal. For example, I've been using a port multiplier without SMART for my data drives. I had plans to look into alternatives when life wasn't so busy. I theorize that xfs_repair lost my data.

     It's actually been more than a month (with weekly scheduled parity checks) since I last looked through my data. It generally acts as a read-only media NAS, with other folders often being work-related backups and personal projects (which I am most desperately trying to salvage).

     I understand that vital unRAID data should be backed up off-site. Funny story and great timing: I used to have an Amazon Cloud Drive account that backed up my NAS, but they recently stopped offering unlimited capacity (now maxing out at $60 per 1 TB), and I've been looking at alternatives, but life got in the way. So meanwhile, all of my data was deleted from Amazon since I closed the account.

     unRAID version: 6.3.5

     Attached is the diagnostics file with Disk 1 in the proper slot but showing an improper ID.
     Attached is the hardware profile XML file.
     Attached is the syslog showing an XFS corruption warning for Disk 1 and a mount error:
       mount: wrong fs type, bad option, bad superblock on /dev/md1

     Similar experience from another unRAID user here: https://forums.lime-technology.com/topic/51819-solved-disk-disappeared-then-reappeared-empty-how-i-recovered-my-data-xfs/

     I'm under the assumption that my best course of action right now is to wait for a new HDD to arrive in the mail, disable the array, replace Disk 1 with the new HDD, re-enable the array, and hope that my dual parity still has the data. Before I do that, I thought I'd let the pros help me out here so I don't dig myself a bigger hole. I truly appreciate any help moving forward.

     nas-diagnostics-20170721-1005.zip
     nas-syslog-20170721-1008.zip
     HardwareProfile.xml
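
A minimal sketch of a more cautious version of the repair-and-inspect sequence from the first post, assuming the array is started in maintenance mode and Disk 1 is still addressable as /dev/md1 (the device name and the /mnt/recovery mount point are taken from the posts above; the rest is illustrative, not a confirmed recovery procedure):

  # Dry run first: report what xfs_repair would change without writing anything
  $ xfs_repair -n /dev/md1

  # If a repair is unavoidable, run it, then mount read-only so nothing further is written
  $ xfs_repair /dev/md1
  $ mkdir -p /mnt/recovery
  $ mount -o ro /dev/md1 /mnt/recovery

  # xfs_repair relinks orphaned inodes (files that lost their directory entries) into
  # lost+found at the filesystem root, named by inode number rather than original filename
  $ ls -la /mnt/recovery
  $ ls -la /mnt/recovery/lost+found

The read-only mount guards against further writes to a filesystem the parity may still need to emulate, and lost+found is where any files recovered without directory information would end up.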
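And a minimal sketch of the manual syslog/diagnostics capture mentioned in the second post, assuming a stock unRAID 6.x console where the flash drive is mounted at /boot (the output filename is illustrative):

  # Copy the live syslog to the flash drive so it survives the shutdown
  $ cp /var/log/syslog /boot/syslog-before-shutdown.txt

  # unRAID 6.x also ships a command-line diagnostics collector, which writes a zip
  # (like the nas-diagnostics-*.zip attached above) to the flash drive
  $ diagnostics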