Parity disk disabled...now what?



I noticed that I can no longer add files to my array. I went to the web GUI and found that my parity drive is marked "disabled".

 

When I restart, the drive shows a blue ball and is labeled "new drive".

 

What do I do now? At this point I would be happy to resync and start over.

 

I'm still on 5.0.5, if that helps.

Link to comment
9 hours ago, JDW said:

I noticed that I can no longer add files to my array. I went to the web GUI and found that my parity drive is marked "disabled".

 

When I restart, the drive shows a blue ball and is labeled "new drive".

 

What do I do now? At this point I would be happy to resync and start over.

 

I'm still on 5.0.5, if that helps.

The parity drive dropping offline should not, by itself, stop files being written to the array. It suggests there might be file system corruption on one (or more) of the drives. You should put the array into Maintenance mode and carry out a file system check (reiserfsck) on each of your drives.
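For reference, a minimal sketch of that procedure on unRAID v5, assuming four data disks numbered 1-4 in the GUI (adjust the numbers to match your array):

```shell
# Run from the unRAID console/SSH after stopping the array and then
# starting it in Maintenance mode from the web GUI.
# reiserfsck prompts for the literal word "Yes" before running,
# hence the echo. --check is read-only and makes no repairs.
for n in 1 2 3 4; do
  echo Yes | reiserfsck --check /dev/md$n
done
# Only if --check reports problems should you move on to
# --fix-fixable (or, as a last resort, --rebuild-tree).
```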

Link to comment

Thanks for the tip to run reiserfsck. It led me to the unRAID wiki article on disk checks [1].

 

I ran reiserfsck --check, which resulted in the screenshot below. Then I tried reiserfsck --rebuild-tree, which logged a bunch of statements about corrupted blocks and then hung without making further progress.

 

Should I just run preclear_disk at this point?

 

[1] https://wiki.lime-technology.com/Check_Disk_Filesystems#Drives_formatted_with_ReiserFS_using_unRAID_v5_or_later

 

 

IMG_20170610_212621.jpg

Link to comment

The command line you used looks incorrect! When using the raw device names you need to include the partition number, so you should have been using '/dev/hdb1'.

 

If the array is in Maintenance mode then you can use /dev/mdX, where X refers to the disk number in the unRAID GUI; the mdX devices already take the partition into account.
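Concretely, the two valid forms look like this (device names here are examples, not taken from your system):

```shell
# Wrong: whole-disk device with no partition number -- reiserfsck
# will not find a filesystem here:
#   reiserfsck --check /dev/hdb
# Right: raw device including the partition number:
echo Yes | reiserfsck --check /dev/hdb1
# Also right, with the array started in Maintenance mode: the md
# device for GUI disk 1, which already maps to the partition:
echo Yes | reiserfsck --check /dev/md1
```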

Link to comment

Could be an issue with the USB flash drive. The flash drive contains the authoritative record of which drives are assigned to which slots. If a slot shows blue, that often means the flash drive was not updated properly. I would remove it, insert it into a workstation, and run a chkdsk to check for and correct corruption.
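A sketch of that check, assuming the flash drive appears as E: on a Windows workstation or as /dev/sdb1 on a Linux one (substitute your own drive letter/device):

```shell
# On Windows (run from an elevated Command Prompt):
#   chkdsk E: /f
# On Linux, the unRAID flash drive is FAT-formatted, so:
dosfsck -a /dev/sdb1   # -a auto-repairs what it safely can
```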

Link to comment
  • 2 weeks later...

So I have decided to basically start over, since this is just the parity disk.

 

I took the parity drive out of the array, ran mkreiserfs, then tried to preclear_disk.

 

The first time, the array was in Maintenance mode, and preclear only made it to ~40% before becoming unresponsive (and no longer writing to the disk). Then I restarted and stopped the array completely; this time it made it to 90%. I restarted again, made sure the array was stopped, and it only made it to 40%.

 

I am kind of confused why it would do that. I initialized all 4 disks on this box without issue.

Link to comment
4 hours ago, johnnie.black said:

Not following, parity doesn't have a file system.

 

Preclear in maintenance mode?

JDW, it sounds like you went completely off track in your 2nd post. itimpi was suggesting checking the filesystem on your data disks. As johnnie.black says, parity doesn't have a filesystem to check, and there is no need to preclear a parity disk.

 

You should post a syslog; then we may want SMART reports from your disks.
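Something along these lines works for gathering those on unRAID v5 (device names are examples; check yours against the GUI):

```shell
# Capture full SMART output for each drive to the flash drive, so it
# survives a reboot and can be attached to a forum post:
for d in sda sdb sdc sdd; do
  smartctl -a /dev/$d > /boot/smart_$d.txt
done
# The syslog can be copied the same way:
cp /var/log/syslog /boot/syslog.txt
```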

 

Link to comment
