[SOLVED] No cache after adding 2nd cache drive, unmountable disk present



I had a great working system: Docker, KVM, GPU passthrough, everything was awesome, until I tried to add a 2nd cache drive. When I added the 2nd drive to the cache it said both cache drives needed to be formatted, so I cancelled and tried to remove the new cache drive, but it still said my previous cache drive, the one that had been working, needed to be formatted.

 

So then I tried a new config and re-added my drives exactly as they were, and I was still getting the message that the drive needed to be formatted.

 

Since this was a test server and I had made backups of my VMs, I said OK, format. But it is still not letting me add my drive back as cache: I click Format, it says "Started, formatting...", but I wait forever and it never finishes. When I refresh the page there is still no cache and no sign of the drive that was supposed to be the cache.

 

I'll start over if I have to, but I was trying to avoid that.


Adding a second cache drive causes unRAID to create something called a cache pool. Using the btrfs file system, the drives are formatted in a way that provides data redundancy. A second cache drive will not add new capacity, but a third would. I have never used this feature, but it is explained in more detail in [THIS THREAD]

 

I am not sure what you have done, but a new config, duplicating the existing array disk assignments and trusting parity, should straighten it out and allow you to move back to one cache drive. If you attempted to format the drives as a pair and did not let it complete, you could have left the disks in an indeterminate state that is confusing to unRAID. In that case, you should be able to remove the cache disk from the array and use the preclear script with the "-z" option to zero the MBR of the disk; that should cause unRAID to forget about the cache pool. When you add it back, the cache drive should appear unformatted and then let you format it as a single disk.
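If you prefer to do that part from the command line instead of the preclear script, a minimal sketch is below. It assumes the old cache drive shows up as /dev/sdX (substitute your real device and triple-check it, since these commands destroy whatever is on the disk), and that the array is stopped:

# clear any leftover btrfs/partition signatures so unRAID forgets the pool
wipefs -a /dev/sdX1                            # wipe the filesystem signature on the partition, if one exists
dd if=/dev/zero of=/dev/sdX bs=512 count=1     # zero the MBR, same end result as preclear's "-z"

After that, assign the drive to a single cache slot and unRAID should offer to format it as a fresh, standalone cache disk.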

 

As for any data on the cache disk before you began, if recovery of that data is a priority, DO NOT do the above steps. That data would be difficult to recover, if it is possible at all.

 

 


OK, cool, thanks. Yeah, I was trying to add more space, but it doesn't work like that; it created RAID 1 as it's supposed to. Then when I tried to remove the second drive, the array was broken and it couldn't find the cache. I did get my disks back though, and it was because, although I had removed the drive from the array, I still had to set the cache back to 1 slot and had completely missed that.

  • 4 months later...



I am in the same situation as the opening poster. I added a second cache drive and then, after I realized that this wouldn't do what I wanted it to do, removed the drive again.

 

Now my first cache drive has become unmountable. I have not formatted it and have not actively done anything else. Can I get the data from the first cache drive back? Is there a way to repair whatever has happened?

 

Thanks in advance!!!

  • 1 month later...

Sorry to resurrect an old thread... but..

 

To avoid making mistakes: I have a 250GB cache drive and I want to add capacity. What would be my best route if I want to go for a 512GB cache?

 

A 256GB drive will cost me 80 to 100 euro. Would I need to add two to get the wanted 250GB extra? That would be EUR 190.

 

A single 512GB drive would also cost me around that amount, but would of course not add redundancy...

 



It is not clear to me whether you are looking just for capacity or for resilience as well.

 

The current default is that if there is more than one drive, BTRFS RAID-1 is used. With this, adding a second 250GB drive would just give resilience with no increase in capacity. Adding a third 250GB drive would only take you up to 375GB, as under RAID-1 you never get more than half of the total raw space.
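If you ever want to see what btrfs itself thinks of the pool, these commands report the devices and the allocation profile (this assumes the pool is mounted at /mnt/cache, which is the unRAID default):

btrfs filesystem show /mnt/cache    # lists every device in the pool and its size
btrfs filesystem df /mnt/cache      # shows the RAID profile in use for data and metadata

With RAID-1 every block is stored twice, so two 250GB drives give roughly 250GB usable and three give about 375GB (half of the 750GB raw total).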

 

There are some posts from Tom on how one can have the second drive added in RAID-0 mode, which gives extra capacity but no resilience. However, this is a bit messy at the moment, although I think it is going to become a standard, fully supported feature in the future.
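For what it is worth, my understanding is that underneath this is just a btrfs balance with conversion filters. A rough sketch, not the exact procedure from Tom's posts, and again assuming the pool is mounted at /mnt/cache:

# convert data to raid0 for capacity; keeping metadata at raid1 is a common compromise
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
btrfs filesystem df /mnt/cache      # confirm the new profiles once the balance finishes

Bear in mind that with raid0 data, losing either drive loses the whole pool.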


375GB?

 

Ehm... that really (in my use case) makes using a cache pool something I totally do not want to do...

 

I am better off getting a new 512GB SSD then... thanks for the tip. For me personally there is no benefit in a redundant cache drive. Data sits on there only for a short time and would never be of importance (for the "important" shares I have turned the cache drive off).

  • 4 months later...


I know this is an old post, but I ran into the same problem. Putting the configuration back to just one cache drive didn't fix it for me. The issue is that the new configuration changed the filesystem setting from reiserfs to btrfs. Manually changing the cache drive back to reiserfs fixed the issue for me.
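If anyone else hits this and is unsure which filesystem is actually on the disk, it is worth checking the on-disk signature before changing the setting. A quick check, assuming the cache partition is /dev/sdX1 (substitute your own device):

blkid /dev/sdX1    # prints TYPE="reiserfs" or TYPE="btrfs" depending on what is really on the partition

Then set the cache drive's filesystem in the unRAID GUI to match whatever blkid reports, and it should mount without asking to format.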

  • 7 months later...

Just to add to this: I did exactly the same thing. I added the second cache drive into the pool, and that left the first drive in an indeterminate state. I tried removing the second drive from the pool and it still said the first drive needed formatting. Eventually I rebooted, removed the first drive from the pool entirely, and then re-added it; at that point it recognised the correct drive format and the cache drive was fine again.

 

I'll move the data off the first cache drive first, recreate the cache as a pool, then move it all back again. I thought this would be a walk in the park - I was wrong :P

 

