Reducing a cache pool of 2 drives back to a single cache [Solved]


666gene

Recommended Posts

Couldn't find a simple answer for the following :)

 

Current situation:

I have a 2 drive cache pool with Cache 1 256GB & Cache 2 128GB

 

Goal:

I want to revert back to a single cache drive arrangement (256GB).

If I set the 128GB Cache 2 drive to (no device) and start the array, the Cache 1 disk is unmountable.

Is reverting to 1 cache drive possible, and if so, can I have a step-by-step guide on the safest way to do this?

Any assistance would be much appreciated :)

Link to comment

If this was me, I'd temporarily back up whatever you want to put back on the cache after the pool change to the array, reduce to one device, reformat the cache drive, and copy the data back.

Obviously beforehand, disable VMs and/or docker so nothing tries to write to cache while you're reformatting and copying data back.
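The backup step described above can be sketched as a small script. This is just an illustration, not a built-in Unraid tool; the paths (`/mnt/cache`, `/mnt/user0/cache_backup`) are assumptions you'd adjust to your own shares.

```shell
#!/bin/bash
# Sketch of backing up the cache to the array before shrinking the pool.
# Not an official Unraid feature -- paths here are example assumptions.
set -euo pipefail

backup_cache() {
    local src="${1:-/mnt/cache}"                # the cache pool mount
    local dest="${2:-/mnt/user0/cache_backup}"  # /mnt/user0 writes directly to the array
    mkdir -p "$dest"
    # -a preserves permissions, ownership and timestamps
    cp -a "$src"/. "$dest"/
}

# Usage (after stopping Docker/VMs): backup_cache /mnt/cache /mnt/user0/cache_backup
```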

 

The GUI has always seemed to be geared to adding more disks to the array, or more disks to cache, but not the other way around.

 

EDIT: See linky below

Link to comment

Thanks again for your assistance, johnnie.black.

I followed your instructions exactly. I've highlighted the steps I performed.

After starting the array I get the unmountable message. Since I'm removing Cache 2, I kept an eye on the writes count; it didn't change at all and the disk still says unmountable.

Screenshot attached.

 

[glow=green,2,300]back up your cache in case something unexpected happens

shut down the server

remove the disk (it has to be physically disconnected or precleared; starting the array with a previously used pool disk unassigned will result in an unmountable cache)

power up, start the array[/glow]

1) if disk removed was cache1: balance will begin, progress can be seen on cache webpage

2) if disk removed was cache2 and the only one left is cache1: balance will begin, no balance progress will be shown, wait for cache read/write activity to stop

this can take some time depending on how much data is on the pool and how fast your devices are, don't stop the array until it's done

when the balance is done, check on the cache page that "btrfs filesystem show" reports the correct total devices and is not displaying "*** some devices missing"

if you removed cache1 and plan to only keep using cache2 it’s ok to trade slots and assign it as cache1
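The final check in the steps above (no "devices missing" in "btrfs filesystem show") can be scripted. A sketch, assuming the output format used by btrfs-progs:

```shell
#!/bin/bash
# Sketch of the "no devices missing" check from the steps above.
# Feed it the output of `btrfs filesystem show /mnt/cache` on stdin.
set -euo pipefail

pool_healthy() {
    if grep -qi "some devices missing"; then
        echo "pool degraded: devices missing"
        return 1
    fi
    echo "pool ok"
}

# On the server: btrfs filesystem show /mnt/cache | pool_healthy
```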

Capture.PNG.b780824937e9809d115f83c47afa1f08.PNG

Link to comment

OK, I stopped the array

shut down

reconnected the physical SSD

booted the server

re-assigned the disk

started the array and everything's back to normal

 

Capture2 shows me re-assigning the 128GB Kingston SSD.

Capture3 shows the array back up, the cache pool successful, and btrfs showing space available.

 

Does this look correct?

Capture2.PNG.56bced59576730de111965139b9c3295.PNG

Capture3.PNG.10e992d2ba45f966c1ecd451b94c5095.PNG

Link to comment

Your pool is not balanced, but since you're going to remove a device it doesn't matter now:

 

If not yet done backup your cache.

 

Use the console or SSH into your server and run:

 

btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache

 

This can take some time, progress can be watched by clicking on "cache" on the main page and checking "btrfs balance status":

 

When it's done type:

 

btrfs device remove /dev/sde1 /mnt/cache

 

Again this could take some time; when done, stop the array, unassign the removed cache device, restart the array and you're done.
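The two commands above can be wrapped in a small script. This sketch defaults to a dry run, since both steps rewrite the pool; the device name /dev/sde1 is specific to this thread, so check yours on the Main page first.

```shell
#!/bin/bash
# Sketch of the two-step shrink described above. DRY_RUN=1 (the default
# here) only prints the commands; /dev/sde1 is an example device path.
set -euo pipefail

MOUNT="${MOUNT:-/mnt/cache}"
REMOVE_DEV="${REMOVE_DEV:-/dev/sde1}"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "DRY RUN: $*"
    else
        "$@"
    fi
}

# 1) convert data and metadata profiles to single (-f forces leaving raid1)
run btrfs balance start -f -dconvert=single -mconvert=single "$MOUNT"
# 2) once the balance finishes, remove the second device from the pool
run btrfs device remove "$REMOVE_DEV" "$MOUNT"
```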

Link to comment
  • 1 year later...
On 10/25/2016 at 3:40 AM, johnnie.black said:

Your pool is not balanced, but since you're going to remove a device it doesn't matter now:

 

If not yet done backup your cache.

 

Use the console or SSH into your server and run:

btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache

This can take some time, progress can be watched by clicking on "cache" on the main page and checking "btrfs balance status":

 

When it's done type:

btrfs device remove /dev/sde1 /mnt/cache

Again this could take some time; when done, stop the array, unassign the removed cache device, restart the array and you're done.

@Johnnie.Black I'm also wanting to remove a drive from a 2-drive pool. My first device seems to be corrupt (has a ton of read errors that are unfixable). Had a couple of questions: 1) should I follow the instructions you referenced in the UnRaid v6 FAQ or use the command line instructions in this post? I'm not sure if I want to replace the bad drive with a new one or just stick to a single cache drive setup

Link to comment
1 minute ago, eubbenhadd said:

@Johnnie.Black I'm also wanting to remove a drive from a 2-drive pool. My first device seems to be corrupt (has a ton of read errors that are unfixable). Had a couple of questions: 1) should I follow the instructions you referenced in the UnRaid v6 FAQ or use the command line instructions in this post? I'm not sure if I want to replace the bad drive with a new one or just stick to a single cache drive setup

 

If you're using the latest unRAID you can use the instructions in the FAQ.

Link to comment
11 hours ago, eubbenhadd said:

Sorry, one last question. At what point should I physically remove the bad drive?

 

When the stop array button is available.

 

5 hours ago, eubbenhadd said:

So went through the steps and getting some weird stuff now. 

That's likely a corrupt docker image; I'd need the diags to confirm. If so, just delete and recreate it.

Link to comment
  • 4 years later...

Hi @JorgeB, I've got a two drive cache pool and just tried to remove the first of my cache drives following the guide in the FAQ and I'm seeing some strange behaviour.

I stopped the array and unassigned the first cache drive, then restarted the array. I couldn't see any cache activity, but there was a notification that the cache had returned to normal operation. However, under Pool Devices it says "Unmountable: No File System", my Appdata folder is empty and none of my Dockers are there.

I then tried to reallocate the first cache drive into the first slot, but the system warned me that all data would be removed from this drive if I started the array, so I chose not to. I then reallocated the second cache drive to the first cache drive's slot and restarted the array, but I'm still getting the "Unmountable: no file system" error, and when I click on the cache web GUI, the Balance function is unavailable, saying it is only available when the array has started.

 

Any help appreciated!!

bigdaddy-diagnostics-20220923-1159.zip

Edited by randommonth
Link to comment
6 hours ago, randommonth said:

Any help appreciated!!

 

Sep 23 11:23:35 BIGDADDY kernel: BTRFS: device fsid b22551d5-574f-45b3-a984-7648d094271c devid 1 transid 867196 /dev/sdb1 scanned by udevd (667)
Sep 23 11:23:35 BIGDADDY kernel: BTRFS: device fsid b22551d5-574f-45b3-a984-7648d094271c devid 2 transid 862906 /dev/sdd1 scanned by udevd (663)

 

The pool was already out of sync at boot; note the different transids. This suggests one of them dropped offline before, and looking at the stats they both did:

 

Sep 23 11:24:30 BIGDADDY kernel: BTRFS info (device sdb1): bdev /dev/sdb1 errs: wr 479443, rd 6903, flush 6054, corrupt 193330, gen 810
Sep 23 11:24:30 BIGDADDY kernel: BTRFS info (device sdb1): bdev /dev/sdd1 errs: wr 21307930, rd 58098292, flush 590499, corrupt 2129794, gen 5010

 

Hopefully not at the same time, but the pool might still be a mess. Since one of the devices was wiped after removing it, you first need to recover that; with the array stopped, type:

btrfs-select-super -s 1 /dev/sdb1

 

Then unassign all pool devices, start array, stop array, re-assign both pool devices, start array, post new diags.
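The transid comparison done above from the kernel log can be scripted. A sketch that parses the "BTRFS: device fsid ... transid N /dev/sdX1" lines shown earlier and flags a mismatch:

```shell
#!/bin/bash
# Sketch: detect out-of-sync pool members by comparing the transids in the
# BTRFS device-scan kernel log lines (format as shown in the post above).
set -euo pipefail

transids_match() {
    # Reads log lines on stdin, prints "in sync" or "OUT OF SYNC".
    local ids
    ids=$(grep -o 'transid [0-9]*' | awk '{print $2}' | sort -u)
    if [ "$(wc -l <<<"$ids")" -eq 1 ]; then
        echo "in sync"
    else
        echo "OUT OF SYNC"
    fi
}

# On the server: dmesg | grep 'BTRFS: device fsid' | transids_match
```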

Link to comment

Thanks but I got impatient yesterday and tried to recover the remaining drive using your guide here -

 

When I mounted the remaining drive I saw that I wasn't going to be able to recover all the data, which confused me because I thought the data in a RAID 1 cache pool was mirrored? So I formatted the drive, then recovered my Appdata, and both drives appear to have come back online OK.

 

Until this morning, when my Docker server appears to have failed. I took the attached diagnostics before rebooting.

 

I'm getting sick of the abysmal reliability of my cache. My efforts yesterday were intended to remove the troublesome SSD but just caused more drama than I expected. How can I safely reduce my cache to a single drive so I can eliminate it as a source of issues?

bigdaddy-diagnostics-20220924-0804.zip

Link to comment
