unRAID 6 Beta 6: Btrfs Quick-Start Guide



Hmmm, I'll have to explain to the wife that a new hard drive may be required...

 

You can use multiple filesystems, so it's not an all-or-nothing situation. If disks 1 - 4 are ReiserFS and you add a 5th drive, make it XFS.

 

I went ahead and switched all my drives to XFS, but I treated it like a project over a few days. First, I copied things around from all my ReiserFS drives until I had one disk that was empty. I then formatted it to XFS, copied an entire ReiserFS disk to the new XFS disk, and repeated that over the next few days in my spare time. That way I didn't have the entire server down the whole time.

 

I even blew away my parity drive and used it so I could do several drive copies at a time. I do not recommend that for most people but I know what I am doing.
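
For the disk-to-disk copies described above, something along these lines is the usual approach (a sketch only; the disk numbers are examples, assuming the standard /mnt/diskN mount points):

# Copy the contents of a ReiserFS disk onto the freshly formatted XFS disk,
# preserving permissions and timestamps (-a) and showing progress (-P)
rsync -avP /mnt/disk3/ /mnt/disk5/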

Link to comment


Oh, good idea, I should be able to do that as well.

Link to comment

I am confused about all of the various steps in this thread for adding the additional drives to the cache pool. I have 4 SSDs that I would like to combine into a pool. How would I do that?

That's what I was wondering too, but given it's on the roadmap, I don't think it's possible yet. I can add drives to make a cache pool, but on reboot I have to redo it. I think you'll have to wait for the next beta.

 

Link to comment

There are ways to do it now but the next beta will make cache pooling an option where it wasn't before.

Link to comment

Thanks. I will wait for it then.

 

Link to comment

What ways are there to mount the cache pool after rebooting?  I can add a drive to make a pool but if I remember right, on reboot it would show unformatted.  It's been a while and I didn't try very hard.

Link to comment

Looking for the same info... currently in the same state.

 

Link to comment

Guys, is it possible to do a pool without striping? Currently I have an old Seagate 320GB 7200RPM drive as a cache drive. Due to some upgrades on another box I now have a 32GB SSD that is unused, and what I want to do is create a pool that first uses the SSD and then falls back to the cache drive if I am copying over more than 32GB. Is it possible with Btrfs? The purpose isn't to increase capacity but to take advantage of the faster SSD speeds, since 90% of the time I probably use less than the limit.

 

Edit: I found the following at this link:

When you have drives with differing sizes and want to use the full capacity of each drive, you have to use the single profile for the data blocks, rather than raid0.

# Use full capacity of multiple drives with different sizes (metadata mirrored, data not mirrored and not striped)
mkfs.btrfs -d single /dev/sdb /dev/sdc

Is that what I am looking for? If so, I guess the first drive I list becomes the first it writes to?
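
As far as I understand btrfs, the single profile allocates each new data chunk on whichever device currently has the most unallocated space, so listing the SSD first would not make it fill up first. For reference, a minimal sketch of the quoted command adapted to the two drives above (device names are placeholders, not taken from this thread):

# Hypothetical devices: sdX = the 32GB SSD, sdY = the 320GB spinner.
# On a multi-device filesystem, metadata defaults to raid1 (mirrored); data uses the single profile.
mkfs.btrfs -d single /dev/sdX /dev/sdY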

Link to comment

After you create your pool, type the following, replacing sdx with one of the devices in your pool:

btrfs filesystem show /dev/sdx

You will get a read-out like the following:

Label: 'VM-store'  uuid: 569b8d06-5676-4e2d-9a22-12d85dd1648d
        Total devices 3 FS bytes used 1.96GiB
        devid    1 size 465.76GiB used 2.02GiB path /dev/sdf
        devid    2 size 465.76GiB used 3.01GiB path /dev/sdh
        devid    3 size 465.76GiB used 3.01GiB path /dev/sdg 

You can then mount your cache pool in the go script using that UUID (or the matching /dev/disk/by-id path).
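
A minimal sketch of what that go-script addition might look like, assuming a /mnt/cache mount point and using the example UUID from the output above (adjust both to your system):

# Make the kernel aware of all btrfs member devices before mounting the pool
btrfs device scan
# Mount the pool by filesystem UUID so it works even if the sdX letters change
mkdir -p /mnt/cache
mount -t btrfs UUID=569b8d06-5676-4e2d-9a22-12d85dd1648d /mnt/cache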

Link to comment


Removed the other quotes from the reply. Is there a set of commands that I can run to set up 4 SSDs as one pool? I spent about an hour looking around and could not find anything helpful. What I am looking for is:

I have 4 500GB SSDs (/dev/sdf /dev/sdg /dev/sdh /dev/sdi) and I want to combine them into one pool (whether they are raided doesn't matter, but those commands would be helpful too). So I am looking for the complete command set to do this.

 

Thanks.

Link to comment

I was getting poor performance on disk writes so I started looking for information about improving write speed in KVM and Win 8.1.  I did find this interesting blurb at http://www.linux-kvm.org/page/Tuning_KVM

 

Storage

QEMU supports a wide variety of storage formats and back-ends. Easiest to use are the raw and qcow2 formats, but for the best performance it is best to use a raw partition. You can create either a logical volume or a partition and assign it to the guest:

qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio

QEMU also supports a wide variety of caching modes. If you're using raw volumes or partitions, it is best to avoid the cache completely, which reduces data copies and bus traffic:

qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio

As with networking, QEMU supports several storage interfaces. The default, IDE, is highly supported by guests but may be slow, especially with disk arrays. If your guest supports it, use the virtio interface:

qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio

Don't use the Linux filesystem btrfs on the host for the image files. It will result in low IO performance. The KVM guest may even freeze under high IO traffic on the guest.

 

I did a quick search here and did not see this mentioned, so I just wanted to put it out there.  I am going to move my KVM Windows 8.1 vm off of the btrfs cache drive and see what happens.
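
One common workaround for btrfs copy-on-write overhead on VM images, not covered in the page quoted above, is to disable COW for the directory that holds the images before the image files are created. A sketch, assuming the images live under a hypothetical /mnt/cache/vms path:

# The +C (nodatacow) attribute only applies to files created after it is set on the directory
mkdir -p /mnt/cache/vms
chattr +C /mnt/cache/vms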

Link to comment

We haven't seen any IO performance issues yet with btrfs and image files for KVM but let us know if upon further testing you find some.

Link to comment

So, I shut down the windows81pro KVM (on the btrfs cache drive), copied the files to a ReiserFS drive outside of the array, adjusted the pathing in the KVM XML configuration file, and booted up the VM.

 

CrystalDiskMark 3.0.3 Shizuku Edition x64 reports:

 

BTRFS filesystem (sectorsize 4096, nodesize 16384, leafsize 16384)

Seq Read: 710 [MB/s]

Seq Write: 5.5 [MB/s]

 

ReiserFS V3.6

Seq Read: 2027 [MB/s]

Seq Write: 46.8 [MB/s]

 

Obviously more testing is needed, as this could be 100% an issue with my environment, but it is certainly a huge difference in this case. Food for thought...

 

Link to comment

Thank you for sharing.  We've been using the QCOW2 image format for our images, but all our cache drives have been SSDs.  We haven't tried QCOW2 on a spinner yet, but that is probably something worth checking out...

Link to comment

I think everyone but jonp thinks I was having trouble creating a btrfs pool, instead of what I wrote, which is a cache pool (a btrfs pool mounted as the cache drive). But anyway, here is what I did. The two SSDs I'm using are /dev/sde and /dev/sdh. I started the server without auto-mounting the array so I could create the pool first.

# Create a single partition on each SSD
sgdisk -g -N 1 /dev/sde
sgdisk -g -N 1 /dev/sdh
# Make a btrfs filesystem on each partition
mkfs.btrfs -f /dev/sde1
mkfs.btrfs -f /dev/sdh1
# Mount the first partition, add the second to the pool, then convert data and metadata to raid1
mkdir /mnt/btrfs
mount /dev/sde1 /mnt/btrfs
btrfs device add /dev/sdh1 /mnt/btrfs
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs
# Remove the temporary mount point
umount /mnt/btrfs
rmdir /mnt/btrfs

 

Then I added this to the top of my go script because after reboot the cache drive would show up as unformatted unless you did a btrfs device scan first.

 

btrfs device scan

 

I then added one of the SSDs as the cache drive, since mounting either of them brings up the whole pool, and then started the array. You don't have to add anything else to the go script, since unRAID mounts the cache drive by-id. This will only work if you create the pool in this manner, because unRAID looks for /dev/sdx1 to mount the cache drive; if you create a pool with mkfs.btrfs /dev/sde /dev/sdh it won't work.
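
For the four-SSD pool asked about earlier in the thread, the same pattern should extend like this. This is an untested sketch: the device names are the ones that poster listed (/dev/sdf through /dev/sdi), and raid1 is kept for consistency with the steps above; confirm your own device letters before running anything.

# Partition each SSD
sgdisk -g -N 1 /dev/sdf
sgdisk -g -N 1 /dev/sdg
sgdisk -g -N 1 /dev/sdh
sgdisk -g -N 1 /dev/sdi
# Create the filesystem on the first partition, then add the rest to the pool
mkfs.btrfs -f /dev/sdf1
mkdir /mnt/btrfs
mount /dev/sdf1 /mnt/btrfs
btrfs device add /dev/sdg1 /dev/sdh1 /dev/sdi1 /mnt/btrfs
# Spread data and metadata across the four devices as raid1
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/btrfs
umount /mnt/btrfs
rmdir /mnt/btrfs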

 

Link to comment

So it sounds like if I follow the commands posted by dmacias, I should be golden? Since I have 4 SSDs for the cache pool, I would use a RAID 0 or 5 instead of RAID 1, right?

 

The 'cache pool' feature is only going to support raid1 to start, which is slightly different from 'traditional' raid1, e.g., you can have 3 devices in the pool. raid0 is not recommended because loss of a single disk could mean losing all your data. raid5 is not recommended because that feature is not yet mature in btrfs.

 

If you want to remain compatible, you should create a partition on each of your cache disks and use those partitions to form a btrfs raid1 pool.
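
If you want to confirm which profile a pool actually ended up with after the balance, a quick check (the mount point here is just an example) is:

# Reports lines such as "Data, RAID1: ..." and "Metadata, RAID1: ..." if the conversion took
btrfs filesystem df /mnt/btrfs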

Link to comment

Okay, I have my pool set up now following dmacias's steps. However, I did notice in the webgui that the only drive listed as the cache drive is the one that I picked, while the remaining drives in the pool are shown as "new disks not in array". Is that normal in the webgui? Just want to make sure that I did not mess something up.

Link to comment

Mmm... So far the cache drive has been something used for temporary storage of data. Because of that limited timeframe (at least for me), it is acceptable that there is a higher chance of data loss. I would actually like to use raid0, just to increase the size of the cache pool. The shares I have that contain important data just do not use the cache drive. That works out fine, since those are my photos and documents, which are small, so the cache drive has less benefit there. The cache drive has its use for big downloads, and if I lose something there I can just redownload it.

 

So... couldn't using raid0 be a personal choice?

Link to comment

I currently use an SSD for my cache drive. Would changing it to btrfs allow it to have trim support?

Also, Tom, any chance of getting this support backported to 5.x?

Btrfs has trim support. Can't speak to backporting of features yet.

 


 

Jonp: does unRAID actually apply the discard mount option when mounting an SSD with btrfs? If not, is there a plan to?
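
For what it's worth, if the discard mount option isn't being applied, running a periodic trim by hand (or from cron) is a possible workaround; this sketch assumes the cache is mounted at /mnt/cache:

# Trim all free space on the mounted filesystem and report how much was trimmed
fstrim -v /mnt/cache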

Link to comment
