Double or Triple "Cache" pools


1812


On 2/22/2017 at 8:47 AM, hermy65 said:

 

Yep, I'm more so looking to split it out so I can have better-quality SSDs running my VMs/Docker containers and regular SSDs handling tasks like downloads, etc.

This right here describes my thoughts exactly. Please make this a feature!

  • 2 weeks later...
  • 3 weeks later...

I'd also like to add that I moved from my older setup, a single cache drive with VMs mounted on unassigned SSDs, to a RAID 0 cache made of two Samsung EVO 250GB SSDs, just to see how performance compared.

 

Running only Plex, Krusader, and CrashPlan, the VMs have seen more issues, spinning pinwheel icons, and 1-3 second pauses where the VM is unresponsive, compared to when they were on their own independent drive. I can't imagine what it would be like with downloaders running on top of this. I'm sure there are people doing it with no problems, but being able to separate Dockers, VMs, etc. would be a nice performance bump.


+1 

 

I too would love to see this. I have been trying to fix my cache I/O performance: my Dockers, VMs, and web interface time out for 20-30 seconds, sometimes longer, whenever my Windows VM is downloading at 75+ MB/s. I think the issue is the disk (SSD) being unable to keep up with background tasks plus thousands of connections and the processing of data inside an image file. My old setup had every SATA port full (six 4TB HDDs in the array and one SSD as cache). This is an ITX build, so space in my Node 304 is very valuable.

 

New setup is as follows:

4x 8TB HDD array via onboard SATA ports

1x M.2 NVMe 512GB SSD cache (via PCIe 3.0 x4 [x16 slot])

2x 4TB for backups (hardware RAID 1 via an ASMedia card in the mini-PCIe Wi-Fi adapter slot, PCIe 2.0 x1)

2x 240GB SSDs via onboard SATA ports for VMs

 

I am currently waiting on my new 8TB drives to preclear, so this setup is untested so far. My only issue with it is that I would like the two 240GB SSDs in RAID 0 for my VMs/domains share. Since my motherboard uses Intel RST, it is not true hardware RAID, so Unraid is unable to see it as a single array. I could use my two-port ASMedia card for this, but since they are SSDs and that is a Gen2 x1 card, I would lose a lot of performance. Plus, I would like my backup drives to stay redundant in RAID 1. My current workaround idea is to run my Windows VM on one drive and use the second for my other VMs/domains folder.

 

The only other option I see is to move my NVMe drive to the onboard M.2 slot, lose two SATA ports, and use a PCIe RAID card that can support the bandwidth. I don't really want to do this because the M.2 slot is on the other side of my motherboard, so I expect heat issues along with the inability to service it in the event of a drive failure.

 

In my case, a separate "cache" pool in RAID 0 is really my only other option. And yes, I have 9 drives running inside a Node 304 ITX build :D.
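
For what it's worth, here is a rough sketch of the manual workaround available today: building the two VM SSDs into a BTRFS RAID 0 pool outside the Unraid GUI. The device names, label, and mount point below are placeholders, and Unraid itself won't monitor this pool or run the mover against it.

```
# Build a two-device BTRFS pool: striped data, mirrored metadata.
# WARNING: mkfs wipes the devices; /dev/sdX and /dev/sdY are examples only.
mkfs.btrfs -f -L vmpool -d raid0 -m raid1 /dev/sdX /dev/sdY

# Mount it outside the array; mounting either member device brings up the whole pool.
mkdir -p /mnt/vmpool
mount /dev/sdX /mnt/vmpool

# Sanity check: both devices should be listed as members of the pool.
btrfs filesystem show /mnt/vmpool
```

The mount has to be repeated from the go file (or a similar startup script) after every boot, which is exactly the kind of bookkeeping a native multi-pool feature would remove.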


I was copying 160GB of video editing files from a VM hosted on the RAID 0 cache drives to a share set to cache=yes... it locked up the VM until the file transfer was complete. Not sure why I am having these disk performance issues, but I wouldn't be if we could have separate pools.

 



I suggested a similar feature a while ago. Basically, with how Unraid has grown in v6, we really need the ability to define and run multiple tier-2 pools (using BTRFS RAID): T1 being the main Unraid array, and several T2 pools such as Apps, Cache, and VMs, with optional mover support on each pool. Unassigned Devices is a really nice plugin, but it would be relied on a lot less with definable T2 pools. Frankly, I don't understand why Unassigned Devices isn't already integrated rather than left as a plugin, given how integral it is to seriously expanded functionality. LT took the first step past being simply a storage OS with v6; now they need to really embrace it and add the requisite storage options to make good use of the new features.

2 hours ago, DarkKnight said:

I suggested a similar feature a while ago. Basically, with how Unraid has grown in v6, we really need the ability to define and run multiple tier-2 pools (using BTRFS RAID): T1 being the main Unraid array, and several T2 pools such as Apps, Cache, and VMs, with optional mover support on each pool. Unassigned Devices is a really nice plugin, but it would be relied on a lot less with definable T2 pools. Frankly, I don't understand why Unassigned Devices isn't already integrated rather than left as a plugin, given how integral it is to seriously expanded functionality. LT took the first step past being simply a storage OS with v6; now they need to really embrace it and add the requisite storage options to make good use of the new features.

 

LT just posted their intention to include UD as part of the webGUI; maybe they will also make it support extra pools.

 

 

  • 2 weeks later...
  • 2 months later...
On 2/14/2017 at 8:41 AM, 1812 said:

Why don't I just add more drives to my current cache pool? Separation. I don't want the dockers that are running, or the mover, or anything else to interfere with performance.

 

TL;DR: Essentially I'm suggesting that we be able to have more than one pool of drives in a specifiable RAID setup (0 and 10, please!).

+1 for multiple, redundant SSD cache pools!

 

I would like one RAID 1 pool dedicated to write caching, with its data moved to the protected array nightly, and another RAID 1 pool for VMs, appdata, and so on that is never moved. Other RAID levels would be cool too.
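
For context, the "specifiable RAID setup" part already exists at the BTRFS level, where the RAID profile is just a balance target. A sketch against a generic pool path (the path is illustrative, and RAID 10 needs at least four devices in the pool):

```
# Show the current data/metadata profiles of a multi-device pool.
btrfs filesystem df /mnt/cache

# Convert data and metadata to RAID 1 (mirrored).
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Or RAID 10 (striped mirrors, requires at least 4 devices):
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache
```

So the filesystem side is largely solved; what's missing is Unraid letting us define more than one such pool.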

  • 4 weeks later...

Reading this thread... I get it, but wow, I feel like we've come full circle. We're back to the complexity unRAID was created to get away from!

 

All this sounds like an IOPS constraint. Why not pass through an SSD to the VM you care about? They are cheap enough these days.
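
As a sketch of that suggestion (the VM name and by-id path are made up, and a Windows guest needs the virtio drivers installed before it can see a virtio disk):

```
# Hand the whole SSD to the VM as a raw block device and persist it in the domain XML.
virsh attach-disk Windows10 \
  /dev/disk/by-id/ata-Samsung_SSD_850_EVO_250GB_EXAMPLE1234 \
  vdb --sourcetype block --targetbus virtio \
  --driver qemu --subdriver raw --cache none --persistent
```

The same result can be had by pointing the vdisk location in the VM's settings (or its XML) at the /dev/disk/by-id/... path instead of an image file.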

  • 2 months later...

+1

 

Having the standard cache pool plus additional user-definable pools would be awesome. I'm running into horrible iowait issues when downloading/par-checking/extracting/moving lots of data on my SSD cache, and it's making all my containers slow. The possibility of a cache pool plus a Docker/VM pool, or separate Docker and VM pools, would be great.

 

I found a crappy old 120GB SSD and moved my docker.img and some of my appdata contents to it. My iowait has decreased substantially, and even when the cache drive gets backed up, it doesn't affect my app performance.
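
For anyone chasing the same thing, it's worth confirming which device the iowait is actually coming from before and after a move like this. A minimal check, assuming iostat (from the sysstat package) is available on your box:

```
# Extended per-device stats every 5 seconds; watch the await and %util columns.
# A cache SSD pinned near 100 %util with high await while containers stall
# points at exactly this kind of contention.
iostat -dxm 5
```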

 

The only option currently is to scale up and buy a diamond-encrusted NVMe SSD and hope it can take the load. Scaling out is the way to go!

On 11/5/2017 at 3:14 PM, -Daedalus said:

Because then you can't run your VM on redundant storage, requiring downtime if you want to create a backup image. That, or you have to pass through a hardware RAID 1 config, which seems a bit silly within an OS like unRAID.

 

 

BTRFS snapshots can be an alternative when you don't want downtime.

 

But generally speaking, I would prefer it if unRAID could handle multiple mirrors.


My main storage server is not unRAID for that very reason. It has multiple RAID volumes, most of which are two-disk mirrors.

6 hours ago, pwm said:

BTRFS snapshots can be an alternative when you don't want downtime.

 

But generally speaking, I would prefer it if unRAID could handle multiple mirrors.


My main storage server is not unRAID for that very reason. It has multiple RAID volumes, most of which are two-disk mirrors.

 

Yes, although (AFAIK) snapshots aren't implemented in the GUI yet, and the whole idea of this is to not have the VMs on the same storage as all the Docker images that are constantly reading/writing things all over the place.

 

I think I might have to look at other solutions, to be honest. unRAID does lots of things pretty well, but nothing amazingly well. ESXi has much better VM management, and ZFS has (arguably) much better storage. If Limetech were in the habit of giving even a rough roadmap of the direction they're thinking of going, that might help, but we don't really hear about features until they show up in snapshots, and for something like this, which is typically more of a longer-term investment, I don't really think that serves the community well.

1 hour ago, -Daedalus said:

 

Yes, although (AFAIK) snapshots aren't implemented in the GUI yet, and the whole idea of this is to not have the VMs on the same storage as all the Docker images that are constantly reading/writing things all over the place.

 

I think I might have to look at other solutions, to be honest. unRAID does lots of things pretty well, but nothing amazingly well. ESXi has much better VM management, and ZFS has (arguably) much better storage. If Limetech were in the habit of giving even a rough roadmap of the direction they're thinking of going, that might help, but we don't really hear about features until they show up in snapshots, and for something like this, which is typically more of a longer-term investment, I don't really think that serves the community well.

You don't have snapshot support in the GUI, but you can still create a snapshot and mount it separately for the backup to read from.
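
Roughly what that looks like from the command line; a sketch only, assuming the vdisks live in a BTRFS subvolume at /mnt/cache/domains and the backup destination already exists. A snapshot taken while the VM is running is only crash-consistent, like pulling the power on the guest.

```
# Read-only, copy-on-write snapshot of the subvolume holding the vdisks (instant).
btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/domains_snap

# Copy the frozen image to the array at leisure, then drop the snapshot.
rsync -a /mnt/cache/domains_snap/ /mnt/disk1/Backups/domains_$(date +%F)/
btrfs subvolume delete /mnt/cache/domains_snap
```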

 

ZFS and BTRFS have a lot in common. BTRFS doesn't have the deduplication functionality of ZFS, but that is also a feature with which lots of users have locked themselves out of their data: they filled a deduplicated pool beyond what the maximum RAM capacity of the motherboard can handle, something that isn't obvious until they reboot and find they can no longer mount the ZFS pool until they build a brand new system.


The selling point of unRAID as a storage system is for users who don't want to spin up all the drives in the array on every disk access, which means parity without striping.


If you want the bandwidth of a striped RAID and are OK with the recovery issues of a striped RAID, then the obvious route is to pick a system that stripes the data.

