3x 250GB SSD Cache Pool or 1x 750GB Cache Drive



My personal opinion only, and it's in direct contravention of Limetech's current official stance, but I'd use a single XFS-formatted device, and make sure I had a properly configured backup of any persistent items on the cache.

 

1st reason, BTRFS has not been kind to me. XFS hasn't caused me issues yet. (Personal experience only)

2nd reason, space utilization - a single XFS drive = all 750GB available; a pool of 3x250GB BTRFS (default raid1 profile) = 375GB available.
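(For illustration only -- unRAID handles the formatting itself -- but on plain Linux the half-the-raw-total behaviour is easy to see; the device names and mount point here are placeholders:)

    # A btrfs pool with the raid1 profile keeps two copies of every chunk,
    # so usable space is roughly half the raw total:
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
    mount /dev/sdb /mnt/cache
    btrfs filesystem usage /mnt/cache    # ~375GB usable from 3x250GB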

 

I'm not qualified to comment on speed differences, as I haven't researched or experienced it. johnnie.black would be the one to ask about that.

 

As for backups, the Community Applications backup module looks to be ideal for appdata, and there are threads knocking around for VM domain backup as well.
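(If you'd rather script it than use the plugin, a minimal sketch; the paths are assumptions, so adjust to your own shares, and stop your containers first so the copy is consistent:)

    # Copy appdata from the cache to a backup share on the array:
    rsync -a --delete /mnt/cache/appdata/ /mnt/user/backups/appdata/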

 

XFS only works for a single-device cache, so if you at any point decide to add a second cache drive, it would have to be erased and reformatted as BTRFS.
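(For contrast, a btrfs cache can grow in place. Roughly what that looks like at the command line -- device name and mount point assumed; unRAID does the equivalent for you when you add a pool member:)

    btrfs device add /dev/sdc /mnt/cache
    # rebalance so data and metadata use the redundant raid1 profile:
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache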

Link to comment

I think I agree with you.  BTRFS only gives you half of the available space (i.e. (250GB x 3) / 2 = 375GB).  I suppose there's something to be said for the redundancy, but as you said, the appdata backup should solve a big portion of issues.  I suppose if you went RAID5 BTRFS, then you'd only lose one drive's worth of capacity and keep redundancy - 750GB usable from four drives.  Prices on 250GB SSDs are down around $70-80 now, so 4 drives puts you around $300 for 750GB fully redundant, whereas 750GB drives are around $220 and up...  so $80 for redundancy?  Hmmm...
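The back-of-envelope math, as shell arithmetic:

    echo $(( 250 * 3 / 2 ))    # 3x250GB btrfs raid1 -> 375 (GB usable)
    echo $(( 250 * 4 - 250 ))  # 4x250GB raid5, one drive's worth of parity -> 750
    echo $(( 4 * 75 ))         # four drives at ~$75 each -> 300 (dollars)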

Link to comment

As you've already concluded, it's a personal choice that depends essentially on whether or not you want your cache to be fault-tolerant.  I think it largely depends on what you use the cache for => if you use it for the traditional function of caching all the writes to your array, then I'd personally want those to be fault-tolerant from the instant I wrote them to the server, so I'd favor a fault-tolerant BTRFS array vs. a single unprotected cache drive.  If you're purely using it as an application drive, then automated backups to the array should provide a reasonable solution so you can recover without too much loss in the event of a failure.

 

At the price of SSDs these days, I'd probably just go with a pair of large SSDs and a fault-tolerant cache ... but as I noted, it's a very personal choice.

 

Link to comment

Yes, but btrfs raid5 is still experimental and not recommended for production; it also lacks trim support, so it's not ideal for SSDs.

 

ETA: for future readers, apparently trim in btrfs works with any profile.  Btrfs raid is very different from traditional raid: it divides the data and metadata into chunks, and these are distributed across the disks depending on the profiles used, so I believe it's much simpler to get trim working.  I tested it, and trim worked with raid5/6; at least the command itself works, so I expect it's doing its job.
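(For anyone wanting to repeat the test on their own pool, it was essentially just this, with the mount point being wherever your pool is mounted:)

    fstrim -v /mnt/cache    # reports bytes trimmed; fails if discard is unsupported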

 

The other part of my original post is still true: at the time of this edit, btrfs raid5/6 is still considered experimental and not safe for production use.

Link to comment

Yes, but btrfs raid5 is still experimental and not recommended for production; it also lacks trim support, so it's not ideal for SSDs.

 

Ah, that makes sense.  Trim is critical on SSDs, so I'm going for 4 drives, BTRFS RAID 10.  Funny that RAID 5 is still considered experimental since it's been around for my entire career (the '80s)...

He meant that BTRFS RAID 5 is experimental.  BTRFS itself has only been around since 2007, according to Wikipedia.
Link to comment

... and, for example, a pair of 500GB SSDs is generally less expensive than 4 250GB SSDs

 

... depends how you get there! In my case I started with 2x250 and it was cheaper to add 2x250 to expand my cache pool to 500GB instead of buying 2x500 as replacement.

 

True ... if you already have a few drives you can use, that changes things a bit.  Of course it depends on whether the DIFFERENCE between a 250GB drive and a 500GB drive is more than you could sell your used 250GB drives for  :)    For example, looking at SOLD listings of USED 250GB SSDs on eBay, they tend to sell for $55-60.  A NEW 250GB is in the $75-95 range [e.g. a 275GB Crucial MX300 is $89]; whereas a NEW 500GB drive is around $120-140 [e.g. a 525GB Crucial MX300 is $128].  So, at least if you compare the two sizes of MX300s, you can buy the larger drive for $39 more -- which is almost certainly LESS than you could sell a used 250GB drive for.  So it would actually be CHEAPER to buy a pair of the larger drives and sell your smaller drives than it would be to buy two more of the smaller drives.
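Putting those numbers into quick shell arithmetic:

    echo $(( (128 - 89) * 2 ))  # extra cost of two 525GB over two 275GB -> 78
    echo $(( 55 * 2 ))          # low-end resale of two used 250GB drives -> 110

So the upgrade path comes out roughly $30 or more ahead.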

 

Link to comment


I like your answers, Gary  :)

 

Link to comment


And perhaps an even more important question for some: Are all those ports better used for cache or array?
Link to comment

One does have to factor in the # of available ports.  For example, jeffreywhunter's config (assuming the sig is accurate) shows 12 drives in the array, with 16 total SATA ports => but 2 of those ports are on a PCI card (apparently only one of those ports is in use, so the bandwidth restriction isn't too bad ... but I'd replace that card with another PCIe x4 or better card -- which will not only eliminate, or at least strongly mitigate, the bandwidth restrictions, but will also provide additional ports for further expansion).

 

With another 8-port card, there would be 22 total ports available -- which should be more than enough to max out the case.  With three 5-in-3 bays he can put in 15 array drives, and I assume the SSDs can be mounted elsewhere within the case, as they take up very little space.
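The port math, spelled out:

    echo $(( 16 - 2 + 8 ))  # drop the 2-port card, add an 8-port card -> 22 ports
    echo $(( 3 * 5 ))       # three 5-in-3 bays -> 15 array drive slots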

 

Link to comment


You are correct.  I originally used the Syba with the express intention of it being the cache drive controller (originally I didn't have the AOC card).  I get pretty good performance through that card (113-118MB/s across the LAN from PC to unRaid; parity checks clock in at 90MB/s avg).  I have not tested it with 2 cache drives yet.

8 of the drives are on the AOC (all data drives).  2 data drives are on 3Gb/s SATA II mobo ports, 1 data drive is on a 6Gb/s SATA III mobo port, and the parity drive is on the other 6Gb/s mobo port.  I did that because it seems like the mobo port would have better throughput?  I plan to add a second parity drive to the 2nd 6Gb/s mobo port.

I had thought about using 4 240GB SSDs in raid10, but that chews up a lot of drive real estate.  So I'm second-guessing that and going with 2 500GB SSDs on the 2-port Syba card (6Gb/s).  It's in an x16 slot, so I'm thinking there should be plenty of bandwidth to handle both drives?

 

So all done I'd have the following (ASUS P8Z68-V Pro):

 

8 Data Drives on AOC SAS2LP-MV8

2 Parity Drives on 6Gb/s Marvell PCIe 9128 Mobo Controller

2 Data Drives on 6Gb/s JMicron Mobo Controller

4 Data Drives on 3Gb/s Mobo Ports

2 Cache Drives on 6Gb/s PCIe x16 Ports

 

That gives me 14 Data Drives, 2 Parity Drives (6Gb/s), 2 Cache Drives (6Gb/s)
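As a quick sanity check on the allocation above:

    echo $(( 8 + 2 + 2 + 4 + 2 ))  # AOC + Marvell + JMicron + 3Gb/s mobo + Syba -> 18
    # 18 drives = 14 data + 2 parity + 2 cache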

 

Any thoughts on this approach?

 

I've thought about just getting another SAS2LP-MV8, but it would have to go in an x4 PCIe slot; given the card is an x8 card, this would impact performance when drives on the card are hit simultaneously.  So it does not feel like a great choice.

 

 

Link to comment

A few additional thoughts ...

 

First, your signature showed "Syba SD-PEX40068 (PCI Sata 3)" => I didn't look up the card, so I assumed it was, as stated, a PCI card.  But in fact it's a PCIe x2 card, so yes, it has PLENTY of bandwidth for a pair of SSDs to run at full speed.

 

Second, as I presume you know, that card has NOTHING to do with parity checks, since the cache drives aren't involved in those.

 

Third, adding an additional SAS2LP-MV8 isn't a bad idea, as long as you're aware of the bandwidth restriction you'd have with the x4 interface on the 2nd x16 slot (which clearly you are).  Note that both your motherboard and the card are Gen 2 PCIe devices, so 4 lanes will still give you 2GB/s of bandwidth ... which should be plenty for 8 traditional spinning hard drives.

 

In fact, even if you used it to connect 4 SSDs and 4 hard drives, it wouldn't cause any significant bottlenecks.  When you're actively using the system, you're not likely running a parity check or rebuild operation, so the only drives in use would probably be the SSDs -- so they'd have 500MB/s of available bandwidth each.  A few SSDs can sustain speeds slightly above that, but not enough to matter.  And during a parity check the SSDs most likely wouldn't be active ... and even if they were, and "only" had 250MB/s each of available bandwidth, that would still be far above the Gb network limit, so it wouldn't impact read/write performance to/from the array.
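The lane math behind that, for anyone following along:

    echo $(( 500 * 4 ))   # PCIe Gen2 is ~500MB/s per lane, so an x4 link -> 2000 MB/s
    echo $(( 2000 / 8 ))  # worst case, all 8 drives active at once -> 250 MB/s each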

 

In fact, as I wrote the above, it dawned on me that your existing SAS2LP-MV8 is in an x16 slot that has 4GB/s of bandwidth => so you could connect your SSDs to that and you'd have NO bottlenecks to be concerned about on any of your drives, no matter what the array was doing.

 

 

Link to comment
