more than 30 drives?



Sorry if this has been asked before; I tried to find an answer but haven't found anything. 

 

I am using a Supermicro 36-bay box to run unraid. Is there a reason why, using unraid Pro, I am limited to 30 drives (split between data and parity)? On the Main tab, it appears that the maximum number of slots is 30. 

 

I would like to use the following:

 

• 2 (3 if possible) parity drives

• 3 cache drives

• 30 data drives (31 if 2 parity is the max)

 

I thought unraid Pro allowed an unlimited number of drives... is there actually a limit to what "unlimited" is defined as? If 30 is the max, are there other ways to use the additional drives?

 

Thanks in advance. 


2 parity is all that's supported and I personally wouldn't trust more data drives with only 2 parity. Do you really need that many? Drives are pretty large these days and you can get a lot of capacity without using a lot of drives.

 

I always recommend people only add drives as needed. More drives just means more opportunities for problems.

2 hours ago, dgtlman said:

more parity drives in the future?

Not at this time. Never say never, but I don't think it's even on the radar. Unraid is aimed more at the home hobby / tinkerer market right now, and supporting huge numbers of drives is more of an enterprise-level need. The economics of supporting that many spindles in a single PC just don't make sense for most people, considering 8TB drives can be had for less than $200 USD, and not many people need 200+ TB systems all in one unit. Each drive needs power, space, and an interface, and each of those costs a not inconsiderable amount, especially when you get over 12 or so drives.


This is a sore subject for me too, actually. The fact that unraid has a drive limit is the most annoying thing about it. The main purpose of unraid is to be a storage solution. I know that with VM and docker support it has evolved beyond being used only as a storage solution, but that does not change its main purpose. And most importantly, unraid is not free, but rather expensive. And for storage software to have a drive limit in itself (as in, the limit is set in the software, not by hardware limitations) is, IMHO, the most ridiculous thing about unraid.

 

And to put it bluntly - it's like a TV manufacturer making and selling TVs that only show 30 channels and no more. And if you happen to have a cable subscription with a hundred channels, you need to buy 4 separate TVs to see all the channels.

 

IMHO, unraid should have supported multiple pools years ago. Multiple pools and no drive limit. And I would actually love to see a proper answer to this question - why doesn't it? Because honestly, the only reason I can see is the money, as in people buying more than one license, hence more money for lime-tech.

 

Multiple pools would solve the drive limit, the bandwidth bottleneck when doing parity builds/checks with a lot of drives, and the problem of actually having 28 drives protected by only 2 parity drives.

 

And I'm sorry guys, but your answers are not very helpful.

 

4 hours ago, trurl said:

2 parity is all that's supported and I personally wouldn't trust more data drives with only 2 parity. Do you really need that many? Drives are pretty large these days and you can get a lot of capacity without using a lot of drives.

 

I always recommend people only add drives as needed. More drives just means more opportunities for problems.

 

What would actually be the difference between 28 and 38 or 48 drives? I mean, 28 is already too much with only 2 parity. And more than 28 with dual parity is way better than 20 with single parity, as was the case for years before.

And yes, he probably needs that many, seeing that he bought a case that supports 36 drives...

Saying more drives means more problems is like saying running unraid is more trouble than not running unraid at all.

 

2 hours ago, jonathanm said:

Not at this time. Never say never, but I don't think it's even on the radar. Unraid is aimed more at the home hobby / tinkerer market right now, and supporting huge numbers of drives is more of an enterprise-level need. The economics of supporting that many spindles in a single PC just don't make sense for most people, considering 8TB drives can be had for less than $200 USD, and not many people need 200+ TB systems all in one unit. Each drive needs power, space, and an interface, and each of those costs a not inconsiderable amount, especially when you get over 12 or so drives.

 

I agree, as it stands now, unraid is only a hobbyist solution at best, and why lime-tech is OK with it I have no idea.

I think having a lot of drives even in a home server has already become common enough not to be an enterprise-only situation. It's been that way for years already, with many people having huge home media servers (storage-wise). I am a 100% hobbyist and I have 200TB+, and it's not as hard or expensive as it was before. It's actually pretty easy and cheap nowadays.

The cost of having more than 12 drives in one system is actually very low. Nowadays you can easily connect more than a hundred drives to a single machine by using HBA(s) and used external SAS expander boxes from ebay. And it is many times cheaper than building multiple machines and buying multiple unraid Pro licences to run 30 drives at a time.

That is the situation today, actually.

 

I bought an unraid licence, as an impulse buy, almost 2 years ago and haven't used it until now for mostly a single reason - the drive limit was too low. I can connect 80 drives to my single home server with my current setup. And 30 is way less than 80. The only reason I actually picked unraid over freenas for now is because I already had a Pro license.

Actually, it would be interesting to calculate what would be cheaper to run if you have 50+ drives - freenas or unraid.

 

This is just wishful thinking, but IMHO, what lime-tech should do is:

  • Keep the real-time protection using a cache pool. I'm not sure if btrfs is stable enough, though, but it has checksums, and that's a must. Maybe even add the ability to have multiple cache pools with separate raid options.
  • Implement multiple pools. No drive limit. Period.
  • Ditch this proprietary real-time 2-parity-drive nonsense. Use SnapRAID on those multiple data drive pools.
  • Keep the convenience of user shares, of course (the pooling).

Result:

  • Way better (if not the best) protection at whatever level user wants.
  • You still get the real time protection using cache pool(s).
  • Mover moves the files from cache during the night hours, and updates Snapraid parity. Copy to the pool(s), update parity, delete from cache... easy.
  • You still have the convenience of adding/removing any type of drives whenever you want.
  • You get the everyday usage speed with cache pools.
  • You avoid I/O bottlenecks during parity checks with smaller Snapraid "protection-pools".
  • Unlimited drives, as it actually should be, in a storage software solution.

If/when someone makes a user-friendly solution as described above, unraid will lose customers, imho. Because that would be the best solution for home servers. For anyone who wants more, there's Freenas.

 

 

 

 

 


I am also a 36-bay Supermicro user, and while adding more drives would be nice, until all my drives are 8TB, I am ok with the limit. Honestly, the number of unRAID users who would like to have more than 30 drives in their array is probably minuscule, so what incentive is there for Lime-Tech to do it? Also, the number of unRAID users who have more than 30 drives in a single system is probably quite small; we aren't the target market they are after. Perhaps unRAID isn't the right solution for you?


I really don't think the reason multiple unRAID pools are not supported is a profit motive. The number of drives supported by unRAID has steadily increased. When I bought it was 16. Why would LimeTech continue to increase the count without raising the price? And they recently implemented Docker and VM functionality. It would have been easy for LimeTech to EOL unRAID and release "unRAID Ultimate" or something that includes NAS and the Docker/VM functionality. LimeTech has had ample opportunity to monetize the relationship with existing customers but has consistently NOT done so.

 

I think the issue is two fold - one technical and the other demand. unRAID utilizes (some might say cannibalizes) the software RAID feature of Linux. I think bending it to support multiple pools may be complex. And, frankly, we don't get many users that want that many disks in their server.

 

If you want to use a different product - have at it. I did my homework and bought unRAID, and have never had a reason to look elsewhere. If FreeNas has the features you want, by all means use it. The forum is not really the place for messages such as this one. LimeTech infrequently monitors posts unless in the release threads or if escalated to them. Feel free to send a PM or email to them. 


@shEiD that's a very well written post. I like the ideas.

 

There's the technical and the business side, here's a suggestion that is hopefully a good marriage of both and a win for Lime-tech.

  1. Allow two pro licenses to be assigned to the same USB GUID. Or add a whole new license offering.
  2. Copy and paste much of the code to allow for a second 30-drive-max, dual parity array. I don't see why it couldn't stop/start one array completely independent of the second array.
    1. The merge/fuse of user shares could still combine from both the first and second array. If one array is stopped, those files just won't be visible.
    2. Parity, whether single or dual, is unchanged, still tied to the array. So if you want 2 arrays of up to 30 drives max, dual parity on both would mean 4 drives. Or run one array with dual and the second array with single parity; it shouldn't matter.
    3. Extra licensed feature: add a hot-spare disk. It's pre-cleared and ready to go to replace a failed disk on either the first or second array. Either with user acknowledgement or, once confidence in the feature builds, automatically.
  3. Cache drives remain unchanged.

Thank you for the responses, guys.

 

@bjp999 My thinking that profit is the reason for not having multiple pools is simply speculation. But what makes me think this way is that I have not seen any good explanation for the drive limit, especially on the most expensive Pro version. Actually, unraid has pricing tied to exactly that - the drive limit. If profits are not the reason - remove the limit on the Pro licence. What's with this 30-drive limit? Why 30, exactly? If unraid can successfully protect 5 or 10 or 28 drives with the same double parity, why not more? What's the difference, anyway, 28 or 48? You are probably hitting I/O bandwidth limits on parity checks anyway.

 

The $129 is not exactly cheap. Especially if you need to buy more than one licence. And even though I have been using unraid only for a couple of weeks, I have been checking the forums for more than 10 years, probably. And I know there are tons of people having bought multiple licences and running multiple unraid boxes. There was even a 2 licence bundle before, iirc, no? I'm pretty sure there was, maybe a long time ago.

 

Why aren't the forums a good place to talk about features and feedback? Methinks it's a good place to toss some ideas around, especially before contacting the company with half-baked requests.

 

@BRiT I know it would take some work. IIRC, I have seen emhttpd blamed for many things, as being the reason it's too hard to implement this or that... I may be wrong, but methinks the biggest reason unraid is hard to change and evolve is that it's paid software, closed source, and using a custom distro, as I understand.

 

@Lev @dgtlman Actually, I am not willing to pay for multiple licences. I especially would not be happy to pay for multiple licences to be used on a single machine, in whichever way that would work - licence per pool, or whatever. Like I said, imho, $129 is expensive enough. Actually, for that money I would like to be able to use that same licence on multiple machines. For personal use, that is; for businesses the licence could be different. But that's the point - if Lime-tech would update unraid to bring it up from the home-hobbyist level, it would be more attractive for business use, and the licensing and money would be different. I know I may get booed for saying this, but for business use, imho, it would be silly to use unraid when there is freenas out there, which is way more secure, faster, and most of all - free. But for home use, yes, unraid is acceptable, if you have a small server without a lot of drives.

 

Anyway, my previous post was just frustration. I am in the middle of migrating my windows server to unraid (100TB+ of data). I chose unraid because I already had a license, and it was cheaper (for now) to buy 3 new 10TB drives to replace 3 smaller 3TB drives (which are in perfect condition, btw), just to be able to "fit" my data into that 28 data drive limit on unraid. So, when I saw the exact topic on the 30-drive limit, and then read the nonsensical answers, like - more drives more problems, and maybe you don't need that many drives... My reaction simply was - what the hell? That's it.

 

The problem is basically, I chose unraid, even though I already have a server, that I could connect 80 drives to easily, today. I chose unraid, because it has a pretty simple webUI, which is a must for me - because I have no experience with linux, whatsoever. If I had, I would probably do this: The Perfect Media Server 2017

 

And that's the point I was trying to make before. I may be wrong, being a linux newb, but it seems to me that all the software needed to make a perfect (or way better) unraid is out there already, and all of it is free and open source. Like I said, all you need is:

  • some good linux distro
  • btrfs for cache pool(s)
  • snapraid for parity
  • mergerfs for easy and flexible pooling
  • a smarter mover script using cron, with options for multiple cache and data pools and the ability to ignore folders, etc.
  • docker - no problem, and control with something like Portainer is very nice
  • KVM - no problem
  • write some webUI to manage all this, if you want, but is that necessary? You could easily run this on some linux distro with a Desktop.
  • that's it, isn't it?

I am not a programmer, just a hobbyist. And I've got no linux experience. So that's a no-go for me, for now. But that solution sounds pretty doable to me. And awesome. And free. I bet it's gonna be made by someone, and pretty soon...


If I Remember Correctly, the drive number limit is a technical one. unRAID was stuck for a long time at 26 drives max, including the flash drive, simply because LT was a one-man operation and wasn't ready to support the case where the drive names under the hood exceeded the sda-sdz naming (it wraps around to sdaa).
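For reference, the kernel hands out disk names in bijective base-26: sda through sdz, then sdaa, sdab, and so on past sdzz. A small illustrative sketch of that naming scheme (not Unraid code):

```python
def drive_name(n):
    """Return the Linux block-device name for the n-th disk (0-based):
    0 -> sda, 25 -> sdz, 26 -> sdaa, 701 -> sdzz, 702 -> sdaaa."""
    s = ""
    n += 1                       # bijective base-26 has no "zero" digit
    while n:
        n, r = divmod(n - 1, 26)
        s = chr(ord("a") + r) + s
    return "sd" + s

print(drive_name(25))  # sdz  - the old 26-drive ceiling
print(drive_name(26))  # sdaa - the 4-letter case that needed code changes
```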

 

There is no custom distro underneath. unRAID is primarily based on Slackware - one of the oldest surviving distros, rock stable and simple. Of course some components are bumped up to meet user needs and demands, but the core principles of the distro are all there.

 

1 hour ago, shEiD said:

Thank you for the responses, guys.

 

@bjp999 My thinking that profit is the reason for not having multiple pools is simply speculation. But what makes me think this way is that I have not seen any good explanation for the drive limit, especially on the most expensive Pro version. Actually, unraid has pricing tied to exactly that - the drive limit. If profits are not the reason - remove the limit on the Pro licence. What's with this 30-drive limit? Why 30, exactly? If unraid can successfully protect 5 or 10 or 28 drives with the same double parity, why not more? What's the difference, anyway, 28 or 48? You are probably hitting I/O bandwidth limits on parity checks anyway.

 

The $129 is not exactly cheap. Especially if you need to buy more than one licence. And even though I have been using unraid only for a couple of weeks, I have been checking the forums for more than 10 years, probably. And I know there are tons of people having bought multiple licences and running multiple unraid boxes. There was even a 2 licence bundle before, iirc, no? I'm pretty sure there was, maybe a long time ago.

 

Why aren't the forums a good place to talk about features and feedback? Methinks it's a good place to toss some ideas around, especially before contacting the company with half-baked requests.

 

Talk all you want. Just leave out the whining about moving to FreeNAS if it isn't implemented.

 

If you want to make a difference, send a convincing note to LimeTech to tell them that this enhancement is more valuable to them than Ryzen / Threadripper enhancements and whatever else they are cooking in Colorado.

 

I think if you want to go beyond 30, you'd have a decent chance of getting the count increased. It's easy to increase the count. There was a bit of a challenge going past 26 (after sdz you go to sdaa - 4 letters, not 3 - and that required some changes. But since that was overcome, there are a lot of drives before you get past sdzz :) ). The question I think is in Tom's mind is: how many drives is too many, and at what point am I not comfortable that my redundancy scheme is sufficient to protect my users' data? "I can" is different than "I should", and putting a loaded gun into his users' hands is not his goal. I think he cares - and I think dual parity is giving him confidence to continue to raise the drive count. I think 36 or even higher is possible in the future.

 

As for 3 parity drives, he has said in the past that once you get past 2, the mathematics gets much more complex and performance would take a big hit. So I think that is unlikely. Plus I don't think it matters. It is too easy for Parity to get corrupted in the single disk failure model. And if one parity gets corrupted - ALL parities get corrupted. It's not like an independent mechanism. I personally think 2 parities is not very useful even as you get into larger arrays. Someone called it preparing for a plane hitting your house. I think that's pretty accurate.

 

Let's take an example. Let's say the chance of a drive failure in an array in one year is 50%. And let's say there is a 5% chance of a failure corrupting parity. And that it will take 2 days to recover from a failure (if you can). So you have a 5% chance of data loss from corrupted parity, and a 0.26% chance of a second failure occurring during the 2 days you are trying to recover. So the second parity has reduced your risk of data loss from 5.26% to 5.015%. A third parity could reduce the risk by another 0.015%. Remember that one parity is protecting you from 94.73% of disk failures. Is it worth the cost of another 8 or 10TB drive to gain another 0.26%? How about that 3rd parity? :) The flat 5% risk of a failure corrupting parity can never be reduced. And these are some pretty conservative numbers. They discount single drive recovery efforts, which have a VERY high chance of success. I'll leave it to the reader to decide how much they're willing to pay for the small percentage improvement.
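The back-of-envelope arithmetic above can be reproduced roughly. Note the inputs (50% annual failure chance, 5% parity-corruption chance, 2-day recovery window) are the post's illustrative assumptions, not measured figures:

```python
p_fail_year = 0.50       # assumed chance of a drive failure in one year
p_parity_corrupt = 0.05  # assumed chance a failure corrupts parity
recovery_days = 2        # assumed time to rebuild after a failure

# Chance a second drive fails inside the 2-day recovery window
p_second = p_fail_year * recovery_days / 365   # ~0.27%, close to the 0.26% quoted

# Single parity: data loss if parity is corrupt OR a second drive dies mid-rebuild
risk_single = p_parity_corrupt + p_second

# Dual parity removes (most of) the second-failure term, but not the flat 5%
risk_dual = p_parity_corrupt

print(f"second failure during rebuild: {p_second:.3%}")
print(f"single parity risk: {risk_single:.3%}, dual parity risk: ~{risk_dual:.3%}")
```

The point of the arithmetic survives the rough inputs: the parity-corruption term dominates, and a second (or third) parity drive only shaves the much smaller overlap term.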

 

It was hoped that dual parity would provide triangulation - that if there was a parity error, unRAID would be able to tell you which disk caused it. That would be great info, and I believe the data is there; it just needs some higher-order math to figure it out. Knowing the disk would be helpful. And then there is the holy grail - a tool that would tell you, based on a sector on a disk, what file was impacted. With these two features, dual parity would be hugely useful. A parity error could be traced back to a file! But in its current implementation, it isn't worth the cost of the drive to me.

 

But that's me. I've been using unRAID for almost 10 years, know most of the tricks, and have recovered from numerous nasties. For a new user that has cabling issues, no hot-swap bays, and doesn't know what they are doing, dual parity can provide SOME protection from things that have nothing to do with drive failures. This is actually a much more valid use case. NEW USERS SHOULD HAVE DUAL PARITY, but they might consider eliminating it once cabling is secure, hot-swap cages are in place, and the risk of self-inflicted wounds is lessened.

 

As for multiple parity pools - I think it's a good idea, better than dual parity as currently implemented. But I think it would be very difficult to implement - and even though I think it is a good idea, I don't think I'd actually use it. And then there are questions like: would user shares work across the pools? Or be separate? Or does the user get to choose? Would they start and stop independently? How about Dockers - two different sets with independent settings? VMs? It all gets complicated. Is it worth it? My vote would be no. Set up a second server if you need more than a 300T array. I'd much prefer the triangulation features.


Thanks for the replies, guys.

 

@ken-ji Thanks for the info. I'm not a linux guy, so sorry for maybe misunderstanding some things.

 

@bjp999 Very good points. And I actually agree with most of them. Thanks for actually explaining the math and percentages. It is really quite hard to do as a new unraid user. Actually, the second parity - how it works and how it helps to recover - was exactly the point I was and still am not 100% sure about. I actually assumed that the second parity would provide the info on which hard drive was "bad". I guess I was wrong. But then I seriously have no clue what that second parity does. Of course, this problem could be sorted out by implementing a proper checksum system. And talking about checksums - I have read someplace that one of the checksumming scripts/plugins has the ability to help achieve this, by providing the information on which file has been corrupted during the repair... is that true? I tried to look through my notes, but can't find the link where I read this.

 

The extra 0.26% protection for the price of an additional drive sounds really silly, when you put it that way. But I have had 2 drives die on me in a 1-2 day period at least 3 times over the years. So even from my own experience - this is not that uncommon, especially with a shitload of drives. And I have "some" :)

 

What worries me more is: if I can't successfully recover from a failure 100%, that means some file(s) got broken. If unraid has no idea what file(s) or on what drive(s) - and it sounds like that's the case even with double parity - then my paranoia and OCD will kill me :) Or am I misunderstanding how double parity and recovery work again?

 

In this case, I assume, I could at least probably use the hashes made by the Dynamix File Integrity plugin to find the broken files?

 

As for multiple pools - I stand by my opinion - it is essential, imho. Multiple pools would enable smaller "protection-groups". For that I would gladly sacrifice more drives. Let's say 1 parity for every 16 or even 10 drives. Multiple pools would greatly speed up parity checks, as I understand. That's just off the top of my head at the moment.

As for pooling the pools - why not use the same system? You have multiple separate user shares now. You can set up every share individually - set the included/excluded drives. Nothing has to change when it comes to user shares. The only thing that changes is setting up separate parity-pools. I mean, you could set one parity to protect drives 1-16, another to protect 17-32, and so on... That's it.

 

I assume the parity drives and the whole protection scheme have nothing to do with the user shares implementation in the current system... The only changes would come if they implemented multiple cache pools, which would be awesome. One could be protected (RAID1), another could be simple JBOD... Would be awesome.

 

And the mover script would need to be made a little bit smarter and with options.

 

Also, I'm not whining about moving to Freenas. I know you won't like me saying it, but it is a fact - unraid and freenas are in different leagues. It's not unraid's fault or an accomplishment of Freenas, as it were. It's ZFS - it simply has no equal. Overall, unraid has better usability when it comes to anything other than the FS - docker, VMs - everything is easier, imho. So don't go biting my head off, now.

If/when I want to move to freenas, I'll simply do it, no whining required :)


Perhaps the real reason that dual parity is useful is that for unRAID to rebuild any disk with single parity, it has to be able to read every sector on the remaining disks.  With dual parity, if any portion of a second disk is unreadable, the missing data from that read error can still be reconstructed and used to rebuild the data for the disk being rebuilt.  Granted, this doesn't happen often, but as drive counts go up, it becomes more probable.
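That "more probable as drive counts go up" point can be quantified roughly. A sketch, assuming the oft-quoted (and in practice pessimistic) 1-per-10^14-bits unrecoverable-read-error spec and 8 TB drives - both illustrative assumptions, not measurements:

```python
ure_rate = 1e-14         # assumed unrecoverable-read-error rate per bit (vendor spec)
drive_bits = 8e12 * 8    # assumed 8 TB drive, expressed in bits

# Chance of at least one unreadable sector while reading one full drive
p_one_drive = 1 - (1 - ure_rate) ** drive_bits

# A single-parity rebuild must read every surviving drive end to end
for n in (5, 15, 29):
    p_any = 1 - (1 - p_one_drive) ** n
    print(f"{n} drives read in full: {p_any:.0%} chance of hitting a read error")
```

Real-world URE rates are far better than the spec sheet, so these percentages overstate the risk, but the scaling with drive count is the point: every extra drive is another full-surface read the rebuild must survive.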

 

I ran the calculations for single and dual parity and provided some other interesting statistics in another thread a while back.  You can read it here:

Plus, it would be surprising to learn what percentage of unRAID users have never set up (and are not actually using) the Notification system.  I have been helping users with problems for a while, and it is apparent that many folks don't use it.  Some, apparently, haven't even set up periodic parity checks. The first time they are aware of a problem is when the server has accumulated several problems severe enough to cause file serving issues.

 

And I didn't even address the issue of hardware failures besides the hard disks.  I am not really into commercial server environments, but I am not hearing that those folks are increasing the drive counts in their servers.  If high counts of hard drives had some huge advantage, why aren't they increasing drive counts and bragging about the increases? Apparently, they are going to larger-size drives, and the limited data that I have seen indicates that these bigger hard drives have about the same failure rate as smaller ones. 
 

1 hour ago, Frank1940 said:

Perhaps the real reason that dual parity is useful is that for unRAID to rebuild any disk with single parity, it has to be able to read every sector on the remaining disks.  With dual parity, if any portion of a second disk is unreadable, the missing data from that read error can still be reconstructed and used to rebuild the data for the disk being rebuilt.  Granted, this doesn't happen often, but as drive counts go up, it becomes more probable.

 

It doesn't happen that often that a REAL read error will occur on rebuild. But it is INCREDIBLY common that a user exchanges a bad drive or upsizes an existing one, knocks a cable askew, and the result is an apparent failure. Dual parity lets the rebuild complete even if that second drive gets knocked offline. Then you can fix the second issue after the rebuild. With single parity the recovery is still very possible, but certainly more complex. 

 

Good cabling and drive cages will stop this phenomenon much more effectively, but dual parity is better than nothing. I sometimes suggest NOT doing dual parity and instead getting drive cages. They seem unrelated, but this is the connection. And the price is similar.

 

7 hours ago, shEiD said:

What worries me more is: if I can't successfully recover from a failure 100%, that means some file(s) got broken. If unraid has no idea what file(s) or on what drive(s) - and it sounds like that's the case even with double parity - then my paranoia and OCD will kill me :) Or am I misunderstanding how double parity and recovery work again?

 

You don't get 100% protection from anything. But drives themselves are pretty darn reliable - certainly if you look at a failure window of 1-2 days and assume monthly parity checks, the chance of another drive failing while trying to recover from the first is awfully small. A single "rough spot" on a different disk can subtly corrupt the recovery, but with the file integrity information you can figure out what file(s) were impacted and restore from backup or another source. And if a drive fails and the recovery fails, you always have the failed disk. Having a disk fail so badly that you can't get data off of it is quite rare. You might have a bad spot that affects 1 or a small set of files, but typically the lion's share can be salvaged.

 

The problem is that dual parity does not address the main reason single parity recoveries don't work - failing disks corrupting parity. You'd need something more akin to PAR blocks for such a recovery. I experimented with this a while back, but it took forever to build and there was no way to keep it updated without re-doing it. This same phenomenon can happen with Freenas.

 

So while not perfect, single parity is pretty darn effective. You have multiple angles to pursue recovery, in the proper order. People tend to lose data when they don't know what to do and make mistakes in the process. But even then, the forum experts have a remarkably high success rate at getting all or most of the data back.

 

Dual parity, as I've said, helps more with cabling issues and might be a help once in a while. But without dual parity, the recovery might have been successful anyway. 

 

I sleep very well at night with the parity scheme. 

 

7 hours ago, shEiD said:

In this case, I assume, I could at least probably use the hashes made by the Dynamix File Integrity plugin to find the broken files?

 

I have my own file integrity scripts and don't have experience with that plugin - but YES, having a system for capturing checksums for all disks is very worthwhile. If something does go wrong, you have a means to identify which (if any) files got corrupted. Since parity operates below the file system level, it is quite possible for silent corruption to creep in, evidenced by an occasional parity error. You have to compare the checksums when you see one of these happen to determine if something unexpected has corrupted a file (more typically, a parity error is only caused by a hard shutdown, and parity, not the data disks, is the cause).

 

7 hours ago, shEiD said:

As for multiple pools - I stand by my opinion - it is essential, imho. Multiple pools would enable smaller "protection-groups". For that I would gladly sacrifice more drives. Let's say 1 parity for every 16 or even 10 drives. Multiple pools would greatly speed up parity checks, as I understand. That's just off the top of my head at the moment.

As for pooling the pools - why not use the same system? You have multiple separate user shares now. You can set up every share individually - set the included/excluded drives. Nothing has to change when it comes to user shares. The only thing that changes is setting up separate parity-pools. I mean, you could set one parity to protect drives 1-16, another to protect 17-32, and so on... That's it.

 

Essential? Not the word I'd choose. A useful enhancement? Maybe. It would not much affect parity check times, which are based largely on the parity drive's size (all the other disk I/O happens in parallel).

 

7 hours ago, shEiD said:

Also, I'm not whining about moving to Freenas. I know you won't like me saying it, but it is a fact - unraid and freenas are in different leagues. It's not unraid's fault or an accomplishment of Freenas, as it were. It's ZFS - it simply has no equal. Overall, unraid has better usability when it comes to anything other than the FS - docker, VMs - everything is easier, imho. So don't go biting my head off, now.

If/when I want to move to freenas, I'll simply do it, no whining required :)

 

Freenas likely suffers from some of the same fundamental issues that unRAID does. Maybe there is no one over there able to point out the warts and wrinkles. I know the forum there is not nearly as active as here. If that's your preference - go for it. I really don't mind someone discussing specific features of competing products. But the ultimatums are annoying.

 

Enjoy your array! (And if you move to Freenas - good luck!!)

Link to comment
  • 2 years later...

Almost 3 years later.... I've returned to this thread looking for answers to go beyond the drive count limits.

 

I have a lot of drives. Maybe 60 or so. Presently I run Unraid on bare metal, with two nested VMs that also run Unraid - a total of three Unraid Pro licenses to make this possible, each with its own USB key passed through to its VM. It's awesome and it works @limetech. The bare metal is a 36-bay SuperMicro server with external SAS cards that attach to a SuperMicro JBOD array.

 

I want to add more drives. A lot more drives. I recently came into possession of 4 of those Backblaze 45-drive Storinators. The backplanes and SATA expanders can be mapped to two Unraid instances. It'd be split with 30 disks to one Unraid instance and 15 to the other. That 15 seems inefficient and is nagging at me to find something else so I can have a single 45-drive array.

 

What's the answer? I've been researching SnapRAID. Seems viable. If I explore deeper I'll start a thread on my findings and link back to this one.
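For context on why SnapRAID looks viable here: it is configured with a plain text file and supports up to six parity files, so a 45-drive chassis fits in one array. The layout below is only a sketch with made-up mount points, not a tested config:

```
# /etc/snapraid.conf - example only; adjust paths to your own mounts
parity   /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity
3-parity /mnt/parity3/snapraid.3-parity

# Keep several copies of the content file (the file list + checksums)
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

data d1 /mnt/disk1/
data d2 /mnt/disk2/
# ...one line per data disk, up to d42 for a 45-bay box with triple parity
```

`snapraid sync` then computes parity, and `snapraid scrub` verifies the stored checksums, so file-level integrity checking comes built in.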

 

This thread is important. Years have gone by, and looking at where I am now versus where I was when I first posted here, the drive limit hasn't changed. Unraid is still awesome, but my needs have changed. I'm happy to purchase more Unraid licenses, but I don't want to manage multiple Unraid servers. Three is enough.

 

Any ideas on what else to explore or consider?

  • Like 1
Link to comment

I did not read the whole thread, but if you are seeking an enterprise solution, use something like Ceph.

https://ceph.io/

 

I am using Unraid in my home setup as it is an incredibly flexible and cheap solution for backing up and archiving data. Nothing is as flexible and cheap to set up, in my opinion.

At work, I set up a Ceph cluster; there you can have hundreds of drives across dozens of servers, with failover and parity spanning multiple servers. You can reboot parts of the cluster for updates with no downtime, and virtual machines can use Ceph as underlying network storage, for example with Proxmox (built in). If you need additional backups, use a second technology like GlusterFS, which can also run over multiple disks on multiple servers; if there is a problem with Ceph, your second copy on GlusterFS is not affected. Never confuse backups with failover!

With that you have a scalable, enterprise-ready system: more than 2 parity disks, more than 28 data disks, server and even location failover, plus backups. None of this can be provided by Unraid, and Unraid should not be designed for it, as that would complicate the setup for every home and small-business user - and it would compete with Ceph and GlusterFS for no reason, since those systems already solve the problems you seem to be trying to solve.

Ceph also supports multiple pools with different redundancy settings and different synchronization strategies, and distinct SSD and HDD pools on the same infrastructure for high-performance and high-capacity tiers. You can choose which disks go in which pool, how parity is calculated, and more. You can use Ceph as block storage or as file storage with CephFS, and it has integrity verification built in (scrubbing). And and and... everything enterprise-ready storage needs! I am sure it is possible to run SMB sharing on top of CephFS pools, though I am not sure about SMB failover, so the gateway running SMB might have downtime on reboot. Maybe SMB can be made highly available via keepalived (VRRP) and conntrackd; then the SMB shares would fail over too, like the rest of the storage system.

 

I don't want to advertise these technologies. I am not involved in developing any of them, but I use them all where applicable: Unraid at home; Ceph for high-performance, highly available, highly scalable enterprise VM/container storage at work; GlusterFS for backups of Ceph VMs/containers at work; keepalived for failover of some services (but not SMB); conntrackd on VyOS firewalls for connection failover across reboots (but not with SMB). I just want to say, there are solutions out there!

 

@shEiD

Use the right tool for the right task. My opinion! Learn Linux administration - all the tools I mentioned are free of charge (Unraid costs money, Ceph does not), and professional support can be bought. Proxmox has a GUI for Ceph management, and Ceph has its own dashboard if you want to run it standalone, but for advanced management you need to learn Linux administration. For performance, see this: https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark

 

PS: yes, the thread is old, but for someone returning and being unhappy with Unraid's limits (for example @Lev), I would like some solutions written down here, so nobody stays unhappy with Unraid without knowing there is another tool for the task.

Unraid is great, but I cannot repeat it enough: use the right tool for the right task. If all you have is a hammer, everything looks like a nail. But screws need a screwdriver. :D

 

Edited by Addy90
  • Like 4
Link to comment
  • 1 month later...

@Addy90 Now that I've got more free time on my hands, I'm going to give Ceph a try. I ordered some more RAM for the 4-server cluster I'll be setting up for it. In the meantime I'm reading and watching tutorial videos. Thanks again for your post. Using Ceph at home is a big leap from Unraid, but in the years ahead it will prove to scale well beyond my yearly budget for additional hard drives 🤣

Link to comment