Trying to figure out if unRAID is right for my needs


vertigo


I'm going to be building a new computer soon, and am trying to decide how to do it. I'm looking for advice both on general setup and on whether or not I should use unRAID. I'm typically the only one accessing my media, but I want the ability for others to do so, and I also want to be able to easily access it remotely once I get internet capable of streaming. I don't transcode content to watch locally, but may need to for watching over the internet. In the past, I've had a separate server for my movies, etc, but currently, due to larger capacity drives, I have them in my main computer, which I do prefer for multiple reasons. I'm torn on whether to keep it that way or split it back up into a main/gaming rig and a server. As I see it, the advantages to keeping it all in one are:

+ It would be cheaper (only need one case/mobo/cpu/ram/psu)
+ It would be more power efficient (only one computer running instead of two)
+ Faster transfer speeds (not limited by network)

Whereas the disadvantages are:

- It limits me more on cases (need a case that can hold more drives (server/movie drives plus 2 3.5" drives and 1-2 SSDs that would otherwise be in the main computer))
- Likely slightly more noise from main PC, which would be close enough to hear, vs being able to stick server in an unused room where noise wouldn't be an issue and allowing main PC to be quieter due to less drive noise and being able to run fans slower due to not having to push air through and cool the drives
- More traffic on the SATA bus, which could cause issues (would be transferring between two internal drives, meaning the read traffic from one and the write traffic to the other will all be on the same bus, which causes issues on my current computer, e.g. Windows will sometimes freeze for a few seconds, mouse occasionally freezes for half a second to a second, videos will stutter and freeze, and I would really like to get rid of these issues). Running a separate server would mean transferring from a drive on the main computer to a drive on the server, so only the read traffic will be on the main computer's SATA bus, which should (in theory) go a long way to reducing the problem.
- Less separation of data, leaving more drives exposed to potential viruses, ransomware, etc that the main computer may be afflicted by (low concern)
- Possible issues and definitely more work with sharing the media from different OS's (I plan to multi-boot Windows and one or more Linux distros), not to mention I hate Windows sharing and with a separate server could avoid it for the media drives
- Having all the drives spin up every time I reboot (whether installing software/updates or switching OS's) is less than ideal for the power supply and the drives, and would likely lengthen my boot time, which I like to have as short as possible
- Media files would be unavailable when rebooting (not a big issue, but would be nice to not have availability impacted by this)

My understanding is that unRAID would help with the last four disadvantages, as I would run a NAS VM and a gaming PC VM, so that would separate the media drives so they couldn't be infected by something in the gaming VM, it would simplify the sharing of the media drives as they would just be done once in the NAS VM, and it would allow me to reboot the main PC VM without affecting the NAS VM.

I have several concerns and questions, however, about using unRAID:

1) It's another layer of complexity, which adds another point of failure. This is especially concerning due to the possibility of the flash drive failing, leaving me with a non-functioning computer until I get a replacement flash drive and load a replacement key on it, and then hope it doesn't happen again in the next year or I'll have to contact support and hope that I'm able to get another replacement key and that it doesn't take a long time to do so.
2) Slow transfer speed without using a cache, which I don't want to have to do, both due to cost and due to the fact I don't want to risk the cache drive failing before it transfers everything to the platter drives. I prefer to know that once something is done transferring, it's actually done. I also use FastCopy to perform verification when doing file transfers, and using a cache, it could verify a successful transfer to the cache drive but then the file could get corrupted between the cache and the destination drive. I'm wondering if this only applies to the use of parity drives, as that seems to be the cause of the slow speeds. Since I don't want to use a parity drive (I simply do a 1:1 backup on separate drives), would I get normal speeds, the same as I would get doing file transfers in an OS not running in an unRAID VM? Is unRAID even meant to be used that way, or is it only meant for RAID-like purposes?
3) What would the speed be like in the VMs (especially the main one) vs just running the OS's on their own? In the LinusTechTips videos, he shows that despite running in a VM, the gaming performance is still excellent, but I wonder if it's really running at or near 100% or is it only performing at maybe ~80% and that just happens to be more than sufficient for his situation. In other words, if my video card is able to provide 60 FPS in a game running in Windows, would it still provide 60 FPS running in Windows under unRAID, or would it be more like 50 FPS? Similarly, if a video encode takes an hour in Windows, would it still take an hour under unRAID, or would it take several more minutes or even longer? I just find it hard to believe you would get 100% or even >90-95% efficiency running in a VM.
4) Are CPU core and memory assignments permanent once a VM is created, or can they be changed on the fly? For example, I'm planning on using a Ryzen 8 core for my new build, so if I were to assign 1 core (2 threads) to the NAS VM and 7 cores (14 threads) to the main VM, could I change that later, maybe to shut down the NAS VM and give the main VM all cores for encoding or to give the NAS VM more cores if it turns out 1 isn't enough (I can't imagine it wouldn't be)?
5) While I don't want to use RAID for my server/media drives, I do want to run the two 3.5" drives that will be in my main PC (if I do separate boxes) or tied to the main VM in RAID 1. Would there be an issue running normal RAID in an unRAID VM? I've never run RAID before, so I'm not sure how exactly I'm going to accomplish this yet, especially since I'm going to be multi-booting (any suggestions are very welcome). Maybe the best way to do it would be to use unRAID and assign them to the NAS VM simply so they'll always be under one OS.
6) If I were to buy a Plus key, and later needed a Pro key, would I just pay the difference or would I have to pay the full Pro key price?

I'm sure I've probably forgotten something, but this is already really long as it is. Thanks for any answers or advice. This has been driving me crazy trying to figure all this out.

Link to comment

I am not sure I can answer all your questions but I'll answer what I can.

 

I would not worry too much about the flash drive dying; there is a way to back it up so that if that does happen, it's easy enough to swap out. I've had three unRAID servers running for over a year with no issues with my flash drives.

 

If you want to run Dockers, and it sounds like you want to run something like Plex, then a cache drive is a great place for them to get installed. Many of us use SSDs for our cache drives, and you can configure them in RAID 1 for performance and redundancy as well.

 

Using a parity drive, while not necessary, is recommended to protect your data; in fact you can have two parity drives, as many of us do, but ultimately it's up to you. As you know, with the way unRAID works, if you lose a drive and have no parity drive or backup, you only lose whatever was on that particular drive.

 

One of my unRAID servers was built for the sole reason of running a Windows 10 VM for gaming. It has a quad core i7, 32GB of RAM and an Nvidia 960, and it's like playing on bare metal, I kid you not, so don't worry too much about the performance; it's real. Isolating your OS and programs/games on a dedicated SSD is key to VM performance. This drive is not part of the array but outside of it.

 

Once you assign cores to a VM you cannot change them without powering down the VM.

 

Don't confuse hardware RAID with unRAID; unRAID is software RAID and functions quite differently from hardware RAID. There are a number of FAQs on this subject and I would refer you to those. Suffice it to say that if you want to protect your data in unRAID, you should have a parity drive.

 

You pay the key upgrade price when you want to upgrade, which is cheaper than buying the key you need at full price as if you were buying it for the first time.

Link to comment

I have a separate main PC vs unRAID box. I have begun to think of combining them, but the geographic location of the server (in the basement) would make physical connection of a monitor and keyboard/mouse a challenge. My server is noisy and I would not move it upstairs. Remote access is an option, but I'm spoiled by instant screen updates, so I'm not really excited by that option.

 

I also realize that sometimes my server or PC is down for one reason or another. Infrequent, but I don't want to be without a usable option. More than once my PC has become the destination of data from a problematic drive, or come into play in a diagnostic or recovery exercise for my server.

 

So I'm not in a hurry to consolidate. But it's up to each person to assess given their circumstances.

Link to comment
10 hours ago, vertigo said:

6) If I were to buy a Plus key, and later needed a Pro key, would I just pay the difference or would I have to pay the full Pro key price?

Neither. It's the difference plus a small fee, for administration I presume. If you are sure you will be upgrading later, I'd go ahead and get the Pro. In the grand scheme of things, the price of the license is pretty small compared to the rest of the parts.

10 hours ago, vertigo said:

1) It's another layer of complexity, which adds another point of failure. This is especially concerning due to the possibility of the flash drive failing, leaving me with a non-functioning computer until I get a replacement flash drive and load a replacement key on it, and then hope it doesn't happen again in the next year or I'll have to contact support and hope that I'm able to get another replacement key and that it doesn't take a long time to do so.

The flash drive is not written much in the daily operation of the machine, only to boot the OS into RAM and save configuration changes. I've been using the same flash drive for almost 10 years in one of my servers. It's been upgraded all along and transferred from server to server as I upgraded. If it does die, replacement is super easy as long as you have a current backup. The license transfer is completely automated, and takes seconds.

 

Just be sure to run the flash you wish to use in trial mode for a week or two to weed out a bad flash drive or incompatibility.

Link to comment

Thanks everyone for the replies.

@ashman70 - As I stated, I don't want/need a parity drive, since I back up my data onto separate drives. I do this because it's a real backup, whereas RAID is not, and I don't want to waste money on drive space lost to parity when the data is already backed up. The drives I intend to put in RAID 1 also have another drive to which they're backed up, as that's my really important stuff. And yes, I know unRAID is different from hardware RAID, which, from what little I know about running RAID, I'm pretty sure I don't want to do. I want to run those drives in a software RAID 1 setup; I just don't know if this could be done via unRAID, or if it only does parity setups.

I don't know that I'll need to use containers, and I don't currently use or plan to use Plex, since I prefer Kodi, though I may end up using it for transcoding for playing my media over the internet. Even then, though, from what little I know about containers, I don't think that'll be necessary or even beneficial, as I could just run Plex on the server VM.

I'm not clear on what you meant when you said "Isolating your OS and programs/games on a dedicated SSD is key to VM performance. This drive is not part of the array but outside of it." I plan on getting an NVMe SSD for running all the OS's (Windows, whatever linux distro(s) I decide to use for my main PC, and whatever linux distro (or maybe something like FreeNAS) I decide to use for the NAS VM) and using my current SATA SSD, and possibly getting a second one, for games and maybe to use for active projects (encodes, mkv muxing, etc). Would this work, or are you saying I'd have to use a separate drive for each VM?

I'm fine with having to shut down a VM in order to change the resource allotment, that makes total sense. But you're saying I could in fact shut down the server VM and give all cores to the main PC VM and switch back and forth at will, I would just have to shut down both VMs first?

Link to comment

@bjp999 - Since you're considering consolidating, and you don't want to move the server out of the basement, I assume that means you are thinking of having your main PC in the basement as well? That should be doable with a long HDMI cable and an active USB extension cable or with HDBaseT and I think you can do USB over the network as well.

Link to comment

Three comments:

1) You have several mentions of RAID above.  It's really better to get rid of those notions.  unRAID isn't RAID in the traditional sense, and unRAID and traditional RAID really don't mix.

2) I'm a firm believer in separating my "main machine" and my unRAID server (for now).  This isn't a comment on unRAID's virtualization capabilities, they're great.  But as a broader statement (and this is just my opinion), I feel that the current state of virtualization with hardware pass-thru can be finicky (perhaps even persnickety) and I just don't have time for that.  Great potential, but more in the enthusiast phase rather than mainstream.  Virtualization without pass-thru is mainstream, and virtualization with pass-thru can be rock solid - but you have to be prepared to work at it.  So, I'm perfectly happy having my unRAID server on all the time, and my laptops/desktops in S3 sleep until they are needed.

3) If you don't want to use a cache drive to cache writes to the array (FYI, I don't either) and you don't want to use a parity drive (hmm, why not?), then writes to the array will be essentially full speed, with no parity overhead.  Personally, I use a parity drive because it provides redundancy (not backup) and expedited recovery from a failure.  I also use the unRAID Turbo Write feature so that writes to the array are very fast.

Link to comment

tdallen - The only actual RAID I want is the RAID 1 for my two non-media drives. This way, I have the redundancy I currently have, but unlike now where I have to copy everything to both drives manually, it would be automatic (would only have to copy once instead of twice), and with faster read speed. The reason I don't want to use a parity drive is because I don't want to spend the money for a drive that will give me no extra storage space and, while it would be nice to have that extra layer of redundancy, it's not needed as I keep 1:1 backups. But you're saying running those two drives in RAID 1 under unRAID would not be a good idea? Your other point does have me concerned, about the possibility of virtualization with hardware pass-thru being finicky. Considering linux already tends to be finicky on its own, and I'm looking at doing one or multiple linux distros on top of it, I wonder if I'd just be asking for a massive time sink and headache.

Link to comment

ashman70 - No. I want to run unRAID as the base layer, then two (or more?) VMs on top of it, one being a media server (most likely running linux), and one being my main PC, which would be set up to multi-boot Windows and one or more linux distros. But I was planning on using the NVMe SSD to hold the OS's, and the SATA SSD that currently has my Windows install on it for games and active stuff like downloads, encodes, mkv muxing, etc. And I might get a second SATA SSD to use for the same stuff, to increase the space. Then there will be the two platter drives that hold my important, non-media data, which I want to run in RAID 1 for redundancy and increased speed, and the media drives, which would be assigned to the server VM. Does that make sense?

Link to comment
40 minutes ago, vertigo said:

But you're saying running those two drives in RAID 1 under unRAID would not be a good idea? 

Before you try to figure out how to pass a RAID controller into a VM and run an OS on a RAID-1 configuration within it, I'd suggest trying the default unRAID approach - implement a cache pool of two or more SSDs in a BTRFS RAID-1 configuration and run your VMs from the cache pool.

 

42 minutes ago, vertigo said:

Your other point does have me concerned, about the possibility of virtualization with hardware pass-thru being finicky.

I'd recommend two things - first, spend some time in the VM Engine forum and just browse.  See what kind of problems people have, and how they solve them.  You'll find some highly hardware specific issues related to pass-thru, more about initial setup than ongoing use.  Second, give it a try!  The unRAID trial license will allow you to sort out any issues you might have with your proposed hardware.

 

Lastly, consider this.  You have data on your server.  You have a 1:1 backup.  You have a failure on your server.  You now have one copy of your data - on your backup.  It better be a good backup.  I use unRAID parity protection.  If I have a failure on my server the data is still there!  Yes, I have 1:1 backups too - but there are benefits to a more layered strategy.

Link to comment

I won't be running the OS on RAID 1, just the data drives. Not sure if that matters. The problem with trying it out first, aside from time, is that I don't have the new build to try it on. That's why I'm trying to figure this out, to decide what parts to get (all-in-one or separate main PC and server). The only way that would work, and I'm considering doing this, is getting a bigger case and trying to do it all in one, and if that doesn't work, I'll get a smaller case for my main PC and get another mobo/cpu/ram/psu for the server, but then I'd have to move everything over and it would just be a pain. As I said, I am considering it, so I'm not totally against it, but if I can determine that unRAID won't work for what I'm trying to do it'll save me the time and headache of doing it. Of course, even if I do separate them, I still have the problem of wanting to run the two drives in RAID 1 on a multi-boot system, though I suppose I could probably put those in the server as well since that would be running one constant OS, though I'd really rather keep them in the main PC.

Also, you say to use BTRFS, but last I heard (admittedly several months ago), BTRFS was found to have some pretty major problems requiring a more or less total revamp, which would mean it's not an ideal candidate. I haven't done any research on this specifically, since I just didn't plan on using it, but is that not accurate?

Yes, I realize that if a drive fails, I have one working copy, and if that fails before I'm able to get all the data off of it onto another drive, I lose everything on it. And that would suck. I have thought about using one or even two parity drives, but ultimately, the cost is just not worth it, not to mention needing more drive bays for no added data. And I could do that then lose the server in a fire or to theft, and I'm right back to my current situation. So it's a question of how much protection is enough, and at least for now, I'm ok with my current setup. Maybe later down the road I'll switch to a parity setup.

Link to comment
7 minutes ago, vertigo said:

 

I won't be running the OS on RAID 1, just the data drives.

 

Not sure which OS you are talking about, the base OS or VM.

 

Unraid base OS runs from RAM, and loads from the USB. No disk space used.

 

VM disk image files by default are set up on the cache pool, which defaults to BTRFS RAID1 if you add more than one physical disk to the pool.

Link to comment
2 hours ago, vertigo said:

tdallen - The only actual RAID I want is the RAID 1 for my two non-media drives. This way, I have the redundancy I currently have, but unlike now where I have to copy everything to both drives manually, it would be automatic (would only have to copy once instead of twice), and with faster read speed. The reason I don't want to use a parity drive is because I don't want to spend the money for a drive that will give me no extra storage space and, while it would be nice to have that extra layer of redundancy, it's not needed as I keep 1:1 backups. But you're saying running those two drives in RAID 1 under unRAID would not be a good idea? Your other point does have me concerned, about the possibility of virtualization with hardware pass-thru being finicky. Considering linux already tends to be finicky on its own, and I'm looking at doing one or multiple linux distros on top of it, I wonder if I'd just be asking for a massive time sink and headache.

 

Clear... With unRAID, however, you only need ONE drive in your total system for "parity" (it doesn't matter if you have 3, 5, or 20 data drives); with that one drive dedicated to parity, basically every drive in your system is protected in case of drive failure...

 

So of course you can just buy a separate drive and use that drive to keep a second copy of your important data, but you can also make that drive your parity drive, and then you can lose ANY drive in your array, not only the one with important data...

Link to comment
3 minutes ago, jonathanm said:

Not sure which OS you are talking about, the base OS or VM.

 

Unraid base OS runs from RAM, and loads from the USB. No disk space used.

 

VM disk image files by default are set up on the cache pool, which defaults to BTRFS RAID1 if you add more than one physical disk to the pool.

Correct. I understand that unRAID works this way. When I say OS, I mean the various ones I will actually be using, on top of unRAID. So, if I go this route, I will use unRAID to create various VMs, particularly a server/NAS VM, for which the OS will be linux, and my main PC VM, for which the OS's will be Windows and one or more Linux distros. So the server VM will be single-boot, and the main PC VM will be multi-boot, and all of these OS's (Windows and Linux) will be run off a single, non-raided NVME SSD. That will be that drive's main purpose.

Link to comment
9 minutes ago, Helmonder said:

 

Clear... With unRAID, however, you only need ONE drive in your total system for "parity" (it doesn't matter if you have 3, 5, or 20 data drives); with that one drive dedicated to parity, basically every drive in your system is protected in case of drive failure...

 

So of course you can just buy a separate drive and use that drive to keep a second copy of your important data, but you can also make that drive your parity drive, and then you can lose ANY drive in your array, not only the one with important data...

Yeah, I get that, and it does make it somewhat more tempting, but the fact is, I'm going to have the 1:1 backups anyways, since simply having parity is not a backup. So it's not a matter of making that second drive the parity drive, it's a matter of having to buy an additional drive to use that way. And, since I'd be using a parity drive, I'd have to use a cache, which means having to buy another 1-2 SSDs, which definitely starts increasing the cost. And then I have to worry about a cache drive failing before copying everything over to the platter drives. Doing a 1:1 is more expensive and less convenient than doing parity, but it's much cheaper than doing both a 1:1 and parity, and it removes the possibility of losing data because the cache drive fails.

Link to comment

It's great to have backups, even complete backups, but I'm not sure I understand exactly what you mean by 1:1 backups. Do you mean every file will be backed up somewhere on your unRAID server, or do you mean each disk that you want backed up will have a corresponding disk in your unRAID server? I'm just wondering how much trouble it will be to determine what backups you will lose in the event you have a failure without parity.

 

Parity (and User Shares) can free you from being concerned with which disk is being used, and you can take full advantage of unRAID's ability to expand by adding a new disk or by simply replacing a disk with a larger one. Unlike traditional RAID, unRAID doesn't require that all your disks be the same size; it just requires that parity (if you have one) be at least as large as the largest single data disk.

Link to comment

For every drive I have in my computer, I have an identical drive with identical contents. So if I have a drive failure, I just have to get another drive of the same capacity or larger and copy everything over from the backup drive. The backup drives are intended to be kept in a safe, though currently, due to the limitations of my setup, they're just sitting in an enclosure next to my computer.
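For what it's worth, a mirror-plus-compare like this can be scripted with Python's standard library. This is just a sketch (the function name and paths are made up, and filecmp's default comparison is shallow, checking size/mtime rather than contents):

```python
import filecmp
import shutil

def mirror_and_check(src, dst):
    """Copy the whole tree to the backup drive, then confirm both sides match.

    filecmp.dircmp uses a shallow comparison (file type, size, mtime), which
    is fine as a quick sanity check; a hash pass is needed for real
    byte-level verification.
    """
    shutil.copytree(src, dst, dirs_exist_ok=True)  # Python 3.8+
    cmp = filecmp.dircmp(src, dst)
    # A clean mirror has no files unique to either side and no mismatches.
    return not (cmp.left_only or cmp.right_only or cmp.diff_files)
```

Note this doesn't delete files from the backup that were removed from the source, which is why `right_only` is treated as a mismatch rather than silently ignored.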

 

Bottom line is that if I wanted to do a parity-type backup, unRAID certainly seems like a good way to do it, and in the future I may go that route. But currently, to keep costs down and keep things simple, I plan to continue the way I'm doing things, only with a new PC (or two, if I decide to separate them out) to give myself more room for drives.

And even if cost and drive space weren't an issue, I would still be concerned about losing data due to a cache drive failure. For example, currently I rip a Blu-ray then use mkvtoolnix or makemkv to convert the files to mkv files (main movie and special features). I make the mkv's from the source disc iso directly onto one of the drives I want to store them on, then I copy them from that drive to its backup drive. Then I delete the source iso. If I were using a cache drive (or more than one) in order to maintain good transfer speeds when using a parity drive (or multiple parity drives), then the potential would exist for the cache drive to fail before copying the data over to the platter drives, which would mean I'd have to spend the time re-ripping the disc (though I could keep the iso until after the cache is done transferring everything, but that's just more to keep track of) and, even worse, going through determining which tracks to mux into mkv again.

I like knowing when I copy stuff that it's where it needs to be and fully backed up, and I don't like the idea with a cache drive of doing a file transfer but not actually having it be complete. And in addition to that, as I said before, I use FastCopy to verify the files after transfer to ensure they weren't corrupted in transfer, and all that would accomplish in this case would be to verify that they transferred to the cache drive without corruption, but the possibility would still exist for them to be corrupted when being moved from the cache drive to the platter drive.
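The kind of verify-after-copy check FastCopy performs is essentially hashing both copies and comparing digests; it can be approximated in a few lines of stdlib Python (a sketch of the idea, not FastCopy's actual method; the helper names are hypothetical):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Hash a file in 1 MiB chunks so multi-GB rips don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):  # Python 3.8+ walrus operator
            h.update(block)
    return h.hexdigest()

def copy_verified(src, dst):
    """True only if the destination's bytes hash identically to the source's."""
    return sha256_of(src) == sha256_of(dst)
```

Running a check like this against the final platter-drive destination (rather than the cache) would close the gap described above, since the comparison is against the bytes actually at rest.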

Link to comment
41 minutes ago, vertigo said:

I will use unRAID to create various VMs, particularly a server/NAS VM, for which the OS will be linux, and my main PC VM, for which the OS's will be Windows and one or more Linux distros.

I think you have a pretty good understanding of unRAID so I apologize for being picky over terminology, but just in case this isn't your understanding...

 

You don't create a server/NAS VM under unRAID.  unRAID *is*, fundamentally and based on a long history, NAS software.  It also has the ability to run Dockers.  It also includes a hypervisor (KVM) and various friendly tools to manage your storage array, Dockers, and VMs.

 

32 minutes ago, vertigo said:

And, since I'd be using a parity drive, I'd have to use a cache, which means having to buy another 1-2 SSDs

Nope, they are unrelated.  If you want a cache drive you use a cache drive (or pool).  That decision is unrelated to parity, except for the fact that on older, slower systems the performance advantage of writing to cache instead of parity was pretty compelling.

Edited by tdallen
Link to comment

Seems like you may end up spending more $/TB with your 1:1 backup scheme. If you allowed User Shares to decide where to put things, you could take advantage of the lower $/TB of larger drives, and not worry about buying disks that are the same size as the disks you are backing up, or having unused disk space because you bought a new backup disk that is larger than the disk you are wanting to back up.

 

If you do use cache for faster writes, your cache can have redundancy with cache pools. People also usually put Dockers and VMs on SSD cache drives for performance reasons, so redundancy of cache pools helps there too.

Link to comment

tdallen - So I would only need to run unRAID, and use that to create shares to my media for Kodi, and if desired later, run PLEX in a container, then create just one VM for my main PC? In that case, how would resource allocation work, e.g. would unRAID itself need a cpu core assigned to it, or could I assign all cores to my main PC VM and unRAID just uses what it needs and the VM gets the rest? Also, I was under the impression the purpose of the cache is that without it, when using parity drives, transfer speeds are slow due to the parity overhead.

Link to comment

trurl - As I said, I realize the cost for how I do it is higher than if I were to use parity, but even if I used parity, I would still do actual backups, because parity ≠ backup. So it wouldn't save me from having to spend the money on an additional drive for every drive of storage, it would just make me have to get an additional drive or two or more in addition to the additional drives I already need and have to maintain a true backup. And having to buy disks the same size is not a worry. Every time I need to increase my storage space, I simply buy two drives for every one drive of additional space I want to add. For example, the last time I bought more drives, I bought 4 x 4TB drives, which increased my storage by 8TB with the other 8TB being used as a backup. It's simple and, once I get a bigger case so I can increase my space and set it up properly, will allow me to store the backups in a safe or perhaps even offsite for a true backup. I can have 100 parity drives and one fire will nullify all redundancy and cause me to lose everything.

 

I feel like the majority of this thread has been dealing with parity, which is what I specifically said I don't want to do, and it's tying up the conversation and detracting from the points I'm trying to learn more about. I know you guys need to have an understanding of my intended setup, but I would like to keep it on track and discuss the intricacies of the setup as I intend so that those points that are truly applicable to me don't get lost in the background, and try to keep parity discussion at a minimum. I definitely appreciate the help, but IME threads have a tendency to veer off course and then the initial questions never get answered, and I'd like to avoid that.

Link to comment
12 minutes ago, vertigo said:

I was under the impression the purpose of the cache is that without it, when using parity drives, transfer speeds are slow due to the parity overhead.

Writes directly to the parity array aren't as fast as single disk writes because parity also has to be calculated and written. There are a couple of different ways you can configure this though. See here:

Perhaps a more important use of cache these days is for dockers and VMs, since you can afford to have relatively small SSDs in cache for this purpose, and so gain a lot in performance.

Link to comment
16 minutes ago, vertigo said:

trurl - As I said, I realize the cost for how I do it is higher than if I were to use parity, but even if I used parity, I would still do actual backups, because parity ≠ backup. So it wouldn't save me from having to spend the money on an additional drive for every drive of storage, it would just make me have to get an additional drive or two or more in addition to the additional drives I already need and have to maintain a true backup. And having to buy disks the same size is not a worry. Every time I need to increase my storage space, I simply buy two drives for every one drive of additional space I want to add. For example, the last time I bought more drives, I bought 4 x 4TB drives, which increased my storage by 8TB with the other 8TB being used as a backup. It's simple and, once I get a bigger case so I can increase my space and set it up properly, will allow me to store the backups in a safe or perhaps even offsite for a true backup. I can have 100 parity drives and one fire will nullify all redundancy and cause me to lose everything.

Are you sure you know how parity works? Nobody here thinks parity is a substitute for backups. But you don't need another parity drive for every data drive you add. You just need a single parity drive for the entire array. Or you can have dual parity and have 2 parity drives for the entire array if you want the ability to recover from 2 simultaneous missing disks. In fact, unRAID doesn't even support more than 2 parity drives for the entire array. You can have a single parity drive and it will provide parity protection for 20+ drives.
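To illustrate how one drive can cover an arbitrary number of data drives: single parity is a bytewise XOR across the whole array, so XOR-ing the surviving drives together with parity rebuilds any one lost drive. A toy sketch (not unRAID's actual implementation; the "drives" here are just lists of byte values):

```python
from functools import reduce

def xor_stripe(drives):
    """XOR corresponding bytes across a set of 'drives'."""
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*drives)]

# Three small 'data drives'; one parity 'drive' covers all of them.
data = [
    [0x11, 0x22, 0x33],
    [0x44, 0x55, 0x66],
    [0x77, 0x88, 0x99],
]
parity = xor_stripe(data)

# Lose any ONE drive: XOR of the remaining drives plus parity rebuilds it.
lost = data.pop(1)
rebuilt = xor_stripe(data + [parity])
assert rebuilt == lost
```

Adding a fourth or twentieth data drive doesn't change the parity drive count; it only means parity must be recalculated and must be at least as large as the largest data drive.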

 

I'll have nothing more to say about parity unless you have a question about it. Just trying to make sure your decision to NOT use parity is based on a good understanding.

Link to comment
35 minutes ago, vertigo said:

tdallen - So I would only need to run unRAID, and use that to create shares to my media for Kodi, and if desired later, run PLEX in a container, then create just one VM for my main PC?

Yes, that's correct.  Generally, by the way, Dockers are preferred over VMs for applications if they are available.  The virtualization there is much lighter weight.

 

36 minutes ago, vertigo said:

In that case, how would resource allocation work, e.g. would unRAID itself need a cpu core assigned to it, or could I assign all cores to my main PC VM and unRAID just uses what it needs and the VM gets the rest?

If you are running VMs, it is recommended to set aside at least one core (and its hyperthreaded vCPU) for unRAID.  You can let things be a free-for-all, and that can work for non-performance-sensitive VMs that do background processing.  But for performance-sensitive VMs like gaming, people have had the best experience running dedicated/pinned cores for VMs under unRAID.  That gets tricky if you want to run a lot of VMs, and you'll see people moving to socket 2011 and Xeon E5s in that case.

Link to comment
