Rendering farm/Plex server + NAS



I work with 3D rendering in 3ds Max, and I want to build a PC that serves as both a render farm and a NAS.

 

I'm going to build the NAS with Unraid and install Windows in a VM. In the Windows VM I will install 3ds Max (I need a copy installed so the machine can work as a render node), and I also wanted to install Plex Media Server in the same VM. But after watching some videos on YouTube, I have a few questions.

 

1 - How much hardware does Unraid need for the NAS part? Since a render farm pushes the CPU to 100%, I want the Windows VM to have as many threads as possible.

2 - If I install Plex Media Server on the Windows VM but the files live on the NAS, will the processing power used for transcoding come from the VM or from the NAS side? (Is it possible to do this, or is there a better way?)

3 - In the video "2 Gamers, 1 CPU", Linus splits the CPU threads between the two VMs, leaving (apparently) no threads for the NAS part. What I haven't been able to find out is how much processing power Unraid itself needs. Can I give all the CPU threads to the VM? What impact would that have?

 

Sorry for the many questions, but as I'm starting to venture into this world of servers, I'm kind of lost.

Link to comment

I have a 4-server cluster set up for batch rendering, so I can tell you about my experience. Without knowing your hardware, your mileage may vary.

 

 

1 hour ago, kayo7 said:

1 - How much hardware does Unraid need for the NAS part? Since a render farm pushes the CPU to 100%, I want the Windows VM to have as many threads as possible.

 

If you're not writing to the array or using any unRaid functions, you can assign all cores. Some people will say not to do that. For me, it takes 1-5% of a given CPU to manage a VM running on all the other cores at 100%. So yes, unRaid ends up "fighting" for CPU time when the VM is pushing 100% on all cores, but it seems to sort itself out without any noticeable stability issues. If I know there will be no writes to the array on a given server and I'm running a batch job, I give the VM all the cores. If you're concerned about needing to write to the array while the VM is running, then just give unRaid 1 core, put your emulator pin on that core as well, isolate the rest of the cores, hand them to the VM, and you should have zero issues.
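
To make that concrete, here is a rough sketch of what the isolation and pinning look like. The core numbers are invented for a small 8-thread CPU and everything below is an example to adapt to your own topology: the isolation flag goes in unRaid's syslinux.cfg, and the per-vCPU pins plus the emulator pin live in the VM's XML (the XML view of the VM editor).

# /boot/syslinux/syslinux.cfg - keep core 0 for unRaid, isolate cores 1-7 for the VM
label unRAID OS
  kernel /bzimage
  append isolcpus=1-7 initrd=/bzroot

<!-- VM XML: pin 7 vCPUs onto the isolated cores and park the
     emulator thread on core 0, alongside unRaid -->
<vcpu placement='static'>7</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='2'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='4'/>
  <vcpupin vcpu='4' cpuset='5'/>
  <vcpupin vcpu='5' cpuset='6'/>
  <vcpupin vcpu='6' cpuset='7'/>
  <emulatorpin cpuset='0'/>
</cputune>

On a hyperthreaded CPU you would normally keep both threads of a physical core assigned to the same VM, but the idea is the same.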

 

1 hour ago, kayo7 said:

2 - If I install Plex Media Server on the Windows VM but the files live on the NAS, will the processing power used for transcoding come from the VM or from the NAS side? (Is it possible to do this, or is there a better way?)

 

Both. You'll use a little CPU on the NAS side to read the file and feed it to the VM, the VM does the transcoding (if necessary), then it goes back through unRaid (which uses a little more to manage the VM) and gets pushed out to the network. You could consider running Plex in a Docker container instead and assigning it a few cores; how many you need depends on your hardware and transcoding requirements. If you're just streaming, you could be fine with 1-2 cores assigned to a Plex docker.

 

For example, my main server has cores 0-6 left for unRaid and a Plex docker to share. It can handle multiple file transfers at the same time while transcoding for 2-4 Plex viewers.

Cores 7-23 are isolated from unRaid and left for VM use. They sit at 100% utilization, but unRaid and Plex never notice or have any problems.
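
If you go the docker route, limiting Plex to a couple of cores is just a CPU-set restriction on the container. Here's a minimal sketch using the official plexinc/pms-docker image; the core numbers, timezone, and paths are only examples, and on unRaid you would normally put the --cpuset-cpus flag in the container template's extra parameters rather than running docker by hand:

# run Plex as a container restricted to cores 0-2
docker run -d --name=plex --net=host \
  --cpuset-cpus="0-2" \
  -e TZ="America/Sao_Paulo" \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/data \
  plexinc/pms-docker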

 

1 hour ago, kayo7 said:

3 - In the video "2 Gamers, 1 CPU", Linus splits the CPU threads between the two VMs, leaving (apparently) no threads for the NAS part. What I haven't been able to find out is how much processing power Unraid itself needs. Can I give all the CPU threads to the VM? What impact would that have?

 

 

See my answer to question 1. It's all very hardware dependent. If you've got enough power, it's no problem; at least it hasn't been for me.

 

 

 

Link to comment
3 hours ago, 1812 said:

I have a 4-server cluster set up for batch rendering, so I can tell you about my experience. […]

Thank you so much for the information.

This is the hardware I'm thinking of buying; what do you think?

 

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Xeon E5-2650 V4 2.2GHz 12-Core Processor  ($1099.99 @ SuperBiiz) 
CPU Cooler: Noctua NH-U14S 55.0 CFM CPU Cooler  ($64.89 @ OutletPC) 
Motherboard: Supermicro MBD-X10SRL-F-O ATX LGA2011-3 Motherboard  ($273.98 @ Newegg) 
Memory: Crucial 64GB (4 x 16GB) Registered DDR4-2133 Memory  ($739.98 @ Directron) 
Storage: Samsung 850 EVO-Series 500GB 2.5" Solid State Drive  ($179.99 @ Amazon) 
Storage: Samsung 850 EVO-Series 500GB 2.5" Solid State Drive  ($179.99 @ Amazon) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($118.99 @ Best Buy) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($118.99 @ Best Buy) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($118.99 @ Best Buy) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($118.99 @ Best Buy) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($118.99 @ Best Buy) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($118.99 @ Best Buy) 
Video Card: EVGA GeForce GTX 1050 2GB ACX 2.0 Video Card  ($107.49 @ SuperBiiz) 
Case: Fractal Design Define R5 Blackout Edition ATX Mid Tower Case  ($99.99 @ SuperBiiz) 
Power Supply: EVGA SuperNOVA P2 850W 80+ Platinum Certified Fully-Modular ATX Power Supply  ($129.99 @ Newegg) 
Other: 10Gtek for Intel 82599ES Chip Ethernet Converged Network Adapter X520-DA2  ($263.99 @ Amazon) 
Other: Linksys Max-Stream AC2200 MU-MIMO Tri-band Wireless Router, Works with Amazon Alexa (EA8300)  ($199.97 @ Amazon) 
Total: $4054.19


I liked the Plex Docker idea. Since I'm only going to use Plex on my phone and my TV, and the NAS will just be for backups, I was thinking of leaving cores 0-4 for them and giving the rest to the VM/render farm. What do you think?

Link to comment

Full disclosure: I dig using older enterprise/business servers vs. buying new. 

 

My first reaction is 4 grand...! 

 

Do you know if your renders are CPU or GPU dependent? If they are CPU dependent, you might consider another plan that could be cheaper and more powerful. Here is why:

 

You're getting a Passmark score of 15,990. A dual-X5670 box runs 12,613, so each of those boxes is only about 21% slower than the new chip. But for 1,000 dollars you could have five of the dual-X5670 machines with a combined 63,000+ Passmark, or four with a spare sitting there in case one goes down. The X5670s do use more power, but with a cluster going it could actually cost less in electricity if four of them chomp through the render faster than one machine, giving a better return per watt. You'd also be saving around 1,000 bucks on RAM/cooler/case to put toward that electricity, and it would take more than a year to use that up, since they don't all need to be on 100% of the time, I assume. There are other options one could consider, but the route I went was building a 4-server cluster with 96 cores total across it.
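
Putting rough numbers on that comparison (all figures are the ones quoted above):

single E5-2650 v4 box:  15,990 Passmark, ~$1,100 for the CPU alone
one dual-X5670 box:     12,613 Passmark, ~$200 per complete used server
five dual-X5670 boxes:  5 x 12,613 = 63,065 Passmark for ~$1,000
per-box gap:            12,613 / 15,990 ≈ 0.79, i.e. each box is about 21% slower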

 

And here is why: 

 

On an 8,400-Passmark CPU, a test render takes me 16 minutes. On a 12,613 machine (a single dual-X5670 box) the same render takes 12 minutes. On the cluster it takes under 2 minutes. (Note: nothing I render actually takes 2 minutes, it is usually much longer, but it illustrates the point.) The cluster, with its copious number of cores, is dramatically faster: instead of a 24-hour render, it takes 3-4 hours, and then 3 of the machines get powered off. If there is a problem with one server, I only lose 1/4 of the production capacity, not 100%, so there's no real downtime. Plus, replacement parts are way less expensive; if I lose a motherboard, it costs me 100 bucks to replace the entire server, and the bad one becomes a parts chassis.

 

Anyways, don't actually follow what I said. It's crazy. I know it's crazy. But I love it. It's like crack. 

 


 

 

Now, with that said, on to your particular build: lots of RAM, niiice. Verify your network card works in unRaid (http://lime-technology.com/wiki/index.php/Hardware_Compatibility); I don't know whether it does or doesn't. You could probably pick up something used for waaaay cheaper: Mellanox ConnectX-2 cards are 40 bucks and are plug and play in unRaid. I have two, and they work without issue.
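
If you want to sanity-check a card before building around it, a quick look from the unRaid console (or any Linux live environment) shows whether the kernel picked it up. These are generic commands, nothing unRaid-specific, and the interface name below is just a placeholder:

lspci | grep -i ethernet       # is the card visible on the PCIe bus?
ip link                        # did a new interface (e.g. eth1) appear?
ethtool eth1 | grep -i speed   # hypothetical name; confirms the negotiated 10GbE link speed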


If the server is going to live in the same space as a person and you're going to be pegging the cores often, you might consider water cooling to keep it quieter AND cooler. The HDDs might need another fan, especially when pre-clearing (I only saw the one on the rear).

 

I like Reds. They may not be the fastest, but I've had some for several years and they are champs (knock on wood). If you're not married to them, every now and then some 8TB drives come up from Seagate that might be competitive in the dollars-per-TB arena... I don't use them, but some folks on here really like them.

 

Core assignment seems fine. You'll know more once it's all set up and you can see whether you need that many, need to add more, or can remove some.

Link to comment
2 hours ago, 1812 said:

Anyways, don't actually follow what I said. It's crazy. I know it's crazy. But I love it. It's like crack. 

 


Ahahaha, thanks! I'm learning a lot.

 

Yeah, it actually got a lot more expensive than I expected. But since I'm not very knowledgeable about it, I basically chose what seemed to be "right."

What parts would you recommend for a render farm server/NAS for around $3,000?

Link to comment
Just now, kayo7 said:

What parts would you recommend for a render farm server/NAS for around $3,000?

 

For buying new parts, I'm not the right person to ask. Others could probably set you up with a sweet build, though. But it'll be important for them to know your requirements: acceptable noise level, how much storage/RAM, obviously the one VM and the Plex docker, and how much CPU power you're hoping for. There may be some older but still brand-new dual-processor boards that would let you get a much higher core count within that budget and strike a balance between something crazy like mine and something more standard.

Link to comment
On 2017-4-28 at 11:13 PM, 1812 said:

 

For buying new parts, I'm not the right person to ask. […]

 

With the help of another user here on the forum I put together this configuration. I wanted your opinion too, as someone with more experience.

 

 

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Xeon E5-2660 V2 2.2GHz 10-Core Processor  ($200.00) 
CPU: Intel Xeon E5-2660 V2 2.2GHz 10-Core Processor  ($200.00) 
CPU Cooler: Noctua NH-U12DXi4 55.0 CFM CPU Cooler  ($64.89 @ OutletPC) 
CPU Cooler: Noctua NH-U12DXi4 55.0 CFM CPU Cooler  ($64.89 @ OutletPC) 
Storage: Samsung 850 EVO-Series 500GB 2.5" Solid State Drive  ($179.99 @ Amazon) 
Storage: Samsung 850 EVO-Series 500GB 2.5" Solid State Drive  ($179.99 @ Amazon) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($136.84 @ OutletPC) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($136.84 @ OutletPC) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($136.84 @ OutletPC) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($136.84 @ OutletPC) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($136.84 @ OutletPC) 
Storage: Western Digital Red 4TB 3.5" 5400RPM Internal Hard Drive  ($136.84 @ OutletPC) 
Video Card: EVGA GeForce GTX 1050 Ti 4GB SSC GAMING ACX 3.0 Video Card  ($119.99 @ Newegg) 
Case: Corsair 750D ATX Full Tower Case  ($119.99 @ Newegg) 
Power Supply: EVGA SuperNOVA 1000 P2 1000W 80+ Platinum Certified Fully-Modular ATX Power Supply  ($183.99 @ SuperBiiz) 
Case Fan: Noctua NF-A14 PWM 82.5 CFM  140mm Fan  ($20.95 @ Amazon) 
Case Fan: Noctua NF-A14 PWM 82.5 CFM  140mm Fan  ($20.95 @ Amazon) 
Case Fan: Noctua NF-A14 PWM 82.5 CFM  140mm Fan  ($20.95 @ Amazon) 
Other: 10Gtek for Intel 82599ES Chip Ethernet Converged Network Adapter X520-DA2  ($99.00) 
Other: SAS9211-8I 8PORT Int 6GB Sata+sas Pcie 2.0  ($98.80 @ Amazon) 
Other: Supermicro X9DRI-LN4F+-O Dual LGA2011 /Intel C602/ DDR3/ SATA3/ V&4GbE/ EATX Server Motherboard  ($459.05 @ Amazon) 
Other: Asunflower 3.3Ft Internal Mini-SAS 36P (SFF-8087)Male to 4x SATA Female Breakout Cable  ($10.99 @ Amazon) 
Other: Asunflower 3.3Ft Internal Mini-SAS 36P (SFF-8087)Male to 4x SATA Female Breakout Cable  ($10.99 @ Amazon) 
Other: 8x 8GB 64GB RDIMM ECC REG DDR3 1600MHz RAM HP Workstation Z420 Z620 Z820 A2Z51AA ($207.00)
Total: $3163.45

 

 

Link to comment

Did you determine whether 3ds Max is more CPU-core dependent for rendering vs. GPU? If so, a 40-core rig looks solid.

 

On my dual-E5520 server, I use 4 cores for unRaid/Plex and the rest for VMs. You could do the same: 4 for unRaid/Plex, then adjust later if you need more or fewer.

 

Make sure the 10Gtek card works with unRaid, and the SAS/SATA card too. This build is about 10% faster and a grand cheaper than the first one. You can buy a lot of other fun toys with the money saved.

 

You could consider a used pair of processors and shave another 70 bucks off the price and treat someone you love to a really nice dinner.

 

 

 

Link to comment
