What to look for in a server motherboard?



Now that the new Xeon-W chips have just been announced by Intel for single-socket motherboards, I am considering moving my unRAID machine to a new single-Xeon, ECC motherboard. Having used consumer non-ECC motherboards in every PC I have built since I owned a 2010 Mac Pro, I have no idea which brands are at the top of the hierarchy, which are the wannabes, and what features I should look for or avoid on a server board.

 

At the moment (today) the only board that has been announced as "ready for Xeon-W" with the LGA2066 socket and C422 chipset is the Gigabyte MW51-HP0, and I have no idea how its features stack up against what you would expect from such a board.

 

I will be using two GPUs, popping in an LSI SAS9201-8i for the spinners and putting the SSDs on the onboard SATA. I only intend to have one M.2 NVMe drive, but I do want to watercool the board. I am likely to settle for either the 8-core or 10-core CPU. I know the CPU will be fine for watercooling, but what about the voltage regulation circuitry? How toasty does that get on a server board compared to a consumer board? Looking at the Gigabyte board, there are some modest passive coolers sited there to pick up stray airflow from a CPU air cooler, which I won't have.

 

Any thoughts from the server watercoolers?

Link to comment

Since all of my servers are headless, I look first and foremost for IPMI or equivalent. I won't go back to having to use a monitor and keyboard to manage them. The other thing I look for is NOT the latest-generation board but one that is known compatible. I'll let someone else experiment with the cutting-edge boards, which most likely will work with unRAID. I just prefer to use something known good. I spend too much time on maintenance on my servers as it is; I don't want to have to work out compatibility issues on top of everything else.

Edited by BobPhoenix
Link to comment

Support is something I had not thought about.

 

Assuming a fairly "typical" unRAID use case (NAS on a SAS card, passing through hardware to VMs), how much of an issue has support for new CPUs or motherboards been for unRAID in the recent past? I would be looking at a server board and single CPU as I mentioned.

Link to comment

PCIe slots would actually be my first priority. I need a PCIe 3.0 x8 (or x16) double-wide slot for a passthrough video card, a PCIe 2.0+ x8 slot for an HBA, and a PCIe x4+ slot for a RAID card I use. That's my minimum. But you really need to look at what cards you'll need to fulfill your use cases. If you want to pass through two video cards, you need to plan for that. Given this MB should serve you well into the future (4+ years?), you want to think about possible future needs and do your best to satisfy them. I can promise you that even 2 years ago, needing a video card slot would have been the furthest thing from my mind! But today, I'd love to have two. 10Gb LAN is also slowly emerging, and I'd want an x1 slot for that.

 

The second thing would be memory expandability. Today 32G is a generous amount, but looking to the future I'd want expansion options to at least 64G. I also like ECC memory in a server.

 

You should also consider NVMe slots.

 

I also like onboard video. Not a huge deal, but when installing and troubleshooting, having a full-time console right next to the server is nice IMO. And I had trouble with IPMI when I wasn't using the onboard video as primary. My new (used, actually) MB has neither IPMI nor onboard video, and my older MB had both. I gave them up for an x4 instead of an x1 third slot and don't regret the decision. I didn't want a new CPU or memory, so my options were quite limited. My purchase will let me get more life out of my existing components and postpone my next big server upgrade.

 

I've had good luck with ASRock and Supermicro server MBs, and would look at one of those brands if I could find one that fits. Depending on your slot needs, you could be pushed more towards a gaming platform, which I would not rule out.

 

I've found it can be frustrating to find the features you want, but with some digging the best one or two options emerge. You need to have a pretty firm grasp of your must-haves versus nice-to-haves, ranked in order.

 

Good luck! 

Link to comment
15 hours ago, DanielCoffey said:

Support is something I had not thought about.

 

Assuming a fairly "typical" unRAID use case (NAS on a SAS card, passing through hardware to VMs), how much of an issue has support for new CPUs or motherboards been for unRAID in the recent past? I would be looking at a server board and single CPU as I mentioned.

Just look at the Skylake (Kaby Lake?) and Ryzen problem threads for examples. Both are fixed or mostly fixed now, but that is why I don't want a cutting-edge MB and CPU combo. Either would most likely work for me NOW, since I run headless, but that is only because others have gone first and worked out the bugs.

Edited by BobPhoenix
Link to comment

Any insights on the advantages of ECC RAM and Xeon CPUs? I know they're used in servers, but I have no idea what the advantage is. Is it worth the change?

 

Also, is there any material improvement in Xeon-W over older generations? Running a file server (even with a VM) shouldn't be overly demanding on hardware, should it?

Link to comment

How big of an issue is this for a home server (running 24/7 as a media server)?

 

Besides ECC memory, is there any other advantage of a Xeon over an i5/i7? E.g., energy consumption? Stability?

 

I am not unhappy with my current setup, but I am toying with the idea of upgrading my mobo to support M.2. When doing so, I can go the current route with an i5, or upgrade to an i7 or Xeon, or even some different mobo/CPU combination. Energy efficiency and stability with unRAID will be key considerations.

 

And on a separate note, I read somewhere in this forum that 4GB of RAM is now the minimum for unRAID. I have 16GB of RAM and assigned 12GB to the VM running Win10. Is this the right balance between unRAID and the VM? Would I benefit from more RAM? As mentioned, I don't run any Dockers, and the VM is primarily used as a MySQL database (for Kodi) as well as for iTunes.

Link to comment

You should be able to find energy usage specs for the different CPUs you are considering.

 

Some CPUs, depending on generation, have limitations around VMs and passthrough. I know some of the older "K" CPUs don't support VT-d. You want one that supports VT-x, VT-d, and ECC, in my opinion. Kaby Lake CPUs have hardware support for decoding 10-bit HEVC, which might be useful as well.
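If you want to confirm what a particular CPU and board combination actually exposes, the standard Linux interfaces will tell you. A rough sketch you could run from the unRAID console (it assumes Python is available, e.g. via the NerdPack plugin; nothing in it is unRAID-specific):

```python
#!/usr/bin/env python3
# Rough check for the virtualization features mentioned above.
# Reads standard Linux interfaces only; nothing unRAID-specific.
import os

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

# VT-x shows up as the "vmx" CPU flag (AMD's equivalent is "svm").
print("VT-x / AMD-V:", "yes" if flags & {"vmx", "svm"} else "no")

# VT-d only does anything useful if the kernel has populated iommu_groups.
iommu_dir = "/sys/kernel/iommu_groups"
groups = os.listdir(iommu_dir) if os.path.isdir(iommu_dir) else []
print("IOMMU groups:", len(groups) if groups else "none (VT-d/IOMMU disabled or unsupported)")
```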

 

I would caution you, if investing in a new server to last you several years (at least), not to be overly restrictive based on your current use cases. For example, you may not be using Dockers today (not sure why, because there are some very useful ones), but I'd still recommend you consider that you may want to use them in the future. Same for GPU passthrough. Build a server with capabilities that you will grow into over time. I never dreamed I'd need GPU passthrough in my unRAID server, yet that's very important to me now. It's somewhat dumb luck that my server and motherboard support it.

 

As far as your memory allocation, your split seems about right. I have 32G and give 16G to my Windows VM (which I use as my primary workstation) and the other 16G to unRAID and the Dockers.
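If you want to sanity-check a split like that, you can look at how much memory the host still has once the VM is running. A minimal sketch reading /proc/meminfo (standard Linux, nothing unRAID-specific, and again assuming Python is installed):

```python
#!/usr/bin/env python3
# Quick look at how much memory the host still has once the VM's RAM is allocated.
def meminfo_kb(key):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])  # values are reported in kB
    raise KeyError(key)

total_gib = meminfo_kb("MemTotal") / 1024 / 1024
avail_gib = meminfo_kb("MemAvailable") / 1024 / 1024
print(f"Installed: {total_gib:.1f} GiB, still available to unRAID/Dockers: {avail_gib:.1f} GiB")
```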

 

Good luck! 

Link to comment

On the subject of VMs and GPU passthrough... I have seen the suggestions that you enable the iGPU (if your CPU has one) for unRAID's use and leave the main PCIe GPU free for your VMs, but what do you do in the case where your CPU (say a Xeon) has no iGPU?

 

I presume you would have a quick look at the PassMark performance of a typical iGPU of the current generation and pick up a lightweight PCIe GPU for unRAID, yes? Would general "browsing and YouTube" VMs need their own light GPU too, or can they make use of an emulated version of the one unRAID has access to?

 

I mean this situation... say the board has GPU1 (lightweight) and GPU2 (gaming). Say you have unRAID for NAS and a Plex Docker (serving a viewer on the network), VM1 for browsing/YouTube and VM2 for Win10 gaming. Can VM1 make use of GPU1, or does it need its own discrete card?

Link to comment
7 minutes ago, DanielCoffey said:

On the subject of VMs and GPU passthrough... I have seen the suggestions that you enable the iGPU (if your CPU has one) for unRAID's use and leave the main PCIe GPU free for your VMs, but what do you do in the case where your CPU (say a Xeon) has no iGPU?

 

I presume you would have a quick look at the PassMark performance of a typical iGPU of the current generation and pick up a lightweight PCIe GPU for unRAID, yes? Would general "browsing and YouTube" VMs need their own light GPU too, or can they make use of an emulated version of the one unRAID has access to?

 

I mean this situation... say the board has GPU1 (lightweight) and GPU2 (gaming). Say you have unRAID for NAS and a Plex Docker (serving a viewer on the network), VM1 for browsing/YouTube and VM2 for Win10 gaming. Can VM1 make use of GPU1, or does it need its own discrete card?

 

Perhaps VM1 uses NoMachine. But if you want to, you should be able to pass through the primary GPU. I think a UEFI BIOS on the card is required, though, so the low-end cards may not work. Passing through the primary may make troubleshooting more difficult, as you'd have no functional console. But a quick press of the power button will cause the server to run the powerdown script, which attempts a forced but clean shutdown and saves the syslog to the flash drive, which is handy for those of us without a usable console.
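If you do go down the passthrough road (primary card or not), it is worth checking how the board splits devices into IOMMU groups first, since the GPU and its HDMI audio function need to sit in a clean group. A small sketch that just reads the standard sysfs layout (it assumes Python and lspci are available; this is not an unRAID-specific tool):

```python
#!/usr/bin/env python3
# List IOMMU groups and the PCI devices in each one, to judge whether a GPU
# (plus its HDMI audio function) can be passed through without dragging
# unrelated devices along with it.
import glob
import os
import subprocess

for group in sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                    key=lambda p: int(os.path.basename(p))):
    print(f"IOMMU group {os.path.basename(group)}:")
    for dev in sorted(os.listdir(os.path.join(group, "devices"))):
        # "lspci -s <address>" prints a one-line description of that device.
        desc = subprocess.run(["lspci", "-s", dev],
                              capture_output=True, text=True).stdout.strip()
        print("   ", desc or dev)
```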

Link to comment

There are a few things I am running within the VM instead of as Dockers (namely sabnzbd+, SB, CP, the Kodi MySQL database, and MiniM). I am doing this as I need the VM anyhow for iTunes, and I don't see an obvious benefit to using a Docker if I have the VM anyway. Are there any benefits to doing this differently, or any obvious Docker that I am missing?

 

I don't mind over-building, but I still don't see any obvious reason to go for a server board besides ECC (and I am not clear whether I even need that). Plus, you bring up a good point that I would even need a PCIe GPU, which I currently don't have, as I rely on the iGPU. I don't mind over-building here either, but I am searching for the "reason why".

 

From what I am seeing now, I may be better off with one of the more recent desktop boards with M.2 and investing in more RAM to give some more to unRAID. Would this conclusion be off?

Link to comment

On my MBs with Xeons I have an IPMI video chip separate from the CPU that gives you a console either across the network via IPMI or with a monitor plugged into the video port on the MB.  IPMI is a must have for a headless server for me.  I will be passing through my iGPU on my i7 Living room server when I get my Colossus card problems solved.

Link to comment
1 hour ago, BobPhoenix said:

On my MBs with Xeons I have an IPMI video chip separate from the CPU that gives you a console either across the network via IPMI or with a monitor plugged into the video port on the MB.  IPMI is a must have for a headless server for me.  I will be passing through my iGPU on my i7 Living room server when I get my Colossus card problems solved.

 

My old motherboard was like yours: a server motherboard with an onboard video chip and IPMI. (It was interesting that IPMI stopped working when I added the second video card, until I changed the built-in video card to be always primary; the default is that the external video card is primary if it is present.) So IPMI may not function if the primary video is passed through. IPMI is not a video card.

 

My new MB requires a CPU with an iGPU to use the motherboard VGA port, which my Xeon does not have. So, no onboard video. I can see the console on the display attached to the add-on video card until I start the VM and pass it through, at which point I have no way to view the console. I believe I could type some commands from a keyboard plugged into the server, but I could not see the output. But as I said, a quick press of the power button is enough to get the server shut down cleanly, which is my most frequent need for the console once the server is booted and working fine.

 

I have not experimented with bringing down the VM and seeing if the monitor starts displaying the console again.

 

My setup is not optimal, and I may regret it at some point in the future if I have a problem with my VM that brings down SSH access. And I'm sure I'll miss IPMI the next time I'm away from home and need to power cycle the server. Although that was a once-in-a-blue-moon need, when it is needed, BOY is it nice to have. But so far I am satisfied with my much faster parity performance and the ability to support some additional drives.
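For anyone who does keep IPMI, that remote power-cycle case is exactly what ipmitool covers from any machine that can reach the BMC. A hedged sketch (the host, user, and password are placeholders for your own BMC settings, and it assumes LAN access to the BMC is enabled):

```python
#!/usr/bin/env python3
# Remote power control of a server via its BMC, by shelling out to ipmitool.
# The host, user, and password below are placeholders for your own BMC settings.
import subprocess

BMC = ["ipmitool", "-I", "lanplus",
       "-H", "bmc.example.lan", "-U", "admin", "-P", "changeme"]

def ipmi(*args):
    return subprocess.run(BMC + list(args),
                          capture_output=True, text=True).stdout.strip()

print(ipmi("chassis", "power", "status"))    # e.g. "Chassis Power is on"
# ipmi("chassis", "power", "cycle")          # uncomment to actually power-cycle
# ipmi("sol", "activate")                    # serial-over-LAN console, if configured
```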

Link to comment
