[Build Check] First Unraid Beast



 

Hi Unraid community,

 
I am planning a new build and was hoping for some feedback on my part choices, plus any suggestions you may have for better alternatives. I realize that I have a lot of information below, and that answering/commenting on everything would be difficult, but any suggestions would be appreciated!
 

Requirements

I have not used Unraid before (currently using FreeNAS with 2 x RAIDZ2 vdevs... 60TB raw, which is 90% full), but I like the expandability that comes with Unraid. In choosing it I realize that I am sacrificing the speed that comes with my current striped RAIDZ2 setup. My plan is to put the data that doesn't require speed on Unraid, put the more speed-sensitive data on FreeNAS, and connect the two via 10GbE. Having said that, I would like:
  • To start "small" with easy drive expansion (i.e., buy a few drives now, and buy more as I fill up the space)
  • TONS of horsepower for Plex and various VMs/Dockers
    • My current FreeNAS server has a CPU that scores around 9,500 on PassMark; it was able to handle 8-9 streams at once (not all 1080p), but I often find the CPU near capacity
    • New build needs to have much more horsepower as I will be shifting my Plex needs onto the new server's beefier CPU
    • Since I am planning for this server to last me for years to come, ideally it should be able to transcode a few 4K HEVC streams
  • Fast NVMe storage for cache and VMs (likely will put more than one NVMe drive in the future)
  • 10GbE
  • IPMI for easy management
 
Build
  • CPU: 2 x E5-2683 v3 (OEM from ebay, used)
  • CPU Cooler: 2 x NH-U9DXi4
  • Motherboard: Supermicro X10DRI
  • SAS Card: 2 or 3 of either the LSI 9240-8i or the Supermicro AOC-SAS2LP-MV8 (not sure what the difference is - question below)
  • Case: Norco 4224
  • RAM: 64GB (4 x 16GB) Registered DDR4-2133 Memory
  • 10GbE card: Mellanox ConnectX-2, or any other SFP+ card - question below
  • NVMe: Samsung 960 Pro NVMe 500 GB
  • NVMe PCIE adapter: StarTech PEX4M2E1 M.2 Adapter
  • Drives: still unsure about this one - question below
  • PSU: Corsair - Professional 1200W 80+ Platinum
  • Norco 120mm Fan Bracket (to make it quieter)
  • Additional fans: 3 x Noctua NF-F12
  • The required SAS/SATA cables to connect everything
 
Questions
  1. Is there anything that I'm missing in my build?
  2. Anything that you would suggest to make it better/more efficient?
  3. I've seen people mention "maxing out the backplane" in various places. Would my current build do so, or would using the SAS cards remove the bottleneck?
  4. I haven't seen this explicitly answered before, but if I start with single parity, can I move my array to dual parity down the road? Or do I have to start with dual parity from day one?
  5. Am I losing speed by using a PCIe M.2 NVMe adapter rather than having it on motherboard?
  6. Which 10TB drives are recommended? Seagate IronWolf Pro, HGST, WD Red, Red Pro, Gold, etc.?
  7. Any other recommendations for 10Gbe card?
  8. Does it really matter which SAS card you get? Would I need to get 2 SAS cards and use the 10 built-in SATA ports on the motherboard? What is the best practice?

I'd suggest steering clear of Marvell-based controller cards, like the Supermicro SAS card, and even confirming that none of the motherboard ports are based on Marvell chips. You might look at the LSI SAS9201-16i, a non-RAID card that works very well with unRAID.

 

Also, take a good look at the requirements to transcode HEVC. A ton of horsepower may not be enough. Kaby Lake CPUs have hardware features that might be more advantageous despite much lower PassMark scores. I have a cheap, slow Braswell Celeron NUC for playback that can handle 8-bit HEVC that would stress an earlier-generation Xeon with much more muscle. (But it doesn't handle 10-bit like the Kaby Lakes do.)

 

You don't mention VMs, but 64G of memory implies that is your plan. If not, that might be overkill.

 

10T drives are relatively new and expensive per TB. 8T deals abound. Recently both the Seagate Archive and WD RED 8T drives have been under $180. I'd suggest looking there as a start to moderate cost. The HGST NAS drives are a bit more expensive, but likely will prove to be the longest lived out of the options. They are also faster at 7200 RPM.

 

The Norco is a beast, but I've heard complaints about the quality of the cases and cages. There is a comparable Supermicro case people have bought on eBay and are happy with. I am not a rack-mount guy, and use a tall tower with four 5-in-3 cages (plus one external) to provide 25 drive slots in a much smaller footprint. Personal preference.

 

Good luck with your build! 

 

 

 

19 minutes ago, bjp999 said:


Thanks for the advice regarding Marvell chips - I wasn't aware that they were undesirable. Does it matter that the card is PCIe 2.0, or will the bottleneck still be the drives?

 

Regarding Plex and 10-bit HEVC decoding, I am aware that Kaby Lake CPUs support it, but I'm not sure it's supported by Plex yet (I could be very wrong). Also, I wonder how many streams hardware acceleration would be able to support simultaneously.

 

I understand that the 10TB drives are higher $/TB, but they do provide higher storage density. Given the lack of striping with unRAID, my guess is that I should shoot for 7200 RPM over 5400 RPM drives.

 

I've been searching for a Supermicro 24-bay case on ebay for some time but haven't managed to find something at a reasonable price. Shipping always makes it prohibitively expensive for me. Out of curiosity, which tower do you use to get you 25 drive slots with the 4 x 5in3 cages?

 

3 hours ago, unraidusername said:

Thanks for the advice regarding Marvell chips - I wasn't aware that they were undesirable. Does it matter that the card is PCIe 2.0, or will the bottleneck still be the drives?

 

 

This has been said so many times. I must have said it a dozen times in the last month :)

 

The issue is not PCIe 2.0 vs. 3.0 - that is not a problem at all. The issue is the Marvell chips and their inability to support hardware passthrough to VMs without corrupting data. Having certain BIOS settings enabled, even without using VMs, can cause problems, and the problems seem to vary by motherboard. I suggest steering clear until this is resolved. My motherboard has an onboard Marvell controller for 2 ports, and I have disabled it in my BIOS. I also have both Supermicro SAS and SAS2 controllers, which are sitting on a shelf awaiting a fix. I have an LSI SAS9201-16i and it works great.
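For what it's worth, a quick back-of-envelope check of the PCIe 2.0 question (a sketch using nominal, assumed figures; real-world throughput is lower after protocol overhead):

```python
# Rough bandwidth sanity check for an x8 PCIe 2.0 HBA vs. spinning drives.
# All figures are nominal assumptions, not measurements.

PCIE2_LANE_MBPS = 500  # PCIe 2.0: ~500 MB/s usable per lane

def hba_headroom(pcie_lanes, drives, drive_mbps=200):
    """Compare host-link bandwidth against aggregate drive throughput.

    drive_mbps=200 assumes a fast modern HDD's outer-track sequential rate.
    Returns (host_bandwidth, drive_demand) in MB/s.
    """
    host = pcie_lanes * PCIE2_LANE_MBPS
    demand = drives * drive_mbps
    return host, demand

# An x8 PCIe 2.0 HBA feeding 16 spinners:
host, demand = hba_headroom(pcie_lanes=8, drives=16)
print(host, demand)  # 4000 MB/s of host link vs. 3200 MB/s of drive demand
```

Even on PCIe 2.0, an x8 card has more host bandwidth than a shelf of spinners can saturate during a parity check, which is why the chipset behavior matters more here than the bus generation.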

 

3 hours ago, unraidusername said:

Regarding Plex and 10-bit HEVC decoding, I am aware that Kaby Lake CPUs support it, but I'm not sure it's supported by Plex yet (I could be very wrong). Also, I wonder how many streams hardware acceleration would be able to support simultaneously.

 

I do think it is supported invisibly. But what I think or you think doesn't much matter. :) What matters is reality - you should do your homework. But even if it is not supported now, it seems this will be supported very soon. I personally would not buy one of these older Xeons, because my primary need is Plex. I don't want my server to run games or anything like that which requires lots of processing muscle. ~10,000 PassMark is plenty of power for me without HEVC transcoding, and I feel that a Kaby Lake at that power level will be my next upgrade.

 

3 hours ago, unraidusername said:

I understand that the 10TB drives are higher $/TB, but they do provide higher storage density. Given the lack of striping with unRAID, my guess is that I should shoot for 7200 RPM over 5400 RPM drives.

 

EVERY new unRAID user wants to optimize speed. It is a hard message to explain that you have the tools to make things that need to happen fast, fast. But that much of what unRAID does does not need to be super fast.

 

The speed of your individual disks matters little. Parity is going to slow down the writes; 40-50 MB/sec is about the most you'd expect (see below about turbo write, which is faster). But you are NOT very frequently waiting for those writes to occur - for most, they happen in the background or overnight.

 

Reads are not striped, so they are limited by your individual disk speeds. But much of what our servers do is stream relatively light data to viewers. And even if we want faster speed for a specific need, most of us are limited to gigabit LAN, so whether your disk can read at 125 MB/sec or 225 MB/sec, you are still going to be limited. Believe it or not, in practice the speed of unRAID is very seldom mentioned as a problem by users.

 

The things you really want to be fast are your Dockers and VMs: their OSes/containers, and the files that are being downloaded, uncompressed, repaired, etc. You want that to be fast, and all of it can be configured to happen on the SSD. Copies to the server can be configured to go through the cache and be made as fast as the network allows. And if you want something like a database, you can store it on SSDs in a RAID-1 configuration, so you have very fast media for database accesses, with redundancy.

 

On the occasions when you want writes to the unRAID array to be fast(er), unRAID has a write mode called "Turbo Write" or "Reconstruct Write" that will drive higher performance. This is particularly valuable on the initial array load.
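A minimal model of the two write modes, purely illustrative; the factor-of-4 read/modify/write penalty is an assumption chosen to match the 40-50 MB/sec figure quoted above:

```python
# Sketch of unRAID's two array write modes (illustrative assumptions only).

def rmw_write_speed(disk_mbps):
    """Read/modify/write: each write needs a read and a write on both the
    target disk and the parity disk, costing roughly a factor of 4."""
    return disk_mbps / 4

def turbo_write_speed(slowest_disk_mbps):
    """Reconstruct ("turbo") write: all other data disks are read in
    parallel while target + parity are written in one pass, so throughput
    approaches the slowest disk's sequential rate."""
    return slowest_disk_mbps

print(rmw_write_speed(180))    # 45.0 -> the ~40-50 MB/sec range above
print(turbo_write_speed(150))  # 150 -> limited by the slowest spinner
```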

 

In summary - I don't much care whether my disks are 7200 RPM, 5900 RPM, or 5400 RPM. I just want them to be big, reliable, and inexpensive. 8T is the "sweet spot" for now IMO, and 10T is not enough of a step up to be worthwhile. 12T, 16T - I might feel differently. But every user makes these decisions. I do have a 500G SSD and use it effectively to drive performance in my time-sensitive use cases.

 

3 hours ago, unraidusername said:

I've been searching for a Supermicro 24-bay case on ebay for some time but haven't managed to find something at a reasonable price. Shipping always makes it prohibitively expensive for me. Out of curiosity, which tower do you use to get you 25 drive slots with the 4 x 5in3 cages?

 

 

I have a Sharkoon Rebel 12 case. It holds four 5-in-3s (20 drives). I have one extra 5-in-3 that sits on top of the case for 5 extra drives. Works for me. 

 

The Sharkoon case is no longer made (I don't think it is, anyway), but it might be obtained on eBay. Antec makes the Twelve Hundred, which I think could be made to support my configuration.

5 hours ago, bjp999 said:

 


Firstly I want to thank you for your very detailed and informative response.

 

I agree that Kaby Lake is a good idea for 10-bit HEVC, but the problem is I need the horsepower for other VMs too. The E5-2683 v3 has a PassMark score of around 18,000, and I'm getting 2 of them. Hopefully in the future Plex will allow for easy GPU transcoding, and then maybe I'll buy a dedicated GPU to transcode H.265.
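As a rough capacity estimate, Plex's often-cited rule of thumb is about 2000 PassMark per simultaneous 1080p software transcode; the 4K HEVC cost below is purely an assumption:

```python
# Rough Plex transcode-capacity estimate from PassMark scores.

def max_streams(passmark, per_stream=2000):
    """~2000 PassMark per 1080p software transcode is Plex's rough guideline."""
    return passmark // per_stream

dual_e5_2683_v3 = 2 * 18000  # approximate combined score, per the post

print(max_streams(dual_e5_2683_v3))        # 18 concurrent 1080p transcodes
print(max_streams(dual_e5_2683_v3, 8000))  # 4, if 4K HEVC costs ~4x (assumed)
```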

 

Regarding hard drives, perhaps it is best to get 8TB drives, especially since I will have 24 drive bays. Do you think it's worth paying extra for the "Pro" versions, which come with a longer warranty, better reliability, and slightly higher speeds (even though you mentioned speed doesn't matter much to you)?

 

I will look into the case recommendations. I was looking at towers as well, but each drive cage really adds to the cost, eventually landing at the Norco price once all of the drive cages are added.


I am not sure about how to get a GPU to do transcoding. Maybe you can share that trick with me. My current server has all slots consumed with controller cards, but I do have an RES2SV240 put away for a rainy day, that I could put into service and maybe free up an x4 slot.

 

My experience with drives -

 

HGST (previously Hitachi) are great. Last forever. Tanks. Highest recommendation. I'd use them all day and all night if they were a little cheaper. But they are worth the premium in reliability IMO.

 

Seagate - long ago, the best of the best. I had an 80MB ST4096. What a tank it was! Today - not a good reputation. There was a recent class-action lawsuit due to high failure rates; one model was at something like a 90% failure rate. I think the lawsuit may have jolted them a little to improve quality. I have invested in some 8T Archives that work well so far. I will only buy cheap Seagates; if I'm going to spend more, I'll go with the HGST.

 

WD - people here like them; I don't particularly. Their incredibly high load cycle counts always annoyed me. The 1T Green drives were awesome, though that was a long time ago. The 2T Greens were not nearly as good, and then I switched to Hitachi and stayed there. Hitachi didn't draw a premium price then; I went from 2T to 6T with almost all Hitachi/HGST. I did recently buy two 8T drives from WD. Good price, and so far so good. Like Seagate, I'd only buy cheap WDs, expecting higher failure rates.

 

HGST is $260/8T; Seagate/WD are $180/8T. I actually consider each a good deal. I bought some of each, and they will likely cost me about the same in the long run.
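The cost-per-terabyte arithmetic behind those prices:

```python
# Simple $/TB comparison for the drive prices quoted above.

def dollars_per_tb(price_usd, capacity_tb):
    return price_usd / capacity_tb

print(dollars_per_tb(260, 8))  # HGST 8T: 32.5 $/TB
print(dollars_per_tb(180, 8))  # Seagate/WD 8T: 22.5 $/TB
```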


Supermicro CSE-M35T-1B cages can be had for ~$50-$60 on eBay. Kind of expensive for four, but they are awesome and last forever. I assume they are the same cages used in the Supermicro rack-mount case. There was a flurry a month or two ago of people buying those Supermicro rack-mount cases on eBay, with upgraded backplanes. Not sure why they dried up, if you can't find 'em.

 

Is it happy hour yet? Cheers!

2 hours ago, bjp999 said:


 

I believe Plex currently cannot do GPU transcoding - I was saying that hopefully they'll support it in the future. They currently have limited support for hardware transcoding on the CPU; see here: https://forums.plex.tv/discussion/250946/plex-media-server-hardware-transcoding-preview-1-4-0

 

Thanks for your recommendations regarding drives and cases!

2 hours ago, TinkerToyTech said:

A case that'll take four 5-in-3 modules that I had good luck with is the Antec 1200. My 2 cents' worth.

 

https://www.amazon.com/Antec-Twelve-Hundred-V3-Gaming/dp/B004INH0FS/ref=sr_1_1?ie=UTF8&qid=1498083812&sr=8-1&keywords=antec+1200+v3

 

Much appreciated. I saw that case, but it's pretty ugly IMO. Aesthetics don't matter that much, but if I choose a tower I prefer a "cleaner" look.

11 minutes ago, unraidusername said:

Much appreciated. I saw that case, but it's pretty ugly IMO. Aesthetics don't matter that much, but if I choose a tower I prefer a "cleaner" look.

 

The earlier versions weren't quite as whiz-bang. And if you installed four 5-in-3s, all those blue fans on the front would be gone. But I get you.

 

My Sharkoon:


(photo of the server attached)

 

 

  • 4 weeks later...
On 6/21/2017 at 6:04 PM, bjp999 said:

HGST (previously Hitachi) are great. Last forever. Tanks. Highest recommendation. I'd use them all day and all night if they were a little cheaper. But they are worth the premium in reliability IMO.

 

I beg to differ. I've been using WD drives for about 15 years and have had very few problems (1 DOA in 15 years, and maybe 3 or 4 dying of old age out of 50+ drives). When I decided to upgrade my drives, I saw on Newegg that the 4 TB HGSTs had a higher rating than the 4 TB WD Reds, so I decided to go with them. I bought 3 and 2 were DOA, so I RMA'd them through HGST, which I had to pay the shipping cost for; it took about 1.5 weeks for them to arrive and about another 1.5 weeks for the replacements to come back to me. WD will overnight you a drive as long as you put up your credit card as collateral, which I think is awesome and is why I've stuck with them. Two years later, one of my seven 4 TB HGST drives has 744 reallocated sectors! When I upgrade any of the drives, I'm going with either WD Red or WD Red Pro.

Edited by brando56894
