New member first build and disk strategy questions...


MacGeekPaul


Hi all,

 

I’ve outgrown my current Synology systems and, after doing my research, I’ve decided to take the plunge and build my own unRAID server.

This will primarily be a file/media server, but I plan to add a good GPU in a few months and double it up with a gaming VM.

I have already bought a few items and am just waiting to source the remaining parts. If I list them here along with my disk and migration strategy, could someone comment on anything I may need to be aware of?

 

So the purchased items are as follows:

  • Asus Prime Z270-A Motherboard
  • Kingston HyperX FURY 16GB 2400 MHz DDR4 RAM
  • Noctua NH-U14S U-Series CPU Cooler
  • EVGA SuperNOVA 550 G2 80+ GOLD 550W PSU
  • Supermicro AOC-SASLP-MV8 SAS/SATA Card (+ SATA cables)

Items still to be purchased:

  • Fractal Design Define R5 Case
  • Intel i7 6700/i7 7700 CPU

 

I currently have 2x 4 bay and 1x 2 bay Synology NAS with the following hard drives that I will be moving to the unRAID server.

  • 8x 3TB WD Red HDD
  • 1x 4TB WD Red HDD
  • 1x 4TB Seagate Desktop HDD
  • 1x 2TB Seagate Desktop HDD

Plus I will be adding:

  • 2x 1TB SanDisk SSD

 

So my drive thinking is as follows:
The motherboard has 6x SATA ports, so connected to those I'll put:

  • 2x 4TB HDD as parity drives
  • 2x 1TB SSD as a cache pool
  • 1x 2TB HDD as a single drive for Time Machine backups from my Mac.

On the SAS card I plan to put:

  • 8x 3TB HDD

 

My drive migration thoughts are to initially add a parity drive, a data drive and a cache drive, then slowly start copying over the data from the Synology boxes. As each 3TB drive in the Synology is emptied, I'll add it to the unRAID data array, until everything has been copied over. (Note: not all drives in the Synology are full; they are set up as JBOD rather than SHR, with anywhere between 300GB and 2TB used per drive.)

My initial questions are whether this is the best way to do the data migration, and whether I should use both 4TB HDDs as parity drives or whether that's overkill and one of them would be better used as an additional data drive in the array.
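For the actual copying, I'm picturing something like the rough sketch below - mount each Synology share on the unRAID box over NFS or SMB, then rsync it across one volume at a time with a verification pass afterwards. All the paths and share names here are just placeholders for whatever I end up using:

    # Rough migration sketch: copy each mounted Synology volume onto the
    # unRAID array one at a time, then re-run rsync with --checksum to
    # verify before wiping the source drive. Paths are placeholders.
    import subprocess

    SOURCES = [
        "/mnt/synology/volume1/",   # hypothetical mount points for the
        "/mnt/synology/volume2/",   # Synology shares on the unRAID box
    ]
    DEST = "/mnt/user/media/"       # target unRAID user share

    for src in SOURCES:
        # Initial copy: archive mode preserves timestamps and permissions.
        subprocess.run(["rsync", "-a", "--progress", src, DEST], check=True)
        # Verification pass: --checksum re-reads both sides and re-copies
        # any file whose contents differ.
        subprocess.run(["rsync", "-a", "--checksum", src, DEST], check=True)
        print(f"{src} copied and verified")

The idea being that I'd only wipe a Synology drive and move it into the array once its checksum pass comes back clean.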

 

Sorry it's a long, drawn-out question; I'm just trying to get everything planned correctly before I make the transition. :)

 

Cheers,

Paul.


Hello and welcome.  Some random feedback:

  • I would not have recommended an AOC-SASLP-MV8.  They were once the preferred HBA for unRAID, but not any longer.  LSI-based cards have been more compatible and better performers.
  • Is your data backed up?  It really should be, and if you have backups then you could go without parity during the data movement to unRAID (it would be faster).
  • I wouldn't use the cache drive during the migration - actually, I don't use a cache drive to cache writes to the array at all - just turbo write when I need it.  My cache drive is purely an "application" drive.
  • I agree with putting the parity drives and SSD on the motherboard SATA ports.
  • I'd use dual parity with 8 data drives, but I freely admit that's somewhat conservative.  Others won't bother with dual parity until they're running a larger array.
  • Are you buying drives?  The Seagate 8TB Archive drives are the king of $/TB right now.

Hi tdallen,

 

Thanks for your welcome and feedback. 

 

I wasn’t initially sure about the AOC-SASLP-MV8 card, but I had read on here that it was compatible and it seemed like a good price. I also had to make a snap decision, so I grabbed it for now to get me up and running, my thinking being that, if needed, I can get something better at a later date and sell this one on.

 

All the data is backed up, so it's very handy to know that it will be quicker and better to just start copying over the data. That works well, as I can then add the parity and cache drives once it's all copied over.

 

With just the 3TB drives used for data, that gives me 24TB of space, which is plenty as I'm currently only using about 12TB, so I think I can use dual parity for now and can always drop to single parity later if needed.
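Just to sanity-check my sums (a quick sketch; it simply assumes, as I understand it, that parity drives add no usable space and must each be at least as large as the largest data drive, which makes the 4TB drives the natural parity picks):

    # Quick usable-capacity check (sizes in TB).
    data_3tb = [3] * 8                    # 8x 3TB WD Red as data drives
    spare_4tb = 4                         # the second 4TB drive

    dual_parity_usable = sum(data_3tb)                # both 4TB drives on parity
    single_parity_usable = sum(data_3tb) + spare_4tb  # one 4TB moved to data

    print(f"Dual parity:   {dual_parity_usable} TB usable")    # 24 TB
    print(f"Single parity: {single_parity_usable} TB usable")  # 28 TB

So even with dual parity I'd still have roughly double what I'm using today.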

 

I do have all the drives so am not in the market for any at the moment.

14 minutes ago, MacGeekPaul said:

All the data is backed up

That's good news.  Parity calculations and writes are slower than direct writes to disk, even with turbo write.  Copying all the data over and then implementing Parity afterwards will save you some time.
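As a very rough back-of-envelope (the throughput figures below are assumptions, not measurements - roughly gigabit wire speed for parity-less copies versus a typical non-turbo write speed with parity enabled):

    # Back-of-envelope copy-time estimate; both speeds are assumptions.
    data_tb = 12          # roughly what's on the Synology boxes
    no_parity_mb_s = 110  # ~gigabit network limit, writing with no parity
    parity_mb_s = 50      # assumed array write speed with parity, no turbo write

    def hours(tb, mb_per_s):
        return tb * 1_000_000 / mb_per_s / 3600

    print(f"Without parity: ~{hours(data_tb, no_parity_mb_s):.0f} hours")  # ~30
    print(f"With parity:    ~{hours(data_tb, parity_mb_s):.0f} hours")     # ~67

Either way you're looking at a day or more of copying, so the ordering does matter.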

 

I'd recommend standing up your unRAID server and playing around with it for a while before starting the data migration.  There are a lot of options and ways to do things, and it's worth taking a look at that before starting the heavy lifting.


I would very strongly recommend a case that enables you to use hot-swap style drive cages. Our #1 NAS issue is people knocking cables loose while adding or exchanging disks, and although this case is better than some, it does not allow swapping disks without touching the drive wires, which are incredibly easy to knock slightly askew. That's all it takes to create an intermittent issue that leads to drives being dropped from your array and a ruined Saturday afternoon trying to get it sorted out (and sometimes leads to data loss).

 

Drive cages, like the SuperMicro CSE-M35T-1B, are installed in the case once, and the cables are never touched again. Adding or exchanging disks becomes incredibly easy. I'd recommend these above dual parity, which, as @tdallen mentions, is more appropriate for arrays with larger drive counts. In fact, the main value of dual parity in a small array is to help recover (sometimes, anyway) when a cable IS knocked loose, while these cages virtually eliminate these problems from happening in the first place!

 

A case like the Antec 900 would hold three of those cages and give you 15 hot-swap drive slots (see below). The one I mentioned can sometimes be found on eBay for between $55 and $70 delivered.

[Image: Antec 900 case fitted with three 5-in-3 hot-swap drive cages]

51 minutes ago, tdallen said:

I'd recommend standing up your unRAID server and playing around with it for a while before starting the data migration.  There are a lot of options and ways to do things, and it's worth taking a look at that before starting the heavy lifting.

 

That's a good shout, thank you.

 

22 minutes ago, bjp999 said:

I would very strongly recommend a case that enables you to use hot-swap style drive cages. Our #1 NAS issue is people knocking cables loose while adding or exchanging disks, and although this case is better than some, it does not allow swapping disks without touching the drive wires, which are incredibly easy to knock slightly askew. That's all it takes to create an intermittent issue that leads to drives being dropped from your array and a ruined Saturday afternoon trying to get it sorted out (and sometimes leads to data loss).

 

 

Hmm, that's a good point that I hadn't thought of. I was pretty much focused on finding a reasonably priced case that could hold all of the drives, but hadn't taken the disk-swap issue into account.

Thanks for highlighting that, and for the case recommendation - I'll do some more delving into case options before I pull the trigger.


It's a trade-off.  The R5 is a nice standard case: it has removable disk trays and allows you to take off both sides, giving you access to both the front and back of the disks when you are changing them out.  I feel better about it than most standard cases.  Hot-swap cages are a lot nicer, and as @bjp999 says, there are tons of issues with inadvertently loosened cables during disk swaps.  That said, hot-swap cages come with the trade-offs of added cost and complexity - additional power and SATA connections, fans, backplanes that can fail, etc.  It's clearly true, though - after you get your server set up and burned in, the only thing you are likely to touch on an ongoing basis is the drives, and you want a setup that is as fail-proof as possible for drive maintenance.

 


I believe the pros of drive cages far outweigh the cons. My cages allow for front access without tools or screws to remove and insert disks. Internally, fewer power connectors are needed, since the cages have their own power distribution in place. Individual disk activity can be easily monitored, as each disk has its own activity LED. Cages also have their own fan to provide cooling for the inserted disks.

 

The only cons I've found: they are relatively expensive and you need the proper case (not all are suitable) to house them.

 

7 minutes ago, tdallen said:

It's a trade-off.  The R5 is a nice standard case: it has removable disk trays and allows you to take off both sides, giving you access to both the front and back of the disks when you are changing them out.  I feel better about it than most standard cases.  Hot-swap cages are a lot nicer, and as @bjp999 says, there are tons of issues with inadvertently loosened cables during disk swaps. 

 

Ok so far :)

 

8 minutes ago, tdallen said:

That said, hot-swap cages come with the trade-offs of added cost and complexity - additional power and SATA connections, fans, backplanes that can fail, etc.

 

Cost - agreed. They cost about $12 a slot, but they are a long-term investment. I consider it part of the case cost - I'd go with a cheaper case and add these. Still more expensive :(, but well worth it. :)

 

Power - each cage requires one power connection - that is 1 power lead for 5 drives! You can plug in a second, but that is only for a redundant PSU. So it seems like less cabling with the cage. :)

 

SATA - each cage requires one SATA connection per drive - that is 5 data connectors for 5 drives, so that seems equal. :) (I still dream of a cage with a SAS connector.)

 

Fans - fans can fail, whether in the cage or the case. The SuperMicro (SM) cages use full ball-bearing fans; I have had some running continuously for over 5 years and they are still running fine - no strange noises or sounds. I'd call this one equal and highly dependent on fan quality! :)

 

Backplanes - the cage units don't have separate backplanes; I think it's only on big rack-mount jobs that backplanes are separate. I would say the latch mechanism is more of an issue with the cages. When you latch (or click) the drive into place, it should be securely plugged, but on some cheaper units (Rosewill) I found that you needed to give the tray an extra little shove into the cage to ensure good contact with the plugs at the back. The SMs are flawless - if latched, there is solid contact. If you compare the reliability of a cage with the reliability of hand-plugged wiring into the back of each of 5 drives, the cage wins HUGELY (even the Rosewill!). I can't remember a single user reporting a dropped drive with a cage, versus an average of 1 or 2 a week (every week) with hand-plugged wires. Cages win hands down! :)

 

8 minutes ago, tdallen said:

It's clearly true, though - after you get your server set up and burned in, the only thing you are likely to touch on an ongoing basis is the drives, and you want a setup that is as fail-proof as possible for drive maintenance.

 

 

 

