skynet + hal & wopr (esxi head + 2 unraid das)



so, this is what i'm building ... it's a work in progress right now, as i'm waiting on the ram (last piece missing)

 

skynet (ESXi vSphere 5U1)

Case:  Cooler Master CM 690 II Advanced

Drive bays:  Cooler Master 4 in 3

CPU: Intel Xeon E3-1230 v2

Motherboard: Supermicro X9SCM-F-O

Storage Adapter: AOC-SASLP-MV8

Storage Adapters: 2x IBM ServeRAID M1015 flashed to IT mode (from ebay)

RAM: KVR1600D3D4R11SK4/32G ECC kit (32G total)

PSU: SeaSonic X Series X650 Gold

Flash drives: 2x Kingston Digital DataTraveler 101 Generation 2 - 4 GB

SSD: OCZ 120GB Vertex 3

Hard Disks: 1.5Tb WD15EADS + 500Gb WD5000AAKS (cache + staging for usenet downloader)

Hard Disks: 2Tb WD20EADS + 2Tb Samsung HD204UI (backup storage)

Hard Disks: 2x 1Tb WD10EADS + 1Tb WD10EACS + 1Tb Seagate 7200.12 + 1Tb Samsung HD103UJ (datastore via nfs)

NIC: Intel Gigabit CT PCI-E Network Adapter EXPI9301CTBLK

Expansion Card: 2x Chenbro SAS Expander CK23601

 

hal (unRaid DAS)

Case:  Norco 4020 with stock fan plate and silent fans

Hard Disks: Parity + 20x 2Tb

Total Capacity: 40Tb

 

wopr (unRaid DAS)

Case:  Norco 4224 with 120mm Norco plate and ball bearing rails

Hard Disks: Parity + 24x 3Tb (planned)

Total Capacity: 72Tb (planned)

 

Primary Use

i'm planning to run the following VMs initially

 

unRaid (2x: hal & wopr) / 4Gb each

For movies and tv shows storage (mostly blu-ray or blu-ray rips)

 

Solaris 11 (as yet unnamed) / 12Gb

will hold two separate zfs pools

2 disk encrypted mirror to backup my main workstation and other machine's data

5 disk raidz to serve as a datastore

 

FreeBSD (hermes) / 4Gb

nzbget usenet downloader and sickbeard "pvr"

 

Ubuntu Server 12.04 (kepler) / 2Gb

gitlab, postgresql and other general purpose linux duties :)

 

Additional notes

i have a lot of work ahead of me: build skynet, disassemble/reassemble my current norco unraid server, mount both norcos into my 4-post rack, make sure unRaid (hal at first) runs fine ... and stuff :)

needless to say, i got my inspiration from Johnm, BetaQuasi and other guys' awesome builds in this section

 

Link to comment

OK, so you're building what I have designed for my next project...

I will definitely follow this thread.

 

Few quick  random thoughts oozing from my head before I have had coffee...

You have the wrong RAM. That is registered RAM.

The good news though... the correct RAM has dropped about 50% in price in the last month or so http://www.superbiiz.com/detail.php?name=W16GE1333K

 

I have already switched my datastores to a ZFS RAIDz array shared via NFS and it was worth it. Not only are they snappier, they are now on redundant arrays and easier to back up. My benchmarks on a VM using the array are in the 450MB/s range (M1015 w/ 4x Samsung 2TB F4's).

 

You might consider using an SSD with advanced garbage collection. My Vertex 3 didn't last a year. It might have had nothing to do with it being a datastore drive, but it will be taking a beating.

 

As long as you are using the Chenbro expanders, I would try to find the Chenbro UEK for the DAS boxes. It will make life so much easier for the build. Provantage used to have them listed, now they seem to have stopped selling them.. hmm

http://usa.chenbro.com/corporatesite/products_detail.php?sku=76

 

I would also order one of these for skynet (your head). http://www.pc-pitstop.com/sas_cables_adapters/AD8788-4.asp

You might be able to get away with the 2 port version since the SAS Expanders only have a single input port. (see below why 4 ports would be used.)

Then get your 8088 cables from MONOPRICE or somewhere.

 

The way i would build the Head/DAS combo..

HEAD (I would stick the M1015 in the head)
M1015
 |-- {channel 1 (SAS1)} Parity and cache (and 2 faster array drives if you want), kept in the head
 |
 +-- DAS
      {channel 2 (SAS2)} Expander with 20-24 DATA drives

The other option is to run a second 8088 to the DAS and use one of the PC Pitstop adapters, putting the parity into the DAS on its own channel.

 

 

I think naming the servers after computers that went nuts and declared war on their owners is asking for trouble.. Especially since at least 2 of them were clearly homicidal

PS, I think HAL's full name is HAL 9000.

 

 

EDIT: for the cache drives... you could use virtual drives from your NFS share. I have a 250MB virtual drive now. The nice part is, it is a protected drive since it is on a raidz.

 

It looks like you're trying to use what you already own.. I would consider moving the head to a Norco, just to keep it all symmetrical. Eventually, that is...

 

 

Link to comment

Few quick  random thoughts oozing from my head before I have had coffee...

You have the wrong RAM. That is registered RAM.

The good news though... the correct RAM has dropped about 50% in price in the last month or so http://www.superbiiz.com/detail.php?name=W16GE1333K

f&$k !! you're right, i have to get the correct ram  :(

 

You might consider using an SSD with advanced garbage collection. My Vertex 3 didn't last a year. It might have had nothing to do with it being a datastore drive, but it will be taking a beating.

eventually will

 

As long as you are using the Chenbro expanders, I would try to find the Chenbro UEK for the DAS boxes. It will make life so much easier for the build. Provantage used to have them listed, now they seem to have stopped selling them.. hmm

http://usa.chenbro.com/corporatesite/products_detail.php?sku=76

i'm going commando at first, but if it turns out to be a problem, then yes, i'll get the kit

 

The way i would build the Head/DAS combo..

HEAD (I would stick the M1015 in the head)
M1015
 |-- {channel 1 (SAS1)} Parity and cache (and 2 faster array drives if you want), kept in the head
 |
 +-- DAS
      {channel 2 (SAS2)} Expander with 20-24 DATA drives

The other option is to run a second 8088 to the DAS and use one of the PC Pitstop adapters, putting the parity into the DAS on its own channel.

i'm currently planning to leave parity on the DAS. no cache, as the 1.5Tb will basically serve that purpose for both unRaid DASes.

why are you suggesting leaving the parity in the head? will it have too big an impact on parity calculation, leaving it in the DAS?

 

 

I think naming the servers after computers that went nuts and declared war on their owners is asking for trouble.. Especially since at least 2 of them were clearly homicidal

PS, I think HAL's full name is HAL 9000.

i've had hal for about 2 years now and it's been behaving, although i keep a very close eye on it  ;D

you're right, it's HAL 9000, i took artistic license there

 

thanks for your comments Johnm

Link to comment

 

why are you suggesting leaving the parity in the head? will it have too big an impact on parity calculation, leaving it in the DAS?

 

 

You could have it in the DAS, or you could have it in the head..

My thought was for performance. You are going to use only one port on the M1015.. I was thinking put the parity (and cache if you used one) on the second port..

If you do that, leaving it in the head eliminates an expensive cable and an 8087 to 8088 adapter..

Leaving it in the das (on its own channel via a second 8088 and its own port) would give you the option to power it off.. so that is an advantage..

 

As I said, there is no real reason other than performance (if it even helps at all? it might not)

 

I am guessing you are going to toss parity on the expander.. I am sure that will work. I would be very interested in seeing what 24 drives on a single SAS2 channel look like when doing a parity compute..

Link to comment

I am guessing you are going to toss parity on the expander.. I am sure that will work. I would be very interested in seeing what 24 drives on a single SAS2 channel look like when doing a parity compute..

i think we'll both find out soon (at least for 20 drives)  ;)

 

me too, i plan on doing the same thing as i fill up my drives

Link to comment

I am guessing you are going to toss parity on the expander.. I am sure that will work. I would be very interested in seeing what 24 drives on a single SAS2 channel look like when doing a parity compute..

i think we'll both find out soon (at least for 20 drives)  ;)

 

me too, i plan on doing the same thing as i fill up my drives

 

Right now I am running 16 + 4 with no impact: 16 on channel 1 and 4 on channel 2. My parity is on channel 2.

(I would be willing to move a sas cable to the expander and test a full load of 20 on one channel, but I fried my expander and it is mid-RMA)

I would go larger myself; unfortunately I ran out of room in my Norco. I have 20x 3TB drives for unRAID and 4x 2TB drives for my ZFS. Hence, I need to look into the DAS thing myself to get my additional drives into my system.

 

I am thinking 3x RPC4224 or 2x RPC4224 + 1x 2212.

I'll probably go with 3x 4224 for flexibility and future expandability.

Link to comment
  • 3 weeks later...

i had a glimpse of awesomeness today !!!

 

i managed to boot unraid as a guest inside my esxi host, and was able to access all disks and shares ... woohooo !!!

 

but then ...

 

well, there were a couple of hiccups along the way

 

- the first couple of times i tried, i had a red dot drive .. it was the drive connected to one of the chenbro ports via a sata fanout cable. it turned out i had a norco reverse breakout cable, which wouldn't work, but fortunately i also had a forward breakout cable i could use. when i plugged that in, voila! all drives were available and i was able to start the array !!!

- as i was hauling the norco 4220 case around, i managed to unhinge the power connector of my power supply (corsair hx850). although it still worked, i declared it dead. since i had a corsair ax850 for the other das, i went about plugging it in. this is when i found out there aren't enough molex connectors to power 5 backplanes, the fanplate fan connector AND the chenbro. bad luck man! i've already ordered a sata to molex adapter to power the chenbro.

 

some other notes

- i've never seen unraid boot so fast ... it was awesome (i used johnm's method, option 2, derived from briangp's original method). it was easy to create a 2gb unraid vmdk with

vmkfstools -c 2147483648 -a lsilogic -d thin unraid-50rc5.vmdk

then copy your chosen unraid version, run make_bootable, and offer it to the unraid guest to boot off of.

- i found out the supermicro aoc-saslp-mv8 doesn't work with solaris 11, so i'm off to buy another m1015 ... i will end up with 3 of them

 

a couple of pics of my ghetto rack and some of the gear (sorry for the crap iphone pics)

 

GHETTO RACK

nFUttl.jpg

 

top row : motorola surfboard / hp procurve 1410-24G switch / Linksys WRT54GL router+wifi with tomato firmware / generic 14" monitor

2nd row : dell pentium III 500mhz pfsense firewall / skynet / harmony one charger / cyberpower CP1350AVRLCD ups

3rd row : set aside for wopr (das #2)

4th row : hal (das #1)

 

the dell was the first machine i purchased (1999) and has been so resilient that i'll run it 'til the very end :)

i'm not done with cable management, but it was just too much for this round, will do something about it next time

 

skynet (head)

oSXREl.jpg

 

the sff8088 cables are coming out the watercooler holes in the case  :o

in that pic i still had the saslp-mv8

 

hal (das #1)

HWkKHl.jpg

 

the chenbro performing like da man !

Link to comment

nice...

 

It is coming along quite well.

 

Are you using the PC that is in the DAS for something else or is it just a source for PCIe power and an easy switch for the DAS?

 

thank you Johnm.

 

no other use for the pc/mobo, it doesn't even power the chenbro (since  the chenbro needs a molex to operate).

the uek you mentioned previously is the way to go ... but they seem to have been discontinued (can't find them even on ebay)

Link to comment
  • 3 weeks later...

 

Were you able to run parity and check speeds? Would appreciate the results.

 

It seems that before one buys a Chenbro expander card, the mounting approach needs to be decided too, as they come in 2 different form factors:

CK23601 ... the standalone card

UEK-23601 ... the card with the kit

 

Speed results (ideally loaded with 24 drives.. 20 for now) would help in deciding whether one should go a particular way:

parity (cache OPT) in head -OR-

parity (cache OPT) in DAS

 

Also, in a norco 4220 or 4224, where would you mount the UEK:

front: then an 8088 cable will be going from back-of-head to front-of-das in a rackmount... kind of ugly

back: is it possible? slot-wise where... i think uek uses a 3.5" slot.. might be wrong here.

 

Link to comment

 

Were you able to run parity and check speeds? Would appreciate the results.

not yet, i'm planning on doing it this coming weekend. i'll post the results. right now parity + 20x disks are on the das, on a single m1015 port.

 

Also, in a norco 4220 or 4224, where would you mount the UEK:

front: then an 8088 cable will be going from back-of-head to front-of-das in a rackmount... kind of ugly

back: is it possible? slot-wise where... i think uek uses a 3.5" slot.. might be wrong here.

the uek attaches to the case pretty much like a motherboard would (with holes for the standoffs), and it has connections to enable power-on like any other mobo/case combo, so it's pretty elegant.

 

Link to comment

Thanks lboregard, looking forward to the results...

 

Had another question as you or johnm might be aware on this...

Does having a Chenbro UEK type configuration allow "daisy chaining" power too? Meaning, would powering the main ESXi HEAD up or down automatically power the DAS unit up or down? And can this sort of arrangement work with a UPS too, in case of power-downs and shutdowns?

 

 

 

 

Link to comment

well, i finally got around to running parity ... it ran in 10h 23m, at an avg speed of 53.5 MB/sec

 

running off a single m1015 port definitely took its toll; i understand avg speeds around here are 70-75 MB/sec ...

 

the question is ... will i do something about it or will i live with it? for the time being, i'll live with it ... it may blow up the m1015 at some point in time though  ;D
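for what it's worth, the reported time and speed agree with each other; a quick sketch (assuming a 2Tb parity drive and the vendor's decimal accounting, 1Tb = 10^12 bytes):

```python
# rough check: 2Tb parity sync at 53.5 MB/s average
parity_bytes = 2 * 10**12           # 2 Tb in vendor (decimal) bytes
avg_bytes_per_sec = 53.5 * 10**6    # 53.5 MB/s

seconds = parity_bytes / avg_bytes_per_sec
hours = int(seconds // 3600)
minutes = round((seconds % 3600) / 60)
print(f"{hours}h {minutes}m")       # -> 10h 23m
```

so the 10h 23m is exactly what 53.5 MB/s over a 2Tb drive works out to.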

 

6HYRu.jpg

Link to comment

That's not too bad. I have seen worse speeds off of a SASLP-MV8. We knew there would be a hit in speed. Now we know.

 

I honestly thought it might be worse.

 

I bet faster drives would not be so bad.

 

As an experiment, I wonder if it would be any faster if you put the Parity drive on the single port in the head?

 

I'm glad it is working for you.

Link to comment

Thanks lboregard for the results... though you and johnm have mentioned that a reduction in parity speed was expected with the single M1015 port config, I'm not able to understand exactly why.

 

As per my understanding, the M1015 is a PCIe 2.0 x8 card, so the slot gives ~4 GB/s; the tighter limit is a single SAS2 wide port: 4 lanes * 6 Gb/s, with 8b/10b encoding, is ~2.4 GB/s usable. With everything behind one M1015 port:

    assuming 24 drives: ~100 MB/s available bandwidth per drive

    assuming 1p+20d+1c i.e. 22 drives: ~110 MB/s available bandwidth per drive

i.e. ~50 MB/s parity feels a bit low... even if all drives were spinning (it's like 50% efficiency... 50/110)

I was expecting close to 70-90 MB/s ... with the WD Green drives' read speed limit of ~70-90 MB/s being the limiter here.

 

I had read somewhere that the 24-port and even 36-port expanders like the Intel and Chenbro came to market alongside 24-bay 4U and 36-bay 6U case configurations... with the basic assumption that a PCIe 2.0 x8 slot has enough bandwidth (~110 MB/s per drive for a 36-drive config) to support such arrays and should still easily saturate SATA2 drives. Hence the majority of server boards carry x8 PCIe slots only... and x16 never became the norm.
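A quick back-of-the-envelope in code (my assumed figures, not measured: 6 Gb/s per SAS2 lane, 8b/10b encoding leaving ~600 MB/s usable per lane, 4 lanes per mini-SAS wide port, 21 drives spinning during the parity check):

```python
# per-drive bandwidth behind a single SAS2 wide port (assumed figures:
# 6 Gb/s per lane, 8b/10b encoding -> ~600 MB/s usable, 4 lanes per port)
usable_per_lane_mb = 600
port_bw_mb = 4 * usable_per_lane_mb   # ~2400 MB/s for one SFF-8087/8088 port

drives = 21                           # parity + 20 data behind the expander
per_drive_mb = port_bw_mb / drives
print(f"~{per_drive_mb:.0f} MB/s per drive")   # -> ~114 MB/s
```

~114 MB/s per drive is well above the observed ~53.5 MB/s, which is why raw link bandwidth alone doesn't seem to explain the slowdown.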

 

Given this reasoning and the previous calculations, one M1015 port shouldn't really slow down parity. Please correct me here.

 

Any thoughts? Could something be going on w/ the Chenbro card? Or in your config?

 

Also, johnm asked: will you be able to do another test with the parity now on the other port of the M1015 (basically in the head, on its own SAS channel, vs earlier sharing a channel with the data drives)? This might give some more insight...

 

Link to comment
  • 2 weeks later...

Hi lboregard and johnm,

 

Just wanted to bounce some ideas off you to address both:

  - maximum throughput of PCIe x8 (to avoid slowdowns and bottlenecks) to an UnRAID DAS

  - and clean installation w.r.t. having an ESXi head + UnRAID DAS(es)!

 

Do you think this following configuration would work?

 

ESXi head box (could be whatever you have, Norco 2112 etc.)

> IBM M1015 in PCIe x8 slot

> 2 port pitstop adapter (where both INside ports are internally connected to the 2 SAS ports of the M1015)

 

UnRAID DAS (Norco 4224... 24 bays)

> Chenbro CK23601

------ has 6 SAS ports inside for the backplanes... i.e. all 24 bays connected

> 1 port pitstop adapter (connected to internal IN port of CK23601)

 

With this you essentially connect both machines with 2x 8088 cables:

- Esxi head's 1st OUTside port of pitstop adapter -> UnRAID DAS' Chenbro CK23601's External IN port

- Esxi head's 2nd OUTside port of pitstop adapter -> UnRAID DAS' 1 (and only) OUTside port of pitstop adapter

 

This configuration can easily be extended to an ESXi head + 2 DAS(es) by just using a 4 port (instead of 2 port) pitstop in the head.

 

Advantages:

- Should allow a clean installation, clean cables etc.

- Maximum bandwidth possible from the PCIe x8 card (both M1015 ports) going to the UnRAID DAS... avoiding the parity slowdown etc. Parity and cache can now be in the DAS itself.
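To put rough numbers on the bandwidth advantage (assumed figures: ~600 MB/s usable per 6 Gb/s SAS2 lane after 8b/10b encoding, 4 lanes per mini-SAS port, 24 drives in the DAS):

```python
# per-drive bandwidth into a 24-bay DAS with one vs two SAS2 wide links
# (assumed: ~600 MB/s usable per lane, 4 lanes per SFF-8088 port)
port_bw = 4 * 600                  # one mini-SAS wide port, in MB/s
drives = 24
for links in (1, 2):
    per_drive = links * port_bw / drives
    print(f"{links} link(s): ~{per_drive:.0f} MB/s per drive")
# -> 1 link(s): ~100 MB/s per drive
# -> 2 link(s): ~200 MB/s per drive
```

The second 8088 link roughly doubles the per-drive headroom, comfortably above what the green drives can sustain.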

 

The question is: will the Chenbro allow both inputs to be used (from the M1015) and still work?

I think other SAS expanders (which don't specifically designate INs and OUTs, e.g. the Intel RES) allow this.

 

Chenbro defines its ports as:

Internal Ports

- Input from RAID/HBA: 4 ports (1x Mini-SAS)

- Output to Backplane: 24 ports (6x Mini-SAS)

External Ports

- Input from Host: 4 ports (1x Mini-SAS)

- Output to JBOD: 4 ports (1x Mini-SAS)

 

Let me know what you guys think!

 

Also, lboregard... apart from the original experiment of putting parity in the head... you might just connect a second cable from the other M1015 port to the other IN of the Chenbro and run parity again (still in the das), to see if this speeds things up?

 

 

 

Link to comment
