skynet + hal & wopr (esxi head + 2 unraid das)



hi notandor, your proposal seems ok, i might investigate doing it ... haven't had a chance to mess around with the das, since i'm now building a desk (based on … and … instructions) and switching from xbmc to jriver media center on my htpc, so i've had my hands full (and i need a working array for the latter :) ) ... i'll post as soon as i'm able to work on the das.

Hi lboregard and johnm,

 

Just wanted to bounce some ideas off you to address both:

  - maximum throughput of a PCIe x8 slot (to avoid slowdowns and bottlenecks) to an UnRAID DAS

  - and a clean installation w.r.t. having an ESXi head + UnRAID DAS(es)!

 

Do you think the following configuration would work?

 

ESXi head box (could be whatever you have: Norco 2112, etc.)

> IBM M1015 in a PCIe x8 slot

> 2-port pitstop adapter (both INside ports internally connected to the 2 SAS ports of the M1015)

 

UnRAID DAS (Norco 4224... 24 bays)

> Chenbro CK23601

------ has 6 internal SAS ports for the backplane... i.e. all 24 bays connected

> 1-port pitstop adapter (connected to the internal IN port of the CK23601)

 

With this you essentially connect both machines with 2x SFF-8088 cables:

- ESXi head's 1st OUTside port of the pitstop adapter -> the UnRAID DAS's Chenbro CK23601 external IN port

- ESXi head's 2nd OUTside port of the pitstop adapter -> the UnRAID DAS's one (and only) OUTside port of its pitstop adapter

 

This configuration can easily be extended to an ESXi head and 2 DAS(es) by using a 4-port (instead of 2-port) pitstop in the head.

 

Advantages:

- Should allow a clean installation, clean cables, etc.

- Maximum possible bandwidth from a PCIe x8 slot (M1015) going to the UnRAID DAS... avoiding parity slowdowns etc. Parity and cache can now live in the DAS itself.
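To put rough numbers on the bandwidth claim, here is a back-of-the-envelope sketch; the ~130 MB/s sustained per-drive figure is an assumption for 7200 rpm disks, and the line rates are theoretical ceilings after 8b/10b encoding:

```python
# Rough bandwidth math for the dual-link proposal. Real-world throughput
# will be lower than these line-rate ceilings due to protocol overhead.

PCIE2_LANE = 500   # MB/s per PCIe 2.0 lane (5 GT/s, 8b/10b)
SAS2_LANE = 600    # MB/s per SAS 6 Gb/s lane (8b/10b)

pcie_x8 = 8 * PCIE2_LANE              # M1015 slot ceiling: 4000 MB/s
one_link = 4 * SAS2_LANE              # one SFF-8087/8088 wide port: 2400 MB/s
two_links = 2 * one_link              # both M1015 ports cabled: 4800 MB/s

drives, per_drive = 24, 130           # assumed sustained MB/s per spinning disk
demand = drives * per_drive           # all 24 drives streaming at once

print(pcie_x8, one_link, two_links, demand)
assert two_links > demand > one_link  # one cable bottlenecks; two do not
```

In other words, a single SFF-8088 cable (2400 MB/s) could throttle a full 24-drive parity check (~3120 MB/s), while two cables roughly match the M1015's PCIe 2.0 x8 ceiling, which is why the second link matters.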

 

The question is: will the Chenbro work with both of its inputs in use (from the M1015)?

I think other SAS expanders (which don't specifically designate INs and OUTs, e.g. the Intel RES) allow this.

 

Chenbro defines its ports as:

Internal Ports

- Input from RAID/HBA: 4 ports (1x Mini-SAS)

- Output to Backplane: 24 ports (4x Mini-SAS)

External Ports

- Input from Host: 4 ports (1x Mini-SAS)

- Output to JBOD: 4 ports (1x Mini-SAS)

 

Let me know what you guys think.

 

Also, lboregard... other than the original experiment of putting parity in the head, you might just connect a second cable from the M1015 to the other IN of the Chenbro and run a parity check again (still in the DAS) to see if this speeds it up?

From what I read on [H]ard..., the Chenbro OUT ports only work as outputs and the IN ports only as inputs. Unlike the Intel units.

 

 

Sent from my iPhone using Tapatalk



 

So what advantage does the Chenbro have over the Intel then? Because it looks like the Intel is cheaper too.


 


 

So you mean there's a chance the above proposal might work... since it doesn't really modify the OUTs or INs of the Chenbro, it just tries to use the 2 INs (external and internal) at the same time?

 

If this works, I'm thinking it will be a sweet way of keeping the HEAD and DAS with clean boundaries and clean cable/rack management, without worrying about any sort of slowdown, by essentially passing the full PCIe x8 bandwidth to Unraid.

 

The ideal goal would be to have a Norco 4224 as DAS with 2 parity and 22 data drives, once some future version of unRAID from limetech supports this... with the cache being on an NFS RAIDz datastore in the ESXi HEAD! :-) Till then the empty hot-swap slots can be used for hot/warm spares!

 

lboregard, eagerly waiting for your results...

 

 



 

So what advantage does the Chenbro have over the Intel then? Because it looks like the Intel is cheaper too.

 

If you are building ESXi and unRAID in the same box, then really no advantage: you get 4 SATA ports from 1 SAS port of the M1015 and 20 SATA ports from 5 SAS ports of the Intel, so essentially 24 SATA ports; the other M1015 port connects to 1 SAS port of the Intel.

 

If you are building ESXi and unRAID in different boxes AND plan to keep parity and cache in the ESXi box, then also no advantage as of today: parity and cache go on 1 port of the M1015, and with the Intel you can populate 20 data drives, the maximum drive limit as of today.

 

If you are building ESXi and unRAID in different boxes AND want to be anal :-) (I am) and keep all unRAID-related stuff in one box, then the Chenbro comes to the rescue: it allows 24 ports, so 1 parity, 1 cache, 20 data, and 2 left free for the future.

 

If one wants to be futuristic, they could buy the next version of the Intel, the RES2CV360, which has 36 ports (almost the same price as the Chenbro)... though all are internal ports, and 1 or 2 could be used to connect to the M1015... and hope someday unRAID will use them all!

 

The Chenbro is the solution if one wants to keep good clean boundaries in a rack that has, e.g., sliding rails. In that case you will want pluggable/unpluggable external cables to connect and disconnect the two boxes.
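The drive counts behind these scenarios work out as follows; a quick sketch, with the 6-port figures for the Intel expander and the Chenbro's backplane side taken from the descriptions above:

```python
# Port math for the layouts discussed above. Each SAS wide port fans
# out to 4 SATA drives.

LANES_PER_PORT = 4

# Same-box build: one M1015 port drives 4 bays directly, the other
# uplinks to the Intel expander (6 wide ports, 1 used as the uplink).
direct = 1 * LANES_PER_PORT
behind_intel = (6 - 1) * LANES_PER_PORT
print(direct + behind_intel)      # 24 bays total

# Split build: all 24 bays sit behind the Chenbro CK23601's 6 internal
# backplane ports, leaving room for 1 parity + 1 cache + 20 data + 2 spare.
behind_chenbro = 6 * LANES_PER_PORT
print(behind_chenbro)             # 24
assert 1 + 1 + 20 + 2 == behind_chenbro
```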

 



When I was looking at the picture of the Chenbro I missed the 3 SAS ports in the center of the card and only saw the 4 off the back. That is why I didn't understand the advantage haha.



 

Hi lboregard,

 

Any new update?

  • 3 weeks later...

i finally connected an additional cable from the second port on the m1015 to the second input on the chenbro ... i think the speed improvement was kind of marginal .. fwiw

i didn't reboot the server ... just plugged the cable in at both ends ... not sure if some initialization is required for it to acknowledge the presence of both lanes

 

[attached screenshot]

  • 3 weeks later...

Nice to see someone has done this!

 

I have just started buying parts to do this myself.

 

Currently running unRAID under ESXi in a Norco 4220 and I've pretty much filled it up, so I'm buying a Norco 4224; the only other difference is I'm planning to use the Intel expanders.

 

Updating my mobo/processor while I'm at it. It seems I'll have to order some bits from the States, as Supermicro motherboards are a pain to find over here, plus a PE-2SD1-R10 for the DAS (if it will work?), so it might take a while to get it all collected.

 

Will keep an eye on this thread!

 

 

  • 2 weeks later...

Limetech had put up a poll asking whether multi-server shares were an option people would want.

There might still be a possibility of that happening after the stable 5.0 release.

 

Many media players like XBMC will allow you to pool several media shares into one library.

 

I have been talking about virtualizing my second unRAID and going DAS for a long time.

That time is coming way too fast. My main 4224 is at maximum capacity (with 3TB drives) and has less than 4TB free.

The only way to expand is Head+DAS. If (when) I do that, I'll plan on going Head+DAS+DAS.

 

I'll have to do this before Christmas at this rate..

 

  • 3 months later...

quick update ...

 

i had one 4-year-old 1.5tb drive die on me .. it was attached to the hermes vm (nzb downloading). this vm was running freebsd 9, with a zfs pool on each disk, no redundancy, no nothing ... i thought long and hard about what to do and in the end i decided to go for a complete overhaul

 

- install esxi 5.1 from scratch

- install freebsd 9.1 for hermes with a 500gb disk for temp files and such and a 2x1.5tb zfs mirror pool for "unraid cache"

- install solaris 11.1 for atlas, with two zfs pools: a 2x2tb encrypted mirror for backups and a 4x1tb raidz pool (previously 5x1tb) to serve as an nfs datastore and for other random purposes
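For reference, the raw capacities of those two pools work out roughly as follows; a sketch which assumes single-parity raidz1, with TB figures before filesystem overhead:

```python
# Usable raw capacity of the two pools described above (TB, before
# filesystem overhead). The raidz pool is assumed to be raidz1.

mirror_tb = 2                 # 2x2TB mirror: capacity of one side
raidz_tb = (4 - 1) * 1        # 4x1TB raidz1: one disk's worth goes to parity
old_raidz_tb = (5 - 1) * 1    # the previous 5x1TB pool

print(mirror_tb, raidz_tb, old_raidz_tb)  # 2 3 4
```

So dropping from 5x1TB to 4x1TB trades one terabyte of usable space for a spare drive.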

 

i backed up the vm internals, but not the vms themselves (i really didn't feel like going the ghettovcb route).

 

i also tried to update the supermicro bios to 2.0b, but it didn't work: not from ipmi with the servethehome.com iso, nor from a physical usb stick with the bios straight from supermicro's site.

 

i was worried that my 3 m1015s would throw a fit ... but they handled it like da man! passthrough went just fine, and attaching them to the unraid vms worked flawlessly as well.

 

all in all ... a very smooth "upgrade" ... no hardware changes at all, except for the new hard drives.



Solaris or OpenIndiana? If Solaris, why pick it over OpenIndiana?
