ATLAS: My Virtualized unRAID Server


Recommended Posts

Very nice thread..

 

I just tried ESXi with my AOC-SASLP-MV8 and it didn't work. :/ So now I might need to get one of those IBM M1015s.

 

However, I do have some concerns about using it with green drives. I have an 8x 1.5TB RAID running on an Adaptec 5805 and I know for sure it killed a couple of my green WD15EADS drives. I'm not entirely sure, but I think it has something to do with TLER: a sort of self-repair that the RAID card only allows 7-8 seconds for, while the green drives can take much longer and are therefore flagged as faulty.

 

Do you have any problems running green drives with the IBM?

 

I am running 14 WD Green drives off of 2 IBM M1015's through ESXi - a mixture of 1-2TB EACS, EADS, EARX drives.  No issues whatsoever.

 

Link to comment

I'm running all green drives (currently 5 + parity) on an LSI 9211-8i HBA (same chipset as the m1015)/Intel RES2SV240 expander and have no issues. If you run the HBA in IT mode you will be bypassing any of the hardware RAID functions, so drive timeouts shouldn't be an issue. I average around 40 MB/s read, 25 MB/s write, without a cache drive.
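
For anyone weighing the TLER question above: a quick way to check whether a drive supports a bounded error-recovery time is smartctl's SCT ERC report. This is a generic smartmontools sketch (run from the unRAID console; /dev/sdX is a placeholder), and many WD Greens simply refuse the set command, which in itself tells you what you need to know:

  # read the current SCT Error Recovery Control (TLER/ERC) setting
  smartctl -l scterc /dev/sdX

  # try to cap read/write recovery at 7.0 seconds (values are tenths of a second);
  # drives without ERC support will just report the command as unsupported
  smartctl -l scterc,70,70 /dev/sdX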

 

I upgraded to ESXi 5 a few weekends ago with no problems.

Link to comment

I'm running all green drives (currently 5 + parity) on an LSI 9211-8i HBA (same chipset as the m1015)/Intel RES2SV240 expander and have no issues. If you run the HBA in IT mode you will be bypassing any of the hardware RAID functions, so drive timeouts shouldn't be an issue. I average around 40 MB/s read, 25 MB/s write, without a cache drive.

 

I upgraded to ESXi 5 a few weekends ago with no problems.

 

Thanks. I did a test run on my MV8 with 3x WD15EADS and got around 80 MB/s, I think. So my question is: is 40 MB/s because you run it under ESXi, or is that just normal performance?
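
If anyone wants to compare numbers directly, a simple way to test is to time the raw device and a big sequential write from the unRAID console. A rough sketch only; sdX and the disk1 path are placeholders, and don't run the write test against a disk that already holds data:

  # sequential read speed of the raw device (read-only, safe)
  hdparm -t /dev/sdX

  # rough sequential write test against a data disk's mount point
  dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=2048 conv=fdatasync
  rm /mnt/disk1/ddtest.bin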

 

 

Link to comment
  • 2 weeks later...

Almost there. I finally got a little bit of time to get back to working with ESXi 5 and setting up unRAID. Work has been a time black hole lately.

 

Got everything working (MV8 passthrough hack) and it boots up fine using the Plop boot USB ISO.
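
For reference, the "MV8 passthrough hack" boils down to adding the card's PCI IDs to /etc/vmware/passthru.map on the ESXi host so it becomes eligible for VMDirectPath. This is a sketch of the idea only; VVVV/DDDD are placeholders for the SASLP-MV8's Marvell vendor/device IDs, which you would confirm with lspci on the host (or the vSphere passthrough page) before rebooting:

  # /etc/vmware/passthru.map  (ESXi 4.1 / 5.0)
  # format: vendor-id  device-id  reset-method  fptShareable
  VVVV  DDDD  d3d0  default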

 

I did not pass through the NIC, since unRAID will be the main network user but some other guests will access unRAID (comskipping, file conversions, etc.), so I want to keep that traffic internal instead of sending it out to the switch and back in.

 

My problem, though, is that unRAID is not getting an IP address, and I'm only able to access it via the vSphere Client console.

I'm sure I missed something. Any help/suggestions would be appreciated.

 

syslog.txt

Link to comment

Thanks. I'll check that. But the problem was I'm a dolt. 

 

Forgot to add the installpkg for the vmtools in my go script.

After I did that, I recycled the VM and it came up with an IP, and I was able to bring up the unRAID menu via web browser.
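
For anyone following along, the fix is just a line in the go script on the flash drive. A sketch; the package filename is an example and depends on which VMware Tools / open-vm-tools build you copied to /boot/packages:

  #!/bin/bash
  # /boot/config/go (unRAID)
  # install the VMware Tools package at boot (example filename)
  installpkg /boot/packages/open_vm_tools-example.tgz
  # start the unRAID web interface as usual
  /usr/local/sbin/emhttp &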

 

 

that should not matter.

I am guessing it was just taking its time with DHCP. I have had it take 10 minutes every now and then on first boot, don't ask why.

I would set unRAID to a static IP.
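
On unRAID 4.x/5.x that is just a few lines in the network config on the flash drive. A sketch with example addresses; substitute your own subnet, then reboot or restart networking:

  # /boot/config/network.cfg
  USE_DHCP="no"
  IPADDR="192.168.1.100"
  NETMASK="255.255.255.0"
  GATEWAY="192.168.1.1"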

Link to comment

Hi John,

I've got the same setup as you (case, mobo). Was wondering if you ever were able to mount a 3.5" drive in the Norco case?  I want to use this 750gb black drive as my data store but my drive bays are full. Thx!

 

I have not gotten around to trying that yet.

Physically, there should be room. It just depends on how creative you are.

 

I have to do a major rebuild of my ESX box soonish. The stock fans are starting to bother me. I picked up a 120mm fan wall for this box; when I do that I'll do some reconfiguration of the box, including the PSU swap-out.

 

 

My plan for now is a physical limit of 20 3.5" drives for unRAID.

Because of the physical limit of 4 drives per backplane, I don't see a way to easily get around that without passing through individual drives. While that would work, it was not something I wanted to do (I would make an exception for cache, and possibly parity if performance was good).

 

Current Build:

Datastore drives:

2x 2.5" SSD's

1x 3.5" 7200rpm 1.5TB drive (Additional VM's).

1X 3.5" LP 2TB drive. (VM Backup and ISO file storage)

 

Single drive pass-through to WHS2011

1x 3.5" LP 2TB

 

Controller pass through to unRAID:

1x SASLP-MV8 in PCIe 4x port (only 1 SAS cable connected) = 4x 3.5" LP 3TB drives (1 is cache, 1 is parity).

2x IBM M1015's in PCIe 8x ports = 4x 3.5" LP 3TB drives on each card

12 unRAID drives total so far.

 

15x 3.5" & 2x 2.5" drives in current configuration

 

 

Planned Build A at this time:

Datastore drives:

2x 2.5" SSD's

2x 2.5" 500GB 7200rpm (Additional VM's). [or just 1. I have 2 in spares pile]

1X 3.5" LP 2TB drive. (VM Backup and ISO file storage)

 

Two drive pass-through to WHS2011

2x 3.5" LP 2TB (1 for OS and 1 for backup)

 

Controller pass through to unRAID:

1x SASLP-MV8 in PCIe 4x port = 4x 3.5" LP 3TB drives & 1x 2.5" 7200RPM cache drive on breakout cable internally mounted.

2x IBM M1015's in PCIe 8x ports = 8x 3.5" LP 3TB drives on each card

Plans also include upgrading the parity drive to 7200RPM 3TB Drive.

21 unRAID drives total. (If I really need the 22nd drive, I'll have to get creative.)

 

23x 3.5" & 4x 2.5" drives in this configuration (leaving 1 more 3.5" pass-through drive for a future guest)

 

 

Planned Build B at this time:

Datastore drives:

2x 2.5" SSD's

Areca 1222 with 4x 2TB 3.5" drives in RAID5 (already have)

 

WHS2011

move to virtual drive on raid array.

 

Controller pass through to unRAID:

1x SASLP-MV8 in PCIe 4x port = 4x 3.5" LP 3TB drives

2x IBM M1015's in PCIe 8x ports = 8x 3.5" LP 3TB drives on each card

Move cache to a virtual drive on the RAID5 or to a 2.5" drive; upgrade parity to 7200RPM or move it to a virtual drive on the RAID5.

21 unRAID drives total. (22 drives, if I put parity on raid5)

 

24x 3.5" & 2x 2.5" drives in this configuration

 

If I go route B, there is a good chance I'll also move the SSDs to RAID0 on the Areca. This would allow me to add up to 4 SSDs in RAID0.

Route B is the overkill and expensive route, but I do have the RAID card and drives, making it tempting, but unlikely...

 

 

Link to comment

Thanks. I'll check that. But the problem was I'm a dolt. 

 

Forgot to add the installpkg for the vmtools in my go script.

After I did that, I recycled the VM and it came up with an IP, and I was able to bring up the unRAID menu via web browser.

 

 

that should not matter.

I am guessing it was just taking its time with DHCP. I have had it take 10 minutes every now and then on first boot, don't ask why.

I would set unRAID to a static IP.

 

Yep. Made that change to a static IP. Running well so far. Doing some pre-clears, then I will run with it (free edition) on 4.7 for some testing, then try out the latest beta to see how that looks.
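
In case it helps anyone else: pre-clearing is normally done with Joe L.'s preclear_disk.sh script from the unRAID console. A rough sketch, assuming the script has been copied to the flash drive and /dev/sdX is a disk that is not assigned to the array:

  cd /boot
  # list candidate disks that are not part of the array
  ./preclear_disk.sh -l
  # preclear one disk; expect many hours per pass on a large drive
  ./preclear_disk.sh /dev/sdX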

 

Link to comment

Just want to say Wow...Such a detailed build process!

 

and you've inspired me to do the same!

 

Currently have an unraid box and a 'server' which runs ~6-8 VMs and i *think* i will consolidate into a single server.

 

currently using a Xeon 1156 board and i3-540. Just bought a i5-650 (For the VT-D). Don't have the dough for a Xeon :(

 

main question, have you left Hyperthreading enabled? (sorry if its been answered previously!) How does ESXi react to it if enabled? (ie, does it give you 4 'cores' to allocate to VMs instead of 2, etc)

 

Do you see any problems migrating an existing install over? as long as i remove all HDDs before i start installing ESXi :P

Link to comment

Just want to say Wow...Such a detailed build process!

 

and you've inspired me to do the same!

 

Currently have an unraid box and a 'server' which runs ~6-8 VMs and i *think* i will consolidate into a single server.

 

currently using a Xeon 1156 board and i3-540. Just bought a i5-650 (For the VT-D). Don't have the dough for a Xeon :(

 

 

Be aware: lots of consumer-grade LGA1156 and LGA1155 boards that claim to have VT-d support don't actually work when you turn it on in the BIOS (nothing happens). Lots of manufacturers dropped the ball on this, even a few Supermicro server boards...
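
A quick way to confirm VT-d is actually alive (and not just a BIOS menu item) is to boot any Linux live CD on the box and see whether the kernel found the IOMMU/DMAR tables; a rough sketch:

  # working VT-d shows DMAR/IOMMU lines here; silence usually means it is off or broken
  dmesg | grep -i -e dmar -e iommu

In ESXi itself, the tell is whether any devices show up as available for passthrough at all under the host's Advanced Settings.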

 

 

main question, have you left Hyperthreading enabled? (sorry if its been answered previously!) How does ESXi react to it if enabled? (ie, does it give you 4 'cores' to allocate to VMs instead of 2, etc)

 

ESXi will utilize the hyperthreading.
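
On ESXi 5 you can confirm it from the host shell (the vSphere Client's processor summary shows the same thing on 4.1); a sketch:

  # reports whether hyperthreading is supported, enabled and active on the host
  esxcli hardware cpu global get

The logical processors then show up as extra vCPUs you can hand out to guests, though keep in mind they are threads, not real cores.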

 

 

Do you see any problems migrating an existing install over? as long as i remove all HDDs before i start installing ESXi :P

 

That was how I did it.

 

I made sure i had a good backup first.

 

I pulled my drives and flash.

Installed ESXi

Plugged in my unRAID flash.

Created the unRAID guest, assigning the flash drive, the boot ISO and the controller card pass-through.

shut down the ESXi box

inserted all my unRAID drives on the proper controller

powered on...

unRAID 5.x didn't care at all that I changed from physical to virtual and booted right up.

 

unRAID 4.7 will require you to reassign all of your drives again. Follow the basic instructions for a motherboard swap for 4.7.

 

 

Link to comment

So John,

I think I came up with a creative way to mount that drive :)

 

SG14s.jpg

MoxaG.jpg

 

This mount was rock solid as long as I used a real PCI slot cover and not the one that the Norco includes (which is removable and not stable).  But then I researched if there were any PCI slot mounts for a 3.5" HDD.  There's not but there IS one for a 2.5" HDD.  Not sure if you've seen this before:

 

http://www.amazon.com/gp/product/B002MWDRD6/ref=oh_o00_s00_i00_details

 

I just bought this, should see how nice it is in a few days.  Thought this would be perfect for you.  I have a 500GB 2.5" hard drive laying around that should do the job for me. 

Link to comment

I took the plunge and got ESXi 4.1 up and running on my Gigabyte EP43-ud3l.  For the most part everything is working fine.  I had to do an oem.tgz for the Realtek NIC and the Sil3132 cards (though I am not sure if those work quite yet, have not had a chance to test them).

 

I was hoping to use 5.0 but there was no oem.tgz (or the like) for the Sil3132 controller cards.

 

I have gotten an XP Pro VM up and running under ESXi and it is working a treat.  Going to test it out some more but if everything pans out I should be able to get rid of the XP install(s) on my MacBook Pro and free up a lot of space on its drive.

 

 

The only issue I have now is that if I install a SASLP card in the x16 slot, the board fails to give me any graphics.  I have to make sure the SASLP is not defective, but if it is not, then I am going to be extremely PO'ed at Gigabyte.  They claim the x16 slot is PCIe 2.0 compliant in their docs, and I have set the BIOS correctly so that PCI graphics (I have a cheap PCI video card in the machine) is initialized first.  I am going to mess with a few more of the BIOS settings tonight in hopes that I can get it working with the SASLP card.

Link to comment

 

Be aware: lots of consumer-grade LGA1156 and LGA1155 boards that claim to have VT-d support don't actually work when you turn it on in the BIOS (nothing happens). Lots of manufacturers dropped the ball on this, even a few Supermicro server boards...

 

Interesting....this is my board

 

http://www.intel.com/content/www/us/en/motherboards/server-motherboards/server-board-s3420gp.html

 

would've thought an Intel server grade xeon board would support it ok...guess i will see what happens!

 

 

ESXi will utilize the hyperthreading.

 

cool :)

 

 

That was how I did it.

 

I made sure i had a good backup first.

 

I pulled my drives and flash.

Installed ESXi

Plugged in my unRAID flash.

Created the unRAID guest, assigning the flash drive, the boot ISO and the controller card pass-through.

shut down the ESXi box

inserted all my unRAID drives on the proper controller

powered on...

unRAID 5.x didn't care at all that I changed from physical to virtual and booted right up.

 

unRAID 4.7 will require you to reassign all of your drives again. Follow the basic instructions for a motherboard swap for 4.7.

 

good to know, still running 4.7...thinking about upgrading to 5.12b but some of the issues (namely the BLCK error thing) have me a little worried...

 

Will post back when I receive my CPU and do some testing!

 

Thanks!

Link to comment

So John,

I think I came up with a creative way to mount that drive :)

 

This mount was rock solid as long as I used a real PCI slot cover and not the one that the Norco includes (which is removable and not stable).  But then I researched if there were any PCI slot mounts for a 3.5" HDD.  There's not but there IS one for a 2.5" HDD.  Not sure if you've seen this before:

 

http://www.amazon.com/gp/product/B002MWDRD6/ref=oh_o00_s00_i00_details

 

I just bought this, should see how nice it is in a few days.  Thought this would be perfect for you.  I have a 500GB 2.5" hard drive laying around that should do the job for me.  

 

Nice way to mount that 3.5". I would be worried it would vibrate too much being screwed from one side on a bracket, but if it is solid, nice! I was considering making a mount on the sidewall of my Norco, maybe even drilling 4 holes in the sidewall to mount a drive. I also thought about mounting a cage from a scrapped PC inside the case.

It will be a while before I gut my server enough to drill holes in it (metal shavings = bad).

 

I saw that drive mount before. I would be willing to try one for sure. The one I am looking for I can only find in the UK: http://linitx.com/product/12669 . I have seen 2 builds using this in a Norco 4224.

The closest I can find is the Scythe Slot Rafter: http://www.scythe-usa.com/product/acc/064/slotrafter_detail.html .

 

At $10 for the Rafter, I can't go wrong... can I?

 

 

 

Link to comment

I took the plunge and got ESXi 4.1 up and running on my Gigabyte EP43-ud3l.  For the most part everything is working fine.  I had to do an oem.tgz for the Realtek NIC and the Sil3132 cards (though I am not sure if those work quite yet, have not had a chance to test them).

 

I was hoping to use 5.0 but there was no oem.tgz (or the like) for the Sil3132 controller cards.

 

I have gotten an XP Pro VM up and running under ESXi and it is working a treat.  Going to test it out some more but if everything pans out I should be able to get rid of the XP install(s) on my MacBook Pro and free up a lot of space on its drive.

 

 

The only issue I have now is that if I install a SASLP card in the x16 slot, the board fails to give me any graphics.  I have to make sure the SASLP is not defective, but if it is not, then I am going to be extremely PO'ed at Gigabyte.  They claim the x16 slot is PCIe 2.0 compliant in their docs, and I have set the BIOS correctly so that PCI graphics (I have a cheap PCI video card in the machine) is initialized first.  I am going to mess with a few more of the BIOS settings tonight in hopes that I can get it working with the SASLP card.

 

Cool. Glad you got it up.

My second ESXi box has a similar board. a gigabyte EP45T-USB3p or something like that.

I went the OEM.tgz route at first, then I just picked up a cheap intel PCIe 1x CT card. It was so much easier in the end.

 

The problem I am afraid you will run into is that the EP43 might not have VT-d, just VT-x.

That is the problem I was having with the EP45. Without it, you can't do full PCIe slot pass-through. If you're planning on putting a storage server on the same box, then you might want to look into that before you fix the video issue.

 

PS: I used some generic 1990s PCI video card on my EP45. There was a BIOS setting to change the primary display to PCI.

 

As far as ESXi 4.1 vs 5.0: for the hardware you have, you won't gain any of the new 5.0 features, so don't even sweat it. In your case the older version is almost better.

Link to comment

 

be aware, lots of consumer grade LGA1156 and LGA1155 boards that claim to have VT-d support don't actually work when you turn it on the bios (nothing happens). lots of manufacturers dropped the ball on this. even a few supermicro server boards...

 

Interesting....this is my board

 

http://www.intel.com/content/www/us/en/motherboards/server-motherboards/server-board-s3420gp.html

 

would've thought an Intel server grade xeon board would support it ok...guess i will see what happens!

 

I didn't know what board you had, it was more of a general warning to everyone. I would assume the genuine Intel board would work. that is one of the advantages of buying a genuine Intel board. it just works as it should...

 

~cough~ usually ~cough~

 

 

most of the complaints are coming from ASUS, MSI, and Zotac users. a good reason to check the ESX whitebox forums first.

 

good to know, still running 4.7...thinking about upgrading to 5.12b but some of the issues (namely the BLCK error thing) have me a little worried...

 

Will post back when I receive my CPU and do some testing!

 

Thanks!

 

I hear you on the BLCK error thing. That's got to be bugging Tom.

 

I am pretty sure you can still run 4.7 on your hardware with no issues. Unraid will see it as generic hardware because it is virtualized. I don't know your exact hardware list so I can't give you a definitive answer.

but If you can already run 4.7 and ESX on your existing hardware, it would be a safe bet you can still run it as 4.7 in ESX 4.1 or 5.0.

 

You could always run 5B11. The beta 12s so far were to fix the Realtek 8111E issue, something you do not have in this build anyway.

 

If I get a BLCK error i will report it with a syslog and roll back to 11 myself..  I am only on 12 to help test. i have no real need to be on 12. As long as my M1015 works, I am happy.

 

(the guys with the blck errors really need to post in depth hardware/bios/firmware/add ons so we can see a pattern.)

 

 

EDIT:

Ohh.. I just took a look at your board. Yeah that should work. There is even esxi 4.1 install guides from intel for it and possibly a custom image.

 

unlike my build, you should be able to get to 32GB ram easily and (somewhat) cheaply.

 

Link to comment

 

I didn't know what board you had, it was more of a general warning to everyone. I would assume the genuine Intel board would work. that is one of the advantages of buying a genuine Intel board. it just works as it should...

 

~cough~ usually ~cough~

 

most of the complaints are coming from ASUS, MSI, and Zotac users. a good reason to check the ESX whitebox forums first.

 

 

haha, know exactly what you mean!

 

I hear you on the BLCK error thing. That's got to be bugging Tom.

 

I am pretty sure you can still run 4.7 on your hardware with no issues. Unraid will see it as generic hardware because it is virtualized. I don't know your exact hardware list so I can't give you a definitive answer.

but If you can already run 4.7 and ESX on your existing hardware, it would be a safe bet you can still run it as 4.7 in ESX 4.1 or 5.0.

 

You could always run 5B11. The beta 12s so far were to fix the Realtek 8111E issue, something you do not have in this build anyway.

 

If I get a BLCK error i will report it with a syslog and roll back to 11 myself..  I am only on 12 to help test. i have no real need to be on 12. As long as my M1015 works, I am happy.

 

(the guys with the blck errors really need to post in depth hardware/bios/firmware/add ons so we can see a pattern.)

 

 

EDIT:

Ohh.. I just took a look at your board. Yeah that should work. There is even esxi 4.1 install guides from intel for it and possibly a custom image.

 

unlike my build, you should be able to get to 32GB ram easily and (somewhat) cheaply.

 

 

Yeah, I don't expect 4.7 to cause issues... I'm just sick of my HDDs on my SAS controller not spinning down properly, and I'm pretty sure 5 has better support for them.
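
A way to check what the drives behind the SAS controller are actually doing is to poke them from the unRAID console. A sketch; hdparm works for many SATA drives on HBAs, but some controllers ignore it and need the SCSI-level route from sg3_utils (if it is installed) instead:

  # report whether a drive is active/idle or in standby
  hdparm -C /dev/sdX

  # ask a drive behind a SAS HBA to spin down using SCSI commands (sg3_utils)
  sg_start --stop /dev/sdX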

 

Will start off with 2x 4GB, as the RAM my VMs use should equal ~6.5GB; my VMware Server box sits at 8GB almost all the time. That should hold me over for a few more VMs, and then I'll grab some more.

 

Thanks for your advice on this, will definitely post when it is up and running!

Link to comment

Cool. Glad you got it up.

My second ESXi box has a similar board. a gigabyte EP45T-USB3p or something like that.

I went the OEM.tgz route at first, then I just picked up a cheap intel PCIe 1x CT card. It was so much easier in the end.

I figured if it was not hard I would avoid adding anything extra to the build.  I used an oem.tgz I found along with ESXi customizer to mash it all together.  It worked a treat!

 

The problem I am afraid you will run into is that the EP43 might not have VT-d, just VT-x.

That is the problem I was having with the EP45. Without it, you can't do full PCIe slot pass-through. If you're planning on putting a storage server on the same box, then you might want to look into that before you fix the video issue.

Indeed you are correct, and it is something I came to discover over the past 2 days. Not all VT is created equal... what a shame. The EP43-UD3L has VT in it, and the P43 chipset does support VT-d, but Gigabyte failed to implement it in the BIOS, it seems. What a shame, too.

 

PS: I used some generic 1990s PCI video card on my EP45. There was a BIOS setting to change the primary display to PCI.

Yeah, the no-video issue is a little more complicated than I had thought originally. It turns out I can install 4 PCIe x1 cards and a PCI video card in the machine without issue. I can install 2 PCIe x1 cards, the SASLP, and a PCI video card without issue. If I try to install the SASLP in the first config... nothing boots. If I try to install another x1 card in the second config, nothing boots. I messed with all the Northbridge/Southbridge voltage settings and was not able to get it to POST. This is not the first odd thing this board has done. I cannot install more than 2 sticks of RAM either... even though there are 4 slots. The board runs like a top other than the oddities I just mentioned.

 

As far as ESXi 4.1 vs 5.0: for the hardware you have, you won't gain any of the new 5.0 features, so don't even sweat it. In your case the older version is almost better.

Yup, pretty much the conclusion I came to.

 

So now that I have fully discovered Gigabyte's lack of effort to implement VT-d on my board, I will be looking for a new one for my main server. I have had a taste of running a VM under ESXi and it worked a treat. I am going to have to get some more/different hardware over the next month or so, and until then I think I will go back to running unRAID on my old motherboard, moved to my new case.

Link to comment

You would pass through the onboard NIC just like you would the RAID card.

 

I had the second NIC passed to my WHS2011 when I had ESXi 4.1. When I upgraded to ESXi 5, it broke.

I honestly never needed it since the WHS2011 sees so little use, so I never looked into fixing it. I just disabled the card.

 

I also pulled the CT card from the system. I had that bound to unraid. one NIC seems to be doing just fine for my build.

 

My Newsbin guest pulls 15 MB/s tops, and unRAID won't saturate the gigabit link even with a cache drive.

Most of my data transfer is from guest to guest, so keeping it on the ESXi virtual switch is much faster for me.

 

Once i get more ram, I'll reconsider binding a second NIC. I hope

 

Hopefully VMware releases a driver for 5.0 for the second nic soon.

 

If i had to start over, I would consider the X8Sil .

 

Link to comment

You would pass through the onboard NIC just like you would the RAID card.

 

I had the second NIC passed to my WHS2011 when I had ESXi 4.1. When I upgraded to ESXi 5, it broke.

I honestly never needed it since the WHS2011 sees so little use, so I never looked into fixing it. I just disabled the card.

 

I also pulled the CT card from the system. I had that bound to unraid. one NIC seems to be doing just fine for my build.

 

My Newsbin guest pulls 15 MB/s tops, and unRAID won't saturate the gigabit link even with a cache drive.

Most of my data transfer is from guest to guest, so keeping it on the ESXi virtual switch is much faster for me.

 

Once i get more ram, I'll reconsider binding a second NIC. I hope

 

Hopefully VMware releases a driver for 5.0 for the second nic soon.

 

If i had to start over, I would consider the X8Sil .

 

 

Okay, because right now I have a CT adapter and that's the adapter that ESXi is using.  I have no idea why (I have nothing plugged into it), but VMware took over one of the NICs (can't figure out which one) as its vmnic1.  I was using the right-side NIC (looking from the back I/O plate) for unRAID.

 

http://cl.ly/3I3c2Z2D2I2h072I1n1z/10-19-2011_3-50-32_PM.png

 

http://cl.ly/2P2K3I0s3Q0A071w330J/Screen_shot_2011-10-19_at_4.07.30_PM2.png

 

Ah, never mind, you know what?  I'm a dummy.  That is the CT card that ESXi is using already.  The other 2 are available for passthrough.  Not sure why one of them shows up on the PCI bus and the other doesn't (I'd suspect the onboard NIC would be the one on the PCI bus), but maybe it has something to do with one NIC being on the Southbridge and one being directly on the PCI bus.  I'm gonna play around with this at home.
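
If it helps anyone else untangle which port is which, the host shell will list every vmnic with its PCI address, driver, link state and MAC, which you can match against the physical ports; a sketch:

  # lists vmnicN, PCI bus address, driver, link, speed and MAC for each adapter
  esxcfg-nics -l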

 

BTW, did I mention how fricking amazing this is?  Thank you again.

 

Not sure if it'd interest you but I'm going to be standing up a Mac OS X Lion server on this box and I'd be glad to write up a tutorial if there's interest.

 

I haven't heard anything about 8GB modules for the UDIMM's...have you?  I would love to go to 32GB on this box.

 

Link to comment

Thanks so much for your time and effort on this. It's inspirational for me - I would like to do the same thing.

 

I was just looking at hardware on Newegg, which I will need to start acquiring. One thing that occurred to me was to check with you what hardware you would buy now if you were starting again.

 

I will need a new Mobo, CPU and disk controller. What would you look at now given your experiences?

 

 

If i had to start over, I would consider the X8Sil .

 

Why?

 

And with the controller, on your first post you said you would get a couple of LSI controllers. Which ones? And why did you go with the MV8? Was it just cost?

 

Many thanks again

Aaron

Link to comment
