ATLAS: My Virtualized unRAID Server



  • A brief introduction:

    • I have a need to consolidate several servers into one box. This would include my main media storage (unRAID), my client backup server (WHSv1, migrating to WHS2011), my Usenet downloading workstation, my Ghost server, my FTP server, and possibly my Windows PDC. There will also be some secondary hosts on this server.
       
       

Why here?

  • There is a bit of a buzz on this and other forums about running ESXi in the home and using virtualization. Several people already run, or want to run, unRAID on top of a hypervisor. The primary use of this box will be unRAID.
     
    This will be my worklog as I share my experiences. I hope to both help others and have others inform me of better practices.
     

Hardware:


  • While I wish to keep costs down, I also do not want to sacrifice performance for a few dollars saved. I could have saved a few dollars by shopping around, but I like to get as much from one vendor as possible; things like cables can be had for less elsewhere, but this way I got no mystery cables. I have no issue waiting out the sales or changing hardware later. I know that as the build progresses, things will change.
     
    • Motherboard: I chose a socket 1155 Intel C204 motherboard for price, features, number of PCIe slots, SATA3 for the Datastore SSDs, and VT-d. I have 2 more of these boards and use them at work. They are solid boards.
      I did buy another open-box unit. I fully inspected and burn-in tested the board before building this box. Sometimes open-box and used boards are damaged in ways you can't see and can damage other components. It is a gamble.
       
    • CPU: I went with the E3-1240 Xeon for the performance and price point. The 1220 will do fine, as would any of the E3-12x0 chips. I would avoid the workstation Xeon 1155 chips if you can; while they would work just fine, you do not need the on-chip GPU since you won't use it. Save a few bucks and some electricity.
       
    • RAM: I wanted to go with 32GB of RAM. This was not an option: while the board supports 32GB, only 4GB modules are available at the moment. I had to go with 16GB of ECC.
       
    • PSU: Most LGA1155 server boards require EPS12V power supplies. (This is important when shopping for a PSU.)
       
    • Case: Norco RPC-4224 (newest). I used the 4224 with the 2 front USB ports. I chose this one of the 3 Norcos I had because I thought the front USB ports would be useful in pass-through mode; my Ghost server and WHS, for example, could use them for making boot thumb drives. They would also come in handy if I needed to plug in a USB DVD drive.
       
    • HBA/RAID controller: My plan was to buy 1 or 2 LSI-based HBA controller(s) and an expander. I already had the 2 Supermicro cards, and with minor tweaks these cards will work fine. I am sure this will change in the future; for now it is fine, plus no additional cost. Once my unRAID crosses 16 drives, I'll have to add a third HBA or an expander.
       
    • NIC: I do have an extra Intel EXPI9301CTBLK 1Gb NIC. This is not needed, but it might come in handy in passthrough for unRAID.
       
    • Datastore Drives: I chose to go with 2x SATA3 SSDs for pure performance. I will also use a 7200RPM spinner for ISO storage and backups. I might also run some non-critical hosts off this drive.
      Originally I was going to buy 3-4 SSDs and run them in RAID10 or RAID5 on a cheap LSI RAID card, but it was still a bit too expensive for the drives and RAID card. I also considered using mechanical drives in RAID instead. I will most likely change this down the road; for now, the SSDs should be quite fast, and as long as I keep them backed up, I should be OK.
      I do not need 3 Datastore drives. I will need 2 for sure: 1 for hosts and 1 for ISOs and backups. The second one could be an NFS share on another PC, but the point was to cut back on PCs, so I'd like to keep it in this box.
      Ideally, I should have bought one large SSD. After I opted not to buy the LSI RAID card for the SSDs, I had a 3-in-the-morning blonde moment and said, "oh, look at this sale on 120GB drives; I'll run 2 on the ICH10R on my mobo in RAID0 for 1000MB/s performance." The next morning I saw my error: the ICH10R won't work in ESXi as RAID. I'll work with what I have...
      [Remember, ESXi does not support TRIM or garbage collection. I will be killing these drives over time.] [EDIT: See the Recommended Upgrades section below on this. Some of the NEW SSDs have an "advanced garbage collection"; it is almost like auto-TRIM.]

  • Drive configuration: The hardest part of this build will be physical drive management. I will have to get creative and mount some drives internally.
    3 Drives:
    I plan to have 3 Datastore drives. I can easily mount the SSDs internally. I might have to buy a new 750GB or 1TB 2.5" drive for the third Datastore drive; for now I'll use a 3.5" in one bay until I run low on bays.
    1 Drive:
    WHS2011 will get a passthrough drive. Keeping PC backups on the SSDs won't work.
    1 Drive:
    My newsbin client host will need a passthrough drive. This can be a 2.5" mounted internally or a 3.5" in a bay.
    1 Drive:
    Ghost server data drive... ?!? I might have to rethink this. Have it redirect to a share on WHS or, most likely, unRAID. I could also use a 2.5" drive internally.
    20-22 Drives:
    unRAID. I might be limited to 20 drives; this depends on how creative I get. 20 off the HBAs, plus 1 data drive and the cache on passthrough. The cache will most likely be another SSD.
    As you see, 25-28 drives in the end. Several are 2.5" drives; maybe I can mod a 2.5" drive bay internally or off the back.
    I won't have a full unRAID at the start, so I'll have time to figure this out. At worst, I mount some drives in a second Norco box.
     
     

 

Shopping list:

 

Total Price For base Server: $1340.85

 

Optional Bits:

  • Fans:
    3x 120mm fan bracket ($20ish shipped)
    3x "pressure optimized" Noctua NF-P12-1300 120mm fans I picked up for $15 each, plus $5 shipping for all 3.
    2x ARCTIC COOLING ACF8 Pro PWM 80mm case fans on the rear.
  • Flash Drives:
    2x Lexar 4GB Firefly (1 for unRAID and 1 for ESXi), $6.99 each, Microcenter
  • ESXi Datastore Drives:
    2x OCZ Solid 3 SLD3-25SAT3-120G 2.5" 120GB SATA III MLC, $155 each, Newegg. [I recommend a Marvell 88SS9174-based SSD over a SandForce for this build now. See below.]
    1x 1TB, 1.5TB or 2TB 7200RPM Drive (For ISO's and Backups) (Free from junk pile)
  • unRAID drives:
    8x Hitachi 3TB 5400RPM drives $106.92 Each Amazon
     
  • NIC: Intel EXPI9301CTBLK network adapter, 10/100/1000Mbps, PCI-Express, 1x RJ45, $22 from Newegg
     
     
    Ultimate Price: $2298.19

 

This list will change as I upgrade the build.

 

 

 

Recommended Upgrades:

SSD's


  • I know I mentioned earlier in this post that running SSDs in ESXi will wear them out at a fast rate.
    Since the time of the original writing, the Marvell 88SS9174 SSD controller has made major improvements.
    With its advanced garbage collection, these SSDs are made for uses like this. While they do cost a few dollars more, they should be much faster and outlive the SandForce drives I originally built this system with. This is an upgrade I will be doing myself.

 

Both are 256GB, priced at $339: about what I paid per GB for the OCZs.

 

HBA's

  • 1) M1015
    Replace the SASLP-MV8s with IBM M1015s (LSI SAS9220-8i), about $65-$85 on eBay.
    (You would need 3 for more than 16 drives: put the first 16 drives on the cards in the 8x slots, then fill in the rest on a card in a 4x slot.)
    This upgrade will get you faster parity checks. The M1015 is a PCIe 2.0 8x card with 8 SAS2 ports (SATA3, 6Gb/s).
    They natively support 3TB and larger drives.
    If you ever dump your unRAID and move to a ZFS-based solution, these should be compatible, unlike the MV8s.
    If you do get these cards, you will need longer cables than those listed above in a Norco case.
    I recommend the 1M ones from Monoprice at $9.49 each.
    [Warning! These cards come with an IBM RAID BIOS; you have to re-flash them with the LSI IT-mode BIOS for them to work. You cannot flash them on the X9SCM. You need to do it on another motherboard.]
    [These do not work with unRAID 4.7. You must run 5.x and newer only.]
     
  • 2) SAS Expander
    If you plan on more than 16 drives in your unRAID guest:
    I would strongly recommend a single IBM M1015 and one Intel RES2SV240 SAS expander.
    This combo will only use a single PCIe 8x slot and still get pretty much full mechanical hard drive speed to 20-24 drives. It will also cost less than 3 HBAs and cables (the RES2SV240 comes with 6 SFF-8087 to SFF-8087 cables, saving $60-$120).
    M1015 ($85, eBay)
    RES2SV240 ($208)
    Order no SFF-8087 to SFF-8087 cables.
    This combo saves $154 versus purchasing 3x MV8s and cables.

 

Cables:

  • 1M SFF-8087 to SFF-8087 from Monoprice at $9.49 each. They are cheaper and longer, for those with M1015s.
  • Face it, the NORCO C-P1T7 is crap. It is a short waiting to happen. I would recommend a more solid Molex cable.
    Ideally, make your own custom cable from parts [example of a custom-built Norco cable from another forum].
    I know most people can't build a cable like that or don't have the time/budget. I would suggest something like THIS, THIS, or THIS for those who can't make a cable.
    Unfortunately, none of those cables are 100% correct for a 4224 (one is perfect for a 4220). You will need to buy more than one cable.
     
     

Recommended Alternate parts:

 

  • Motherboards:
    • TYAN S5510GM3NR (to replace the X9SCM; it has 3 ESXi-compatible NICs)
    • Supermicro X9SCM-IIF (updated X9SCM with 2 ESXi-compatible NICs and a V2 BIOS for Ivy Bridge CPUs)

 

Next Part: Hardware and ESXi Install.

Link to comment

Hardware Build and ESXi install.

 

 

Hardware install notes:

Original Hardware unboxing

cu2Rmm.jpg

 

The 650W Corsair power supply pictured was not going to cut it. I used the spare 750W Seasonic I had from an earlier sale; I just needed to swap it out of a workstation and put the 650 in its place. In addition, the Seasonic is Gold certified, which is a bonus for an always-on PC.

 

The first step was to assemble everything into the Norco as if I were going to install unRAID.

I installed the Motherboard, RAM, CPU, 1 of the MV8's and the Power Supply for testing.

sNEUPm.jpg

 

Current build photo with 32GB RAM, E3-1240, 2x M1015, expander, Corsair Pro SSDs, and custom power cables

lXE62m.jpg

 

I don't think I need to go into detail here.

I'll assume you can assemble the hardware.

Plug in the power and Ethernet cables to both the IPMI port and LAN2. [Use LAN1 for ESXi; LAN2 is for bare-metal unRAID.]

VqzNDm.jpg

 

After this step, I stuck the ESXi flash drive into the internal USB port.

Yes, it is still blank; this is for the BIOS config step.

 

 

IPMI Setup:

 

At this point, go to ftp.supermicro.com/utility/IPMIView/ and download IPMI View.

If you have a monitor and keyboard installed and you don't plan to use IPMI, skip ahead to the BIOS configuration.

Start the IPMIView software and click the magnifying glass icon to have it auto-detect your new server. Go ahead and add it to your "IPMI Domain".

 

3nLVJm.png

 

Go ahead and log in to your new server. The default login is "ADMIN", password "ADMIN".

 

Start the server under the IPMI Device tab and open the KVM console in the KVM tab.

 

RAID Card BIOS Settings

As the PC starts to POST, watch for the RAID card BIOS.

When it starts detecting drives on the RAID card, start pushing "Ctrl+M" (for the MV8, anyway).

Controller Tab:

Disable INT 13h

vC2Gdm.png

 

Optional

Under staggered spin-up: set spin-up groups to lower the hit on your power supply at boot.

Exit and save.

 

If you have more than one HBA card, you should now swap them and do the same thing with the next card.

 

BIOS Settings:

hit the "Del" key to enter the bios.

 

In the advanced tab: Processor and Clock options

Enable "Intel Virtualization Technology"

ZQdqEm.png

 

In the advanced tab: Integrated IO Configuration

Enable "VT-d"

1xwEFm.png

 

 

In the advanced tab: PCIe/PCI/PnP Configuration

Set PCI ROM Priority to "EFI Compatible ROM"

(NOTE: in the Ver 2.0a BIOS this is replaced with "Disable OPROM" settings for slots 7 & 6; set them to "Disabled".)

bCnazm.png

 

 

In the advanced tab: IDE/SATA Configuration

SATA Mode = AHCI

Set staggered spin-up and Hot Plug for all drives if you want.

umBwom.png

 

 

BOOT: Boot Options Priority.

Select your ESXi Flashdrive.

JzknWm.png

 

 

And last (optional)

IPMI: BMC Network config

Set a static IP for your IPMI

uwhT1m.png

 

 

At this point you should save settings and exit.

Manually power off the server if you were using IPMI and you changed the IPMI IP in the BIOS.

In IPMIView, modify the server's address to reflect the new IP you just gave the IPMI.
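
(If you would rather script this than click through the BIOS, ipmitool from any Linux box on the LAN can set the BMC address too. A minimal sketch; the BMC's current DHCP address, the new static values, and the default ADMIN/ADMIN credentials are all assumptions, so substitute your own. Expect the session to drop the moment the new IP takes effect.)

ipmitool -H 192.168.1.60 -U ADMIN -P ADMIN lan set 1 ipsrc static
ipmitool -H 192.168.1.60 -U ADMIN -P ADMIN lan set 1 ipaddr 192.168.1.61
ipmitool -H 192.168.1.60 -U ADMIN -P ADMIN lan set 1 netmask 255.255.255.0
ipmitool -H 192.168.1.60 -U ADMIN -P ADMIN lan set 1 defgw ipaddr 192.168.1.1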

 

 

 

Basic Pretesting:

This step is not really a step. It was something I did to test my hardware.

It is optional, but it made me feel better...

I pulled my unRAID flash drive from my second unRAID server.

I placed the unRAID Flash drive into the ESXi box.

I booted with the unRAID flash drive and ran several cycles of memtest. (Note: with this hardware you will need to upgrade the memtest that comes with unRAID. SEE HERE.)

After that passed, I felt all warm and fuzzy...

 

 

 

Installing ESXi:

NOTE: These instructions are for ESXi 4.1.0.

Since this thread was created, ESXi 5.0 has been released.

The instructions are ALMOST the same; these instructions should get you through the ESXi 5.0 setup also.

If there is a major change or a part that is confusing, let me know and I'll update this thread.

(Get a screen shot if you can)

I won't pretend to be an expert at ESXi.

In fact, even though I use it at work, all I know is from Google and trial and error.

 

For this build, we will be installing ESXi 4.1.0 to a flash drive.

When you download ESXi from VMware, it is an ISO image.

I decided that, for ease of install, I would just burn it to a CD.

 

You can create a bootable flash drive and install from that if you wish.

After basic Google-fu, and realizing I didn't have another flash drive lying about, the CD install won. Besides, my RPC-4224 came with a free SATA DVD drive?! It is karma.

[Edit: You could also use the "Virtual Media" option in the IPMI and mount the ESXi ISO for the install if you don't have a SATA DVD.]

 

 

Prep for ESXi Install:

At this point, if you have not already, download your free copy of ESXi and register it to get a free serial number.

http://www.vmware.com/products/vsphere-hypervisor/overview.html?ClickID=bledqduu6egnnqg6nl6vsdzelzklyvkfzgne

 

Burn the ISO to CD (no need to waste a DVD)[or use the USB Install Method]

 

REMOVE ALL DRIVES!

Remove/Unplug all Hard Disks and Flash Drives from the server!

During install, ESXi will erase ALL drives it sees!!

Don't say I didn't warn you.

 

Install your ESXi flash drive into the internal USB header. You could use an external port if you like, but it makes more sense to put it inside your case so the external ports stay free for your unRAID drive.

 

Go ahead and plug a DVD drive into one of your internal SATA ports (a USB CD drive should work also).

You should have no drives in the drive bays, so it is OK to leave the top off for now.

 

 

ESXi Install:

Power on your server.

Start hitting the F11 key once you get the Supermicro splash screen. This brings up a boot menu.

Select your CD Drive.

 

Welcome Screen: > (Enter) Install

WC6lom.png

 

 

EULA Screen: > (F11) Accept

GUIBNm.png

 

Select A Disk: > Select Your Flash Drive (IT SHOULD BE YOUR ONLY DRIVE. IF NOT, STOP! SEE ABOVE!!) > (Enter)

9Tzgjm.png

 

Confirm Install: > (F11) Install

HsTW9m.png

 

Wait for the install. It should take 10-15 minutes.

 

Complete: > (Enter) Reboot!

2C9dDm.png

 

 

Assuming you configured the ESXi Flash drive as your first boot device, you should now boot into ESXi

 

 

 

 

Configuring ESXi Console:

On our first boot into ESXi, we should be welcomed with this screen.

W7LpZm.png

 

If you see a grey screen with red text flash past and you are now sitting at an error code, chances are you have an incompatible NIC. (We won't see that with this build.)

However, the issue I did have: I did not get a DHCP IP address.

I had to move the cable from LAN2 to LAN1.

This, after I told you to place the Ethernet cable into LAN2.

Apparently I have a newer revision of the motherboard in this build.

My last build was with a Ver 1.0 board.

This board is a 1.0b. I wonder what else has changed?

 

Assuming you have a DHCP server (who doesn't?), you should have an IP, and the screen should say http://IP_Addy/ (DHCP).

This sometimes takes a few minutes.

This is the web address of the server; it is where you go to get the ESXi tools (more on this later).

 

Let's go set up a static IP and the root password. We could do this from inside the vSphere Client later, but let's see what options are in the console.

 

We will need to hit F2 to Customize the system.

If you just hit F2 in your IPMI window, you just found the exit hotkey...

If you are using IPMI: in the top toolbar, on the left, select "Virtual Media" and then "Virtual Keyboard".

You should now have an on-screen keyboard.

Hit F2 on the Virtual Keyboard.

You should now have a login screen.

You can now close the Virtual Keyboard; we are done with it.

The default login is "root" with no password.

jMbwAm.png

 

mTxdhm.png

 

 

Set up a password now while we are here. (Optional.)

Select Configure Password > enter the new password. (You have to use a complex password.)

 

Set up a static IP. (Optional, but recommended.)

Select Configure Management Network.

h1agCm.png

 

 

Select IP Configuration.

Select Set static IP Address

Fill in your IP Address, Subnet and Gateway.

1z2j8m.png

 

You can setup IPV6 While you are here if you use it.

I do not run IPV6 at home so I skipped that.

 

You can modify your DNS configuration now.

It should have kept what your DHCP server assigned when we set the static address.

You can change the hostname if you wish, also.

 

After you are finished with your changes, hit "Esc" until you are greeted with a save-changes page and a warning that your VM hosts will be kicked off the network.

We have no hosts yet so this is OK.

Select <Y> Yes

bkczem.png

 

 

This should bring us back to the "System Customization" Menu.

 

One last step to do while we are here.

We are going to turn on SSH.

This allows us to open a remote shell (with PuTTY) and use WinSCP with the server.

 

Select "Troubleshooting Options"

Select "Enable Remote Tech Support (SSH)" (This enables SSH on the server)

Double check your settings. This screen is a bit confusing to some.

q1kRMm.png

 

 

After you enable SSH, you can <ESC> all the way back to the main screen.

You should see the static IP now.

We are done with this portion of the install.

You can close the IPMI window if you want.

zJpxnm.png
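
(A quick sanity check from another machine, now that SSH is on; a sketch assuming the static IP we just set was 192.168.1.50:)

ssh root@192.168.1.50
ls /vmfs/volumes
exit

The volumes listing will fill in once we add Datastores below.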

 

 

 

Configuring ESXi from vSphere:

This is where most people get lost; the VMware vSphere Client is not very intuitive.

 

The first step is to get the vSphere Client.

Open up a web browser, put in the IP address of your ESXi box, and hit enter/go/whatever makes it start...

Stop!

You will now have a warning message in your browser!

8R5rLm.png

IJJbam.png

 

This is OK!

You are connecting over a secure connection with a private security certificate.

Go ahead and connect, and save the certificate if it asks you (IE won't save it).

Once you get to the ESXi webpage, Download/Install the "vSphere Client"

OS74gm.png

 

 

I won't hold your hand here. Install the client.

 

Once you have the client installed:

Enter the IP address of your ESXi box, the admin ID, and the password, and hit "Login".

dh0LFm.png

 

 

STOP!

We are greeted with a certificate error once again.

Check "Install Certificate..."

Then Select "Ignore"

XGCtvm.png

 

 

vSphere Client will now start up and give you a nag box about your license.

It will also remind us that we have no persistent storage.

yiT3Wm.png

 

 

OK, let's fix the license issue first.

Configuration > Licensed Features > Edit

6j37Em.png

 

Check "Assign New License Key to this Host"

Click "Enter Key" Button

Enter your License Key

eJWwHm.png

 

 

Click "OK"

Click "OK"

You will have a Licensed ESXi server now.

CzUvfm.png

 

 

Now we need to add the Datastore drives.

These are the drives where we store the virtual disks and hosts along with other data for the ESXi server.

You can hotswap the drives into the server while it is on.

But for the sake of safe practice, we will shut the server down.

 

Right Click on the server in the top left pane > Shut Down.

KKJdlm.png

Or

Summary > Reboot

Ux37Xm.png

 

 

The server will nag that it is not in "Maintenance Mode"

That is OK.

It will then confirm why you are shutting down.

OK and shut down.

 

Install your Datastore disks at this point.

We could have done this sooner; we just didn't get to it.

I am going to install 1 SSD off one of the white SATA600 (SATA3) ports and one large mechanical drive off one of the black SATA300 (SATA2) ports.

I'll eventually add the second SSD. For now, I'll hold off.

(Honestly, I have some test VMs on my second one, in my other ESX box, that I need to reclaim.)

You can do what you feel is best for your needs.

 

ANY DRIVE WE ADD AND ASSIGN AS A DATASTORE DRIVE WILL BE FORMATTED!!

FOREVER LOST! THERE IS NO GOING BACK!

That is, unless a drive already contains a Datastore. You can move those from one ESXi box to another.

 

Once you are done adding the drives, power up your server and restart vSphere.

 

 

Adding Datastore Drives:

In vSphere Client,

Configuration > Storage > "Add Storage"

On7z4m.png

 

 

We are adding a Disk/LUN

Next

wrvibm.png

 

 

Select the disk you want added as a Datastore.

Next

U7EkWm.png

 

 

All Partitions / Data will be Wiped!

Next

uxEjMm.png

 

 

Name your Datastore.

At work we call them Datastore1, Datastore2, etc. At home, I name them a bit more descriptively; I like to keep the names simple for scripting later: SSD1, SSD2, 2TB1, for example.

Enter a name > Next

6WHZ2m.png

 

 

STOP!

Format "Set Block Size".... this part is critical and most people screw this up and loose all their data after they figure this out.

you have 4 Block size settings!

what you select determines the maximum size of your Virtual Drives!!

ATi7ym.png

 

Block Size vs. Maximum Virtual Drive Size
1MB = 256GB vDrive
2MB = 512GB vDrive
4MB = 1TB vDrive
8MB = 2TB vDrive
(The pattern: a VMFS-3 file can span at most 262,144 blocks, so the maximum file size is the block size × 262,144.)

 

Supposedly, there is no performance hit or loss of drive space for choosing a larger block size.

Choose wisely based on your needs.

You can not undo this without reformatting the drive.

 

In this case, I was going to choose 1MB blocks.

My SSD is small, and most of my clients will be 30GB or smaller.

Edit: I now think it is best to format all drives with the same block size.

I formatted my mechanical drives with 8MB blocks, so I am going to format my SSDs with 8MB blocks as well.

 

Choose a Block Size > Next

Confirm > Finish

pEzsKm.png

 

 

Repeat if needed for each drive.

We now have our Datastore

Yes, there is data on the 2TB drive (it is borrowed from another ESXi box; more on that later).

yY497m.png

 

Updating ESXi to the latest version.

For 4.1, See this thread > http://lime-technology.com/forum/index.php?topic=14695.msg152540#msg152540

For Version 5.0, See this thread > http://lime-technology.com/forum/index.php?topic=14695.msg169119#msg169119

 

This pretty much concludes the basic ESXi setup.

 

We will get into more tips and tricks like pass-through as we install the VM's

If anyone sees any changes I should implement, let me know.

 

 

The new ESXi box sitting with my 2 unRAID servers, on the floor of a spare bedroom temporarily.

rxFF3m.jpg

 

Here is a crappy cellphone picture of the Servers in a Lack Rack.

5msMvm.jpg

Link to comment

VM Installs:

Install instructions for OS's

 

VM#1 Windows 2008 Test Client:

The first VM we will install will be a Windows 2008r2 Client.

If you do not have 2008, Windows 7 will install the same way.

This will be a very simple install to get used to VMware if this is new to you.

I will use this VM to run ESXi tools (vSphere) and to launch some scripts from

 

Lets get started then.

Start vSphere Client.

With the vSphere Client selected, hit "Ctl N"

This should bring up a "Create New Virtual Machine" wizard.

Typical > Next

28jiOm.png

 

 

Name the VM then hit Next.

YP9YKm.png

 

Select a Datastore to keep this VM on.

I want to use the fast SSD.

FsWRTm.png

 

Select a "Guest Operating System"

jVxVym.png

 

Select drive size. (Normally I do not select "Thin Provisioning"; I decided to test it this time around due to limited drive space.)

uzJ0hm.png

 

Check "Edit Virtual Machine Settings Before Completion" > Continue

TGfd9m.png

 

This should open the "Virtual Machine Properties" window.

Go ahead and do any tweaks you want.

I left everything default with 2 exceptions.

1) I dropped the Ram to 2gigs

2) I set the DVD to use an ISO.

(You could also map to a physical CD drive with a disc in it; just select "Host Device" and find your DVD.)

Alt7Em.png

 

See Tip #1 in the post below on how to upload an ISO.

Make sure you have "Connect at power on" checked; otherwise this will be a real quick trip.

When you are done Tweaking, hit "Finish".

 

Select Your New VM on the left pane.

Hit the "Launch Virtual Machine Console" button on the toolbar.

Hit the "Power On" button.

yuWewm.png

 

 

Because we selected "Connect at power on" for the ISO, it will boot from the CD and start installing 2008.

 

I will not bore you with installing 2008. I'll assume you can handle that.

PS: if you get your mouse "stuck" in the console, "Ctrl + Alt" releases it.

 

After 2008r2 is installed, we are going to install VMware Tools.

 

While we still have the "Virtual Machine Console" open,

Top toolbar "VM" > Guest > "Install/Upgrade VMware Tools"

(This is also where you find "send alt+ctrl+del" to log in)

REcDGm.png

 

You need to be logged into the OS when you do this. You should get an auto-launch.

Otherwise, run it from the virtual CD that it mounted.

Choose "Typical Settings".

X4Ptmm.png

 

I think you can manage this part on your own.

 

When VMTools install is complete, Reboot the VM.

 

After the reboot, here are the typical tweaks I do to VMs:

*Turn on Remote Desktop (ASAP; the ESXi console is horridly laggy)

*Performance Options > Adjust for best performance

*System Protection > Disable System Restore (unless you are testing that for some reason?)

*For VMs on an SSD, I tend to turn off the pagefile. I'm not sure if that is bad or good; so far it has worked out OK for me. (Plus it cuts back on disk writes to the SSD.)

*Disable Hibernate. I personally do not need VMs to hibernate.

*I rename "My Computer" (or "Computer" in Vista and newer) to the VM name so I can see what PC it is at a glance. (Handy if you have lots of RDC sessions open.)

*On "throw away" (or test) VMs, I turn off all antivirus and firewalls. (I have copies of the VMs.)

 

This completes our first VM. It should run pretty snappily over Remote Desktop.

 

 

 

VM#2

Windows Home Server 2011: With Raw Device Mapping

For the WHS2011 Install, I want to use Raw Device Mapping.

 

WHS2011 should run just fine in the Datastore provided I give it a large enough disk.

I could then back up the entire Virtual disk with my other sessions.

Instead, I think I'll just give it its very own physical 1.5TB or 2TB drive.

I can then back it up to the unRAID VM if I want to. (WHS2011 can't back up to a mapped drive!)

The only purpose WHS2011 will serve is to back up my local workstations and laptops.

 

Let's install WHS2011 today.

 

My Plan:

I could install to a virtual drive(s) on a Datastore. I did this with my ESXi Test box. It worked just fine. I was even able to back it up with GhettoVCB.

For this install, I think I will use RDM and give WHS2011 its own physical hard disk.

 

I have heard of some people trying to go crazy with WHS2011, putting in 16-drive RAID5 arrays or trying to redirect the WHS shares to another server (an iSCSI target or unRAID, for example).

I am going for K.I.S.S. (Keep It Simple, Stupid).

 

I do not plan to use the WHS for anything other than workstation backups and remote access.

That should cut back drastically on the drive space and ESX resources I'll need.

I figure that in the end my WHS data size will exceed 1TB; I might as well just plan for that now.

 

Create the VM:

 

First off, if you are installing with RDM, see Tip #3 and create the RDM.

 

[this will be just like VM#1 at first]

Start vSphere Client.

With the vSphere Client selected, hit "Ctrl+N" (or select Create New Virtual Machine).

This should bring up a "Create New Virtual Machine" wizard.

 

Typical > Next

28jiOm.png

 

 

Name the VM then hit Next.

Pvlvtm.png

 

 

Select a Datastore to keep this VM on.

Wait, what?

We are not going to use this Datastore in the end.

In order to get through the wizard, we must pick a store and act like we want to use it.

We will change this later; let's keep going.

FsWRTm.png

 

Select a "Guest Operating System" (Win2008r2 64Bit)

jVxVym.png

 

Select drive size. (Leave as-is for now.)

e9D0lm.png

 

Check "Edit Virtual Machine Settings Before Completion" > Continue

TGfd9m.png

 

This should open the "Virtual Machine Properties" window.

 

STOP!

OK, here we need to do some major tweaks.

I am only selecting 2GB of RAM here. RAM is the one thing I will be short of in the end, with all of my VMs. I think WHS will be OK with 2 for my purpose; it will need more CPU when crunching backup databases. I'll keep an eye on it to see if I have to feed it a little more.

That's the nice thing about ESX: changing hardware with a mouse click.

 

Memory: I bumped it down to 2Gigs

CPUs: I bumped it up to 2 Virtual Processors

CD/DVD: I mapped it to the ISO of my WHS DVD (see Tip #1)

*Check Connect at Power On!*

d50Prm.png

New Hard Disk: Special!

 

For the hard drive: select it and hit "Remove" at the top.

Now we need to add our RDM drive.

Hit "Add..." at the top and select Hard Disk > Next

0S1yKm.png

 

Use an existing virtual disk > next

1G3zYm.png

 

Select the RDM we created > Next

oUgjVm.png

 

If you have a mix of RDM and Datastore virtual drives, it would be a good idea to change the SCSI ID to a new channel (1:0, for example). I do not have a mix, so I left it alone. (If I add a virtual drive later, I'll put the virtual drive on the 1:0 bus.)

> Next

PAEMKm.png

 

Review and then hit "Finish"

P1vAwm.png

 

 

Stop!

We are not done here.

Select the "Options" Tab at the top.

Select "Boot Options"

Check "Force Bios Setup"

Click "Finish".

uV03qm.png

 

 

You should now have another VM in inventory.

 

Install WHS2011:

 

"Launch the virtual machine Console" for WHS

It will go right into the Bios.

Go to the boot section and move the CD-ROM drive above the hard drive.

F10 to save and exit

[You might not need this step; I did.]

RlyBEm.png

 

At this point the VM will reboot and ask to boot from DVD.

Hit any key to boot from the CD.

Otherwise, it will boot to whatever was on the drive before. (Hmmm, ideas...)

 

WHS should start installing.

 

New Install

IJXI3m.png

 

 

This will format and repartition your drive! All data will be lost!

(Some people are clueless... sorry, I had to put that there...)

 

Check "I understand...." > Install

(ESX drivers are already in WHS2011; there is no need to add any.)

rTvdvm.png

 

Go get a beer/pop/Whatever...

dsvVum.png

 

 

Set Your Language

NqMCTm.png

 

Set your time / timezone

mb0Bem.png

 

Accept the EULA

jLNxom.png

 

Skip your CD key for now. Turn off "auto activate".

You have 90 [or was it 120?] days until you have to put a key in, using the rearm feature.

I'd rather enter the key once I am in full production, once I know I am happy with my build.

R52F8m.png

 

Name, Password....

sZ5afm.png

 

Select your update type... (oops lost screen, we know what it looks like)

 

Get more beer...

tMUpMm.png

 

Err.. done?

kXDrzm.png

 

Yep!

WHS2011 on ESXi with Raw Device Mapping...

UpfnRm.png

 

OK, now for configuration and tweaking... Don't forget to install VMware Tools.

 

 

VM#3

unRAID VMDirectPath Hardware Passthrough

 

 

OK, this is how I did my unRAID install on ESXi.

I am aware there is a 30+ page thread all about this. I did not feel like going through 30 pages. I did, however, come across the workaround by gfjardim for the AOC-SASLP-MV8 in that thread. That was the straw that broke the camel's back and got me to virtualize unRAID.

 

I had a feeling that with the correct hardware and the latest betas, this should all work fine. I had enough hands-on time with both unRAID and ESXi that I "should" be able to fudge it without reading a how-to.

 

If anyone has a better suggestion, ideas or shortcuts, let me know and I'll edit this page.

 

Things we will need:

Download Putty: http://www.chiark.greenend.org.uk/~sgtatham/putty/

Download winSCP: http://winscp.net/eng/download.php  [get the portable version]

Download Plop: http://www.plop.at/en/bootmanager.html [The current version is plpbt-5.0.13.zip]

 

Prep:

Flash Drive:

Make an unRAID flash drive [use the default instructions] and stick it in a free USB port on the ESXi box.

 

Upload plop to ESXi:

Unzip plpbt.xxxx.zip to a temp location.

Use Tip #1 to upload plpbtin.iso (it is located inside the Install sub-folder).

iSVq7m.png

 

 

VMDirectPath Passthrough:

Select Server > Configuration > Advanced Settings > Edit

erznjm.png

 

Add your HBA/RAID card (and your extra NIC also, if you want).

b4bmym.png

 

[*Notes on VMDirectPath: You can kill your ESXi build if you select the wrong things.

Selecting the USB controller that holds the ESXi USB stick will cause the server to never boot again.

Selecting the entire Cougar Point SATA controller might take out your video card, a PCIe slot, or a NIC.

But it might allow RAID in a Windows build on the ICH10, in theory.

A note on the 82579LM NIC: while I can pass it through, it won't work. ESXi needs some "tweaks" to make it work.]

 

REBOOT The ESXi Server!

 

 

VM Creation:

Start a New VM > Typical

PFmKnm.png

 

 

Name the VM

960Fgm.png

 

 

Select a Datastore

XytEim.png

 

 

Select an operating system. [I selected "32-bit FreeBSD".]

F59XFm.png

 

 

Create a disk > I selected 8GB > Thin Provisioning. [We only need a few bytes.]

W36hHm.png

 

 

Select "Edit The VM Settings...." > Continue

oTTT6m.png

 

Add more RAM.

[I selected 2GB to start, more than a virgin unRAID needs. The extra RAM will come in handy when preclearing drives. I also plan to run "cache_dirs". I might have to add more later.]

8NWxAm.png

 

 

Edit the CD/DVD

Map to the plpbtin.iso in the Datastore > Check "Connect at Power On"

hnPTvm.png

 

 

Stop

select "add" at the top.

Add USB Controller

SEOZAm.png

 

There is nothing to configure here.

Next > Next > Finish

JjqqUm.png

 

 

Stop

select "add" at the top (again).

This time around, We now have "USB Device". Select it > Next.

519iEm.png

 

 

Select your unRAID flash drive > Next > Finish

[We don't care about vmotion]

eouVnm.png

 

At this point I start adding the VMDirectPath passthrough hardware. I am going to start with only one MV8 for now; I'll add a second one after I get unRAID up. I don't want to start breaking down my production unRAID until this one is up. I also have some M1015s on order for this box.

 

select "add" at the top (again).

Select "PCI Device"

FnOQLm.png

 

 

Select your HBA > Next > Finish

UIibxm.png

 

 

[OPTIONAL]

You can add a PCIe NIC now if you have one.

If you do add one, it is best to tell the VM's default virtual NIC NOT to "Connect at power on".

 

OK, We are done with this step.

lets hit "OK" now and close out of the Virtual Machine Properties.

 

 

Applying the MV8 Hack

 

With "Remote Tech Support" enabled, use WinSCP to connect to ESXi, and add there two lines to the /etc/vmware/passthru.map file:

ev6Xlm.png

# Marvell Technologies, Inc. MV64460/64461/64462 System Controller, Revision B

11ab  6485  d3d0    false

EE0ozm.png

 

Now open your VM's .vmx file and change this:

pciPassthru0.present = "TRUE"

pciPassthru0.deviceId = "6485"

pciPassthru0.vendorId = "11ab"

pciPassthru0.systemId = "4dfc27f9-93be-d5c1-9198-00259027d9d8"

pciPassthru0.id = "01:00.0"

 

to this:

 

pciPassthru0.present = "TRUE"

pciPassthru0.msiEnabled = "FALSE"

pciPassthru0.deviceId = "6485"

pciPassthru0.vendorId = "11ab"

pciPassthru0.systemId = "4dfc27f9-93be-d5c1-9198-00259027d9d8"

pciPassthru0.id = "01:00.0"

tfYoym.png

The trick is to force the use of IOAPIC mode with the "pciPassthru0.msiEnabled = 'FALSE'" statement.

 

 

Reboot the hypervisor and start your unRAID VM!

 

Good luck.

* Note: the pciPassthru number could be different depending on the number of cards you have passed through. For my last rebuild, it was pciPassthru3.
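
(If the VM refuses to start after the reboot, double-check that both edits took; a sketch from the SSH shell, where the .vmx path is hypothetical and depends on your Datastore and VM names:)

grep 11ab /etc/vmware/passthru.map
grep msiEnabled /vmfs/volumes/SSD1/unRAID/unRAID.vmx

The first should show the Marvell line we added; the second should show the msiEnabled = "FALSE" entry.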

 

Installing Plop and Booting to unRAID

(I never found any Plop instructions; I am guessing on how to do this.)

 

Start the VM; it should boot into the Plop installer.

wdBNLm.png

 

 

Type 1

Type Y

hit the "any" key

t99i5m.png

 

Type u

Type 9 (to reboot)

HVFEzm.png

 

After the reboot:

B2YlCm.png

 

setup > bootmanager >

startmode: Menu

Boot Countdown:On

Edit Boot Countdown: 15 seconds (I set it to 5 and it missed the USB every few boots; find what works for you.)

Default Profile:USB

Show Floppy: Off

Show CD: Off

Show USB: ON

Everything else: Default

F82Uim.png

 

Esc back to start... shutdown.

 

 

Start VM

 

You should now boot into unRAID with VMDirectPath and an MV8.

 

[An optional way to boot unRAID without Plop: you can create a hard-drive VMDK image of your unRAID flash drive and boot from that, which then hands off to your flash drive. See Here.]

 

Link to comment

Extras:

Extra things and issues will go here.

Like backup instructions/configurations.

 

DISCLAIMER: Use this information at your own risk. I will not be held responsible for your actions.

 

Tip #1

Install your VMs from ISO, not CD.

How to copy your ISOs to your ESX server:

Most people use WinSCP to copy ISO files (and other files) to the VM server to use as install media.

Let me show you a quick trick that is built into ESX that most people don't know about.

I have yet to see a "how to" or tip that mentions this.

 

1. Start VMware vSphere Client.

2. Log into your ESXi Server.

3. Configure > Storage > Datastore > Select a Datastore > Right Click Datastore > Browse Datastore.

ZlXhnm.png

 

 

4. A "Datastore Browser" box will pop up.

5. Select Make "New Folder" from the toolbar.

6. Create a folder Called "ISO" (or similar)

m5HB8m.png

 

7. Create a sub-folder named after the ISO if you wish ("2008r2", for example).

8. From the top toolbar, Select "Upload".

9. Select an ISO to upload.

FI8E8m.png

 

10. "Open" button.

11. Say yes to warning about overwrite.

When you are done, You should have something like this.

yGJAtm.png

Once this has been completed, you can mount this ISO as a CD/DVD-ROM in your VM.

Make sure you check "Connect at power on" in the VM properties box; that way you can boot from the CD to install the client.

 

This is one nice reason for the large Datastore drive.
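
(Since we enabled SSH earlier, you can also push an ISO up from a command line instead of the Datastore Browser; a sketch, where the IP, Datastore, and file names are hypothetical:)

scp 2008r2.iso root@192.168.1.50:/vmfs/volumes/SSD1/ISO/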

 

*Someone asked me how to make an ISO from a CD/DVD:

I use DAEMON Tools Lite. It is free.

Be careful installing it; do a custom install and remove the extra crap it wants to install, like the Ask toolbar and junk.
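
(If you have a Linux box handy, you do not even need extra software to rip an ISO; a sketch that reads the disc sector-for-sector, 2048 bytes being the CD/DVD data sector size:)

dd if=/dev/cdrom of=2008r2.iso bs=2048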

 

 

Tip #2

How to make a copy of a VM.

This is a quick how-to on making a copy of a VM.

 

1. In vSphere Client, Stop the VM we wish to copy.

2. Open the  "Datastore Browser" for the Datastore that contains your VM (see Tip #1 for how to do this).

3. Create a new folder and give it a name

EhHJnm.png

 

 

4. Once you have created the new folder, go into the folder of the VM you wish to make a copy of.

5. Select the .VMX and .VMDK files.

6. Right Click and "Copy"

tAgCom.png

in this example, we will copy the Win 2008 Test VM we made earlier

 

7. Switch to the new folder we created.

8. Right click anywhere inside the empty folder and "Paste"

J7ky1m.png

You should now see your VM copying.

 

9. Once that is done, Right click on the .VMX file and select "Add to Inventory"

19ox0m.png

 

10. Name the new VM and hit Next

aVsWpm.png

 

11. Select the server you wish to add it to and hit Next.

8dpw8m.png

 

12. Hit finish!

PJ4Rxm.png

 

13. You should now see the VM in the top left pane of VM's.

kCXFsm.png

 

14. Start the new copy of the VM.

You will notice it hangs at 95%

15. Launch the "Virtual Machine Console" for this VM.

You will now see a "Virtual Machine Question" window pop up

16. Select "I copied it" (Or moved it if that is the case).

15uoDm.png

 

At this point the copied VM should boot right up.

Keep in mind that you may now have duplicate machine names on the network.
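
(If you prefer, step 9 can also be done from the SSH shell; a sketch, with a hypothetical Datastore/folder path. The first command registers the copy and prints its new VM id; the second confirms it shows up in inventory.)

vim-cmd solo/registervm /vmfs/volumes/SSD1/2008TestCopy/2008Test.vmx
vim-cmd vmsvc/getallvms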

 

 

 

 

 

Mini Tip: Copy a VM to a new Datastore

If you have more than one Datastore "visible" to your ESX server and you want to copy a VM from one Datastore to another:

(You can use the "Datastore Browser" for this also, but it is very slow.)

 

1. Stop the VM you wish to move.

2. SSH into your ESXi box with PuTTY (more detail on this in Tip #3 below).

3. Use the copy command (cp).

cp -a /vmfs/volumes/datastoresource/vmfoldername /vmfs/volumes/datastoredestination

[To copy an entire Datastore drive to a new drive: "cp -a /vmfs/volumes/datastore1/* /vmfs/volumes/datastore2", without the quotes.

Replace datastore# with your own Datastore names.]

x1JdUm.png

 

In the above sample, I copied the VM named PDC from the Datastore WD2TB to the Datastore 15TB7200.

Depending on the size of your VM, this can take a long time with no apparent activity.
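
(One caveat: the busybox cp does not understand thin provisioning, so a thin .vmdk can land at the destination fully inflated. vmkfstools can clone the disk and keep it thin; a sketch using the same PDC example, with cp kept for the tiny config files:)

mkdir /vmfs/volumes/15TB7200/PDC
cp /vmfs/volumes/WD2TB/PDC/*.vmx /vmfs/volumes/15TB7200/PDC/
vmkfstools -i /vmfs/volumes/WD2TB/PDC/PDC.vmdk /vmfs/volumes/15TB7200/PDC/PDC.vmdk -d thin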

 

Once that is complete, you need to import the new VM into your server.

See step #9 in Tip #2: copying a VM.

 

If you wish to delete the source, you can use the rm command.

(Personally, I would not do that until I had confirmed the new copy works.)

 

 

 

 

TIP #3

Raw Device Mapping

Basic instructions to configure raw device mapping (RDM) to give a VM direct access to a local SATA drive connected to a SATA controller.

There are much better instructions on the web if you need more in-depth detail.

 

I'll assume you turned on SSH in the ESXi setup instructions.

 

1:

Download Putty: http://www.chiark.greenend.org.uk/~sgtatham/putty/

 

2:

Start Putty

 

3:

Enter the IP of your ESXi server > select "SSH" > connect.

qZeTUm.png

 

 

If this is your first time connecting, you will get a "Security Alert".

Hit "Yes" to add the key to the cache.

cLeUwm.png

 

4:

login as root

ignore the message

4QgAWm.png

 

5:

In order to use a drive for RDM, we need to make a pointer file.

This pointer file needs to be placed inside an existing Datastore.

[Think of the pointer file as a "shortcut" to the RDM drive.]

 

First get a list of your Datastores

Type> ls -l /vmfs/volumes

 

 

Select the Datastore you wish to use and "cd" to that directory. (I am going to use SSD1.)

Type> cd /vmfs/volumes/SSD1

[Replace SSD1 with your own Datastore name]

VMrSfm.png

 

6:

Create a folder to store the RDMs, then switch to that folder.

I called my folder RDMs (case sensitive).

Type> mkdir RDMs

 

Then switch to the new folder

Type> cd RDMs

HIFmwm.png

 

 

7:

Let's get a list of the available drives and find the one we want to RDM.

Type> ls -la /dev/disks

 

-rw-------    1 root    root        4009754624 Aug 20 15:02 mpx.vmhba32:C0:T0:L0

-rw-------    1 root    root          939524096 Aug 20 15:02 mpx.vmhba32:C0:T0:L0:1

-rw-------    1 root    root            4177920 Aug 20 15:02 mpx.vmhba32:C0:T0:L0:4

-rw-------    1 root    root          262127616 Aug 20 15:02 mpx.vmhba32:C0:T0:L0:5

-rw-------    1 root    root          262127616 Aug 20 15:02 mpx.vmhba32:C0:T0:L0:6

-rw-------    1 root    root          115326976 Aug 20 15:02 mpx.vmhba32:C0:T0:L0:7

-rw-------    1 root    root          299876352 Aug 20 15:02 mpx.vmhba32:C0:T0:L0:8

-rw-------    1 root    root      120034123776 Aug 20 15:02 t10.ATA_____OCZ2DSOLID3______________________________OCZ2D191X3Q652G7WC20K

-rw-------    1 root    root      120031445504 Aug 20 15:02 t10.ATA_____OCZ2DSOLID3______________________________OCZ2D191X3Q652G7WC20K:1

-rw-------    1 root    root      2000398934016 Aug 20 15:02 t10.ATA_____ST2000DL0032D9VT166__________________________________5YD4N5YV

-rw-------    1 root    root          104857600 Aug 20 15:02 t10.ATA_____ST2000DL0032D9VT166__________________________________5YD4N5YV:1

-rw-------    1 root    root        64424509440 Aug 20 15:02 t10.ATA_____ST2000DL0032D9VT166__________________________________5YD4N5YV:2

-rw-------    1 root    root      1935867379712 Aug 20 15:02 t10.ATA_____ST2000DL0032D9VT166__________________________________5YD4N5YV:3

-rw-------    1 root    root      1500301910016 Aug 20 15:02 t10.ATA_____ST31500341AS________________________________________9VS27HMX

-rw-------    1 root    root      1500299231744 Aug 20 15:02 t10.ATA_____ST31500341AS________________________________________9VS27HMX:1

-rw-------    1 root    root      2000398934016 Aug 20 15:02 t10.ATA_____WDC_WD20EARS2D00S8B1__________________________WD2DWCAVY2067763

-rw-------    1 root    root      2000396255744 Aug 20 15:02 t10.ATA_____WDC_WD20EARS2D00S8B1__________________________WD2DWCAVY2067763:1

lrwxrwxrwx    1 root    root                20 Aug 20 15:02 vml.0000000000766d68626133323a303a30 -> mpx.vmhba32:C0:T0:L0

lrwxrwxrwx    1 root    root                22 Aug 20 15:02 vml.0000000000766d68626133323a303a30:1 -> mpx.vmhba32:C0:T0:L0:1

lrwxrwxrwx    1 root    root                22 Aug 20 15:02 vml.0000000000766d68626133323a303a30:4 -> mpx.vmhba32:C0:T0:L0:4

lrwxrwxrwx    1 root    root                22 Aug 20 15:02 vml.0000000000766d68626133323a303a30:5 -> mpx.vmhba32:C0:T0:L0:5

lrwxrwxrwx    1 root    root                22 Aug 20 15:02 vml.0000000000766d68626133323a303a30:6 -> mpx.vmhba32:C0:T0:L0:6

lrwxrwxrwx    1 root    root                22 Aug 20 15:02 vml.0000000000766d68626133323a303a30:7 -> mpx.vmhba32:C0:T0:L0:7

lrwxrwxrwx    1 root    root                22 Aug 20 15:02 vml.0000000000766d68626133323a303a30:8 -> mpx.vmhba32:C0:T0:L0:8

lrwxrwxrwx    1 root    root                73 Aug 20 15:02 vml.0100000000202020202020202020202020355944344e355956535432303030 -> t10.ATA_____ST2000DL0032D9VT166__________________________________5YD4N5YV

lrwxrwxrwx    1 root    root                75 Aug 20 15:02 vml.0100000000202020202020202020202020355944344e355956535432303030:1 -> t10.ATA_____ST2000DL0032D9VT166__________________________________5YD4N5YV:1

lrwxrwxrwx    1 root    root                75 Aug 20 15:02 vml.0100000000202020202020202020202020355944344e355956535432303030:2 -> t10.ATA_____ST2000DL0032D9VT166__________________________________5YD4N5YV:2

lrwxrwxrwx    1 root    root                75 Aug 20 15:02 vml.0100000000202020202020202020202020355944344e355956535432303030:3 -> t10.ATA_____ST2000DL0032D9VT166__________________________________5YD4N5YV:3

lrwxrwxrwx    1 root    root                72 Aug 20 15:02 vml.01000000002020202020202020202020203956533237484d58535433313530 -> t10.ATA_____ST31500341AS________________________________________9VS27HMX

lrwxrwxrwx    1 root    root                74 Aug 20 15:02 vml.01000000002020202020202020202020203956533237484d58535433313530:1 -> t10.ATA_____ST31500341AS________________________________________9VS27HMX:1

lrwxrwxrwx    1 root    root                74 Aug 20 15:02 vml.0100000000202020202057442d574341565932303637373633574443205744 -> t10.ATA_____WDC_WD20EARS2D00S8B1__________________________WD2DWCAVY2067763

lrwxrwxrwx    1 root    root                76 Aug 20 15:02 vml.0100000000202020202057442d574341565932303637373633574443205744:1 -> t10.ATA_____WDC_WD20EARS2D00S8B1__________________________WD2DWCAVY2067763:1

lrwxrwxrwx    1 root    root                74 Aug 20 15:02 vml.01000000004f435a2d3139315833513635324737574332304b4f435a2d534f -> t10.ATA_____OCZ2DSOLID3______________________________OCZ2D191X3Q652G7WC20K

lrwxrwxrwx    1 root    root                76 Aug 20 15:02 vml.01000000004f435a2d3139315833513635324737574332304b4f435a2d534f:1 -> t10.ATA_____OCZ2DSOLID3______________________________OCZ2D191X3Q652G7WC20K:1

 

Here is where it gets fun.

I want to RDM my Seagate 2TB drive. I'll need to copy the identifier for the drive [in bold].

(Not the :1 or :2, etc.; those are the drive's partitions, not the RAW drive. [Those are showing up because this drive already has WHS2011 installed on it.])

[copy and paste works wonders]
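
(If that wall of text is hard to read, there is a more compact view that pairs each device with its size and model on one line; a sketch, and I believe the flag is -c for "compact" on ESXi 4.x:)

esxcfg-scsidevs -c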

 

 

8:

Let's create the RDM. In order to create the RDM, we use the command vmkfstools.

 

Type> vmkfstools -r /vmfs/devices/disks/vml.0100000000202020202020202020202020355944344e355956535432303030 WHS2011RDM.vmdk -a lsilogic

[replace the vml.xxxxxxx with your own drive identifier]

[The -r creates a virtual RDM (there is also -z, which creates a passthrough RDM). "WHS2011RDM.vmdk" is the name of the RDM we create. The -a lsilogic puts the RDM on an LSI Logic controller instead of the default BusLogic controller.]

 

[Note: at least one forum member ran into a motherboard that required the -z command. SEE HERE and HERE for details. I used -r for my board and for Windows compatibility.]

 

If it worked, you will be back at a # prompt. Take a look

Type> ls -l

 

You should see 2 .vmdk files for the RDM we just mapped: one should be tiny, and one the size of the drive we just mapped.

Don't worry, the file is not 2TB; it is only a few megs.

aj2Qdm.png

 

 

Now that the RDM is created, we can now use that RDM as a drive inside a VM.

 

In theory, I could take that drive out and put it into any PC that can read that file format and get the data off.

 

I decided to test this. I installed win7 on an SSD with RDM.

After Win7 was up and running, I ran sysprep to reset the hardware.

I then installed the SSD into another PC and booted from the SSD.

It worked perfectly.

 

 

Tip #4

Auto-starting/stopping your VM's

Lets set Your VM's to start/stop when ESXi starts.

This should be a no brainer, but some people are confused in vShpere Client

 

1:

Start vSphere Client

2:

Server > Configuration > Virtual Machine Startup/Shutdown > Properties

rDAhUm.png

 

3:

Check "Allow Virtual Machines to start and stop..."

set your delays

Set the "shutdown action:" to to guest shutdown from stop. (this should clean shutdown if VMtools is installed in the client.)

Move the VM's you want to start automatically into "Automatic Startup" in the order you want.

(they will shutdown in the reverse order)

You can also put them into the "any order" section if you don't care what order they start.

Hit OK

cVVPcm.png

 

 

TIP #5

Putting the ESXi box on a UPS and shutting the server and VMs down in a power failure.

Link to tip: http://lime-technology.com/forum/index.php?topic=14695.msg140013#msg140013

 

 

TIP #6

Installing the second NIC (Intel 82579LM) on SuperMicro X9SCM in ESXi 5.0.

Thanks to Chilly and Peetz on [H]ardForum.

  • 1. Install your machine(s) with the vanilla ESXi 5.0 ISO. (It looks like upgrading from 4.1 to 5.0 is OK also)
     
  • 2. Log on to the console (or via ssh) of one of the machines and install the vib file by using the following commands:
      esxcli software acceptance set --level=CommunitySupported
      esxcli software vib install -v http://files.v-front.de/net-e1001e-1.0.0.x86_64.vib
     
  • 3. Reboot, configure all NICs, and then try to enable FDM. (To confirm the install took, see the quick check below.)
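
(After the reboot, a quick check that the driver package actually landed; the vib name comes from the URL above:)

esxcli software vib list | grep e1001e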
     

Link to comment

"REMOVE ALL DRIVES!

Remove/Unplug all Hard Disks and Flash Drives from the server!

During install, ESXi will erase ALL drives it sees!!"

 

I've seen this mentioned before; what exactly is ESXi doing here? Is it just claiming all the disks as datastores, regardless of what's currently on them? Wondering what I'll need to do if and when I upgrade to ESXi 5.

Link to comment

I've seen this mentioned before; what exactly is ESXi doing here? Is it just claiming all the disks as datastores, regardless of what's currently on them? Wondering what I'll need to do if and when I upgrade to ESXi 5.

 

You hit the nail on the head! That is exactly what it is doing... so...

Don't leave your 40TB RAID6 array plugged in. It will be toast.

Link to comment

Johnm,

 

When I saw your thread and the details you are giving, I got totally excited...

 

but then you stopped... ;)

 

Are you planning to go through the installation of unRAID too?

I was hoping you would also go this deep into the installation of unRAID. I am thinking about setting it up... and I am still trying to figure out whether to do it or not... so many questions are still pending on how to use the HDs... RDM, passthru...

 

Anyway, even if you were not to continue, I still wanted to say how appreciative the whole community should be for such a detailed thread!

big up,

 

R

 

 

Link to comment

I'm in the midst of a similar build on a Tyan S5512 board. I have my first 6 drives on the southbridge motherboard "Intel" controller. When raw-device-mapped to the unRAID VM, the temps don't appear to work, whereas the drives mapped through from the LSI SAS controller report correct temps. Have you seen the same?

 

Link to comment

@Redia

Thanks,

 

I am not going to stop!! I needed to do some "real" work for a day or two.

My plan was to at least do:

 

*WHS2011 with a single drive RDM.

 

*Get unRAID up and running using Plop and pass the MV8s through to it, then possibly add a physical RDM for the cache drive. This will be new to me also... so...

 

*Then walk through an automated backup plan with GhettoVCB.

 

I also ordered an M1015/LSI 9220-8i (9210-8i) off eBay for this build. I was sort of hoping it would show up before I got to unRAID so we could compare the two.

Then, possibly at a later date, ordering an expander and showing the benefits of that.

 

 

@Jimwhite

There are two ways to RDM a drive in ESXi: a virtual RDM (with the -r switch) and a physical RDM (with the -z switch). I plan to show and test both when I get to that point. I'll have an answer for you when I get there.

The unRAID on ESXi is still newish to me. I did it once in testing. Now I am rebuilding and documenting it.

Part of the point of this thread was to show what I did; then, hopefully, someone with better knowledge gives us some pointers on how to fix what I did wrong :).

 

Link to comment

Johnm,

 

thanks for your answer.

I did not mean to rush you ;)

The thing is, I am pretty familiar with ESX in a Windows environment, so the part you did... I already know... lol.

But I have never worked with any type of Linux dist... nor with passthru.

Thus my question...

 

I wish I had time to test it (and maybe help in your findings); unfortunately I am out of time, as I am moving in a couple of weeks and will be very busy for a while. I was hoping to install everything before that in a safe manner (i.e., following a guide someone like you could write... lol).

 

I will keep an eye on this great thread, and if I have time for testing, I will do so and share the results with you (probably by PM to avoid messing with your thread).

 

Cheers,

R

 

Link to comment

*Then walk through an automated backup plan with GhettoVCB.

 

I also ordered an M1015/LSI 9220-8i (9210-8i) off eBay for this build. I was sort of hoping it would show up before I got to unRAID so we could compare the two.

Then, possibly at a later date, ordering an expander and showing the benefits of that.

 

I am really looking forward to the GhettoVCB backup tutorial. I have been struggling with how to back up the VMs properly. I am planning to test out Veeam, since they give a free single-user license to home labs - http://www.veeam.com/nfr/free-nfr-license

 

I use 2 of the M1015s, converted to IT firmware, with VMDirectPath, without any major issues so far. I need to dig into how to issue a SMART report for drives, though, as unMENU is not working with this setup.

 

*****

One issue I ran across that others may encounter is with VMDirectPath.  When VMDirectPath is enabled, ESXi will reserve (and consume) all of the memory you allocate to that VM.  If you are running the ESXi box with a limited amount of memory, this may be an issue.

 

When you enable VMDirectPath and create an unRAID VM with, say, 1GB of memory for testing, then later try to increase that to 2GB, you will get an error stating a reserved memory mismatch, and the machine will not start. To fix this, you need to edit the VM, go to the Resources tab, and increase the memory reservation to the amount you set the VM to. Not a huge problem, but it will certainly frustrate you until you figure out what is happening.
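
(For reference, that GUI reservation maps to a single line in the VM's .vmx file; a sketch for a 2GB VM, with the value in MB:)

sched.mem.min = "2048"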

*****

 

This is a fantastic thread and will be a MAJOR help to people getting ESXi set up properly.  I already have mine running, but am watching this thread daily for new tips and tricks.  Thank you for putting this together!

 

Link to comment

@FTP222

 

Thanks for the input!

I'll take a look at Veeam.

We use vRanger at work. That stuff is expensive!!

We also use ghettoVCB on the test boxes.

 

Another limitation of using hardware passthrough, besides the memory allocation issue: it makes backing up that VM very difficult.

 

On my last ESXi build, I used only virtual disks so I could back up my WHS2011.

This time around I think I'll forgo that luxury. I am only using it for client backups; unRAID will be used for my file hosting, not WHS2011.

Link to comment

OK, I have 7 Windows-based VMs on this system.

I was testing everything out, making sure it was stable before I installed unRAID.

I now see a possible issue in my design.

In formatting the SSDs, I chose a 1MB block size.

In formatting my mechanical drives, I chose an 8MB block size....

 

I now see a problem with this that slipped my mind...

I am backing up VMs from a datastore with a 1MB block size to a drive with an 8MB block size; that could cause issues in my backups...
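
For anyone following along, the VMFS-3 block size sets the largest single file (VMDK) the datastore can hold:

    1MB block -> 256GB max file
    2MB block -> 512GB max file
    4MB block -> 1TB max file
    8MB block -> 2TB max file

So a clone or snapshot that is legal on the 8MB datastore may be too big to ever restore to the 1MB one.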

 

So.. time to format the second SSD and move the data.. then reformat the first one. This might be doom... we will see. This is going to be a PITA..

 

ACTUALLY!! It will be quicker to start over. In fact.. I only need to save one VM (with a database on it). It is a good way to check my walk-through.. to actually do it again.

 

EDIT:

I decided to "mostly" start over...  It was the easiest path with minimal time spent.

I installed the new SSD with 8MB blocks.

I copied the VMs I wanted to keep to the new SSD.

I removed the rest from inventory in vSphere.

There are several ways to "reformat" the drive in ESX; all are a major PITA.. so, I cheated.

I pulled the original SSD, placed it into a 2008 R2 box, and used diskpart's clean command. (It took about 1 min to pull, clean, and replace from start to finish.)
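
For anyone who has not used it, the clean sequence in diskpart looks like this (the disk number is just an example; check it against list disk first, because clean wipes the partition table):

    diskpart
    list disk
    select disk 2
    clean

Once the partition table is gone, ESXi sees a blank drive and happily creates a fresh VMFS datastore on it.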


Replaced the SSD > created a new datastore > copied some VMs back and started them up with the "I moved it" option.

Remade the RDM and recreated the WHS VM. After that, WHS2011 just booted up. No reinstall.

Total rebuild time: about 45 min. 90% of that time was copying VMs while surfing the web.

Did I need to? I'm not sure, honestly. But I feel better.

 

 

 

Link to comment

I too followed the lead of bryanr in his http://lime-technology.com/forum/index.php?topic=7914.0 thread, mapping each drive via command-line manipulations, and with 16 drives it was a bit of a pain in the arse.  Not only that, but any time a disk is moved or swapped, it must be done again!!  Gotta be an easier way.

 

I have 16 hot-swap bays in my tower with 16 Samsung 2TB drives.  The first 6 are on the "Intel controller", the next 8 on an LSI 2008 SAS controller, and the last two on a Marvell 4-port SATA card (which ESXi has no drivers for).  I also have an LSI 4-port RAID controller with 3 1TB Seagates in a RAID 5 for my ESXi datastore.

 

While poking around in the GUI for ESXi (vSphere Client), I found a page where I could assign an entire controller to a VM (Configuration > Advanced Settings).  I created a new VM for unRAID and, instead of going through all that command-line stuff, I assigned the 3 PCI-bus controllers as passthrough, then selected them in the unRAID VM settings.  Voilà... the VM runs just as if it were (and it is) running on bare metal.  The drives came right up, and they are not virtually mapped, so I'm free to swap them around and replace them just by rebooting the VM  :o;D

 

Link to comment

I too followed the lead of bryanr in his http://lime-technology.com/forum/index.php?topic=7914.0 thread, mapping each drive via command-line manipulations, and with 16 drives it was a bit of a pain in the arse.  Not only that, but any time a disk is moved or swapped, it must be done again!!  Gotta be an easier way.

 

I have 16 hot-swap bays in my tower with 16 Samsung 2TB drives.  The first 6 are on the "Intel controller", the next 8 on an LSI 2008 SAS controller, and the last two on a Marvell 4-port SATA card (which ESXi has no drivers for).  I also have an LSI 4-port RAID controller with 3 1TB Seagates in a RAID 5 for my ESXi datastore.

 

While poking around in the GUI for ESXi (vSphere Client), I found a page where I could assign an entire controller to a VM (Configuration > Advanced Settings).  I created a new VM for unRAID and, instead of going through all that command-line stuff, I assigned the 3 PCI-bus controllers as passthrough, then selected them in the unRAID VM settings.  Voilà... the VM runs just as if it were (and it is) running on bare metal.  The drives came right up, and they are not virtually mapped, so I'm free to swap them around and replace them just by rebooting the VM  :o;D

 

 

I am ahead of you there!

It sounds like you found the "VMDirectPath" settings.

 

I am in the middle of writing up how I passed 20 hard drives through to unRAID this same way.

I have 2 M1015's on order for this box, but I am going to build it first with MV8's since most people have those.

I also passed a dedicated NIC to unRAID to split it off from the shared ESXi host NIC... I am debating if I want to keep this or share the single NIC.

 

I mentioned in the beginning that I wanted to get a cheap hardware RAID card for the datastore and run RAID 5 SSDs or 2.5" HDDs.

It got a bit expensive, so it is on the back burner for now. I am getting awesome speeds with the SATA3 SSDs, and backing up to a cheap spinner will keep my data protected for now.

 

Thank you for sharing your experiences with this. It is always good to hear how others are doing it.

 

I expect to have this posted today. I am just distracted by the airshow that I can see from my deck.

Link to comment

Yes, beware if you end up getting an actual RAID controller for your ESXi datastore needs. I have a RAID 10 array hosting mine (LSI 9212-4i4e), and writes are fairly abysmal due to the absence of a BBU, which would give the tasty benefit of proper write caching. I didn't realize this until I was chest-deep in my build and swimming in VMs, questioning why the I/O speeds weren't what I was expecting (don't get me wrong, reads are wonderful, but the slow write speeds seem to be hurting me).

 

I took a gamble and manually enabled write caching on the controller using resources such as these: http://communities.vmware.com/message/1302854#1302854 and http://www.virtualistic.nl/archives/526, but writes are still pretty pathetic considering how fast the reads are.
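
In case it helps anyone else, on MegaRAID-family LSI cards the usual knobs live in MegaCli (this may not apply to the 9212, which runs IR firmware; a sketch only, and forcing write-back without a BBU risks losing in-flight writes on power failure):

    # set write-back on all logical drives, all adapters
    MegaCli -LDSetProp WB -Lall -aAll

    # keep the cache enabled even when the BBU is missing or bad
    MegaCli -LDSetProp CachedBadBBU -Lall -aAll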

 

How good is your performance with just the SSDs? I've been considering picking one up just to house the primary VMs and using the RAID 10 for secondary VMs, storage, and additional backups of the SSD.

Link to comment

VM#3

unRAID VMDirectPath Hardware Passthrough

Added.

FAILED!!

It was working; now I have a pink screen of death (PSOD).

Let me review this before anyone follows it.

 

It looks like I have a bad backplane...

Whenever I plug into a certain port, it blows up.

 

I'll look tomorrow; I am computer-nerded out for the night..

 

Problem solved.

It was not my ESX/unRAID install at all; it was indeed a bad backplane.

As soon as I bumped the drives to another backplane, it was fine.

 

I put the backplane into another Norco and confirmed its issues.

I am going to have to get that replaced ASAP, as I will be filling this box quickly.

 

 

 

 

Link to comment

Yes, beware if you end up getting an actual RAID controller for your ESXi datastore needs. I have a RAID 10 array hosting mine (LSI 9212-4i4e), and writes are fairly abysmal due to the absence of a BBU, which would give the tasty benefit of proper write caching. I didn't realize this until I was chest-deep in my build and swimming in VMs, questioning why the I/O speeds weren't what I was expecting (don't get me wrong, reads are wonderful, but the slow write speeds seem to be hurting me).

 

I took a gamble and manually enabled write caching on the controller using resources such as these: http://communities.vmware.com/message/1302854#1302854 and http://www.virtualistic.nl/archives/526, but writes are still pretty pathetic considering how fast the reads are.

 

How good is your performance with just the SSDs? I've been considering picking one up just to house the primary VMs and using the RAID 10 for secondary VMs, storage, and additional backups of the SSD.

 

I am using the "cheap" OCZ SATA3 Solid 3 SSDs. I am getting about 500MB/s read/write.

I have my VMs split across 2 SSDs for now, plus a few on a spinner.

I have one VM pounding a database while I boot a second 2008 R2 VM on the same SSD; it takes about 10 seconds from green arrow to login prompt. The lack of head-seek delay is huge.

The limit is drive size. I can only fit about 3 VMs per drive and still leave room for snapshots when I do backups.

 

I was thinking about an Areca ARC-1222 if I go RAID 5.

I see them on sale every now and then for about $399.

They are very fast cards. I have had one running 24x7 for over a year; I get about 500MB/s read and write to Samsung F4's in RAID 5 and have never had a single hiccup. Yes, a BBU is needed.

 

The card has a NIC management port, so there is no need to install any sort of software; I can control the card via a web browser. The downside is that I need to edit the driver to get it working in ESXi.

 

I would get an LSI 9211-8i (about $225 on sale) if I go RAID 0 SSDs. There is no cache bottleneck, so I should get about 1700MB/s from 4 OCZ SSDs (roughly four drives at ~425MB/s each).

I am sort of hoping I can test this on my M1015. If it works, awesome; if not, no loss.

 

 

Another option I have thought about: since the 1222 does not support expanders and it is maxed out, buy a better Areca like the 16xx or 18xx and move my array over. That would free up my 1222 for ESX.

Link to comment

VM#3

unRAID VMDirectPath Hardware Passthrough

Added.

 

FAILED!!

It was working; now I have a pink screen of death (PSOD).

Let me review this before anyone follows it.

 

It looks like I have a bad backplane...

Whenever I plug into a certain port, it blows up.

 

I'll look tomorrow; I am computer-nerded out for the night..

 

 

 

Have you seen this?

 

http://lime-technology.com/forum/index.php?topic=7914.msg128847#msg128847

 

I was getting a PSOD before changing these settings.

 

EDIT: Never mind, I just saw that you had already applied those settings on your setup.

Link to comment

I am up and running.

I just did not want to update the thread until I had completed this part.

 

 

I have a bad backplane; once I switched backplanes, it was running like a charm.

Now to find out how good Norco's replacement policy is.

 

It is running better than expected.


Link to comment

Great updates; the RDM tutorial was new for me, so I appreciate that part.  Still waiting for the GhettoVCB backup tutorial - :)

 

Your PLOP boot manager method is inefficient.  You can mount an ISO and have that ISO boot right to USB with no delay.  You can grab a pre-created one from the ESXi thread:

 

http://lime-technology.com/forum/index.php?topic=7914.msg128990;topicseen#msg128990

 

It would be better to use VMDirectPath for the USB drive, as it will run at full USB 2.0 speed, not the 1.1 you are seeing with passthrough.  I am unable to get USB via VMDirectPath working properly (it hangs during unRAID boot), so I am curious to see if others are successful.

 

Keep up the thread, I think this will get several people to take the plunge.

 

Link to comment

It would be better to use VMDirectPath for the USB drive, as it will run at full USB 2.0 speed, not the 1.1 you are seeing with passthrough.  I am unable to get USB via VMDirectPath working properly (it hangs during unRAID boot), so I am curious to see if others are successful.

 

I was able to set it up through VMDirectPath.  It shows 8 USB devices for my motherboard (X8DTH-6F): 6 UHCI and 2 EHCI.  I set VMDirectPath on half of them (3 UHCI and 1 EHCI) since they all seemed to be attached to the same hub and group of ports (a keyboard attached to the same USB port was passed through to the guest OS no matter which of the 4 PCI devices I set to pass through).  I then passed through the EHCI PCI device, and it boots fine.

 

I am having problems with the fastpath fail state, but that is a problem with ESXi 4.0 that was fixed in 4.1.  It is important that you set every USB PCI device that can see a particular USB port for VMDirectPath.  If you do not, ESXi and the guest OS will fight for ownership and you get into the fastpath fail state problems: it completely freezes 4.0, and while the issue was fixed in 4.1, it can still cause problems in the guest OS.

 

Maybe someone else can explain how the USB devices work, and how one physical USB port seems to be on any of 3 different UHCI controllers while several physical USB ports seem to be on the same EHCI controller.  It almost seems like the UHCI and EHCI devices are buses, not physical ports.  EHCI = USB 2.0, so that makes sense: two USB 2.0 buses per motherboard, with each bus servicing 4 physical ports.  (The UHCI controllers handle USB 1.1 traffic as "companions" to the EHCI controller, which is why the same physical port shows up under more than one PCI device.)

 

TIP:  Install a temporary copy of ESXi onto a SATA drive before messing around with VMDirectPath for USB devices.  If you install ESXi onto a flash drive and then set that port to VMDirectPath, bad things will happen, and the only way to recover is to reinstall.

 

TIP 2: Don't use ES Xeon 5500 processors.  VMware dropped support for them in later versions; 4.0.0 is the latest I can run without getting a PSOD on boot.  People with later revisions of ES processors report they can run 4.0 U1, but not any newer versions.  Live and learn.

Link to comment
