Problems going Virtual



So, I've been running unRAID on a physical machine and I recently decided to move to VMware.  I bought a separate machine and got a stable ESXi environment running on it.  Using PlopKExec, I got a fresh USB key to boot into an unRAID VM.  I bought a 9211-8i card and flashed the IT firmware onto it.  I installed it in the new machine, along with the drives from the old unRAID box.  On boot-up of the ESXi server, I saw that it detected the drives.  I also swapped over my production USB key for the unRAID VM to boot from.  It boots up fine and has the old IP address and settings, but it does not seem to see the hardware.  

 

How should I go about troubleshooting this?  

 

Thanks,

Paul
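One way to narrow this down, assuming console access to the unRAID VM: check whether the guest can see the HBA at all. If passthrough isn't configured, the LSI controller simply won't appear in the guest's PCI list (the grep patterns below are illustrative; exact device strings vary).

```shell
# From inside the unRAID VM: is the LSI SAS2008 controller visible to
# the guest? No match here usually means passthrough isn't set up.
lspci | grep -i -E 'lsi|sas2008'

# If the controller shows up, the attached drives should appear as
# block devices (partitions filtered out for readability):
ls -l /dev/disk/by-id/ 2>/dev/null | grep -v -i part
```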

Did you enable device passthrough for the 9211 card?
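For reference, the rough procedure on ESXi 6.x looks like this (the commands identify the card; the toggle itself is done in the host client UI):

```shell
# 1. On the ESXi host, find the HBA's PCI address (the SAS2008 chip
#    on a 9211-8i shows up under LSI):
esxcli hardware pci list | grep -i -B 4 'lsi'

# 2. In the ESXi host client: Host > Manage > Hardware > PCI Devices,
#    select the SAS2008 controller, click "Toggle passthrough", then
#    reboot the host.

# 3. Edit the unRAID VM: Add other device > PCI device, and pick the
#    HBA. Note that PCI passthrough also requires reserving all of the
#    VM's memory (VM Options: "Reserve all guest memory").
```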


I started to reply to this last night...  As it turned out, I hadn't added the passthrough device to the VM.  After I did, it came up and started working, right before the ESXi server hung completely...  I went into the BIOS and tried disabling some settings, thinking perhaps there was a conflict of some sort.  I have one other slot I could move the card to, but I haven't tried bringing it back up since ESXi hung last night...
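A host hang right after enabling passthrough often leaves a clue in the ESXi logs (IOMMU or interrupt conflicts are common culprits). A quick look after the reboot, from the ESXi shell:

```shell
# Scan the tail of the VMkernel log for passthrough-related trouble;
# the grep pattern is just a starting point, not exhaustive:
tail -n 200 /var/log/vmkernel.log | grep -i -E 'iommu|passthru|pci|error'

# If the host purple-screened rather than silently hung, the dump
# files end up under /var/core on the host.
ls /var/core 2>/dev/null
```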

16 minutes ago, Taige said:

I started to reply to this last night...  As it turned out, I hadn't added the passthrough device to the VM.  After I did, it came up and started working, right before the ESXi server hung completely...  I went into the BIOS and tried disabling some settings, thinking perhaps there was a conflict of some sort.  I have one other slot I could move the card to, but I haven't tried bringing it back up since ESXi hung last night...

What motherboard do you have?


It's an HP ML30 Gen9 server...  So it's got the built-in iLO 4, the B140i RAID controller, etc.  

 

I'm actually not using the built-in SATA ports for anything, and I'm not using the iLO portion of the NIC (though I'm using the NICs themselves, of course).  I'm booting ESXi from a 32 GB microSD card (the card reader is built into the motherboard).  


I changed the BIOS from UEFI to Legacy BIOS, and I think that also disables some of the abilities of the B140i.  I have a very old Synology that is running iSCSI for the datastore.  I would like to have some sort of mirrored drive setup locally on the server, so I can get rid of the Synology.  Not sure if I can do that natively within ESXi or not.

 

Unfortunately, I'm not going to be able to test much more until tomorrow evening at the earliest.  This ESXi server also runs my firewall, so it needs to be up and running for the Internet to work...  I am working on an alternative to this, so I can take this server up and down without affecting the Internet, but I'm not quite there yet.

20 minutes ago, Taige said:

This ESXi server also runs my firewall

Are you running this because of high-speed VPN? I have a cheap MikroTik router which handles my 100 Mbit up/down connection very well. For high-speed VPN I'm using another MikroTik, a Cloud Hosted Router, as a VM inside the ESXi server.


Ok, so some good news...  I got my second ESXi box up and got my firewalls working in HA mode.  I brought up the unRAID VM (on the other ESXi server) and had a few minor issues (it seemed to have lost my shares, which had happened before on physical hardware - rebooting seemed to fix it).

 

I have had a couple of backups go to it now without trouble, but I ran into a problem...  When I try to add a docker container, I'm getting this:

 

Error: open /var/lib/docker/tmp/GetImageBlob652775487: read-only file system

 

I'm running two 3 TB drives for storage with a 4 TB drive for parity and two 250 GB SSDs for cache.  These all connect to a 9211-8i with IT firmware, version P20.

 

It appears that the one docker container I was actively using (Plex) is stuck updating the EPG...  I imagine this is related.
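That "read-only file system" error usually means the btrfs cache pool hit an I/O error and the kernel remounted it read-only to protect the data. A few checks from the unRAID console can confirm it; the `is_readonly` helper and the sample line below are illustrative, so the parsing can be tried without the real pool.

```shell
# On the live system, check the kernel log and pool error counters:
#   dmesg | grep -i -E 'btrfs|forced readonly' | tail -n 20
#   btrfs device stats /mnt/cache
# Non-zero write/flush/corruption counts point at a failing drive,
# cable, or controller path.

# is_readonly reports whether a /proc/mounts line for a given mount
# point carries the "ro" flag (hypothetical helper, fed sample data):
is_readonly() {
  awk -v mp="$1" '$2 == mp {
    split($4, f, ",")
    for (i in f) if (f[i] == "ro") { print "yes"; exit }
    print "no"
  }'
}

sample='/dev/sdf1 /mnt/cache btrfs ro,noatime,ssd 0 0'
echo "$sample" | is_readonly /mnt/cache   # prints: yes
```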


Ok, I backed up my appdata, removed my cache drives, ran a preclear on them, added them back, formatted them, restored the backup, added a docker, and all looked good. Then I edited a file in my appdata directory (to set up ddclient) and tried to restart the ddclient docker, only to get errors. Now, if I try to add another docker, I get this again:

 

Error: open /var/lib/docker/tmp/GetImageBlob019683831: read-only file system


I may have figured something out...  I used two different types of breakout cables from the 9211-8i.  One of them came with another server; the other is a third-party cable I ordered online.  One of the cache drives is attached with the cable from the other server, and the other with the new third-party cable.  

 

To see if it was related, I removed the cache drives, then put back a single drive, one that I think is attached with the original cable (from another server).  It's only been a few minutes, but so far everything is fine.  

 

Can a bad cable cause this sort of symptom?

 

Thanks,

Paul
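A marginal SATA cable can indeed cause exactly this: it typically shows up as interface CRC errors rather than clean drive failures. SMART attribute 199 (UDMA_CRC_Error_Count) tracks this per drive; a small sketch, where the parsing helper and sample line are illustrative stand-ins for real `smartctl` output:

```shell
# On the live box: smartctl -A /dev/sdX | grep -i crc
# A non-zero count that keeps GROWING points at the cable/connector
# path, not the drive itself (the counter never resets, so watch the
# trend after swapping cables).

# crc_count pulls the raw value of attribute 199 out of smartctl -A
# style output (hypothetical helper, fed sample data here):
crc_count() {
  awk '$2 == "UDMA_CRC_Error_Count" { print $NF }'
}

sample='199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 7'
echo "$sample" | crc_count   # prints: 7
```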


More info...  Things seemed ok for a while, so I stopped the array and tried adding the other disk back.  At that point, VMware crashed.  So I'm thinking it's a cable or drive problem...  Does anyone have a known-good source for the breakout cables that connect to a 9211-8i?


It looks like it's having trouble even with only the one Cache drive (though not to the point that the system crashes, like it was doing)...  I'm having strange issues related to my cache drives, specifically my Plex docker...  

 

So, I've decided to remove this Cache drive as well to see if that will stabilize my docker situation...  

 

....  BUT...  Reading your sig made me think of another idea...  Move both of the SSDs to the ESXi server itself, and add them as Datastores, then create two virtual HDs as cache drives, one on each SSD.  That would give me the benefit of backup for cache, and the side benefit of having SSD speed storage space available for other VMs...

 

 

Screen Shot 2017-10-21 at 12.23.47 AM.png
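The "SSDs as datastores" idea above can be sketched roughly like this, assuming the two SSDs have been added to ESXi as datastores (the datastore and VMDK names below are hypothetical):

```shell
# On the ESXi host: create one virtual disk per SSD datastore to hand
# to the unRAID VM as cache devices. Eager-zeroed thick avoids
# first-write penalties; thin would also work.
vmkfstools -c 200G -d eagerzeroedthick /vmfs/volumes/ssd0/unraid/cache0.vmdk
vmkfstools -c 200G -d eagerzeroedthick /vmfs/volumes/ssd1/unraid/cache1.vmdk

# Then attach each .vmdk to the unRAID VM as an additional disk and
# assign both as cache devices, letting unRAID mirror them as a
# two-device btrfs cache pool.
```

One trade-off worth noting: unRAID then sees virtual disks rather than the raw SSDs, so SMART data and TRIM behave differently than with passthrough.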
