NIC Bonding ESXi > UnRAID VM



8 hours ago, IamSpartacus said:

Has anyone successfully configured their UnRAID VM to bond multiple vNICs in LACP mode? If so, how did you configure the ESXi side of it?

Not sure I understand your question..

I have a 4-port Intel Gigabit NIC attached to ESXi, and all 4 ports are in one vSwitch. With this a single connection can't exceed 1 Gbit, but in aggregate you can push up to 4 Gbit across connections to the VMs attached to this vSwitch; every VM has a virtual 10G adapter.

To do this, create a new virtual switch on ESXi and add all the NICs to it as uplinks; see the picture (and the CLI sketch below it).

To make this available to all the other devices on your LAN, you also need to bond those 4 ports on your physical switch. I have an HP 1810-24G v2 and it works just fine.

Esxi_Switch.PNG
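
For reference, the same uplink setup can also be done from the ESXi shell. This is a minimal sketch, assuming a standard vSwitch named vSwitch1 and uplinks vmnic0-vmnic3 (placeholder names; check yours with esxcli network nic list). Note that a standard vSwitch only supports a static port-channel with IP-hash load balancing, not LACP, so the matching trunk on the physical switch must be static:

    # Create a standard vSwitch and attach the four physical NICs as uplinks
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

    # Spread traffic per source/destination IP pair across the uplinks;
    # this is the policy that pairs with a static LAG on the physical switch
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash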

On 4/15/2017 at 1:43 AM, uldise said:

I have a 4-port Intel Gigabit NIC attached to ESXi, and all 4 ports are in one vSwitch [...]

Basically what I'm trying to accomplish is bonding the two physical 10Gb NICs attached to the ESXi host running my UnRAID VM, inside the VM itself. Those NICs are bonded via LACP to my vDS in vCenter. The issue is that when I add two vNICs to the UnRAID VM and try to bond them within the UnRAID WebUI, I lose connectivity until I remove the bonding configuration from the flash drive.

 

While I get that it's overkill to have 20Gb of bonded bandwidth, I still want to test it as I have multiple servers/clients that access the NFS shares simultaneously.
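
For anyone hitting the same thing, a generic way to see what the bond is actually doing is from the VM console in ESXi (since the network is down at that point). This is plain Linux bonding diagnostics rather than anything UnRAID-specific; bond0 is the usual default bond name, and the path below is where UnRAID keeps its network settings on the flash drive:

    # Show the bond's negotiated state; in 802.3ad (LACP) mode the
    # "Partner Mac Address" stays at 00:00:00:00:00:00 if no LACP
    # partner ever answers the VM's LACPDUs
    cat /proc/net/bonding/bond0

    # The bonding configuration mentioned above is stored on the flash
    # drive in this file
    ls -l /boot/config/network.cfg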

22 minutes ago, IamSpartacus said:

While I get that it's overkill to have 20Gb of bonded bandwidth, I still want to test it as I have multiple servers/clients that access the NFS shares simultaneously.

I'm not sure ESXi supports a 20Gb connection the way you want. When I set up my config some years ago, I read that ESXi can't double the speed of a single connection; it only uses the links in parallel, so each individual flow is still limited to one physical NIC (not sure if anything has changed on the ESXi side since then).
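
If you want to sanity-check what ESXi is doing with the links, the teaming policy and uplink speeds are easy to read back from the host shell. A minimal sketch, assuming a standard vSwitch named vSwitch0 (with a vDS/LACP LAG the hashing is configured on the LAG itself in vCenter instead); whatever the policy, any single flow still rides one physical uplink:

    # Show the load-balancing/failover policy in effect for a standard vSwitch
    # (vSwitch0 is a placeholder name)
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

    # List the physical uplinks and their link speed; a single TCP stream is
    # capped at the speed of whichever uplink its hash maps it to
    esxcli network nic list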
