UNRAID network issues



Hi

 

I need some help troubleshooting a niggling network issue with UNRAID 6.2.x and 6.3.0 RC releases.

My UNRAID host is connected to a 1Gbps HomePlug adaptor (I have both Solwise and Devolo units).

I have set up one network bridge for my VMs with an MTU of 9000, and all my guest OS NICs are set to jumbo frames (9000).

On first boot up, I have absolutely no issues connecting to the local LAN or the Internet, all speeds are as expected.

 

However, at seemingly random times (most noticeably whilst online gaming), the network connection drops! It only drops for about 1-3 minutes and then restores itself.

When this happens, I try to isolate whether it's a VM, host or HomePlug issue: when the connection drops, I ping the router's default gateway from two different VMs and from the UNRAID console, and the pings fail from all three. I then switch from the Devolo HomePlug to the Solwise HomePlug and the same issue occurs, so I know it's not the HomePlug adaptors. This has been happening for many months now, and in that time I have also replaced my motherboard.

 

I am at a loss as to why I get these intermittent network disconnects.

 

Are there any diagnostics I can run in the background in UNRAID so that I can capture the state of the NIC before and after the issue occurs?
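Something along these lines is what I have in mind, as a rough sketch (the interface name, gateway IP and log path are guesses that would need adapting):

    #!/bin/bash
    # Rough sketch: poll NIC state and gateway reachability every 10 seconds,
    # timestamped, so the log shows the NIC's state before/after a drop.
    IF=eth0                       # assumed interface name
    GW=192.168.1.1                # assumed gateway IP
    LOG=/boot/logs/nic-watch.log  # on the flash drive so it survives a reboot
    while true; do
        echo "=== $(date '+%F %T') ===" >> "$LOG"
        ip -s link show "$IF" >> "$LOG"   # link state + error/drop counters
        ethtool "$IF" | grep -E 'Speed|Duplex|Link detected' >> "$LOG"
        ping -c 1 -W 2 "$GW" > /dev/null 2>&1 || echo "PING FAILED" >> "$LOG"
        sleep 10
    done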

 

My NICs (I tried both, with the same result):

53:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)

00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I218-V [8086:15a1] (rev 05)

 

Network Settings:

Static IP

MTU 9000

Google DNS servers

Bonding - No

 

VM Network on both VMs:

  <interface type='bridge'>

      <mac address='xx:xx:xx:xx:xx:xx'/>

      <source bridge='br0'/>

      <target dev='vnet0'/>

      <model type='virtio'/>

      <alias name='net0'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
  </interface>


I'll try to make this clear.

 

I set a 9000 MTU on UNRAID's eth0. The bridge interface (br0) that gets created for the VMs then has an MTU of 9000 as well.

You cannot set an MTU on br0 manually; it needs to be set on the physical interface, in this case my Intel 1Gbps NIC.

Now that the bridge used for my VMs is at 9000 MTU, I can set the 10Gbps vNICs to 9000 MTU too. This lets me transfer files between the UNRAID host and the VMs at speeds greater than 300MB/s.

I know this works because I do it all the time.
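You can verify it from the unRAID console; for example (assuming the stock interface names):

    ip link show eth0   # should report "mtu 9000" once set in Network Settings
    ip link show br0    # the bridge picks up the same MTU
    brctl show br0      # confirms eth0 (and the VMs' vnetX ports) are members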



I can only guess, but I'm fairly sure the HomePlug adapters were never designed for jumbo frames, so when you start transferring bulk data between unRAID/VMs and your local network/Internet, the connection can drop until the link resyncs/recovers.

 

In your particular case, I'm going to recommend setting up an additional vNIC on the VMs, connected to unRAID via the original virbr0 bridge.

[caveat: I'm not sure if a custom bridge is better]

That way the VMs can talk to unRAID using the second virtual NIC with jumbo frames, and still talk to the rest of the LAN/Internet using regular frames.
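Roughly, you'd add a second <interface> block to each VM's XML alongside the existing br0 one; a sketch (the MAC is a placeholder and the PCI slot just needs to be a free one):

  <interface type='bridge'>
      <mac address='xx:xx:xx:xx:xx:xx'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>

Then set the jumbo-frame MTU only on that second NIC inside the guest, and leave the br0-facing NIC at 1500.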

 



Interesting idea. What is the difference between the br0 and virbr0 virtual interfaces? And how would I create a new bridge? I only get the option in Settings/VM Manager to choose one of those two. Please explain in a bit more detail what I need to do.

Thanks

 



br0 is a "physical bridge". eth0 (or bond0 - the bonding of all your eth ports) is part of this bridge and allows traffic to go out to the LAN.

virbr0 is a "logical bridge": there are no physical connections to the LAN, but unRAID and any VMs can talk over this network securely (it's all software, though the OS network stack is still involved).
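You can see both bridges from the console with brctl; example output (the bridge IDs and vnetX names will vary with your setup):

    brctl show
    # bridge name   bridge id           STP enabled   interfaces
    # br0           8000.xxxxxxxxxxxx   no            eth0
    #                                                 vnet0
    # virbr0        8000.xxxxxxxxxxxx   yes           virbr0-nic
    #                                                 vnet1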

Sorry about the confusion regarding custom bridges - please ignore that, as there should be no issue there.

 

Now, since br0 is connected to the LAN, using MTU 9000 will cause problems whenever large packets are sent to the LAN: unRAID (or the VMs) will not fragment packets until they exceed 9000 bytes (their MTU), but other LAN devices, including the router, will barf at the oversized packets (their MTU is only 1500). This is what's causing your issues (I think).
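You can demonstrate this from the unRAID console with don't-fragment pings (the gateway IP is just an example):

    # 1472 bytes of ICMP data + 28 bytes of headers = a 1500-byte packet: should succeed
    ping -c 3 -M do -s 1472 192.168.1.1
    # 8972 + 28 = a 9000-byte packet: will fail against any 1500-MTU device in the path
    ping -c 3 -M do -s 8972 192.168.1.1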

 


Just disable jumbo frames.

Even with 10Gbps and 40Gbps NICs there is little difference in "real world" throughput.

 

1Gbps NIC: you'll see ~115MB/sec (125MB/sec max, minus overheads)

10Gbps NIC: ~1.125GB/sec
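(For reference: 1Gbps ÷ 8 = 125MB/sec on the wire, and Ethernet framing plus TCP/IP headers eat roughly 5-8%, which is where the ~115MB/sec figure comes from; the same ratio scales up to 10Gbps.)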

 

Jumbo frames are a bit of a minefield unless you 300% know what you're doing, what the workload is, and what the hardware/software in the path is.
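If you want to test before changing the setting permanently, you can drop the MTU from the console (this resets on reboot; interface names assumed):

    ip link set dev eth0 mtu 1500
    ip link set dev br0 mtu 1500   # the bridge follows its lowest member, but set it explicitly to be safe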

 

 


