
[6.3.0+] How to set up Dockers without sharing the unRAID IP address


ken-ji    34
Posted (edited)

OK. Your dockers are on a macvlan network attached to br0.1, which appears to be VLAN id 1 under br0.

By design, macvlan subinterfaces cannot talk to the host on the parent interface. Also, as you are trying to use VLAN id 1, does your switch have VLAN support?

I ask because VLANs can't see each other unless you have an L3 router (or some VLAN bridge) in your network.

In a nutshell, packets on a VLAN subinterface (br0.x) get tagged when they exit the host (br0); the tag is standardized (802.1Q) but makes the packet look like garbage to devices without VLAN support.

 

This is the reason my samples covered the simple case of using br0 and br1 directly. Using a subinterface assumes VLAN support, and therefore some understanding of what you are trying to achieve.
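For reference, here is a minimal sketch of that simple case - a macvlan network created directly on br0 with a single NIC and no VLANs. The subnet, gateway, IP range, and image name are placeholders; substitute your own LAN values:

    # create a macvlan docker network whose parent is the br0 bridge
    docker network create -d macvlan \
        --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
        --ip-range=192.168.1.128/25 \
        -o parent=br0 homenet

    # attach a container to it with its own fixed address on the LAN
    docker run -d --name=pi_hole --network=homenet --ip=192.168.1.200 pihole/pihole

The --ip-range option keeps docker's dynamic assignments out of the part of the subnet your DHCP server hands out.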

 

On 2/15/2017 at 2:24 PM, ken-ji said:

How to set up Dockers to have their own IP address without sharing the host IP address:

This is only valid for the unRAID 6.3 series and later.

 

 

Some caveats:

  • With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but will work with all other containers on the new docker network.
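A quick way to see that caveat in action (the addresses here are hypothetical): ping the container's dedicated IP from the unRAID console and from any other machine on the LAN - only the latter should get replies:

    # from the unRAID host itself: no replies, macvlan blocks host <-> container traffic
    ping -c 3 192.168.1.200

    # from another machine on the LAN: replies as normal
    ping -c 3 192.168.1.200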

 

 

EDIT: Upon re-checking, you're not using VLANs. Since your dockers cannot access the main network, can you run "ip -d link show"? I need a little more detail to understand why it's not working.

also "docker inspect pi_hole"

Edited by ken-ji
Corrected...

Kewjoe    4
31 minutes ago, ken-ji said:

OK. Your dockers are on a macvlan network attached to br0.1, which appears to be VLAN id 1 under br0.

By design, macvlan subinterfaces cannot talk to the host on the parent interface. Also, as you are trying to use VLAN id 1, does your switch have VLAN support?

I ask because VLANs can't see each other unless you have an L3 router (or some VLAN bridge) in your network.

In a nutshell, packets on a VLAN subinterface (br0.x) get tagged when they exit the host (br0); the tag is standardized (802.1Q) but makes the packet look like garbage to devices without VLAN support.

This is the reason my samples covered the simple case of using br0 and br1 directly. Using a subinterface assumes VLAN support, and therefore some understanding of what you are trying to achieve.

EDIT: Upon re-checking, you're not using VLANs. Since your dockers cannot access the main network, can you run "ip -d link show"? I need a little more detail to understand why it's not working.

Also run "docker inspect pi_hole".

 

"ip -d link show"

 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0 
    ipip remote any local any ttl inherit nopmtudisc 
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0 promiscuity 0 
    gre remote any local any ttl inherit nopmtudisc 
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    gretap remote any local any ttl inherit nopmtudisc 
5: ip_vti0@NONE: <NOARP> mtu 1364 qdisc noop state DOWN mode DEFAULT group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0 
    vti remote any local any ikey 0.0.0.0 okey 0.0.0.0 
6: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP mode DEFAULT group default qlen 1000
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 2 
    bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on 
37: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 0 
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 
38: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:7b:d0:d6:90 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 
68: vethcffe18d@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 6a:b9:83:53:9e:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
70: vethf36d2cb@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether c2:c0:45:06:69:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 1 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
72: veth85c2f81@if71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether b2:fb:fe:40:09:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 2 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
74: veth8841ad0@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 2e:f3:d0:ca:18:04 brd ff:ff:ff:ff:ff:ff link-netnsid 3 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
76: veth43be249@if75: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 3a:e7:11:e4:3a:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 4 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
78: vethb23822a@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 5e:15:4e:4f:e8:27 brd ff:ff:ff:ff:ff:ff link-netnsid 5 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
84: veth8ea80ff@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether a6:b5:c8:54:17:43 brd ff:ff:ff:ff:ff:ff link-netnsid 8 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
86: veth00d62c7@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 36:f6:b3:2f:15:7a brd ff:ff:ff:ff:ff:ff link-netnsid 9 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
88: veth6cf8a31@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 86:34:77:55:ad:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 11 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
90: veth1cfac99@if89: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether da:8f:52:a8:1f:25 brd ff:ff:ff:ff:ff:ff link-netnsid 12 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
92: veth8e022a0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 02:47:3d:0a:27:07 brd ff:ff:ff:ff:ff:ff link-netnsid 13 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
94: veth16c6eaa@if93: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 4a:a5:da:63:12:00 brd ff:ff:ff:ff:ff:ff link-netnsid 14 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
96: vethff638b9@if95: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether da:6b:d2:56:31:75 brd ff:ff:ff:ff:ff:ff link-netnsid 15 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
100: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    bridge forward_delay 200 hello_time 200 max_age 2000 ageing_time 30000 stp_state 1 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q 
101: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:35:e4:53 brd ff:ff:ff:ff:ff:ff promiscuity 1 
    tun 
    bridge_slave state disabled priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on 
102: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:44:b4:34 brd ff:ff:ff:ff:ff:ff promiscuity 1 
    tun 
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on 
104: veth8e6e43c@if103: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether ae:b8:63:19:65:4a brd ff:ff:ff:ff:ff:ff link-netnsid 10 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
106: vethe9ce18e@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 6a:b6:eb:40:23:1b brd ff:ff:ff:ff:ff:ff link-netnsid 16 promiscuity 1 
    veth 
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on 
110: br0.1@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 70:85:c2:2e:e1:ac brd ff:ff:ff:ff:ff:ff promiscuity 0 
    vlan protocol 802.1Q id 1 <REORDER_HDR> 

And the docker inspect output for pi_hole is attached.

 

BTW, I enabled the VLAN after I posted, while troubleshooting. I meant to disable it but forgot. I was having the issues before I enabled it.

 

pihole inspect.txt

ken-ji    34

Then my answer stands.

* You can't use VLANs unless you have a VLAN-capable switch.

* With only a single NIC, dockers with dedicated IPs cannot talk to the host and vice versa.


Kewjoe    4
Posted (edited)
6 minutes ago, ken-ji said:

Then my answer stands.

* You can't use VLANs unless you have a VLAN-capable switch.

* With only a single NIC, dockers with dedicated IPs cannot talk to the host and vice versa.

 

This is in your original post:

"With only a single NIC, and no VLAN support on your network, it is impossible for the host unRAID to talk to the containers and vice versa; the macvlan driver specifically prohibits this. This situation prevents a reverse proxy docker from proxying unRAID, but will work with all other containers on the new docker network. "

 

I thought the latter part of that paragraph meant it would still work as long as I wasn't trying to have the container talk to unRAID. Shouldn't the container still be able to talk to the outside world? Or did I misunderstand what you're saying?

 

In what cases does your single-NIC example work? Is it not feasible if you don't have a VLAN-capable network? I do have a second NIC and can try your two-NIC recommendation, but I was hoping to dedicate the second NIC to a VM running pfSense (which I haven't started yet).

Edited by Kewjoe

ken-ji    34

Something is wrong with your setup right now...
How did you create the br0.1 interface? The one in your ip output was created as a VLAN subinterface.

You can reconfigure the pihole container back, delete the homenet docker network with "docker network rm homenet", then stop the array and disable the VLAN network. Then try again.
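As a rough sketch (using the names from this thread; the subnet and gateway are placeholders for your LAN):

    # detach pi_hole and remove the custom network
    docker network disconnect homenet pi_hole
    docker network rm homenet

    # after stopping the array and disabling VLANs in Network Settings,
    # recreate the network on br0 itself instead of br0.1
    docker network create -d macvlan \
        --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
        -o parent=br0 homenet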


Kewjoe    4
3 minutes ago, ken-ji said:

Something is wrong with your setup right now...
How did you create the br0.1 interface? The one in your ip output was created as a VLAN subinterface.

You can reconfigure the pihole container back, delete the homenet docker network with "docker network rm homenet", then stop the array and disable the VLAN network. Then try again.

 

When I initially started, VLAN was disabled and I still had the same problems. I only enabled it afterwards as a troubleshooting step. I will disable it and try again, but I don't think it will help. To answer your question about br0.1: I followed the same steps you outline in your OP, but instead of br0, which I couldn't use because it said it was already being used by another interface, I tried br0.1. That's obviously not right, but I was trying to see how to get this working. From an earlier post, this is what happens when I follow your instructions for the single-NIC solution:

 

Error response from daemon: network dm-ba57b5a60b33 is already using parent interface br0

The only other thing I can think of is that I tried 6.4 RC7 for a little while; I wonder if something happened there before I backed out to 6.3.5 again.
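If it helps, one way to see which leftover network is holding br0 before removing it (the template path is my assumption and may vary between Docker versions):

    # anything beyond bridge/host/none is a custom network
    docker network ls

    # show which parent interface the leftover network was created on
    docker network inspect -f '{{index .Options "parent"}}' dm-ba57b5a60b33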

ken-ji    34

Hmm. Run "docker network ls" - you should have only 3 networks:

bridge

host

none

 

run "docker rm name" on all the others

I'm guessing dm-ba57b5a60b33 is an auto-generated docker network from the 6.4 series. Docker persists network settings in the docker.img file across unRAID upgrades.

 

Also, telling docker to use br0.1 when it has not been configured will make docker create it anyway, but how it gets set up is not clear - which can cause problems that are hard to debug. (AFAIK it will try to create a macvlan subinterface, which makes the containers use a subinterface of a subinterface.)
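For example, with the leftover network reported earlier:

    docker network ls                     # expect only: bridge, host, none
    docker network rm dm-ba57b5a60b33     # remove the auto-generated leftover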


Kewjoe    4
12 minutes ago, ken-ji said:

Hmm. Run "docker network ls" - you should have only 3 networks:

bridge

host

none

 

run "docker rm name" on all the others

I'm guessing dm-ba57b5a60b33 is an auto-generated docker network from the 6.4 series. Docker persists network settings in the docker.img file across unRAID upgrades.

 

Also, telling docker to use br0.1 when it has not been configured will make docker create it anyway, but how it gets set up is not clear - which can cause problems that are hard to debug. (AFAIK it will try to create a macvlan subinterface, which makes the containers use a subinterface of a subinterface.)

 

Do you think regenerating the Docker image will help? I may also install 6.4 RC7 again and try to undo whatever I did in the short time I had it installed. Either way I'll try to get myself back to a stock setup. Thanks for the help so far. I seem to have gotten myself into a bit of a mess :)

Kewjoe    4

BTW, just to be sure: I know VLAN should be disabled, but should I have bridging enabled or not?

Sent from my ONEPLUS A3000 using Tapatalk

bonienl    163

Have you tried unRAID 6.4rc? It has built-in support for creating custom networks based on macvlan.

 

The Docker implementation prohibits any container using a macvlan connection from communicating with the host system. Containers can communicate with each other or with the default gateway to reach the outside.

 

ken-ji    34

AFAIK, bridging is optional for dockers with macvlan support, but for me it's easier to keep bridging turned on.


Kewjoe    4
7 hours ago, bonienl said:

Have you tried unRAID 6.4rc? It has built-in support for creating custom networks based on macvlan.

 

The Docker implementation prohibits any container using a macvlan connection from communicating with the host system. Containers can communicate with each other or with the default gateway to reach the outside.

 

 

I did and things went horribly wrong :) BTW, great job on the new templates.

 

 

Kewjoe    4
5 hours ago, ken-ji said:

AFAIK, bridging is optional for dockers with macvlan support, but for me it's easier to keep bridging turned on.

 

So I went back to 6.4 RC7 and tried to undo whatever I might have done there. I deleted the macvlan network (docker network rm) that was there in 6.4 RC7 - not sure if that was something I had created, or something RC7 creates automatically. I cleaned things up and went back to 6.3.5, but the same thing happens. I disabled VLAN, so that's gone now. With bridging enabled, it won't let me create the docker network using br0, saying another interface is already using it. With bridging disabled it won't let me create it for another reason I can't recall; I'll have to try it again. Ah well :)

 

Do you know if wiping my docker image will reset anything that 6.4 RC7 might have done?

CHBMB    171

I do believe the macvlan stuff is configured within the docker.img

Sent from my LG-H815 using Tapatalk


Kewjoe    4
1 minute ago, CHBMB said:

I do believe the macvlan stuff is configured within the docker.img

Sent from my LG-H815 using Tapatalk
 

 

So if 6.4 RC7 added some macvlan settings that got stuck in the docker.img and I go back to 6.3.5, wiping the docker.img would put me back to a fresh 6.3.5 state?

bonienl    163
1 minute ago, Kewjoe said:

 

So if 6.4 RC7 added some macvlan settings that got stuck in the docker.img and I go back to 6.3.5, wiping the docker.img would put me back to a fresh 6.3.5 state?

 

Yes, CHBMB is correct: the macvlan information is stored in the docker image, and this image needs to be deleted to start with a clean sheet.
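Roughly, the clean-sheet procedure is: stop the Docker service under Settings -> Docker, delete the image file, then re-enable Docker and reinstall your containers from their templates. The path below is a common default and may differ on your system - check the Docker settings page for the actual location:

    # with the Docker service stopped, deleting the image wipes all
    # container and custom network state stored inside it
    rm /mnt/user/system/docker/docker.img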

 


Kewjoe    4
Posted (edited)

Thanks Gents. I'll give that a go and report back.

Edited by Kewjoe

Kewjoe    4
On 5/1/2017 at 2:32 PM, bonienl said:

Perhaps you would be interested to know that macvlan support is added in the upcoming version of unRAID; it allows you to select additional 'custom' networks from the GUI.

 

 

@bonienl is there info on how to find this option in 6.4 rc8q? I checked the network settings and it looks pretty… I tried Ken-ji's method in 6.4 rc8q and it works, but it doesn't survive a reboot for some reason. Wondering if there is a better way to do it.
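In case it's useful to anyone staying on 6.3.x: one hypothetical way to make a hand-built network survive reboots is to recreate it once the Docker service is up, for example from a script run at array start (the User Scripts plugin can do this). Names and addresses follow the homenet example above:

    # recreate the custom network only if it does not already exist
    docker network inspect homenet >/dev/null 2>&1 || \
    docker network create -d macvlan \
        --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
        -o parent=br0 homenet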

bonienl    163
10 hours ago, Kewjoe said:

 

@bonienl is there info on how to find this option in 6.4 rc8q? I checked the network settings and it looks pretty… I tried Ken-ji's method in 6.4 rc8q and it works, but it doesn't survive a reboot for some reason. Wondering if there is a better way to do it.

 

Most of it happens automatically.

 

When the Docker service is started, it scans all available network connections and builds a list of custom networks for those connections which have valid IP settings.

 

When creating/editing a Docker container, the custom network(s) are automatically available in the dropdown list for network type. Choose a custom network there and optionally set a fixed IP address; otherwise a dynamic address is assigned (you can set the range for dynamic assignments under Docker settings to avoid conflicts with the 'regular' DHCP server).
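For example, once the service has generated a custom network for br0, attaching a container with a fixed address from the command line looks roughly like this (the network name, address, and image are placeholders; the GUI dropdown does the same thing):

    # run a container on the auto-created br0 custom network with a static IP
    docker run -d --name=pi_hole --network=br0 --ip=192.168.1.200 pihole/pihole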

 

 

Edited by bonienl

