Docker Networking Cookbook
Overview of this book

Networking functionality in Docker has changed considerably since its first release, evolving to offer a rich set of built-in networking features, as well as an extensible plugin model allowing for a wide variety of networking functionality. This book explores Docker networking capabilities from end to end. Begin by examining the building blocks used by Docker to implement fundamental container networking before learning how to consume built-in networking constructs as well as custom networks you create on your own. Next, explore common third-party networking plugins, including detailed information on how these plugins interoperate with the Docker engine. Consider available options for securing container networks, as well as a process for troubleshooting container connectivity. Finally, examine advanced Docker networking functions and their relevant use cases, tying together everything you need to succeed with your own projects.

Making connections


Up until this point, we've focused on using physical cables to make connections between interfaces. But how would we connect two interfaces that don't have physical connections? For this purpose, Linux networking has an internal interface type called Virtual Ethernet (VETH) pairs. VETH interfaces are always created in pairs, making them act as a sort of virtual patch cable. VETH interfaces can also have IP addresses assigned to them, which allows them to participate in a layer 3 routing path. In this recipe, we'll examine how to define and implement VETH pairs by building on the lab topology we've used in previous recipes.

Getting ready

In order to view and manipulate networking settings, you'll want to ensure that you have the iproute2 toolset installed. If it's not present on the system, it can be installed with the following command:

sudo apt-get install iproute2

In order to make network changes to the host, you'll also need root-level access. This recipe will continue the lab topology from the previous recipe. All of the prerequisites mentioned earlier still apply.
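As a quick sanity check before making any changes, a short snippet like the following (a sketch, not from the recipe itself) can confirm the toolset is available:

```shell
# Verify the iproute2 'ip' utility is on the PATH before attempting
# any network changes; print the install hint if it's missing.
if command -v ip >/dev/null 2>&1; then
  echo "iproute2 is installed"
else
  echo "iproute2 is missing; run: sudo apt-get install iproute2"
fi
```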

How to do it…

Let's once again modify the lab topology, so we can make use of VETH pairs:

Once again, the configuration on the hosts net2 and net3 will remain unchanged. On the host net1, we're going to implement VETH pairs in two different ways.

On the connection between net1 and net2, we're going to use two different bridges and connect them together with a VETH pair. The bridge host_bridge1 will remain on net1 and maintain its IP address of 172.16.10.1. We're also going to add a new bridge named edge_bridge1. This bridge will not have an IP address assigned to it but will have net1's interface facing net2 (eth1) as a member of it. At that point, we'll use a VETH pair to connect the two bridges allowing traffic to flow from net1 across both bridges to net2. In this case, the VETH pair will be used as a layer 2 construct.

On the connection between net1 and net3, we're going to use a VETH pair in a slightly different fashion. We'll add a new bridge called edge_bridge2 and put net1's interface facing the host net3 (eth2) on that bridge. Then we'll provision a VETH pair and place one end on the bridge edge_bridge2. We'll then assign the IP address previously assigned to host_bridge2 to the host side of the VETH pair. In this case, the VETH pair will be used as a layer 3 construct.

Let's start on the connection between net1 and net2 by adding the new edge bridge:

user@net1:~$ sudo ip link add edge_bridge1 type bridge

Then, we'll add the interface facing net2 to edge_bridge1:

user@net1:~$ sudo ip link set dev eth1 master edge_bridge1

Next, we'll configure the VETH pair that we'll use to connect host_bridge1 and edge_bridge1. VETH interfaces are always defined in pairs: creating one spawns two new interface objects that are reliant on each other, so if you delete one end of the VETH pair, the other end gets deleted right along with it. To define the VETH pair, we use the ip link add subcommand:

user@net1:~$ sudo ip link add host_veth1 type veth peer name edge_veth1

Note

Note that the command defines the name for both sides of the VETH connection.

We can see their configuration using the ip link show subcommand:

user@net1:~$ ip link show
…<Additional output removed for brevity>…
13: edge_veth1@host_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 0a:27:83:6e:9a:c3 brd ff:ff:ff:ff:ff:ff
14: host_veth1@edge_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether c2:35:9c:f9:49:3e brd ff:ff:ff:ff:ff:ff
user@net1:~$

Note that we have two entries showing an interface for each side of the defined VETH pair. The next step is to place the ends of the VETH pair in the correct place. In the case of the connection between net1 and net2, we want one end on host_bridge1 and the other on edge_bridge1. To do this, we use the same syntax we used for assigning interfaces to bridges:

user@net1:~$ sudo ip link set host_veth1 master host_bridge1
user@net1:~$ sudo ip link set edge_veth1 master edge_bridge1

We can verify the mappings using the ip link show command:

user@net1:~$ ip link show
…<Additional output removed for brevity>…
9: edge_veth1@host_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop master edge_bridge1 state DOWN mode DEFAULT group default qlen 1000
    link/ether f2:90:99:7d:7b:e6 brd ff:ff:ff:ff:ff:ff
10: host_veth1@edge_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop master host_bridge1 state DOWN mode DEFAULT group default qlen 1000
    link/ether da:f4:b7:b3:8d:dd brd ff:ff:ff:ff:ff:ff
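As an aside, the interface@peer notation in this output already tells you which interface each end is paired with. A small parsing sketch, operating on a sample line copied from the output above (the index and names are from this lab and will differ on your system):

```shell
# Extract both ends of a VETH pair from a single 'ip link show' line.
# Splitting on runs of spaces, colons, and '@' puts the interface name
# in field 2 and its peer in field 3.
line='10: host_veth1@edge_veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop master host_bridge1 state DOWN'
iface=$(echo "$line" | awk -F'[ :@]+' '{print $2}')
peer=$(echo "$line" | awk -F'[ :@]+' '{print $3}')
echo "$iface is paired with $peer"   # → host_veth1 is paired with edge_veth1
```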

The last thing we need to do is bring up the interfaces associated with the connection:

user@net1:~$ sudo ip link set host_bridge1 up
user@net1:~$ sudo ip link set edge_bridge1 up
user@net1:~$ sudo ip link set host_veth1 up
user@net1:~$ sudo ip link set edge_veth1 up

To reach the dummy interface off of net2, you'll need to add the route back since it was once again lost during the reconfiguration:

user@net1:~$ sudo ip route add 172.16.10.128/26 via 172.16.10.2

At this point, we should have full reachability to net2 and its dummy0 interface through net1.
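A bounded reachability check such as the following can confirm this (a sketch: 172.16.10.2 is net2's address from this lab topology, and the -c/-W flags keep a failed test from hanging):

```shell
# Bounded reachability test toward net2; addresses assume this
# recipe's lab topology. Prints a one-line verdict either way.
if ping -c 2 -W 1 172.16.10.2 >/dev/null 2>&1; then
  echo "net2 reachable"
else
  echo "net2 unreachable: check that the bridges, VETH ends, and eth1 are all up"
fi
```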

On the connection between host net1 and net3, the first thing we need to do is clean up any unused interfaces. In this case, that would be host_bridge2:

user@net1:~$ sudo ip link delete dev host_bridge2

Then, we need to add the new edge bridge (edge_bridge2) and associate net1's interface facing net3 to the bridge:

user@net1:~$ sudo ip link add edge_bridge2 type bridge
user@net1:~$ sudo ip link set dev eth2 master edge_bridge2

We'll then define the VETH pair for this connection:

user@net1:~$ sudo ip link add host_veth2 type veth peer name edge_veth2

In this case, we're going to leave the host-side end of the VETH pair off the bridge and instead assign an IP address directly to it:

user@net1:~$ sudo ip address add 172.16.10.65/25 dev host_veth2

Just like any other interface, we can see the assigned IP address by using the ip address show dev command:

user@net1:~$ ip addr show dev host_veth2
12: host_veth2@edge_veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 56:92:14:83:98:e0 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.65/25 scope global host_veth2
       valid_lft forever preferred_lft forever
    inet6 fe80::5492:14ff:fe83:98e0/64 scope link
       valid_lft forever preferred_lft forever
user@net1:~$

We will then place the other end of the VETH pair into edge_bridge2 connecting net1 to the edge bridge:

user@net1:~$ sudo ip link set edge_veth2 master edge_bridge2

And once again, we turn up all the associated interfaces:

user@net1:~$ sudo ip link set edge_bridge2 up
user@net1:~$ sudo ip link set host_veth2 up
user@net1:~$ sudo ip link set edge_veth2 up

Finally, we re-add our route to get to net3's dummy interface:

user@net1:~$ sudo ip route add 172.16.10.192/26 via 172.16.10.66

After the configuration is completed, we should once again have full reachability into the environment and all the interfaces. If there are any issues with your configuration, you should be able to diagnose them through the use of the ip link show and ip addr show commands.
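One useful trick when diagnosing such issues is to list bridge membership directly; ip link show accepts a master filter for exactly this purpose. A sketch (the bridge names assume this recipe's topology):

```shell
# List the interfaces attached to each bridge in this recipe's topology.
# 'ip link show master <bridge>' filters links by their master device;
# errors for bridges that don't exist yet are suppressed.
for br in host_bridge1 edge_bridge1 edge_bridge2; do
  echo "members of $br:"
  ip -o link show master "$br" 2>/dev/null | awk -F': ' '{print "  " $2}'
done
```

An end that was never attached to its intended bridge simply won't appear in the listing, which narrows the problem down quickly.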

If you're ever wondering what the other end of a VETH pair is, you can use the ethtool command-line tool to find the other side of the pair. For instance, assume that we create a VETH pair without specifying names, as follows:

user@docker1:/$ sudo ip link add type veth
user@docker1:/$ ip link show
…<Additional output removed for brevity>…
16: veth1@veth2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 12:3f:7b:8d:33:90 brd ff:ff:ff:ff:ff:ff
17: veth2@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 9e:9f:34:bc:49:73 brd ff:ff:ff:ff:ff:ff

While it's obvious in this example, we could use ethtool to determine the interface index, or ID, of either side of this VETH pair:

user@docker1:/$ ethtool -S veth1
NIC statistics:
     peer_ifindex: 17
user@docker1:/$ ethtool -S veth2
NIC statistics:
     peer_ifindex: 16
user@docker1:/$

This can be a handy troubleshooting tool later on, when determining the ends of a VETH pair is not as obvious as it is in these examples.
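Putting the two pieces together, a small helper can resolve a VETH peer's name from its index. This is a sketch: the function name is mine, and it assumes both ethtool and iproute2 are installed. The parsing step is demonstrated against the sample ethtool output above.

```shell
# Resolve the peer of a veth interface: read peer_ifindex from
# 'ethtool -S', then look that index up in 'ip -o link' output to get
# the interface name (stripping the '@peer' suffix).
veth_peer() {
  idx=$(ethtool -S "$1" | awk '/peer_ifindex/ {print $2}')
  ip -o link show | awk -F': ' -v i="$idx" '$1 == i {print $2}' | cut -d@ -f1
}

# The ethtool parsing step alone, shown against the sample output above:
printf 'NIC statistics:\n     peer_ifindex: 17\n' | awk '/peer_ifindex/ {print $2}'   # → 17
```

With the pair from the earlier example in place, veth_peer veth1 would print veth2, and vice versa.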