
Maximum route metric on Linux

Ever wondered about the maximum route metric value you can configure on Linux? The interfaces(5) and ip(8) man pages state that a route metric is a number, but they don’t specify its range.

# ip route add 192.168.113.0/24 via 10.0.10.1 metric 0
# ip route add 192.168.113.0/24 via 10.0.10.1 metric 4294967295
# ip route add 192.168.113.0/24 via 10.0.10.1 metric 4294967296
Error: argument "4294967296" is wrong: "metric" value is invalid

# ip route
192.168.113.0/24 via 10.0.10.1 dev eth0
192.168.113.0/24 via 10.0.10.1 dev eth0  metric 4294967295

It looks like the Linux route metric is an unsigned 32-bit integer, ranging from 0 to 4294967295 (2^32 − 1). As you already know, the route with the lowest metric is preferred.
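
To see the preference in action, here is a minimal sketch: add two routes to the same prefix via different gateways (the second gateway, 10.0.10.254, is made up for illustration) and ask the kernel which one it would pick. Output is abbreviated.

# ip route add 192.168.113.0/24 via 10.0.10.1 metric 100
# ip route add 192.168.113.0/24 via 10.0.10.254 metric 200
# ip route get 192.168.113.5
192.168.113.5 via 10.0.10.1 dev eth0

The kernel resolves the destination through the metric 100 route; the metric 200 route would only take over if the first one were removed.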

Docker DHCP

Docker controls the IP address assignment for network and endpoint interfaces via libnetwork’s IPAM driver(s). On network creation, you can specify which IPAM driver libnetwork needs to use for the network’s IP address management.

Libnetwork’s default IPAM driver assigns IP addresses based on its own database configuration. For the time being, there is no IPAM driver that communicates with an external DHCP server, so you need to rely on Docker’s default IPAM driver to configure container IP addresses and related settings.
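
For illustration, this is roughly how you select the IPAM driver at network creation time; the network name and addressing below are made up:

# docker network create --driver bridge --ipam-driver default --subnet 172.28.0.0/16 --gateway 172.28.0.1 mynet
# docker run --rm --net mynet alpine ip addr show eth0

The container gets an address from 172.28.0.0/16 handed out by the default IPAM driver, not by any DHCP server that may exist on your network.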

The need for external DHCP server support has been identified; however, there is currently no sign that the libnetwork developers are working on it. There are community efforts to produce a DHCP IPAM driver, but they are not yet production ready.

If your production environment critically relies on DHCP for IP address management, you can use pipework for the time being.

Alternatively, you can use both DHCP and Docker’s default IPAM driver on the same Layer 2 segment (a segment that spans both the physical network and the macvlan network on the Docker host), with the DHCP server providing addresses for hosts outside the Docker host and IPAM providing addresses for Docker containers. In this case you should split the IP space Continue reading
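
Although the excerpt cuts off here, the core idea can be sketched with the --ip-range option (all addresses are illustrative): Docker’s IPAM hands out container addresses only from the sub-range you give it, and you scope your DHCP server to the rest of the subnet.

# docker network create -d macvlan --subnet=10.0.10.0/24 --ip-range=10.0.10.128/25 --gateway=10.0.10.1 -o parent=eth0 macnet

Containers then receive addresses from 10.0.10.128 to 10.0.10.254, while the DHCP server can safely lease the lower half of the /24 to hosts on the physical network.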

Docker Networking: macvlans with VLANs

If you have read my introduction to macvlans and tried the basic macvlan bridge mode network configuration, you are aware that a single Docker host network interface can serve as a parent interface to only one macvlan or ipvlan network.

One macvlan, one Layer 2 domain and one subnet per physical interface is, however, a rather serious limitation in a modern virtualization solution. Fortunately, a Docker host sub-interface can serve as the parent interface for a macvlan network. This aligns perfectly with the Linux implementation of VLANs, where each VLAN on an 802.1Q trunk connection is terminated on a sub-interface of the physical interface. You can map each sub-interface to a macvlan network, thus extending the Layer 2 domain from the VLAN into the macvlan network.
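
As a quick illustration of the underlying Linux mechanism (interface names are examples), a sub-interface terminating VLAN 10 on eth0 is created like this:

# ip link add link eth0 name eth0.10 type vlan id 10
# ip link set eth0.10 up

Tagged frames for VLAN 10 arriving on eth0 are delivered to eth0.10, which can in turn serve as the parent interface of a macvlan network.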

Multiple macvlans with VLANs configuration

Docker Macvlan Bridge on VLAN 802.1Q trunk

You have a Docker host with a single eth0 interface connected to a router. The connection between the router and the Docker host is configured on the router as an 802.1Q trunk carrying VLAN 10 and VLAN 20.

Configure VLAN 10 and VLAN 20 on your router. Add the following IP addresses to the Layer 3 interfaces: 10.0.10.1/24 and 2001:db8:babe:10::1/64 for VLAN 10, 10. Continue reading
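
The excerpt ends before the Docker host side, but a minimal sketch looks roughly like this. The VLAN 10 addressing follows the text above; the 10.0.20.0/24 addressing for VLAN 20 is my assumption, mirroring VLAN 10. Note that when you pass a parent of the form eth0.10, Docker creates the VLAN sub-interface for you if it doesn’t already exist.

# docker network create -d macvlan --subnet=10.0.10.0/24 --gateway=10.0.10.1 -o parent=eth0.10 macvlan10
# docker network create -d macvlan --subnet=10.0.20.0/24 --gateway=10.0.20.1 -o parent=eth0.20 macvlan20

Containers attached to macvlan10 share the VLAN 10 Layer 2 domain with the router, and likewise for macvlan20 and VLAN 20.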

Docker Networking: macvlan bridge

Docker takes a slightly different approach with its network drivers, which confuses new users who are familiar with the general terms used by other virtualization products. If you are looking for a way to bridge a container into a physical network, you have come to the right place: you can connect a container to a physical Layer 2 network by using the macvlan driver. If you are looking for a different type of network connection, refer to my docker network drivers post.

Before I begin, you should check some basics on what macvlan is, why it is a better alternative to a Linux bridge, and how it compares with ipvlan.

Important: As of Docker 1.11, the macvlan network driver is part of Docker’s experimental build and is not available in the production release. You can find more info on how to use the experimental build here. If you are looking for a production-ready solution to connect your container to a physical Layer 2 network, you should stick to pipework for the time being.

Last but not least, the macvlan driver requires Linux kernel 3.9 or greater. You can check your kernel version with uname -r. If you’re running RHEL (CentOS, Continue reading
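
The excerpt ends here, but for orientation, creating a macvlan bridge network on an experimental build boils down to something like the following sketch; the subnet, gateway, and names are illustrative:

# docker network create -d macvlan --subnet=10.0.10.0/24 --gateway=10.0.10.1 -o parent=eth0 macvlan_net
# docker run --net=macvlan_net --ip=10.0.10.10 -itd alpine /bin/sh

The container’s eth0 becomes a macvlan sub-interface of the host’s eth0, with its own MAC address, directly reachable from the physical Layer 2 network.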

Docker Container Network Types

Docker provides network connection options similar to those of general virtualization solutions such as VMware products, Hyper-V, KVM, Xen, VirtualBox, etc. However, Docker takes a slightly different approach with its network drivers, which confuses new users who are familiar with the general terms used by other virtualization products. The following table matches general terms with the Docker network drivers you can use to achieve the same type of connectivity for your container.

General Virtualization Term    Docker Network Driver
NAT Network                    bridge
Bridged                        macvlan, ipvlan (experimental since Docker 1.11)
Private / Host-only            bridge
Overlay Network / VXLAN        overlay
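
As a rough illustration of the first and third rows (network names are made up): a plain bridge network gives you NAT connectivity, while the --internal flag yields a private, host-only style network with no external connectivity.

# docker network create -d bridge natnet
# docker network create -d bridge --internal privnet

Containers on natnet reach the outside world through NAT on the Docker host; containers on privnet can only talk to each other.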

Macvlan vs Ipvlan

I’ve covered macvlans in the Bridge vs Macvlan post. If you are new to the macvlan concept, go ahead and read it first.

Macvlan

To recap: Macvlan allows you to configure sub-interfaces (also termed slave devices) of a parent physical Ethernet interface (also termed the upper device), each with its own unique MAC address and, consequently, its own IP address. Applications, VMs and containers can then bind to a specific sub-interface to connect directly to the physical network, using their own MAC and IP addresses.

Linux Macvlan
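
On plain Linux, without Docker in the picture, a macvlan sub-interface is created with iproute2 like this (interface name and address are examples):

# ip link add link eth0 name macvlan0 type macvlan mode bridge
# ip addr add 10.0.10.20/24 dev macvlan0
# ip link set macvlan0 up

The kernel generates a unique MAC address for macvlan0, so the physical network sees it as a separate host behind eth0.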

Macvlan is a near-ideal solution to natively connect VMs and containers to a physical network, but it has its shortcomings:

  • The switch the host is connected to may have a policy that limits the number of different MAC addresses on a physical port. Although you should really work with your network administrator to change the policy, there are times when this might not be possible (or you just need to set up a quick PoC).
  • Many NICs have a limit on the number of MAC addresses they support in hardware. Exceeding the limit may affect performance.
  • IEEE 802.11 doesn’t like multiple MAC addresses on a single client. It is likely macvlan sub-interfaces will be blocked Continue reading
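
All three shortcomings stem from macvlan sub-interfaces having their own MAC addresses. Ipvlan sidesteps them by sharing the parent interface’s MAC address; a minimal sketch with iproute2 (requires kernel 3.19 or newer; names and addresses are illustrative):

# ip link add link eth0 name ipvlan0 type ipvlan mode l2
# ip addr add 10.0.10.30/24 dev ipvlan0
# ip link set ipvlan0 up

Traffic from ipvlan0 leaves eth0 with eth0’s MAC address, so the switch port, the NIC, and an 802.11 access point all see a single MAC.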

Bridge vs Macvlan

Bridge

A bridge is a Layer 2 device that connects two Layer 2 (i.e. Ethernet) segments together. Frames between the two segments are forwarded based on their Layer 2 (i.e. MAC) addresses. Although the two words are still often used in different contexts, a bridge is effectively a switch; all the confusion started 20+ years ago for marketing purposes.

Switching was just a fancy name for bridging, and that was a 1980s technology – or so the thinking went.

A bridge makes forwarding decisions based on its MAC address table. It learns MAC addresses by looking at the source addresses in the frame headers of communicating hosts.

A bridge can be a physical device or implemented entirely in software. The Linux kernel has been able to perform bridging since 1999. By creating a bridge, you can connect multiple physical or virtual interfaces into a single Layer 2 segment. A bridge that connects two physical interfaces on a Linux host effectively turns that host into a physical switch.

Linux Bridge
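
Such a software bridge can be built with iproute2 alone; a minimal sketch (interface names are examples):

# ip link add br0 type bridge
# ip link set eth0 master br0
# ip link set eth1 master br0
# ip link set br0 up

Frames arriving on eth0 are now forwarded to eth1 (and vice versa) according to br0’s MAC address table, just as a hardware switch would do.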

Switches have meanwhile become specialized physical devices, and software bridging had almost lost its place. However, with the advent of virtualization, virtual machines running on physical hosts required a Layer 2 connection to the physical network Continue reading