Author Archives: Ivan Pepelnjak

FRRouting Loopback Interfaces and OSPF Costs

TL&DR: FRRouting advertises the IP prefix on the lo loopback interface with zero cost.

Let’s start with the background story. When we added FRRouting container support to netlab, someone decided to use lo0 as the loopback interface name. That device doesn’t exist in a typical Linux container, but it’s not hard to add it:

$ ip link add lo0 type dummy
$ ip link set dev lo0 up
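
Once the interface exists, you can put the loopback prefix that OSPF should advertise on it (10.0.0.42/32 is just a made-up example address):

$ ip addr add 10.0.0.42/32 dev lo0   # made-up example prefix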

Unintended Consequences of IPv6 SLAAC

One of my friends is running a large IPv6 network and has already run into IPv6 neighbor cache exhaustion on some of his switches. Digging deeper into the root causes, he discovered:

In my larger environments, I see significant neighbor table cache entries, especially on network segments with hosts that make many long-term connections. These hosts have 10 to 20 addresses that maintain state over days or weeks to accomplish their processes.
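
To get a feeling for how many entries that creates, you could count the IPv6 neighbor cache entries on a Linux box (switches have equivalent show commands) with something along these lines:

$ ip -6 neigh show | wc -l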

What’s going on? A perfect storm of numerous unrelated annoyances:

Explore: Why No IPv6? (IPv6 SaaS)

Lasse Haugen had enough of the never-ending “we can’t possibly deploy IPv6” excuses and decided to start the IPv6 Shame-as-a-Service website, documenting top websites that still don’t offer IPv6 connectivity.

His list includes well-known entries like twitter.com, azure.com, and github.com plus a few unexpected ones. I find cloudflare.net not having an AAAA DNS record truly hilarious. Someone within the company that flawlessly provided my website with IPv6 connectivity for years obviously still has some reservations about their own dogfood ;)
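
You can check the claims yourself, for example with a quick AAAA lookup using dig; as long as the domain has no AAAA record, the command prints nothing:

$ dig +short AAAA cloudflare.net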

Stateful Firewall Cluster High Availability Theater

Dmitry Perets wrote an excellent description of how typical firewall cluster solutions implement control-plane high availability, in particular how they use the routing protocol Graceful Restart feature (slightly edited):


Most of the HA clustering solutions for stateful firewalls that I know implement a single-brain model, where the entire cluster is seen by the outside network as a single node. The node that is currently primary runs the control plane (hence, I call it single-brain). Sessions and the forwarding plane are synchronized between the nodes.

EVPN Designs: VXLAN Leaf-and-Spine Fabric

In this series of blog posts, we’ll explore numerous routing protocol designs that can be used to implement EVPN-with-VXLAN L2VPNs in a leaf-and-spine data center fabric. Every design will come with a companion netlab topology you can use to create a lab and explore the behavior of leaf and spine switches.

Our leaf-and-spine fabric will have four leaves and two spines (but feel free to adjust the lab topology fabric parameters to build larger fabrics). The fabric will provide layer-2 connectivity to orange and blue VLANs. Two hosts will be connected to each VLAN to check end-to-end connectivity.

Using wemulate with netlab

An RSS hiccup brought an old blog post from Urs Baumann into my RSS reader. I’m always telling networking engineers that it’s essential to set up realistic WAN environments when testing distributed software, and wemulate (a nice tc front-end) seemed like a perfect match. Even better, it runs in a container – an ideal component for a netlab-generated virtual WAN network.

wemulate acts as a bump in the wire; it uses Linux bridges to connect two container interfaces. We’ll use it to introduce jitter into an IP subnet:

┌──┐   ┌────────┐   ┌──┐
│h1├───┤wemulate├───┤h2│
└──┘   └────────┘   └──┘                       
◄──────────────────────►
     192.168.33.0/24    
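
Since wemulate is a tc front-end, the impairments it configures boil down to netem qdiscs on the bridged interfaces; a rough manual equivalent (eth1 and the 20ms/5ms values are made-up examples) would be:

$ tc qdisc add dev eth1 root netem delay 20ms 5ms   # 20 ms delay with 5 ms jitter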

Repost: EBGP-Mostly Service Provider Network

Daryll Swer left a long comment describing how he designed a Service Provider network running in numerous private autonomous systems. While I might not agree with everything he wrote, it’s an interesting idea and conceptually pretty similar to what we did 25 years ago (IBGP without IGP, running across physical interfaces, with every router being a route-reflector client of every other router), or how some very large networks were using BGP confederations.

Just remember (as someone from Cisco TAC told me in those days) that “you might be the only one in the world doing it and might hit bugs no one has seen before.”

FRRouting Claims IBGP Loopbacks Are Inaccessible

Last week, I explained the differences between FRRouting and more traditional networking operating systems in scenarios where OSPF and IBGP advertise the same prefix:

  • Traditional networking operating systems enter only the OSPF route into the IP routing table.
  • FRRouting enters both the OSPF and the IBGP route into the IP routing table.
  • On all platforms I’ve tested, only the OSPF route gets into the forwarding table.

One could conclude that it’s perfectly safe to advertise the same prefixes in OSPF and IBGP. The OSPF routes will be used within the autonomous system, and the IBGP routes will be propagated over EBGP to adjacent networks. Well, one would be surprised 🤦‍♂️
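
If you want to reproduce the weirdness, compare what FRRouting thinks about a loopback prefix with what ended up in the Linux kernel routing table (10.0.0.3/32 is a made-up loopback prefix):

$ vtysh -c 'show ip route 10.0.0.3/32'   # FRRouting RIB view
$ ip route show 10.0.0.3/32              # kernel forwarding table view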

netlab: Building Leaf-and-Spine Fabrics with the Fabric Plugin

netlab release 1.7.0 added the fabric plugin that simplifies building lab topologies with leaf-and-spine fabrics. All you have to do to build a full-blown leaf-and-spine fabric is:

  • Specify the default device type
  • Enable the fabric plugin
  • Specify the number of leaves and spines in the fabric.

For example, the following lab topology builds a fabric of Arista cEOS containers with two spines and four leaves:

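A minimal sketch of such a topology (assuming the fabric plugin takes fabric.spines and fabric.leafs parameters, and that the containerlab provider is used to run the cEOS containers) could look like this:

provider: clab          # run the lab devices as containers with containerlab
defaults.device: eos    # Arista EOS (cEOS container under clab)

plugin: [ fabric ]      # enable the fabric topology-generation plugin
fabric.spines: 2        # assumed fabric-size parameters
fabric.leafs: 4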