A year after I started the open-source BGP configuration labs project, I was persuaded to do something similar for IS-IS. The first labs are already online (with plenty of additional ideas already in the queue), and you can run them on any device for which we implemented IS-IS support in netlab.
Want an easy start? Use GitHub Codespaces. Have a laptop with Apple Silicon? We have you covered ;)
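Want to see how little it takes to build such a lab on your own? Here's a minimal sketch (not one of the actual lab topologies; node names and device choice are mine) that runs IS-IS between three FRR containers:

```yaml
# A minimal IS-IS triangle; names and device choice are illustrative
provider: clab          # containerlab also works in GitHub Codespaces
defaults.device: frr
module: [ isis ]

nodes: [ r1, r2, r3 ]
links: [ r1-r2, r2-r3, r1-r3 ]
```

Run `netlab up`, and you get three routers with IS-IS already configured on all of them.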
Later today, at the SINOG 8 meeting, I’ll talk about the BGP labs and the behind-the-scenes magic that ensures the lab configurations are correct (selecting the English version of the website is counterintuitive; choose English from the drop-down field on the right-hand side of the page).
The SINOG 8 presentations will be live-streamed; I should start around 13:15 Central European Time (11:15 GMT; figuring out the local time is left as an exercise for the reader).
In May 2024, I made public the first half of the Network Connectivity and Graph Theory videos by Rachel Traylor.
Now, you can also enjoy the second part of the webinar without a valid ipSpace.net account; it describes trees, spanning trees, and the Spanning Tree Protocol. Enjoy!
Did you know that some vendors use the ancient MPLS/VPN (RFC 4364) control plane when implementing L3VPN with SRv6?
That’s just one of the unexpected tidbits I discovered when explaining why you can’t compare BGP, EVPN, and SRv6.
In the previous blog posts, we explored the simplest possible IBGP-based EVPN design and tried to figure out whether BGP route reflectors do more harm than good. Ignoring that tiny detail for the moment, let’s see how we could add route reflectors to our leaf-and-spine fabric.
As before, this is the fabric we’re working with:
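Here's a minimal netlab sketch of such a fabric (illustrative names; the `bgp.rr` node attribute turns the spines into route reflectors):

```yaml
# Three leaves, two spines, one IBGP autonomous system
module: [ ospf, bgp ]
bgp.as: 65000

nodes:
  s1: { bgp.rr: True }          # spines double as route reflectors
  s2: { bgp.rr: True }
  l1:
  l2:
  l3:

links: [ l1-s1, l1-s2, l2-s1, l2-s2, l3-s1, l3-s2 ]
```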
Ages ago, I described how “traditional” network operating systems used the BGP Routing Information Base (BGP RIB), the system routing table (RIB), and the forwarding table (FIB). Here’s the TL&DR:
Béla Várkonyi wrote a succinct comment explaining why so many customers prefer layer-2 VPNs over layer-3 VPNs:
The reason L2VPN is becoming more popular with service providers and customers is provisioning complexity.
Nick Buraglio and Brian E. Carpenter published a free, open-source IPv6 textbook.
The book seems to be in an early (ever-evolving) stage, but it’s well worth exploring if you’re new to the IPv6 world, and you might consider contributing if you’re a seasoned old-timer.
It would also be nice to have a few online labs to go with it ;)
Urs Baumann loves hands-on teaching and created tons of lab exercises to support his Infrastructure-as-Code automation course.
During the summer, he published some of them in a collection of GitHub repositories and made them work in GitHub Codespaces. An amazing idea well worth exploring!
After discovering that some EVPN implementations support multiple transit VNI values in a single VRF, I had to check whether I could implement a common services L3VPN with EVPN.
TL&DR: It works (on Arista cEOS).
Here are the relevant parts of a netlab lab topology I used in my test (you can find the complete lab topology in netlab-examples GitHub repository):
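The gist of the design, sketched below with hypothetical names and route-target values (the linked repository has the real topology), is asymmetric route targets: every tenant VRF imports the route target exported by the common-services VRF, and the common-services VRF imports the tenants' route targets:

```yaml
# Hypothetical route-target design for a common-services VRF
vrfs:
  common:                       # shared-services VRF
    export: [ '65000:1' ]
    import: [ '65000:1', '65000:100', '65000:101' ]
  red:                          # tenant VRF: its own routes plus common services
    export: [ '65000:100' ]
    import: [ '65000:100', '65000:1' ]
  blue:
    export: [ '65000:101' ]
    import: [ '65000:101', '65000:1' ]
```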
Shipping netlab release 1.9.0 involved running 36 hours of integration tests, among them fifteen VXLAN/EVPN tests covering:
Each test included one or two devices under test plus one or more FRR containers running EVPN/VXLAN with them. The results were phenomenal; apart from a few exceptions, everything Just Worked™️.
I love bashing SRv6, so it’s only fair to post a (technical) counterview, this time coming as a comment from Henk Smit.
There are several benefits of SRv6 that I’ve heard of.
The very first BGP Communities RFC included an interesting idea: let’s tag paths we don’t want to propagate to other autonomous systems. For example, the prefixes received from one upstream ISP should not be propagated to another upstream ISP (sadly, things don’t work that way in reality).
Want to try out that concept? Start the Using No-Export Community to Filter Transit Routes lab in GitHub Codespaces.
After reading the Layer-3-Only EVPN: Behind the Scenes blog post, one might come to an obvious conclusion: the per-VRF EVPN transit VNI must match across all PE devices forwarding traffic for that VRF.
Interestingly, at least some EVPN implementations handle multiple VNIs per VRF without a hitch; I ran my tests in a lab where three switches used a unique per-switch VNI for a common VRF.
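In netlab terms, the lab looked somewhat like this sketch (assuming per-node overrides of the VRF `evpn.transit_vni` attribute are possible; names and values are illustrative):

```yaml
# Hypothetical sketch: three switches, one VRF, three transit VNIs
module: [ vrf, ospf, bgp, vxlan, evpn ]
bgp.as: 65000

vrfs:
  tenant:
    evpn.transit_vni: 20001     # VRF-wide default transit VNI

nodes:
  s1:                           # uses the VRF-wide default (20001)
  s2:
    vrfs.tenant.evpn.transit_vni: 20002
  s3:
    vrfs.tenant.evpn.transit_vni: 20003
```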
Ever since Pawel Foremski talked about BGP Pipe at the RIPE 88 meeting, I wanted to kick its tires in netlab. BGP Pipe is a Go executable that runs under Linux (but also on FreeBSD or macOS), so I could add a Linux VM (or container) to a netlab topology and install the software after the lab starts. However, I wanted to have the BGP neighbor configured on the other side of the link (on the device talking with the BGP Pipe daemon).
I could solve the problem in a few ways:
netlab release 1.9.0 brings tons of new routing features:
Other new goodies include:
As I was doing the final integration tests for netlab release 1.9.0, I stumbled upon a fascinating BGP configuration quirk: where do you configure the allowas-in parameter and why?
BGP runs over TCP, and all parameters related to the TCP session are configured for a BGP neighbor (IPv4 or IPv6 address). Those include the source interface, the local AS number (advertised in the per-session OPEN message that negotiates the address families), the MD5 password (implemented as an MD5 checksum of the TCP packets), GTSM (which relies on the IP TTL field), and EBGP multihop (which increases the IP TTL field).
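netlab makes the same split visible: session parameters are attached to a link (and thus to the BGP neighbors on it). Here's a hedged sketch assuming the attribute names from the bgp.session plugin:

```yaml
# Session parameters attach to the neighbor (link); attribute names
# are assumptions based on the bgp.session plugin
plugin: [ bgp.session ]
module: [ bgp ]

nodes:
  pe: { bgp.as: 65000 }
  ce: { bgp.as: 65100 }

links:
- pe:
  ce:
    bgp.allowas_in: True        # the parameter this post is about
  bgp.password: GuessMe         # MD5 checksum on the TCP session
  bgp.gtsm: True                # TTL-based session protection
```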
Urs Baumann brought me a nice surprise last weekend. He opened a GitHub issue saying, “MPLS works on Arista cEOS containers in release 4.31.2F” and asking whether we could enable netlab to configure MPLS on cEOS containers.
After a few configuration tweaks and a batch of integration tests, I had the results: everything worked. You can use MPLS on Arista cEOS with netlab release 1.9.0 (right now @ 1.9.0-dev2), and I’ll be able to create MPLS labs running in GitHub Codespaces in the not-too-distant future.
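Once you're on that release, enabling MPLS in a lab topology should take no more than something like this sketch (assuming the netlab mpls module with LDP enabled):

```yaml
# LDP-flavored MPLS core on Arista cEOS containers
provider: clab
defaults.device: eos
module: [ ospf, mpls ]
mpls.ldp: True

nodes: [ pe1, p, pe2 ]
links: [ pe1-p, p-pe2 ]
```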
In the previous blog post, I described how to build a lab to explore the layer-3-only EVPN design and asked you to do that and figure out what’s going on behind the scenes. If you didn’t find time for that, let’s do it together in this blog post. To keep it reasonably short, we’ll focus on the EVPN control plane and leave the exploration of the data-plane data structures for another blog post.
The most important thing to understand when analyzing a layer-3-only EVPN/VXLAN network is that the data plane looks like a VRF-lite design: each VRF uses a hidden VLAN (implemented with VXLAN) as the transport VLAN between the PE devices.
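To make that tangible, here's a hedged sketch of a minimal layer-3-only topology (illustrative names; the per-VRF `evpn.transit_vni` is the hidden transport segment):

```yaml
# Layer-3-only EVPN: one VRF, no stretched VLANs
module: [ vrf, ospf, bgp, vxlan, evpn ]
bgp.as: 65000

vrfs:
  tenant:
    evpn.transit_vni: 10042     # the per-VRF "hidden VLAN" segment

nodes: [ pe1, pe2 ]
links:
- pe1-pe2                       # underlay link
- pe1:                          # stub interface in the tenant VRF
  vrf: tenant
- pe2:
  vrf: tenant
```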
Wes left an interesting comment on the Migrating a Data Center Fabric to VXLAN blog post:
The benefit of VXLAN is mostly scalability, so if your enterprise network is not scaling… just don’t. The migration path from VLANs is to just keep using VLANs. The (vendor-driven) networking industry has a huge blind spot about this.
Paraphrasing Dinesh Dutt’s famous AutoCon 1 remark: I couldn’t disagree with you more.