Vini Motta decided to use AI on ipSpace.net content to find out which projects it would recommend working on to become employable in 2025. Here are the results he sent me; my comments are inline on a gray background.
In the previous blog post, I described the usual mechanisms used to connect virtual machines or containers in a virtual lab, and the drawbacks of using Linux bridges to connect virtual network devices.
In this blog post, we’ll see how KVM/QEMU/libvirt/Vagrant use UDP tunnels to connect virtual machines, and how containerlab creates point-to-point vEth links between Linux containers.
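Under the hood, a point-to-point vEth link is nothing exotic; you can build one yourself with iproute2. A minimal sketch using two network namespaces as stand-ins for containers (all names are made up):

ip netns add clab-r1                              # two namespaces standing in for containers
ip netns add clab-r2
ip link add r1-eth1 type veth peer name r2-eth1   # create the vEth pair
ip link set r1-eth1 netns clab-r1                 # push one end into each namespace
ip link set r2-eth1 netns clab-r2
ip netns exec clab-r1 ip link set r1-eth1 up      # bring both ends up
ip netns exec clab-r2 ip link set r2-eth1 up

Anything sent on r1-eth1 appears on r2-eth1 and vice versa; no bridge is involved.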
It all started with a netlab issue describing different interpretations of VLAN 1 in a trunk: while Cumulus NVUE (the way the netlab configuration template configures it) assumes that VLAN 1 in a trunk is tagged, Arista EOS assumes it’s the native VLAN.
At that point, I should have said, “that’s crazy, we shouldn’t allow that,” and enforced the “VLAN 1 has to be used as a native VLAN” rule. Alas, 20/20 hindsight never helped anyone.
TL&DR: Do not use VLAN 1 in VLAN trunks; if you have to, use it as a native VLAN.
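For illustration, this is how you’d make VLAN 1 the (untagged) native VLAN on an Arista EOS trunk; a sketch with a made-up interface name and VLAN list:

interface Ethernet1
   switchport mode trunk
   switchport trunk allowed vlan 1,100,200
   switchport trunk native vlan 1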
The initial IS-IS labs covered the IS-IS basics. It’s time to move on to interesting IS-IS features (and why you might want to use them), starting with passive IS-IS interfaces.
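As a teaser, here’s what a passive interface looks like on Cisco IOS (NET and interface names are made up): the prefix configured on GigabitEthernet0/2 is advertised in IS-IS, but no hellos are sent on it, so no adjacency can form there.

router isis
 net 49.0001.0000.0000.0001.00
 passive-interface GigabitEthernet0/2
!
interface GigabitEthernet0/1
 ip router isis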
In the Concise Link Descriptions blog post, I described various data formats that you could use to concisely list nodes attached to a link. Today, we’ll focus on a mechanism that helps you spot errors in your topology: a dictionary of links.
Imagine you have a large topology with dozens of links, and you get an error saying, “there is this problem with links[17]”. It must be great fun counting the links to find which one triggered the error, right?
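The obvious fix: give links names. Here’s a sketch of what a dictionary of links could look like (hedging here: check the netlab documentation for the exact syntax; the link and node names are made up):

links:
  core_r1_r2:            # the key becomes the link name reported in error messages
    r1:
    r2:
  core_r2_r3: [ r2, r3 ]

An error message referring to links.core_r1_r2 is much easier to track down than one referring to links[17].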
Once a virtual machine running a network operating system boots, you’d expect its data-plane interfaces to be operational, right? Some vendors disagree. It takes over a minute for some network operating systems to figure out they have this thing called interfaces.1
I would love to figure out what takes them so long (a minute is an eternity on modern CPUs), but I guess we’ll never know.
netlab uses two device provisioning mechanisms: it can start virtual machines with Vagrant or containers with containerlab. Some of those containers might use KVM/QEMU to run a hidden virtual machine (see also: RFC 1925 rule 6a).
A few days ago, someone mentioned Arista released a cEOS EFT image running on Arm. Of course, I had to test whether it would run on Apple Silicon.
TL&DR: YES 🎉 🎉
Here’s what you have to do to make the Arista cEOS container work with netlab running on an Ubuntu VM on Apple silicon:
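The gist of it is the usual cEOS import workflow (a sketch; the file name and tag are placeholders, and the details are in the blog post):

# import the Arm cEOS image downloaded from arista.com
docker import cEOS64-lab-4.33.0F.tar.xz ceos:4.33.0F

…and then point netlab at that image, for example with a one-line default in the lab topology:

defaults.devices.eos.clab.image: ceos:4.33.0F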
There are three major ways to connect network devices in the physical world:
Implementing these connections in virtual labs is a bit harder than one might think, as all virtualization solutions assume you plan to run virtual servers connected to Ethernet segments.
During the last three weeks, we were busy squashing bugs (device configuration fixes and other bug fixes). Some were recent; others were ancient pests uncovered by better integration tests. The end result: netlab release 1.9.4.
netlab release 1.9.4 passed hundreds of integration tests and should be a better choice than the previous 1.9 releases. To upgrade, execute pip3 install --upgrade networklab.
We still missed a few quirks :( Release 1.9.4-post1 addresses those (and, unfortunately, I’m pretty sure there will be more).
I got this question from Paul:
Have you ever seen a BGP peer in the “Connect” state? In 20 years, I have never been able to see or reproduce this state, nor any mention in a debug/log. I am starting to believe that all the documentation is BS, and this does not exist.
The BGP Finite State Machine (FSM) (at least the one defined in RFC 4271 and amended in RFC 9687) is “a bit” hard to grasp, but the basics haven’t changed since the ancient days of RFC 1771:
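In a nutshell, a BGP session moves through the Idle, Connect, Active, OpenSent, OpenConfirm, and Established states. Connect means the router has started a TCP three-way handshake and is waiting for it to complete, which also hints at how one might catch it in a lab: silently drop the BGP TCP packets so the SYN times out instead of being rejected. A hedged Cisco IOS sketch (not from the original post; addresses and names are made up):

ip access-list extended DROP-BGP
 deny   tcp any any eq bgp
 deny   tcp any eq bgp any
 permit ip any any
!
interface GigabitEthernet0/1
 ip access-group DROP-BGP in

While TCP keeps retransmitting the SYN, show ip bgp summary should report the neighbor in the Connect state; if the peer actively refuses the connection with a RST, the session lands in Active instead.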
Dalton Ortega, Cisco Modeling Labs Product Manager, sent me the following email as a response to my Configuring IP Addresses Won't Make You an Expert blog post:
First, your statement on Autonetkit is indeed correct. We had removed that from the product due to lack of popularity. That being said, in our roadmap we are looking at methods to reintroduce on-the-fly configuration as well as enhancing our sample labs library to make getting started with CML easier.
Secondly, CML can be run in full IaC mode because of the API-first build. In fact, many of our customers are using CML as an automated test/validation bed for their CI/CD pipelines. Tools like Ansible and Terraform are available to facilitate this inside CML too. For more details, read:
George V. Neville-Neil published a fantastic, must-read summary of the various code copilots’ usefulness on ACM Queue: The Drunken Plagiarists.
It pretty much mirrors my experience (plus, I got annoyed when the semi-relevant suggestions kept kicking me out of the flow) and reminds me of the early days of OpenFlow, when nobody wanted to listen to old grunts like myself telling the world it was all hype and little substance.
I spent way too much time ironing out the VRRPv3 quirks on the dozen (or so) platforms supported by netlab. This is the second blog post describing some of the ridiculous stuff I had to deal with.
This is how you configure the basic VRRPv3 parameters for IPv4 on a Cisco IOS/XE device:
interface GigabitEthernet0/1
 vrrp 217 address-family ipv4
  address 172.16.33.42
You would expect something similar for IPv6, right? You’d be right if you were working with Arista EOS:
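From memory, the Arista EOS configuration looks along these lines (a sketch with made-up interface, group, and addresses; note how the IPv6 command mirrors the IPv4 one):

interface Ethernet1
   vrrp 217 ipv4 172.16.33.42
   vrrp 217 ipv6 fe80::217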
When a BGP router cannot fit the whole BGP table into its forwarding table (FIB), we often use inbound filters to limit the amount of information the device keeps in its BGP table. That’s usually a waste of resources:
Wouldn’t it be better for the device with an inbound filter to push that filter to its BGP neighbors?
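That’s roughly what BGP Outbound Route Filtering (ORF, RFC 5291) does: a router advertises its inbound prefix filter to a neighbor, and the neighbor applies it to its outbound updates. A hedged Cisco IOS sketch (AS numbers, neighbor address, and prefix-list name are made up):

router bgp 65000
 neighbor 10.1.1.1 remote-as 65001
 !
 address-family ipv4
  neighbor 10.1.1.1 prefix-list ONLY-AGGREGATES in
  neighbor 10.1.1.1 capability orf prefix-list send

Once the ORF capability is negotiated, the neighbor prunes the filtered prefixes before sending them, so they never cross the wire.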
I just wasted several days trying to figure out how to make the dozen (or so) platforms for which we implemented VRRPv3 in netlab work together. This is the first in a series of blog posts describing the ridiculous stuff we discovered during that journey.
The idea was pretty simple:
The believers in the There Be Four Layers religion think everything below IP is just a blob of stuff dealing with physical things:
People steeped in a slightly more nuanced view of the world in which IP is not the centerpiece of the universe might tell you that the blob of stuff we need is two things:
Whenever I was explaining how one could build EBGP-only data center fabrics, someone would inevitably ask, “But could you do that with IBGP?”
TL&DR: Of course, but that does not mean you should.
Anyway, leaving behind the land of sane designs, let’s trot down the rabbit trail of IBGP-only networks.
One of the goals we’re always trying to achieve when developing netlab features is to make the lab topologies as concise as possible1. Among other things, netlab supports numerous ways of describing links between lab devices, allowing you to be as succinct as possible.
A bit of a background first:
The initial videos of the Leaf-and-Spine Fabric Architectures webinar are now public. You can watch the Leaf-and-Spine Fabric Basics, Physical Fabric Design, and Layer-3 Fabrics sections without an ipSpace.net account.
One of the recipes for easy IS-IS deployments claims that you should use only level-2 routing (although most vendors enable level-1 and level-2 routing by default).
What does that mean, and why does it matter? You’ll find the answers in the Optimize Simple IS-IS Deployments lab exercise.
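On most platforms, level-2-only routing is a single configuration command; for example, on Cisco IOS (the process tag is made up):

router isis lab
 is-type level-2-only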