
Projects to Work On – the AI Recommendations

Vini Motta decided to use AI trained on ipSpace.net content to find out which projects it would recommend for becoming employable in 2025. Here are the results he sent me; my comments are inline on a gray background.

Network Automation with Python
Project: Automate basic network tasks like device configuration, backup, or monitoring using Python scripts.
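
If you need a starting point, here is a minimal sketch using the Netmiko library; the device type, address, and credentials are placeholders you would replace with your own.

Configuration backup sketch (Python + Netmiko)
# Minimal configuration backup using Netmiko (pip3 install netmiko).
# All device details below are placeholders.
from datetime import date
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",   # Netmiko platform name
    "host": "192.0.2.1",          # documentation-range placeholder
    "username": "admin",
    "password": "secret",
}

with ConnectHandler(**device) as conn:
    config = conn.send_command("show running-config")

with open(f"backup-{device['host']}-{date.today()}.cfg", "w") as backup:
    backup.write(config)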

Point-to-Point Links in Virtual Labs

In the previous blog post, I described the usual mechanisms used to connect virtual machines or containers in a virtual lab, and the drawbacks of using Linux bridges to connect virtual network devices.

In this blog post, we’ll see how KVM/QEMU/libvirt/Vagrant use UDP tunnels to connect virtual machines, and how containerlab creates point-to-point vEth links between Linux containers.
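
containerlab creates those vEth pairs through the Linux netlink interface. As a rough illustration of what happens under the hood, this is how you could create one yourself with the pyroute2 library (the interface names are made up, and a real lab tool would move each end into a container's network namespace):

Creating a vEth pair (Python + pyroute2, needs root)
from pyroute2 import IPRoute

ipr = IPRoute()
ipr.link("add", ifname="r1-eth1", kind="veth", peer="r2-eth1")

# A real lab tool would now move each end into the network namespace
# of its container; here we just bring both ends up.
for ifname in ("r1-eth1", "r2-eth1"):
    idx = ipr.link_lookup(ifname=ifname)[0]
    ipr.link("set", index=idx, state="up")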

Tagged VLAN 1 In a Trunk Is a Really Bad Idea

It all started with a netlab issue describing different interpretations of VLAN 1 in a trunk. While Cumulus NVUE (the way the netlab configuration template configures it) assumes that VLAN 1 in a trunk is tagged, Arista EOS assumes it's the native VLAN.

At that point, I should have said, “that’s crazy, we shouldn’t allow that,” and enforced the “VLAN 1 has to be used as a native VLAN” rule. Alas, 20/20 hindsight never helped anyone.

TL&DR: Do not use VLAN 1 in VLAN trunks; if you have to, use it as a native VLAN.
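
On platforms using the classic IOS-like syntax, that advice boils down to something like this sketch (not a complete switch configuration):

VLAN trunk with VLAN 1 as the native VLAN (IOS-like syntax)
interface GigabitEthernet0/1
  switchport mode trunk
  switchport trunk native vlan 1
  switchport trunk allowed vlan 1,10,20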

Group Similar Links in netlab Topologies

In the Concise Link Descriptions blog post, I described various data formats that you could use to concisely list nodes attached to a link. Today, we’ll focus on a mechanism that helps you spot errors in your topology: a dictionary of links.

Imagine you have a large topology with dozens of links, and you get an error saying, “there is this problem with links[17]”. It must be great fun counting the links to find which one triggered the error, right?
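
I won't spoil the details (check the netlab documentation for the exact syntax), but conceptually the topology could look something like this purely illustrative snippet, with the dictionary keys showing up in error messages instead of numeric indices:

A dictionary of links (illustrative sketch, not exact netlab syntax)
links:
  core:
  - r1-r2
  - r2-r3
  edge:
  - r1-h1
  - r2-h2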

Please Wait While We’re Preparing Your Interfaces

Once a virtual machine running a network operating system boots, you’d expect its data-plane interfaces to be operational, right? Some vendors disagree. It takes over a minute for some network operating systems to figure out they have this thing called interfaces.

I would love to figure out what takes them so long (a minute is an eternity on modern CPUs), but I guess we’ll never know.
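
Whatever the root cause, a lab orchestration tool has little choice but to poll the device until the interfaces appear; conceptually, something along these lines (device_has_interfaces is a placeholder for a device-specific probe):

Waiting for data-plane interfaces (conceptual Python sketch)
import time

def wait_for_interfaces(device_has_interfaces, timeout=120, interval=5):
    # Poll a device-specific check until it succeeds or we give up
    deadline = time.time() + timeout
    while time.time() < deadline:
        if device_has_interfaces():
            return True
        time.sleep(interval)
    return False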

Behind the Scenes

netlab uses two device provisioning mechanisms: it can start virtual machines with Vagrant or containers with containerlab. Some of those containers might use KVM/QEMU to run a hidden virtual machine (see also: RFC 1925 rule 6a).

Links in Virtual Labs

There are three major ways to connect network devices in the physical world:

  • Point-to-point links between devices (usually using some variant of Ethernet)
  • Multi-access layer-1 networks running some IEEE 802.x encapsulation on top of that (GPON, WiFi, Ethernet hubs)
  • Multi-access switched layer-2 network (dumb switches, hopefully running some STP variant)

Implementing these connections in virtual labs is a bit harder than one might think, as all virtualization solutions assume you plan to run virtual servers connected to Ethernet segments.
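
For example, the default “virtual servers on Ethernet segments” approach usually means creating a Linux bridge and attaching the virtual NICs to it, roughly like this pyroute2 sketch (the interface names are invented, and the tap interfaces would be created by the hypervisor):

Attaching virtual NICs to a Linux bridge (Python + pyroute2, needs root)
from pyroute2 import IPRoute

ipr = IPRoute()
ipr.link("add", ifname="br-lab", kind="bridge")
bridge = ipr.link_lookup(ifname="br-lab")[0]
ipr.link("set", index=bridge, state="up")

# Attach the (hypothetical) VM tap interfaces to the bridge
for ifname in ("vm1-tap0", "vm2-tap0"):
    idx = ipr.link_lookup(ifname=ifname)[0]
    ipr.link("set", index=idx, master=bridge, state="up")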

netlab 1.9.4: Bug fixes, VRRPv3 on Junos

During the last three weeks, we were busy squashing bugs (device configuration fixes, other bug fixes). Some were recent; others were ancient pests uncovered by better integration tests. The end result: netlab release 1.9.4.

netlab release 1.9.4 passed hundreds of integration tests and should be a better choice than the previous 1.9 releases. To upgrade, execute pip3 install --upgrade networklab.

New to netlab? Start with the Getting Started document and the installation guide, or run it in a GitHub codespace.

Update: 2025-02-03

We still missed a few quirks :( Release 1.9.4-post1 addresses those (and, unfortunately, I’m pretty sure there will be more).

The Curious Case of the BGP Connect State

I got this question from Paul:

Have you ever seen a BGP peer in the “Connect” state? In 20 years, I have never been able to see or reproduce this state, nor any mention in a debug/log. I am starting to believe that all the documentation is BS, and this does not exist.

The BGP Finite State Machine (FSM) (at least the one defined in RFC 4271 and amended in RFC 9687) is “a bit” hard to grasp, but the basics haven’t changed since the ancient days of RFC 1771:
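
To put the Connect state in context, here’s a toy rendition of the relevant slice of the FSM (grossly simplified; the real FSM has many more events and timers). The session sits in Connect only while an outgoing TCP connection attempt is outstanding, which is why you’ll rarely catch it:

A toy slice of the BGP FSM (Python)
from enum import Enum, auto

class BgpState(Enum):
    IDLE = auto()       # waiting for a Start event
    CONNECT = auto()    # outgoing TCP connection attempt in progress
    ACTIVE = auto()     # TCP attempt failed; listening and retrying
    OPEN_SENT = auto()  # TCP session up, OPEN message sent

# A few of the transitions from RFC 4271 section 8
TRANSITIONS = {
    (BgpState.IDLE, "start"): BgpState.CONNECT,
    (BgpState.CONNECT, "tcp_established"): BgpState.OPEN_SENT,
    (BgpState.CONNECT, "tcp_connection_fails"): BgpState.ACTIVE,
    (BgpState.ACTIVE, "connect_retry_expires"): BgpState.CONNECT,
    (BgpState.ACTIVE, "tcp_established"): BgpState.OPEN_SENT,
}

def on_event(state: BgpState, event: str) -> BgpState:
    # Anything this toy model does not cover sends the session to Idle
    return TRANSITIONS.get((state, event), BgpState.IDLE)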

Cisco Modeling Labs and Infrastructure-as-Code

Dalton Ortega, Cisco Modeling Labs Product Manager, sent me the following email as a response to my Configuring IP Addresses Won't Make You an Expert blog post:

First, your statement on Autonetkit is indeed correct. We had removed that from the product due to lack of popularity. That being said, in our roadmap we are looking at methods to reintroduce on-the-fly configuration as well as enhancing our sample labs library to make getting started with CML easier.

Secondly, CML can be run in full IaC mode because of the API-first build. In fact, many of our customers are using CML as an automated test/validation bed for their CI/CD pipelines. Tools like Ansible and Terraform are available to facilitate this inside CML too. For more details, read:

It seems it should be relatively easy to create a cml provider to generate a Terraform file from the netlab topology and use it to start a lab in CML. Any volunteers?
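
To illustrate the easy part of that job, here’s a hand-wavy sketch that walks a netlab-like topology dictionary and emits Terraform JSON. The cml2_node resource name and its arguments are pure assumptions; consult the CML Terraform provider documentation for the real ones.

Generating Terraform JSON from a topology (hand-wavy Python sketch)
import json

topology = {
    "nodes": {"r1": {"device": "iosv"}, "r2": {"device": "iosv"}},
}

# WARNING: the resource name and its arguments are assumptions,
# not the documented CML provider schema.
resources = {
    "cml2_node": {
        name: {"label": name, "nodedefinition": attrs["device"]}
        for name, attrs in topology["nodes"].items()
    }
}

with open("main.tf.json", "w") as tf_file:
    json.dump({"resource": resources}, tf_file, indent=2)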

Worth Reading: Drunken Plagiarists

George V. Neville-Neil published a fantastic, must-read summary of the various code copilots’ usefulness on ACM Queue: The Drunken Plagiarists.

It pretty much mirrors my experience (plus, I got annoyed when the semi-relevant suggestions kept kicking me out of the flow) and reminds me of the early days of OpenFlow, when nobody wanted to listen to old grunts like myself telling the world it was all hype and little substance.

Cisco VRRPv3 IPv6 Configuration Sucks

I spent way too much time ironing out the VRRPv3 quirks on the dozen (or so) platforms supported by netlab. This is the second blog post describing some of the ridiculous stuff I had to deal with.

This is how you configure the basic VRRPv3 parameters for IPv4 on a Cisco IOS/XE device:

VRRPv3 IPv4 configuration on Cisco IOS
interface GigabitEthernet0/1
  vrrp 217 address-family ipv4
    address 172.16.33.42

You would expect something similar for IPv6, right? You’d be right if you were working with Arista EOS:
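
From memory, the EOS configuration looks along these lines, with IPv6 being a carbon copy of IPv4 (a sketch; verify the exact keywords against the EOS documentation):

VRRPv3 configuration on Arista EOS (from-memory sketch)
interface Ethernet1
  vrrp 217 ipv4 172.16.33.42
  vrrp 217 ipv6 2001:db8:cafe:1::42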

Use BGP Outbound Route Filters (ORF) for IP Prefixes

When a BGP router cannot fit the whole BGP table into its forwarding table (FIB), we often use inbound filters to limit the amount of information the device keeps in its BGP table. That’s usually a waste of resources:

  • The BGP neighbor has to send information about all prefixes in its BGP table
  • The device with an inbound filter wastes additional CPU cycles to drop many incoming updates.

Wouldn’t it be better for the device with an inbound filter to push that filter to its BGP neighbors?
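
That’s precisely what the prefix-list ORF capability does. On Cisco IOS, the device that wants to push its inbound filter to the neighbor would use something like this sketch (AS numbers, addresses, and the prefix list are made up):

Prefix-list ORF on Cisco IOS (a sketch)
ip prefix-list ONLY-DEFAULT permit 0.0.0.0/0
!
router bgp 65000
  neighbor 10.1.1.1 remote-as 65100
  address-family ipv4
    neighbor 10.1.1.1 capability orf prefix-list send
    neighbor 10.1.1.1 prefix-list ONLY-DEFAULT in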

Sturgeon’s Law, VRRPv3 Edition

I just wasted several days trying to figure out how to make the dozen (or so) platforms for which we implemented VRRPv3 in netlab work together. This is the first in a series of blog posts describing the ridiculous stuff we discovered during that journey.

The idea was pretty simple:

  • Create a lab with the tested device and a well-known probe connected to the same subnet.
  • Disable VRRP (or the interface) on the probe and check IPv4 and IPv6 connectivity through the tested device, verifying that it takes over ownership of the VRRP MAC and IP addresses (see the probe sketch after this list).
  • Reenable VRRP on the probe and change its VRRP priority several times to check the state transitions through INIT / BACKUP (lower priority) / MASTER (change in priority) / BACKUP (preempting after a change in priority).
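
The connectivity part of those checks can be as simple as pinging the virtual IP addresses from a host behind the tested device; a trivial sketch (the addresses are placeholders):

VRRP virtual IP reachability probe (Python)
import subprocess

def vip_reachable(vip: str, count: int = 3) -> bool:
    # ping on a modern Linux host handles IPv4 and IPv6 targets
    result = subprocess.run(
        ["ping", "-c", str(count), vip],
        capture_output=True,
    )
    return result.returncode == 0

for vip in ("172.16.33.1", "2001:db8:cafe:33::1"):
    print(vip, "OK" if vip_reachable(vip) else "FAILED")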

IBGP Is the Better EBGP

Whenever I was explaining how one could build EBGP-only data center fabrics, someone would inevitably ask, “But could you do that with IBGP?”

TL&DR: Of course, but that does not mean you should.

Anyway, leaving behind the land of sane designs, let’s trot down the rabbit trail of IBGP-only networks.

Concise Link Descriptions in netlab Topologies (Part 1)

One of the goals we’re always trying to achieve when developing netlab features is to make the lab topologies as concise as possible. Among other things, netlab supports numerous ways of describing links between lab devices, allowing you to be as succinct as possible.

A bit of a background first:

  • In the end, netlab collects all links in the links list before starting the data transformation process.
  • Every entry in the links list is a dictionary. That dictionary can contain link attributes and must contain a list of interfaces connected to the link.
  • Every interface must have a node (specifying the lab device it belongs to) and could contain additional interface attributes.
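
For example, the fully spelled-out format of a single link looks like this (the attribute values are just an illustration):

A fully spelled-out link in a netlab topology
links:
- mtu: 1500                # link attribute
  interfaces:
  - node: r1               # mandatory: the lab device
    ipv4: 10.0.0.1/30      # optional interface attribute
  - node: r2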