
"Can you suggest some specs for a server for my network labs?" is probably the question I get asked the most. People reach out all the time asking for recommendations. The thing is, I never really know their exact situation or what they’re trying to do in their lab. So, I usually just share what I have and what worked best for me, and let them decide what fits their setup.
In this post, I’ll go over the cheapest way to build your own network lab.

You don’t need expensive hardware to build a solid network lab. A used mini PC with decent specs is more than enough to run tools like Proxmox, Continue reading
This guide covers the steps I follow when adding or updating NTC templates. Contributing to a project on GitHub is still a learning curve for me; the days of learning CLI by repetition seem long gone, so when using or contributing to any of these NetOps-type tools I keep guides like this one. With so many new and alien things to know, and the sporadic nature of how I use them, it is a bit of a struggle to remember everything otherwise.
I’ve previously mentioned my io-uring webserver tarweb. I’ve now added another interesting aspect to it.
As you may or may not be aware, on Linux it’s possible to send a file descriptor from one process to another over a unix domain socket. That’s actually pretty magic if you think about it.
You can also send unix credentials and SELinux security contexts, but that’s a story for another day.
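If you haven’t seen it in action, here’s a minimal sketch of what fd passing looks like. This isn’t tarweb code; it just uses the socket.send_fds()/recv_fds() helpers that Python 3.9+ ships as wrappers around SCM_RIGHTS:

```python
import os
import socket

# A connected pair of Unix domain sockets shared by parent and child.
parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

if os.fork() == 0:
    # Child: receive the descriptor and use it as if we had opened the file ourselves.
    parent_sock.close()
    msg, fds, _flags, _addr = socket.recv_fds(child_sock, 1024, maxfds=1)
    with os.fdopen(fds[0], "rb") as f:
        print("child got:", msg, f.read(32))
    os._exit(0)

# Parent: open a file and ship its descriptor across (SCM_RIGHTS under the hood).
child_sock.close()
with open("/etc/hostname", "rb") as f:
    socket.send_fds(parent_sock, [b"here is an fd"], [f.fileno()])
os.wait()
```

The child never opened /etc/hostname, yet it ends up with a perfectly usable descriptor for it.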
I want to run some domains using my webserver “tarweb”. But not all. And I want to host them on a single IP address, on the normal HTTPS port 443.
Simple, right? Just use nginx’s proxy_pass?
Ah, but I don’t want nginx to stay in the path. After SNI (read: “browser saying which domain it wants”) has been identified, I want the TCP connection to go directly from the browser to the correct backend.
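Conceptually, the handoff looks something like the sketch below. This is not tarweb’s actual io_uring code: the backend socket paths are made up, the extract_sni() helper is deliberately simplified, and it assumes the whole ClientHello shows up in the first segment, with no error handling at all.

```python
import socket

BACKENDS = {  # hypothetical mapping: SNI -> backend's Unix control socket
    "blog.example.com": "/run/tarweb.sock",
    "app.example.com": "/run/other-server.sock",
}

def extract_sni(hello: bytes) -> str:
    """Very simplified ClientHello parse: walk to the extensions and pull out
    server_name (extension type 0). Real code must handle partial records,
    different TLS versions, and malformed input."""
    # TLS record header (5) + handshake header (4) + client version (2) + random (32)
    pos = 5 + 4 + 2 + 32
    pos += 1 + hello[pos]                                 # session_id
    pos += 2 + int.from_bytes(hello[pos:pos + 2], "big")  # cipher_suites
    pos += 1 + hello[pos]                                 # compression_methods
    end = pos + 2 + int.from_bytes(hello[pos:pos + 2], "big")
    pos += 2
    while pos < end:
        ext_type = int.from_bytes(hello[pos:pos + 2], "big")
        ext_len = int.from_bytes(hello[pos + 2:pos + 4], "big")
        if ext_type == 0:                                 # server_name extension
            name_len = int.from_bytes(hello[pos + 7:pos + 9], "big")
            return hello[pos + 9:pos + 9 + name_len].decode()
        pos += 4 + ext_len
    raise ValueError("no SNI in ClientHello")

listener = socket.create_server(("", 443))                # needs privileges for :443
while True:
    client, _addr = listener.accept()
    hello = client.recv(4096, socket.MSG_PEEK)            # peek: bytes stay in the kernel buffer
    backend_path = BACKENDS[extract_sni(hello)]
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as ctrl:
        ctrl.connect(backend_path)
        socket.send_fds(ctrl, [b"conn"], [client.fileno()])  # hand the TCP connection over
    client.close()                                        # the backend now owns the connection
```

Because the ClientHello was only peeked, it’s still sitting in the socket’s receive buffer when the backend gets the duplicated descriptor, so the backend can run the TLS handshake itself and the router drops out of the path entirely.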
I’m sure somewhere on the internet there’s already an SNI router that does this, but all the ones I found stay in line with the request path, adding a hop.
A few reasons:
We’re excited to announce the release of Calico v3.31, which brings a wave of new features and improvements.
For a quick look, here are the key updates and improvements in this release:
- eBPF: automatically disables kube-proxy via the kubeProxyManagement field, and adds bpfNetworkBootstrap for auto API endpoint detection.
- Differentiated Services Code Point (DSCP) support: prioritize traffic by marking packets (e.g., EF for VoIP).
- QoSPolicy API for declarative traffic control.
- Handles routing (IP-in-IP, no-encap) directly — no BIRD required!
- natOutgoingExclusions config for granular NAT management.
Continue reading
Is quantum really an immediate and dangerous threat to current cryptography systems, or are we pushing to hastily adopt new technologies we won’t necessarily need for a few more years? Should we allow the quantum pie to bake a few more years before slicing a piece and digging in? George Michaelson joins Russ and Tom to discuss.
I love well-organized small conferences, so it wasn’t hard to persuade me to have another talk at the DEEP Conference in Zadar, Croatia. This time, I talked about the role of digital twins in disaster recovery/avoidance testing. You might know my take on networking digital twins; after that, I only had enough time to focus on why bandwidth and latency matter, and how you can emulate limited bandwidth and add latency.
Here’s a cool feature every routing protocol should have: a flag that tells everyone a node is going down, giving them time to adjust their routing tables before traffic flow is disrupted.
OSPF never had such a feature; common implementations set the cost of all interfaces to a very high value to emulate it. BGP got it (the Graceful BGP Session Shutdown) almost 30 years after it was created. IS-IS had the overload bit from day one, and it’s just what an IS-IS router needs to tell everyone else they should stop using it for transit traffic. You can try it out in the Drain Traffic Before Node Maintenance lab exercise.
Click here to start the lab in your browser using GitHub Codespaces (or set up your own lab infrastructure). After starting the lab environment, change the directory to feature/5-drain and execute netlab up.

The first half of the Graph Algorithms in Networks webinar by Rachel Traylor is now available without a valid ipSpace.net account; it discusses algorithms dealing with trees, paths, and finding centers of graphs. Enjoy!
When deploying a Kubernetes cluster, a critical architectural decision is how pods on different nodes communicate. The choice of networking mode directly impacts performance, scalability, and operational overhead. Selecting the wrong mode for your environment can lead to persistent performance issues, troubleshooting complexity, and scalability bottlenecks.
The core problem is that pod IPs are virtual. The underlying physical or cloud network has no native awareness of how to route traffic to a pod’s IP address, like 10.244.1.5. It only knows how to route traffic between the nodes themselves. This gap is precisely what the Container Network Interface (CNI) must bridge.

The CNI employs two primary methods to solve this problem:
Daniel Dib wrote a nice article describing the history of the loopback interface, triggering an inevitable mention of the role of a loopback interface in OSPF and a related flood of ancient memories on my end.
Before going into the details, let’s get one fact straight: an OSPF router ID was always (at least from the days of OSPFv1 described in RFC 1131) just a 32-bit identifier, not an IPv4 address. Straight from RFC 1131: