We’re excited to announce the release of Calico v3.31,
which brings a wave of new features and improvements.
For a quick look, here are the key updates and improvements in this release:
- eBPF: automatically disables kube-proxy via the kubeProxyManagement field, and adds bpfNetworkBootstrap for automatic API endpoint detection.
- Differentiated Services Code Point (DSCP) support: prioritize traffic by marking packets (e.g., EF for VoIP).
- QoSPolicy API for declarative traffic control.
- Route programming (IP-in-IP, no-encap) handled directly; no BIRD required!
- natOutgoingExclusions configuration for granular NAT management.

Is quantum really an immediate and dangerous threat to current cryptography systems, or are we pushing to hastily adopt new technologies we won’t necessarily need for a few more years? Should we allow the quantum pie to bake a few more years before slicing a piece and digging in? George Michaelson joins Russ and Tom to discuss.
I love well-organized small conferences, so it wasn’t hard to persuade me to have another talk at the DEEP Conference in Zadar, Croatia. This time, I talked about the role of digital twins in disaster recovery/avoidance testing. You might know my take on networking digital twins; after covering that, I only had enough time to focus on why bandwidth and latency matter, and on how you emulate limited bandwidth and add latency.
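If you want to try the bandwidth-and-latency emulation yourself, a minimal sketch on Linux uses the netem queuing discipline (the interface name and the numbers are illustrative, and the commands need root privileges):

```shell
# Illustrative only: add 50 ms of one-way delay and cap the link
# at 10 Mbit/s on interface eth1 (netem rate needs kernel >= 3.3).
tc qdisc add dev eth1 root netem delay 50ms rate 10mbit

# Inspect the qdisc, then remove it when you're done testing.
tc qdisc show dev eth1
tc qdisc del dev eth1 root
```

Remember that the qdisc shapes only egress traffic on that interface; to emulate a symmetric slow link you have to apply it on both ends (or on both interfaces of an intermediate node).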
Here’s a cool feature every routing protocol should have: a flag that tells everyone a node is going down, giving them time to adjust their routing tables before disrupting traffic flow.
OSPF never had such a feature; common implementations set the cost of all interfaces to a very high value to emulate it. BGP got it (the Graceful BGP Session Shutdown) almost 30 years after it was created. IS-IS had the overload bit from day one, and it’s just what an IS-IS router needs to tell everyone else they should stop using it for transit traffic. You can try it out in the Drain Traffic Before Node Maintenance lab exercise.
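If you'd like to see what setting the overload bit looks like before firing up the lab, here is a minimal configuration sketch using FRRouting (the IS-IS instance name Gandalf is illustrative):

```
router isis Gandalf
 set-overload-bit
```

With the bit set, other routers stop using the node for transit paths, while most modern implementations still keep its directly connected prefixes reachable.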
Click here to start the lab in your browser using GitHub Codespaces (or set up your own lab infrastructure). After starting the lab environment, change the directory to feature/5-drain and execute netlab up.

The first half of the Graph Algorithms in Networks webinar by Rachel Traylor is now available without a valid ipSpace.net account; it discusses algorithms dealing with trees, paths, and finding centers of graphs. Enjoy!
When deploying a Kubernetes cluster, a critical architectural decision is how pods on different nodes communicate. The choice of networking mode directly impacts performance, scalability, and operational overhead. Selecting the wrong mode for your environment can lead to persistent performance issues, troubleshooting complexity, and scalability bottlenecks.
The core problem is that pod IPs are virtual. The underlying physical or cloud network has no native awareness of how to route traffic to a pod’s IP address, like 10.244.1.5. It only knows how to route traffic between the nodes themselves. This gap is precisely what the Container Network Interface (CNI) must bridge.
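A toy sketch of the lookup the network must ultimately be able to perform (all CIDRs, node names, and the helper function are hypothetical, not part of any CNI API): given the per-node pod CIDRs a CNI has allocated, find which node hosts a given pod IP.

```python
import ipaddress

# Hypothetical per-node pod CIDR allocations, as a CNI might assign them.
node_pod_cidrs = {
    "node-a": ipaddress.ip_network("10.244.1.0/24"),
    "node-b": ipaddress.ip_network("10.244.2.0/24"),
}

def node_for_pod(pod_ip: str):
    """Return the node whose pod CIDR contains pod_ip, or None if unknown."""
    addr = ipaddress.ip_address(pod_ip)
    for node, cidr in node_pod_cidrs.items():
        if addr in cidr:
            return node
    return None

print(node_for_pod("10.244.1.5"))  # node-a
```

Whether this mapping is realized by wrapping packets in an overlay or by installing routes in the underlay is exactly the design choice discussed next.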

The CNI employs two primary methods to solve this problem: encapsulation, which wraps pod-to-pod traffic in an overlay (such as VXLAN or IP-in-IP) between nodes, and native routing, which programs the underlying network with routes to each node’s pod CIDR.
Daniel Dib wrote a nice article describing the history of the loopback interface1, triggering an inevitable mention of the role of a loopback interface in OSPF and a related flood of ancient memories on my end.
Before going into the details, let’s get one fact straight: an OSPF router ID was always (at least from the days of OSPFv1, described in RFC 1131) just a 32-bit identifier, not an IPv4 address2. Straight from RFC 1131: