Unsurprisingly, the main topic of conversation at the recent Dell Technologies World 2025 event in Las Vegas was AI, and a central theme that wove through many of the messages we heard there was that adopting the emerging technology is much easier now than it was even a year ago. …
We know there are three main ways to move packets across a network. However, before we can start forwarding packets, someone has to populate the forwarding tables in the intermediate devices or build the sequence of nodes to traverse in source routing.
Usually, whoever is responsible for the contents of the forwarding tables must first discover the network topology. Let’s start there, using the following network diagram to illustrate the discussion.
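Once the table is populated, forwarding itself is just a longest-prefix-match lookup. A minimal sketch in Python, with a hand-populated toy table (the prefixes and next-hop names are made up for illustration):

```python
import ipaddress

# Toy forwarding table: prefix -> next hop. Populated by hand here;
# in practice a routing protocol or a controller fills these entries.
table = {
    ipaddress.ip_network("10.0.0.0/8"): "r2",
    ipaddress.ip_network("10.1.0.0/16"): "r3",
    ipaddress.ip_network("0.0.0.0/0"): "r1",   # default route
}

def lookup(dst):
    """Return the next hop for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [prefix for prefix in table if addr in prefix]
    best = max(matches, key=lambda prefix: prefix.prefixlen)
    return table[best]

print(lookup("10.1.2.3"))   # r3 (most specific match wins)
print(lookup("192.0.2.1"))  # r1 (falls through to the default route)
```

The interesting part is everything the sketch takes for granted: something has to discover the topology and install those entries before the first packet arrives.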
If you follow me or read my blog, you probably know I'm a big advocate of Containerlab. I've been using it for over two years now and I absolutely love it. Why? Because it's open source, it has an amazing community behind it (thank you again, Roman), and labs are defined using simple YAML files that are easy to share and reuse.
So far, I've used Cisco IOL, Arista EOS, and Palo Alto VM in Containerlab. And finally, the time came to try Juniper. I decided to test the Juniper vJunos-router, which is a virtualized MX router. It's a single-VM version of vMX that doesn't require any feature licenses and is meant for lab or testing purposes. You can even download the image directly from Juniper's website without needing an account. Thank you, Juniper; Cisco, please take note. In this post, I'll show you how to run Juniper vJunos-router in Containerlab.
Prerequisites
This post assumes you're somewhat familiar with Containerlab and already have it installed. If you're new, feel free to check out my introductory blog below. Containerlab also has great documentation on how to use vJunos-router, so be sure to check that out as well.
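To give a rough idea of what's involved, a minimal Containerlab topology for two vJunos-router nodes might look like the YAML below. This is a sketch: the image tag is a placeholder for whatever you built from the downloaded qcow2, so check the Containerlab docs for the exact kind and image naming:

```yaml
# Hypothetical two-router lab; image tags are placeholders.
name: vjunos-lab
topology:
  nodes:
    r1:
      kind: juniper_vjunosrouter
      image: vrnetlab/juniper_vjunos-router:23.2R1.15
    r2:
      kind: juniper_vjunosrouter
      image: vrnetlab/juniper_vjunos-router:23.2R1.15
  links:
    - endpoints: ["r1:eth1", "r2:eth1"]
```

As with other Containerlab kinds, the container-side `eth1` interface maps to the router's first data-plane interface.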
When you drive around the major metropolitan areas of this great country of ours, and indeed in most of the developed countries at this point, you see two things. …
Hewlett Packard Enterprise is going through yet another restructuring to reduce costs, something we have seen a lot of in the two and a half decades since it acquired Compaq to become a volume server peddler as well as a high-end system supplier for enterprises. …
If you’ve managed traffic in Kubernetes, you’ve likely navigated the world of Ingress controllers. For years, Ingress has been the standard way of getting HTTP/S services exposed. But let’s be honest, it often felt like a compromise. We wrestled with controller-specific annotations to unlock critical features and blurred the lines between infrastructure and application concerns. This complexity didn’t just make portability more difficult; it sometimes led to security vulnerabilities and other complications.
As part of Calico Open Source v3.30, we have released a free and open source Calico Ingress Gateway, which combines a custom-built Envoy proxy with the Kubernetes Gateway API standard to help you navigate Ingress complexities with style. This blog post is designed to get you up to speed on why such a change might be the missing link in your environment.
The Situation: The Ingress Rut
The challenge with traditional Ingress wasn’t a lack of effort; the landscape is full of innovative solutions. The problem was the lack of a unified, expressive, and role-aware standard. Existing ingress controllers were capable and implemented advanced features, but at the same time they tied you to a specific project or vendor.
This meant:
Vendor Lock-In: Migrating from one ingress controller …
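For contrast, the Gateway API splits routing into role-oriented resources instead of annotation-laden Ingress objects. A minimal sketch is below; the hostnames, service names, and the gateway class are illustrative, and Calico's actual GatewayClass name may differ:

```yaml
# Illustrative only: a Gateway owned by platform operators...
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: tigera-gateway-class   # assumed class name
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# ...and an HTTPRoute owned by the application team.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: web-gateway
  hostnames: ["app.example.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app-svc
          port: 8080
```

Because matching, hostnames, and backends are first-class fields of the standard rather than vendor annotations, the route is portable across conformant implementations.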
When the web was first scaling up, content delivery networks (CDNs) became a way of dealing with the ever-increasing load. Akamai is widely considered the pioneer of CDN technology in the late-1990s, but arguably it’s been overtaken now by younger, more agile CDN competitors. At least that’s the view of Fastly founder Artur Bergman, whose company fashions itself as an “edge cloud platform.”
“Akamai was the first cloud service, the first multitenant cloud service,” Bergman told The New Stack in an interview. “And I think if they had been developer-friendly, then they should have been as large of a player as AWS, right?”
Akamai may not have been the very first cloud service, but it was definitely among the first — and its CDN debuted well before “
Jo attempted to follow the vendor Kool-Aid recommendations and use NETCONF/YANG to configure network devices. Here’s what he found (slightly edited):
IMHO, the whole NETCONF ecosystem primarily suffers from a tooling problem. Or I haven’t found the right tools yet.
ncclient is (as you mentioned somewhere else) an underdocumented mess. And that undocumented part is not even up to date. The commit hash at the bottom of the docs page is from 2020… I am amazed how so many people got it working well enough to depend on it in their applications.
I want to examine just one day of the operation of the Internet’s BGP network by looking at the behaviour of a single BGP session. Nothing special or extraordinary happened on that day. There were no large-scale power blackouts, no major faults in the world’s submarine cable network, nor in the terrestrial trunk cable systems. No headline-grabbing cyber attack took place on that day, as far as I’m aware. It was just an ordinary Thursday on the Internet, just like any other day, and I selected this day due to its very ordinariness! What can this day tell us about BGP and the way we use it?
This is my second time attending the AutoCon event. The first one I went to was last year in Amsterdam (AutoCon1), and it was absolutely amazing. I decided to attend again this year, and AutoCon3 took place from the 26th to the 30th of May. The first two days were dedicated to workshops, and the conference itself ran from the 28th to the 30th. I only attended the conference. I heard there were around 650 attendees at this event, which is great to see.
Network Automation Forum (NAF)
In case you’ve never heard of AutoCon, it’s a community-driven conference focused on network automation, organized by the Network Automation Forum (NAF). NAF brings together people from across the industry to share ideas, tools, and best practices around automation, orchestration, and observability in networking.
They typically hold two conferences each year, one in Europe and one in the USA, or at least that’s how it’s been so far. The European event is usually around the end of May, and the US one takes place around November. Tickets are released in tiers, with early bird pricing being cheaper. I grabbed the early bird ticket for 299 euros as soon as it was announced.
While it has always been true that flatter networks and faster networks are possible with every speed bump on the Ethernet roadmap, the scale of networks has kept growing fast enough that the switch ASIC makers and the switch makers have been able to make it up in volume and keep the switch business growing. …
SPONSORED POST: When you have got disparate data flowing in from every corner of your business, making sense of it all and making it work harder for you isn’t always easy. …
Recently I needed to be able to stand up a dual-stack (IPv4/IPv6) Kubernetes cluster on Flatcar Container Linux using kubeadm. At first glance, this seemed like it would be relatively straightforward, but as I dug deeper into it there were a few quirks that emerged. Given these quirks, it seemed like a worthwhile process to write up and publish here. In this post, you’ll see how to use Butane and kubeadm to bootstrap a dual-stack IPv4/IPv6 Kubernetes cluster on AWS.
For those who are unfamiliar, Flatcar Container Linux is a container-optimized Linux distribution considered to be the spiritual successor to CoreOS. For configuring OS instances during provisioning, Flatcar uses Ignition. Ignition is intended to be machine-friendly, but not human-friendly. Users can use Butane to write human-friendly YAML configurations that then get transpiled into Ignition. So, when bootstrapping Kubernetes on Flatcar, users will generally use a Butane configuration that leverages kubeadm, as described in the Flatcar documentation.
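To give a flavour of the dual-stack part, the kubeadm configuration that the Butane file ultimately feeds to `kubeadm init` might look roughly like this. The subnets and node IPs below are example values, not recommendations; the key point is the comma-separated IPv4,IPv6 pairs:

```yaml
# Illustrative dual-stack kubeadm configuration (example addresses).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16,fd00:10:244::/56"
  serviceSubnet: "10.96.0.0/16,fd00:10:96::/112"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # Both node addresses must be supplied for dual-stack kubelets.
    node-ip: "10.0.0.10,2600:1f18::10"
```

On AWS, getting that second `node-ip` value right is one of the quirks, since the instance has to be told its assigned IPv6 address explicitly.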
There’s a lot of talk right now about building AI agents, but not a lot out there about what it takes to make those agents truly useful.
An Agent is an autonomous system designed to make decisions and perform actions to achieve a specific goal or set of goals, without human input.
No matter how good your agent is at making decisions, you will need a person to provide guidance or input on the agent’s path towards its goal. After all, an agent that cannot interact with or respond to the outside world and the systems that govern it will be limited in the problems it can solve.
That’s where the “human-in-the-loop” interaction pattern comes in. You're bringing a human into the agent's loop and requiring input from that human before the agent can continue its task.
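Stripped of any particular SDK, the pattern can be sketched in a few lines. Everything here is a hypothetical stand-in (the `Agent` class, the `issue_card` action, the approval flow), not Knock or Cloudflare APIs:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Approval requests the agent has parked while waiting on a human.
    pending: list = field(default_factory=list)

    def run(self, task: str) -> str:
        # The agent decides it needs a sensitive action (issuing a card)...
        action = {"type": "issue_card", "task": task}
        # ...and instead of acting, records it and pauses for approval.
        self.pending.append(action)
        return "waiting_for_human"

    def resume(self, approved: bool) -> str:
        # A human response re-enters the loop and the agent continues.
        action = self.pending.pop(0)
        return f"executed {action['type']}" if approved else "denied"

agent = Agent()
print(agent.run("new card for Alice"))   # waiting_for_human
print(agent.resume(approved=True))       # executed issue_card
```

The real versions of `run` and `resume` would persist state and deliver the approval request over a notification channel, but the shape — act, park, wait, resume — is the same.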
In this blog post, we'll use Knock and the Cloudflare Agents SDK to build an AI Agent for a virtual card issuing workflow that requires human approval when a new card is requested.
Note to readers: I’m merging the worth reading and weekend reads posts into a “couple of times a week” worth reading. How often I post these depends on the number of articles I run across, but I’ll try to keep it to around five articles per post.
Jan Schaumann published an interesting blog post describing the circuitous journey a browser might take to figure out that it can use QUIC with a web server.
Now, if only there were a record in a distributed database telling the browser what the web server supports. Oh, wait… Not surprisingly, browser vendors don’t trust that data and have implemented a happy-eyeballs-like protocol to decide between HTTPS over TCP and QUIC.
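The racing idea behind such a protocol can be sketched with plain asyncio. The delays and the head-start value below are invented for illustration, not what browsers actually use:

```python
import asyncio

async def attempt(name, delay):
    # Stand-in for a real connection attempt over QUIC or TCP.
    await asyncio.sleep(delay)
    return name

async def eyeballs(quic_delay, tcp_delay, head_start=0.25):
    # Try QUIC first; if it hasn't connected within `head_start`
    # seconds, start a TCP attempt and take whichever wins.
    quic = asyncio.create_task(attempt("quic", quic_delay))
    try:
        return await asyncio.wait_for(asyncio.shield(quic), head_start)
    except asyncio.TimeoutError:
        tcp = asyncio.create_task(attempt("tcp", tcp_delay))
        done, pending = await asyncio.wait(
            {quic, tcp}, return_when=asyncio.FIRST_COMPLETED
        )
        for task in pending:
            task.cancel()
        return done.pop().result()

print(asyncio.run(eyeballs(0.1, 0.5)))  # quic wins the head start
```

The head start biases the race towards the preferred transport, so a working QUIC path is used even if TCP would have connected marginally faster.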
Marvell Technology made some big bets about delivering chip packaging and I/O technologies to the hyperscalers and cloud builders of the world who want to design their own ASICs but who do not have the expertise to get those designs across the finish line into products. …