If you’ve managed traffic in Kubernetes, you’ve likely worked with Ingress controllers. For years, Ingress has been the standard way to expose HTTP and HTTPS services. But in practice, it often came with trade-offs. Controller-specific annotations were required to unlock critical features, the line between infrastructure and application responsibilities was unclear, and configurations often became tied to the implementation rather than the intent.
Recently, the Kubernetes community announced that Ingress NGINX will be formally retired, with only best-effort maintenance provided until March 2026. After that point, there will be no bug fixes, no security updates, and the project will move to read-only archival status. Any cluster still relying on Ingress NGINX after that date will be running an unsupported controller, which increases maintenance overhead and security risk.
For many organizations, now is the time to treat this as a high-priority project: inventory all clusters using Ingress NGINX, create a migration plan (test, convert, cut over), and avoid ending up in a reactive scramble as the March 2026 deadline approaches.
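For the inventory step, something along the lines of the following minimal sketch can be run against each cluster. It uses the official Kubernetes Python client; the check against the "nginx" ingress class name (and the legacy annotation fallback) is an assumption about how your Ingresses are labelled, not something from the article.

```python
# Minimal sketch: list Ingress objects that appear to be backed by Ingress NGINX,
# as a starting point for the inventory step. Assumes the `kubernetes` Python
# client is installed and a kubeconfig is available; run it once per cluster.
from kubernetes import client, config


def nginx_ingresses():
    """Yield (namespace, name) for Ingress objects that look nginx-backed."""
    config.load_kube_config()  # or load_incluster_config() when running in a pod
    networking = client.NetworkingV1Api()
    for ing in networking.list_ingress_for_all_namespaces().items:
        annotations = ing.metadata.annotations or {}
        ingress_class = ing.spec.ingress_class_name or annotations.get(
            "kubernetes.io/ingress.class", ""
        )
        if "nginx" in ingress_class:
            yield ing.metadata.namespace, ing.metadata.name


if __name__ == "__main__":
    for namespace, name in nginx_ingresses():
        print(f"{namespace}/{name}")
```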
If the move from Ingress to the Gateway API once felt optional, this new timeline changes the situation. Depending on an aging data-plane component without Continue reading
Jo attempted to follow the vendor Kool-Aid recommendations and use NETCONF/YANG to configure network devices. Here’s what he found (slightly edited):
IMHO, the whole NETCONF ecosystem primarily suffers from a tooling problem. Or I haven’t found the right tools yet.
ncclient is (as you mentioned somewhere else) an underdocumented mess. And what documentation does exist is not even up to date. The commit hash at the bottom of the docs page is from 2020… I am amazed that so many people got it working well enough to depend on it in their applications.
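For context, this is roughly the kind of minimal ncclient call such an exercise starts from. The host, credentials, and subtree filter below are placeholders, and whether a given device actually implements the requested YANG model is a separate question.

```python
# Minimal ncclient sketch: fetch part of the running configuration over NETCONF.
# Host, credentials, and the filter are placeholders, not values from the post.
from ncclient import manager

with manager.connect(
    host="192.0.2.1",        # placeholder management address
    port=830,
    username="admin",
    password="admin",
    hostkey_verify=False,    # acceptable in a lab, not in production
) as m:
    # Ask only for the interfaces subtree (ietf-interfaces), if the box supports it
    interfaces_filter = """
    <filter>
      <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"/>
    </filter>
    """
    reply = m.get_config(source="running", filter=interfaces_filter)
    print(reply.xml)
```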

This is my second time attending the AutoCon event. The first one I went to was last year in Amsterdam (AutoCon1), and it was absolutely amazing. I decided to attend again this year, and AutoCon3 took place from the 26th to the 30th of May. The first two days were dedicated to workshops, and the conference itself ran from the 28th to the 30th. I only attended the conference. I heard there were around 650 attendees at this event, which is great to see.
In case you’ve never heard of AutoCon, it’s a community-driven conference focused on network automation, organized by the Network Automation Forum (NAF). NAF brings together people from across the industry to share ideas, tools, and best practices around automation, orchestration, and observability in networking.
They typically hold two conferences each year, one in Europe and one in the USA, or at least that’s how it’s been so far. The European event is usually around the end of May, and the US one takes place around November. Tickets are released in tiers, and early-bird pricing is the cheapest. I grabbed an early-bird ticket for 299 euros as soon as it was announced.
While it has always been true that flatter networks and faster networks are possible with every speed bump on the Ethernet roadmap, the scale of networks has kept growing fast enough that the switch ASIC makers and the switch makers have been able to make it up in volume and keep the switch business growing. …
The AI Datacenter Is Ravenous For 102.4 Tb/sec Ethernet Switch ASICs was written by Timothy Prickett Morgan at The Next Platform.
SPONSORED POST: When you’ve got disparate data flowing in from every corner of your business, making sense of it all and making it work harder for you isn’t always easy. …
Unlock The Power Of Your Data With BigQuery was written by Timothy Prickett Morgan at The Next Platform.
Recently I needed to be able to stand up a dual-stack (IPv4/IPv6) Kubernetes cluster on Flatcar Container Linux using kubeadm. At first glance, this seemed like it would be relatively straightforward, but as I dug deeper a few quirks emerged. Given these quirks, it seemed like a worthwhile process to write up and publish here. In this post, you’ll see how to use Butane and kubeadm to bootstrap a dual-stack IPv4/IPv6 Kubernetes cluster on AWS.
For those who are unfamiliar, Flatcar Container Linux is a container-optimized Linux distribution considered to be the spiritual successor to CoreOS. For configuring OS instances during provisioning, Flatcar uses Ignition (see here or here for more information). Ignition is intended to be machine-friendly, but not human-friendly. Users can use Butane to write human-friendly YAML configurations that then get transpiled into Ignition. So, when bootstrapping Kubernetes on Flatcar, users will generally use a Butane configuration that leverages kubeadm, as described in the Flatcar documentation.
While the Butane configurations in the documentation are a good start for bootstrapping Kubernetes on Flatcar, they don’t address the dual-stack use case. As outlined in the Kubernetes documentation for dual-stack support with kubeadm, you Continue reading
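The excerpt cuts off before the configuration details, but for orientation, here is a minimal sketch of the dual-stack fields kubeadm expects. It uses Python only to render the YAML, and the API version, CIDRs, and node addresses are illustrative assumptions rather than values from the post.

```python
# Sketch of the dual-stack knobs in a kubeadm configuration: comma-separated
# IPv4/IPv6 ranges for pods and services, plus an explicit dual-stack node-ip
# for the kubelet. The apiVersion may differ depending on your kubeadm release.
import yaml

cluster_configuration = {
    "apiVersion": "kubeadm.k8s.io/v1beta3",
    "kind": "ClusterConfiguration",
    "networking": {
        "podSubnet": "10.244.0.0/16,fd00:10:244::/56",      # IPv4 + IPv6 pod CIDRs
        "serviceSubnet": "10.96.0.0/16,fd00:10:96::/112",    # IPv4 + IPv6 service CIDRs
    },
}

init_configuration = {
    "apiVersion": "kubeadm.k8s.io/v1beta3",
    "kind": "InitConfiguration",
    "nodeRegistration": {
        # The kubelet needs to be told both node addresses explicitly
        "kubeletExtraArgs": {"node-ip": "10.0.0.10,2001:db8::10"},
    },
}

print(yaml.safe_dump_all([cluster_configuration, init_configuration], sort_keys=False))
```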
This is a guest post by Chris Bell, CTO of Knock
There’s a lot of talk right now about building AI agents, but not a lot out there about what it takes to make those agents truly useful.
An Agent is an autonomous system designed to make decisions and perform actions to achieve a specific goal or set of goals, without human input.
No matter how good your agent is at making decisions, you will need a person to provide guidance or input on the agent’s path towards its goal. After all, an agent that cannot interact with or respond to the outside world and the systems that govern it will be limited in the problems it can solve.
That’s where the “human-in-the-loop” interaction pattern comes in. You’re bringing a human into the agent’s loop and requiring input from that human before the agent can continue with its task.
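The post implements this with Knock and the Cloudflare Agents SDK (which is TypeScript); purely as a language-neutral illustration of the pattern itself, here is a minimal Python sketch of an approval gate in which every class and function name is invented for the example, not taken from either product’s API.

```python
# Generic human-in-the-loop gate (not the Knock or Cloudflare Agents SDK API):
# the agent parks a sensitive action, a human decides, and the agent resumes
# only once a decision exists. All names here are illustrative.
import uuid
from dataclasses import dataclass, field


@dataclass
class ApprovalGate:
    pending: dict = field(default_factory=dict)    # request_id -> action payload
    decisions: dict = field(default_factory=dict)  # request_id -> True/False

    def request_approval(self, action: dict) -> str:
        """Called by the agent: park the action and notify a human approver."""
        request_id = str(uuid.uuid4())
        self.pending[request_id] = action
        # In a real workflow, this is where the notification goes out (e.g. via Knock).
        return request_id

    def record_decision(self, request_id: str, approved: bool) -> None:
        """Called by the approval UI or webhook when the human responds."""
        self.decisions[request_id] = approved

    def resume(self, request_id: str):
        """Called by the agent: continue only if the human approved."""
        if request_id not in self.decisions:
            return "waiting"                       # still blocked on the human
        action = self.pending.pop(request_id)
        return ("proceed", action) if self.decisions.pop(request_id) else "rejected"


gate = ApprovalGate()
rid = gate.request_approval({"type": "issue_virtual_card", "limit_usd": 500})
gate.record_decision(rid, approved=True)
print(gate.resume(rid))  # -> ('proceed', {'type': 'issue_virtual_card', ...})
```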
In this blog post, we'll use Knock and the Cloudflare Agents SDK to build an AI Agent for a virtual card issuing workflow that requires human approval when a new card is requested.
You can find the complete code for this example in the repository.
Knock is messaging Continue reading
Note to readers: I’m merging the worth reading and weekend reads into a “couple of times a week” worth reading. How often I post these depends on the number of articles I run across, but I’ll try to keep it to around five articles per post.
To build a data-driven story, we must use a basic narrative model. Various models exist in the literature, such as the Data-Information-Knowledge-Wisdom (DIKW) Continue reading
Jan Schaumann published an interesting blog post describing the circuitous journey a browser might take to figure out that it can use QUIC with a web server.
Now, if only there were a record in a distributed database telling the browser what the web server supports. Oh, wait… Not surprisingly, browser vendors don’t trust that data and have implemented a happy eyeballs-like protocol to decide between HTTPS over TCP and QUIC.
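The record in question is the HTTPS DNS resource record, whose alpn parameter can advertise HTTP/3 support. A quick sketch with dnspython (the domain is just an example) shows what such a lookup returns:

```python
# Query the HTTPS (type 65) resource record and print it; an alpn list containing
# "h3" is the hint that the server speaks HTTP/3 over QUIC. Requires dnspython 2.x.
import dns.resolver

answer = dns.resolver.resolve("cloudflare.com", "HTTPS")
for rr in answer:
    print(rr.to_text())  # e.g. '1 . alpn="h3,h2" ipv4hint=... ipv6hint=...'
```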
Marvell Technology made some big bets about delivering chip packaging and I/O technologies to the hyperscalers and cloud builders of the world who want to design their own ASICs but who do not have the expertise to get those designs across the finish line into products. …
Marvell Is Saved By The AI Boom, But Every Deal Is Tough was written by Timothy Prickett Morgan at The Next Platform.
A reader of my blog pointed out the following minutiae hidden at the very bottom of the Cumulus Linux 5.13 What’s New document:
NVIDIA no longer releases Cumulus VX as a standalone image. To simulate a Cumulus Linux switch, use NVIDIA AIR.
And what is NVIDIA AIR?