On MPLS Paths, Tunnels and Interfaces

One of my readers attempted to implement a multi-vendor multicast VPN over MPLS but failed. Like any good network engineer, he tried various duct-tape workarounds and found that the only one that worked was a GRE tunnel inside a VRF, resulting in considerable frustration. In his own words:

How is a GRE tunnel different compared to an MPLS LSP? I feel like conceptually, they kind of do the same thing. They just tunnel traffic by wrapping it with another header (one being IP/GRE, the other being MPLS).

Instead of going down the “how many angels are dancing on this pin” rabbit hole (also known as “Is MPLS tunneling?”), let’s focus on the fundamental differences between GRE/IPsec/VXLAN tunnels and MPLS paths.
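One fundamental difference worth keeping in mind: a GRE tunnel carries a fixed outer destination IP address from headend to tailend, while an MPLS label has only hop-local meaning and is rewritten at every label-switching router. The toy model below sketches that contrast; all router names and label tables are invented for illustration.

```python
# Toy model contrasting GRE tunneling with MPLS label switching.
# Router names, labels, and addresses are made up for illustration.

def gre_forward(packet, tunnel_dest):
    """GRE: the outer IP destination is set at the headend and stays
    the same at every transit hop; routers just route toward it."""
    return {"outer_dst": tunnel_dest, "payload": packet}

# MPLS: each LSR has its own label table; the top label is swapped
# at every hop, so it has only hop-local meaning.
lfib = {
    "R1": {100: ("swap", 200, "R2")},
    "R2": {200: ("swap", 300, "R3")},
    "R3": {300: ("pop", None, "CE")},  # egress pop
}

def mpls_forward(router, label):
    action, out_label, next_hop = lfib[router][label]
    return action, out_label, next_hop

# A GRE packet looks identical at every transit hop:
pkt = gre_forward("customer traffic", tunnel_dest="192.0.2.99")

# An MPLS packet changes its top label at every hop:
hops = []
router, label = "R1", 100
while True:
    action, out_label, next_hop = mpls_forward(router, label)
    hops.append((router, label, action, out_label))
    if action == "pop":
        break
    router, label = next_hop, out_label
```

Walking the path records a different top label at every hop, whereas the GRE packet's outer destination never changes, which is one reason the two behave so differently when you try to bolt multicast onto them.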

IP Addresses through 2025

It's time for another annual roundup from the world of IP addresses. Let’s see what has changed in the past 12 months in addressing the Internet and look at how IP address allocation information can inform us of the changing nature of the network itself.

How we mitigated a vulnerability in Cloudflare’s ACME validation logic

On October 13, 2025, security researchers from FearsOff identified and reported a vulnerability in Cloudflare's ACME (Automatic Certificate Management Environment) validation logic that disabled some of the WAF features on specific ACME-related paths. The vulnerability was reported and validated through Cloudflare’s bug bounty program.

The vulnerability was rooted in how our edge network processed requests destined for the ACME HTTP-01 challenge path (/.well-known/acme-challenge/*).

Here, we’ll briefly explain how this protocol works and the action we took to address the vulnerability. 

Cloudflare has patched this vulnerability and there is no action necessary for Cloudflare customers. We are not aware of any malicious actor abusing this vulnerability.

How ACME works to validate certificates

ACME is a protocol used to automate the issuance, renewal, and revocation of SSL/TLS certificates. When an HTTP-01 challenge is used to validate domain ownership, a Certificate Authority (CA) will expect to find a validation token at the HTTP path following the format of http://{customer domain}/.well-known/acme-challenge/{token value}

If this challenge is used by a certificate order managed by Cloudflare, then Cloudflare will respond on this path with the token supplied by the CA to the caller. If the token provided does not… Continue reading
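The HTTP-01 flow described above can be sketched in a few lines. Per RFC 8555, the response body on the challenge path is a key authorization: the token, a dot, and the base64url-encoded SHA-256 of the account key's JWK thumbprint input. The function names and the JWK bytes below are fabricated for illustration, not Cloudflare's actual implementation.

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # base64url without padding, as used throughout ACME (RFC 8555)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, jwk_json: bytes) -> str:
    # RFC 8555 section 8.1: token "." base64url(SHA-256(JWK thumbprint input))
    thumbprint = b64url(hashlib.sha256(jwk_json).digest())
    return f"{token}.{thumbprint}"

def serve_challenge(path: str, known_tokens: set, jwk_json: bytes):
    """Conceptually what a responder answering on behalf of a customer
    does for requests to /.well-known/acme-challenge/*."""
    prefix = "/.well-known/acme-challenge/"
    if not path.startswith(prefix):
        return 404, ""
    token = path[len(prefix):]
    if token not in known_tokens:
        return 404, ""
    return 200, key_authorization(token, jwk_json)

# Hypothetical token and account-key JSON for demonstration only:
status, body = serve_challenge(
    "/.well-known/acme-challenge/tok123",
    {"tok123"},
    b'{"crv":"P-256","kty":"EC"}',
)
```

A request for an unknown token (or any other path) falls through to a 404, which is the boundary where path-specific handling must not accidentally relax unrelated protections such as the WAF.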

How Moving Away from Ansible Made netlab Faster

TL&DR: Of course, the title is clickbait. While the differences are amazing, you won’t notice them in small topologies or when using bloatware that takes minutes to boot.

Let’s start with the background story: due to the (now fixed) suboptimal behavior of bleeding-edge Ansible releases, I decided to generate the device configuration files within netlab (previously, netlab prepared the device data, and the configuration files were rendered in an Ansible playbook).

As we use bash scripts to configure Linux containers, it makes little sense (once the bash scripts are created) to use an Ansible playbook just to execute a docker exec script or ip netns exec container script command. netlab release 26.01 runs the bash scripts that configure Linux, Bird, and dnsmasq containers directly within the netlab initial process.
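To make the "just run the script directly" point concrete, here is a hedged sketch of what replacing the playbook with a direct subprocess call might look like. The function names, node kinds, and paths are invented for illustration; they are not netlab's actual internals.

```python
import subprocess

def config_command(node_kind: str, node_name: str, script: str) -> list:
    """Build the one-line command that executes a generated bash script
    inside a container or a network namespace (hypothetical helper)."""
    if node_kind == "container":
        return ["docker", "exec", node_name, "bash", script]
    if node_kind == "netns":
        return ["ip", "netns", "exec", node_name, "bash", script]
    raise ValueError(f"unknown node kind: {node_kind}")

def configure(node_kind: str, node_name: str, script: str) -> None:
    # Running this directly skips the whole Ansible fork/connect/render
    # machinery for what is ultimately a single exec call.
    subprocess.run(config_command(node_kind, node_name, script), check=True)
```

The speedup comes not from the command itself but from skipping per-task Ansible overhead (process forks, connection plugins, fact handling) for something that is a single exec.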

Now for the juicy part.

RAID 5 with mixed-capacity disks on Linux

Standard RAID solutions waste space when disks have different sizes. Linux software RAID with LVM uses the full capacity of each disk and lets you grow storage by replacing one or two disks at a time.
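The waste is easy to quantify: a classic RAID 5 array truncates every member to the size of its smallest disk, so usable capacity is (n−1) × min(sizes). A back-of-the-envelope sketch, with made-up disk sizes:

```python
def raid5_usable(sizes):
    """Classic RAID 5: every member is truncated to the smallest disk,
    and one disk's worth of space goes to parity."""
    return (len(sizes) - 1) * min(sizes)

def raw_wasted(sizes):
    """Raw capacity a plain RAID 5 array cannot touch on larger disks."""
    return sum(sizes) - len(sizes) * min(sizes)

mixed = [100, 100, 200, 200]   # GB, hypothetical mixed-capacity disks
usable = raid5_usable(mixed)   # 300 GB usable
wasted = raw_wasted(mixed)     # 200 GB of raw space simply unused
```

Carving the disks into equal-sized partitions, building RAID arrays from matching partitions, and gluing the arrays together with LVM is what lets the setup below reclaim that otherwise-stranded space.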

We start with four disks of equal size:

$ lsblk -Mo NAME,TYPE,SIZE
NAME TYPE  SIZE
vda  disk  101M
vdb  disk  101M
vdc  disk  101M
vdd  disk  101M

We create one partition on each of them:

$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vda
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdb
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdc
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdd
$ lsblk -Mo NAME,TYPE,SIZE
NAME   TYPE  SIZE
vda    disk  101M
└─vda1 part  100M
vdb    disk  101M
└─vdb1 part  100M
vdc    disk  101M
└─vdc1 part  100M
vdd    disk  101M
└─vdd1 part  100M

We set up a RAID 5 device by assembling the four partitions:1

$ mdadm --create /dev/md0 --level=raid5 --bitmap=internal --raid-devices=4 \
>   /dev/vda1 /dev/vdb1 /dev/vdc1 /dev/vdd1
$ lsblk -Mo NAME,TYPE,SIZE
    NAME          TYPE    SIZE
    vda           disk    101M
┌┈▶ └─vda1        part    100M
┆   vdb           disk    101M
├┈▶ └─vdb1        part    100M
Continue reading

TNO053: Ethernet Is Everywhere

Ethernet is everywhere. Today we talk with one of the people responsible for this protocol’s ubiquity. Doug Boom is a veteran of the Ethernet development world. His code has helped landers reach Mars, submarines sail the deep seas, airplanes get to their gates, cars drive around town, and more. Doug walks us through the origins... Read more »

Astro is joining Cloudflare

The Astro Technology Company, creators of the Astro web framework, is joining Cloudflare.

Astro is the web framework for building fast, content-driven websites. Over the past few years, we’ve seen an incredibly diverse range of developers and companies use Astro to build for the web. This ranges from established brands like Porsche and IKEA, to fast-growing AI companies like Opencode and OpenAI. Platforms that are built on Cloudflare, like Webflow Cloud and Wix Vibe, have chosen Astro to power the websites their customers build and deploy to their own platforms. At Cloudflare, we use Astro, too — for our developer docs, website, landing pages, and more. Astro is used almost everywhere there is content on the Internet.

By joining forces with the Astro team, we are doubling down on making Astro the best framework for content-driven websites for many years to come. The best version of Astro — Astro 6 — is just around the corner, bringing a redesigned development server powered by Vite. The first public beta release of Astro 6 is now available, with GA coming in the weeks ahead.

We are excited to share this news and even more thrilled for what Continue reading

Kubernetes Networking at Scale: From Tool Sprawl to a Unified Solution

As Kubernetes platforms scale, one part of the system consistently resists standardization and predictability: networking. While compute and storage have largely matured into predictable, operationally stable subsystems, networking remains a primary source of complexity and operational risk.

This complexity is not the result of missing features or immature technology. Instead, it stems from how Kubernetes networking capabilities have evolved as a collection of independently delivered components rather than as a cohesive system. As organizations continue to scale Kubernetes across hybrid and multi-environment deployments, this fragmentation increasingly limits agility, reliability, and security.

This post explores how Kubernetes networking arrived at this point, why hybrid environments amplify its operational challenges, and why the industry is moving toward more integrated solutions that bring connectivity, security, and observability into a single operational experience.


The Components of Kubernetes Networking

Kubernetes networking was designed to be flexible and extensible. Rather than prescribing a single implementation, Kubernetes defined a set of primitives and left key responsibilities such as pod connectivity, IP allocation, and policy enforcement to the ecosystem. Over time, these responsibilities were addressed by a growing set of specialized components, each focused on a narrow slice of Continue reading

Human Native is joining Cloudflare

Today, we’re excited to share that Cloudflare has acquired Human Native, a UK-based AI data marketplace specializing in transforming multimedia content into searchable and useful data.

Human Native x Cloudflare

The Human Native team has spent the past few years focused on helping AI developers create better AI through licensed data. Their technology helps publishers and developers turn messy, unstructured content into something that can be understood, licensed and ultimately valued. They have approached data not as something to be scraped, but as an asset class that deserves structure, transparency and respect.

Access to high-quality data can lead to better technical performance. One of Human Native's customers, a prominent UK video AI company, threw away their existing training data after achieving superior results with data sourced through Human Native. Going forward, they are training only on fully licensed, reputably sourced, high-quality content.

This gives a preview of what the economic model of the Internet can be in the age of generative AI: better AI built on better data, with fair control, compensation and credit for creators.

The Internet needs new economic models

For the last 30 years, the open Internet has been based on a fundamental value exchange: creators Continue reading

Using netlab to Set Up Demos

David Gee was time-pressed to set up a demo network to showcase his network automation solution and found that a Ubuntu VM running netlab to orchestrate Arista cEOS containers on his Apple Silicon laptop was exactly what he needed.

I fixed a few blog posts based on his feedback (I can’t tell you how much I appreciate receiving a detailed “you should fix this stuff” message, and how rare it is, so thanks a million!), and David was kind enough to add a delightful cherry on top of that cake with this wonderful blurb:

Netlab has been a lifesaver. Ivan’s entire approach, from the software to collecting instructions and providing a meaningful information trail, enabled me to go from zero to having a functional lab in minutes. It has been an absolute lifesaver.

I can be lazy with the infrastructure side, because he’s done all of the hard work. Now I get to concentrate on the value-added functionality of my own systems and test with the full power of an automated and modern network lab. Game-changing.