On October 13, 2025, security researchers from FearsOff identified and reported a vulnerability in Cloudflare's ACME (Automatic Certificate Management Environment) validation logic that disabled some of the WAF features on specific ACME-related paths. The vulnerability was reported and validated through Cloudflare’s bug bounty program.
The vulnerability was rooted in how our edge network processed requests destined for the ACME HTTP-01 challenge path (/.well-known/acme-challenge/*).
Here, we’ll briefly explain how this protocol works and the action we took to address the vulnerability.
Cloudflare has patched this vulnerability and there is no action necessary for Cloudflare customers. We are not aware of any malicious actor abusing this vulnerability.
ACME is a protocol used to automate the issuance, renewal, and revocation of SSL/TLS certificates. When an HTTP-01 challenge is used to validate domain ownership, a Certificate Authority (CA) will expect to find a validation token at the HTTP path following the format of http://{customer domain}/.well-known/acme-challenge/{token value}.
If this challenge is used for a certificate order managed by Cloudflare, then Cloudflare will respond on this path and return the token value supplied by the CA to the caller. If the token provided does not Continue reading
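The validation flow described above can be sketched as a small handler: given a request path and the set of tokens currently being served for a domain, return the expected key authorization or fall through to normal processing. This is a minimal illustrative sketch, not Cloudflare's actual edge code; the function and store names are invented.

```python
# Minimal sketch of an HTTP-01 challenge responder.
# `tokens` maps a challenge token to its key authorization string;
# both names are illustrative, not part of any real API.
CHALLENGE_PREFIX = "/.well-known/acme-challenge/"

def respond_to_challenge(path: str, tokens: dict) -> tuple:
    """Return an (HTTP status, body) pair for a request path."""
    if not path.startswith(CHALLENGE_PREFIX):
        # Not a challenge request: normal request processing applies.
        return (404, "")
    token = path[len(CHALLENGE_PREFIX):]
    key_authorization = tokens.get(token)
    if key_authorization is None:
        # Unknown token: do not answer on behalf of the customer.
        return (404, "")
    return (200, key_authorization)
```

The security-relevant detail is that only exact, known tokens get a response; any other request on the challenge path must receive normal (WAF-inspected) treatment rather than special-cased handling.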
TL;DR: Of course, the title is clickbait. While the differences are amazing, you won’t notice them in small topologies or when using bloatware that takes minutes to boot.
Let’s start with the background story: due to the (now fixed) suboptimal behavior of bleeding-edge Ansible releases, I decided to generate the device configuration files within netlab (previously, netlab prepared the device data, and the configuration files were rendered in an Ansible playbook).
As we use bash scripts to configure Linux containers, it makes little sense (once the bash scripts are created) to use an Ansible playbook just to execute docker exec script or ip netns exec container script. netlab release 26.01 runs the bash scripts that configure Linux, Bird, and dnsmasq containers directly within the netlab initial process.
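Executing the generated scripts directly boils down to something like the following transcript (container names and script paths are illustrative, not netlab's actual internals):

```shell
# Run the generated configuration script inside a Docker container:
docker exec r1 bash /tmp/config.sh

# ...or inside a plain Linux network namespace:
ip netns exec r2 bash /tmp/config.sh
```

Cutting out the Ansible layer means no playbook startup, no fact gathering, and no per-task connection overhead for devices that only need a script executed.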
Now for the juicy part.
Standard RAID solutions waste space when disks have different sizes. Linux software RAID with LVM uses the full capacity of each disk and lets you grow storage by replacing one or two disks at a time.
We start with four disks of equal size:
$ lsblk -Mo NAME,TYPE,SIZE
NAME TYPE SIZE
vda  disk 101M
vdb  disk 101M
vdc  disk 101M
vdd  disk 101M
We create one partition on each of them:
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vda
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdb
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdc
$ sgdisk --zap-all --new=0:0:0 -t 0:fd00 /dev/vdd
$ lsblk -Mo NAME,TYPE,SIZE
NAME     TYPE SIZE
vda      disk 101M
└─vda1   part 100M
vdb      disk 101M
└─vdb1   part 100M
vdc      disk 101M
└─vdc1   part 100M
vdd      disk 101M
└─vdd1   part 100M
We set up a RAID 5 device by assembling the four partitions:
$ mdadm --create /dev/md0 --level=raid5 --bitmap=internal --raid-devices=4 \
>   /dev/vda1 /dev/vdb1 /dev/vdc1 /dev/vdd1
$ lsblk -Mo NAME,TYPE,SIZE
NAME           TYPE SIZE
    vda        disk 101M
┌┈▶ └─vda1     part 100M
┆   vdb        disk 101M
├┈▶ └─vdb1     part 100M
┆ Continue reading
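The excerpt is truncated, but the LVM layer that typically sits on top of such an array looks roughly like this (a hedged sketch of the usual commands; the volume-group and logical-volume names are invented, not taken from the original post):

```shell
$ pvcreate /dev/md0                   # make the RAID device an LVM physical volume
$ vgcreate data /dev/md0              # create a volume group on top of it
$ lvcreate -n vol -l 100%FREE data    # carve a logical volume out of all free space
$ mkfs.ext4 /dev/data/vol             # put a filesystem on the logical volume
```

Because LVM abstracts the physical layout, additional arrays built from later, larger disk replacements can be added to the same volume group, which is what lets this scheme use the full capacity of mismatched disks.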
The cost of building and maintaining a data center is rising rapidly, and not just in financial terms. George Michaelson joins Tom and Russ to discuss the wider costs of data centers.
download
No, we did not miss the fact that Nvidia did an “acquihire” of Groq, a rival AI accelerator and system startup, on Christmas Eve. …
Is Nvidia Assembling The Parts For Its Next Inference Platform? was written by Timothy Prickett Morgan at The Next Platform.
If the GenAI expansion runs out of gas, Taiwan Semiconductor Manufacturing Co, the world’s most important foundry for advanced chippery, will be the first to know. …
TSMC Has No Choice But To Trust The Sunny AI Forecasts Of Its Customers was written by Timothy Prickett Morgan at The Next Platform.
The Astro Technology Company, creators of the Astro web framework, is joining Cloudflare.
Astro is the web framework for building fast, content-driven websites. Over the past few years, we’ve seen an incredibly diverse range of developers and companies use Astro to build for the web. This ranges from established brands like Porsche and IKEA, to fast-growing AI companies like Opencode and OpenAI. Platforms that are built on Cloudflare, like Webflow Cloud and Wix Vibe, have chosen Astro to power the websites their customers build and deploy to their own platforms. At Cloudflare, we use Astro, too — for our developer docs, website, landing pages, and more. Astro is used almost everywhere there is content on the Internet.
By joining forces with the Astro team, we are doubling down on making Astro the best framework for content-driven websites for many years to come. The best version of Astro — Astro 6 — is just around the corner, bringing a redesigned development server powered by Vite. The first public beta release of Astro 6 is now available, with GA coming in the weeks ahead.
We are excited to share this news and even more thrilled for what Continue reading
Why do we need Infrahub, another network automation tool? What does it bring to the table, who should be using it, and why is it using a graph database internally?
I discussed these questions with Damien Garros, the driving force behind Infrahub, the founder of OpsMill (the company developing it), and a speaker in the ipSpace.net Network Automation course.
As Kubernetes platforms scale, one part of the system consistently resists standardization and predictability: networking. While compute and storage have largely matured into predictable, operationally stable subsystems, networking remains a primary source of complexity and operational risk.
This complexity is not the result of missing features or immature technology. Instead, it stems from how Kubernetes networking capabilities have evolved as a collection of independently delivered components rather than as a cohesive system. As organizations continue to scale Kubernetes across hybrid and multi-environment deployments, this fragmentation increasingly limits agility, reliability, and security.
This post explores how Kubernetes networking arrived at this point, why hybrid environments amplify its operational challenges, and why the industry is moving toward more integrated solutions that bring connectivity, security, and observability into a single operational experience.
Kubernetes networking was designed to be flexible and extensible. Rather than prescribing a single implementation, Kubernetes defined a set of primitives and left key responsibilities such as pod connectivity, IP allocation, and policy enforcement to the ecosystem. Over time, these responsibilities were addressed by a growing set of specialized components, each focused on a narrow slice of Continue reading
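As one concrete example of the policy-enforcement primitive mentioned above, Kubernetes defines the NetworkPolicy resource but leaves its enforcement entirely to the installed CNI plugin. A minimal illustrative manifest (the namespace, labels, and port are invented for the example):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: api          # policy applies to pods labeled app=api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Without a CNI that implements NetworkPolicy, the API server accepts this object but nothing enforces it — a good illustration of how responsibilities are split across independently delivered components.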
If GenAI is going to go mainstream and not just be a bubble that helps prop up the global economy for a couple of years, AI inference is going to have to come down in price – and do so faster than it has done thus far. …
Cerebras Inks Transformative $10 Billion Inference Deal With OpenAI was written by Timothy Prickett Morgan at The Next Platform.
Today, we’re excited to share that Cloudflare has acquired Human Native, a UK-based AI data marketplace specializing in transforming multimedia content into searchable and useful data.
The Human Native team has spent the past few years focused on helping AI developers create better AI through licensed data. Their technology helps publishers and developers turn messy, unstructured content into something that can be understood, licensed and ultimately valued. They have approached data not as something to be scraped, but as an asset class that deserves structure, transparency and respect.
Access to high-quality data can lead to better technical performance. One of Human Native’s customers, a prominent UK video AI company, threw away their existing training data after achieving superior results with data sourced through Human Native. Going forward, they are only training on fully licensed, reputably sourced, high-quality content.
This gives a preview of what the economic model of the Internet can be in the age of generative AI: better AI built on better data, with fair control, compensation and credit for creators.
For the last 30 years, the open Internet has been based on a fundamental value exchange: creators Continue reading
David Gee was time-pressed to set up a demo network to showcase his network automation solution and found that an Ubuntu VM running netlab to orchestrate Arista cEOS containers on his Apple Silicon laptop was exactly what he needed.
I fixed a few blog posts based on his feedback (I can’t tell you how much I appreciate receiving a detailed “you should fix this stuff” message, and how rare it is, so thanks a million!), and David was kind enough to add a delightful cherry on top of that cake with this wonderful blurb:
Netlab has been a lifesaver. Ivan’s entire approach, from the software to collecting instructions and providing a meaningful information trail, enabled me to go from zero to having a functional lab in minutes. It has been an absolute lifesaver.
I can be lazy with the infrastructure side, because he’s done all of the hard work. Now I get to concentrate on the value-added functionality of my own systems and test with the full power of an automated and modern network lab. Game-changing.