Free access to Cloudflare developer services for non-profit and civil society organizations

We are excited to announce that non-profit, civil society, and public interest organizations are now eligible to join Cloudflare for Startups. Under this new program, participating organizations will be eligible to receive up to $250,000 in Cloudflare credits — these can be used for a variety of our developer and core products, including databases & storage, compute services, AI, media, and performance and security.

Non-profit organizations and startups have a lot in common. In addition to being powered by small groups of dedicated, resilient, and creative people, they are constantly navigating funding shortages, staffing challenges, and insufficient tools. Most importantly, both are unrelenting in their efforts to do more with less, maximizing the impact of every dollar spent and every hour invested.

Cloudflare's developer services and our startup programs were designed for exactly these challenges. Our goal is to make it easier for anyone to write code, build applications, and launch new ideas anywhere in the world. Put another way, we want to help small teams have a global impact.

All are welcome to apply. The application period for this new program opens today and runs until December 1. After the application period closes, Cloudflare will review the Continue reading

Cap’n Web: a new RPC system for browsers and web servers

Allow us to introduce Cap'n Web, an RPC protocol and implementation in pure TypeScript.

Cap'n Web is a spiritual sibling to Cap'n Proto, an RPC protocol I (Kenton) created a decade ago, but designed to play nice in the web stack. That means:

  • Like Cap'n Proto, it is an object-capability protocol. ("Cap'n" is short for "capabilities and".) We'll get into this more below, but it's incredibly powerful.

  • Unlike Cap'n Proto, Cap'n Web has no schemas. In fact, it has almost no boilerplate whatsoever (see the sketch after this list). This means it works more like the JavaScript-native RPC system in Cloudflare Workers.

  • That said, it integrates nicely with TypeScript.

  • Also unlike Cap'n Proto, Cap'n Web's underlying serialization is human-readable. In fact, it's just JSON, with a little pre-/post-processing.

  • It works over HTTP, WebSocket, and postMessage() out-of-the-box, with the ability to extend it to other transports easily.

  • It works in all major browsers, Cloudflare Workers, Node.js, and other modern JavaScript runtimes.

  • The whole thing compresses (minify+gzip) to under 10 kB with no dependencies.

  • It's open source under the MIT license.
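
To make the no-boilerplate claim concrete, here is a minimal sketch of a client call, assuming a hypothetical wss://example.com/api endpoint whose server exposes a hello() method. newWebSocketRpcSession comes from the capnweb package, though treat the exact signature as an assumption:

import { newWebSocketRpcSession } from "capnweb";

// Open an RPC session over WebSocket. No schema or generated stubs are
// needed; the returned object is a proxy whose method calls become RPCs.
const api = newWebSocketRpcSession("wss://example.com/api");

// Calls look like ordinary async method calls. hello() is a method the
// (hypothetical) server exposes on its main RPC target.
const result = await api.hello("World");
console.log(result);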

Cap'n Web is more expressive than almost every other RPC system, because it implements an object-capability RPC model. That means it:

EVPN/VxLAN Interop – IPv4/IPv6 – MikroTik & IP Infusion

Working with MikroTik and IP Infusion’s OcNOS to interop EVPN/VxLAN has been on my wish list for a long time. Both solutions are great, cost-effective alternatives to mainstream vendors – both separately and together – for WISP/FISP/DC & Enterprise networks.


BGP EVPN and VxLAN


One of the more interesting trends to come out of the 2010s in network engineering was the rise of overlays in data center and enterprise networking. While carriers have been using MPLS and the various overlays available for that type of data plane since the early 2000s, enterprises and data centers tended to steer away from MPLS due to cost and complexity.

BGP EVPN – BGP Ethernet VPN, or EVPN, was originally designed for an MPLS data plane in RFC7432 and later adapted to a VxLAN data plane in RFC8365.

It solves the following problems:

  • Provides a control plane for VxLAN overlays
  • Supports L2/L3 multitenancy via the exchange of MAC addresses and IPv4/IPv6 routes inside VRFs
  • Supports multihoming at the Network Virtualization Edge (NVE)
  • Handles multicast traffic in VxLAN overlays

VxLAN – VxLAN was developed in the early 2010s as an open alternative to Cisco’s OTV and published as RFC7348.

Problems VxLAN solves:

Akvorado release 2.0

Akvorado 2.0 was released today! Akvorado collects network flows with IPFIX and sFlow. It enriches flows and stores them in a ClickHouse database. Users can browse the data through a web console. This release introduces an important architectural change and other smaller improvements. Let’s dive in! 🤿

$ git diff --shortstat v1.11.5
 493 files changed, 25015 insertions(+), 21135 deletions(-)

New “outlet” service

The major change in Akvorado 2.0 is splitting the inlet service into two parts: the inlet and the outlet. Previously, the inlet handled all flow processing: receiving, decoding, and enrichment. Flows were then sent to Kafka for storage in ClickHouse:

Figure: Akvorado flow processing before the introduction of the outlet service. Flows are received and processed by the inlet, sent to Kafka, and stored in ClickHouse.

Network flows reach the inlet service using UDP, an unreliable protocol. The inlet must process them fast enough to avoid losing packets. To handle a high number of flows, the inlet spawns several sets of workers to receive flows, fetch metadata, and assemble enriched flows for Kafka. Many configuration options existed for scaling, which increased complexity for users. The code needed to avoid blocking at any cost, making the processing pipeline complex and sometimes unreliable, particularly the BMP receiver. Adding new features became difficult Continue reading

Changing Colors and Line Styles in netlab Graphs

Last week, I explained how to generate network topology graphs (using GraphViz or D2 graphing engines) from a netlab lab topology. Let’s see how we can make them look nicer (or at least more informative). We’ll work with a simple leaf-and-spine topology with four nodes:

Baseline leaf-and-spine topology
defaults.device: frr
provider: clab

nodes: [ s1, s2, l1, l2 ]
links: [ s1-l1, s1-l2, s2-l1, s2-l2 ]

This is the graph generated by netlab create followed by dot graph.dot -T png -o graph.png:

Cloudflare’s 2025 Annual Founders’ Letter

Cloudflare launched 15 years ago this week. We like to celebrate our birthday by announcing new products and features that give back to the Internet, which we’ll do a lot of this week. But, on this occasion, we've also been thinking about what's changed on the Internet over the last 15 years and what has not.

With some things there's been clear progress: when we launched in 2010, less than 10 percent of the Internet was encrypted; today, well over 95 percent is. We're proud of the role we played in making that happen.

Some other areas have seen limited progress: IPv6 adoption has grown steadily but painfully slowly over the last 15 years, in spite of our efforts. That's a problem because, as IPv4 addresses have become scarce and expensive, the shortage has held back new entrants and driven up the costs of things like networking and cloud computing.

The Internet’s Business Model

Still other things have remained remarkably consistent: the basic business model of the Internet has for the last 15 years been the same — create compelling content, find a way to be discovered, and then generate value from the resulting traffic. Whether that was through ads or Continue reading

🖥️ Running Local LLMs: Experiments and Insights

✨ Summary Large Language Models (LLMs) have powered the AI wave of the last 3–4 years. While most are closed-source, a vibrant ecosystem of open-weight and open-source models has emerged. As a long-time AI user, I wanted to peek under the hood: how do GenAI models work, and what happens when you actually run them … Continue reading 🖥️ Running Local LLMs: Experiments and Insights

Getting Started With Infrahub MCP Server

In this post, we’ll be looking at how to use the Infrahub MCP server. But, before we get there, we’ll go through some background on the Model Context Protocol (MCP) itself, show a simple example to explain how it works, and then connect it back to Infrahub. This will give us the basics before moving on to the Infrahub-specific setup. Here’s what we’ll cover:

  • A quick background on what MCP is and why it’s needed
  • A simple example of an MCP server to show how it works (see the sketch after this list)
  • How to connect an MCP server to host applications like Claude Desktop and Cursor
  • Setting up and using the Infrahub MCP server
  • Example use cases where the Infrahub MCP server can help in real workflows
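
As a teaser for that simple example, here is a minimal sketch of an MCP server built with the official TypeScript SDK (@modelcontextprotocol/sdk). The add tool is purely illustrative, and the exact call signatures may vary across SDK versions:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Declare the server with a name and version that host applications can display.
const server = new McpServer({ name: "demo-server", version: "1.0.0" });

// Register a tool the model can call; zod schemas describe its inputs.
server.tool(
  "add",
  { a: z.number(), b: z.number() },
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a + b) }],
  })
);

// Hosts like Claude Desktop or Cursor talk to stdio-based servers over
// stdin/stdout, so connect a stdio transport and wait for requests.
const transport = new StdioServerTransport();
await server.connect(transport);

A host application would then launch this script as a subprocess and expose the add tool to the model; the Infrahub MCP server presumably follows the same pattern with Infrahub-specific tools.
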
SPONSORED

Disclaimer – OpsMill has partnered with me for this post, and they also support my blog as a sponsor. The post was originally published at https://opsmill.com/blog/getting-started-infrahub-mcp-server/

What is Model Context Protocol (MCP)?

If you’re doing anything with AI (and honestly, who isn’t these days), you’ve probably heard of Model Context Protocol, or MCP. Anthropic introduced MCP in November 2024, which means it hasn’t been around for long and is still evolving quickly.

MCP is a communication Continue reading

Pleasant Surprise: Google AI Overview

When I was writing a blog post, I needed a link to the netlab lab topology documentation, so I searched for “netlab lab topology” (I know I’m lazy, but it felt quicker than navigating the sidebar menu).

The AI overview I got was way too verbose, but it nailed the Key Concepts and How It Works well enough that I could just use them in the netlab README.md file. Maybe this AI thing is becoming useful after all ;)

Ultra Ethernet: Libfabric Resource Initialization

Introduction

Ultra Ethernet uses the libfabric communication framework to let endpoints interact with AI frameworks and, ultimately, with each other across GPUs. Libfabric provides a high-performance, low-latency API that hides the details of the underlying transport, so AI frameworks do not need to manage the low-level details of endpoints, buffers, or the underlying address tables that map communication paths. This makes applications more portable across different fabrics while still providing access to advanced features such as zero-copy transfers and RDMA, which are essential for large-scale AI workloads.

During system initialization, libfabric coordinates with the appropriate provider—such as the UET provider—to query the network hardware and organize communication around three main objects: the Fabric, the Domain, and the Endpoint. Each object manages specific sub-objects and resources. For example, a Domain handles memory registration and hardware resources, while an Endpoint is associated with completion queues, transmit/receive buffers, and transport metadata. Ultra Ethernet maps these objects directly to the network hardware, ensuring that when GPUs begin exchanging training data, the communication paths are already aligned for low-latency, high-bandwidth transfers.

Once initialization is complete, AI frameworks issue standard libfabric calls to send and receive data. Ultra Ethernet ensures that this data flows efficiently across Continue reading

Technology Short Take 188

Welcome to Technology Short Take #188! I’m back once again with a small collection of articles and links related to a variety of data center-related technologies. I hope you find something useful!

Networking

Security

You don’t need quantum hardware for post-quantum security

Organizations have finite resources available to combat threats, both from the adversaries of today and from those in the not-so-distant future armed with quantum computers. In this post, we provide guidance on what to prioritize to best prepare for the future, when quantum computers become powerful enough to break the conventional cryptography that underpins the security of modern computing systems. We describe how post-quantum cryptography (PQC) can be deployed on your existing hardware to protect from threats posed by quantum computing, and explain why quantum key distribution (QKD) and quantum random number generation (QRNG) are neither necessary nor sufficient for security in the quantum age.

Are you quantum ready?

“Quantum” is becoming one of the most heavily used buzzwords in the tech industry. What does it actually mean, and why should you care?

At its core, “quantum” refers to technologies that harness principles of quantum mechanics to perform tasks that are not feasible with classical computers. Quantum computers have exciting potential to unlock advancements in materials science and medicine, but also pose a threat to computer security systems. The term Q-day refers to the day that adversaries possess quantum computers that are large and stable enough to Continue reading

Use Additional BGP Paths for IBGP Load Balancing

I wrote about the optimal BGP path selection with BGP additional paths in 2021, and I probably mentioned (in one of the 360 BGP-related blog posts) that you need it to implement IBGP load balancing in networks using BGP route reflectors. If you want to try that out, check out the IBGP Load Balancing with BGP Additional Paths lab exercise.

Click here to start the lab in your browser using GitHub Codespaces (or set up your own lab infrastructure). After starting the lab environment, change the directory to lb/4-ibgp-add-path and execute netlab up.

Connect and secure any private or public app by hostname, not IP — free for everyone in Cloudflare One

Connecting to an application should be as simple as knowing its name. Yet, many security models still force us to rely on brittle, ever-changing IP addresses. And we heard from many of you that managing those ever-changing IP lists was a constant struggle. 

Today, we’re taking a major step toward making that a relic of the past.

We're excited to announce that you can now route traffic to Cloudflare Tunnel based on a hostname or a domain. This allows you to use Cloudflare Tunnel to build simple zero-trust and egress policies for your private and public web applications without ever needing to know their underlying IPs. This is one more step in our mission to strengthen platform-wide support for hostname- and domain-based policies in the Cloudflare One SASE platform, reducing complexity and improving security for our customers and end users.

Grant access to applications, not networks

In August 2020, the National Institute of Standards and Technology (NIST) published Special Publication 800-207, encouraging organizations to abandon the “castle-and-moat” model of security (where trust is established on the basis of network location) and move to a Zero Trust model (where we “verify anything and everything attempting to establish access”).

Continue reading