Today, we’re introducing a new Worker template for Vertical Microfrontends (VMFE). This template allows you to map multiple independent Cloudflare Workers to a single domain, enabling teams to work in complete silos — shipping marketing, docs, and dashboards independently — while presenting a single, seamless application to the user.
Most microfrontend architectures are "horizontal", meaning different parts of a single page are fetched from different services. Vertical microfrontends take a different approach by splitting the application by URL path. In this model, a team owning the `/blog` path doesn't just own a component; they own the entire vertical stack for that route – framework, library choice, CI/CD and more. Owning the entire stack of a path, or set of paths, allows teams to have true ownership of their work and ship with confidence.
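As a rough illustration of what path-based splitting can look like on Workers, here is a minimal sketch of a dispatcher Worker that hands each path prefix to an independently deployed Worker via service bindings. The binding names (`MARKETING`, `DOCS`, `DASHBOARD`) are hypothetical and not taken from the actual template, which may wire routes differently.

```ts
// Illustrative router Worker: each path prefix is served by a separate,
// independently deployed Worker reached through a service binding.
// Binding names below are made up for this sketch.
interface Env {
  MARKETING: Fetcher;  // e.g. an Astro-based marketing site
  DOCS: Fetcher;       // e.g. a docs Worker
  DASHBOARD: Fetcher;  // e.g. a React dashboard Worker
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { pathname } = new URL(request.url);

    // Each team owns everything under its own path prefix.
    if (pathname.startsWith("/docs")) return env.DOCS.fetch(request);
    if (pathname.startsWith("/dashboard")) return env.DASHBOARD.fetch(request);

    // Everything else falls through to the marketing site.
    return env.MARKETING.fetch(request);
  },
} satisfies ExportedHandler<Env>;
```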
Teams face problems as they grow, because different frameworks suit different use cases. A marketing website might be better built with Astro, for example, while a dashboard might be a better fit for React. Or say you have a monolithic codebase where many teams ship as a collective: an update adding new features from several teams can get frustratingly rolled back because Continue reading

The videos from the Network Observability webinar with Dinesh Dutt are now available without a valid ipSpace.net account. Enjoy!
Figure 6-12 depicts the ACK_CC header structure and fields. When NSCC is enabled in the UET node, the PDS must use the pds.type ACK_CC in the prologue header, which serves as the common header structure for all PDS messages. Within the actual PDS ACK_CC header, the pds.cc_type must be set to CC_NSCC. The pds.ack_cc_state field describes the values and states for service_time, rc (restore congestion CWND), rcv_cwnd_pend, and received_bytes. The source specifically uses the received_bytes parameter to calculate the updated inflight state.
The CCC computes the reduction in the inflight state by subtracting the rcvd_bytes value received in previous ACK_CC messages from the rcvd_bytes value carried within the latest ACK_CC message. As illustrated in Figure 6-12, the inflight state is decreased by 4,096 bytes, which is the delta between 16,384 and 12,288 bytes.
Recap: In order to transmit data into the network, the inflight byte count must be less than the CWND size.
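To make the bookkeeping above concrete, here is a small sketch of how a sender could update its inflight state from successive ACK_CC messages and decide whether it may transmit. This is my own illustration following the prose, not code or field names from the UET specification.

```ts
// Illustrative only: sender-side inflight accounting driven by ACK_CC feedback.
interface CcState {
  inflightBytes: number;   // bytes sent but not yet acknowledged
  cwndBytes: number;       // current congestion window
  lastRcvdBytes: number;   // rcvd_bytes reported by the previous ACK_CC
}

function onAckCc(state: CcState, rcvdBytes: number): void {
  // The delta between consecutive rcvd_bytes values is the amount of data
  // the receiver has newly absorbed, so inflight shrinks by that amount.
  const delta = rcvdBytes - state.lastRcvdBytes; // e.g. 16384 - 12288 = 4096
  state.inflightBytes = Math.max(0, state.inflightBytes - delta);
  state.lastRcvdBytes = rcvdBytes;
}

function canTransmit(state: CcState, nextPacketBytes: number): boolean {
  // Recap rule: inflight must stay below the congestion window.
  return state.inflightBytes + nextPacketBytes <= state.cwndBytes;
}
```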
The Internet woke up this week to a flood of people buying Mac minis to run Moltbot (formerly Clawdbot), an open-source, self-hosted AI agent designed to act as a personal assistant. Moltbot runs in the background on a user's own hardware, has a sizable and growing list of integrations for chat applications, AI models, and other popular tools, and can be controlled remotely. Moltbot can help you manage your finances, your social media, and your day — all through your favorite messaging app.
But what if you don’t want to buy new dedicated hardware? And what if you could still run your Moltbot efficiently and securely online? Meet Moltworker, a middleware Worker and a set of adapted scripts that let you run Moltbot on Cloudflare's Sandbox SDK and our Developer Platform APIs.
Firstly, Cloudflare Workers has never been more compatible with Node.js. Where in the past we had to mock APIs to get some packages running, those APIs are now supported natively by the Workers Runtime.
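As a small illustration of that native support, a Worker can import Node built-ins directly once the `nodejs_compat` compatibility flag is enabled in its configuration. The snippet below is a minimal sketch of that idea, not code from Moltworker itself.

```ts
// Minimal sketch, assuming the `nodejs_compat` compatibility flag is set:
// Node built-ins such as node:crypto and node:buffer work inside a Worker
// without mocks or polyfills.
import { createHash } from "node:crypto";
import { Buffer } from "node:buffer";

export default {
  async fetch(request: Request): Promise<Response> {
    const body = Buffer.from(await request.arrayBuffer());
    const digest = createHash("sha256").update(body).digest("hex");
    return new Response(JSON.stringify({ sha256: digest }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```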
This has changed how we can build tools on Cloudflare Workers. When we first implemented Playwright, a popular framework for web testing and automation that runs Continue reading

In the previous posts, we covered the basics of multicast, including how sources send traffic to group addresses and how receivers use IGMP to signal their interest to the last hop router. We also looked at how IGMP snooping helps switches forward multicast traffic only to ports with interested receivers.
In this post, we will look at PIM, Protocol Independent Multicast. PIM is the protocol that routers use to build the multicast forwarding tree between the source and the receivers. There are different modes/flavours of PIM, and in this post, we will focus specifically on PIM Dense Mode.


PIM, Protocol Independent Multicast, is the protocol routers use to build a loop-free multicast distribution tree from the source to the receivers. It is called protocol-independent because it does not rely on any specific unicast Continue reading
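Dense mode builds its tree by flooding traffic everywhere and then pruning the branches that have no interested receivers. The sketch below is my own illustration of the per-(S,G) forwarding decision that flood-and-prune implies; it is a conceptual model, not an implementation of the PIM-DM protocol.

```ts
// Conceptual flood-and-prune state for one (S,G) entry. Dense mode floods
// traffic out of every interface except the RPF interface towards the
// source, then stops forwarding on interfaces from which a Prune arrived.
interface SgState {
  source: string;              // S
  group: string;               // G, e.g. "239.1.1.1"
  rpfInterface: string;        // interface towards the source
  prunedInterfaces: Set<string>;
}

function outgoingInterfaces(state: SgState, allInterfaces: string[]): string[] {
  return allInterfaces.filter(
    (ifname) =>
      ifname !== state.rpfInterface &&      // never send back towards the source
      !state.prunedInterfaces.has(ifname)   // downstream router asked us to stop
  );
}

function onPrune(state: SgState, ifname: string): void {
  // A Prune from a downstream neighbour trims this branch of the tree;
  // in real PIM-DM the prune state times out and traffic is re-flooded later.
  state.prunedInterfaces.add(ifname);
}
```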
A friend of mine sent me a series of questions that might also be on your mind (unless you’re lucky enough to live under a rock or on a different planet):
I wanted to ask you how you think AI will affect networking jobs. What’s real and what’s hype?
Before going into the details, let’s make a few things clear:
Kubernetes networking offers incredible power, but scaling that power often transforms a clean architecture into a tangled web of complexity. Managing traffic flow between hundreds of microservices across dozens of namespaces presents a challenge that touches every layer of the organization, from engineers debugging connections to the architects designing for compliance.
The solution to these diverging challenges lies in bringing structure and validation to standard Kubernetes networking. Here is a look at how Calico Tiers and Staged Network Policies help you get rid of this networking chaos.
The default Kubernetes NetworkPolicy resource operates in a flat hierarchy. In a small cluster, this is manageable. However, in an enterprise environment with multiple tenants, teams, and compliance requirements, “flat” quickly becomes unmanageable, and dangerous.
To make this easier, imagine a large office building where every single employee has a key that opens every door. To secure the CEO’s office in a flat network, you have to put “Do Not Enter” signs on every door that could lead to it. That is flat networking, secure by exclusion rather than inclusion.
Without a security hierarchy, every new policy risks becoming a potential mistake that overrides others, and debugging connectivity Continue reading
There’s no “networking in 20xx” video this year, so this insightful article by Anil Dash will have to do ;) He seems to be based in Silicon Valley, so keep in mind the Three IT Geographies, but one cannot beat advice like this:
So much opportunity, inspiration, creativity, and possibility lies in applying the skills and experience that you may have from technological disciplines in other realms and industries that are often far less advanced in their deployment of technologies.
As well as:
This too shall pass. One of the great gifts of working in technology is that it’s given so many of us the habit of constantly learning, of always being curious and paying attention to the new things worth discovering.
Hope you’ll find it helpful and at least a bit inspiring.
The Network-Signaled Congestion Control (NSCC) algorithm operates on the principle that the network fabric itself is the best source of truth regarding congestion. Rather than waiting for packet loss to occur, NSCC relies on proactive feedback from switches to adjust transmission rates in real time. The primary mechanism for this feedback is Explicit Congestion Notification (ECN) marking. When a switch interface's egress queue begins to build up, it employs Random Early Detection (RED) logic to mark specific packets. Once the buffer’s Minimum Threshold is crossed, the switch begins randomly marking packets by setting the last two bits of the IP header’s Type of Service (ToS) field to the CE (11) state. If the congestion worsens and the Maximum Threshold is reached, every packet passing through that interface is marked, providing a clear and urgent signal to the endpoints.
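The marking behaviour described above can be summarized in a few lines. This is an illustrative sketch of the described RED/ECN logic, not switch firmware, and the threshold parameters are assumed inputs.

```ts
// Illustrative sketch of the ECN marking decision described above.
// Below minThreshold: no marking. Between min and max: mark with a
// probability that grows with queue depth. At or above maxThreshold:
// mark every packet (ECN field set to CE, binary 11).
function shouldMarkCe(
  queueDepthBytes: number,
  minThreshold: number,
  maxThreshold: number,
  maxMarkProbability = 1.0
): boolean {
  if (queueDepthBytes < minThreshold) return false;
  if (queueDepthBytes >= maxThreshold) return true;

  const fillRatio =
    (queueDepthBytes - minThreshold) / (maxThreshold - minThreshold);
  return Math.random() < fillRatio * maxMarkProbability;
}
```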
The practical impact of this mechanism is best illustrated by a hash collision event, such as the one shown in Figure 6-10. In this scenario, multiple GPUs on the left-hand side of the fabric transmit data at line rate. Due to the specific entropy of these flows, the ECMP hashing algorithms on leaf switches 1A-1 and 1A-2 Continue reading
* This post was updated at 11:45 a.m. Pacific time to clarify that the use case described here is a proof of concept and a personal project. Some sections have been updated for clarity.
Matrix is the gold standard for decentralized, end-to-end encrypted communication. It powers government messaging systems, open-source communities, and privacy-focused organizations worldwide.
For the individual developer, however, the appeal is often closer to home: bridging fragmented chat networks (like Discord and Slack) into a single inbox, or simply ensuring your conversation history lives on infrastructure you control. Functionally, Matrix operates as a decentralized, eventually consistent state machine. Instead of a central server pushing updates, homeservers exchange signed JSON events over HTTP, using a conflict resolution algorithm to merge these streams into a unified view of the room's history.
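To make that model a bit more tangible, here is a rough sketch of the kind of signed event homeservers exchange. The field names are illustrative, loosely following the Matrix federation event format, and should not be read as a complete or normative schema.

```ts
// Rough sketch of a Matrix room event (PDU) as exchanged between homeservers.
interface MatrixEvent {
  type: string;                      // e.g. "m.room.message"
  sender: string;                    // "@alice:example.org"
  room_id: string;                   // "!room:example.org"
  origin_server_ts: number;          // milliseconds since the epoch
  content: Record<string, unknown>;  // event payload
  prev_events: string[];             // DAG edges: the events this one follows
  auth_events: string[];             // events that authorize this one
  hashes: { sha256: string };        // content hash
  signatures: Record<string, Record<string, string>>; // per-server signatures
}
```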
But there is a "tax" to running it. Traditionally, operating a Matrix homeserver has meant accepting a heavy operational burden. You have to provision virtual private servers (VPS), tune PostgreSQL for heavy write loads, manage Redis for caching, configure reverse proxies, and handle rotation for TLS certificates. It’s a stateful, heavy beast that demands to be fed time and money, whether you’re using it a lot Continue reading
In the previous VXLAN labs, we covered the basics of Ethernet bridging over VXLAN and a more complex scenario with multiple VLANs.
Now let’s add the EVPN control plane into the mix. The data plane (VLANs mapped into VXLAN-over-IPv4) will remain unchanged, but we’ll use EVPN (a BGP address family) to build the ingress replication lists and MAC-to-VTEP mappings.