When you build on Cloudflare, we consider it our job to do the heavy lifting for you. That’s been true since we introduced Cloudflare Workers in 2017, when we first provided a runtime that let you just focus on building.
That commitment is still true today, and many of today’s announcements are focused on just that — removing friction where possible to free you up to build something great.
There are only so many blog posts we can write (and that you can read)! We have been busy shipping a much longer list of improvements, many of them rolled out steadily over the course of the year. Today’s announcement breaks down all the new capabilities in detail, in one single post. The features being released today include:
Use more APIs from Node.js — including node:fs and node:https
Use models from different providers in AI Search (formerly AutoRAG)
Deploy larger container instances and more concurrent instances to our Containers platform
Run 30 concurrent headless web browsers (previously 10), via the Browser Rendering API
Use the Playwright browser automation library with the Browser Rendering API — now fully supported and GA
We've been busy.
Compatibility with the broad JavaScript developer ecosystem has always been a key strategic investment for us. We believe in open standards and an open web. We want you to see Workers as a powerful extension of your development platform, where you can drop in code that Just Works. To deliver on this goal, the Cloudflare Workers team has spent the past year significantly expanding compatibility with the Node.js ecosystem, enabling hundreds (if not thousands) of popular npm modules to work seamlessly, including the ever-popular Express framework.
We have implemented a substantial subset of the Node.js standard library, focusing on the most commonly used and most frequently requested APIs. These include:
| Module | API documentation |
|---|---|
| node:console | https://nodejs.org/docs/latest/api/console.html |
| node:crypto | https://nodejs.org/docs/latest/api/crypto.html |
| node:dns | https://nodejs.org/docs/latest/api/dns.html |
| node:fs | https://nodejs.org/docs/latest/api/fs.html |
| node:http | https://nodejs.org/docs/latest/api/http.html |
| node:https | https://nodejs.org/docs/latest/api/https.html |
| node:net | https://nodejs.org/docs/latest/api/net.html |
| node:process | https://nodejs.org/docs/latest/api/process.html |
| node:timers | https://nodejs.org/docs/latest/api/timers.html |
| node:tls | https://nodejs.org/docs/latest/api/tls.html |
| node:zlib | https://nodejs.org/docs/latest/api/zlib.html |
Each of these has been carefully implemented to match Node.js' behavior as closely as feasible. Where matching Node.js' behavior is not possible, our implementations will throw a clear error Continue reading
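As a quick illustration, here is a minimal Worker sketch that hashes a request body with node:crypto. This assumes the nodejs_compat compatibility flag is enabled in your Wrangler configuration; the handler shape is the standard Workers fetch export.

```js
// Minimal sketch: a Worker using node:crypto
// (assumes the nodejs_compat compatibility flag is enabled).
import { createHash } from "node:crypto";

export default {
  async fetch(request) {
    const body = await request.text();
    // Compute a SHA-256 digest of the request body, hex-encoded.
    const digest = createHash("sha256").update(body).digest("hex");
    return new Response(`sha256: ${digest}\n`);
  },
};
```

The same import style applies to the other modules in the table above.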
These days the Internet as a whole is mostly constructed out of point-to-point Ethernet circuits, meaning an Ethernet interface (mostly optical) attached

I made most of the Ansible for Networking Engineers webinar public; you can watch those videos without an ipSpace.net account.
Want to spend more time watching free ipSpace.net videos? The complete list is here.
Launching a website or an online community brings people together to create and share. The operators of these platforms, sadly, also have to navigate what happens when bad actors attempt to misuse those destinations to spread the most heinous content, like child sexual abuse material (CSAM).
We are committed to helping anyone on the Internet protect their platform from this kind of misuse. We first launched a CSAM Scanning Tool several years ago to give any website on the Internet the ability to programmatically scan content uploaded to their platform for instances of CSAM, in partnership with the National Center for Missing and Exploited Children (NCMEC), Interpol, and dozens of other organizations committed to protecting children. That release took technology that had previously been available only to the largest social media platforms and provided it to any website.
However, the tool we offered still required setup work that added friction to its adoption. To file reports with NCMEC, our customers needed to create their own credentials, and creating and sharing those credentials was too confusing or too much work for small site owners. We did our best to help them with secondary reports, but we needed a method that made Continue reading
The Internet is in constant motion. Sites scale, traffic shifts, and attackers adapt. Security that worked yesterday may not be enough tomorrow. That’s why the technologies that protect the web — such as Transport Layer Security (TLS) and emerging post-quantum cryptography (PQC) — must also continue to evolve. We want to make sure that everyone benefits from this evolution automatically, so we enabled the strongest protections by default.
During Birthday Week 2024, we announced Automatic SSL/TLS: a service that scans origin server configurations of domains behind Cloudflare, and automatically upgrades them to the most secure encryption mode they support. In the past year, this system has quietly strengthened security for more than 6 million domains — ensuring Cloudflare can always connect to origin servers over the safest possible channel, without customers lifting a finger.
Now, a year after we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security.
Before diving in, let’s review the basics of Transport Layer Security (TLS). The protocol allows two strangers (like a client and server) to communicate securely.
Every secure web session Continue reading
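As a quick illustration of that handshake from the client side, here is a minimal Node.js sketch (not from the truncated post; example.com is a placeholder host) that opens a TLS connection and reports the negotiated parameters:

```js
// Minimal sketch: open a TLS connection and inspect what was negotiated.
import tls from "node:tls";

const socket = tls.connect(
  { host: "example.com", port: 443, servername: "example.com" },
  () => {
    console.log("protocol:", socket.getProtocol()); // e.g. "TLSv1.3"
    console.log("cipher:", socket.getCipher().name);
    socket.end();
  }
);
```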
If we want to keep the web open and thriving, we need more tools that let content creators express how their data may be used while still allowing open access. Today the choices are too limited: either website operators keep their content open to the web and risk people using it for unwanted purposes, or they move their content behind logins and limit their audience.
To address the concerns our customers have today about how their content is being used by crawlers and data scrapers, we are launching the Content Signals Policy. This policy is a new addition to robots.txt that allows you to express your preferences for how your content can be used after it has been accessed.
Robots.txt is a plain text file hosted on your domain that implements the Robots Exclusion Protocol. It lets you specify which crawlers and bots may access which parts of your site. Many crawlers and some bots obey robots.txt files, but not all do.
For example, if you wanted to allow all crawlers to access every part of your site, you could host a robots.txt file that Continue reading
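As a sketch of what that looks like (an illustrative allow-all file using the conventional Robots Exclusion Protocol directives, not the truncated example above):

```txt
# Allow every crawler to access every part of the site
User-agent: *
Disallow:
```

An empty Disallow line means nothing is off limits, which is the canonical allow-all form.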
The Internet is currently transitioning to post-quantum cryptography (PQC) in preparation for Q-Day, when quantum computers break the classical cryptography that underpins all modern computer systems. The US National Institute of Standards and Technology (NIST) recognized the urgency of this transition, announcing that classical cryptography (RSA, Elliptic Curve Cryptography (ECC)) must be deprecated by 2030 and completely disallowed by 2035.
Cloudflare is well ahead of NIST’s schedule. Today, over 45% of human-generated Internet traffic sent to Cloudflare’s network is already post-quantum encrypted. Because we believe that a secure and private Internet should be free and accessible to all, we’re on a mission to include PQC in all our products, without specialized hardware, and at no extra cost to our customers and end users.
That’s why we’re proud to announce that Cloudflare’s WARP client now supports post-quantum key agreement — both in our free consumer WARP client 1.1.1.1, and in our enterprise WARP client, the Cloudflare One Agent.
This upgrade of the WARP client to post-quantum key agreement provides end users with immediate protection for their Internet traffic against harvest-now-decrypt-later attacks. The value Continue reading
The recent Salesloft breach taught us one thing: connections between SaaS applications are hard to monitor and create blind spots for security teams, with potentially disastrous consequences. This will likely not be the last breach of this type.
To fix this, Cloudflare is working toward a set of solutions that consolidates all SaaS connections via a single proxy, for easier monitoring, detection, and response: a SaaS-to-SaaS proxy for everyone.
As we build this, we need feedback from the community, both data owners and SaaS platform providers. If you are interested in gaining early access, please sign up here.
SaaS platform providers, who often offer marketplaces for additional applications, store data on behalf of their customers and ultimately become its trusted guardians. As integrations with marketplace applications take place, that guardianship is put to the test: a breach in any one of these integrations can lead to widespread data exfiltration and tampering, and as more apps are added, the attack surface grows. Security teams working for the data owner have no ability, today, to detect and react to any potential breach.
In this post we explain the underlying technology required to make this work and help keep Continue reading
Cloudflare has a unique vantage point: we see not only how changes in technology shape the Internet, but also how new technologies can unintentionally impact different stakeholders. Take, for instance, the increasing reliance by everyday Internet users on AI-powered chatbots and search summaries. On the one hand, end users are getting information faster than ever before. On the other hand, web publishers, who have historically relied on bringing human eyeballs to their websites to support their businesses, are seeing a dramatic decrease in those eyeballs, which can reduce their ability to create original, high-quality content. This cycle will ultimately hurt end users and AI companies (whose success relies on fresh, high-quality content to train models and provide services) alike.
We are indisputably at a point in time when the Internet needs clear “rules of the road” for AI bot behavior (a note on terminology: throughout this blog we refer to AI bots and crawlers interchangeably). We have had ongoing cross-functional conversations, both internally and with stakeholders and partners across the world, and it’s clear to us that the Internet at large needs key groups — publishers and content creators, bot operators, and Internet infrastructure and cybersecurity companies — to reach a Continue reading
[Updated: September 26, 2025]
Deprecated: see the new version, Ultra Ethernet: Discovery
Libfabric is a communication library that belongs to the OpenFabrics Interfaces (OFI) framework. Its main goal is to provide applications with high-performance and scalable communication services, especially in areas like high-performance computing (HPC) and artificial intelligence (AI). Instead of forcing applications to work directly with low-level networking details, libfabric offers a clean user-space API that hides complexity while still giving applications fast and efficient access to the network.
One of the strengths of libfabric is that it has been designed together with both application developers and hardware vendors. This makes it possible to map application needs closely to the capabilities of modern network hardware. The result is lower software overhead and better efficiency when applications send or receive data.
Ultra Ethernet builds on this foundation by adopting libfabric as its communication abstraction layer. Ultra Ethernet uses the libfabric framework to let endpoints interact with AI frameworks and, ultimately, with each other across GPUs. Libfabric provides a high-performance, low-latency API that hides the details of the underlying transport, so AI frameworks do not need to manage the low-level details of endpoints, buffers, or the underlying Continue reading
Someone approached me after my NOG.HR netlab presentation and said: “wouldn’t it be great if we could just start the lab from an example topology published on GitHub?”
It took me almost a year to get it done, but the functionality finally made it into the 25.09 release:

In iBGP, all routers in the same AS must be fully meshed, meaning every router forms an iBGP session with every other router. This is required because, by default, iBGP does not advertise routes learned from one iBGP peer to another. The full mesh ensures that every router can learn all the routes.
The problem is that in a large network with many iBGP routers, a full mesh quickly becomes unmanageable. For n routers, the number of sessions is n × (n − 1) / 2, so the session count grows quadratically: with just 10 iBGP routers, a full mesh already requires 10 × 9 / 2 = 45 sessions, and larger networks quickly reach hundreds of sessions.
This is where route reflectors come in. A route reflector (RR) removes the need for a full mesh by re-advertising (reflecting) routes learned from one client to its other clients. With this design, you only need a few sessions instead of a full mesh, making the iBGP setup much more scalable: with the same 10 routers and a single RR, you only need 9 sessions, one from each client to the RR, as the sketch below illustrates.
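To make the scaling difference concrete, this small script (hypothetical helper names, not from the original post) computes both session counts:

```js
// Compare iBGP session counts: full mesh vs. a single route reflector
// whose clients are the remaining (n - 1) routers.
function fullMeshSessions(n) {
  return (n * (n - 1)) / 2; // every router peers with every other router
}

function singleRrSessions(n) {
  return n - 1; // each client peers only with the route reflector
}

for (const n of [10, 50, 100]) {
  console.log(
    `${n} routers: full mesh = ${fullMeshSessions(n)}, single RR = ${singleRrSessions(n)}`
  );
}
// 10 routers: full mesh = 45, single RR = 9
// 50 routers: full mesh = 1225, single RR = 49
// 100 routers: full mesh = 4950, single RR = 99
```

The full-mesh count grows quadratically while the route-reflector count grows linearly, which is why RRs are the standard design in large iBGP deployments.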