Automatically Secure: how we upgraded 6,000,000 domains by default to get ready for the Quantum Future

The Internet is in constant motion. Sites scale, traffic shifts, and attackers adapt. Security that worked yesterday may not be enough tomorrow. That’s why the technologies that protect the web — such as Transport Layer Security (TLS) and emerging post-quantum cryptography (PQC) — must also continue to evolve. We want to make sure that everyone benefits from this evolution automatically, so we enabled the strongest protections by default.

During Birthday Week 2024, we announced Automatic SSL/TLS: a service that scans origin server configurations of domains behind Cloudflare, and automatically upgrades them to the most secure encryption mode they support. In the past year, this system has quietly strengthened security for more than 6 million domains — ensuring Cloudflare can always connect to origin servers over the safest possible channel, without customers lifting a finger.

Now, a year after we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security.

The Basics: TLS protocol

Before diving in, let’s review the basics of Transport Layer Security (TLS). The protocol allows two strangers (like a client and server) to communicate securely.
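
As a rough illustration of what the protocol handles for you, here is a minimal sketch of a TLS client using OpenSSL (assuming OpenSSL 3.x and the placeholder hostname example.com); the handshake negotiates the protocol version, authenticates the server's certificate, and agrees on keys before any application data is sent:

    /* Minimal TLS client sketch (assumes OpenSSL 3.x; hostname is a placeholder). */
    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/err.h>

    int main(void) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (!ctx) return 1;

        /* Require at least TLS 1.2 and verify the server certificate
           against the system trust store. */
        SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
        SSL_CTX_set_default_verify_paths(ctx);

        /* BIO_new_ssl_connect wraps the TCP connection and the TLS handshake. */
        BIO *bio = BIO_new_ssl_connect(ctx);
        if (!bio) { SSL_CTX_free(ctx); return 1; }

        SSL *ssl = NULL;
        BIO_get_ssl(bio, &ssl);
        SSL_set_tlsext_host_name(ssl, "example.com");  /* SNI */
        SSL_set1_host(ssl, "example.com");             /* hostname verification */
        BIO_set_conn_hostname(bio, "example.com:443");

        if (BIO_do_connect(bio) <= 0) {                /* handshake happens here */
            ERR_print_errors_fp(stderr);
        } else {
            printf("negotiated %s\n", SSL_get_version(ssl));
        }

        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }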

Every secure web session Continue reading

A simpler path to a safer Internet: an update to our CSAM scanning tool

Launching a website or an online community brings people together to create and share. The operators of these platforms, sadly, also have to navigate what happens when bad actors attempt to misuse those destinations to spread the most heinous content like child sexual abuse material (CSAM).

We are committed to helping anyone on the Internet protect their platform from this kind of misuse. We first launched a CSAM Scanning Tool several years ago, in partnership with the National Center for Missing and Exploited Children (NCMEC), Interpol, and dozens of other organizations committed to protecting children, to give any website on the Internet the ability to programmatically scan content uploaded to their platform for instances of CSAM. That release took technology that had only been available to the largest social media platforms and provided it to any website.

However, the tool we offered still required setup work that added friction to its adoption. To file reports to NCMEC, our customers needed to create their own credentials. That step of creating and sharing credentials was too confusing or too much work for small site owners. We did our best to help them with secondary reports, but we needed a method that made Continue reading

To build a better Internet in the age of AI, we need responsible AI bot principles. Here’s our proposal.

Cloudflare has a unique vantage point: we see not only how changes in technology shape the Internet, but also how new technologies can unintentionally impact different stakeholders. Take, for instance, the increasing reliance by everyday Internet users on AI-powered chatbots and search summaries. On the one hand, end users are getting information faster than ever before. On the other hand, web publishers, who have historically relied on human eyeballs on their websites to support their businesses, are seeing a dramatic decrease in those eyeballs, which can reduce their ability to create original, high-quality content. This cycle will ultimately hurt end users and AI companies (whose success relies on fresh, high-quality content to train models and provide services) alike.

We are indisputably at a point in time when the Internet needs clear “rules of the road” for AI bot behavior (a note on terminology: throughout this blog we refer to AI bots and crawlers interchangeably). We have had ongoing cross-functional conversations, both internally and with stakeholders and partners across the world, and it’s clear to us that the Internet at large needs key groups — publishers and content creators, bot operators, and Internet infrastructure and cybersecurity companies — to reach a Continue reading

Securing data in SaaS to SaaS applications

The recent Salesloft breach taught us one thing: connections between SaaS applications are hard to monitor and create blind spots for security teams, with potentially disastrous consequences. This will likely not be the last breach of this type.

To fix this, Cloudflare is working towards a set of solutions that consolidates all SaaS connections via a single proxy, for easier monitoring, detection and response. A SaaS to SaaS proxy for everyone.

As we build this, we need feedback from the community, both data owners and SaaS platform providers. If you are interested in gaining early access, please sign up here.

SaaS platform providers, who often offer marketplaces for additional applications, store data on behalf of their customers and ultimately become its trusted guardians. As integrations with marketplace applications take place, that guardianship is put to the test. A breach of any one of these integrations (for example, of its keys or tokens) can lead to widespread data exfiltration and tampering. As more apps are added, the attack surface grows. Security teams working for the data owner currently have no way to detect and react to a potential breach.

In this post we explain the underlying technology required to make this work and help keep Continue reading

Securing today for the quantum future: WARP client now supports post-quantum cryptography (PQC)

The Internet is currently transitioning to post-quantum cryptography (PQC) in preparation for Q-Day: the day when quantum computers can break the classical cryptography that underpins all modern computer systems. The US National Institute of Standards and Technology (NIST) recognized the urgency of this transition, announcing that classical cryptography (RSA and elliptic curve cryptography (ECC)) must be deprecated by 2030 and completely disallowed by 2035.

Cloudflare is well ahead of NIST’s schedule. Today, over 45% of human-generated Internet traffic sent to Cloudflare’s network is already post-quantum encrypted. Because we believe that a secure and private Internet should be free and accessible to all, we’re on a mission to include PQC in all our products, without specialized hardware, and at no extra cost to our customers and end users.

That’s why we’re proud to announce that Cloudflare’s WARP client now supports post-quantum key agreement — both in our free consumer 1.1.1.1 WARP client and in our enterprise WARP client, the Cloudflare One Agent.

Post-quantum tunnels using the WARP client

This upgrade of the WARP client to post-quantum key agreement provides end users with immediate protection for their Internet traffic against harvest-now-decrypt-later attacks. The value Continue reading
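
The WARP tunnel itself is not plain TLS, but the idea of hybrid post-quantum key agreement can be sketched with a generic TLS client configuration (a hypothetical example using OpenSSL; the X25519MLKEM768 group requires a build with ML-KEM support, such as OpenSSL 3.5+, and this is not how the WARP client is implemented):

    /* Hypothetical sketch: prefer a hybrid ML-KEM + X25519 key agreement,
       falling back to classical X25519 if the server does not support it. */
    #include <openssl/ssl.h>

    SSL_CTX *pq_client_ctx(void) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (!ctx) return NULL;
        SSL_CTX_set_min_proto_version(ctx, TLS1_3_VERSION);
        if (SSL_CTX_set1_groups_list(ctx, "X25519MLKEM768:X25519") != 1) {
            SSL_CTX_free(ctx);
            return NULL;   /* group not available in this OpenSSL build */
        }
        return ctx;
    }

Because a hybrid group combines a post-quantum KEM with a classical curve, the connection stays protected even if one of the two components is later broken.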

Giving users choice with Cloudflare’s new Content Signals Policy

If we want to keep the web open and thriving, we need more tools to express how content creators want their data to be used while allowing open access. Today, the options are too limited: either website operators keep their content open to the web and risk people using it for unwanted purposes, or they move their content behind logins and limit their audience.

To address the concerns our customers have today about how their content is being used by crawlers and data scrapers, we are launching the Content Signals Policy. This policy is a new addition to robots.txt that allows you to express your preferences for how your content can be used after it has been accessed. 

What robots.txt does, and does not, do today

Robots.txt is a plain text file hosted on your domain that implements the Robots Exclusion Protocol. It lets you indicate which crawlers and bots may access which parts of your site. Many crawlers and some bots obey robots.txt files, but not all do.
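
For instance, a site that wants to keep one (hypothetical) crawler out of a drafts directory while leaving the rest of the site open to everyone might publish something like this:

    User-agent: ExampleBot
    Disallow: /drafts/

    User-agent: *
    Disallow:

An empty Disallow line means the matching crawlers may fetch everything, while the more specific ExampleBot group overrides the wildcard for that one crawler.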

For example, if you wanted to allow all crawlers to access every part of your site, you could host a robots.txt file that Continue reading

Ultra Ethernet: Resource Initialization

Introduction to libfabric and Ultra Ethernet

[Updated: September 26, 2025]

Libfabric is a communication library that belongs to the OpenFabrics Interfaces (OFI) framework. Its main goal is to provide applications with high-performance and scalable communication services, especially in areas like high-performance computing (HPC) and artificial intelligence (AI). Instead of forcing applications to work directly with low-level networking details, libfabric offers a clean user-space API that hides complexity while still giving applications fast and efficient access to the network.

One of the strengths of libfabric is that it has been designed together with both application developers and hardware vendors. This makes it possible to map application needs closely to the capabilities of modern network hardware. The result is lower software overhead and better efficiency when applications send or receive data.

Ultra Ethernet builds on this foundation by adopting libfabric as its communication abstraction layer. Ultra Ethernet uses the libfabric framework to let endpoints interact with AI frameworks and, ultimately, with each other across GPUs. Libfabric provides a high-performance, low-latency API that hides the details of the underlying transport, so AI frameworks do not need to manage the low-level details of endpoints, buffers, or the underlying address tables that map communication paths. This Continue reading
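
To make that abstraction concrete, here is a hypothetical sketch of libfabric resource initialization in C: discover a matching provider, then open the fabric, domain, and endpoint objects that later send and receive data (attribute choices and the requested API version are illustrative only):

    /* Hypothetical libfabric initialization sketch: provider discovery,
       then fabric -> domain -> endpoint. Error handling is abbreviated. */
    #include <stdio.h>
    #include <rdma/fabric.h>
    #include <rdma/fi_domain.h>
    #include <rdma/fi_endpoint.h>

    int main(void) {
        struct fi_info *hints = fi_allocinfo();
        hints->ep_attr->type = FI_EP_RDM;  /* reliable, connectionless endpoint */
        hints->caps = FI_MSG;              /* basic send/receive capability */

        struct fi_info *info = NULL;
        if (fi_getinfo(FI_VERSION(1, 9), NULL, NULL, 0, hints, &info) != 0) {
            fprintf(stderr, "no matching libfabric provider found\n");
            return 1;
        }

        struct fid_fabric *fabric = NULL;
        struct fid_domain *domain = NULL;
        struct fid_ep *ep = NULL;
        fi_fabric(info->fabric_attr, &fabric, NULL);  /* provider instance */
        fi_domain(fabric, info, &domain, NULL);       /* typically one per NIC */
        fi_endpoint(domain, info, &ep, NULL);         /* transport endpoint */

        /* A completion queue and address vector would be created and bound
           here (fi_cq_open, fi_av_open, fi_ep_bind) before fi_enable(ep). */

        fi_close(&ep->fid);
        fi_close(&domain->fid);
        fi_close(&fabric->fid);
        fi_freeinfo(info);
        fi_freeinfo(hints);
        return 0;
    }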

HW061: Cisco’s Ultra-Reliable Wireless Backhaul

As automation of machinery in industrial environments grows, there is a need for reliable wireless technologies to connect and control mobile assets. Mobile assets cannot tolerate dropped connections or network latency, which could jeopardize safety among other problems. Cisco’s Ultra-Reliable Wireless Backhaul is one such product that promises to deliver reliable wireless in industrial environments. ... Read more »

BGP Route Reflectors, Originator ID and Cluster ID

In iBGP, all routers in the same AS must be fully meshed, meaning every router forms an iBGP session with every other router. This is required because iBGP by default does not advertise routes learned from one iBGP peer to another. The full mesh ensures that every router can learn all the routes.

The problem is that in a large network with many iBGP routers, a full mesh quickly becomes unmanageable. For n routers, the number of sessions is n × (n – 1) / 2, so it grows rapidly: with just 10 iBGP routers you already need 10 × 9 / 2 = 45 sessions, and larger networks quickly end up with hundreds.

This is where route reflectors come in. A route reflector reduces the need for a full mesh by allowing certain routers to reflect routes to others. With this design, you only need a few sessions instead of a full mesh, making the iBGP setup much more scalable. With the same 10 routers and a single route reflector (RR), you only need 9 sessions: one from each client to the reflector.
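
As a rough, IOS-style illustration (hypothetical AS number and addresses), marking a neighbor as a route-reflector-client is all that is needed on the reflector; the cluster ID defaults to the reflector's router ID unless set explicitly, and the originator ID is attached automatically when a route is reflected:

    router bgp 65000
     bgp cluster-id 1.1.1.1
     neighbor 10.0.0.2 remote-as 65000
     neighbor 10.0.0.2 route-reflector-client
     neighbor 10.0.0.3 remote-as 65000
     neighbor 10.0.0.3 route-reflector-client

Both attributes exist for loop prevention: a router drops a reflected route whose originator ID matches its own router ID, and a reflector drops routes whose cluster list already contains its cluster ID.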

Building unique, per-customer defenses against advanced bot threats in the AI era

Today, we are announcing a new approach to catching bots: using models to provide behavioral anomaly detection unique to each bot management customer and stop sophisticated bot attacks. 

With this per-customer approach, we’re giving every bot management customer hyper-personalized security capabilities to stop even the sneakiest bots. We’re doing this not only by making a first-request judgement call, but also by tracking the behavior of bots that play the long game and continuously execute unwanted behavior on our customers’ websites. We want to share how this service works, and where we’re focused. Our new platform has the power to fuel hundreds of thousands of unique detection suites, and we’ve heard our first target loud and clear from site owners: protect websites from the explosion of sophisticated, AI-driven web scraping.

The new arms race: the rise of AI-driven scraping

The battle against malicious bots used to be a simpler affair. Attackers used scripts that were fairly easy to identify through static, predictable signals: a request with a missing User-Agent header, a malformed method name, or traffic from a non-standard port was a clear indicator of malicious intent. However, the Internet is always evolving. As websites became more dynamic to create rich user Continue reading

Deploy your own AI vibe coding platform — in one click!

It’s an exciting time to build applications. With the recent AI-powered "vibe coding" boom, anyone can build a website or application by simply describing what they want in a few sentences. We’re already seeing organizations expose this functionality to both their users and internal employees, empowering anyone to build out what they need.

Today, we’re excited to open-source an AI vibe coding platform, VibeSDK, to enable anyone to run an entire vibe coding platform themselves, end-to-end, with just one click.

Want to see it for yourself? Check out our demo platform that you can use to create and deploy applications. Or better yet, click the button below to deploy your own AI-powered platform, and dive into the repo to learn about how it’s built.

Deploying VibeSDK sets up everything you need to run your own AI-powered development platform:

  • Integration with LLMs to generate code, build applications, debug errors, and iterate in real time, powered by the Agents SDK

  • Isolated development environments that allow users to safely build and preview their applications in secure sandboxes

  • Infinite scale that lets you serve thousands or even millions of applications deployed by your end users, all on Cloudflare’s global network

  • Observability and caching Continue reading

Cloudflare Confidence Scorecards – making AI safer for the Internet

Security and IT teams face an impossible balancing act: employees are adopting AI tools every day, but each tool carries unique risks tied to compliance, data privacy, and security practices. Employees using these tools without seeking prior approval leads to a new type of Shadow IT, referred to as Shadow AI. Preventing Shadow AI requires manually vetting each AI application to determine whether it should be approved or rejected. This isn’t scalable. And blanket bans of AI applications will only drive AI usage deeper underground, making it harder to secure.

That’s why today we are launching Cloudflare Application Confidence Scorecards. This is part of our new suite of AI Security features within the Cloudflare One SASE platform. These scores bring scale and automation to the labor- and time-intensive task of evaluating generative AI and SaaS applications one by one. Instead of spending hours trying to find AI applications’ compliance certifications or data-handling practices, evaluators get a clear score that reflects an application’s safety and trustworthiness. With that signal, decision makers within organizations can confidently set policies or apply guardrails where needed, and block risky tools so their organizations can embrace innovation without compromising security.

Our Cloudflare Application Confidence Continue reading

Why Cloudflare, Netlify, and Webflow are collaborating to support Open Source tools like Astro and TanStack

Open source is the core fabric of the web, and the open source tools that power the modern web depend on the stability and support of the community. 

To ensure two major open source projects have the resources they need, we are proud to announce our financial sponsorship of two cornerstone frameworks in the modern web ecosystem: Astro and TanStack.

Critically, we think it’s important we don’t do this alone — for the open web to continue to thrive, we must bet on and support technologies and frameworks that are open and accessible to all, and not beholden to any one company. 

Which is why we are also excited to announce that, for these sponsorships, we are joining forces with our peers: with Netlify to sponsor TanStack, and with Webflow to sponsor Astro.

Why Astro and TanStack? Investing in the Future of the Frontend

Our decision to support Astro and TanStack was deliberate. These two projects represent distinct but complementary visions for the future of web development. One is redefining the architecture for high-performance, content-driven websites, while the other provides a full-stack toolkit for building the most ambitious web applications.

Astro: the framework for high-performance sites

When it Continue reading

Launching the x402 Foundation with Coinbase, and support for x402 transactions

Cloudflare is partnering with Coinbase to create the x402 Foundation. This foundation’s mission will be to encourage the adoption of the x402 protocol, an updated framework that allows clients and services to exchange value on the web using a common language. In addition to today’s partnership, we are shipping a set of features to allow developers to use x402 in the Agents SDK and our MCP integrations, as well as proposing a new deferred payment scheme.

Payments in the age of agents

Payments on the web have historically been designed for humans. We browse a merchant’s website, show intent by adding items to a cart, and confirm our intent to purchase by inputting our credit card information and clicking “Pay.” But what if you want to enable direct transactions between digital services? We need protocols to allow machine-to-machine transactions. 

Every day, sites on Cloudflare send out over a billion HTTP 402 response codes to bots and crawlers trying to access their content and e-commerce stores. This response code comes with a simple message: “Payment Required.”
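
On the wire, that bare signal is little more than a status line (a generic illustration; the headers and body that describe actual payment requirements are defined by the x402 specification and are not shown here):

    HTTP/1.1 402 Payment Required
    content-type: application/json

    { "error": "payment required" }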

Yet these 402 responses too often go unheard. One reason is a lack of standardization. Without a specification for how to Continue reading