

Earlier in 2023, we announced Super Slurper, a data migration tool that makes it easy to copy large amounts of data to R2 from other cloud object storage providers. Since the announcement, developers have used Super Slurper to run thousands of successful migrations to R2!
While Super Slurper is perfect for cases where you want to move all of your data to R2 at once, there are scenarios where you may want to migrate your data incrementally over time. Maybe you want to avoid the one-time upfront AWS data transfer bill? Or perhaps you have legacy data that may never be accessed, and you only want to migrate what’s required?
Today, we’re announcing the open beta of Sippy, an incremental migration service that copies data from S3 (other cloud providers coming soon!) to R2 as it’s requested, without paying unnecessary cloud egress fees typically associated with moving large amounts of data. On top of addressing vendor lock-in, Sippy makes stressful, time-consuming migrations a thing of the past. All you need to do is replace the S3 endpoint in your application or attach your domain to your new R2 bucket and data will start getting copied Continue reading
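The "replace the S3 endpoint" step can be sketched as follows. This is a minimal sketch with a hypothetical account ID: R2 exposes an S3-compatible endpoint of the form `https://<account-id>.r2.cloudflarestorage.com`, so pointing an existing S3 client (such as boto3) at an R2 bucket is typically a one-line configuration change.

```python
def r2_endpoint(account_id: str) -> str:
    """Build the S3-compatible endpoint URL for a Cloudflare R2 account.

    An existing S3 client can then be pointed at R2 by passing this value
    as its endpoint, leaving the rest of the application unchanged, e.g.:

        s3 = boto3.client("s3", endpoint_url=r2_endpoint(ACCOUNT_ID),
                          aws_access_key_id=..., aws_secret_access_key=...)
    """
    return f"https://{account_id}.r2.cloudflarestorage.com"

# "abc123" is a placeholder; use your own Cloudflare account ID.
print(r2_endpoint("abc123"))
```

With Sippy enabled on the bucket, objects missing from R2 are fetched from S3 on first request and copied in, so reads keep working throughout the incremental migration.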


We launched the Cloudflare Radar Outage Center (CROC) during Birthday Week 2022 as a way of keeping the community up to date on Internet disruptions, including outages and shutdowns, visible in Cloudflare’s traffic data. While some of the entries have their genesis in information from social media posts made by local telecommunications providers or civil society organizations, others are based on an internal traffic anomaly detection and alerting tool. Today, we’re adding this alerting feed to Cloudflare Radar, showing country and network-level traffic anomalies on the CROC as they are detected, as well as making the feed available via API.
Building on this new functionality, as well as the route leaks and route hijacks insights that we recently launched on Cloudflare Radar, we are also launching new Radar notification functionality, enabling you to subscribe to notifications about traffic anomalies, confirmed Internet outages, route leaks, or route hijacks. Using the Cloudflare dashboard’s existing notification functionality, users can set up notifications for one or more countries or autonomous systems, and receive notifications when a relevant event occurs. Notifications may be sent via e-mail or webhooks — the available delivery methods vary according to plan level.
Internet traffic generally follows Continue reading


One of the wonderful things about the Internet is that, whether as a consumer or producer, the cost has continued to come down. Back in the day, it used to be that you needed a server room, a whole host of hardware, and an army of folks to help keep everything up and running. The cloud changed that, but even with that shift, services like SSL or unmetered DDoS protection were out of reach for many. We think that the march towards a more accessible Internet — both through ease of use, and reduced cost — is a wonderful thing, and we’re proud to have played a part in making it happen.
Every now and then, however, the march of progress gets interrupted.
On July 28, 2023, Amazon Web Services (AWS) announced that they would begin to charge “per IP per hour for all public IPv4 addresses, whether attached to a service or not”, starting February 1, 2024. This change will add at least $43 extra per year for every IPv4 address Amazon customers use; this may not sound like much, but we’ve seen back-of-the-napkin analysis that suggests this will result in an approximately $2bn tax on Continue reading
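The per-address figure follows directly from AWS’s announced rate of $0.005 per public IPv4 address per hour:

```python
RATE_PER_HOUR = 0.005      # USD, AWS's announced IPv4 rate
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

annual_cost_per_ip = RATE_PER_HOUR * HOURS_PER_YEAR
print(f"${annual_cost_per_ip:.2f} per IPv4 address per year")  # $43.80

# The cost scales linearly with fleet size, e.g. 1,000 addresses:
print(f"${annual_cost_per_ip * 1000:,.2f} per year for 1,000 addresses")
```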


Starting November 15, 2023, we’re merging Cloudflare Images and Image Resizing.
All Image Resizing features will be available as part of the Cloudflare Images product. To let you calculate your monthly costs more accurately and reliably, we’re changing how we bill for resizing images that aren’t stored with Cloudflare. Our new pricing model will cost $0.50 per 1,000 unique transformations.
If you’re an existing Image Resizing customer, you can continue to use the legacy version. Once the merge is live, you can opt into the new pricing model for more predictable pricing.
In this post, we'll cover why we came to this decision, what's changing, and how these changes might impact you.
When you build an application with images, you need to think about three separate operations: storage, optimization, and delivery.
In 2019, we launched Image Resizing, which can optimize and transform any publicly available image on the Internet based on a set of parameters. This enables our customers to deliver variants of a single image for each use case without creating and storing additional copies.
For example, an e-commerce platform for furniture retailers might use the same image of a lamp on Continue reading
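Transformations like that are requested through URL parameters rather than by storing extra copies. A minimal sketch of building variant URLs using Cloudflare’s `/cdn-cgi/image/` path format (the domain and image path here are hypothetical):

```python
def variant_url(host: str, image_path: str, **options) -> str:
    """Build a Cloudflare image transformation URL of the form
    https://<host>/cdn-cgi/image/<comma-separated options>/<image path>."""
    opts = ",".join(f"{key}={value}" for key, value in options.items())
    return f"https://{host}/cdn-cgi/image/{opts}/{image_path}"

# One stored original, several delivery variants, no extra copies kept.
thumb = variant_url("shop.example.com", "products/lamp.jpg",
                    width=200, quality=75)
hero = variant_url("shop.example.com", "products/lamp.jpg",
                   width=1200, format="auto")
print(thumb)
```

Each combination of parameters is one transformation of the original image, which is what the new unique-transformations pricing counts.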


Currently, Cloudflare Radar curates a list of observed Internet disruptions (which may include partial or complete outages) in the Outage Center. These disruptions are recorded whenever we have sufficient context to correlate with an observed drop in traffic, found by checking status updates and related communications from ISPs, or finding news reports related to cable cuts, government orders, power outages, or natural disasters.
However, we observe more disruptions than we currently report in the Outage Center, because there are cases where we can’t find any source of information that provides a likely cause for what we are observing, even though we can still validate the drop against external data sources such as Georgia Tech’s IODA. This curation process involves manual work, supported by internal tooling that analyzes traffic volumes and detects anomalies automatically, triggering the workflow to find an associated root cause. While the Cloudflare Radar Outage Center is a valuable resource, its key shortcomings are that we are not reporting all disruptions, and that the current curation process is not as timely as we’d like, because we still need to find the context.
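Cloudflare hasn’t published the internals of that detection tooling, but the general idea, flagging traffic that falls well below its recent baseline, can be sketched with a simple rolling z-score. The window size, threshold, and synthetic traffic series below are illustrative assumptions, not Cloudflare’s actual algorithm:

```python
from statistics import mean, stdev

def detect_drops(series, window=24, threshold=3.0):
    """Flag points that sit far below the rolling baseline.

    For each point, compare it against the mean/stdev of the preceding
    `window` samples; a z-score above `threshold` (on the downside)
    marks a candidate traffic-drop anomaly.
    """
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (mu - series[i]) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Synthetic hourly traffic with a mild daily pattern and one sharp drop.
traffic = [100.0 + (i % 5) for i in range(48)]
traffic[40] = 20.0  # simulated outage

print(detect_drops(traffic))  # [40]
```

A real pipeline would, as the post describes, feed detections like these into a curation workflow that hunts for the root cause before anything is published.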
As we announced today in a related blog post, Cloudflare Continue reading
When it comes to migration from one infrastructure to another, there are always complexities and risks involved. Finding the most appropriate approach is key to successful delivery of the desired outcomes, but depends on the customisations that exist in the current environment and other operational, technical and business factors. The technical solution presented in this post is just a single step in the entire process of migrating workloads from an NSX-V-based environment to an NSX-T-based one that is also enabled with NSX Projects.
The overall migration strategy in this use case is “Lift-and-Shift” between two separate environments. The purpose of this post is to outline the steps necessary to create Layer 2 bridges between the NSX-V and NSX-T environments and, potentially, migrate workloads between the two. The products involved are as follows:
| Product | Version |
| --- | --- |
| VMware vCenter Server® (Target) | 8 update 1 |
| VMware vCenter Server® (Source) | 7.0.3.01100 |
| VMware NSX® (Target) | 4.1.0.2.0.21761691 |
| VMware NSX-V (Source) | 6.4.14.20609341 |
| VMware ESXi (Target) | 8 update 1 |
The NSX-V environment will sometimes be referred to as the “source” environment. It consists of 2 ESXi hosts, both with NSX-V installed on them and Continue reading
After discussing names, addresses and routes, and the various addresses we might need in a networking stack, we’re ready to tackle an interesting comment made by a Twitter user as a reply to my Why Is Source Address Validation Still a Problem? blog post:
Maybe the question we should be asking is why there is a source address in the packet header at all.
Most consumers of network services expect a two-way communication – you send some stuff to another node providing an interesting service, and you usually expect to get some stuff back. So far so good. Now for the fun part: how does the server know where to send the stuff back to? There are two possible answers¹:
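Whatever the alternatives, the approach IP actually takes is to carry the sender’s address in every packet: the server reads the source address out of the incoming datagram and uses it as the reply destination, with no other knowledge of the client. A minimal UDP sketch (loopback only, ports chosen by the OS):

```python
import socket

# "Server": bind a datagram socket to an ephemeral port on loopback.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))

# "Client": send one datagram to the server's address.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2)
cli.sendto(b"ping", srv.getsockname())

# The server learns where to reply purely from the packet's source
# address, handed back by recvfrom() alongside the payload.
data, client_addr = srv.recvfrom(1024)
srv.sendto(b"pong", client_addr)

reply, _ = cli.recvfrom(1024)
srv.close()
cli.close()
```

Note that the server never configured anything about the client; the source address in the packet header is the only return path it has, which is exactly why its validity matters.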
On today’s Network Break, Greg Ferro is joined by guest co-host Brad Casemore. You can follow Brad on his blog Crepuscular Circus. Greg and Brad discuss new capabilities in Juniper’s Apstra data center automation software, Versa partnering with Intel to put security software on a NIC, and Cisco buying Splunk for $28 billion. The Linux […]
The post Network Break 448: Cisco Splashes Out $28 Billion For Splunk; OpenTofu Is Vegetarian Alternative To Terraform appeared first on Packet Pushers.

This post is also available in 简体中文, 日本語, 한국어, Deutsch, Español and Français.

Since our founding, Cloudflare has helped customers save on costs, increase security, and boost performance and reliability by migrating legacy hardware functions to the cloud. More recently, our customers have been asking about whether this transition can also improve the environmental impact of their operations.
We are excited to share an independent report published this week that found that switching enterprise network services from on-premises devices to Cloudflare services can cut related carbon emissions by up to 96%, depending on your current network footprint. The majority of these gains come from consolidating services, which improves carbon efficiency by increasing the utilization of servers that are providing multiple network functions.
And we are not stopping there. Cloudflare is also proud to announce that we have applied to set carbon reduction targets through the Science Based Targets initiative (SBTi) in order to help continue to cut emissions across our operations, facilities, and supply chain.
As we wrap up the hottest summer on record, it's clear that we all have a part to play in understanding and reducing our carbon footprint. Partnering with Cloudflare Continue reading


A lot of people rely on Cloudflare. We serve over 46 million HTTP requests per second on average; millions of customers use our services, including 31% of the Fortune 1000. And these numbers are only growing.
Given the privileged position we sit in to help the Internet to operate, we’ve always placed a very large emphasis on transparency during incidents. But we’re constantly striving to do better.
That’s why today we are excited to announce Incident Alerts, available via email, webhook, or PagerDuty. These notifications are easily accessible in the Cloudflare dashboard and customizable to prevent notification overload. Best of all, they’re available to everyone; you simply need a free account to get started.

Without proper transparency, incidents cause confusion and waste resources for anyone who relies on the Internet. With so many different entities working together to make the Internet operate, diagnosing and troubleshooting can be complicated and time-consuming. By far the best solution is for providers to have transparent and proactive alerting, so any time something goes wrong, it’s clear exactly where the problem is.
We understand the importance of proactive and transparent alerting around incidents. We have Continue reading


In the dynamic landscape of modern web applications and organizations, access control is critical. Defining who can do what within your Cloudflare account ensures security and efficient workflow management. Whether you are a single developer, a small team, or a larger enterprise, we’re going to cover two changes we’ve made to simplify user management, along with best practices for using these features (alongside existing ones) to scope access appropriately, so you can work securely with others in your account.
Over the past year, Cloudflare has expanded the list of roles available to everyone from 1 to over 60, and we are continuing to build more. We have also made domain scoping available to all users. This prompts the question: what are roles, and why do they exist?
A role is a named bundle of permissions. Every API call made to Cloudflare requires a specific set of permissions; without them, the call returns a 403. We generally group permissions into a role to Continue reading
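That permission check can be modeled in a few lines. The role names and permission strings below are hypothetical, chosen for illustration; Cloudflare’s actual roles and permission identifiers differ:

```python
ROLES = {
    # Hypothetical role bundles, for illustration only.
    "DNS Administrator": {"dns_records:read", "dns_records:edit"},
    "Analytics Viewer": {"analytics:read"},
}

def authorize(user_roles, required_permissions):
    """Return 200 if the union of the user's role bundles covers the
    call's required permissions, else 403 -- mirroring how each API
    call's required permission set is checked."""
    granted = set()
    for role in user_roles:
        granted |= ROLES[role]
    return 200 if required_permissions <= granted else 403

print(authorize(["Analytics Viewer"], {"dns_records:edit"}))   # 403
print(authorize(["DNS Administrator"], {"dns_records:edit"}))  # 200
```

The value of roles over raw permission lists is exactly this bundling: granting "DNS Administrator" conveys a coherent, auditable set of capabilities in one assignment.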