You no doubt noticed that Cisco bought Splunk last week for $28 billion. It was a deal that had been rumored for at least a year if not longer. The purchase makes a lot of sense from a number of angles. I’m going to focus on a couple of them here with some alliteration to help you understand why this may be one of the biggest signals of a shift in the way that Cisco does business.
Cisco is now a premier security company. The addition of the most powerful SIEM on the market means that Cisco’s security strategy now has a completeness of vision. SecureX has been a very big part of the sales cycle for Cisco as of late, and having all the parts to make it work top to bottom is a big win. XDR is a great thing for organizations, but it doesn’t work without massive amounts of data to analyze. Guess where Splunk comes in?
Aside from some very specialized plays, Cisco now has an answer for just about everything a modern enterprise could want in a security vendor. They may not be number one in every market but Continue reading
The best part of our job is the time we spend talking to Cloudflare customers. We always learn something new and interesting about their IT and security challenges.
In recent years, something about those conversations has changed. More and more, the biggest challenge customers tell us about isn’t something that’s easy to define. And it’s definitely not something you can address with an individual product or feature.
Rather, what we’re hearing from IT and security teams is that they are losing control of their digital environment.
This loss of control comes in a few flavors. They might express hesitance about adopting a new capability they know they need, because of compatibility concerns. Or maybe they’ll talk about how much time and effort it takes to make relatively simple changes, and how those changes take time away from more impactful work. If we had to sum the feeling up, it would be something like, “No matter how large my team or budget, it’s never enough to fully connect and protect the business.”
Does any of this feel familiar? If so, let us tell you that you are far from alone.
The rate of change in Continue reading
Earlier in 2023, we announced Super Slurper, a data migration tool that makes it easy to copy large amounts of data to R2 from other cloud object storage providers. Since the announcement, developers have used Super Slurper to run thousands of successful migrations to R2!
While Super Slurper is perfect for cases where you want to move all of your data to R2 at once, there are scenarios where you may want to migrate your data incrementally over time. Maybe you want to avoid the one-time upfront AWS data transfer bill? Or perhaps you have legacy data that may never be accessed, and you only want to migrate what’s required?
Today, we’re announcing the open beta of Sippy, an incremental migration service that copies data from S3 (other cloud providers coming soon!) to R2 as it’s requested, without paying unnecessary cloud egress fees typically associated with moving large amounts of data. On top of addressing vendor lock-in, Sippy makes stressful, time-consuming migrations a thing of the past. All you need to do is replace the S3 endpoint in your application or attach your domain to your new R2 bucket and data will start getting copied Continue reading
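If you want to see what that endpoint swap looks like in practice, here is a minimal sketch using boto3. R2 exposes an S3-compatible API, so an existing S3 client typically only needs a new endpoint URL; the account ID, bucket name, and credentials below are placeholders.

```python
import boto3

# Sketch only: account ID, bucket name, and credentials are placeholders.
# R2 speaks the S3 API, so the same client code works -- only the endpoint
# changes. With Sippy enabled on the bucket, objects missing from R2 are
# fetched from the source S3 bucket and copied into R2 on first request.
r2 = boto3.client(
    "s3",
    endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    aws_access_key_id="<R2_ACCESS_KEY_ID>",
    aws_secret_access_key="<R2_SECRET_ACCESS_KEY>",
)

obj = r2.get_object(Bucket="my-bucket", Key="images/lamp.jpg")
print(len(obj["Body"].read()), "bytes")
```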
We launched the Cloudflare Radar Outage Center (CROC) during Birthday Week 2022 as a way of keeping the community up to date on Internet disruptions, including outages and shutdowns, visible in Cloudflare’s traffic data. While some of the entries have their genesis in information from social media posts made by local telecommunications providers or civil society organizations, others are based on an internal traffic anomaly detection and alerting tool. Today, we’re adding this alerting feed to Cloudflare Radar, showing country and network-level traffic anomalies on the CROC as they are detected, as well as making the feed available via API.
Building on this new functionality, as well as the route leaks and route hijacks insights that we recently launched on Cloudflare Radar, we are also launching new Radar notification functionality, enabling you to subscribe to notifications about traffic anomalies, confirmed Internet outages, route leaks, or route hijacks. Using the Cloudflare dashboard’s existing notification functionality, users can set up notifications for one or more countries or autonomous systems, and receive notifications when a relevant event occurs. Notifications may be sent via e-mail or webhooks — the available delivery methods vary according to plan level.
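As a rough sketch of consuming this feed programmatically, the snippet below polls a Radar traffic-anomalies endpoint through the standard Cloudflare v4 API. The exact endpoint path, query parameters, and response field names here are assumptions; check the Radar API documentation for the authoritative shapes.

```python
import requests

API_TOKEN = "<CLOUDFLARE_API_TOKEN>"  # placeholder

# Assumed endpoint path and parameters -- verify against the Radar API docs.
resp = requests.get(
    "https://api.cloudflare.com/client/v4/radar/traffic_anomalies",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={"limit": 10},
)
resp.raise_for_status()

# The v4 API wraps payloads in a "result" envelope; the field name used
# below is illustrative.
for anomaly in resp.json().get("result", {}).get("trafficAnomalies", []):
    print(anomaly)
```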
Internet traffic generally follows Continue reading
One of the wonderful things about the Internet is that, whether as a consumer or producer, the cost has continued to come down. Back in the day, it used to be that you needed a server room, a whole host of hardware, and an army of folks to help keep everything up and running. The cloud changed that, but even with that shift, services like SSL or unmetered DDoS protection were out of reach for many. We think that the march towards a more accessible Internet — both through ease of use, and reduced cost — is a wonderful thing, and we’re proud to have played a part in making it happen.
Every now and then, however, the march of progress gets interrupted.
On July 28, 2023, Amazon Web Services (AWS) announced that they would begin to charge “per IP per hour for all public IPv4 addresses, whether attached to a service or not”, starting February 1, 2024. This change will add at least $43 extra per year for every IPv4 address Amazon customers use; this may not sound like much, but we’ve seen back-of-the-napkin analysis that suggests this will result in an approximately $2bn tax on Continue reading
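The per-address figure falls straight out of the announced rate of $0.005 per public IPv4 address per hour:

```python
# Back-of-the-napkin check of the "$43 per year" figure.
hourly_rate = 0.005        # USD per public IPv4 address per hour (AWS's announced rate)
hours_per_year = 24 * 365  # 8,760 hours

annual_cost_per_ip = hourly_rate * hours_per_year
print(f"${annual_cost_per_ip:.2f} per address per year")  # $43.80
```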
Starting November 15, 2023, we’re merging Cloudflare Images and Image Resizing.
All Image Resizing features will be available as part of the Cloudflare Images product. To let you calculate your monthly costs more accurately and reliably, we’re changing how we bill for resizing images that aren’t stored at Cloudflare. Our new pricing model will cost $0.50 per 1,000 unique transformations.
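As a quick, unofficial illustration of the new model (real invoices depend on how unique transformations are counted), the naive arithmetic looks like this:

```python
# Naive cost estimate under the new Images pricing -- not a billing tool.
RATE_PER_1000_UNIQUE = 0.50  # USD per 1,000 unique transformations

def monthly_cost(unique_transformations):
    """Apply the headline rate directly; actual billing rules may differ."""
    return unique_transformations / 1000 * RATE_PER_1000_UNIQUE

print(f"${monthly_cost(250_000):.2f}")  # 250k unique transformations -> $125.00
```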
If you’re an existing Image Resizing customer, you can continue to use the legacy version of Image Resizing. When the merge is live, you can opt into the new pricing model for more predictable pricing.
In this post, we'll cover why we came to this decision, what's changing, and how these changes might impact you.
When you build an application with images, you need to think about three separate operations: storage, optimization, and delivery.
In 2019, we launched Image Resizing, which can optimize and transform any publicly available image on the Internet based on a set of parameters. This enables our customers to deliver variants of a single image for each use case without creating and storing additional copies.
For example, an e-commerce platform for furniture retailers might use the same image of a lamp on Continue reading
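To make that concrete, here is a small sketch that builds transformation URLs using Cloudflare's documented /cdn-cgi/image/&lt;options&gt;/&lt;source-image&gt; scheme; the domain, image path, and option values are invented for the example.

```python
# Illustrative only: the zone and image path are made up, but the
# /cdn-cgi/image/<options>/<source> scheme is Cloudflare's documented
# on-the-fly transformation URL format.
SOURCE = "https://shop.example.com/img/lamp.jpg"

def variant_url(zone, source, **options):
    """Build a transformation URL for one stored source image."""
    opts = ",".join(f"{key}={value}" for key, value in options.items())
    return f"https://{zone}/cdn-cgi/image/{opts}/{source}"

# One stored image, several delivered variants -- no extra copies to manage.
print(variant_url("shop.example.com", SOURCE, width=200, format="auto"))
print(variant_url("shop.example.com", SOURCE, width=1200, quality=85))
```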
Currently, Cloudflare Radar curates a list of observed Internet disruptions (which may include partial or complete outages) in the Outage Center. These disruptions are recorded whenever we have sufficient context to correlate with an observed drop in traffic, found by checking status updates and related communications from ISPs, or finding news reports related to cable cuts, government orders, power outages, or natural disasters.
However, we observe more disruptions than we currently report in the Outage Center, because there are cases where we can’t find any source of information that provides a likely cause for what we are observing, even though we can still validate the drop against external data sources such as Georgia Tech’s IODA. This curation process involves manual work and is supported by internal tooling that allows us to analyze traffic volumes and detect anomalies automatically, triggering the workflow to find an associated root cause. While the Cloudflare Radar Outage Center is a valuable resource, its key shortcomings are that we are not reporting all disruptions, and that the current curation process is not as timely as we’d like, because we still need to find the context.
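For intuition only (this is emphatically not Cloudflare's production detector), flagging a sudden drop against a rolling baseline captures the general shape of the problem:

```python
# Minimal illustration of traffic-drop anomaly detection against a rolling
# baseline; real systems account for seasonality, noise, and per-network scale.
from collections import deque

def detect_drop(samples, window=10, threshold=0.5):
    """Yield indices where traffic falls below threshold * rolling mean."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window and value < threshold * (sum(history) / window):
            yield i
        history.append(value)

traffic = [100, 98, 103, 97, 101] * 5 + [30, 28, 25]  # synthetic hourly volumes
print(list(detect_drop(traffic)))  # flags the final three depressed samples
```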
As we announced today in a related blog post, Cloudflare Continue reading
When it comes to migration from one infrastructure to another, there are always complexities and risks involved. Finding the most appropriate approach is key to the successful delivery of desired outcomes, but it depends on the customisations that exist in the current environment and other operational, technical, and business factors. The technical solution presented in this post is just a single step in the entire process of migrating workloads from an NSX-V-based environment to an NSX-T-based one, which is also enabled with NSX projects.
The overall migration strategy in this use case is “Lift-and-Shift” between two separate environments. The purpose of this post is to outline the steps necessary to create Layer 2 bridges between the NSX-V and NSX-T environments and, potentially, to migrate workloads between the two. The products involved are as follows:
| Product | Version |
| --- | --- |
| VMware vCenter Server® (Target) | 8 Update 1 |
| VMware vCenter Server® (Source) | 7.0.3.01100 |
| VMware NSX® (Target) | 4.1.0.2.0.21761691 |
| VMware NSX-V (Source) | 6.4.14.20609341 |
| VMware ESXi (Target) | 8 Update 1 |
The NSX-V environment will sometimes be referred to as the “source” environment. It consists of two ESXi hosts, both with NSX-V installed on them and Continue reading
After discussing names, addresses and routes, and the various addresses we might need in a networking stack, we’re ready to tackle an interesting comment made by a Twitter user as a reply to my Why Is Source Address Validation Still a Problem? blog post:
Maybe the question we should be asking is why there is a source address in the packet header at all.
Most consumers of network services expect a two-way communication – you send some stuff to another node providing an interesting service, and you usually expect to get some stuff back. So far so good. Now for the fun part: how does the server know where to send the stuff back to? There are two possible answers¹:
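One of those answers is visible in any minimal datagram server: the reply destination is lifted straight from the incoming packet's source fields. A minimal UDP echo sketch (port number arbitrary):

```python
import socket

# Minimal UDP echo sketch: the server learns where to send the reply from
# the source address/port fields of the incoming packet -- the very header
# fields whose existence the quoted question challenges.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))  # arbitrary port for the example

while True:
    data, client_addr = sock.recvfrom(2048)  # client_addr comes from the packet's source fields
    sock.sendto(data, client_addr)           # reply addressed back to that source
```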