Welcome to the 22nd edition of the Cloudflare DDoS Threat Report. Published quarterly, this report offers a comprehensive analysis of the evolving threat landscape of Distributed Denial of Service (DDoS) attacks based on data from the Cloudflare network. In this edition, we focus on the second quarter of 2025. To view previous reports, visit www.ddosreport.com.
June was the busiest month for DDoS attacks in 2025 Q2, accounting for nearly 38% of all observed activity. One notable target was an independent Eastern European news outlet protected by Cloudflare, which reported being attacked following its coverage of a local Pride parade during LGBTQ Pride Month.
DDoS attacks continue to break records. During 2025 Q2, Cloudflare automatically blocked the largest ever reported DDoS attacks, peaking at 7.3 terabits per second (Tbps) and 4.8 billion packets per second (Bpps).
Overall, in 2025 Q2, hyper-volumetric DDoS attacks skyrocketed. Cloudflare blocked over 6,500 hyper-volumetric DDoS attacks, an average of 71 per day.
Although the overall number of DDoS attacks dropped compared to the previous quarter — which saw an unprecedented surge driven by a large-scale campaign targeting Cloudflare’s network and critical Internet infrastructure protected by Cloudflare — the Continue reading
When I first launched this site, many years ago, it served as a humble lab notebook and a place to share short personal stories from my working life. I shared diagrams, Junos configs, and field notes written after late-night maintenance windows or proofs of concept. Those stories took on a life of their own. They brought […]
The post Blog Reboot first appeared on Rick Mur.
netlab release 25.07 was published yesterday. The major new features include:
But wait, there’s much more:

Testing individual components is a good start, but what happens when you need to validate how everything works together? In this post, we’ll show you how to run integration tests in Infrahub that verify your schema, data, and Git workflows in a real, running environment.
You’ll learn how to spin up isolated Infrahub instances on the fly using Docker and Testcontainers, automate schema and data loading, and catch issues before they reach production.
OpsMill has partnered with me for this post, and they also support my blog as a sponsor. The post was originally published at https://opsmill.com/blog/integration-testing-infrahub/
You don’t need to be a Python expert to follow along. We’ll walk through everything step by step, with example code and tooling recommendations. You can also follow this guide in video form on the Cisco DevNet YouTube channel:
All the sample data and code used here are available on the OpsMill GitHub repo, so you can set up your own test environment and try it yourself.
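To make the Testcontainers idea concrete before diving in, here is a minimal Python sketch, not the OpsMill example code itself, that boots an isolated Infrahub stack from a Compose file and checks that the API answers. The compose file name, service name, port, and endpoint below are assumptions for illustration; the real setup lives in the GitHub repo mentioned above.

# Minimal sketch of the Testcontainers approach, not the OpsMill example code.
# Assumptions: docker-compose.test.yml exists in the current directory, the API
# service is named "infrahub-server", listens on port 8000, and serves /api/schema.
import time
import requests
from testcontainers.compose import DockerCompose

def wait_for_http(url, attempts=30, delay=2.0):
    """Poll an HTTP endpoint until it answers, raising after the last attempt."""
    last_error = None
    for _ in range(attempts):
        try:
            return requests.get(url, timeout=5)
        except requests.ConnectionError as exc:
            last_error = exc
            time.sleep(delay)
    raise RuntimeError(f"{url} never became reachable") from last_error

def test_schema_endpoint_is_reachable():
    # Start every service defined in the compose file; tear everything down on exit.
    with DockerCompose(".", compose_file_name="docker-compose.test.yml", pull=True) as stack:
        host = stack.get_service_host("infrahub-server", 8000)
        port = stack.get_service_port("infrahub-server", 8000)
        response = wait_for_http(f"http://{host}:{port}/api/schema")
        # A real integration test would load a schema and objects here, then assert on them.
        assert response.status_code == 200

Run it with pytest; because each test spins up its own containers, tests stay isolated from any production Infrahub instance.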
Previously, we covered how to write smoke and unit tests using the Continue reading
Every major economy that is not the United States or China, the two countries with a disproportionate share of HPC national labs as well as hyperscaler and cloud builder tech titans, wants AI sovereignty a whole lot more than it ever worried about HPC simulation and modeling. …
Brazil Lays The Hardware Foundation For Its AI Ambitions was written by Timothy Prickett Morgan at The Next Platform.
My previous posts, Why Go is not my favourite language and Go programs are not portable, show that I've been critiquing Go for over a decade.
These things about Go are bugging me more and more. Mostly because they’re so unnecessary. The world knew better, and yet Go was created the way it was.
If you've read the previous posts, you'll find some things repeated here. Sorry about that.
Here’s an example of the language forcing you to do the wrong thing. It’s very helpful for the reader of code (and code is read more often than it’s written) to minimize the scope of a variable. If by mere syntax you can tell the reader that a variable is only used in these two lines, then that’s a good thing.
Example:
if err := foo(); err != nil {
    return err
}
(enough has been said about this verbose repeated boilerplate that I don’t have to. I also don’t particularly care)
So that’s fine. The reader knows err is here and only here.
But then you encounter this:
bar, err := foo()
if err != nil {
    return err
}
if err = Continue reading
Hi all, welcome back to the AWS networking series. This is actually part 3 of just Transit Gateway. I know some of you might be thinking, why are we still talking about Transit Gateway? But please bear with me. TGW is such an important concept, and it shows up in almost every architecture you come across.
So far, we've covered what a Transit Gateway is, how to create one, how route tables work, and how to manage associations and propagations. We also looked at how to create a VPN and attach it to the TGW, and we went through the process of sharing a TGW with other AWS accounts using AWS Resource Access Manager (RAM). In this post, we'll look at how to peer a Transit Gateway with another TGW, even when they are in different regions. So let's get to it.
If you're completely new to Transit Gateway, I highly recommend checking out the earlier introductory posts listed below.
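As a rough preview of the workflow (not the exact steps from this post, which uses the console), here is a hedged boto3 sketch of peering two Transit Gateways across regions: request the peering attachment on one side, accept it on the other, and add a static route, since peering attachments do not propagate routes. All IDs, regions, and CIDRs below are placeholders, not values from the post.

# Rough sketch of cross-region TGW peering with boto3; not the post's exact steps.
# Assumptions: both TGWs already exist, the same account owns both sides
# (peering also works across accounts), and all IDs/regions/CIDRs are placeholders.
import time
import boto3

REQUESTER_REGION = "eu-west-1"                 # hypothetical region of the requesting TGW
ACCEPTER_REGION = "us-east-1"                  # hypothetical region of the peer TGW
REQUESTER_TGW_ID = "tgw-0aaaaaaaaaaaaaaaa"     # placeholder IDs
ACCEPTER_TGW_ID = "tgw-0bbbbbbbbbbbbbbbb"
ACCOUNT_ID = "111111111111"

requester = boto3.client("ec2", region_name=REQUESTER_REGION)
accepter = boto3.client("ec2", region_name=ACCEPTER_REGION)

# 1. Request the peering attachment from the requester side.
attachment = requester.create_transit_gateway_peering_attachment(
    TransitGatewayId=REQUESTER_TGW_ID,
    PeerTransitGatewayId=ACCEPTER_TGW_ID,
    PeerAccountId=ACCOUNT_ID,
    PeerRegion=ACCEPTER_REGION,
)["TransitGatewayPeeringAttachment"]
attachment_id = attachment["TransitGatewayPeeringAttachmentId"]

# 2. Wait until the attachment reaches pendingAcceptance, then accept it on the peer side.
while True:
    state = accepter.describe_transit_gateway_peering_attachments(
        TransitGatewayAttachmentIds=[attachment_id]
    )["TransitGatewayPeeringAttachments"][0]["State"]
    if state == "pendingAcceptance":
        break
    time.sleep(10)
accepter.accept_transit_gateway_peering_attachment(
    TransitGatewayPeeringAttachmentId=attachment_id
)

# 3. Peering attachments do not propagate routes, so each side needs static routes
#    toward the attachment. (A real script would wait for the "available" state first.)
requester.create_transit_gateway_route(
    DestinationCidrBlock="10.200.0.0/16",                      # placeholder remote CIDR
    TransitGatewayRouteTableId="tgw-rtb-0cccccccccccccccc",    # placeholder route table
    TransitGatewayAttachmentId=attachment_id,
)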
In this article we will use Multipass to create a virtual machine to experiment with pwru. Multipass is a command line tool for running Ubuntu virtual machines on Mac or Windows. Multipass uses the native virtualization capabilities of the host operating system to simplify the creation of virtual machines.
multipass launch --name=ebpf noble
multipass exec ebpf -- sudo apt update
multipass exec ebpf -- sudo apt -y install git clang llvm make libbpf-dev flex bison golang
multipass exec ebpf -- git clone https://github.com/cilium/pwru.git
multipass exec ebpf --working-directory pwru -- make
multipass exec ebpf -- sudo ./pwru/pwru -h

Run the commands above to create the virtual machine and build pwru from sources.
multipass exec ebpf -- sudo ./pwru/pwru port https

Run pwru to trace https traffic on the virtual machine.
multipass exec ebpf -- curl https://sflow-rt.com

In a second window, run the above command to generate an https request from the virtual machine.
SKB                CPU PROCESS          NETNS      MARK/x IFACE PROTO MTU LEN TUPLE FUNC
0xffff9fc40335a0e8 0   ~r/bin/curl:8966 4026531840 0      0 Continue reading
Welcome to Technology Short Take #186! Yes, it’s been quite a while since I published a Technology Short Take; life has “gotten in the way,” so to speak, of gathering links to share with all of you. However, I think this crazy phase of my life is about to start settling down (I hope so, anyway), and I’m cautiously optimistic that I’ll be able to pick up the blogging pace once again. For now, though, here’s a collection of links I’ve gathered since the last Technology Short Take. I hope you find something useful here!
Many of us old timers (and a lot of young timers) worry about the future of networking. What if the future isn’t a technology, or even AI, but a change in focus? Mike Bushong joins Tom and Russ to argue for operations as the future of networking.
download
As I was running the netlab pre-release integration tests, I noticed that ArubaCX failed the IPv6 Common Services test (it worked before). Here’s the gist of what that test does:
Here’s the relevant part of the netlab lab topology:
This week, Amazon Web Services announced the availability of its first UltraServer pre-configured supercomputers based on Nvidia’s “Grace” CG100 CPUs and its “Blackwell” B200 GPUs in what is called a GB200 NVL72 shared GPU memory configuration. …
Sizing Up AWS “Blackwell” GPU Systems Against Prior GPUs And Trainiums was written by Timothy Prickett Morgan at The Next Platform.
One of the biggest questions that enterprises, governments, academic institutions, and HPC centers the world over are going to have to answer very soon – if they have not made the decision already – is whether they are going to train their own AI models and the inference software stacks that make them useful or just buy them from third parties and get to work integrating AI with their applications a lot faster. …
Will Companies Build Or Buy Their GenAI Models? was written by Timothy Prickett Morgan at The Next Platform.
The metrics include:
Note: InfluxDB Cloud has a free service tier that can be used to test this example.
Save the following compose.yml file on a system running Docker.
configs:
  config.telegraf:
    content: |
      [agent]
        interval = '15s'
        round_interval = true
        omit_hostname = true
      [[outputs.influxdb_v2]]
        urls = ['https://<INFLUXDB_CLOUD_INSTANCE>.cloud2.influxdata.com']
Continue reading
Quicksilver is a key-value store developed internally by Cloudflare to enable fast global replication and low-latency access on a planet scale. It was initially designed to be a global distribution system for configurations, but over time it gained popularity and became the foundational storage system for many products in Cloudflare.
A previous post described how we moved Quicksilver to production and started replicating on all machines across our global network. That is what we called Quicksilver v1: each server has a full copy of the data and updates it through asynchronous replication. The design served us well for some time. However, as our business grew with an ever-expanding data center footprint and a growing dataset, it became more and more expensive to store everything everywhere.
We realized that storing the full dataset on every server is inefficient. Due to the uniform design, data accessed in one region or data center is replicated globally, even if it's never accessed elsewhere. This leads to wasted disk space. We decided to introduce a more efficient system with two new server roles: replica, which stores the full dataset, and proxy, which acts as a persistent cache, evicting unused key-value pairs to free Continue reading