Hyper-volumetric DDoS attacks skyrocket: Cloudflare’s 2025 Q2 DDoS threat report

Welcome to the 22nd edition of the Cloudflare DDoS Threat Report. Published quarterly, this report offers a comprehensive analysis of the evolving threat landscape of Distributed Denial of Service (DDoS) attacks based on data from the Cloudflare network. In this edition, we focus on the second quarter of 2025. To view previous reports, visit www.ddosreport.com.

June was the busiest month for DDoS attacks in 2025 Q2, accounting for nearly 38% of all observed activity. One notable target was an independent Eastern European news outlet protected by Cloudflare, which reported being attacked following its coverage of a local Pride parade during LGBTQ Pride Month.

Key DDoS insights

  • DDoS attacks continue to break records. During 2025 Q2, Cloudflare automatically blocked the largest ever reported DDoS attacks, peaking at 7.3 terabits per second (Tbps) and 4.8 billion packets per second (Bpps).

  • Overall, in 2025 Q2, hyper-volumetric DDoS attacks skyrocketed. Cloudflare blocked over 6,500 hyper-volumetric DDoS attacks, an average of 71 per day. 

  • Although the overall number of DDoS attacks dropped compared to the previous quarter — which saw an unprecedented surge driven by a large-scale campaign targeting Cloudflare’s network and critical Internet infrastructure protected by Cloudflare — the Continue reading

Blog Reboot

When I first launched this site, many years ago, it served as a humble lab notebook and a place to share short personal stories from my working life. I shared diagrams, Junos configs, and field notes written after late-night maintenance windows or proofs of concept. Those stories took on a life of their own. They brought […]

The post Blog Reboot first appeared on Rick Mur.

Integration Testing in Infrahub – Validate Your Automation in Real Environments

Testing individual components is a good start, but what happens when you need to validate how everything works together? In this post, we’ll show you how to run integration tests in Infrahub that verify your schema, data, and Git workflows in a real, running environment.

You’ll learn how to spin up isolated Infrahub instances on the fly using Docker and Testcontainers, automate schema and data loading, and catch issues before they reach production.
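
If you haven't used Testcontainers before, the core idea fits in a short test. Here's a rough Go sketch using testcontainers-go (the post itself works in Python; the image name, port, and readiness path below are placeholders rather than Infrahub's documented values, and a real Infrahub stack includes more services than this single container):

package infrahub_test

import (
    "context"
    "testing"

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/wait"
)

func TestInfrahubBoots(t *testing.T) {
    ctx := context.Background()

    // Placeholder image and endpoint -- check the OpsMill repo for the real values.
    req := testcontainers.ContainerRequest{
        Image:        "registry.opsmill.io/opsmill/infrahub:latest",
        ExposedPorts: []string{"8000/tcp"},
        WaitingFor:   wait.ForHTTP("/").WithPort("8000/tcp"),
    }

    // Start a throwaway, isolated instance for this test run.
    infrahub, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        t.Fatalf("failed to start Infrahub: %v", err)
    }
    t.Cleanup(func() { _ = infrahub.Terminate(ctx) })

    // Load schema and data against this endpoint, then assert on the results.
    endpoint, err := infrahub.Endpoint(ctx, "http")
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("isolated Infrahub instance at %s", endpoint)
}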

SPONSORED

OpsMill has partnered with me for this post, and they also support my blog as a sponsor. The post was originally published at https://opsmill.com/blog/integration-testing-infrahub/

You don’t need to be a Python expert to follow along. We’ll walk through everything step by step, with example code and tooling recommendations. You can also follow this guide in video form on the Cisco DevNet YouTube channel.

All the sample data and code used here are available on the OpsMill GitHub repo, so you can set up your own test environment and try it yourself.

Quick recap

Previously, we covered how to write smoke and unit tests using the Continue reading

Triggering QUIC

We look in detail at the mechanisms used to trigger a client application (typically a browser) to connect to the server using the QUIC transport protocol.

Brazil Lays The Hardware Foundation For Its AI Ambitions

Every major economy that is not the United States or China, the two countries with a disproportionate share of HPC national labs as well as hyperscaler and cloud builder tech titans, wants AI sovereignty a whole lot more than it ever worried about HPC simulation and modeling.

Brazil Lays The Hardware Foundation For Its AI Ambitions was written by Timothy Prickett Morgan at The Next Platform.

NB534: Arista Late to SD-WAN Party but Ready to Dance; CoreWeave Acquires GPUs, Gigawatts for $9 Billion

Take a Network Break! We start with listener follow-up on Arista market share in the enterprise, and then sound the alarm about a remote code execution vulnerability in Adobe Experience Manager. On the news front, Arista buys VeloCloud to charge into the SD-WAN market, CoreWeave acquires a cryptominer to get access to GPUs and electricity... Read more »

Tech Bytes: Build a Reliable DC Network With Nokia Validated Designs (Sponsored)

Today on the Tech Bytes podcast we explore NVDs, or Nokia Validated Designs, for enterprise data center networks. NVDs are developed to address a broad set of customer requirements and undergo extensive testing of hardware, software, and traffic. We talk with sponsor Nokia about its validation process, customer benefits, NVD use cases, technical details, and... Read more »

Go is still not good

Previous posts Why Go is not my favourite language and Go programs are not portable have me critiquing Go for over a decade.

These things about Go are bugging me more and more. Mostly because they’re so unnecessary. The world knew better, and yet Go was created the way it was.

For readers of previous posts you’ll find some things repeated here. Sorry about that.

Error variable scope is forced to be wrong

Here’s an example of the language forcing you to do the wrong thing. It’s very helpful for the reader of code (and code is read more often than it’s written), to minimize the scope of a variable. If by mere syntax you can tell the reader that a variable is just used in these two lines, then that’s a good thing.

Example:

if err := foo(); err != nil {
   return err
}

(enough has been said about this verbose repeated boilerplate that I don’t have to. I also don’t particularly care)

So that’s fine. The reader knows err is here and only here.

But then you encounter this:

bar, err := foo()
if err != nil {
  return err
}
if err =  Continue reading
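
The excerpt is cut off above, but the complaint it is building toward can be reconstructed (my sketch, not the post's own continuation): once err is declared with :=, it is scoped to the rest of the function, and every later check has to reuse it with plain =, so the syntax no longer tells the reader where err is actually live:

package main

import "errors"

func foo() (int, error) { return 42, nil }
func baz(int) error     { return errors.New("baz failed") }

func process() error {
    // err is declared here, but it is scoped to the whole function,
    // not just the two lines that check it.
    bar, err := foo()
    if err != nil {
        return err
    }
    // Later checks must reuse the same err with plain assignment.
    if err = baz(bar); err != nil {
        return err
    }
    return nil
}

func main() { _ = process() }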

AWS Transit Gateway Peering Attachments (VIII)

Hi all, welcome back to the AWS networking series. This is actually part 3 of just Transit Gateway. I know some of you might be thinking, why are we still talking about Transit Gateway? But please bear with me. TGW is such an important concept, and it shows up in almost every architecture you come across.

So far, we've covered what a Transit Gateway is, how to create one, how route tables work, and how to manage associations and propagations. We also looked at how to create a VPN and attach it to the TGW, and we went through the process of sharing a TGW with other AWS accounts using AWS Resource Access Manager (RAM). In this post, we'll look at how to peer a Transit Gateway with another TGW, even when they are in different regions. So let's get to it.
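
For a preview of the moving parts, here is a rough sketch of the two API calls involved, using the AWS SDK for Go v2. The TGW IDs, account number, and regions are placeholders, and in practice you would wait for the attachment to reach the pendingAcceptance state before accepting it:

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
    ctx := context.Background()

    // Request the peering from the local TGW's region.
    cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
    if err != nil {
        log.Fatal(err)
    }
    out, err := ec2.NewFromConfig(cfg).CreateTransitGatewayPeeringAttachment(ctx,
        &ec2.CreateTransitGatewayPeeringAttachmentInput{
            TransitGatewayId:     aws.String("tgw-0123456789abcdef0"), // local TGW
            PeerTransitGatewayId: aws.String("tgw-0fedcba9876543210"), // remote TGW
            PeerAccountId:        aws.String("111111111111"),
            PeerRegion:           aws.String("eu-west-1"),
        })
    if err != nil {
        log.Fatal(err)
    }
    attachID := out.TransitGatewayPeeringAttachment.TransitGatewayAttachmentId
    log.Printf("peering attachment requested: %s", aws.ToString(attachID))

    // The peer side (other account and/or region) must accept the request.
    peerCfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("eu-west-1"))
    if err != nil {
        log.Fatal(err)
    }
    _, err = ec2.NewFromConfig(peerCfg).AcceptTransitGatewayPeeringAttachment(ctx,
        &ec2.AcceptTransitGatewayPeeringAttachmentInput{
            TransitGatewayAttachmentId: attachID,
        })
    if err != nil {
        log.Fatal(err)
    }
}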

If you're completely new to Transit Gateway, I highly recommend checking out the earlier introductory posts listed below.

Tracing network packets with eBPF and pwru

pwru (packet, where are you?) is an open source tool from Cilium that uses eBPF instrumentation in recent Linux kernels to trace network packets through the kernel.

In this article we will use Multipass to create a virtual machine to experiment with pwru. Multipass is a command line tool for running Ubuntu virtual machines on Mac or Windows. Multipass uses the native virtualization capabilities of the host operating system to simplify the creation of virtual machines.

multipass launch --name=ebpf noble
multipass exec ebpf -- sudo apt update
multipass exec ebpf -- sudo apt -y install git clang llvm make libbpf-dev flex bison golang
multipass exec ebpf -- git clone https://github.com/cilium/pwru.git
multipass exec ebpf --working-directory pwru -- make
multipass exec ebpf -- sudo ./pwru/pwru -h
Run the commands above to create the virtual machine and build pwru from sources.
multipass exec ebpf -- sudo ./pwru/pwru port https
Run pwru to trace https traffic on the virtual machine.
multipass exec ebpf -- curl https://sflow-rt.com
In a second window, run the above command to generate an https request from the virtual machine.
SKB                CPU PROCESS          NETNS      MARK/x        IFACE       PROTO  MTU   LEN   TUPLE FUNC
0xffff9fc40335a0e8 0   ~r/bin/curl:8966 4026531840 0               0          Continue reading

Google Brings the Lustre Parallel File System to Its Cloud

Google Cloud now offers a fully managed version of Lustre: the Google Cloud Managed Lustre service went live (“general availability”) globally on July 8. Lustre is an open source, high-performance file system built for running supercomputing jobs, and its ability to stream data in the range of terabytes per second should also make it appealing well beyond traditional HPC. Capacity starts small (very approximately 1TB) and can scale up to 8PiB or more. With this release, Google Cloud has caught up with other cloud providers in offering a cloud-based Lustre. It competes with Amazon FSx and Oracle’s EXAScaler Continue reading

Technology Short Take 186

Welcome to Technology Short Take #186! Yes, it’s been quite a while since I published a Technology Short Take; life has “gotten in the way,” so to speak, of gathering links to share with all of you. However, I think this crazy phase of my life is about to start settling down (I hope so, anyway), and I’m cautiously optimistic that I’ll be able to pick up the blogging pace once again. For now, though, here’s a collection of links I’ve gathered since the last Technology Short Take. I hope you find something useful here!

Networking

Security

Dual-Stack Common-Services VRF Confuses Aruba CX

As I was running the netlab pre-release integration tests, I noticed that ArubaCX failed the IPv6 Common Services test (it worked before). Here’s the gist of what that test does:

  • It creates three VRFs (red, blue, and common)
  • It imports routes from red and blue VRF into the common VRF and routes from the common VRF into the red and blue VRF (the schoolbook example of common services VRF)
  • Just to be on the safe side, it imports red routes into the red VRF and so on.

Here’s the relevant part of the netlab lab topology:

Sizing Up AWS “Blackwell” GPU Systems Against Prior GPUs And Trainiums

This week, Amazon Web Services announced the availability of its first UltraServer pre-configured supercomputers based on Nvidia’s “Grace” CG100 CPUs and its “Blackwell” B200 GPUs in what is called a GB200 NVL72 shared GPU memory configuration.

Sizing Up AWS “Blackwell” GPU Systems Against Prior GPUs And Trainiums was written by Timothy Prickett Morgan at The Next Platform.

Will Companies Build Or Buy Their GenAI Models?

One of the biggest questions that enterprises, governments, academic institutions, and HPC centers the world over are going to have to answer very soon – if they have not made the decision already – is whether they are going to train their own AI models and build the inference software stacks that make them useful, or just buy them from third parties and get to work integrating AI with their applications a lot faster.

Will Companies Build Or Buy Their GenAI Models? was written by Timothy Prickett Morgan at The Next Platform.

AI Metrics with InfluxDB Cloud

The InfluxDB AI Metrics dashboard shown above tracks performance metrics for AI/ML RoCEv2 network traffic, such as large-scale CUDA compute tasks that use NVIDIA Collective Communication Library (NCCL) operations for inter-GPU communication: AllReduce, Broadcast, Reduce, AllGather, and ReduceScatter.

The metrics include:

  • Total Traffic: Total traffic entering fabric
  • Operations: Total RoCEv2 operations broken out by type
  • Core Link Traffic: Histogram of load on fabric links
  • Edge Link Traffic: Histogram of load on access ports
  • RDMA Operations: Total RDMA operations
  • RDMA Bytes: Average RDMA operation size
  • Credits: Average number of credits in RoCEv2 acknowledgements
  • Period: Detected period of compute / exchange activity on fabric (in this case just over 0.5 seconds)
  • Congestion: Total ECN / CNP congestion messages
  • Errors: Total ingress / egress errors
  • Discards: Total ingress / egress discards
  • Drop Reasons: Packet drop reasons

This article shows how to integrate with InfluxDB Cloud instead of running the services locally.

Note: InfluxDB Cloud has a free service tier that can be used to test this example.

Save the following compose.yml file on a system running Docker.

configs:
  config.telegraf:
    content: |
      [agent]
        interval = '15s'
        round_interval = true
        omit_hostname = true
      [[outputs.influxdb_v2]]
        urls = ['https://<INFLUXDB_CLOUD_INSTANCE>.cloud2.influxdata.com']
         Continue reading

Quicksilver v2: evolution of a globally distributed key-value store (Part 1)

Quicksilver is a key-value store developed internally by Cloudflare to enable fast global replication and low-latency access on a planet scale. It was initially designed to be a global distribution system for configurations, but over time it gained popularity and became the foundational storage system for many products in Cloudflare.

A previous post described how we moved Quicksilver to production and started replicating on all machines across our global network. That is what we called Quicksilver v1: each server has a full copy of the data and updates it through asynchronous replication. The design served us well for some time. However, as our business grew with an ever-expanding data center footprint and a growing dataset, it became more and more expensive to store everything everywhere.

We realized that storing the full dataset on every server is inefficient. Due to the uniform design, data accessed in one region or data center is replicated globally, even if it's never accessed elsewhere. This leads to wasted disk space. We decided to introduce a more efficient system with two new server roles: replica, which stores the full dataset, and proxy, which acts as a persistent cache, evicting unused key-value pairs to free Continue reading
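
The excerpt stops mid-sentence, but the proxy role it describes is essentially a read-through cache that evicts cold keys. Here is a generic Go sketch of that idea (an illustration of the concept, not Quicksilver's actual code):

package main

import "container/list"

// proxyCache keeps hot key-value pairs locally and evicts the least
// recently used entry when it grows past capacity.
type proxyCache struct {
    capacity int
    order    *list.List               // most recently used at the front
    items    map[string]*list.Element // key -> element in order
}

type entry struct{ key, value string }

func newProxyCache(capacity int) *proxyCache {
    return &proxyCache{
        capacity: capacity,
        order:    list.New(),
        items:    make(map[string]*list.Element),
    }
}

// get serves a read locally when possible, falling back to fetch (a
// stand-in for a remote read against a replica holding the full dataset).
func (c *proxyCache) get(key string, fetch func(string) string) string {
    if el, ok := c.items[key]; ok {
        c.order.MoveToFront(el)
        return el.Value.(*entry).value
    }
    value := fetch(key)
    c.items[key] = c.order.PushFront(&entry{key, value})
    if c.order.Len() > c.capacity {
        oldest := c.order.Back() // least recently used entry
        c.order.Remove(oldest)
        delete(c.items, oldest.Value.(*entry).key)
    }
    return value
}

func main() {
    cache := newProxyCache(2)
    replica := func(key string) string { return "value-for-" + key }
    _ = cache.get("zone-config", replica) // miss: fetched from the replica
    _ = cache.get("zone-config", replica) // hit: served locally
}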