Category Archives for "Networking"

Offline PKI using 3 YubiKeys and an ARM single board computer

An offline PKI enhances security by physically isolating the certificate authority from network threats. A YubiKey is a low-cost solution to store a root certificate. You also need an air-gapped environment to operate the root CA.

Offline PKI backed up by 3 YubiKeys: 2 for the root CA and 1 for the intermediate CA.

This post describes an offline PKI system using the following components:

  • 2 YubiKeys for the root CA (with a 20-year validity),
  • 1 YubiKey for the intermediate CA (with a 5-year validity), and
  • 1 Libre Computer Sweet Potato as an air-gapped SBC.

You can add more YubiKeys as backups of the root CA if needed. Backups are not needed for the intermediate CA, as you can generate a new one if the current one gets destroyed.

The software part

offline-pki is a small Python application to manage an offline PKI. It relies on yubikey-manager to manage the YubiKeys and on cryptography for the cryptographic operations not executed on the YubiKeys. The application makes some opinionated design choices; notably, it is hard-coded to use the NIST P-384 elliptic curve.
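
For readers curious what that looks like in practice, the following is a minimal, hypothetical sketch using the cryptography library to generate a P-384 key and a self-signed root certificate. It is not code from offline-pki; the subject name and 20-year lifetime are placeholder assumptions.

# Hypothetical sketch (not from offline-pki): generate a NIST P-384 key and a
# self-signed root certificate with the "cryptography" library.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP384R1())            # NIST P-384 curve
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                                   # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365 * 20))  # illustrative 20-year validity
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA384())
)
print(cert.public_bytes(serialization.Encoding.PEM).decode())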

The first step is to reset all your YubiKeys:

$ offline-pki yubikey reset
This will reset the connected YubiKey. Are you sure? [y/N]: y
New PIN code:
Repeat for confirmation:
 Continue reading

Arista EOS Spooky Action at a Distance

This blog post describes yet another bizarre behavior discovered during the netlab integration testing.

It started innocently enough: I was working on the VRRP integration test and wanted to use Arista EOS as the second (probe) device in the VRRP cluster because it produces nice JSON-formatted results that are easy to use in validation tests.

Everything looked great until I ran the test on all platforms on which netlab configures VRRP, and all of them passed apart from Arista EOS (that was before we figured out how Sturgeon’s Law applies to VRRPv3) – a “That’s funny” moment that was directly responsible for me wasting a few hours chasing white rabbits down this trail.

How Cloudflare is using automation to tackle phishing head on

Phishing attacks have grown both in volume and in sophistication over recent years. Today’s threat isn’t just about sending out generic emails: bad actors are using advanced phishing techniques such as man-in-the-middle (MitM) attacks that defeat two-factor authentication, QR codes that bypass detection rules, and artificial intelligence (AI) to craft personalized and targeted phishing messages at scale. Industry organizations such as the Anti-Phishing Working Group (APWG) have shown that phishing incidents continue to climb year over year.

To combat both the increase in phishing attacks and their growing complexity, we have built advanced automation tooling to detect phishing and take action against it.

In the first half of 2024, Cloudflare resolved 37% of phishing reports using automated means, and the median time to take action on hosted phishing reports was 3.4 days. In the second half of 2024, after deployment of our new tooling, we were able to expand our automated systems to resolve 78% of phishing reports with a median time to take action on hosted phishing reports of under an hour.

In this post we dig into some of the details of how we implemented these improvements.

The phishing site problem

Cloudflare has observed a similar Continue reading

Welcome to Security Week 2025

The layer of security around today’s Internet is essential to safeguarding everything: the way we shop online, engage with our communities, access critical healthcare resources, sustain the worldwide digital economy, and beyond. Our dependence on the Internet has led to cyber attacks that are bigger and more widespread than ever, worsening the so-called defender’s dilemma: attackers only need to succeed once, while defenders must succeed every time.

In the past year alone, we discovered and mitigated the largest DDoS attack ever recorded in the history of the Internet – three different times – underscoring the rapid and persistent efforts of threat actors. We helped safeguard the largest year of elections across the globe, with more than half the world’s population eligible to vote, all while witnessing geopolitical tensions and war reflected in the digital world.

2025 already promises to follow suit, with cyberattacks estimated to cost the global economy $10.5 trillion this year. As AI and emerging technologies advance rapidly and threat actors become more agile and creative, the security landscape continues to evolve drastically. Organizations now face a higher volume of attacks, and an influx of more complex threats that carry real-world consequences, Continue reading

AI in Network Observability: The Dawn of Network Intelligence

Let’s face it. The modern network is a beast: a sprawling, complex organism of clouds, data centers, SaaS apps, home offices, and, depending on your industry vertical, factories, offices, retail locations, or branches. Mix in the internet as the backbone to connect them all, as well as an ever-increasing volume and velocity of data, and it becomes clear that traditional monitoring tools are now akin to peering through a keyhole to look at a vast landscape. They simply can’t see the bigger picture, and a new approach is needed: Enter Artificial Intelligence (AI), the game-changer ushering in a new era of Network Intelligence.

From Reactive to Intelligent: The AI Revolution

Remember the days of watching hundreds of dashboards, sifting through endless logs, and deciphering cryptic alerts? Those days are fading fast. Machine Learning and Generative AI are transforming network observability from a reactive chore to a proactive science. ML algorithms, trained on vast datasets of enriched, context-savvy network telemetry, can now detect anomalies in real-time, predict potential outages, foresee cost overruns, and even identify subtle performance degradations that would otherwise go unnoticed. Imagine an AI that can predict a spike in malicious traffic based on historical patterns and automatically Continue reading
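
To make the anomaly-detection idea a little more concrete, here is a minimal, product-agnostic sketch that flags outliers in an interface-throughput series using a trailing mean and standard deviation; the sample values and the 3-sigma threshold are arbitrary assumptions.

# Minimal sketch: flag anomalies in network telemetry with a trailing
# mean/standard deviation. The samples and the 3-sigma threshold are
# illustrative assumptions, not taken from any specific product.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Yield (index, value) for samples deviating more than `threshold`
    standard deviations from the trailing window."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

# Example: mostly steady throughput (Mbps) with one sudden spike.
telemetry = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101, 850, 100]
print(list(detect_anomalies(telemetry)))    # -> [(11, 850)]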

The Root of the DNS

The Root Zone of the DNS is "special" in that it is the critical component that glues the rest of the distributed database that is the Internet's Name System into a coherent whole. So how's it going? And more importantly, how will we scale it up to meet the demands of an inexorably larger Internet in the future?

HN772: Measuring Users’ Digital Experience with Catchpoint Internet Performance Monitoring (Sponsored)

Digital Experience Monitoring (DEM) is all about understanding a user’s application experience, and pinpointing problems if the experience is bad. Under the DEM umbrella, you’ll find Internet Performance Monitoring, or IPM. That’s our topic in today’s episode with sponsor Catchpoint. With more and more applications hosted in the cloud and more employees working remotely, organizations... Read more »

N4N017: Routing Fundamentals

On today’s N Is For Networking, we explore the fundamentals of routing, focusing on layer 3 of the OSI model. We explain the concepts of routers, routing tables, and routing protocols, and discuss why it’s important to have a firm grasp of these concepts before you tackle advanced topics such as VXLAN and EVPN. Today’s... Read more »
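
As a small companion to the episode, here is a minimal sketch of the routing-table operation at the heart of layer 3 forwarding, a longest-prefix-match lookup; the routes and next hops below are made up for illustration.

# Minimal sketch: longest-prefix-match lookup in a toy routing table.
# The prefixes and next hops are made-up examples.
import ipaddress

routes = {
    ipaddress.ip_network("0.0.0.0/0"): "default via 192.0.2.1",
    ipaddress.ip_network("10.0.0.0/8"): "via 10.255.0.1",
    ipaddress.ip_network("10.1.0.0/16"): "via 10.1.255.1",
}

def lookup(destination):
    """Return the next hop of the most specific route covering the destination."""
    addr = ipaddress.ip_address(destination)
    matching = [net for net in routes if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("10.1.2.3"))       # -> via 10.1.255.1 (the /16 beats the /8)
print(lookup("198.51.100.7"))   # -> default via 192.0.2.1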

Hedge 262: Stealthy BGP Attacks

Many providers count on detection in the global routing table to discover and counter BGP route hijacks. What if there were a kind of BGP hijack that cannot be detected using current mechanisms? Henry Birge-Lee joins Tom Ammon and Russ White to discuss a kind of stealthy BGP attack that avoids normal detection, and how we can resolve these attacks.
 
To find out more, check this RIPE video.

Faster, Smarter, Cheaper: The Networking Revolution Powering Generative AI

AI models have rapidly evolved from GPT-2 (1.5B parameters) in 2019 to models like GPT-4 (1+ trillion parameters) and DeepSeek-V3 (671B parameters, using Mixture-of-Experts). More parameters enhance context understanding and text/image generation but increase computational demands. Modern AI is now multimodal, handling text, images, audio, and video (e.g., GPT-4V, Gemini), and task-specific, fine-tuned for applications like drug discovery, financial modeling or coding. As AI models continue to scale and evolve, they require massive parallel computing, specialized hardware (GPUs, TPUs), and crucially, optimized networking to ensure efficient training and inference.

Model Parallelism with Pipeline Parallelism

 

In Model Parallelism, the neural network is partitioned across multiple GPUs, with each GPU responsible for specific layers of the model. This strategy is particularly beneficial for large-scale models that surpass the memory limitations of a single GPU.

Conversely, Pipeline Parallelism involves dividing the model into consecutive stages, assigning each stage to a different GPU. This setup allows data to be processed in a pipeline fashion, akin to an assembly line, enabling simultaneous processing of multiple training samples. Without pipeline parallelism, each GPU would process its inputs sequentially from the complete dataset, while all other GPUs remain idle.

Our example neural network in Figure 8-3 consists of three hidden layers and an output layer. The first hidden layer is assigned to GPU A1, while the second and third hidden layers are assigned to GPU A2 and GPU B1, respectively. The output layer is placed on GPU B2. The training dataset is divided into four micro-batches and stored on the GPUs. These micro-batches are fed sequentially into the first hidden layer on GPU A1. 
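
To make the schedule easier to picture, here is a minimal sketch, not taken from the book, that simulates how the four micro-batches flow through the four stages on GPUs A1, A2, B1, and B2, assuming for simplicity that every stage takes exactly one time step.

# Minimal sketch: simulate a simplified pipeline-parallel forward pass.
# Four stages (GPUs A1, A2, B1, B2) and four micro-batches; the
# one-time-step-per-stage timing is an illustrative assumption.
stages = ["GPU A1", "GPU A2", "GPU B1", "GPU B2"]
micro_batches = 4

schedule = {}   # time step -> list of "GPU processes micro-batch" entries
for mb in range(micro_batches):
    for stage_idx, gpu in enumerate(stages):
        step = mb + stage_idx            # micro-batch mb reaches stage stage_idx at this step
        schedule.setdefault(step, []).append(f"{gpu} processes micro-batch {mb + 1}")

for step in sorted(schedule):
    print(f"step {step}: " + "; ".join(schedule[step]))
# At step 3 all four GPUs work concurrently on different micro-batches;
# without pipelining, only one GPU would be active at any given time.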

Note 8-1. In this example, we use a small training dataset. However, if the dataset is too large to fit on a Continue reading

PP053: Rethinking Secure Network Access and Zero Trust With Bowtie (Sponsored)

On today’s Packet Protector episode we talk with sponsor Bowtie about its secure network access offering. If you think secure network access is just another way to say ‘VPN,’ you’ll want to think again. Bowtie’s approach aims to provide fast, resilient connectivity while also incorporating zero trust network access, a secure Web gateway, CASB, and... Read more »

How Calico Network Security Works

In the rapidly evolving world of Kubernetes, network security remains one of the most challenging aspects for organizations. The shift to dynamic containerized environments brings challenges like inter-cluster communication, rapid scaling, and multi-cloud deployments. These challenges, compounded by tool sprawl and fragmented visibility, leave teams grappling with operational inefficiencies, misaligned priorities, and increasing vulnerabilities. Without a unified solution, organizations risk security breaches and compliance failures.

Calico’s single platform approach to network security.

Calico reimagines Kubernetes security with a holistic, end-to-end approach that simplifies operations while strengthening defenses. By unifying key capabilities like ingress and egress gateways, microsegmentation, and real-time observability, Calico empowers teams to bridge the gaps between security, compliance, and operational efficiency. The result is a scalable, robust platform that addresses the unique demands of containerized environments without introducing unnecessary complexity. Let’s look at how Calico’s key network security capabilities make this possible.

Calico Ingress Gateway

The Calico Ingress Gateway is a Kubernetes-native solution, built on the Envoy Gateway, that serves as a centralized entry point for managing and securing incoming traffic to your clusters. Implementing the Kubernetes Gateway API specification, it replaces traditional ingress controllers with a more robust, scalable, and flexible architecture that is capable of more Continue reading
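
For readers unfamiliar with it, the Kubernetes Gateway API describes listeners and routes as declarative resources. Below is a minimal, vendor-neutral sketch of a Gateway resource rendered as JSON (which Kubernetes also accepts); the gateway class name is a placeholder, not necessarily the one Calico uses.

# Minimal sketch: a generic Kubernetes Gateway API "Gateway" resource.
# "example-class" is a placeholder gateway class, not necessarily Calico's.
import json

gateway = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "Gateway",
    "metadata": {"name": "example-gateway", "namespace": "default"},
    "spec": {
        "gatewayClassName": "example-class",
        "listeners": [
            {"name": "http", "protocol": "HTTP", "port": 80},
        ],
    },
}

print(json.dumps(gateway, indent=2))   # could be applied with: kubectl apply -f <file>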

The Linux Bridge MTU Hell

It all started with an innocuous article describing the MTU basics. As the real purpose of the MTU is to prevent packet drops due to fixed-size receiver buffers, and I spend most of my time in virtual labs, I wanted to check how various virtual network devices react to incoming oversized packets.

As the first step, I created a simple netlab topology in which a single link had a slightly larger than usual MTU… and then all hell broke loose.