
Category Archives for "Networking"

DEEP Is Still a Must-Attend Boutique Conference

I love well-organized small conferences, so it wasn’t hard to persuade me to give another talk at the DEEP Conference in Zadar, Croatia. This time, I talked about the role of digital twins in disaster recovery/avoidance testing. You might already know my take on networking digital twins; after covering that, I only had enough time to focus on why bandwidth and latency matter and on how you can emulate limited bandwidth and add latency.

How To Deploy a Local AI via Docker

If you’re tired of worrying about your AI queries, or the data you share within them, being used either to train large language models (LLMs) or to create a profile of you, there are always local AI options you can use. I’ve actually reached the point where the only AI I use is local. For me, it’s not just about the privacy and security, but also the toll AI takes on the energy grids and the environment. If I can do my part to prevent an all-out collapse, you bet I’m going to do it.

Most often, I deploy local AI directly on my machine. There are, however, some instances where I want to quickly deploy a local AI to a remote server (either within my LAN or a server beyond it). When that need arises, I have two choices:

  1. Install a local AI service in the same way I install it on my desktop.
  2. Containerize it.

The benefit of containerizing it is that the locally installed AI is sandboxed from the rest of the system, giving me even more privacy. Also, if I want to stop the locally installed AI, I can do so with a quick and easy Continue reading
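As one illustration of the containerized approach (not necessarily the exact tool the article goes on to use), a local LLM runtime such as Ollama can be deployed with a couple of Docker commands:

```shell
# Run the Ollama container, keeping downloaded models in a named volume
# so they survive container restarts (assumes Docker is already installed).
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull a small model and chat with it inside the container.
docker exec -it ollama ollama run llama3.2
```

Stopping the sandboxed AI is then a single `docker stop ollama`.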

Breaking the ‘Shared-Nothing’ Bottleneck: A NoSQL Paradigm

While there is no single storage architecture model that fits all NoSQL databases, the often recommended approach is a distributed, shared-nothing architecture using local storage (often flash-based) at each node. At the storage hardware level, direct-attached storage (DAS) would be an example of shared-nothing architecture. This model provides the high performance, low latency, fault tolerance and availability that business-critical NoSQL databases like Cassandra and MongoDB require. While DAS offers significant advantages, it’s counterproductive to today’s data center climate of reduced CapEx, reduced OpEx and sustainability initiatives. At the same time, critical data services inherent in shared networked storage systems, such as storage area networks (SANs), are missing in DAS. However, with today’s SAN solutions, you can have your cake and eat it, too: efficiency, data services, resilience and yes, high performance and low latency as well. Modernizing your data platform to a SAN model, using a supplier with a disaggregated, software-defined architecture, can deliver the performance and fault tolerance your NoSQL database requires without compromising efficiency.

Why Shared-Nothing Is Common for NoSQL

DAS is a prevalent model for performance-sensitive workloads, like NoSQL databases, because historically local flash, especially

Lab: Drain Traffic From an IS-IS Node Before Starting Maintenance

Here’s a cool feature every routing protocol should have: a flag that tells everyone a node is going down, giving them time to adjust their routing tables before disrupting traffic flow.

OSPF never had such a feature; common implementations set the cost of all interfaces to a very high value to emulate it. BGP got it (the Graceful BGP Session Shutdown) almost 30 years after the protocol was created. IS-IS had the overload bit from day one, and it’s just what an IS-IS router needs to tell everyone else to stop using it for transit traffic. You can try it out in the Drain Traffic Before Node Maintenance lab exercise.

Click here to start the lab in your browser using GitHub Codespaces (or set up your own lab infrastructure). After starting the lab environment, change the directory to feature/5-drain and execute netlab up.
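For example, on a device running FRRouting (the lab may use other devices, and the command names differ across network operating systems; the IS-IS instance name below is just a placeholder), setting the overload bit before maintenance looks roughly like this:

```
rtr# configure terminal
rtr(config)# router isis Gandalf
rtr(config-router)# set-overload-bit
```

Once the overload bit is set in the router’s LSP, other IS-IS routers keep it in their topology but stop computing transit paths through it, so you can take it down without blackholing traffic.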

When to Use BGP, VXLAN, or IP-in-IP: A Practical Guide for Kubernetes Networking

When deploying a Kubernetes cluster, a critical architectural decision is how pods on different nodes communicate. The choice of networking mode directly impacts performance, scalability, and operational overhead. Selecting the wrong mode for your environment can lead to persistent performance issues, troubleshooting complexity, and scalability bottlenecks.

The core problem is that pod IPs are virtual. The underlying physical or cloud network has no native awareness of how to route traffic to a pod’s IP address, like 10.244.1.5. It only knows how to route traffic between the nodes themselves. This gap is precisely what the Container Network Interface (CNI) must bridge.

The OSI Model: understanding Layers 3 and 4 is key to seeing how CNI modes add or avoid packet overhead.

The CNI employs two primary methods to solve this problem:

  1. Overlay Networking (Encapsulation): This method wraps a pod’s packet inside another packet that the underlying network understands. The outer packet is addressed between nodes, effectively creating a tunnel. VXLAN and IP-in-IP are common encapsulation protocols.
  2. Underlay Networking (Routing): This method teaches the network fabric itself how to route traffic directly to pods. It uses a routing protocol like BGP to advertise pod IP routes to the physical Continue reading
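To make the encapsulation trade-off concrete, here is a small sketch (my own illustration, not from the article) of the per-packet overhead each mode adds and what it does to the usable pod MTU, assuming IPv4 outer headers:

```python
# Extra bytes each CNI mode adds to every pod-to-pod packet (IPv4 outer headers).
OVERHEAD_BYTES = {
    "bgp-routing": 0,              # underlay routing: packets travel unmodified
    "ip-in-ip": 20,                # one extra IPv4 header
    "vxlan": 14 + 20 + 8 + 8,      # inner Ethernet + outer IPv4 + UDP + VXLAN = 50
}

def pod_mtu(node_mtu: int, mode: str) -> int:
    """Largest pod packet that still fits in one node-level frame."""
    return node_mtu - OVERHEAD_BYTES[mode]

print(pod_mtu(1500, "vxlan"))        # 1450
print(pod_mtu(1500, "ip-in-ip"))     # 1480
print(pod_mtu(1500, "bgp-routing"))  # 1500
```

This is why CNI plugins lower the pod interface MTU when an overlay is enabled, and why BGP-routed underlays avoid the overhead entirely.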

OSPF Router ID and Loopback Interface Myths

Daniel Dib wrote a nice article describing the history of the loopback interface, triggering an inevitable mention of the role of a loopback interface in OSPF and a related flood of ancient memories on my end.

Before going into the details, let’s get one fact straight: an OSPF router ID was always (at least from the days of OSPFv1, described in RFC 1131) just a 32-bit identifier, not an IPv4 address. Straight from RFC 1131:
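Since the router ID is just a 32-bit number, the familiar dotted-quad form is purely a display convention. A quick sketch (my own illustration) makes the point:

```python
def rid_to_dotted(rid: int) -> str:
    """Render a 32-bit OSPF router ID in the conventional dotted-quad form."""
    return ".".join(str((rid >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# Any 32-bit value is a valid router ID; it need not be a reachable IPv4 address.
print(rid_to_dotted(1))           # 0.0.0.1
print(rid_to_dotted(0xC0A80001))  # 192.168.0.1
```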

Why Modern IPv6 Failed This Massive Kubernetes Networking Test

PARIS — When I worked for NASA in the 1980s, I helped build a Near Space Network tracking program using Datatrieve on VAX/VMS for the backend. When completed, it manually tracked just over a thousand static network links. That’s nothing — nothing — compared to what Starlink and the other satellite mega-constellations must manage. This is not easy, as speakers explained at the OpenInfra Summit Europe 2025. The problem they face is that while the mega-constellations of Low Earth Orbit (LEO) and Medium Earth Orbit (MEO) satellites are revolutionizing telecom, traditional network routing protocols such as Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP) struggle with their dynamic topologies — not to mention the next-generation Internet protocol, IPv6.

The Challenge of Emulating Dynamic Satellite Networks

So, the goal is to emulate large-scale satellite mesh networks where the nodes are constantly moving and falling in and out of contact as they orbit the Earth and the world revolves underneath them. Deutsche Continue reading

NB548: Broadcom Brings Chips to Wi-Fi 8 Party; Attorneys General Scrutinize HPE/Juniper Settlement

Take a Network Break! On today’s coverage, F5 releases an emergency security update after state-backed threat actors breach internal systems, and North Korean attackers use the blockchain to host and hide malware. Broadcom is shipping an 800G NIC aimed at AI workloads, and Broadcom joins the Wi-Fi 8 party early with a sampling of pre-standard... Read more »

AI / ML network performance metrics at scale

The charts above show information from a GPU cluster running an AI / ML training workload. The 244 nodes in the cluster are connected by 100G links to a single large switch. Industry-standard sFlow telemetry from the switch is shown in the two trend charts generated by the sFlow-RT real-time analytics engine. The charts are updated every 100 ms.
  • Per Link Telemetry shows RoCEv2 traffic on 5 randomly selected links from the cluster. Each trend is computed based on sFlow random packet samples collected on the link. The packet header in each sample is decoded and the metric is computed for packets identified as RoCEv2.
  • Combined Fabric-Wide Telemetry combines the signals from all the links to create a fabric wide metric. The signals are highly correlated since the AI training compute / exchange cycle is synchronized across all compute nodes in the cluster. Constructive interference from combining data from all the links removes the noise in each individual signal and clearly shows the traffic pattern for the cluster.
This is a relatively small cluster. For larger clusters, the effect is even more pronounced, resulting in extremely sharp cluster-wide metrics. The sFlow instrumentation embedded as a standard feature of data center Continue reading
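The noise-cancellation effect described above is easy to demonstrate. The sketch below (my own illustration, not sFlow-RT code) simulates a synchronized compute/exchange traffic cycle seen on 244 links, adds independent sampling noise to each link, and shows that averaging across the fabric recovers a much cleaner signal:

```python
import random
import statistics

random.seed(42)
LINKS, SAMPLES = 244, 1000

def cycle(t: int) -> float:
    """Synchronized compute/exchange cycle: high traffic half the time."""
    return 1.0 if (t // 50) % 2 == 0 else 0.2

truth = [cycle(t) for t in range(SAMPLES)]

# Each link sees the same synchronized signal plus independent sampling noise.
links = [[truth[t] + random.gauss(0, 0.5) for t in range(SAMPLES)]
         for _ in range(LINKS)]

# Fabric-wide metric: average the per-link signals at every instant.
fabric = [sum(link[t] for link in links) / LINKS for t in range(SAMPLES)]

single_noise = statistics.pstdev(links[0][t] - truth[t] for t in range(SAMPLES))
fabric_noise = statistics.pstdev(fabric[t] - truth[t] for t in range(SAMPLES))

print(f"per-link noise:    {single_noise:.3f}")
print(f"fabric-wide noise: {fabric_noise:.3f}")
```

Averaging N independent noisy copies of the same synchronized signal shrinks the noise by roughly √N, which is why the fabric-wide chart is so much sharper than any single link, and why the effect grows with cluster size.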

netlab: Embed Files in a Lab Topology

Today, I’ll focus on another feature of the new files plugin – you can use it to embed any (hopefully small) file in a lab topology (configlets are just a special case in which the plugin creates the relative file path from the configlets dictionary data).

You could use this functionality to include configuration files for Linux containers, custom reports, or even plugins in the lab topology, and share a complete solution as a single file that can be downloaded from a GitHub repository.
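A topology using the plugin might look something like the hypothetical sketch below (the `files` attribute names are my assumption; check the netlab files plugin documentation for the exact syntax):

```yaml
plugin: [ files ]

files:
  daemons/frr.conf: |
    hostname r1
    router bgp 65000

nodes: [ r1, r2 ]
```

The appeal is that the embedded file travels with the topology, so a single YAML file downloaded from a GitHub repository carries the whole solution.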

TNO046: Prisma AIRS: Securing the Multi-Cloud and AI Runtime (Sponsored)

Multi-cloud, automation, and AI are changing how modern networks operate and how firewalls and security policies are administered. In today’s sponsored episode with Palo Alto Networks, we dig into offerings such as CLARA (Cloud and AI Risk Assessment) that help ops teams gain more visibility into the structure and workflows of their multi-cloud networks. We... Read more »

Hedge 284: Netops and Corporate Culture

We all know netops, NRE, and devops can increase productivity, increase Mean Time Between Mistakes (MTBM), and decrease MTTR, but how do we deploy and use these tools? We often think of the technical hurdles you face in their deployment, but most of the blockers are actually cultural. Chris Grundemann, Eyvonne, Russ, and Tom discuss the cultural issues with deploying netops on this episode of the Hedge.