HN772: Measuring Users’ Digital Experience with Catchpoint Internet Performance Monitoring (Sponsored)

Digital Experience Monitoring (DEM) is all about understanding a user’s application experience, and pinpointing problems if the experience is bad. Under the DEM umbrella, you’ll find Internet Performance Monitoring, or IPM. That’s our topic in today’s episode with sponsor Catchpoint. With more and more applications hosted in the cloud and more employees working remotely, organizations... Read more »

N4N017: Routing Fundamentals

On today’s N Is For Networking, we explore the fundamentals of routing, focusing on layer 3 of the OSI model. We explain the concepts of routers, routing tables, and routing protocols, and discuss why it’s important to have a firm grasp of these concepts before you tackle advanced topics such as VXLAN and EVPN. Today’s... Read more »

Hedge 262: Stealthy BGP Attacks

Many providers rely on detection in the global routing table to discover and counter BGP route hijacks. What if there were a kind of BGP hijack that cannot be detected using current mechanisms? Henry Birge-Lee joins Tom Ammon and Russ White to discuss a stealthy form of BGP attack that evades normal detection, and how these attacks can be countered.
 
To find out more, check out this RIPE video.

Faster, Smarter, Cheaper: The Networking Revolution Powering Generative AI

AI models have rapidly evolved from GPT-2 (1.5B parameters) in 2019 to models like GPT-4 (1+ trillion parameters) and DeepSeek-V3 (671B parameters, using Mixture-of-Experts). More parameters enhance context understanding and text/image generation but increase computational demands. Modern AI is now multimodal, handling text, images, audio, and video (e.g., GPT-4V, Gemini), and task-specific, fine-tuned for applications like drug discovery, financial modeling, or coding. As AI models continue to scale and evolve, they require massive parallel computing, specialized hardware (GPUs, TPUs), and, crucially, optimized networking to ensure efficient training and inference.

Model Parallelism with Pipeline Parallelism

 

In Model Parallelism, the neural network is partitioned across multiple GPUs, with each GPU responsible for specific layers of the model. This strategy is particularly beneficial for large-scale models that surpass the memory limitations of a single GPU.

Pipeline Parallelism, in turn, divides the model into consecutive stages and assigns each stage to a different GPU. Data then flows through the stages like an assembly line, so multiple training samples can be processed simultaneously. Without pipeline parallelism, the GPUs would work one at a time: each would process the complete dataset through its layers while all the others sit idle.

Our example neural network in Figure 8-3 consists of three hidden layers and an output layer. The first hidden layer is assigned to GPU A1, while the second and third hidden layers are assigned to GPU A2 and GPU B1, respectively. The output layer is placed on GPU B2. The training dataset is divided into four micro-batches and stored on the GPUs. These micro-batches are fed sequentially into the first hidden layer on GPU A1. 
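
To make the schedule concrete, here is a minimal, framework-free sketch of the forward pass; the timing model is a simplification of ours, though the stage names and micro-batch count mirror Figure 8-3:

```python
# A framework-free sketch of the forward-pass pipeline schedule from
# Figure 8-3: four stages (one per GPU) and four micro-batches.
# At time step t, stage s works on micro-batch m = t - s, so a
# micro-batch enters GPU A1 first and reaches GPU B2 three steps later.

STAGES = ["GPU A1 (hidden 1)", "GPU A2 (hidden 2)",
          "GPU B1 (hidden 3)", "GPU B2 (output)"]
NUM_MICROBATCHES = 4

def pipeline_schedule(num_stages, num_microbatches):
    """Yield (time_step, stage_index, micro_batch_index) tuples."""
    total_steps = num_stages + num_microbatches - 1  # fill + steady state + drain
    for t in range(total_steps):
        for s in range(num_stages):
            m = t - s
            if 0 <= m < num_microbatches:
                yield t, s, m

for t, s, m in pipeline_schedule(len(STAGES), NUM_MICROBATCHES):
    print(f"t={t}: {STAGES[s]:18s} processes micro-batch {m}")
```

Running it shows the pipeline filling and draining: during the first and last few steps some GPUs are idle (the so-called pipeline bubble), but in the steady state all four stages work on different micro-batches at once.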

Note 8-1. In this example, we use a small training dataset. However, if the dataset is too large to fit on a Continue reading

PP053: Rethinking Secure Network Access and Zero Trust With Bowtie (Sponsored)

On today’s Packet Protector episode we talk with sponsor Bowtie about its secure network access offering. If you think secure network access is just another way to say ‘VPN,’ you’ll want to think again. Bowtie’s approach aims to provide fast, resilient connectivity while also incorporating zero trust network access, a secure Web gateway, CASB, and... Read more »

How Calico Network Security Works

In the rapidly evolving world of Kubernetes, network security remains one of the most challenging aspects for organizations. The shift to dynamic containerized environments brings challenges like inter-cluster communication, rapid scaling, and multi-cloud deployments. These challenges, compounded by tool sprawl and fragmented visibility, leave teams grappling with operational inefficiencies, misaligned priorities, and increasing vulnerabilities. Without a unified solution, organizations risk security breaches and compliance failures.

Calico’s single platform approach to network security.

Calico reimagines Kubernetes security with a holistic, end-to-end approach that simplifies operations while strengthening defenses. By unifying key capabilities like ingress and egress gateways, microsegmentation, and real-time observability, Calico empowers teams to bridge the gaps between security, compliance, and operational efficiency. The result is a scalable, robust platform that addresses the unique demands of containerized environments without introducing unnecessary complexity. Let’s look at how Calico’s key network security capabilities make this possible.

Calico Ingress Gateway

The Calico Ingress Gateway is a Kubernetes-native solution, built on the Envoy Gateway, that serves as a centralized entry point for managing and securing incoming traffic to your clusters. Implementing the Kubernetes Gateway API specification, it replaces traditional ingress controllers with a more robust, scalable, and flexible architecture that is capable of more Continue reading

The Linux Bridge MTU Hell

It all started with an innocuous article describing the MTU basics. As the real purpose of the MTU is to prevent packet drops due to fixed-size receiver buffers, and as I spend most of my time in virtual labs, I wanted to check how various virtual network devices react to incoming oversized packets.

As the first step, I created a simple netlab topology in which a single link had a slightly larger than usual MTU… and then all hell broke loose.

Europe Takes Another Whack At Homegrown Compute Engines

With RISC-V International, the body controlling the RISC-V instruction set, headquartered in Switzerland for the past five years, RISC-V now has just as much right to call itself indigenous to Europe as Arm Ltd. The British chip company found itself on the other side of the English Channel after the Brexit break-up, and it is still roughly 90 percent owned by Japanese conglomerate SoftBank.

Europe Takes Another Whack At Homegrown Compute Engines was written by Timothy Prickett Morgan at The Next Platform.

Tech Bytes: How Internet Synthetic Transactions Boost App Performance Visibility (Sponsored)

Today on the Tech Bytes podcast we talk about Internet Performance Monitoring, or IPM, with sponsor Catchpoint. Catchpoint provides visibility across the full Internet Stack to help you understand the performance of your SaaS and cloud apps, WAN and branch connections, and more. We’ll talk about how Catchpoint can enrich network monitoring with synthetic transactions... Read more »

Parallelism Strategies in Deep Learning

Introduction

Figure 8-1 depicts some of the model parameters that need to be stored in GPU memory: a) Weight matrices associated with connections to the preceding layer, b) Weighted sum (z), c) Activation values (y), d) Errors (E), e) Local gradients (local ∇), f) Gradients received from peer GPUs (remote ∇), g) Learning rates (LR), and h) Weight adjustment values (Δw).

In addition, the training and test datasets, along with the model code, must also be stored in GPU memory. However, a single GPU may not have enough memory to accommodate all these elements. To address this limitation, an appropriate parallelization strategy must be chosen to efficiently distribute computations across multiple GPUs.
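
As a rough illustration of the problem, the back-of-the-envelope sketch below estimates the memory a training run consumes per parameter; the 1-billion-parameter size and the FP16-plus-Adam setup are illustrative assumptions of ours, not figures from the chapter:

```python
# Back-of-the-envelope GPU memory estimate for training.
# Illustrative assumptions (ours): a 1B-parameter model trained in
# mixed precision (FP16 weights/gradients) with the Adam optimizer,
# which keeps FP32 master weights plus two FP32 moment estimates.

params = 1_000_000_000  # hypothetical 1B-parameter model

bytes_per_param = {
    "FP16 weights":        2,
    "FP16 gradients":      2,
    "FP32 master weights": 4,
    "Adam first moment":   4,
    "Adam second moment":  4,
}

total_gib = 0.0
for name, nbytes in bytes_per_param.items():
    gib = params * nbytes / 2**30
    total_gib += gib
    print(f"{name:20s} ~{gib:4.1f} GiB")

print(f"{'total':20s} ~{total_gib:4.1f} GiB (before activations, errors, and data)")
```

Roughly 15 GiB is spoken for before a single activation, error term, or training sample is stored, which is why the strategies below matter.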

This chapter introduces the most common strategies: data parallelism, model parallelism, pipeline parallelism, and tensor parallelism.


Figure 8-1: Overview of Neural Network Parameters.


Data Parallelism


In data parallelism, each GPU holds an identical copy of the complete model but processes different mini-batches of data. Gradients from all GPUs are averaged and synchronized before the model weights are updated. This approach is effective when the model fits within a single GPU’s memory.
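
As a minimal sketch of that synchronization step, the snippet below simulates gradient averaging with NumPy instead of a real multi-GPU all-reduce; the tiny linear model, four simulated GPUs, and synthetic micro-batches are illustrative assumptions of ours:

```python
import numpy as np

# Data-parallelism sketch: every "GPU" holds the same weights, computes
# gradients on its own micro-batch, and the gradients are averaged
# (the all-reduce step) before a single, shared weight update.

rng = np.random.default_rng(0)
num_gpus, num_features = 4, 3
w = rng.normal(size=num_features)        # identical copy on every GPU

def local_gradient(w, X, y):
    """Mean-squared-error gradient for a linear model y_hat = X @ w."""
    residual = X @ w - y
    return 2.0 * X.T @ residual / len(y)

# Each GPU receives a different micro-batch of the training data.
batches = [(rng.normal(size=(8, num_features)), rng.normal(size=8))
           for _ in range(num_gpus)]

grads = [local_gradient(w, X, y) for X, y in batches]
avg_grad = np.mean(grads, axis=0)        # "all-reduce": average across GPUs

learning_rate = 0.1
w -= learning_rate * avg_grad            # every replica applies the same update
print("synchronized weights:", w)
```

In a real framework the same averaging is performed by an all-reduce collective (for example over NCCL), but the arithmetic is identical.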

In Figure 8-2, the batch of training data is split into eight micro-batches. The first four micro-batches are Continue reading