Multicast PIM Dense Mode (III)

In the previous posts, we covered the basics of multicast, including how sources send traffic to group addresses and how receivers use IGMP to signal their interest to the last hop router. We also looked at how IGMP snooping helps switches forward multicast traffic only to ports with interested receivers.

In this post, we will look at PIM, Protocol Independent Multicast. PIM is the protocol that routers use to build the multicast forwarding tree between the source and the receivers. There are different modes/flavours of PIM, and in this post, we will focus specifically on PIM Dense Mode.

PIM Overview

PIM, Protocol Independent Multicast, is the protocol routers use to build a loop-free multicast distribution tree from the source to the receivers. It is called protocol-independent because it does not rely on any specific unicast Continue reading
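Conceptually, each router in that tree keeps per-(source, group) state: one incoming interface pointing back toward the source, plus a set of outgoing interfaces leading toward receivers. In dense mode that outgoing set starts out broad and shrinks as downstream routers signal that they have no interested receivers. The sketch below shows that state in Go; the type, interface, and address names are hypothetical and not taken from any real router implementation.

package main

import "fmt"

// SGEntry is a hypothetical (source, group) forwarding entry.
type SGEntry struct {
    Source   string          // unicast address of the multicast source
    Group    string          // multicast group address
    Incoming string          // interface pointing back toward the source
    Outgoing map[string]bool // interfaces that still lead toward receivers
}

// Prune removes an outgoing interface once no receivers remain behind it.
func (e *SGEntry) Prune(iface string) { delete(e.Outgoing, iface) }

// Replicate returns the interfaces a packet arriving on iface is copied to.
func (e *SGEntry) Replicate(iface string) []string {
    if iface != e.Incoming { // only accept traffic arriving from the source side
        return nil
    }
    out := make([]string, 0, len(e.Outgoing))
    for i := range e.Outgoing {
        out = append(out, i)
    }
    return out
}

func main() {
    entry := &SGEntry{
        Source:   "10.1.1.10",
        Group:    "239.1.1.1",
        Incoming: "Gi0/0",
        Outgoing: map[string]bool{"Gi0/1": true, "Gi0/2": true},
    }
    entry.Prune("Gi0/2") // a downstream router signalled it has no receivers
    fmt.Println(entry.Replicate("Gi0/0")) // traffic is now copied only to Gi0/1
}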

Big Blue Poised To Peddle Lots Of On Premises GenAI

If you want to know the state of the art in GenAI model development, you watch what the Super 8 hyperscalers and cloud builders are doing, and you also keep an eye on the major model builders outside of these companies – mainly OpenAI, Anthropic, and xAI, as well as a few players in China like DeepSeek.

Big Blue Poised To Peddle Lots Of On Premises GenAI was written by Timothy Prickett Morgan at The Next Platform.

Why Kubernetes Flat Networks Fail at Scale—and Why Your Cluster Needs a Security Hierarchy

Kubernetes networking offers incredible power, but scaling that power often transforms a clean architecture into a tangled web of complexity. Managing traffic flow between hundreds of microservices across dozens of namespaces presents a challenge that touches every layer of the organization, from engineers debugging connections to the architects designing for compliance.

The solution to these diverging challenges lies in bringing structure and validation to standard Kubernetes networking. Here is a look at how Calico Tiers and Staged Network Policies help you get rid of this networking chaos.

The Limits of Flat Networking

The default Kubernetes NetworkPolicy resource operates in a flat hierarchy. In a small cluster, this is manageable. However, in an enterprise environment with multiple tenants, teams, and compliance requirements, “flat” quickly becomes unmanageable and dangerous.

To make this easier, imagine a large office building where every single employee has a key that opens every door. To secure the CEO’s office in a flat network, you have to put “Do Not Enter” signs on every door that could lead to it. That is flat networking, secure by exclusion rather than inclusion.
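To make the flat model concrete, here is a minimal sketch of a default-deny ingress policy built with the standard k8s.io/api/networking/v1 Go types; the policy name and namespace are illustrative. Every NetworkPolicy in a namespace lives at this single level, and the type has no field for a tier, a priority, or an owning team.

package main

import (
    "fmt"

    networkingv1 "k8s.io/api/networking/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Default-deny ingress: selects every pod in the namespace and lists no
    // ingress rules, so nothing is allowed in (until another policy allows it).
    denyAll := networkingv1.NetworkPolicy{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "default-deny-ingress", // illustrative name
            Namespace: "team-a",               // illustrative namespace
        },
        Spec: networkingv1.NetworkPolicySpec{
            PodSelector: metav1.LabelSelector{}, // empty selector = all pods
            PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
        },
    }
    // Note what is missing: no tier, no priority, no ordering. Any other policy
    // in the same namespace is simply unioned with this one.
    fmt.Println(denyAll.Name)
}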

Without a security hierarchy, every new policy risks becoming a potential mistake that overrides others, and debugging connectivity Continue reading

NAN112: Inside the CU Boulder Network Engineering Master’s Program

Eric sits down with two graduates from the CU Boulder Network Engineering Master’s Program to discuss what they learned during their time in the program and how that translated into real-world opportunities and experiences. They also offer some invaluable career advice, from the “seven plus one” formula to the value of asking “dumb questions.”... Read more »

Setting up a VPC Route Server with Pulumi

If you need to work with BGP in your AWS VPCs—so that BGP-learned routes can be injected into a VPC route table—then you will likely need a VPC Route Server. While you could set up a VPC Route Server manually, what’s the fun in that? In this post, I will walk you through a Pulumi program that will set up a VPC Route Server. Afterward, I will discuss some ways you could check the functionality of the VPC Route Server to show that it is indeed working as expected.

To make things as easy as possible, I have added a simple Pulumi program to my GitHub “learning-tools” repository in the aws/vpc-route-server directory. This program sets up a VPC Route Server and its associated components for you, and I will walk through this program in this blog post.

The first step is creating the VPC Route Server itself. The VPC Route Server has no prerequisites, and the primary configuration needed is setting the ASN (Autonomous System Number) the Route Server should use:

rs, err := vpc.NewRouteServer(ctx, "rs", &vpc.RouteServerArgs{
    AmazonSideAsn: pulumi.Int(65534),
    Tags: pulumi.StringMap{
        "Name":     Continue reading

Worth Reading: A Tech Career in 2026

There’s no “networking in 20xx” video this year, so this insightful article by Anil Dash will have to do ;) He seems to be based in Silicon Valley, so keep in mind the Three IT Geographies, but one cannot beat advice like this:

So much opportunity, inspiration, creativity, and possibility lies in applying the skills and experience that you may have from technological disciplines in other realms and industries that are often far less advanced in their deployment of technologies.

As well as:

This too shall pass. One of the great gifts of working in technology is that it’s given so many of us the habit of constantly learning, of always being curious and paying attention to the new things worth discovering.

Hope you’ll find it helpful and at least a bit inspiring.

HW069: The Hamina Clip

Keith sits down with old friend Jussi Kiviniemi, CEO of Hamina, to unveil their new product: The Hamina Clip. Together they discuss this new wireless survey device, including its portable design, its price point, and its ability to help you perform surveys and create heat maps without a floor plan. They also compare it to... Read more »

Ultra Ethernet: Network-Signaled Congestion Control (NSCC) – Overview

Network-Signaled Congestion Control (NSCC)


The Network-Signaled Congestion Control (NSCC) algorithm operates on the principle that the network fabric itself is the best source of truth regarding congestion. Rather than waiting for packet loss to occur, NSCC relies on proactive feedback from switches to adjust transmission rates in real time. The primary mechanism for this feedback is Explicit Congestion Notification (ECN) marking. When a switch interface’s egress queue begins to build up, the switch applies Random Early Detection (RED) logic to mark specific packets. Once the queue crosses its Minimum Threshold, the switch begins randomly marking packets by setting the two ECN bits (the last two bits of the IP header’s Type of Service field) to the Congestion Experienced (CE, 11) codepoint. If congestion worsens and the Maximum Threshold is reached, every packet passing through that interface is marked, providing a clear and urgent signal to the endpoints.
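As a rough illustration of that marking behaviour, the sketch below computes a RED-style marking probability from the current egress queue depth. It is written in Go purely to show the shape of the logic; the threshold values are made up, and real switches implement this in hardware with their own configuration knobs.

package main

import (
    "fmt"
    "math/rand"
)

// markProbability returns the probability that a packet is ECN-marked (CE):
// zero below the Minimum Threshold, rising linearly between the thresholds,
// and one (every packet marked) at or above the Maximum Threshold.
func markProbability(queueDepth, minThresh, maxThresh float64) float64 {
    switch {
    case queueDepth <= minThresh:
        return 0.0
    case queueDepth >= maxThresh:
        return 1.0
    default:
        return (queueDepth - minThresh) / (maxThresh - minThresh)
    }
}

func main() {
    // Illustrative thresholds in KB of buffer occupancy (not from the UEC spec).
    minThresh, maxThresh := 200.0, 800.0
    for _, depth := range []float64{100, 400, 800, 1200} {
        p := markProbability(depth, minThresh, maxThresh)
        marked := rand.Float64() < p // the per-packet marking decision
        fmt.Printf("queue %6.0f KB -> mark probability %.2f, this packet marked: %v\n", depth, p, marked)
    }
}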

The practical impact of this mechanism is best illustrated by a hash collision event, such as the one shown in Figure 6-10. In this scenario, multiple GPUs on the left-hand side of the fabric transmit data at line rate. Due to the specific entropy of these flows, the ECMP hashing algorithms on leaf switches 1A-1 and 1A-2 Continue reading

Building a serverless, post-quantum Matrix homeserver

* This post was updated at 11:45 a.m. Pacific time to clarify that the use case described here is a proof of concept and a personal project. Some sections have been updated for clarity.

Matrix is the gold standard for decentralized, end-to-end encrypted communication. It powers government messaging systems, open-source communities, and privacy-focused organizations worldwide. 

For the individual developer, however, the appeal is often closer to home: bridging fragmented chat networks (like Discord and Slack) into a single inbox, or simply ensuring your conversation history lives on infrastructure you control. Functionally, Matrix operates as a decentralized, eventually consistent state machine. Instead of a central server pushing updates, homeservers exchange signed JSON events over HTTP, using a conflict resolution algorithm to merge these streams into a unified view of the room's history.
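As a rough sketch of what one of those signed JSON events looks like on the wire, the Go snippet below builds and prints a trimmed-down room event. The field names follow the Matrix event format; the identifiers, hash, and signature values are invented placeholders.

package main

import (
    "encoding/json"
    "fmt"
)

// Event is a trimmed-down view of a Matrix room event exchanged between homeservers.
type Event struct {
    Type           string                       `json:"type"`
    Sender         string                       `json:"sender"`
    RoomID         string                       `json:"room_id"`
    OriginServerTS int64                        `json:"origin_server_ts"`
    Content        map[string]any               `json:"content"`
    PrevEvents     []string                     `json:"prev_events"`
    Hashes         map[string]string            `json:"hashes"`
    Signatures     map[string]map[string]string `json:"signatures"`
}

func main() {
    ev := Event{
        Type:           "m.room.message",
        Sender:         "@alice:example.org",
        RoomID:         "!abc123:example.org",
        OriginServerTS: 1700000000000,
        Content:        map[string]any{"msgtype": "m.text", "body": "hello"},
        PrevEvents:     []string{"$previous-event-id"}, // links the event into the room DAG
        Hashes:         map[string]string{"sha256": "…"}, // placeholder content hash
        Signatures: map[string]map[string]string{
            "example.org": {"ed25519:key1": "…"}, // placeholder server signature
        },
    }
    out, _ := json.MarshalIndent(ev, "", "  ")
    fmt.Println(string(out))
    // Each homeserver merges streams of events like this, following prev_events,
    // into its own copy of the room history.
}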

But there is a "tax" to running it. Traditionally, operating a Matrix homeserver has meant accepting a heavy operational burden. You have to provision virtual private servers (VPS), tune PostgreSQL for heavy write loads, manage Redis for caching, configure reverse proxies, and handle rotation for TLS certificates. It’s a stateful, heavy beast that demands to be fed time and money, whether you’re using it a lot Continue reading

IP Address to Organisation Name Map

The whois query tool is useful for identifying which organisation holds an IP Address Prefix or an Autonomous System Number, but not so useful for the reverse query: listing all IP Addresses and Autonomous System Numbers held by an organisation. Here is a resource that can help with such queries.
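For the forward direction, the lookup itself is simple: WHOIS is a text query sent over TCP port 43 and terminated with CRLF (RFC 3912), with the server returning free-form text and closing the connection. Here is a minimal sketch in Go; the server name and the AS number queried are illustrative.

package main

import (
    "fmt"
    "io"
    "net"
)

// whois sends a single RFC 3912 query to a WHOIS server and returns the raw response.
func whois(server, query string) (string, error) {
    conn, err := net.Dial("tcp", net.JoinHostPort(server, "43"))
    if err != nil {
        return "", err
    }
    defer conn.Close()
    if _, err := fmt.Fprintf(conn, "%s\r\n", query); err != nil {
        return "", err
    }
    resp, err := io.ReadAll(conn)
    return string(resp), err
}

func main() {
    // Pick the registry appropriate for the resource you are looking up.
    out, err := whois("whois.arin.net", "AS15169")
    if err != nil {
        fmt.Println("lookup failed:", err)
        return
    }
    fmt.Println(out)
}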

Nvidia’s $2 Billion Investment In CoreWeave Is A Drop In A $250 Billion Bucket

With the hyperscalers and the cloud builders all working on their own CPU and AI XPU designs, it is no wonder that Nvidia has been championing the neoclouds that can’t afford to try to be everything to everyone – this is the very definition of enterprise computing – and that, frankly, are having trouble coming up with the trillions of dollars to cover the 150 gigawatts to more than 200 gigawatts of datacenter capacity that is estimated to be on the books between 2025 and 2030 for AI workloads.

Nvidia’s $2 Billion Investment In CoreWeave Is A Drop In A $250 Billion Bucket was written by Timothy Prickett Morgan at The Next Platform.
