Everyone knows that OSPF is a link-state protocol. Those who study it further also discover that OSPF behaves like a distance-vector protocol between areas, as Type-1 and Type-2 LSAs are not flooded between areas but are instead summarized in Type-3 LSAs. This means that OSPF is a logical star, or hub and spoke, where Area 0 is the backbone and all other areas must connect to Area 0. This is shown below:

With this topology, since all areas connect only to the backbone area, traffic between areas must traverse the backbone:

We learn about this behavior from literature that uses a very straightforward topology in which each ABR is attached to only one area beyond the backbone. Such a topology is shown below:

In such a topology, traffic between RT04 and RT05 has to traverse the backbone. This is shown below:

However, what if you have a topology that is not as clear cut, where an ABR attaches to multiple areas? This is what we will explore in this post. We'll be using the topology below:

In this topology, RT02 and RT03 are ABRs. RT02 is attached to both Area 1 and Area 2 in addition to the backbone, while RT03 Continue reading
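
The excerpt is truncated here, but the distance-vector-between-areas behavior it builds on can be made concrete with a small sketch. The following Python toy model is not from the post; the router names, area assignments, and costs are hypothetical, loosely echoing its diagram. Inside an area a router has the full link-state view (Dijkstra); toward other areas it only sees ABR-injected summary costs, distance-vector style.

```python
import heapq
from collections import defaultdict

# Hypothetical topology (made-up links and costs): RT02 and RT03 are
# ABRs attached to the backbone and to both Area 1 and Area 2.
AREAS = {
    0: [("RT01", "RT02", 10), ("RT01", "RT03", 10)],  # backbone
    1: [("RT02", "RT04", 10), ("RT03", "RT04", 10)],
    2: [("RT02", "RT05", 10), ("RT03", "RT05", 10)],
}

def dijkstra(edges, src):
    """Shortest-path costs within one area (full link-state view)."""
    graph = defaultdict(list)
    for a, b, cost in edges:
        graph[a].append((b, cost))
        graph[b].append((a, cost))
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, c in graph[node]:
            if d + c < dist.get(nbr, float("inf")):
                dist[nbr] = d + c
                heapq.heappush(heap, (d + c, nbr))
    return dist

def interarea_cost(src, src_area, dst, dst_area, abrs):
    """Distance-vector view across areas: intra-area cost to an ABR
    plus the cost that ABR advertises in its Type-3 summary."""
    local = dijkstra(AREAS[src_area], src)
    best = float("inf")
    for abr in abrs:
        summary = dijkstra(AREAS[dst_area], abr)  # ABR's cost to dst
        if abr in local and dst in summary:
            best = min(best, local[abr] + summary[dst])
    return best

# RT04 (Area 1) reaching RT05 (Area 2) via the ABRs:
print(interarea_cost("RT04", 1, "RT05", 2, abrs=["RT02", "RT03"]))  # 20
```

Note that this toy model happily forwards from Area 1 to Area 2 through an ABR attached to both, without touching the backbone; whether real OSPF does the same (given its rules about active backbone connectivity) is exactly the question the post goes on to explore.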
We had all been wondering what VMware would look like once it became part of Broadcom's massive universe following the semiconductor giant's $69 billion acquisition of the virtualization juggernaut. …
VMware Wants To Redefine Private Cloud With VCF 9 was written by Jeffrey Burt at The Next Platform.
Big Blue might be a little late to the AI acceleration game, but it has a captive audience in its System z mainframe and Power Systems servers. …
IBM Shows Off Next-Gen AI Acceleration, On Chip DPU For Big Iron was written by Timothy Prickett Morgan at The Next Platform.
Hardware is always the star of Nvidia's GPU Technology Conference, and this year we got previews of "Blackwell" datacenter GPUs, the cornerstone of a 2025 platform that includes "Grace" CPUs, the NVLink Switch 5 chip, the BlueField-3 DPU, and other components, all of which Nvidia is talking about again this week at the Hot Chips 2024 conference. …
Nvidia Rolls Out Blueprints For The Next Wave Of Generative AI was written by Jeffrey Burt at The Next Platform.
COMMISSIONED Organizations must consider many things before deploying generative AI services, from choosing models and tech stacks to selecting relevant use cases. …
Get Your Data House In Order To Unlock Your Generative AI Strategy was written by Timothy Prickett Morgan at The Next Platform.
Shipping netlab release 1.9.0 involved 36 hours of integration tests, including fifteen VXLAN/EVPN tests covering:
All tests included one or two devices under test and one or more FRR containers running EVPN/VXLAN alongside the devices under test. The results were phenomenal; apart from a few exceptions, everything Just Worked™️.
Effects are multiplicative, not additive, when it comes to increasing compute engine performance. …
Bechtolsheim Outlines Scaling XPU Performance By 100X By 2028 was written by Timothy Prickett Morgan at The Next Platform.
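
The multiplicative-versus-additive point is easy to see with a little arithmetic. The factors below are invented for illustration only, not Bechtolsheim's numbers:

```python
from math import prod

# Hypothetical, independent per-generation improvement factors
# (made-up values, purely to show why gains compound):
factors = {
    "process node": 1.6,
    "microarchitecture": 1.5,
    "lower-precision math": 2.0,
    "memory bandwidth": 1.5,
    "scale-up networking": 1.4,
}

# Additive intuition (wrong): 1 + (0.6 + 0.5 + 1.0 + 0.5 + 0.4) = 4.0x
additive = 1 + sum(f - 1 for f in factors.values())

# Multiplicative reality: the factors compound to roughly 10x.
multiplicative = prod(factors.values())

print(f"additive view:       {additive:.1f}x")   # 4.0x
print(f"multiplicative view: {multiplicative:.1f}x")  # ~10.1x
```

Compounding a ~10x generation twice gets you to ~100x by 2028 without any single 100x breakthrough, which is the shape of the argument in the article.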
I’ve been working on new material over at Rule 11 Academy. This month’s posts are:
This brings us up to a total of 39 lessons. Each lesson runs about 15 minutes, so there are roughly 10 hours of material so far. The trial membership will take you through the end of the year; after the first of the year, the trial membership will last two months.
I love bashing SRv6, so it’s only fair to post a (technical) counterview, this time coming as a comment from Henk Smit.
There are several benefits of SRv6 that I’ve heard of.

Hi all, welcome to the 'Network CI/CD' blog series. To kick things off, let's ask the question, "Why do we even need a CI/CD pipeline for networks?" Instead of diving straight into technical definitions or showing you how to build a CI/CD pipeline, which might make you lose interest, we’ll focus on the reasons behind it. Why should network teams even consider implementing CI/CD?
In this post, we’ll talk about the benefits and the problems it solves, so you can see why it's worth learning. Let's get to it.
Even though I call it the “traditional way,” most of us (myself included) still make changes via the CLI. So, let’s imagine you and two colleagues are managing a campus network with 10 access switches. One of your tasks is to configure VLANs on all of Continue reading
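
The excerpt cuts off here, but to ground the scenario: below is a minimal sketch of that VLAN change done in bulk with Python and Netmiko. The inventory, credentials, and VLAN details are placeholders, and the series may well use different tooling; the point is that a CI/CD pipeline wraps exactly this kind of change with review, validation, and rollback.

```python
from netmiko import ConnectHandler  # pip install netmiko

# Hypothetical inventory: ten campus access switches (made-up IPs).
SWITCHES = [f"10.0.0.{i}" for i in range(11, 21)]

VLAN_CONFIG = [
    "vlan 100",
    "name USERS",
]

def push_vlan(host: str) -> None:
    """Push the VLAN config to one switch over SSH."""
    device = {
        "device_type": "cisco_ios",  # assumption: IOS-like CLI
        "host": host,
        "username": "admin",         # placeholder credentials
        "password": "changeme",
    }
    with ConnectHandler(**device) as conn:
        output = conn.send_config_set(VLAN_CONFIG)
        conn.save_config()
        print(f"{host}: done\n{output}")

if __name__ == "__main__":
    for host in SWITCHES:
        push_vlan(host)
```

Run by hand, this script has the same failure modes the post is about: no peer review, no change record, and a partial rollout if one switch errors out. In a pipeline, the same change would be proposed in Git, validated, and only then deployed.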
When you are designing applications that run at the scale of an entire datacenter, that comprise hundreds to thousands of microservices running on countless individual servers, and that have to be called within a matter of microseconds to give the illusion of a monolithic application, building fully connected, high-bisection-bandwidth Clos networks is a must. …
This AI Network Has No Spine – And That’s A Good Thing was written by Timothy Prickett Morgan at The Next Platform.
Nvidia hit a rare patch of bad news earlier this month when reports began circulating that the company's much-anticipated "Blackwell" GPU accelerators could be delayed by as much as three months due to design flaws. …
When Nvidia Says Hot Chips, It Means Hot Platforms was written by Jeffrey Burt at The Next Platform.
Many network operators think the idea of building rather than buying is out of reach, but is it? Join Steve Dodd, Eyvonne, Tom, and Russ as we discuss the positive and negative aspects of build versus buy, what operators get wrong, and what operators don't often expect.