Archive

Category Archives for "Networking"

History of SDN in Google’s datacentre

I recently read a very interesting post on LinkedIn in which Urs Hölzle, one of the original Google network engineers, celebrated twenty years of Google network innovation. He provided links to the recent paper from Google describing how it developed its datacentre network and how that network has evolved since then. The paper explains how Google applied the Clos network topology in its datacentres and covers the early implementations of software-defined networking that controlled data flows across the network.

One particularly interesting point, raised in the comments on the post, is that Google implemented the original network routing code in Python.

Mr. Hölzle also linked to an independent research report published at the time. It provides an early view of what Google was developing and is interesting to read almost 20 years after it was written.

The post History of SDN in Google’s datacentre appeared first on Open-Source Routing and Network Simulation.

Data Center Fabric Designs: Size Matters

The “should we use the same vendor for fabric spines and leaves?” discussion triggered the expected counterexamples. Here’s one:

I actually have worked with a few orgs that mix vendors at both spine and leaf layer. Can’t take names but they run fairly large streaming services. To me it seems like a play to avoid vendor lock-in, drive price points down and be in front of supply chain issues.

As always, one has to keep two things in mind:

BGP AS Numbers for a Private MPLS/VPN Backbone

One of my readers was building a private MPLS/VPN backbone and wondered whether they should use their public AS number or a private AS number for the backbone. Usually it doesn’t matter; in this case, the deciding factor was how they wanted to connect to the public Internet:

We also plan to peer with multiple external ISPs to advertise our public IP space not directly from our PE routers but from dedicated Internet Routers, adding a firewall between our PEs and external Internet routers.

They could either run BGP between the PE routers, firewall, and WAN routers (see BGP as High-Availability Protocol for more details) or run BGP across a bump-in-the-wire firewall:

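A minimal IOS-style sketch of the second (bump-in-the-wire) option could look like the following; the AS numbers, addresses, and prefixes are made up for illustration. The PE and the dedicated Internet router peer directly across the transparent firewall, the backbone runs in a private AS, and the Internet router strips that AS before advertising the public address space to an upstream ISP.

```
! Illustrative only -- AS numbers and addresses are made up.
! PE router: backbone runs in private AS 65000; it peers with the
! dedicated Internet router straight across the transparent (L2)
! firewall, which runs no BGP itself.
router bgp 65000
 neighbor 192.0.2.2 remote-as 64496
 address-family ipv4
  neighbor 192.0.2.2 activate
!
! Dedicated Internet router: public AS 64496, originates the public
! address space toward the upstream ISP and strips the private
! backbone AS from outgoing updates.
router bgp 64496
 neighbor 192.0.2.1 remote-as 65000
 neighbor 203.0.113.1 remote-as 64500
 address-family ipv4
  neighbor 192.0.2.1 activate
  neighbor 203.0.113.1 activate
  neighbor 203.0.113.1 remove-private-as
  network 198.51.100.0 mask 255.255.255.0
```

With a setup like this, the private backbone AS never appears in the AS paths the ISPs see, so the choice of backbone AS number remains a purely internal decision.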

Adding TS’s IP Address to MAC-VRF (L2RIB) and IP-VRF (L3RIB)

In the previous chapter, we discussed how a VTEP learns the local TS's MAC address and the process through which that MAC address is programmed into the BGP tables. An example VTEP device was configured with a Layer 2 VLAN and an EVPN Instance, without deploying a VRF Context or a VLAN routing interface. This chapter introduces, at a theoretical level, how the VTEP device learns the TS's IP address information, in addition to the TS's MAC address, after we have configured the VRF Context and the routing interface for our example VLAN.


Figure 1-3: MAC-VRF Tenant System’s IP Address Propagation.

I have divided Figure 1-3 into three sections. The top-left section, Integrated Routing and Bridging (IRB), illustrates the components required for intra-tenant routing and their interdependencies. By configuring a Virtual Routing and Forwarding Context (VRF Context), we create a closed routing environment with a per-tenant IP-VRF L3 Routing Information Base (L3RIB). Within the VRF Context, we define the Layer 3 Virtual Network Identifier (L3VNI) along with the Route Distinguisher (RD) and Route Target (RT) values. The RD of the VRF Context enables the use of overlapping IP addresses across different tenants. Based on the RT value of the VRF Context, Continue reading
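
As an illustration of the IRB building blocks described above, here is a minimal NX-OS-style configuration sketch; the tenant name, VNI, VLAN, and addressing are made up and only meant to show where the L3VNI, RD, RT, and routing interface fit together.

```
! Illustrative NX-OS-style sketch; names and numbers are made up.
vlan 77
  vn-segment 10077
!
! VRF Context: per-tenant IP-VRF (L3RIB) with L3VNI, RD, and RT values
vrf context TENANT1
  vni 10077
  rd auto
  address-family ipv4 unicast
    route-target both auto evpn
!
! Routing interface for the example VLAN (anycast gateway for the TSs)
interface Vlan10
  vrf member TENANT1
  ip address 10.10.10.1/24
  fabric forwarding mode anycast-gateway
!
! Core-facing SVI associated with the L3VNI
interface Vlan77
  vrf member TENANT1
  ip forward
!
! Attach the L3VNI to the VTEP's NVE interface
interface nve1
  member vni 10077 associate-vrf
```

The RD and RT values referenced in the text are the ones defined under the vrf context, and the anycast-gateway SVI is the routing interface the chapter refers to: it is what lets the VTEP learn the TS's IP address in addition to its MAC address.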

OSPF Summarization and Split Areas

In the Do We Still Need OSPF Areas and Summarization? blog post, I wrote this somewhat cryptic remark:

The routers advertising a summarized prefix should be connected by a path going exclusively through the part of the network with more specific prefixes. A GRE tunnel also satisfies that criterion; the proof is left as an exercise for the reader.

One of my readers asked for a lengthier explanation, so here we go. Imagine a network with two areas doing inter-area summarization on a /24 boundary:

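A minimal IOS-style sketch of that setup (all addressing made up) could look like the following: two ABRs advertise the same /24 summary for area 1 into the backbone. The point of the remark above is that if area 1 splits between the two routers, each of them keeps advertising the whole /24 and can attract traffic for more specific prefixes it can no longer reach.

```
! Illustrative only -- addressing is made up. Both ABRs advertise the
! same 10.1.1.0/24 summary for area 1 into the backbone area.
!
! ABR1
router ospf 1
 router-id 10.0.0.1
 network 10.0.0.0 0.0.0.255 area 0
 network 10.1.1.0 0.0.0.127 area 1
 area 1 range 10.1.1.0 255.255.255.0
!
! ABR2
router ospf 1
 router-id 10.0.0.2
 network 10.0.0.0 0.0.0.255 area 0
 network 10.1.1.128 0.0.0.127 area 1
 area 1 range 10.1.1.0 255.255.255.0
```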

Tech Bytes: Cisco ThousandEyes Deepens Visibility for Remote Workforce Management (Sponsored)

SecOps, NetOps, and help desks need integrated data, increased context, and the ability to quickly understand interdependencies in order to take on the complex tasks facing them. That’s why ThousandEyes is now integrated with Cisco Secure Access, Cisco’s SSE solution. Tune in to learn about ThousandEyes’ deeper visibility, system process metrics, streamlined test setup, and... Read more »

User Discomfort As A Security Function

If, like me, you grew up in the 80s watching movies, you’ll remember WarGames. I could spend hours lauding this movie, but for the purpose of this post I want to call out the sequence at the beginning when the two airmen are trying to operate the nuclear missile launch computer. It requires the use of two keys, one in the possession of each airman. They must be inserted into two different locks located more than ten feet from each other. The reason is that launching the missile requires two people to agree to do something at the same time. The two-key scene appears in a number of movies as a way to show that so much power needs controls.

However, one thing I wanted to talk about in this post is the notion that those controls need to be visible to be effective. The two-key solution is pretty visible. You carry a key with you, but you can also see the locks that are situated apart from each other. There is a bit of a challenge in getting the keys into the locks and turning them simultaneously. That not only shows that the Continue reading

Simplify Kubernetes Hosted Control Planes with K0smotron

Multicluster Kubernetes gets complicated and expensive fast, especially in dynamic environments. Private cloud multicluster solutions need to wrangle a lot of moving parts:

- Private or public cloud APIs and compute/network/storage resources (or bare metal management)
- Linux and Kubernetes dependencies
- Kubernetes deployment
- etcd configuration
- Load balancer integration
- And, potentially, other details, too

So they’re fragile: Kubernetes control planes on private clouds tend to become “pets” (and not in a cute way). Multicluster on public clouds, meanwhile, hides some of the complexity issues (at the cost of flexibility) but presents challenges like cluster proliferation, hard-to-predict costs, and lock-in.

What Are Hosted Control Planes (HCPs)?

Hosted Control Planes (HCPs) route around some (not all) of these challenges while bringing some new challenges. An HCP is a set of Kubernetes manager node components running in pods on a host Kubernetes cluster. HCPs are less like “pets” and more like “cattle.” Like other Kubernetes workloads, they’re defined, operated, and updated in code (YAML manifests), so they are repeatable, version-controllable, and easy to standardize. But worker nodes, as always, need to live somewhere and be networked to the control planes, and there are several challenges here. They gain basic resilience from Kubernetes itself: if Continue reading