Archive

Category Archives for "The Network Times"

ACI Fabric Access Policies Part 4: Leaf Interface Profile, Leaf Switch Policy Group, and Leaf Switch Profile


Leaf Interface Profile

 

This section explains how to create an Interface Profile object, whose basic purpose is to group a set of physical interfaces under one object. Phase 6 in Figure 1-39 illustrates the APIC Management Information Model (MIM) from the Interface Profile perspective. We are adding an object L101_L102_IPR under the class infraAccPortP (Leaf Interface Profile). The name of the object includes the identifiers of the Leaf switches (Leaf-101 and Leaf-102) in which I am going to use this Interface Profile. This object has a child object Eth1_1-5 (class infraHPortS) that defines the interface block and that has a relationship with the object Port_Std_ESXi-Host_IPG. By doing this we state that Ethernet interfaces 1/1-5 are LLDP-enabled 10 Gbps ports that can use VLAN identifiers from 300-399. Note that in this phase we haven't yet specified in which switches we are using this Interface Profile. A sketch of the corresponding REST payload follows the RN rules below.

 The RN rules used with related objects:

Objects created under the class infraAccPortP (Leaf Interface Profile): Prefix1-{name}, where Prefix1 is “accportprof”. This gives us the RN “accportprof-L101_L102_IPR”.

Objects created under the class infraHPortS (Access Port Selector): Prefix1-{name}-Prefix2-{type}, where Prefix1 is “hports” and Prefix2 is “typ”. This gives us the RN “hports-Eth1_1-5-typ-range”.

Objects created under the class infraPortBlk (Access Port Block): Prefix1-{name}, where Prefix1 is “portblk” and where the name property is autogenerated. This gives us the RN “portblk-Block2”.
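For readers who want to try this out, below is a minimal Python sketch of what the corresponding REST call could look like. It is based on my understanding of the APIC REST API: the payload creates the Interface Profile with its Access Port Selector and Access Port Block, plus the relationship (class infraRsAccBaseGrp) to the Interface Policy Group. The APIC address is a placeholder and the authentication step (aaaLogin) is omitted.

import requests

APIC = "https://apic.example.com"  # placeholder address
session = requests.Session()       # assumed to be authenticated via aaaLogin

# infraAccPortP -> infraHPortS -> infraPortBlk + infraRsAccBaseGrp
payload = {
    "infraAccPortP": {
        "attributes": {"name": "L101_L102_IPR"},
        "children": [{
            "infraHPortS": {
                "attributes": {"name": "Eth1_1-5", "type": "range"},
                "children": [
                    # Access Port Block: Ethernet interfaces 1/1-5
                    {"infraPortBlk": {"attributes": {
                        "name": "Block2",
                        "fromPort": "1", "toPort": "5"}}},
                    # Relationship to the Interface Policy Group
                    {"infraRsAccBaseGrp": {"attributes": {
                        "tDn": "uni/infra/funcprof/accportgrp-Port_Std_ESXi-Host_IPG"}}},
                ],
            }
        }],
    }
}

# Leaf Interface Profiles live under uni/infra
resp = session.post(f"{APIC}/api/mo/uni/infra.json", json=payload)
print(resp.status_code)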



Figure 1-39: APIC MIM Reference: Interface Profile.

Continue reading

ACI Fabric Access Policies Part 3: AAEP, Interface Policy and Interface Policy Group

 

Attachable Access Entity Profile - AAEP


This section explains how to create an Attachable Access Entity Profile (AAEP) object, which is used for attaching a Domain to an Interface Policy Group. Phase 3 in Figure 1-20 illustrates the APIC Management Information Model (MIM) from the AAEP perspective. Class AttEntityP is a child class of infra, and they both belong to the package infra. I have already added the object attentp-AEP_PHY into the figure. The format of the RN for this object is Prefix1-{name}, where Prefix1 is attentp. This gives us the RN attentp-AEP_PHY.
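As with the other phases, the AAEP can also be created programmatically. The sketch below reuses the session pattern from the Part 4 example above and, as an assumption on my part, includes the relationship (class infraRsDomP) to the Physical Domain created in Part 2.

# Posted to {APIC}/api/mo/uni/infra.json with the session shown earlier
payload = {
    "infraAttEntityP": {
        "attributes": {"name": "AEP_PHY"},  # RN becomes attentp-AEP_PHY
        "children": [{
            # Relationship to the Physical Domain from Part 2
            "infraRsDomP": {"attributes": {
                "tDn": "uni/phys-Standalone_ESXi_PHY"}}
        }],
    }
}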



Figure 1-20: APIC MIM Reference: Attachable Access Entity Profile.

Continue reading

ACI Fabric Access Policies Part 2: Physical Domain

 Physical Domain

This section explains how to create a Physical Domain (Fabric Access Policy). It starts by mapping the REST call POST method and JSON payload into the Fabric Access Policy model. Then it explains how the same configuration can be done by using the APIC GUI. Phase 2 in Figure 1-15 illustrates the APIC Management Information Model (MIM) from the Physical Domain perspective. I have already added the object phys-Standalone_ESXi_PHY into the figure. The format of the RN for this object is Prefix1-{name}, where Prefix1 is “phys”. This gives us the RN “phys-Standalone_ESXi_PHY”.
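Below is a hedged sketch of what the JSON payload for this phase could look like, posted with the same kind of session as in the Part 4 sketch above. The relationship to a VLAN Pool (class infraRsVlanNs) is included because the Physical Domain is normally tied to the pool created in Part 1; the pool name ESXi_VLAN_Pool is a hypothetical stand-in, as the text does not name it.

# Posted to {APIC}/api/mo/uni.json: Physical Domains live directly under uni
payload = {
    "physDomP": {
        "attributes": {"name": "Standalone_ESXi_PHY"},  # RN: phys-Standalone_ESXi_PHY
        "children": [{
            # Relationship to the VLAN Pool from Part 1 (hypothetical name)
            "infraRsVlanNs": {"attributes": {
                "tDn": "uni/infra/vlanns-[ESXi_VLAN_Pool]-static"}}
        }],
    }
}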



Figure 1-15: Fabric Access Policy Modeling: Physical Domain (click image to enlarge).


Continue reading

ACI Fabric Access Policies Part 1: VLAN Pool

 

Introduction

 

Everything in ACI is managed as an Object. Each object belongs to a certain Class. As an example, when we create a VLAN Pool, we create an object that belongs to the class VlanInstP. Classes, in turn, are organized in Packages: the class VlanInstP belongs to the package fvns (fv = fabric virtualization, ns = namespace). Figure 1-1 illustrates the classes that we are using in this chapter when we create Fabric Access Policies. Lines with an arrow represent a Parent-Child structure and dotted lines represent a relationship (Rs) between classes. We will get back to Rs in upcoming sections.
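To make the Class/Package idea concrete, here is a minimal sketch of a VLAN Pool payload for the APIC REST API, posted with the same kind of session as in the Part 4 sketch above. The pool name ESXi_VLAN_Pool is a hypothetical example, and the VLAN range 300-399 matches the range used for the ESXi host ports in Part 4.

# An object of class fvnsVlanInstP (package fvns), with a child Encap Block.
# Posted to {APIC}/api/mo/uni/infra.json.
payload = {
    "fvnsVlanInstP": {
        # DN becomes uni/infra/vlanns-[ESXi_VLAN_Pool]-static
        "attributes": {"name": "ESXi_VLAN_Pool", "allocMode": "static"},
        "children": [{
            "fvnsEncapBlk": {"attributes": {
                "from": "vlan-300", "to": "vlan-399"}}
        }],
    }
}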



Figure 1-1: ACI Fabric Access Policies.

Continue reading

VXLAN Fabric with BGP EVPN Control-Plane: Design Considerations – Book Description and ToC





About this book

 

The intent of this book is to explain various design models for the Overlay Network and Underlay Network used in a VXLAN Fabric with BGP EVPN Control-Plane. The first two chapters focus on the Underlay Network solution. OSPF is introduced first. Among other things, the book explains how OSPF flooding can be minimized with area design. After OSPF there is a chapter about BGP in the Underlay Network. Both OSPF and BGP are covered in depth, and topics such as convergence are discussed. After the Underlay Network part, the book focuses on BGP design. It explains the following models: (a) BGP Multi-AS with OSPF Underlay, where two design models are discussed – Shared Spine ASN and Unique Spine ASN, (b) BGP-Only Multi-ASN, where both direct and loopback overlay BGP peering models are explained, (c) Single-ASN with OSPF Underlay, (d) Hybrid-ASN with OSPF Underlay – Pod-specific shared ASNs connected via a Super-Spine layer using eBGP peering, and (e) Dual-ASN, where leafs share the same ASN and spines share their own ASN. Each of the design model chapters includes a “Complexity Map” that should help readers understand the complexity of each solution. This book also explains BGP ECMP and related to

Continue reading

BGP EVPN Underlay Network with BGP (Multi-AS)


Introduction


The focus of this chapter is to explain the BGP Multi-AS Underlay Network design in a BGP EVPN/VXLAN Fabric. It starts by explaining the BGP configuration, because this way the operation can be illustrated using show and debug commands as well as packet captures. The next section discusses the BGP adjacency process and its related states (Idle, Connect/Active, OpenSent, OpenConfirm, and Established). After that, this chapter explains BGP routing, discussing how connected routes are sent from the RIB to the Loc-RIB and from there to the Adj-RIB-Out (Pre/Post). This section also shows how NLRIs received within a BGP Update eventually end up in the RIB of the receiving BGP speaker. In addition, this chapter briefly introduces the MRAI timer as well as a non-disruptive device maintenance solution. The last section tries to answer which protocol best fits the Underlay Network of a BGP EVPN fabric.
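As a quick reference, the sketch below lists the session states in the order a BGP adjacency normally comes up. It is a simplified illustration, not the full Finite State Machine defined in RFC 4271.

# The order in which a BGP session normally comes up. This is a
# simplified illustration; RFC 4271 defines the full FSM with many
# more events and transitions.
BGP_SESSION_HAPPY_PATH = [
    "Idle",         # resources initialized, no connection attempted yet
    "Connect",      # waiting for the TCP three-way handshake to complete
    "OpenSent",     # OPEN message sent, waiting for the peer's OPEN
    "OpenConfirm",  # peer's OPEN accepted, waiting for a KEEPALIVE
    "Established",  # KEEPALIVEs exchanged; UPDATEs (NLRIs) can flow
]

# "Active" (not listed above) is entered when the TCP connection attempt
# fails and the speaker instead listens for an inbound connection.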



Infrastructure AS Numbering and IP Addressing Scheme


The AS numbering scheme used in this chapter is the same as the one used in Chapter 1, but instead of using unnumbered interfaces, each inter-switch interface now has an IP address assigned to it. It is also possible to use unnumbered interfaces with BGP by relying on IPv6 Link-Local addressing [RFC 5549]. However, this solution is not supported by all vendors.


Figure 2-1: IP addressing Scheme.
Continue reading

BGP EVPN Underlay Network with OSPF

Introduction


The foundation of a modern Datacenter fabric is the Underlay Network, and it is crucial to understand the operation of the Control-Plane protocol used in it. The focus of this chapter is OSPF. The first section starts by introducing the network topology and AS numbering scheme used throughout this book. The second section explains how OSPF speakers connected to the same segment become fully adjacent. The third section discusses how OSPF speakers exchange Link State information and build a Link-State Database (LSDB), which is used as the information source for calculating the Shortest Path Tree (SPT) towards each destination using the Dijkstra algorithm. The focus of the fourth section is the OSPF LSA flooding process. It starts by explaining how a local OSPF speaker sends Link State Advertisements wrapped inside a Link-State Update message to its adjacent routers, and how the receiving OSPF speakers (a) install the information into the LSDB, (b) acknowledge the packet, and (c) flood it out of their OSPF interfaces. The fifth section discusses LSA and SPF timers. At the end of this chapter, there are OSPF-related configurations from every device.
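To make the SPT calculation concrete, here is a minimal Python sketch of the Dijkstra algorithm run over an adjacency map that stands in for the LSDB. The topology and link costs below are made up for illustration.

import heapq

def spt(lsdb, root):
    """Return {node: cost} for the shortest-path tree rooted at root."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, link_cost in lsdb[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

# Illustrative leaf-spine topology with equal link costs
lsdb = {
    "Leaf-101": {"Spine-11": 40, "Spine-12": 40},
    "Leaf-102": {"Spine-11": 40, "Spine-12": 40},
    "Spine-11": {"Leaf-101": 40, "Leaf-102": 40},
    "Spine-12": {"Leaf-101": 40, "Leaf-102": 40},
}
print(spt(lsdb, "Leaf-101"))  # e.g. {'Leaf-101': 0, 'Spine-11': 40, ...}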

Infrastructure AS Numbering and IP Addressing Scheme


Figure 1-1 illustrates the AS numbering and IP addressing scheme used throughout this book. All Leaf switches have a dedicated private BGP AS number, while spine switches in the same cluster share the same AS number. Inter-switch links use IP unnumbered addressing (borrowing interface Loopback 0), which is also used as the OSPF Router-ID. Loopback 0 is not advertised by any device. The OSPF network type for inter-switch links is point-to-point, so there is no DR/BDR election process. Leaf switches also have an interface Loopback 30 that is used as the VTEP (VXLAN Tunnel End Point) address. Loopback 30 IP addresses are advertised by the Leaf switches. All Loopback interfaces are in OSPF passive-interface mode. At this stage, all switches belong to OSPF Area 0.0.0.0.


Figure 1-1: AS Numbering and IP Addressing Scheme.
Continue reading

Comparing Internet Connection used in AWS and LISP Based Networks


Forewords

This post starts by discussing the Internet connection from the AWS VPC Control-Plane operation perspective. The public AWS documentation only describes the basic components, such as an Internet Gateway (IGW) and the subnet-specific Implicit Routers (IMRs). However, the public AWS documentation does not describe the Control-Plane operation related to distributing the default route from IGWs to IMRs. The AWS VPC Control-Plane part of this post is based on my assumptions, so be critical of what you read. The second part of this post briefly explains the Control-Plane operation of the Internet connection used in a LISP-based network. By comparing the AWS VPC to a LISP-based network I just want to point out that even though some might think that cloud-based networking is much simpler than traditional on-premises networking, it is not. People tend to trust the network solutions used in clouds (AWS, Azure, etc.), and there is no debate about (a) what hardware is used, (b) how the redundancy works, (c) whether the solutions are standards-based, and so on. Now it is more like: I do not care how it works as long as it works. Whether that is good or bad, I do not know.
Continue reading

Intra-Subnet Communication: AWS VPC versus LISP Based Campus Fabric


Forewords


This article introduces the principles of the Amazon Web Services Virtual Private Cloud (AWS VPC) Control-Plane operation and Data-Plane encapsulation. It also explains how the same kind of forwarding model can be achieved using standard protocols. Amazon has not published the details of its VPC networking solution, so this document relies on publicly available information and the author's own studies. The motivation for writing this document was to point out that no matter how simple and easy to manage Cloud Networking looks and feels, these networks are still as complex as any other large-scale networks.

Example Environment


Figure 1-1 illustrates an example AWS VPC environment running an imaginary application on two Elastic Compute Cloud (EC2) instances, EC2-A and EC2-B. The instance EC2-A will be launched on the physical server Host-A, while the instance EC2-B will later be launched on the physical server Host-B. The VPC vpc-1a2b3c4d is created in the Stockholm (eu-north-1) Region in Availability Zone (AZ) eu-north-1c. The CIDR block 172.31.0.0/20 can be used in AZ eu-north-1c, and the subnet for instances is 172.31.10.0/24. Elastic Network Interface 1 (ENI1) with IP address 172.31.10.10 will be attached to the instance EC2-A, and ENI2 with IP address 172.31.10.20 will be attached to the instance EC2-B. For simplicity, the same Security Group (SG) “sg-nwktimes”, allowing all data traffic between EC2-A and EC2-B, is attached to both instances.

Inside both physical servers there is a software router: Router-1 in Host-A and Router-2 in Host-B. The servers use offload NICs for the connection to the AZ Underlay Network, and data traffic from the instances is sent out of the server straight to the offload NIC, bypassing the hypervisor. The AZ Backbone includes three routers: Router-3, Router-4, and Router-5. There is also a Mapping Service that represents the centralized Control Plane. It holds an Instance-to-Location Mapping Database that has information about every EC2 instance running in a given VPC. The routers, servers, and Mapping Service use IPv6 addressing.
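The toy sketch below illustrates the Instance-to-Location Mapping Database idea described above. The structure, field names, and IPv6 addresses are all my own illustrative assumptions, since the real AWS implementation is not public.

# A toy Instance-to-Location Mapping Database: (VPC, private IP) -> the
# IPv6 underlay address of the physical host. All values are made up.
MAPPING_DB = {
    ("vpc-1a2b3c4d", "172.31.10.10"): "2001:db8::a",  # ENI1 on Host-A
    ("vpc-1a2b3c4d", "172.31.10.20"): "2001:db8::b",  # ENI2 on Host-B
}

def resolve(vpc, dst_ip):
    """The lookup a software router (Router-1/Router-2) would need before
    encapsulating instance traffic across the AZ Underlay Network."""
    return MAPPING_DB.get((vpc, dst_ip))

print(resolve("vpc-1a2b3c4d", "172.31.10.20"))  # 2001:db8::b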

Figure 1-1: Overall example topology and IP addressing scheme.

Similarities Between AWS VPC and Cisco SDA – Intra-Subnet Communication


Update March 6, 2020: this post will soon be made obsolete by a new version.


Forewords


This article explains the similarities between a LISP/VXLAN-based Campus Fabric and an AWS Virtual Private Cloud (VPC) from the Intra-Subnet Control-Plane and Data-Plane operation perspective. The AWS VPC solution details are not publicly available, and the information included in this article is based on the author's own study of the publicly available AWS VPC documentation.

There are two main reasons for writing this document: 

First, Cisco SDA is an on-prem LAN model while AWS VPC is an off-prem DC solution. I wanted to point out that these two solutions, even though used for very different purposes, use the same kind of Control-Plane operation and Data-Plane encapsulation, and both are managed via a GUI. This is my answer, of sorts, to the ongoing discussion about whether there are DC networks, Campus networks, and so on, or whether there are just networks.

Second, my own curiosity to understand the operation of AWS VPC.

I usually start by introducing the example environment, then explaining the configuration, and then moving to Control-Plane operation and finally to Data-Plane operation. However, this time I take a different approach: this article first introduces the example environment, but then discusses the Data-Plane operation before the Control-Plane operation. This way it is easier to understand what information is needed and how that information is gathered.

Continue reading