Wireless ISPs, also known as WISPs, mostly use unlicensed frequency spectrum. Frequency spectrum is the most critical asset for mobile and wireless networks; it is managed by governments, which generally sell it at auction for hundreds of millions of dollars. And […]
Network virtualization has come a long way. NSX has played a key role in redefining and modernizing networking in the datacenter. Providing an optimal routing path for traffic has been one of the topmost priorities of network architects. Thanks to NSX distributed routing, routing between different subnets on an ESXi hypervisor can be done in the kernel, and traffic never has to leave the hypervisor. With NSX-T, we take a step further and extend this network functionality to a multi-hypervisor and multi-cloud environment. NSX-T is a platform that provides network and security virtualization for a plethora of compute nodes such as ESXi, KVM, bare metal, public clouds, and containers.
This blog series will introduce NSX-T Routing and focus primarily on Distributed Routing. I will explain Distributed Routing in detail with a packet walk between VMs on the same and on different hypervisors, connectivity to the physical infrastructure, and multi-tenant routing. Let's start with a quick reference to the NSX-T architecture.
NSX-T Architecture
NSX-T has a built-in separation of the management plane (NSX-T Manager), control plane (Controllers), and data plane (hypervisors, containers, etc.). I highly recommend going through the NSX-T whitepaper for detailed information on the architecture, to understand the components and functionality of each plane.
A couple of interesting points that I want to highlight about the architecture:
Before we dive deep into routing, let me define a few key terms.
A Logical Switch is a broadcast domain that can span multiple compute hypervisors. VMs in the same subnet connect to the same logical switch.
A Logical Router provides north-south and east-west routing between different subnets and has two components: a distributed component that runs as a kernel module in the hypervisor, and a centralized component that takes care of centralized functions such as NAT, DHCP, and load balancing and provides connectivity to the physical infrastructure.
Types of interfaces on a Logical Router
Edge nodes are appliances with a pool of capacity to run the centralized services, and they act as an on/off ramp to the physical infrastructure. You can think of an Edge node as an empty container that hosts one or more Logical Routers to provide centralized services and connectivity to physical routers. An Edge node is a transport node, just like a compute node, and also has a TEP IP to terminate overlay tunnels.
Edge nodes are available in two form factors: bare metal or VM (leveraging Intel's DPDK technology).
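For readers who prefer to drive these objects from the API rather than the NSX Manager UI, here is a minimal sketch of how a logical switch and a Tier-1 logical router might be created with Python and the requests library. The endpoint paths and payload fields are assumptions based on the NSX-T 2.x REST API, and the manager address, credentials, switch name, and transport-zone ID are placeholders; consult the NSX-T API guide for your version before using anything like this.

```python
# Minimal sketch: create a logical switch and a Tier-1 logical router via the
# NSX-T Manager REST API. Endpoint paths and payload fields are assumptions
# based on the NSX-T 2.x API; adjust to match your version's API guide.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # hypothetical address
session = requests.Session()
session.auth = ("admin", "password")              # placeholder credentials
session.verify = False                            # lab only; validate certs in production

def create_logical_switch(name, transport_zone_id):
    """Create an overlay logical switch (a broadcast domain) in a transport zone."""
    body = {
        "display_name": name,
        "transport_zone_id": transport_zone_id,
        "admin_state": "UP",
        "replication_mode": "MTEP",
    }
    resp = session.post(f"{NSX_MANAGER}/api/v1/logical-switches", json=body)
    resp.raise_for_status()
    return resp.json()["id"]

def create_tier1_router(name):
    """Create a Tier-1 logical router; its distributed component is instantiated
    as a kernel module on every transport node."""
    body = {"display_name": name, "router_type": "TIER1"}
    resp = session.post(f"{NSX_MANAGER}/api/v1/logical-routers", json=body)
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    tz_id = "<overlay-transport-zone-uuid>"       # placeholder
    web_ls = create_logical_switch("Web-LS", tz_id)   # switch name is illustrative
    t1 = create_tier1_router("Tenant 1 Router")
    print(web_ls, t1)
```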
Moving on, let’s also get familiarized with the topology that I will use throughout this blog series.
I have two hypervisors in the above topology, ESXi and KVM. Both hypervisors have been prepared for NSX and have been assigned a TEP (Tunnel End Point) IP: 192.168.140.151 for the ESXi host and 192.168.150.152 for the KVM host. The hosts have L3 connectivity between them via the transport network. I have created three logical switches via NSX Manager and connected a VM to each of them. I have also created a Logical Router named Tenant 1 Router, which is connected to all the logical switches and acts as the gateway for each subnet.
Before we look at the routing table, packet walk, etc., let's see how the configuration looks in NSX Manager. Here is the switching configuration, showing the three logical switches.
Following is the routing configuration showing the Tenant 1 Logical Router.
Once configured via NSX Manager, the logical switches and routers are pushed to both hosts, ESXi and KVM. Let's validate that on both hosts. Following is the output from ESXi showing the logical switches and router.
Following is the output from KVM host showing the Logical switches and router.
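The same objects can also be cross-checked from the management plane. As a small, self-contained sketch (with the same caveat as before that the exact endpoint paths are assumptions from the NSX-T 2.x API), listing the realized switches and routers might look like this:

```python
# Sketch: list logical switches and routers from the NSX-T Manager REST API as a
# management-plane cross-check of what was pushed to the hosts. Endpoint paths
# are assumptions; verify against the NSX-T API guide for your version.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # hypothetical address

def list_objects(path):
    resp = requests.get(f"{NSX_MANAGER}{path}", auth=("admin", "password"), verify=False)
    resp.raise_for_status()
    return [obj["display_name"] for obj in resp.json().get("results", [])]

print(list_objects("/api/v1/logical-switches"))   # the three logical switches in the topology
print(list_objects("/api/v1/logical-routers"))    # should include "Tenant 1 Router"
```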
NSX Controller MAC learning and advertisement
Before we look at the packet walk, it is important to understand how remote MAC addresses are learnt by the compute hosts. This is done via the NSX Controllers. As soon as a VM comes up and connects to a logical switch, the local TEP registers the VM's MAC address with the NSX Controller. The following output from the NSX Controller shows that the MAC addresses of Web VM1, App VM1, and DB VM1 have been reported by their respective TEPs. The NSX Controller then publishes this MAC/TEP association to the compute hosts, depending on the type of host.
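To make that control-plane behavior concrete, here is a small, purely illustrative Python model of the learning loop just described: each transport node reports the MACs of its locally attached VMs together with its TEP IP, and the controller publishes the aggregated MAC-to-TEP table back out. This is a toy simulation of the concept, not NSX-T code; the TEP IPs come from the lab topology above, while the MAC addresses are made up.

```python
# Toy model of NSX Controller MAC/TEP learning: transport nodes report local VM
# MACs with their TEP IP; the controller aggregates and publishes the table to
# all transport nodes. Illustrative only -- not NSX-T code.
from collections import defaultdict

class ToyController:
    def __init__(self):
        self.mac_table = defaultdict(dict)   # logical switch -> {vm_mac: tep_ip}
        self.subscribers = []                # transport nodes receiving updates

    def register(self, node):
        self.subscribers.append(node)

    def report_mac(self, logical_switch, vm_mac, tep_ip):
        """Called by a transport node when a VM attaches to a logical switch."""
        self.mac_table[logical_switch][vm_mac] = tep_ip
        self.publish(logical_switch)

    def publish(self, logical_switch):
        """Push the updated MAC/TEP associations to every transport node."""
        for node in self.subscribers:
            node.remote_macs[logical_switch] = dict(self.mac_table[logical_switch])

class ToyTransportNode:
    def __init__(self, name, tep_ip, controller):
        self.name, self.tep_ip = name, tep_ip
        self.remote_macs = defaultdict(dict)
        self.controller = controller
        controller.register(self)

    def vm_attached(self, logical_switch, vm_mac):
        self.controller.report_mac(logical_switch, vm_mac, self.tep_ip)

controller = ToyController()
esxi = ToyTransportNode("esxi", "192.168.140.151", controller)
kvm = ToyTransportNode("kvm", "192.168.150.152", controller)

esxi.vm_attached("Web-LS", "00:50:56:aa:aa:01")   # Web VM1 (MAC is illustrative)
esxi.vm_attached("App-LS", "00:50:56:aa:aa:02")   # App VM1
kvm.vm_attached("DB-LS", "00:50:56:aa:aa:03")     # DB VM1

print(kvm.remote_macs["Web-LS"])  # {'00:50:56:aa:aa:01': '192.168.140.151'}
```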
Now, we will look at the communication between VMs on the same hypervisor.
Distributed Routing for VMs hosted on the same Hypervisor
We have Web VM1 and App VM1 hosted on the same ESXi hypervisor. Since we are discussing communication between VMs on the same host, I am showing just the relevant topology below.
Following is how traffic would go from Web VM1 to App VM1.
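To make the in-kernel routing decision concrete, here is an illustrative Python sketch of the lookup the Tenant 1 distributed router performs for this flow. The Web and DB subnets come from the topology (172.16.10.0/24 and 172.16.30.0/24); the App subnet 172.16.20.0/24, App VM1's address, and the interface names are assumptions for illustration, and this is pseudologic, not NSX-T source code.

```python
# Illustrative sketch of the distributed router's in-kernel decision when
# Web VM1 sends traffic to App VM1 on the same ESXi host. The App subnet and
# App VM1's IP are assumed for illustration; interface names are made up.
import ipaddress

# Connected routes of the Tenant 1 distributed router (DR), present in the
# kernel of every transport node.
DR_ROUTES = {
    ipaddress.ip_network("172.16.10.0/24"): "downlink-Web-LS",
    ipaddress.ip_network("172.16.20.0/24"): "downlink-App-LS",   # assumed subnet
    ipaddress.ip_network("172.16.30.0/24"): "downlink-DB-LS",
}

def route_lookup(dst_ip):
    """Longest-prefix match against the DR's routing table."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in DR_ROUTES if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return DR_ROUTES[best]

# Web VM1 (172.16.10.11) only ever ARPs for its default gateway, which is the
# DR's downlink on Web-LS. The DR routes the packet onto the App-LS downlink
# and, because App VM1's port is on the same host, the frame is delivered
# locally -- it never leaves the hypervisor.
print(route_lookup("172.16.20.11"))   # -> downlink-App-LS (App VM1 IP assumed)
```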
Please note that the packet didn't have to leave the hypervisor to get routed; this routing happened in the kernel. Now that we understand the communication between two VMs (in different subnets) on the same hypervisor, let's take a look at the packet walk from Web VM1 (172.16.10.11) on ESXi to DB VM1 (172.16.30.11) hosted on KVM.
Distributed Routing for VMs hosted on different Hypervisors (ESXi & KVM)
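Before the traceflow, here is an illustrative sketch of what changes in the cross-hypervisor case: the packet is still routed in the ESXi kernel by the distributed router, but the routed frame is then Geneve-encapsulated from the ESXi TEP (192.168.140.151) to the KVM TEP (192.168.150.152), using the MAC/TEP table published by the controller. The VNI and MAC values below are made up for illustration; this is a conceptual model, not NSX-T code.

```python
# Illustrative sketch of the cross-host path from Web VM1 (172.16.10.11, ESXi)
# to DB VM1 (172.16.30.11, KVM). Routing happens on the source host's DR; the
# routed frame is then Geneve-encapsulated between TEPs. VNI and MACs are
# made up for illustration.

MAC_TEP_TABLE = {            # published by the controller (see earlier sketch)
    "00:50:56:aa:aa:03": "192.168.150.152",   # DB VM1's MAC -> KVM host's TEP
}
LOCAL_TEP = "192.168.140.151"                 # ESXi host's TEP
DB_LS_VNI = 69633                             # assumed VNI of the DB logical switch

def forward_routed_frame(dst_vm_mac, inner_frame):
    """After the DR routes the packet onto DB-LS, decide local vs. overlay delivery."""
    remote_tep = MAC_TEP_TABLE.get(dst_vm_mac)
    if remote_tep is None:
        return {"action": "deliver-local", "frame": inner_frame}
    # Destination VM lives behind another TEP: wrap the routed frame in Geneve.
    return {
        "action": "geneve-encap",
        "outer_src_ip": LOCAL_TEP,       # 192.168.140.151
        "outer_dst_ip": remote_tep,      # 192.168.150.152
        "vni": DB_LS_VNI,
        "payload": inner_frame,
    }

pkt = {"src_ip": "172.16.10.11", "dst_ip": "172.16.30.11"}
print(forward_routed_frame("00:50:56:aa:aa:03", pkt))
# On the KVM host the tunnel terminates at its TEP, the Geneve header is
# stripped, and the frame is delivered to DB VM1; no further routing is needed
# there because the packet was already routed at the source host.
```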
A quick traceflow validates the above packet walk.
This concludes the routing components part of this blog. In the next blog of this series, I will discuss multi-tenant routing and connectivity to physical infrastructure.
A look at the wireless technologies used for location-based services.
With the internet of things, there can be such a thing as too much data.
Plenty of new stuff was added to the Ansible for Networking Engineers online course and webinar since the last update.
Fun things first: I needed adjustable check mode behavior and change tracking in some playbooks, and documented these features in two new videos (online course and webinar).
The latest in all the networking buzz these days is Intent-Based Networking (IBN). There are varying definitions of what IBN is and is not. Does IBN mean you need to deploy networking solely from business policy? Does it mean you must stream telemetry from every network device in real time? Is it a combination of both? Is it automation?
This article isn't meant to define IBN; rather, it's meant to provide a broader, yet more practical, perspective on automation and intent.
One could argue that intent-based systems have been around for years, especially when managing servers. Why not look at DevOps tools like CFEngine, Chef, and Puppet (three of the first)? They focused on desired state: their goal was to get managed systems into a technically defined desired state.
If something is in its desired state, doesn’t that mean it’s in its intended state?
These tools did this by eliminating the need to know the specific Linux server commands used to configure the device: you simply defined your desired state with a declarative approach to systems management, e.g., ensure user Bob is configured on the system, without worrying about the command that adds Bob. One major difference was that those tools used …
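To make the "ensure Bob is configured" idea concrete, here is a minimal sketch of what a desired-state resource looks like underneath: declare the end state, check the actual state, and only act (and report a change) when the two differ. This is a generic illustration of the idempotent pattern such tools use on a Linux host, not code from CFEngine, Chef, or Puppet.

```python
# Minimal illustration of declarative, idempotent "desired state" management:
# declare that user "bob" must exist; converge only if the actual state differs.
# Generic pattern only -- not CFEngine/Chef/Puppet code. Assumes a Linux host.
import pwd
import subprocess

def user_exists(name: str) -> bool:
    """Check the actual state: is the user present on this system?"""
    try:
        pwd.getpwnam(name)
        return True
    except KeyError:
        return False

def ensure_user(name: str, dry_run: bool = True) -> dict:
    """Converge toward the desired state; no-op (and no 'change') if already there."""
    if user_exists(name):
        return {"user": name, "changed": False}
    if not dry_run:
        # The actual command is an implementation detail the operator never writes.
        subprocess.run(["useradd", name], check=True)
    return {"user": name, "changed": True}

print(ensure_user("bob"))   # dry run: reports whether a change *would* be made
```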