
Author Archives: Toni Pasanen
To achieve active/active redundancy for a Network Virtual Appliance (NVA) in a Hub-and-Spoke VNet design, we can utilize an Internal Load Balancer (ILB) to enable Spoke-to-Spoke traffic.
Figure 5-1 illustrates our example topology, which consists of vnet-hub and spoke VNets. The ILB is associated with the subnet 10.0.1.0/24, where we allocate a Frontend IP address (FIP) using dynamic or static allocation. Unlike with a public load balancer's inbound rules, we can choose the High-Availability (HA) ports option, which load balances all TCP and UDP flows. The backend pool and health probe configurations remain the same as those used with a Public Load Balancer (PLB).
From the NVA perspective, the configuration is straightforward. We enable IP forwarding in the Linux kernel and on the virtual NIC, but we do not configure pre-routing (destination NAT). We can use Post-routing policies (source NAT) if we want to hide real IP addresses or if symmetric traffic paths are required. To route egress traffic from the spoke subnets to the NVAs via the ILB, we create subnet-specific route tables in the spoke VNets. The "rt-spoke1" route table has the specific entry "10.2.0.0/24 > 10.0.1.6 (ILB)" rather than a default route because vm-prod-1 has a public IP address used for external access. If we set a default route instead, as we have done for the subnet 10.2.0.0/24 in "vnet-spoke2", the external connection would fail.
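As a rough sketch of those two pieces, the Azure CLI commands could look like the following. The resource and rule names (rg-hub, ilb-nva, rt-spoke1, snet-prod, and so on) are assumptions for illustration, and the load balancer, frontend IP, and backend pool are assumed to exist already; only the IP addresses and prefixes come from the example topology.

# HA-ports rule: protocol All with frontend/backend port 0 load balances every TCP and UDP flow.
az network lb rule create \
  --resource-group rg-hub --lb-name ilb-nva --name rule-ha-ports \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name fip-nva --backend-pool-name bepool-nva

# Spoke1 route table: a specific route to vnet-spoke2's subnet via the ILB frontend instead of
# a default route, so that vm-prod-1's public-IP traffic is not pulled through the NVAs.
az network route-table create --resource-group rg-spoke1 --name rt-spoke1
az network route-table route create \
  --resource-group rg-spoke1 --route-table-name rt-spoke1 --name to-spoke2 \
  --address-prefix 10.2.0.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.6
az network vnet subnet update \
  --resource-group rg-spoke1 --vnet-name vnet-spoke1 --name snet-prod --route-table rt-spoke1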
This chapter is the first part of a series on Azure's highly available Network Virtual Appliance (NVA) solutions. It explains how we can use load balancers to achieve active/active NVA redundancy for connections initiated from the Internet.
In Figure 4-1, Virtual Machine (VM) vm-prod-1 uses the load balancer's Frontend IP address 20.240.9.27 to publish an application (SSH connection) to the Internet. Vm-prod-1 is located behind an active/active NVA FW cluster. Vm-prod-1 and NVAs have vNICs attached to the subnet 10.0.2.0/24.
Both NVAs have identical Pre- and Post-routing policies. If the ingress packet's destination IP address is 20.240.9.27 (load balancer's Frontend IP) and the transport layer protocol is TCP, the policy changes the destination IP address to 10.0.2.6 (vm-prod-1). Additionally, before routing the packet through the Ethernet 1 interface, the Post-routing policy replaces the original source IP with the IP address of the egress interface Eth1.
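On a Linux-based NVA, these Pre- and Post-routing policies could be implemented with iptables roughly as follows. This is a minimal sketch that assumes eth1 is the interface facing 10.0.2.0/24, as in the figure; a real firewall policy would of course be more granular.

# Enable routing in the kernel (IP forwarding must also be enabled on the Azure vNIC).
sysctl -w net.ipv4.ip_forward=1

# Pre-routing: TCP packets arriving for the load balancer frontend are DNATed to vm-prod-1.
iptables -t nat -A PREROUTING -d 20.240.9.27 -p tcp -j DNAT --to-destination 10.0.2.6

# Post-routing: before the packet leaves eth1, the source IP is replaced with eth1's address.
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE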
The second vNICs of the NVAs are connected to the subnet 10.0.1.0/24. We have associated these vNICs with the load balancer's backend pool. The Inbound rule binds the Frontend IP address to the Backend pool and defines the load-sharing policies. In our example, the packets of SSH connections from the remote host to the Frontend IP are distributed between NVA1 and NVA2. Moreover, the Inbound rule determines which Health Probe is used to monitor the backend pool members.
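For reference, the Inbound rule and its Health Probe could be created with the Azure CLI along these lines; the resource names are assumptions, while the SSH port and distribution behavior follow the example.

# Health probe used by the inbound rule to monitor NVA1 and NVA2.
az network lb probe create \
  --resource-group rg-fw --lb-name plb-nva --name probe-ssh --protocol Tcp --port 22

# Inbound rule: bind the frontend IP to the backend pool and distribute SSH flows across the NVAs.
az network lb rule create \
  --resource-group rg-fw --lb-name plb-nva --name rule-ssh \
  --protocol Tcp --frontend-port 22 --backend-port 22 \
  --frontend-ip-name fip-public --backend-pool-name bepool-nva --probe-name probe-ssh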
Note! Using a single VNet design eliminates the need to define static routes in the subnet-specific route table and the VM's Linux kernel. This solution is suitable for small-scale implementations. However, the Hub-and-Spoke VNet topology offers simplified network management, enhanced security, scalability, performance, and hybrid connectivity. I will explain how to achieve NVA redundancy in the Hub-and-Spoke VNet topology in upcoming chapters.
In the previous chapter, you learned how to route east-west traffic through the Network Virtual Appliance (NVA) using subnet-specific route tables with User Defined Routes (UDR). This chapter introduces how to route north-south traffic between the Internet and your Azure Virtual Network through the NVA.
Figure 3-1 depicts our VNet setup, which includes DMZ and Web Tier zones. The NVA, vm-nva-fw, is connected to subnet snet-north (10.0.2.0/24) in the DMZ via a vNIC with Direct IP (DIP) 10.0.2.4. We've also assigned a public IP address, 51.12.90.63, to this vNIC. The second vNIC is connected to subnet snet-west (10.0.0.0/24) in the Web Tier, with DIP 10.0.0.5. We have enabled IP Forwarding in both vNICs and Linux kernel. We are using Network Security Groups (NSGs) for filtering north-south traffic.
Our web server, vm-west, has a vNIC with DIP 10.0.0.4 connected to the subnet snet-west in the Web Tier. We have associated a route table with the subnet; its UDR forwards traffic destined to IP 141.192.166.81 (the remote host) to the NVA. To publish the web server to the Internet, we use the NVA's public IP address.
On the NVA, we have configured a Destination NAT rule that rewrites the destination IP address to 10.0.0.4 for packets with source IP address 141.192.166.81 and protocol ICMP. To simulate an HTTP connection, we use ICMP requests from the remote host.
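A minimal sketch of the two configuration points above; the route table and resource group names (rt-snet-west, rg-dmz) are assumptions not named in the text, while the addresses follow the example.

# Web Tier subnet UDR: traffic toward the remote host is forwarded to the NVA's inside interface.
az network route-table route create \
  --resource-group rg-dmz --route-table-name rt-snet-west --name to-remote-host \
  --address-prefix 141.192.166.81/32 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.5

# On vm-nva-fw: enable forwarding and DNAT ICMP from the remote host to the web server.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -s 141.192.166.81 -p icmp -j DNAT --to-destination 10.0.0.4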
Subnets, known as Virtual Local Area Networks (VLANs) in traditional networking, are Layer-2 broadcast domains that enable attached workloads to communicate without crossing a Layer-3 boundary, the subnet gateway. Hosts sharing the same subnet resolve each other's MAC-IP address binding using the Address Resolution Protocol, which relies on Broadcast messages. That is why a subnet is often considered a failure domain. We can spread subnets between physical devices over Layer-2 links using VLAN tagging, defined in the IEEE 802.1Q standard. In addition, tunnel encapsulation solutions that support a tenant/context identifier enable us to extend subnets over Layer-3 infrastructure. Virtual eXtensible LAN (VXLAN), using a VXLAN Network Identifier (VNI), and Network Virtualization using Generic Route Encapsulation (NVGRE), using a Tenant Network ID (TNI), are examples of Network Virtualization Over Layer 3 (NVO) solutions. If you have to spread the subnet over an MPLS-enabled network, you can implement Virtual Private LAN Service (VPLS) or Virtual Private Wire Service (VPWS), among other solutions.
In Azure, the concept of a subnet is different. You can think of it as a logical domain within a Virtual Network (VNet), where attached VMs share the same IP address space and the same routing policies. Broadcast and Multicast traffic is not natively supported in an Azure VNet. However, you can use a cloudSwXtch VM image from swXtch.io to build a Multicast-enabled overlay network within a VNet.
This section demonstrates how routing between subnets within the same Virtual Network (VNet) works by default. Figure 2-1 illustrates our example Azure VNet setup, where we have deployed two subnets. The interface eth0 of vm-west and interface eth1 of vm-nva-fw are attached to subnet snet-west (10.0.0.0/24), while interface eth2 of vm-nva-fw and interface eth0 of vm-east are connected to subnet snet-east (10.0.1.0/24). All three VMs use the VNet default routing policy, which routes Intra-VNet data flows directly between the source and destination endpoints, regardless of which subnets they are connected to. In addition, the Network Security Groups (NSGs) associated with the vNICs share the same default security policies, which allow inbound and outbound Intra-VNet data flows, inbound flows from the Load Balancer, and outbound Internet connections.
Now let’s look at what happens when vm-west (DIP: 10.0.0.4) pings vm-east (DIP: 10.0.1.4), recapping the operation of the VFP. Note that Accelerated Networking (AccelNet) is not enabled on either VM.
Note! This post is under technical review.
VNets and VPN/ExpressRoute connections are associated with the vHub’s Default Route Table, which allows both VNet-to-VNet and VNet-to-Remote Site IP connectivity. This chapter explains how we can isolate vnet-swe3 from vnet-swe1 and vnet-swe2 using VNet-specific vHub Route Tables (RT), while still allowing VNet-to-VPN Site connections. As a first step, we create a Route Table rt-swe12, to which we associate VNets vnet-swe1 and vnet-swe2. Next, we deploy a Route Table rt-swe3 for vnet-swe3. Then we propagate routes from these RTs to the Default RT, but not from rt-swe12 to rt-swe3 or vice versa. Our VPN Gateway is associated with the Default RT, and the route to the remote site subnet 10.11.11.0/24 is installed into the Default RT. To achieve bi-directional IP connectivity, we also propagate routes from the Default RT to rt-swe12 and rt-swe3. As the last step, we verify both Control Plane operation and Data Plane connections.
Figure 12-1: Virtual Network Segmentation.
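A rough Azure CLI outline of the segmentation steps follows; the resource names are assumptions, and the route-table association/propagation flag names of az network vhub connection create are also assumptions that may differ between CLI versions, so verify them with --help before use.

# VNet-specific route tables in the vHub (assumed vWAN resource group and hub names).
az network vhub route-table create --resource-group rg-vwan --vhub-name vhub-swe --name rt-swe12
az network vhub route-table create --resource-group rg-vwan --vhub-name vhub-swe --name rt-swe3

# Associate the vnet-swe3 connection with rt-swe3 and propagate its routes only to the Default RT.
# The flag names below are assumptions; they typically expect full route-table resource IDs.
az network vhub connection create \
  --resource-group rg-vwan --vhub-name vhub-swe --name conn-swe3 --remote-vnet vnet-swe3 \
  --associated-route-table rt-swe3 --propagated-route-tables defaultRouteTable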
This chapter introduces the Azure Virtual WAN (vWAN) service. It offers a single deployment, management, and monitoring pane for connectivity services such as Inter-VNet, Site-to-Site VPN, and ExpressRoute. In this chapter, we focus on S2S VPN and VNet connections. The Site-to-Site VPN solution in vWAN differs from the traditional model, where we create resources as individual components. In this solution, we only deploy a vWAN resource and manage everything else through its management view. Figure 11-1 illustrates our example topology and deployment order. The first step is to implement a vWAN resource. Then we deploy a vHub. It is an Azure-managed VNet to which we assign a CIDR, just like we do with a traditional VNet. We can deploy a vHub as an empty VNet without associating any connections. The vHub deployment process launches a pair of redundant routers, which exchange reachability information with the VNet Gateway router and VGW instances using BGP. We intend to allow Inter-VNet data flows between vnet-swe1 and vnet-swe2, as well as Branch-to-VNet traffic. For Site-to-Site VPN, we deploy a VPN Gateway (VGW) into the vHub. The VGW deployed in the vHub creates two instances, instance0 and instance1, in active/active mode. We don’t deploy a GatewaySubnet for VGW Continue reading
Comment: Here is a part of the introduction section of the eighth chapter of my Azure Networking Fundamentals book. I will also publish other chapters' introduction sections soon so you can see if the book is for you. The book is available at Leanpub and Amazon (links on the right pane).
This chapter introduces the Azure VNet Peering solution. VNet peering creates bidirectional IP connections between peered VNets. VNet peering links can be established within and across Azure regions and between VNets under different Azure subscriptions or tenants. The unencrypted data path over peering links stays within Azure's private infrastructure. Consider a software-level solution (or use a VGW) if your security policy requires data-path encryption. Unlike with a VGW, where bandwidth depends on the SKU, there is no bandwidth limitation in VNet peering. From the VM perspective, VNet peering gives seamless network performance (bandwidth, latency, delay, and jitter) for Inter-VNet and Intra-VNet traffic. Unlike the VGW solution, VNet peering is non-transitive: the routing information learned from one VNet peer is not advertised to another VNet peer. However, we can permit peered VNets (Spokes) to use the local VGW (Hub) and route Spoke-to-Spoke data by using a subnet-specific route table Continue reading
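As a sketch, a hub-and-spoke peering could be built with the Azure CLI as shown below. The VNet and resource group names are assumptions; note that peering must be created in both directions, and --remote-vnet needs the full resource ID when the VNets live in different resource groups, subscriptions, or tenants.

# Hub side: allow gateway transit so spokes can use the hub's VGW.
az network vnet peering create \
  --resource-group rg-hub --vnet-name vnet-hub --name hub-to-spoke1 \
  --remote-vnet vnet-spoke1 --allow-vnet-access --allow-forwarded-traffic --allow-gateway-transit

# Spoke side: use the hub's remote gateway for on-prem routes.
az network vnet peering create \
  --resource-group rg-spoke1 --vnet-name vnet-spoke1 --name spoke1-to-hub \
  --remote-vnet vnet-hub --allow-vnet-access --allow-forwarded-traffic --use-remote-gateways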
Comment: Here is a part of the introduction section of the fifth chapter of my Azure Networking Fundamentals book. I will also publish other chapters' introduction sections soon so you can see if the book is for you. The book is available at Leanpub and Amazon (links on the right pane).
A Hybrid Cloud is a model where we split application-specific workloads across the public and private clouds. This chapter introduces Azure's hybrid cloud solution using Site-to-Site (S2S) Active-Standby VPN connection between Azure and on-prem DC. Azure S2S A/S VPN service includes five Azure resources. The first one, Virtual Network Gateway (VGW), also called VPN Gateway, consists of two VMs, one in active mode and the other in standby mode. These VMs are our VPN connection termination points on the Azure side, which encrypt and decrypt data traffic. The active VM has a public IP address associated with its Internet side. If the active VM fails, the standby VM takes the active role, and the public IP is associated with it. Active and standby VMs are attached to the special subnet called Gateway Subnet. The name of the gateway subnet has to be GatewaySubnet. The Local Gateway (LGW) Continue reading
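The same resources map to the Azure CLI roughly as follows; the names, address prefixes, on-prem peer IP, and shared key are placeholders, and the SKU is just an example.

# The gateway subnet must be named exactly GatewaySubnet.
az network vnet subnet create \
  --resource-group rg-azure --vnet-name vnet-azure --name GatewaySubnet --address-prefixes 10.0.255.0/27

# Public IP for the active VM and the VPN Gateway (VGW) itself.
az network public-ip create --resource-group rg-azure --name pip-vgw --allocation-method Dynamic
az network vnet-gateway create \
  --resource-group rg-azure --name vgw-s2s --vnet vnet-azure --public-ip-address pip-vgw \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

# Local Gateway (LGW) describes the on-prem VPN device and its networks; then the IPsec connection.
az network local-gateway create \
  --resource-group rg-azure --name lgw-onprem --gateway-ip-address 203.0.113.10 \
  --local-address-prefixes 192.168.0.0/24
az network vpn-connection create \
  --resource-group rg-azure --name cn-onprem --vnet-gateway1 vgw-s2s --local-gateway2 lgw-onprem \
  --shared-key 'PlaceholderKey123'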
Comment: Here is a part of the introduction section of the third chapter of my Azure Networking Fundamentals book. I will also publish other chapters' introduction sections soon so you can see if the book is for you. The book is available at Leanpub and Amazon (links on the right pane).
In chapter two, we created a VM vm-Bastion and associated a Public IP address with its attached NIC vm-bastion154. Public IP addresses associated with a VM’s NIC are called Instance Level Public IPs (ILPIP). Then we added a security rule to the existing NSG vm-Bastion-nsg, which allows an inbound SSH connection from the external host. Besides, we created VMs vm-Front-1 and vm-Back-1 without a public IP address association. However, these two VMs have an egress Internet connection because Azure assigns Outbound Access IP (OPIP) addresses to VMs for which we haven’t allocated an ILPIP (vm-Front-1: 20.240.48.199 and vm-Back-1: 20.240.41.145). The Azure portal does not list these IP addresses in the VM view. Note that neither user-defined nor Azure-allocated Public IP addresses are configured as NIC addresses. Instead, Azure adds them as a One-to-One entry to the NAT table (chapter 15 introduces a Continue reading
Before moving to the Virtual Filtering Platform (VFP) and Accelerated Networking (AccelNet) section, let’s look at the guest OS vNIC interface architecture. When we create a VM, Azure automatically attaches a virtual NIC (vNIC) to it. Each vNIC has a synthetic interface, a VMbus device, using a netvsc driver. If Accelerated Networking (AccelNet) is disabled on a VM, all traffic flows pass over the synthetic interface to the software switch. Azure host servers have Mellanox/NVIDIA Single Root I/O Virtualization (SR-IOV) hardware NICs, which offer virtual instances, Virtual Functions (VF), to virtual machines. When we enable AccelNet on a VM, the mlx driver is installed on the vNIC. The mlx driver version depends on the SR-IOV type. The mlx driver on the vNIC initializes a new interface that connects the vNIC to an embedded switch on the SR-IOV hardware NIC. This VF interface is then associated with the netvsc interface. Both interfaces use the same MAC address, but the IP address is only associated with the synthetic interface. When AccelNet is enabled, the VM’s vNIC forwards data flows over the VF interface via the synthetic interface, bypassing the host’s software switch. This architecture allows In-Service Software Updates (ISSU) for SR-IOV NIC drivers.
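You can observe this arrangement from inside the guest with a few standard Linux commands; the interface name eth0 and the exact counter names below are examples and vary by distribution and NIC.

# The SR-IOV Virtual Function shows up as a Mellanox/NVIDIA PCI device in the guest.
lspci | grep -i mellanox

# The synthetic (netvsc) interface keeps the IP address; the VF interface shares its MAC.
ip -brief address show
ethtool -i eth0

# With AccelNet active, the netvsc driver's VF counters increase as traffic uses the fast path.
ethtool -S eth0 | grep -i vf_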
Note! Exception traffic, a data flow with no flow entries in the UFT/GFT, is forwarded through the VFP in order to create flow-action entries in the UFT/GFT.
Figure 1-1: Azure Host-Based SDN Building Blocks.
Continue reading
Software-Defined Networking (SDN) is an architecture where the network’s control plane is decoupled from the data plane to centralized controllers. These intelligent, programmable controllers manage network components as a single system, having a global view of the whole network. Microsoft’s Azure uses a host-based SDN solution, where network virtualization and most of its services (Firewalls, Load balancers, Gateways) run as software on the host. The physical switching infrastructure, in turn, offers a resilient, high-speed underlay transport network between hosts.
Figure 1-1 shows an overview of Azure’s SDN architecture. The Virtual Filtering Platform (VFP) is Microsoft’s cloud-scale software switch operating as a virtual forwarding extension within a Hyper-V basic vSwitch. The forwarding logic of the VFP uses a layered policy model based on policy rules in a Match-Action Table (MAT). VFP operates on the data plane, while complex control plane operations are handed over to centralized control systems. VFP layers, such as VNET, NAT, ACL, and Metering, have dedicated controllers that program policy rules into the MAT using southbound APIs.
The switching process of a software switch is CPU intensive. To reduce the CPU burden, VFP offloads the data forwarding logic to the hardware NIC after processing the first packet of a flow and creating the flow Continue reading
Azure Virtual Filtering Platform (VFP) is Microsoft’s cloud-scale virtual switch operating as a virtual forwarding extension within a Hyper-V basic vSwitch. Figure 1-1 illustrates an overview of the VFP building blocks and their relationships with the basic vSwitch. Let’s start the examination from the perspective of the VM vm-nwkt-1. Its vNIC vm-cafe154 has a synthetic interface eth0 using a NetVSC (Network Virtual Service Client) driver. The Hyper-V vSwitch on the Parent Partition is a Network Virtual Service Provider (NetVSP) with VM-facing vPorts. Vm-cafe154 is connected to vPort4 over the logical inter-partition communication channel, VMBus. VFP sits in the data path between the VM-facing vPorts and the default vPort associated with the physical NIC. VFP uses port-specific Layers for filtering traffic to and from VMs. A VFP Layer is a Match-Action Table (MAT) with a set of policy Rules. Rules consist of Conditions and Actions and are divided into Groups. Each layer is programmed by independent, centralized Controllers without cross-controller dependencies.
Let’s take a concrete example of Layer/Group/Rule object relationship and management by examining the Network Security Group (NSG) in the ACL Layer. Each NSG has a default group for Infrastructure rules, which allows Intra-VNet traffic, outbound Internet connection, and load balancer communication (health check, etc.). We Continue reading
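From the user’s point of view, the rules in the ACL layer are managed as ordinary NSG rules; here is a minimal Azure CLI sketch with assumed resource names and example values.

# Add a customer rule on top of the NSG's default infrastructure rules.
az network nsg rule create \
  --resource-group rg-demo --nsg-name nsg-web --name allow-ssh-from-mgmt \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 10.0.100.0/24 --destination-port-ranges 22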
Here is the Table of Contents of my AWS Networking Fundamentals book. I have added the figures which illustrate the example scenarios in each chapter. The book is available at Leanpub.com. It is still in progress, and there will be additional chapters soon.
Continue reading
Table of Contents
VPC 1
VPC Introduction 1
The Structure of Availability Zone 2
Create VPC - AWS Console 4
Select Region 4
Create VPC 7
DHCP Options Set 9
Main Route Table 10
VPC Verification Using AWS CLI 12
Create VPC - AWS CloudFormation 16
Create Template 17
Upload Template 17
Verification Using AWS Console 18
VPC Verification using AWS CLI 21
Create Subnets - AWS Console 23
Create Subnets 24
Route Tables 29
Create Subnets – AWS Console 30
Create Subnets - AWS CloudFormation 37
Create Network ACL 40
VPC Control-Plane – Mapping Service 43
Introduction 43
Mapping Register 43
Mapping Request - Reply 44
Data-Plane Operation 45
References 46
Introduction 47
Allow Internet Access from Subnet 48
Create Internet Gateway 49
Update Subnet Route Table 54
Network Access Control List 57
Associate SG and Elastic-IP with EC2 59
Create Security Group 59
Launch an EC2 Instance 65
Allocate Elastic IP address from Amazon IPv4 Pool 71
Reachability Analyzer 81
Billing 85
Introduction 87
Create NAT Gateway and Allocate Continue reading
Back-End EC2 instances, like Application and Database servers, are most often launched on a Private subnet. As a recap, a Private subnet is a subnet that doesn’t have a route to the Internet Gateway in its Route table. Besides, EC2 instances in the Private subnet don’t have an Elastic-IP address association. These two facts mean that EC2 instances on the Private subnet don’t have Internet access. However, these EC2 instances might still need occasional Internet access, for example to get firmware upgrades from an external source. We can use a NAT Gateway (NGW) to allow IPv4 Internet traffic from Private subnets to the Internet. When we launch an NGW, we also need to allocate an Elastic-IP address (EIP) and associate it with the NGW. This association works the same way as the EIP-to-EC2 association: it creates a static NAT entry in the IGW that translates the NGW’s local subnet address to its associated EIP. The NGW, in turn, is responsible for translating the source IP address of ingress traffic originating from the Private subnet to its own local subnet IP address. As an example, EC2 instance NWKT-EC2-Back-End sends packets towards the Internet via the NGW. When the NGW receives these packets, it rewrites the source IP address 10.10.1.172 with its Public subnet IP address 10.10.0.195 and forwards the packets to the Internet Gateway. The IGW translates the source IP address 10.10.0.195 to EIP 18.132.96.95 (the EIP associated with the NGW). That means the source IP of the data is rewritten twice, first by the NGW and then by the IGW.
Figure 4-1 illustrates our example NAT GW design and its configuration steps. As a pre-task, we launch an EC2 instance on the Private subnet 10.10.1.0/24 (1). We also modify the existing Security Group (SG) to allow Inbound/Outbound ICMP traffic within the VPC CIDR 10.10.0.0/16 (2), and we allow SSH session initiation from 10.10.0.218/24. I’m using the same SG for both EC2 instances to keep things simple. Besides, both EC2 instances use the same Key Pair. Chapter 3 shows how to launch an EC2 instance and how to modify SGs, which is why we go straight to the NGW configuration.
Once the pre-tasks are done, we launch an NGW on the Public subnet (3). Then we allocate an EIP and associate it with the NGW (4). Next, we add a default route towards the NGW to the Private subnet’s Route Table (5).
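Steps 3-5 map to the AWS CLI roughly as follows; the resource IDs are placeholders. Note that on the CLI the EIP is allocated first and then handed to the NAT Gateway at creation time.

# (4) Allocate an Elastic IP, then (3) launch the NAT Gateway on the Public subnet with it.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0aaa1111 --allocation-id eipalloc-0bbb2222

# (5) Default route on the Private subnet's route table toward the NAT Gateway.
aws ec2 create-route --route-table-id rtb-0ccc3333 \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0ddd4444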
The last three steps are related to connectivity testing. First, we verify Intra-VPC IP connectivity using ICMP (6). Then we test the Internet connectivity (7). As the last step, we confirm that no route exists back to NWKT-EC2-Back-End from the IGW, using the AWS Reachability Analyzer (8).
Note! Our example doesn’t follow good design principles. AWS Availability Zones (AZs) are restricted failure domains, which means that a failure in one AZ doesn’t affect the operation of other AZs. Now, if our NGW in AZ eu-west-2c fails, Internet traffic from the Private subnet in eu-west-2a fails as well. The proper design is to launch an NGW in every AZ where egress Internet access is needed.
Figure 4-1: Example Topology.
Continue reading
Figure 3-20: EC2 Instance, Elastic IP, and Security Group.
Continue reading
This chapter explains what components, services, and configurations we need to allow Internet traffic to and from an EC2 instance. VPCs themselves are closed entities. If we need an Internet connection, we need to use the AWS Internet Gateway (IGW) service. The IGW runs on a Blackfoot Edge Device in the AWS domain. It performs Data-Plane VPC encapsulation and decapsulation, as well as IP address translation. We also need public, Internet-routable IP addresses. In our example, we allocate an AWS Elastic-IP (EIP) address and associate it with the EC2 instance. By doing so, we don’t add the EIP to the EC2 instance itself. Instead, we create a static one-to-one NAT entry in the VPC-associated IGW. The subnet Route Table includes only the local route for the VPC’s CIDR range. That is why we need to add a routing entry to the Subnet RT, default or more specific, towards the IGW. Note that a subnet within an AWS VPC is not a Broadcast domain (a VPC doesn’t even support Broadcasts). Rather, we can think of it as a logical place for EC2 instances having uniform connection requirements, like reachability from the Internet. As a next step, we define the security policy. Each Subnet has a Network Access Control List (NACL), which is a stateless Data-Plane filter. Stateless means that to allow a bi-directional traffic flow, we have to permit the flow-specific Request and Reply data separately. For simplicity, we are going to use the Subnet’s Default NACL. The Security Group (SG), in turn, is a stateful, EC2 instance-specific Data-Plane filter. Stateful means that the filter tracks flows, so the return traffic of a permitted flow is allowed automatically. Our example security policy is based on the SG. We will allow an SSH connection from the external host 91.152.204.245 to the EC2 instance NWKT-EC2-Front-End. In addition, we allow all ICMP traffic from the EC2 instance to the same external host. As the last part, this chapter introduces the Reachability Analyzer service, which we can use for troubleshooting connections. Figure 3-1 illustrates what we are going to build in this chapter.
Figure 3-1: Setting Up an Internet Connection for Public Subnet of AWS VPC.
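The building blocks in Figure 3-1 correspond to AWS CLI calls roughly like these; the resource IDs are placeholders, while the external host address and ports follow the example policy.

# Internet Gateway: create, attach to the VPC, and add a default route in the subnet route table.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0aaa1111 --vpc-id vpc-0bbb2222
aws ec2 create-route --route-table-id rtb-0ccc3333 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0aaa1111

# Elastic IP: allocate and associate with the EC2 instance (a one-to-one NAT entry in the IGW).
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0ddd4444 --allocation-id eipalloc-0eee5555

# Security Group: SSH in from the external host, ICMP out to the same host.
aws ec2 authorize-security-group-ingress --group-id sg-0fff6666 \
  --protocol tcp --port 22 --cidr 91.152.204.245/32
aws ec2 authorize-security-group-egress --group-id sg-0fff6666 \
  --protocol icmp --port -1 --cidr 91.152.204.245/32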