
Category Archives for "The Network Times"

Chapter 1: Azure VM networking – Virtual Filtering Platform and Accelerated Networking

Note! This post is under technical review.

Introduction


Virtual Filtering Platform (VFP) is Microsoft’s cloud-scale software switch operating as a virtual forwarding extension within a Hyper-V basic vSwitch. The forwarding logic of VFP uses a layered policy model based on policy rules in Match-Action Tables (MAT). VFP operates on the data plane, while complex control plane operations are handed over to centralized control systems. VFP includes several layers, such as the VNET, NAT, ACL, and Metering layers, each with dedicated controllers that program policy rules into the MAT using southbound APIs. VFP processes the first packet of an inbound/outbound data flow through the match-action table of each layer, and the resulting entries are copied into the Unified Flow Table (UFT). Subsequent packets are then switched based on the flow-based actions in the UFT. However, if the Virtual Machine is not using Accelerated Networking (AccelNet), all packets are still forwarded over the software switch, which consumes host CPU cycles. Accelerated Networking reduces the host’s CPU burden and provides a higher packet rate with more predictable jitter by switching packets in the hardware NIC while still relying on VFP for traffic policy.


Hyper-V Extensible Virtual Switch


Microsoft’s extensible vSwitch running on Hyper-V operates as a Network Virtualization Service Provider (NetVSP) for Virtual Machines. VMs, in turn, are Network Virtualization Service Consumers (NetVSC). When a VM starts, it requests the Hyper-V virtualization stack to connect it to the vSwitch. The virtualization stack creates a virtual Network Interface (vNIC) for the VM and associates it with the vSwitch. The vNIC is presented to the VM as a physical network adapter. The communication channel between the VM and the vSwitch is a synthetic data path over the Virtual Machine Bus (VMBus), which provides a standardized interface for VMs to access physical resources on the host machine. It helps ensure that virtual machines have consistent performance and can access resources in a secure and isolated manner.


Virtual Filtering Platform - VFP


Virtual Filtering Platform (VFP) is Microsoft’s cloud-scale virtual switch operating as a virtual forwarding extension within a Hyper-V basic vSwitch. VFP sits in the data path between the virtual ports facing the virtual machines and the default vPort associated with the physical NIC. VFP uses the VM’s vPort-specific layers for filtering traffic to and from the VM. A layer in VFP is a Match-Action Table (MAT) containing policy rules programmed by independent, centralized controllers. A packet is processed through the VFP layers only if it is an exception packet, i.e., there is no Unified Flow entry (UF) for it in the Unified Flow Table (UFT), or if it is the first packet of a flow (the TCP SYN packet). When a Virtual Machine initiates a new connection, the first packet of the data flow is stored in the Received Queue (RxQ). The Parser component of VFP then extracts the L2 (Ethernet), L3 (IP), and L4 (TCP/UDP) header information as metadata, which is then processed through the layer policies in each VFP layer. The VFP layers involved in packet processing depend on the flow destination and the Azure services associated with the source/destination VM.
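
To make the parsing step concrete, here is a minimal sketch of how the first packet’s L2/L3/L4 header groups could be extracted into metadata. This is only a conceptual model in Python; the field names, the sample addresses, and the dictionary-based “frame” are illustrative assumptions, not the actual VFP implementation.

```python
from dataclasses import dataclass

# Conceptual model only: the real VFP parser works on raw frames inside the host.
@dataclass(frozen=True)
class HeaderGroups:
    # L2 (Ethernet) header group
    src_mac: str
    dst_mac: str
    # L3 (IP) header group
    src_ip: str
    dst_ip: str
    # L4 (TCP/UDP) header group
    protocol: str
    src_port: int
    dst_port: int

def parse_first_packet(frame: dict) -> HeaderGroups:
    """Extract the L2/L3/L4 header groups of an exception packet as metadata."""
    return HeaderGroups(
        src_mac=frame["eth"]["src"], dst_mac=frame["eth"]["dst"],
        src_ip=frame["ip"]["src"], dst_ip=frame["ip"]["dst"],
        protocol=frame["l4"]["proto"],
        src_port=frame["l4"]["sport"], dst_port=frame["l4"]["dport"],
    )

# Example: the TCP SYN packet of a new outbound flow (illustrative values).
syn = {
    "eth": {"src": "00:0d:3a:01:01:01", "dst": "12:34:56:78:9a:bc"},
    "ip":  {"src": "10.0.1.4", "dst": "142.250.74.110"},
    "l4":  {"proto": "TCP", "sport": 50123, "dport": 443},
}
metadata = parse_first_packet(syn)
print(metadata)
```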

VNet-to-Internet traffic from a VM using a Public IP


The Metering layer measures traffic for billing. It is the first layer for the VM’s outgoing traffic and the last layer for incoming traffic, i.e., it measures only the original ingress/egress packets, ignoring tunnel headers and other header modifications (Azure does not charge you for the overhead bytes added by tunnel encapsulation). Next, the ACL layer runs the metadata through the NSG policy statements. If the source/destination IP addresses (L3 header group), protocol, and source/destination ports (L4 header group) match one of the permitting policy rules, the traffic is allowed (action #1: Allow). After ACL layer processing, the routing process intercepts the metadata. Because the destination IP address in the L3 header group matches only the default route (0.0.0.0/0, next-hop Internet), the metadata is handed over to the Server Load Balancing/Network Address Translation (SLB/NAT) layer. In this example, a public IP is associated with the VM’s vNIC, so the SLB/NAT layer translates the private source IP to the public IP (action #2: Source NAT). The VNet layer is bypassed if both the source and destination IP addresses are from the public IP space. After the metadata has been processed by each layer, the results are programmed into the Unified Flow Table (UFT). Each flow is identified with a unique Unified Flow Identifier (UFID), a hash value calculated from the flow 5-tuple (source/destination IP, protocol, source/destination port). The UFID is also associated with the actions Allow and Source NAT. The Header Transposition (HT) engine then takes the original packet from the RxQ and modifies its L2/L3/L4 header groups as described in the UFT. It changes the private source IP to the public IP (Modify) and moves the packet to the Transmit Queue (TxQ). Subsequent packets of the flow are modified by the HT engine based on the existing UFT entry without running their metadata through the VFP layers (slow-path to fast-path switchover).
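
A rough sketch of the UFT logic described above, assuming a simplified model in which the UFID is just a hash over the 5-tuple and the layer results are stored as a list of header-transposition actions. The hashing function, the sample addresses, and the public IP 20.50.60.70 are illustrative assumptions; the real VFP uses its own hashing and transposition engine.

```python
import hashlib

def ufid(src_ip, dst_ip, proto, sport, dport):
    """Unified Flow Identifier: a hash over the flow 5-tuple (simplified)."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return hashlib.sha1(key).hexdigest()[:16]

# Unified Flow Table: UFID -> list of actions decided by the VFP layers.
uft = {}

# First (exception) packet of an outbound VM flow: layers decide Allow + Source NAT.
flow = ("10.0.1.4", "142.250.74.110", "TCP", 50123, 443)
uft[ufid(*flow)] = [
    {"action": "allow"},                                  # ACL layer result
    {"action": "modify", "field": "src_ip",               # SLB/NAT layer result
     "from": "10.0.1.4", "to": "20.50.60.70"},            # private IP -> public IP
]

def transpose(packet, table):
    """Header Transposition engine: apply the UFT actions to subsequent packets."""
    key = ufid(packet["src_ip"], packet["dst_ip"], packet["proto"],
               packet["sport"], packet["dport"])
    for act in table.get(key, []):
        if act["action"] == "modify":
            packet[act["field"]] = act["to"]
    return packet  # fast path: no per-layer processing needed

pkt = {"src_ip": "10.0.1.4", "dst_ip": "142.250.74.110",
       "proto": "TCP", "sport": 50123, "dport": 443}
print(transpose(pkt, uft))   # src_ip rewritten to the public IP
```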

Besides the outbound flow entry, the VFP layer processing generates an inbound flow entry for the same connection, but with a reversed 5-tuple (source/destination addresses and ports swapped) and reversed actions (Destination NAT instead of Source NAT). These outbound and inbound flows are then paired and treated as a connection, enabling the Flow State Tracking process, where inactive connections can be deleted from the UFT. For example, the Flow State Machine tracks TCP RST flags. Let’s say the destination endpoint sets the TCP RST flag in the L4 header. The TCP state machine notices it and removes the inbound flow, together with its paired outbound flow, from the UFT. The TCP state machine also tracks the TCP FIN/FIN-ACK flags and the TIME_WAIT state (after a TCP FIN, the connection is kept alive for at most 2 x Maximum Segment Lifetime to catch delayed or retransmitted packets).
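
The flow pairing can be illustrated with a short sketch: the paired inbound entry is simply the outbound 5-tuple with addresses and ports swapped, and the connection state reacts to TCP flags. This is a toy model of the behaviour described above, not the actual flow state machine; the addresses are the illustrative values used earlier.

```python
def reverse_tuple(src_ip, dst_ip, proto, sport, dport):
    """Build the 5-tuple of the paired inbound flow: addresses and ports swapped."""
    return (dst_ip, src_ip, proto, dport, sport)

# Outbound flow after Source NAT (public IP as source) and its paired inbound flow.
outbound = ("20.50.60.70", "142.250.74.110", "TCP", 50123, 443)
inbound = reverse_tuple(*outbound)

connection = {"outbound": outbound, "inbound": inbound, "state": "ESTABLISHED"}

def on_tcp_flags(conn, flags):
    """Toy flow-state tracking: RST removes the pair, FIN moves it to TIME_WAIT."""
    if "RST" in flags:
        conn["state"] = "CLOSED"          # both paired flows removed from the UFT
    elif "FIN" in flags:
        conn["state"] = "TIME_WAIT"       # kept for up to 2 x MSL for late packets

on_tcp_flags(connection, {"FIN", "ACK"})
print(connection["state"])                # TIME_WAIT
```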


Intra-VNet traffic



The Metering and ACL layers in VFP process inbound/outbound flows for Intra-VNet connections in the same manner as for VNet-to-Internet traffic. When the routing process notices that the destination Direct IP (DIP) address (Customer Address space) is within the VNet CIDR range, the NAT layer is bypassed. The reason is that Intra-VNet flows use private Direct IP addresses as source and destination addresses. The Host Agent, responsible for VNet layer operations, then examines the destination IP address from the L3 header group. Because this is the first packet of the flow, there is no information about the destination DIP-to-physical host mapping (location information) in the cache table. The VNet layer is responsible for adding tunnel headers to Intra-VNet traffic, so the Host Agent requests the location information from the centralized control plane. After getting the reply, it creates a MAT entry where the action part defines the tunnel headers (push action). After the metadata is processed, the result is programmed into the Unified Flow Table. As a result, the Header Transposition engine takes the original packet from the Received Queue, adds the tunnel header, and moves the packet to the Transmit Queue.
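
A conceptual sketch of the VNet-layer behaviour: if the destination DIP is not in the local cache, ask the control plane for the DIP-to-host mapping and program a push action. The control-plane stub, the host addresses, the VNI value, and the VXLAN/NVGRE-style header fields are all illustrative assumptions.

```python
# Cached DIP (customer address) -> physical host (provider address) mappings.
location_cache = {}

def control_plane_lookup(dip):
    """Stand-in for the centralized control plane; values are illustrative."""
    return {"host_ip": "100.64.1.20", "vni": 7001}

def vnet_layer(dst_dip):
    """Return the tunnel push action for an Intra-VNet destination."""
    if dst_dip not in location_cache:
        location_cache[dst_dip] = control_plane_lookup(dst_dip)   # first packet only
    loc = location_cache[dst_dip]
    return {"action": "push",
            "outer_src_ip": "100.64.1.10",        # local host address (assumption)
            "outer_dst_ip": loc["host_ip"],
            "vni": loc["vni"]}                    # VXLAN/NVGRE-style tunnel header

print(vnet_layer("10.0.2.5"))
```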

Figure 1-1: Azure Host-Based SDN Building Blocks.

Continue reading

Azure Networking Fundamentals: Virtual WAN Part 2 – VNet Segmentation

VNets and VPN/ExpressRoute connections are associated with the vHub’s Default Route Table, which allows both VNet-to-VNet and VNet-to-Remote Site IP connectivity. This chapter explains how we can isolate vnet-swe3 from vnet-swe1 and vnet-swe2 using VNet-specific vHub Route Tables (RT), while still allowing VNet-to-VPN Site connectivity. As a first step, we create a Route Table rt-swe12 and associate VNets vnet-swe1 and vnet-swe2 with it. Next, we deploy a Route Table rt-swe3 for vnet-swe3. Then we propagate routes from these RTs to the Default RT, but not from rt-swe12 to rt-swe3 or vice versa. Our VPN Gateway is associated with the Default RT, and the route to the remote site subnet 10.11.11.0/24 is installed into the Default RT. To achieve bi-directional IP connectivity, we also propagate routes from the Default RT to rt-swe12 and rt-swe3. As the last step, we verify both Control Plane operation and Data Plane connections.
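
To see why this association/propagation scheme isolates vnet-swe3 while keeping VPN reachability, here is a small Python model of which route tables each connection propagates to. It is a conceptual sketch of the scheme described above, not Azure CLI or API calls, and it simplifies propagation to a per-connection attribute.

```python
# Which vHub route table each connection is associated with (its lookup table),
# and which route tables its prefixes are propagated to.
connections = {
    "vnet-swe1": {"associated": "rt-swe12", "propagates_to": {"rt-swe12", "Default"}},
    "vnet-swe2": {"associated": "rt-swe12", "propagates_to": {"rt-swe12", "Default"}},
    "vnet-swe3": {"associated": "rt-swe3",  "propagates_to": {"rt-swe3", "Default"}},
    "vpn-site":  {"associated": "Default",  "propagates_to": {"Default", "rt-swe12", "rt-swe3"}},
}

def reachable(src, dst):
    """src can reach dst if dst's prefixes land in the RT that src looks up."""
    return connections[src]["associated"] in connections[dst]["propagates_to"]

print(reachable("vnet-swe1", "vnet-swe2"))   # True  - both share rt-swe12
print(reachable("vnet-swe1", "vnet-swe3"))   # False - rt-swe3 routes never reach rt-swe12
print(reachable("vnet-swe3", "vpn-site"))    # True  - Default RT routes propagate to rt-swe3
```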


Figure 12-1: Virtual Network Segmentation.

Azure Networking Fundamentals: Virtual WAN Part 1 – S2S VPN and VNet Connections

This chapter introduces the Azure Virtual WAN (vWAN) service. It offers a single deployment, management, and monitoring pane for connectivity services such as Inter-VNet, Site-to-Site VPN, and ExpressRoute. In this chapter, we focus on S2S VPN and VNet connections. The Site-to-Site VPN solution in vWAN differs from the traditional model, where we create resources as individual components. In this solution, we only deploy a vWAN resource and manage everything else through its management view. Figure 11-1 illustrates our example topology and deployment order. The first step is to implement a vWAN resource. Then we deploy a vHub. It is an Azure-managed VNet to which we assign a CIDR, just like we do with a traditional VNet. We can deploy a vHub as an empty VNet without associating any connections. The vHub deployment process launches a pair of redundant routers, which exchange reachability information with the VNet Gateway router and VGW instances using BGP. We intend to allow Inter-VNet data flows between vnet-swe1 and vnet-swe2, as well as Branch-to-VNet traffic. For Site-to-Site VPN, we deploy a VPN Gateway (VGW) into the vHub. The VGW deployed in the vHub consists of two instances, instance0 and instance1, running in active/active mode. We don’t deploy a GatewaySubnet for the VGW… Continue reading

Azure Networking Fundamentals: VNET Peering

Comment: Here is a part of the introduction section of the eighth chapter of my Azure Networking Fundamentals book. I will also publish other chapters' introduction sections soon so you can see if the book is for you. The book is available at Leanpub and Amazon (links on the right pane).

This chapter introduces the Azure VNet Peering solution. VNet peering creates bidirectional IP connections between peered VNets. VNet peering links can be established within and across Azure regions and between VNets under different Azure subscriptions or tenants. The unencrypted data path over peering links stays within Azure's private infrastructure. Consider a software-level solution (or use VGW) if your security policy requires data path encryption. Unlike VGW, where bandwidth depends on the SKU, VNet peering has no bandwidth limitation. From the VM perspective, VNet peering gives seamless network performance (bandwidth, latency, and jitter) for Inter-VNet and Intra-VNet traffic. Unlike the VGW solution, VNet peering is non-transitive: routing information learned from one VNet peer is not advertised to another VNet peer. However, we can permit peered VNets (Spokes) to use the local VGW (Hub) and route Spoke-to-Spoke data by using a subnet-specific route table… Continue reading

Azure Networking Fundamentals: Site-to-Site VPN

Comment: Here is a part of the introduction section of the fifth chapter of my Azure Networking Fundamentals book. I will also publish other chapters' introduction sections soon so you can see if the book is for you. The book is available at Leanpub and Amazon (links on the right pane).

A Hybrid Cloud is a model where we split application-specific workloads across the public and private clouds. This chapter introduces Azure's hybrid cloud solution using a Site-to-Site (S2S) Active-Standby VPN connection between Azure and an on-prem DC. The Azure S2S A/S VPN service includes five Azure resources. The first one, Virtual Network Gateway (VGW), also called VPN Gateway, consists of two VMs, one in active mode and the other in standby mode. These VMs are our VPN connection termination points on the Azure side, which encrypt and decrypt data traffic. The active VM has a public IP address associated with its Internet side. If the active VM fails, the standby VM takes the active role, and the public IP address is re-associated with it. The active and standby VMs are attached to a special subnet called the gateway subnet, whose name must be GatewaySubnet. The Local Gateway (LGW)… Continue reading

Azure Networking Fundamentals: Internet Access with VM-Specific Public IP

Comment: Here is a part of the introduction section of the third chapter of my Azure Networking Fundamentals book. I will also publish other chapters' introduction sections soon so you can see if the book is for you. The book is available at Leanpub and Amazon (links on the right pane).

In chapter two, we created a VM vm-Bastion and associated a Public IP address with its attached NIC vm-bastion154. The Public IP addresses associated with a VM’s NIC are called Instance Level Public IPs (ILPIP). Then we added a security rule to the existing NSG vm-Bastion-nsg, which allows an inbound SSH connection from the external host. Besides, we created VMs vm-Front-1 and vm-Back-1 without a public IP address association. However, these two VMs have an egress Internet connection because Azure assigns Outbound Access IP (OPIP) addresses to VMs for which we haven’t allocated an ILPIP (vm-Front-1: 20.240.48.199 and vm-Back-1: 20.240.41.145). The Azure portal does not list these IP addresses in the VM view. Note that neither user-defined nor Azure-allocated Public IP addresses are configured as NIC addresses. Instead, Azure adds them as One-to-One entries to the NAT table (chapter 15 introduces a… Continue reading

Azure Networking Fundamentals: Network Security Group (NSG)

Comment: Here is a part of the introduction section of the second chapter of my Azure Networking Fundamentals book. I will also publish other chapters' introduction sections soon so you can see if the book is for you. The book is available at Leanpub and Amazon (links on the right pane). 

This chapter introduces three NSG scenarios. The first example explains the NSG-NIC association. In this section, we create a VM that acts as a Bastion host. Instead of using the Azure Bastion service, we deploy a custom-made vm-Bastion to snet-dmz and allow SSH connections from the external network. The second example describes the NSG-Subnet association. In this section, we launch vm-Front-1 in the front-end subnet. Then we deploy an NSG that allows SSH connections from the Bastion host IP address. The last part of the chapter introduces the Application Security Group (ASG), which we use to form a logical VM group. We can then use the ASG as a destination in an NSG security rule. There are two ASGs in figure 2-1. We can create a logical group of VMs by associating them with the same Application Security Group (ASG). The ASG can then be used… Continue reading

Azure Host-Based Networking: vNIC Interface Architecture – Synthetic Interface and Virtual Function

Before moving to the Virtual Filtering Platform (VFP) and Accelerated Networking (AccelNet) section, let’s look at the guest OS vNIC interface architecture. When we create a VM, Azure automatically attaches a virtual NIC (vNIC) to it. Each vNIC has a synthetic interface, a VMBus device, using the netvsc driver. If Accelerated Networking (AccelNet) is disabled on a VM, all traffic flows pass over the synthetic interface to the software switch. Azure host servers have Mellanox/NVIDIA Single Root I/O Virtualization (SR-IOV) hardware NICs, which offer virtual instances, Virtual Functions (VF), to virtual machines. When we enable AccelNet on a VM, the mlx driver is installed on the vNIC. The mlx driver version depends on the SR-IOV NIC type. The mlx driver on the vNIC initializes a new interface that connects the vNIC to an embedded switch on the SR-IOV hardware NIC. This VF interface is then associated with the netvsc interface. Both interfaces use the same MAC address, but the IP address is only associated with the synthetic interface. When AccelNet is enabled, the VM’s data flows are forwarded over the VF interface via the synthetic interface. This architecture allows In-Service Software Updates (ISSU) of the SR-IOV NIC drivers.

Note! Exception traffic, a data flow with no flow entries in the UFT/GFT, is forwarded through VFP to create flow-action entries in the UFT/GFT.

Figure 1-1: Azure Host-Based SDN Building Blocks.

Continue reading

Azure Host-Based Networking: VFP and AccelNet Introduction

Software-Defined Networking (SDN) is an architecture where the network’s control plane is decoupled from the data plane and moved to centralized controllers. These intelligent, programmable controllers manage network components as a single system, having a global view of the whole network. Microsoft’s Azure uses a host-based SDN solution, where network virtualization and most of its services (firewalls, load balancers, gateways) run as software on the host. The physical switching infrastructure, in turn, offers a resilient, high-speed underlay transport network between hosts.

Figure 1-1 shows an overview of Azure’s SDN architecture. Virtual Filtering Platform (VFP) is Microsoft’s cloud-scale software switch operating as a virtual forwarding extension within a Hyper-V basic vSwitch. The forwarding logic of VFP uses a layered policy model based on policy rules in Match-Action Tables (MAT). VFP operates on the data plane, while complex control plane operations are handed over to centralized control systems. VFP layers, such as VNET, NAT, ACL, and Metering, have dedicated controllers that program policy rules into the MATs using southbound APIs.

A software switch’s packet-switching process is CPU intensive. To reduce the burden on host CPU cycles, VFP offloads the data forwarding logic to the hardware NIC after processing the first packet of the flow and creating the flow… Continue reading

Azure Host-Based SDN: Part 1 – VFP Introduction

Azure Virtual Filtering Platform (VFP) is Microsoft’s cloud-scale virtual switch operating as a virtual forwarding extension within a Hyper-V basic vSwitch. Figure 1-1 illustrates an overview of the VFP building blocks and their relationship to the basic vSwitch. Let’s start the examination from the perspective of the VM vm-nwkt-1. Its vNIC vm-cafe154 has a synthetic interface eth0 using the netvsc driver (Network Virtual Service Client). The Hyper-V vSwitch in the Parent Partition is a Network Virtual Service Provider (NetVSP) with VM-facing vPorts. vm-cafe154 is connected to vPort4 over the logical inter-partition communication channel VMBus. VFP sits in the data path between the VM-facing vPorts and the default vPort associated with the physical NIC. VFP uses port-specific Layers for filtering traffic to and from VMs. A VFP Layer is a Match-Action Table (MAT) containing a set of policy Rules. Rules consist of Conditions and Actions and are divided into Groups. Each layer is programmed by independent, centralized Controllers without cross-controller dependencies.

Let’s take a concrete example of the Layer/Group/Rule object relationship and management by examining the Network Security Group (NSG) in the ACL Layer. Each NSG has a default group for Infrastructure rules, which allow Intra-VNet traffic, outbound Internet connections, and load balancer communication (health checks, etc.). We… Continue reading


AWS Networking Fundamentals: A Practical Guide to Understand How to Build a Virtual Datacenter into the AWS Cloud

Table of Contents

Chapter 1: Virtual Private Cloud - VPC
    VPC
    VPC Introduction
    The Structure of Availability Zone
    Create VPC - AWS Console
    Select Region
    Create VPC
    DHCP Options Set
    Main Route Table
    VPC Verification Using AWS CLI
    Create VPC - AWS CloudFormation
    Create Template
    Upload Template
    Verification Using AWS Console
    VPC Verification Using AWS CLI
    Create Subnets - AWS Console
    Create Subnets
    Route Tables
    Create Subnets – AWS Console
    Create Subnets - AWS CloudFormation
    Create Network ACL

Chapter 2: VPC Control-Plane
    VPC Control-Plane – Mapping Service
    Introduction
    Mapping Register
    Mapping Request - Reply
    Data-Plane Operation
    References

Chapter 3: VPC Internet Gateway Service
    Introduction
    Allow Internet Access from Subnet
    Create Internet Gateway
    Update Subnet Route Table
    Network Access Control List
    Associate SG and Elastic-IP with EC2
    Create Security Group
    Launch an EC2 Instance
    Allocate Elastic IP address from Amazon IPv4 Pool
    Reachability Analyzer
    Billing

Chapter 4: VPC NAT Gateway
    Introduction
    Create NAT Gateway and Allocate… Continue reading

AWS Networking – Part XI: VPC NAT Gateway

Introduction


Back-end EC2 instances, like application and database servers, are most often launched on a Private subnet. As a recap, a Private subnet is a subnet that doesn’t have a route to the Internet Gateway in its route table. Besides, EC2 instances in a Private subnet don’t have an Elastic IP address association. These two facts mean that EC2 instances on the Private subnet don’t have Internet access. However, these EC2 instances might still need occasional Internet access, for example to get firmware upgrades from an external source. We can use a NAT Gateway (NGW) to allow IPv4 traffic from Private subnets to the Internet. When we launch an NGW, we also need to allocate an Elastic IP address (EIP) and associate it with the NGW. This association works the same way as the EIP-to-EC2 association: it creates a static NAT entry in the IGW that translates the NGW’s local subnet address to its associated EIP. The NGW, in turn, is responsible for translating the source IP address of ingress traffic originating from the Private subnet to its own local subnet IP address. As an example, the EC2 instance NWKT-EC2-Back-End sends packets towards the Internet via the NGW. When the NGW receives these packets, it rewrites the source IP address 10.10.1.172 with its Public subnet IP address 10.10.0.195 and forwards the packets to the Internet Gateway. The IGW translates the source IP address 10.10.0.195 to the EIP 18.132.96.95 (the EIP associated with the NGW). That means the source IP of the data is rewritten twice, first by the NGW and then by the IGW.

Figure 4-1 illustrates our example NAT GW design and its configuration steps. As a pre-task, we launch an EC2 instance on the Private subnet 10.10.1.0/24 (1). We also modify the existing Security Group (SG) to allow inbound/outbound ICMP traffic within the VPC CIDR 10.10.0.0/16 (2), and we allow SSH session initiation from 10.10.0.218/24. I’m using the same SG for both EC2 instances to keep things simple, and both EC2 instances use the same key pair. Chapter 3 shows how to launch an EC2 instance and modify SGs, which is why we go straight to the NGW configuration.

When the pre-tasks are done, we launch an NGW on the Public subnet (3). Then we allocate an EIP and associate it with the NGW (4). Next, we add a default route towards the NGW to the Private subnet’s Route Table (5).
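
Steps 3-5 could be scripted roughly as follows with boto3; the chapter itself uses the AWS console. The subnet and route-table IDs below are placeholders, not values from the example environment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# (3) and (4): allocate an Elastic IP and launch the NAT Gateway with it
#              on the Public subnet (placeholder subnet ID).
eip = ec2.allocate_address(Domain="vpc")
ngw = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",          # Public subnet 10.10.0.0/24
    AllocationId=eip["AllocationId"],
)
ngw_id = ngw["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[ngw_id])

# (5) Default route from the Private subnet's route table towards the NGW.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",         # Private subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=ngw_id,
)
```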

The last three steps are related to connectivity testing. First, we verify Intra-VPC IP connectivity using ICMP (6). Then we test the Internet connectivity (7). As the last step, we confirm that no route exists back to NWKT-EC2-Back-End from the IGW, using the AWS Reachability Analyzer (8).

Note! Our example doesn’t follow good design principles. AWS Availability Zones (AZ) are restricted failure domains, which means that a failure in one AZ doesn’t affect the operation of the other AZs. Now, if our NGW in AZ eu-west-2c fails, Internet traffic from the Private subnet in eu-west-2a fails as well. The proper design is to launch an NGW in each AZ where unidirectional egress Internet access is needed.


Figure 4-1: Example Topology.

Continue reading

AWS Networking – Part X: VPC Internet Gateway Service – Part Two

 

Associate SG and Elastic-IP with EC2


In the previous section, we created an Internet Gateway for our VPC. We also added a static route towards the IGW to the Route Table of subnet 10.10.0.0/24. In this section, we first create a Security Group (SG). The SG allows SSH connections to the EC2 instance and ICMP from it. Then we launch an EC2 instance and attach the previously configured SG to it. As the last step, we allocate an Elastic IP address (EIP) from the AWS IPv4 address pool and associate it with the EC2 instance. When all the previous steps are done, we test the connection. First, we open an SSH connection from MyPC to the EC2 instance. Then, we ping MyPC from the EC2 instance. We also use the AWS Reachability Analyzer to validate the path from the IGW to the EC2 instance. The last section introduces the AWS billing related to this chapter.
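
A boto3 sketch of the same steps is shown below; the chapter itself walks through the AWS console. The VPC/subnet IDs, AMI ID, SG name, and key pair name are placeholders, and the SSH source CIDR assumes the external host 91.152.204.245 used elsewhere in the book.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Create a Security Group allowing inbound SSH; egress is allow-all by default,
# so ICMP from the instance outwards needs no extra rule.
sg = ec2.create_security_group(
    GroupName="NWKT-SG-Front-End", Description="SSH in, ICMP out",
    VpcId="vpc-0123456789abcdef0")                      # placeholder VPC ID
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                    "IpRanges": [{"CidrIp": "91.152.204.245/32"}]}])

# Launch an EC2 instance into the Public subnet with the SG attached.
run = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                    # placeholder AMI
    InstanceType="t2.micro", MinCount=1, MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=[sg["GroupId"]], KeyName="NWKT-KeyPair")
instance_id = run["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# Allocate an Elastic IP and associate it with the instance
# (this programs the one-to-one NAT entry in the IGW).
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"], InstanceId=instance_id)
```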


Figure 3-20: EC2 Instance, Elastic IP, and Security Group.

 

Continue reading

AWS Networking – Part X: VPC Internet Gateway Service – Part One


Introduction


This chapter explains what components/services and configurations we need to allow Internet traffic to and from an EC2 instance. VPCs themselves are closed entities. If we need an Internet connection, we need to use the AWS Internet Gateway (IGW) service. The IGW runs on a Blackfoot Edge Device in the AWS domain. It performs Data-Plane VPC encapsulation and decapsulation, as well as IP address translation. We also need public, Internet-routable IP addresses. In our example, we allocate an AWS Elastic IP (EIP) address and then associate it with the EC2 instance. By doing so, we don’t add the EIP to the EC2 instance itself. Instead, we create a static one-to-one NAT entry in the VPC-associated IGW. The subnet Route Table includes only the VPC CIDR range’s local route, which is why we need to add a route entry towards the IGW, default or more specific, to the subnet RT. Note that a subnet within an AWS VPC is not a broadcast domain (a VPC doesn’t even support broadcasts). Rather, we can think of it as a logical place for EC2 instances with uniform connection requirements, such as reachability from the Internet. As the next step, we define the security policy. Each subnet has a Network Access Control List (NACL), which is a stateless Data-Plane filter. Stateless means that to allow a bi-directional traffic flow, we have to permit the flow-specific request and reply traffic separately. For simplicity, we are going to use the subnet’s default NACL. The Security Group (SG), in turn, is a stateful, EC2 instance-specific Data-Plane filter. Stateful means that the filter permits flow-based ingress and egress traffic, i.e., the return traffic of an allowed flow is permitted automatically. Our example security policy is based on the SG. We will allow an SSH connection from the external host 91.152.204.245 to the EC2 instance NWKT-EC-Front-End. In addition, we allow all ICMP traffic from the EC2 instance to the same external host. As the last part, this chapter introduces the Reachability Analyzer service, which we can use for troubleshooting connections. Figure 3-1 illustrates what we are going to build in this chapter.
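
A boto3 sketch of the IGW plumbing described above; the SG and EIP steps belong to the next part of the chapter. The VPC and route-table IDs are placeholders, not values from the example environment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Create the Internet Gateway and attach it to the VPC.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id,
                            VpcId="vpc-0123456789abcdef0")      # placeholder

# The subnet route table only has the VPC local route, so add a default
# route towards the IGW to make the subnet "public".
ec2.create_route(RouteTableId="rtb-0123456789abcdef0",          # placeholder
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
```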


Figure 3-1: Setting Up an Internet Connection for Public Subnet of AWS VPC.

 

Continue reading

AWS Networking – Part IX: AWS VPC Control-Plane – Mapping Service

 

Introduction


This chapter explains the VPC Control-Plane operation when two EC2 instances within the same subnet initiate a TCP session between themselves. In our example, the EC2 instances are launched on two different physical servers. Both instances have an Elastic Network Interface (ENI). The left-hand side EC2’s ENI has the MAC/IP addresses cafe:0001:0001/10.10.1.11, and the right-hand side EC2’s ENI has the MAC/IP addresses beef:0001:0001/10.10.1.22. Each physical server hosting EC2 instances has a Nitro Card for VPC [NC4VPC]. It is responsible for routing, data packet encapsulation/decapsulation, and traffic limiting. In addition, Security Groups (SGs) are implemented in hardware on the Nitro Card for VPC. The AWS Control-Plane relies on a Mapping Service that is decoupled from the network devices. This means that switches are unaware of overlay networks and hold no state information related to VPCs, subnets, EC2 instances, or any other overlay network components. From the Control-Plane perspective, the physical network switches participate in the underlay network routing process by advertising the reachability information of physical hosts, the Mapping Service, and so on. From the Data-Plane point of view, they forward packets based on the outer IP header.

  

Mapping Register

Starting an EC2 instance triggers the Control-Plane process on a host. Figure 2-1 illustrates how Host-1 and Host-2 store the information of their local EC2 instances in the Mapping cache. Then they register these instances with the Mapping Service. You can think of the registration process as a routing update. We need to inform the Mapping Service about a) the EC2 instance’s MAC/IP addresses bound to the ENI, b) the Virtual Network Identifier (= VPC), c) the physical host IP, and d) the encapsulation mode (VPC tunnel header). If you are familiar with the Locator/ID Separation Protocol (LISP), you may notice that its Control-Plane process follows the same principles. The main difference is that switches in LISP-enabled networks hold state information related to the virtual/bare-metal servers running in a virtual network.
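
A toy model of the mapping register and lookup: the record fields mirror the a)-d) items above. The ENI MAC/IP values are the example addresses from the figure, while the VNI and physical host IPs are invented for illustration; this is not an AWS API.

```python
# Mapping Service: (VNI, ENI IP) -> location record, populated by mapping registers.
mapping_service = {}

def mapping_register(vni, eni_ip, eni_mac, host_ip, encap="vpc-tunnel"):
    """A host registers a local EC2 instance, much like a routing update."""
    mapping_service[(vni, eni_ip)] = {"mac": eni_mac, "host_ip": host_ip,
                                      "encap": encap}

# Host-1 and Host-2 register their local ENIs (VNI and host IPs are illustrative).
mapping_register(vni=1001, eni_ip="10.10.1.11", eni_mac="cafe:0001:0001",
                 host_ip="172.16.0.11")
mapping_register(vni=1001, eni_ip="10.10.1.22", eni_mac="beef:0001:0001",
                 host_ip="172.16.0.22")

def mapping_request(vni, dst_ip):
    """Mapping request/reply: where does this destination live?"""
    return mapping_service.get((vni, dst_ip))

# Host-1 asks where 10.10.1.22 lives before encapsulating the first packet.
print(mapping_request(1001, "10.10.1.22"))
```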


Figure 2-1: VPC Control-Plane Operation: Mapping Register.

Continue reading

AWS Networking – Part VIII: AWS Network ACL (NACL)

In this section, I am going to introduce the default Network ACL for subnets in VPC NVKT-VPC-01.

Figure 1-37 shows the complete structure of our VPC NVKT-VPC-01. We have a Public subnet 10.10.0.0/24 in AZ eu-west-2c and a Private subnet 10.10.1.0/24 in AZ eu-west-2a. Both subnets are protected by the VPC’s default NACL named NWKT-NACL. The NACL allows all traffic to and from the subnets by default.
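
If you want to confirm the allow-all default rules yourself, a small boto3 sketch like the one below lists the NACL entries of the VPC; the VPC ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Look up the NACLs of the VPC and print their rules (placeholder VPC ID).
resp = ec2.describe_network_acls(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}])

for nacl in resp["NetworkAcls"]:
    for entry in nacl["Entries"]:
        direction = "egress" if entry["Egress"] else "ingress"
        print(nacl["NetworkAclId"], direction, entry["RuleNumber"],
              entry.get("CidrBlock"), entry["RuleAction"])
```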


Figure 1-37: Complete VPC Stack.

Continue reading

AWS Networking – Part VII: Create Subnet and RT Using AWS CloudFormation

In this post, we create a Subnet with a set of properties and attach it to the VPC. We also specify a Route Table, which we associate with the Subnet using a SubnetRouteTableAssociation resource.

In our YAML template (figure 1-34), we have four AWS resources (logical names in parentheses):

    1) AWS::EC2::VPC (NwktVPC)

    2) AWS::EC2::Subnet (NwktSubnet)

    3) AWS::EC2::RouteTable (NwktPUB2RouteTable)

    4) AWS::EC2::SubnetRouteTableAssociation (NwktRouteTableAssociation)

We use the Ref function to define dependencies between AWS resources when the actual AWS resource identifier is unknown. For example, the Ref function in the AWS::EC2::Subnet resource [2] refers to the AWS::EC2::VPC resource’s logical name NwktVPC (A). We have to use an intrinsic function because we don’t know which VPC identifier AWS generates for the VPC. After creating the subnet, we specify the subnet-specific Route Table [3]. First, we need to bind it to the VPC using the Ref function value NwktVPC (B). Next, we “glue” the Route Table to the Subnet using the SubnetRouteTableAssociation, where we use two Ref functions. The first one refers to the Route Table (C), and the second to the Subnet (D).
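
Figure 1-34 contains the actual YAML template. As a sketch, the same structure in CloudFormation’s JSON form (written here as a Python dict) makes the Ref dependencies [2], [3], and the association visible; the CIDR values are illustrative, not the ones used in the book.

```python
# CloudFormation template in JSON form; Ref ties resources together by logical name.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NwktVPC": {                                      # [1]
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.10.0.0/16"},  # illustrative CIDR
        },
        "NwktSubnet": {                                   # [2]
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {"Ref": "NwktVPC"},              # (A) depends on the VPC
                "CidrBlock": "10.10.2.0/24",              # illustrative CIDR
            },
        },
        "NwktPUB2RouteTable": {                           # [3]
            "Type": "AWS::EC2::RouteTable",
            "Properties": {"VpcId": {"Ref": "NwktVPC"}},  # (B) bound to the VPC
        },
        "NwktRouteTableAssociation": {                    # [4]
            "Type": "AWS::EC2::SubnetRouteTableAssociation",
            "Properties": {
                "RouteTableId": {"Ref": "NwktPUB2RouteTable"},  # (C)
                "SubnetId": {"Ref": "NwktSubnet"},              # (D)
            },
        },
    },
}
```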


Figure 1-34: Subnet Route Table.

Continue reading