Archive

Category Archives for "Networking"

Commercial quantum networks inch closer to primetime

As commercial availability of quantum computers moves closer to reality, researchers and vendors are investing in efforts to create quantum-secured networks. Quantum networks use entangled photons or other particles to ensure secure communications, but they are not, in and of themselves, used for general communication. Quantum networks are expensive and slow. And though nobody can listen in on the messages without breaking the entanglement of the photons, hackers can still try to attack the systems before the messages enter the quantum network, or after they leave it. Instead, quantum networks today are largely used for quantum key distribution (QKD), which uses quantum mechanics to secure the transmission of symmetric encryption keys. According to a June report by quantum industry analyst firm IQT Research, the worldwide market for quantum networks will near $1.5 billion in 2027 and grow to more than $8 billion by 2031. QKD will be the main revenue driver, followed by a rise in networks that use emerging quantum repeaters to connect quantum computers together, and in quantum sensor networks. To read this article in full, please click here

Cloud vs on-prem: SaaS vendor 37signals bails out of the public cloud

David Heinemeier Hansson, co-owner and CTO at SaaS vendor 37signals, is quitting the cloud and wants everyone to know about it. In a series of blog posts, Hansson has challenged the cloud business model, rebutted assumptions associated with cloud computing, and argued that the consolidation of power among hyperscalers is not necessarily a good thing. It might seem counterintuitive for a SaaS vendor to be publicly taking pot shots at the cloud and suggesting that other companies reconsider their cloud investments. Has Hansson, the creator of Ruby on Rails, gone off the rails? Hansson’s argument is simple: by pulling server workloads off the Amazon AWS infrastructure, purchasing new hardware from Dell, and running his business from a colocation facility, he will save millions of dollars. To read this article in full, please click here

Will ChatGPT Replace Stack Overflow?

TL;DR: No. You can move on.

The NANOG87 summary by John Kristoff prompted me to look at the NANOG87 presentations, and one of them discussed ChatGPT and Network Engineering (video). I couldn’t resist the clickbait ;)

Like most of the “using ChatGPT for something” articles we’re seeing these days, the presentation is a bit too positive for my taste. After all, it’s all fine and dandy to claim ChatGPT generates working router configurations and related Jinja2 templates if you know what the correct configurations should look like and can confidently say “and this is where it made a mistake” afterwards.

Nvidia announces new DPU, GPUs

Nvidia launched its GPU Technology Conference with a mix of hardware and software news, all of it centered around AI. The first big hardware announcement is the BlueField-3 network data-processing unit (DPU), designed to offload network-processing tasks from the CPU. BlueField comes from Nvidia's Mellanox acquisition and is a SmartNIC, or intelligent-networking card. BlueField-3 has double the number of Arm processor cores of the prior-generation product, as well as more accelerators in general, and can run workloads up to eight times faster than the prior generation. BlueField-3 can accelerate network workloads across the cloud and on premises for high-performance computing and AI workloads in a hybrid setting. To read this article in full, please click here

Chapter 1: Azure VM networking – Virtual Filtering Platform and Accelerated Networking

 Note! This post is under technical review.

Introduction


Virtual Filtering Platform (VFP) is Microsoft’s cloud-scale software switch operating as a virtual forwarding extension within a Hyper-V basic vSwitch. The forwarding logic of the VFP uses a layered policy model based on policy rules in Match-Action Tables (MATs). VFP operates on the data plane, while complex control plane operations are handed over to centralized control systems. The VFP includes several layers, such as the VNet, NAT, ACL, and Metering layers, each with dedicated controllers that program policy rules into the MAT using southbound APIs. The first packet of an inbound/outbound data flow is processed by VFP. This process updates match-action table entries in each layer, which are then copied into the Unified Flow Table (UFT). Subsequent packets are then switched based on the flow-based actions in the UFT. However, if the Virtual Machine is not using Accelerated Networking (AccelNet), all packets are still forwarded over the software switch, which consumes CPU cycles. Accelerated Networking reduces the host’s CPU burden and provides a higher packet rate with more predictable jitter by switching packets in the hardware NIC while still relying on VFP for traffic policy.
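The slow-path/fast-path split described above can be illustrated with a small sketch. This is a simplified model for intuition only, not Microsoft’s implementation; all names (Layer, forward, unified_flow_table) are hypothetical:

```python
# Illustrative sketch of VFP-style layered packet processing.
# First packet of a flow walks every Match-Action Table layer (slow path);
# the combined result is compiled into the Unified Flow Table, and
# subsequent packets are switched from that entry (fast path).

class Layer:
    """A Match-Action Table (MAT): ordered (match_fn, action) rules."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # list of (match_fn, action) tuples

    def process(self, metadata):
        for match, action in self.rules:
            if match(metadata):
                return action
        return None  # no rule matched in this layer


def five_tuple(pkt):
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"])


unified_flow_table = {}  # flow id -> list of (layer, action)


def forward(pkt, layers):
    ufid = five_tuple(pkt)
    if ufid in unified_flow_table:            # fast path: subsequent packets
        return unified_flow_table[ufid]
    actions = []                              # slow path: first/exception packet
    for layer in layers:
        action = layer.process(pkt)
        if action is not None:
            actions.append((layer.name, action))
    unified_flow_table[ufid] = actions        # compile the result into the UFT
    return actions


# Example: a metering layer that always matches, and an ACL allowing TCP/443.
layers = [
    Layer("metering", [(lambda m: True, "count-bytes")]),
    Layer("acl", [(lambda m: m["proto"] == "tcp" and m["dst_port"] == 443,
                   "allow")]),
]
pkt = {"src_ip": "10.0.0.4", "dst_ip": "1.2.3.4", "proto": "tcp",
       "src_port": 50123, "dst_port": 443}
first = forward(pkt, layers)    # walks every layer, populates the UFT
second = forward(pkt, layers)   # served straight from the UFT
```

The key property mirrored here is that only the first packet pays the per-layer processing cost; everything after it hits a single table lookup.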


Hyper-V Extensible Virtual Switch


Microsoft’s extensible vSwitch running on Hyper-V operates as a Network Virtualization Service Provider (NetVSP) for Virtual Machines. VMs, in turn, are Network Virtualization Service Consumers (NetVSCs). When a VM starts, it requests the Hyper-V virtualization stack to connect it to the vSwitch. The virtualization stack creates a virtual Network Interface (vNIC) for the VM and associates it with the vSwitch. The vNIC is presented to the VM as a physical network adapter. The communication channel between the VM and the vSwitch is a synthetic data path, the Virtual Machine Bus (VMBus), which provides a standardized interface for VMs to access physical resources on the host machine. It helps ensure that virtual machines have consistent performance and can access resources in a secure and isolated manner.


Virtual Filtering Platform - VFP


The Virtual Filtering Platform (VFP) is Microsoft’s cloud-scale virtual switch operating as a virtual forwarding extension within a Hyper-V basic vSwitch. VFP sits in the data path between the virtual ports facing the virtual machines and the default vPort associated with the physical NIC. VFP uses the VM’s vPort-specific layers for filtering traffic to and from the VM. A layer in the VFP is a Match-Action Table (MAT) containing policy rules programmed by independent, centralized controllers. A packet is processed through the VFP layers if it is an exception packet, i.e., there is no Unified Flow entry (UF) for it in the Unified Flow Table (UFT), or if it is the first packet of a flow (TCP SYN packet). When a Virtual Machine initiates a new connection, the first packet of the data flow is stored in the Received Queue (RxQ). The Parser component of VFP then extracts the L2 (Ethernet), L3 (IP), and L4 (Protocol) header information as metadata, which is processed through the layer policies in each VFP layer. The VFP layers involved in packet processing depend on the flow destination and the Azure services associated with the source/destination VM.

VNet-to-Internet traffic from a VM using a Public IP


The Metering layer measures traffic for billing. It is the first layer for a VM’s outgoing traffic and the last layer for incoming traffic, i.e., it processes only the original ingress/egress packets, ignoring tunnel headers and other header modifications (Azure does not charge you for the overhead bytes caused by tunnel encapsulation). Next, the ACL layer runs the metadata through the NSG policy statements. If the source/destination IP addresses (L3 header group) and the protocol and source/destination ports (L4 header group) match one of the allow policy rules, the traffic is permitted (action #1: Allow). After ACL layer processing, the routing process intercepts the metadata. Because the destination IP address in the L3 header group matches only the default route (0.0.0.0/0, next-hop Internet), the metadata is handed over to the Server Load Balancing/Network Address Translation (SLB/NAT) layer. In this example, a public IP is associated with the VM’s vNIC, so the SLB/NAT layer translates the private source IP to the public IP (action #2: Source NAT). The VNet layer is bypassed if both source and destination IP addresses are from the public IP space. Once the metadata has been processed by each layer, the results are programmed into the Unified Flow Table (UFT). Each flow is identified by a unique Unified Flow Identifier (UFID), a hash value calculated from the flow’s 5-tuple (source/destination IP, protocol, source port, destination port). The UFID is also associated with the actions Allow and Source NAT. The Header Transposition (HT) engine then takes the original packet from the RxQ and modifies its L2/L3/L4 header groups as described in the UFT: it changes the private source IP to the public IP (Modify) and moves the packet to the TxQ. Subsequent packets of the flow are modified by the HT engine based on the existing UFT entry without running their metadata through the VFP layers (the slow-path to fast-path switchover).
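The UFID computation and the Header Transposition step can be sketched as follows. This is a toy model for intuition, assuming a simple hash over the 5-tuple; the function and field names are hypothetical, and the real HT engine works on parsed L2/L3/L4 header groups rather than dictionaries:

```python
# Illustrative sketch: Unified Flow Identifier as a hash of the 5-tuple,
# and a Header Transposition step applying the UFT's actions to a packet.
import hashlib

def ufid(src_ip, dst_ip, proto, src_port, dst_port):
    """Unified Flow Identifier: a hash over the flow's 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return hashlib.sha256(key).hexdigest()[:16]

def transpose(pkt, actions):
    """Header Transposition engine: apply the flow's actions to a packet."""
    out = dict(pkt)                          # leave the original intact
    for action, params in actions:
        if action == "snat":                 # Modify: rewrite private source IP
            out["src_ip"] = params["public_ip"]
        elif action == "allow":
            pass                             # ACL verdict, no header change
    return out

uft = {}  # UFID -> action list programmed by the slow path
flow = ("10.0.0.4", "93.184.216.34", "tcp", 50123, 443)
uft[ufid(*flow)] = [("allow", {}), ("snat", {"public_ip": "20.1.2.3"})]

pkt = {"src_ip": "10.0.0.4", "dst_ip": "93.184.216.34",
       "proto": "tcp", "src_port": 50123, "dst_port": 443}
tx = transpose(pkt, uft[ufid(*flow)])  # source-NATed copy headed for the TxQ
```

Because the UFID is deterministic for a given 5-tuple, every subsequent packet of the flow lands on the same UFT entry and gets the same transposition.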

Besides the outbound flow entry, the VFP layer processing generates an inbound flow entry for the same connection, but with a reversed 5-tuple (source/destination addresses and ports in reversed order) and reversed actions (destination NAT instead of source NAT). These outbound and inbound flows are then paired and seen as a connection, enabling the Flow State Tracking process, where inactive connections can be deleted from the UFT. For example, the Flow State Machine tracks TCP RST flags. Let’s say the destination endpoint sets the TCP RST flag in the L4 header. The TCP state machine notices it and removes the inbound flow, together with its paired outbound flow, from the UFT. The TCP state machine also tracks TCP FIN/FIN-ACK flags and the TIME_WAIT state (after a TCP FIN, the connection is kept alive for at most 2 × Maximum Segment Lifetime to catch delayed or retransmitted packets).
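The pairing of outbound and inbound flows into one tracked connection can be sketched like this. Again a simplified, hypothetical model (ConnectionTable and its methods are illustrative names), showing only the RST teardown case:

```python
# Illustrative sketch of flow pairing: an outbound 5-tuple and its reversed
# inbound 5-tuple are stored as one connection, so a TCP RST seen on either
# direction tears down both entries.

def reversed_tuple(t):
    """Swap source/destination address and port to get the inbound 5-tuple."""
    src_ip, dst_ip, proto, src_port, dst_port = t
    return (dst_ip, src_ip, proto, dst_port, src_port)

class ConnectionTable:
    def __init__(self):
        self.flows = {}   # 5-tuple -> paired 5-tuple

    def add_connection(self, outbound):
        inbound = reversed_tuple(outbound)
        self.flows[outbound] = inbound
        self.flows[inbound] = outbound

    def on_tcp_rst(self, flow):
        """RST on either direction removes the flow and its pair."""
        pair = self.flows.pop(flow, None)
        if pair is not None:
            self.flows.pop(pair, None)

conns = ConnectionTable()
out_flow = ("10.0.0.4", "93.184.216.34", "tcp", 50123, 443)
conns.add_connection(out_flow)
conns.on_tcp_rst(reversed_tuple(out_flow))  # RST arrives from the remote end
```

A full model would also track the FIN/FIN-ACK exchange and keep the pair around through TIME_WAIT before deleting it, as described above.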


Intra-VNet traffic



The Metering and ACL layers on VFP process inbound/outbound flows for Intra-VNet connections in the same manner as VNet-to-Internet traffic. When the routing process notices that the destination Direct IP (DIP) address (Customer Address space) is within the VNet CIDR range, the NAT layer is bypassed, because Intra-VNet flows use private Direct IP addresses as source and destination addresses. The Host Agent, responsible for VNet layer operations, then examines the destination IP address from the L3 header group. Because this is the first packet of the flow, there is no information about the destination DIP-to-physical-host mapping (location information) in the cache table. The VNet layer is responsible for providing tunnel headers for Intra-VNet traffic, so the Host Agent requests the location information from the centralized control plane. After getting the reply, it creates a MAT entry whose action part defines the tunnel headers (push action). After the metadata is processed, the result is programmed into the Unified Flow Table. The Header Transposition engine then takes the original packet from the Received Queue, adds the tunnel header, and moves the packet to the Transmit Queue.
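The mapping lookup and the tunnel push action can be sketched as follows. This is an illustrative model only: the names (lookup_host, push_tunnel) and the flat dictionaries standing in for the control plane and the encapsulation header are hypothetical, not the actual Host Agent API:

```python
# Illustrative sketch of the VNet layer: on a cache miss, the Host Agent asks
# the centralized control plane for the DIP-to-physical-host mapping, then a
# "push" action wraps the original packet in an outer tunnel header.

control_plane = {"10.0.0.5": "198.51.100.7"}  # DIP -> physical host address
mapping_cache = {}                             # Host Agent's local cache

def lookup_host(dip):
    if dip not in mapping_cache:               # first packet: cache miss
        mapping_cache[dip] = control_plane[dip]  # query the control plane
    return mapping_cache[dip]

def push_tunnel(pkt):
    """VNet layer 'push' action: add an outer header around the packet."""
    return {
        "outer_dst": lookup_host(pkt["dst_ip"]),  # destination host, not VM
        "vni": pkt["vnet_id"],                    # identifies the tenant VNet
        "inner": pkt,                             # original packet, untouched
    }

pkt = {"src_ip": "10.0.0.4", "dst_ip": "10.0.0.5", "vnet_id": 5001}
encapped = push_tunnel(pkt)   # subsequent packets hit the cache directly
```

Once the mapping is cached and the UFT entry programmed, later packets of the flow are encapsulated without any control-plane round trip.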

Figure 1-1: Azure Host-Based SDN Building Blocks.

Continue reading

Meet Calico at KubeCon EU 2023!

KubeCon EU 2023 is happening from April 18-21 in Amsterdam. We are very excited to announce that Project Calico will be attending, so come meet us at booth #S28—we’ll be there from 10:30 am onwards!

Chat and learn

At the event, you’ll have an opportunity to meet our Project Calico team, collect cool Calico swag, and ask questions in person. Whether you’re an expert Kubernetes user or just getting started, the Project Calico community is here to provide guidance on best practices and help you get the most out of Calico. Here are some of the things you can learn at our booth:

  • Simplified networking: Calico provides a simple, easy-to-use networking solution for Kubernetes. With Calico, you can easily set up and manage your network infrastructure without worrying about complex configurations or difficult networking concepts.
  • Enhanced security: Security is a top priority for any Kubernetes and container environment, and Calico provides the tools you need to keep your applications safe. With Calico, you can enforce network policies, protect against DDoS attacks, and more.
  • Scalability: Whether you’re running a small Kubernetes cluster or a large-scale deployment, Calico can scale to meet your needs.
  • Open-source community: Built on open-source technologies, Project Calico Continue reading

Ethernet at 50: Bob Metcalfe pulls down the Turing Award

“Tickled pink” is Bob Metcalfe’s reaction to his latest accolade – the Association for Computing Machinery’s A. M. Turing Award for inventing and commercializing Ethernet. The award was announced today and will be presented at a ceremony June 10 in San Francisco.With his trademark sense of humor, Metcalfe says, “It’s a big surprise and a delight. I’ve received other awards in the past so I’m familiar with the notion that I have a new obligation to behave myself and live up to the standard of the award and be a role model based on that.”The award carries a $1 million prize. “My wife suggests I spend it on her,” Metcalfe quipped, before adding that he hasn’t worked out the details but will probably pour most of it into his family foundation (after he fills up his boat with diesel fuel).To read this article in full, please click here

Hiding Behind MASQUEs

Privacy was a difficult topic for Internet protocols at the outset of the Internet. Things took a very different turn some 10 years ago, following the disclosures of mass surveillance programs in the US, when the IETF declared that pervasive monitoring of users constituted an attack and that Internet protocols needed to take measures to contain the way in which data was accessed in the network. The latest offerings in the area of improved privacy include Oblivious HTTP and MASQUE. Let’s look at these approaches and the way they attempt to contain the potential leakage of data.

Arista embraces routing

Arista Networks has taken its first direct step into WAN routing with new software, hardware and services: an enterprise-class system designed to link critical resources with core data-center and campus networks. The package, called the Arista WAN Routing System, ties together three new components: enterprise-class routing hardware, software for its CloudVision management platform called Pathfinder, and the ability to set up neutral peering points called Transit Hubs. This trio enables setting up carrier-neutral and cloud-adjacent facilities to provide self-healing and path-optimization links across core, aggregation, and cloud networking interconnects, according to Doug Gourlay, vice president and general manager of Arista’s Cloud Networking Software group, in a blog about the new package. To read this article in full, please click here

Oracle ties up with Nvidia to offer AI supercomputing service

Oracle is partnering with Nvidia to offer a new AI supercomputing service, dubbed DGX Cloud and available immediately, using Oracle Cloud Infrastructure's Supercluster. “OCI has excellent performance. They have a two-tier computing fabric and management network," Nvidia CEO Jensen Huang said during his keynote at the company’s annual GTC conference on Tuesday. Nvidia is working with other cloud providers to provide similar services, but Oracle is its first partner to go live with an offering. "Nvidia's CX7 along with Oracle’s non-blocking remote direct access memory (RDMA) forms the computing fabric," Huang said. "And Bluefield 3 will be the infrastructure processor for the management network. The combination is a state-of-the-art DGX AI supercomputer that can be offered as a multitenant cloud service.” To read this article in full, please click here
