Today on the Tech Bytes podcast we’re talking about cloud migration and operating in a multi-cloud environment. We’re sponsored by VMware and we’re speaking with Expedient, a VMware partner. Expedient is also a cloud service provider and runs 14 data centers across the US. Our guest is Bryan Smith, CEO at Expedient.
Protocol Buffers are a popular choice for serializing structured data due to their compact size, fast processing speed, language independence, and compatibility. Alternatives include Cap'n Proto, CBOR, and Avro.
Usually, data structures are described in a proto definition file (.proto). The protoc compiler and a language-specific plugin convert it into code:
```
$ head flow-4.proto
syntax = "proto3";
package decoder;
option go_package = "akvorado/inlet/flow/decoder";

message FlowMessagev4 {
  uint64 TimeReceived = 2;
  uint32 SequenceNum = 3;
  uint64 SamplingRate = 4;
  uint32 FlowDirection = 5;
$ protoc -I=. --plugin=protoc-gen-go --go_out=module=akvorado:. flow-4.proto
$ head inlet/flow/decoder/flow-4.pb.go
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.28.0
// 	protoc        v3.21.12
// source: inlet/flow/data/schemas/flow-4.proto

package decoder

import (
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
```
Akvorado collects network flows using IPFIX or sFlow, decodes them with GoFlow2, encodes them to Protocol Buffers, and sends them to Kafka to be stored in a ClickHouse database. Collecting a new field, such as source and destination MAC addresses, requires modifications in multiple places, including the proto definition file and the ClickHouse migration code.
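To make that pipeline concrete, here is a minimal sketch (not Akvorado's actual code) of the encode-and-publish step: it marshals the generated decoder.FlowMessagev4 type from the protoc run above and hands the bytes to Kafka. The broker address, topic name, and the choice of github.com/segmentio/kafka-go as a client are assumptions for illustration.

```go
package main

import (
	"context"
	"time"

	kafka "github.com/segmentio/kafka-go"
	"google.golang.org/protobuf/proto"

	"akvorado/inlet/flow/decoder"
)

func main() {
	// Build a flow message using the generated Go bindings.
	flow := &decoder.FlowMessagev4{
		TimeReceived: uint64(time.Now().Unix()),
		SequenceNum:  42,
		SamplingRate: 1000,
	}
	// Serialize to the compact Protocol Buffers wire format.
	payload, err := proto.Marshal(flow)
	if err != nil {
		panic(err)
	}

	// Publish the encoded flow to a Kafka topic.
	w := &kafka.Writer{
		Addr:  kafka.TCP("localhost:9092"), // hypothetical broker address
		Topic: "flows",                     // hypothetical topic name
	}
	defer w.Close()
	if err := w.WriteMessages(context.Background(),
		kafka.Message{Value: payload}); err != nil {
		panic(err)
	}
}
```

ClickHouse can then consume the topic with its Kafka table engine and decode the payload using the same proto definition, which is why adding a field touches both the proto file and the migration code.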
Maybe it’s just me, but I always need a few extra devices in my virtual labs to have endpoints I could ping to/from or to have external routing information sources. We used VRF- and VLAN tricks in the days when we had to use physical devices to carve out a dozen hosts out of a single Cisco 2501, and life became much easier when you could spin up a few additional virtual machines in a virtual lab instead.
Unfortunately, those virtual machines eat precious resources. For example, netlab allocates 1GB to every Linux virtual machine when you only need bash and ping. Wouldn't it be great if you could start that ping in a busybox container instead?
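As a quick standalone illustration (outside netlab, with a hypothetical target address), plain Docker can run ping from the few-megabyte busybox image instead of a 1GB virtual machine:

```
$ docker run --rm busybox ping -c 3 192.0.2.1
```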
This chapter introduces the Azure Virtual WAN (vWAN) service. It offers a single deployment, management, and monitoring pane for connectivity services such as Inter-VNet, Site-to-Site VPN, and ExpressRoute. In this chapter, we focus on S2S VPN and VNet connections. The Site-to-Site VPN solution in vWAN differs from the traditional model, where we create resources as individual components. In this solution, we only deploy a vWAN resource and manage everything else through its management view.

Figure 11-1 illustrates our example topology and deployment order. The first step is to implement a vWAN resource. Then we deploy a vHub, an Azure-managed VNet to which we assign a CIDR, just as we do with a traditional VNet. We can deploy a vHub as an empty VNet without associating any connections. The vHub deployment process launches a pair of redundant routers, which exchange reachability information with the VNet Gateway router and VGW instances using BGP.

We intend to allow Inter-VNet data flows between vnet-swe1 and vnet-swe2, as well as Branch-to-VNet traffic. For Site-to-Site VPN, we deploy a VPN Gateway (VGW) into the vHub. The VGW started in the vHub creates two instances, instance0 and instance1, in active/active mode. We don't deploy a GatewaySubnet for the VGW.
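A rough sketch of that deployment order with the Azure CLI follows; the resource group, names, region, and address prefix are hypothetical, and the vWAN commands come from the virtual-wan CLI extension:

```
$ az network vwan create --resource-group rg-vwan --name vwan-demo --location westeurope
$ az network vhub create --resource-group rg-vwan --vwan vwan-demo --name vhub-demo \
    --address-prefix 10.100.0.0/24 --location westeurope
$ az network vpn-gateway create --resource-group rg-vwan --name vgw-demo --vhub vhub-demo \
    --location westeurope
```

Note that, unlike the traditional model, there is no GatewaySubnet to create: the vHub's address space and routers are managed by Azure.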
This blog post was written in collaboration with:
Aloys Augustin, Nathan Skrzypczak, Hedi Bouattour, Onong Tayeng, and Jerome Tollet at Cisco. Aloys and Nathan are part of a team of external contributors to Calico Open Source that has been working on an integration between Calico Open Source and the FD.io VPP dataplane technology for the last couple of years.
Mrittika Ganguli, principal engineer and architect in Intel's Network and Edge (NEX) group. Ganguli leads a team with Qian Q Xu, Ping Yu, and Xiaobing Qian to enhance the performance of Calico and VPP through software and hardware acceleration.
This blog will cover what the Calico/VPP dataplane is and demonstrate the performance and flexibility advantages of the VPP dataplane through a benchmarking setup. By the end of this blog post, you will have a clear understanding of how the Calico/VPP dataplane, with the help of DPDK and accelerated memif interfaces, can provide high-throughput, low-latency Kubernetes cluster networking for your environment. Additionally, you will learn how these technologies can reduce CPU utilization by transferring packets directly through shared memory between workloads, making them an efficient solution for building distributed network functions with lightning-fast speeds.
Robert Graham published a blog post describing how his IDS/IPS system handled 2 Mpps on a Pentium III CPU 20 years ago… and yet some people keep claiming that “Driving a 100 Gbps network at 80% utilization in both directions consumes 10–20 cores just in the networking stack” (in 2023). I guess a suboptimal-enough implementation can still consume all the CPU cycles it can get and then some.