This blog post describes yet another bizarre behavior discovered during the netlab integration testing.
It started innocently enough: I was working on the VRRP integration test and wanted to use Arista EOS as the second (probe) device in the VRRP cluster because it produces nice JSON-formatted results that are easy to use in validation tests.
Everything looked great until I ran the test on all platforms on which netlab configures VRRP, and all of them passed apart from Arista EOS (that was before we figured out how Sturgeon’s Law applies to VRRPv3) – a “That’s funny” moment that was directly responsible for me wasting a few hours chasing white rabbits down this trail.
It all started with an innocuous article describing the MTU basics. As the real purpose of the MTU is to prevent packet drops due to fixed-size receiver buffers, and I spend most of my time in virtual labs, I wanted to check how various virtual network devices react to incoming oversized packets.
As the first step, I created a simple netlab topology in which a single link had a slightly larger-than-usual MTU… and then all hell broke loose.
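For reference, such a topology needs nothing more than an mtu attribute on a single link. The sketch below is purely illustrative (device type, node names, and MTU value are placeholders, not the exact topology used in the tests):

  defaults.device: eos

  nodes: [ r1, r2 ]

  links:
  - r1-r2              # regular link using the default MTU
  - r1:
    r2:
    mtu: 1600          # the one link with a larger-than-usual MTU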
When I announced the Stub Networks in Virtual Labs blog post on LinkedIn, I claimed it was the last chapter in the “links in virtual labs” saga. I was wrong; here comes the fourth part of the virtual links trilogy – capturing “on the wire” traffic in virtual networking labs.
While network devices provide traffic capture capabilities (usually tcpdump in disguise generating a .pcap file), it’s often better to capture the traffic outside of the device to see what the root cause of the problems you’re experiencing might be.
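As a trivial example, in a KVM/libvirt lab you could point tcpdump at the host-side tap interface of the interesting link (the interface name below is a placeholder; find the right one with ip link or virsh domiflist):

  sudo tcpdump -i vnet0 -w lab-link.pcap     # save the raw packets for later analysis
  sudo tcpdump -i vnet0 -e -nn               # or decode on the fly, including L2 headers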
The previous blog posts described how virtualization products create LAN segments and point-to-point links.
However, sometimes we need stub segments – segments connected to a single router or switch – because we don’t want to waste resources creating hosts attached to a network device, but would still prefer a more realistic mechanism than static routes to inject IP subnets into routing protocols.
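netlab makes that trivial: a link with a single attached node becomes a stub segment with its own IP prefix, which is then advertised by whatever routing protocol the node runs. A minimal sketch (device and routing protocol are arbitrary choices):

  defaults.device: eos
  module: [ ospf ]

  nodes: [ r1 ]

  links:
  - r1                 # single-node link = stub segment advertised into OSPF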
In the previous blog post, I described the usual mechanisms used to connect virtual machines or containers in a virtual lab, and the drawbacks of using Linux bridges to connect virtual network devices.
In this blog post, we’ll see how KVM/QEMU/libvirt/Vagrant use UDP tunnels to connect virtual machines, and how containerlab creates point-to-point vEth links between Linux containers.
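For illustration, here’s roughly how a point-to-point vEth link between two containers could be stitched together by hand (containerlab automates all of this; the container and interface names below are made up):

  # create a veth pair on the host
  sudo ip link add r1-eth1 type veth peer name r2-eth1

  # move one end into the first container's network namespace, rename it, and bring it up
  R1_PID=$(docker inspect -f '{{.State.Pid}}' clab-lab-r1)
  sudo ip link set r1-eth1 netns "$R1_PID"
  sudo nsenter -t "$R1_PID" -n ip link set r1-eth1 name eth1
  sudo nsenter -t "$R1_PID" -n ip link set eth1 up

  # repeat for the other end (r2-eth1) in the second container's namespace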
Once a virtual machine running a network operating system boots, you’d expect its data-plane interfaces to be operational, right? Some vendors disagree. It takes over a minute for some network operating systems to figure out they have this thing called interfaces.1
I would love to figure out what takes them so long (a minute is an eternity on modern CPUs), but I guess we’ll never know.
netlab uses two device provisioning mechanisms: it can start virtual machines with Vagrant or containers with containerlab. Some of those containers might use KVM/QEMU to run a hidden virtual machine (see also: RFC 1925 rule 6a).
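The provider is selected in the lab topology, and recent netlab releases can even combine them in a single lab. A rough sketch (assuming your netlab release supports mixed libvirt/clab topologies; device names are placeholders):

  provider: libvirt        # default: Vagrant/libvirt virtual machines

  nodes:
    r1:
      device: eos          # runs as a VM
    r2:
      device: frr
      provider: clab       # runs as a containerlab container instead

  links: [ r1-r2 ]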
There are three major ways to connect network devices in the physical world.
Implementing these connections in virtual labs is a bit harder than one might think, as all virtualization solutions assume you plan to run virtual servers connected to Ethernet segments.
We had all been wondering what VMware would look like when it became part of Broadcom’s massive universe following the semiconductor giant’s $69 billion acquisition of the virtualization juggernaut. …
VMware Wants To Redefine Private Cloud With VCF 9 was written by Jeffrey Burt at The Next Platform.
Broadcom’s $69 billion acquisition of virtualization stalwart VMware was not an easy proposition. …
Cloud Foundation Updates Reflect The New VMware By Broadcom was written by Jeffrey Burt at The Next Platform.
Some networking vendors realized that one way to gain mindshare is to make their network operating systems available as free-to-download containers or virtual machines. That’s the right way to go; I love their efforts and point out who went down that path whenever possible1 (as well as others like Cisco who try to make our lives miserable).
However, those virtual machines better work out of the box, or you’ll get frustrated engineers who will give up and never touch your warez again, or as someone said in a LinkedIn comment to my blog post describing how Junos vPTX consistently rejects its DHCP-assigned IP address: “If I had encountered an issue like this before seeing Ivan’s post, I would have definitely concluded that I am doing it wrong.”2
I have built my lab for VXLAN on the Nexus9300v platform. Since I have a leaf and spine topology, there are ECMP routes towards the spines for the other leafs’ loopbacks. When performing labs, though, I noticed that I didn’t have any ECMP routes in the forwarding table (FIB). They are in the RIB, though:
Leaf1# show ip route 203.0.113.4
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

203.0.113.4/32, ubest/mbest: 2/0
    *via 192.0.2.1, Eth1/1, [110/81], 1w0d, ospf-UNDERLAY, intra
    *via 192.0.2.2, Eth1/2, [110/81], 1w0d, ospf-UNDERLAY, intra
There is only one entry in the FIB, though:
Leaf1# show forwarding route 203.0.113.4?
  A.B.C.D      Display single longest match route
  A.B.C.D/LEN  Display single exact match route

Leaf1# show forwarding route 203.0.113.4/32

slot  1
=======

IPv4 routes for table default/base

------------------+-----------------------------------------+----------------------+-----------------+-----------------
Prefix            | Next-hop                                 | Interface            | Labels          | Partial Install
------------------+-----------------------------------------+----------------------+-----------------+-----------------
203.0.113.4/32    192.0.2.1                                  Ethernet1/1
This seemed strange to me, and I was concerned that maybe something was …
Our choice of words in founding this publication nearly a decade ago was no accident. …
The post Everybody Tries To Leverage VMware To Their Advantage first appeared on The Next Platform.
Everybody Tries To Leverage VMware To Their Advantage was written by Timothy Prickett Morgan at The Next Platform.
Xcitium is an Endpoint Detection and Response (EDR) vendor that sells client software that uses multiple methods to protect endpoints. Methods include anti-virus, a host firewall, a Host Intrusion Protection System (HIPS), and a technique it calls ZeroDwell Containment. The first three components are straightforward. The AV software relies on signatures to detect known malware. […]
The post Xcitium’s Endpoint Virtual Jail Aims To Lock Up Mystery Malware appeared first on Packet Pushers.
The hype generated by the “VMware supports DPU offload” announcement already resulted in fascinating misunderstandings. Here’s what I got from a System Architect:
We are dealing with an interesting scenario where a customer had limited data center space, but applications demand more resources. We are evaluating whether we could offload ESXi processing to DPUs (Pensando) to use existing servers as bare-metal servers. Would it be a use case for DPU?
First of all, congratulations to whichever vendor marketer managed to put that guy in that state of mind. Well done, sir, well done. Now for a dose of reality.
After VMware launched DPU-based acceleration for VMware NSX, marketing-focused websites frantically started discussing the benefits of DPUs. Although I’ve been writing about SmartNICs and DPUs for years, it’s time for another closer look at the emperor’s clothes.
DPU (Data Processing Unit) is a fancier name for a network adapter formerly known as SmartNIC – a server repackaged into an interface card form factor. We’ve had them for decades (does anyone remember iSCSI offload adapters?).