In the previous blog post, I described the usual mechanisms used to connect virtual machines or containers in a virtual lab, and the drawbacks of using Linux bridges to connect virtual network devices.
In this blog post, we’ll see how KVM/QEMU/libvirt/Vagrant use UDP tunnels to connect virtual machines, and how containerlab creates point-to-point vEth links between Linux containers.
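To make the second mechanism concrete, here's a minimal containerlab topology that creates one point-to-point vEth link; the node names and the Linux container image are illustrative assumptions, not values from the post:

    name: p2p-demo
    topology:
      nodes:
        r1:
          kind: linux
          image: alpine:latest    # placeholder image
        r2:
          kind: linux
          image: alpine:latest
      links:
        # containerlab builds a vEth pair and moves one end into each container
        - endpoints: ["r1:eth1", "r2:eth1"]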
Once a virtual machine running a network operating system boots, you’d expect its data-plane interfaces to be operational, right? Some vendors disagree. It takes over a minute for some network operating systems to figure out they have this thing called interfaces.1
I would love to figure out what takes them so long (a minute is an eternity on modern CPUs), but I guess we’ll never know.
netlab uses two device provisioning mechanisms: it can start virtual machines with Vagrant or containers with containerlab. Some of those containers might use KVM/QEMU to run a hidden virtual machine (see also: RFC 1925 rule 6a).
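As an illustration, a single netlab topology can mix both provisioning mechanisms; the device choice below is an arbitrary example, and the per-node provider setting is what pulls one node into containerlab while the rest of the lab runs as Vagrant/libvirt virtual machines:

    provider: libvirt        # lab default: virtual machines started with Vagrant
    defaults.device: eos
    nodes:
      r1:
      r2:
        provider: clab       # this node runs as a container under containerlab
    links: [ r1-r2 ]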
There are three major ways to connect network devices in the physical world:
Implementing these connections in virtual labs is a bit harder than one might think, as all virtualization solutions assume you plan to run virtual servers connected to Ethernet segments.
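For example, this is roughly what a UDP-tunnel-based point-to-point link looks like in a libvirt domain definition; the addresses and port numbers are made up, and the only real requirement is that each VM's source port matches the other VM's local (listening) port:

    <interface type='udp'>
      <!-- outgoing packets are sent to the other VM's listening socket -->
      <source address='127.0.0.1' port='10001'>
        <!-- this VM listens here for traffic from the other end -->
        <local address='127.0.0.1' port='10002'/>
      </source>
      <model type='virtio'/>
    </interface>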
We had all been wondering what VMware would look like when it became part of Broadcom’s massive universe following the semiconductor giant’s $69 billion acquisition of the virtualization juggernaut. …
VMware Wants To Redefine Private Cloud With VCF 9 was written by Jeffrey Burt at The Next Platform.
Broadcom’s $69 billion acquisition of virtualization stalwart VMware was not an easy proposition. …
Cloud Foundation Updates Reflect The New VMware By Broadcom was written by Jeffrey Burt at The Next Platform.
Some networking vendors realized that one way to gain mindshare is to make their network operating systems available as free-to-download containers or virtual machines. That’s the right way to go; I love their efforts and point out who went down that path whenever possible1 (as well as others like Cisco who try to make our lives miserable).
However, those virtual machines better work out of the box, or you’ll get frustrated engineers who will give up and never touch your warez again, or as someone said in a LinkedIn comment to my blog post describing how Junos vPTX consistently rejects its DHCP-assigned IP address: “If I had encountered an issue like this before seeing Ivan’s post, I would have definitely concluded that I am doing it wrong.”2
I have built my lab for VXLAN on the Nexus9300v platform. Since I have a leaf-and-spine topology, there are ECMP routes towards the spines for the other leafs’ loopbacks. When working through the labs, though, I noticed that I didn’t have any ECMP routes in the forwarding table (FIB). They do show up in the RIB:
    Leaf1# show ip route 203.0.113.4
    IP Route Table for VRF "default"
    '*' denotes best ucast next-hop
    '**' denotes best mcast next-hop
    '[x/y]' denotes [preference/metric]
    '%<string>' in via output denotes VRF <string>

    203.0.113.4/32, ubest/mbest: 2/0
        *via 192.0.2.1, Eth1/1, [110/81], 1w0d, ospf-UNDERLAY, intra
        *via 192.0.2.2, Eth1/2, [110/81], 1w0d, ospf-UNDERLAY, intra
There is only one entry in the FIB, though:
    Leaf1# show forwarding route 203.0.113.4?
      A.B.C.D      Display single longest match route
      A.B.C.D/LEN  Display single exact match route

    Leaf1# show forwarding route 203.0.113.4/32

    slot  1
    =======

    IPv4 routes for table default/base

    ------------------+-----------------------------------------+----------------------+-----------------+-----------------
    Prefix            | Next-hop                                | Interface            | Labels          | Partial Install
    ------------------+-----------------------------------------+----------------------+-----------------+-----------------
    203.0.113.4/32     192.0.2.1                                 Ethernet1/1
This seemed strange to me, and I was concerned that maybe something was… Continue reading
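The rest of the investigation is behind the truncated link above. One handy companion command in any “two paths in the RIB, one in the FIB” situation is asking NX-OS which of the ECMP members a specific flow would hash to; the source address below is an arbitrary example, and this is a generic verification step rather than the post’s actual resolution:

    Leaf1# show routing hash 192.0.2.11 203.0.113.4

The output names the next hop the forwarding path would pick for that flow, which makes it easy to compare the RIB’s view of ECMP with what the forwarding table is actually doing.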
Our choice of words in founding this publication nearly a decade ago was no accident. …
The post Everybody Tries To Leverage VMware To Their Advantage first appeared on The Next Platform.
Everybody Tries To Leverage VMware To Their Advantage was written by Timothy Prickett Morgan at The Next Platform.
Xcitium is an Endpoint Detection and Response (EDR) vendor that sells client software that uses multiple methods to protect endpoints. Methods include anti-virus, a host firewall, a Host Intrusion Protection System (HIPS), and a technique it calls ZeroDwell Containment. The first three components are straightforward. The AV software relies on signatures to detect known malware. […]
The post Xcitium’s Endpoint Virtual Jail Aims To Lock Up Mystery Malware appeared first on Packet Pushers.
The hype generated by the “VMware supports DPU offload” announcement already resulted in fascinating misunderstandings. Here’s what I got from a System Architect:
We are dealing with an interesting scenario where a customer had limited data center space, but applications demand more resources. We are evaluating whether we could offload ESXi processing to DPUs (Pensando) to use existing servers as bare-metal servers. Would it be a use case for DPU?
First of all, congratulations to whichever vendor marketer managed to put that guy in that state of mind. Well done, sir, well done. Now for a dose of reality.
After VMware launched DPU-based acceleration for VMware NSX, marketing-focused websites frantically started discussing the benefits of DPUs. Although I’ve been writing about SmartNICs and DPUs for years, it’s time for another closer look at the emperor’s clothes.
DPU (Data Processing Unit) is a fancier name for a network adapter formerly known as SmartNIC – a server repackaged into an interface-card form factor. We’ve had them for decades (anyone remember iSCSI offload adapters?)
Sometimes a painfully troublesome networking problem can have a complicated and brain-twisting root cause, one which you dread having to explain to peers and managers. However, sometimes the root cause is dead simple and makes you feel silly for how long it took you to find it. Today, I had one of the latter and […]
The post Linux Bonding, LLDP, and MAC Flapping appeared first on Packet Pushers.
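The root cause itself is behind the link, but the ingredients from the title are easy to assemble; here’s a minimal sketch with assumed interface names. One relevant detail: with the default fail_over_mac setting, every member of an active-backup bond inherits the bond’s MAC address, so anything that transmits on the standby member (an LLDP daemon, for instance) can make the upstream switches see the same MAC on two ports:

    # Assumed NIC names; build an active-backup bond out of two ports
    ip link add bond0 type bond mode active-backup
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up

    # Check which member is active and which MACs the members ended up with
    cat /proc/net/bonding/bond0
    ip -br link show eth0 eth1 bond0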
VMware announced an ambitious project, VMware Aria, at VMware Explore 2022. Aria offers multi-cloud management for enterprises that use services in more than one public cloud. The speed and sprawl of cloud adoption have become a problem for enterprises. Companies are having a hard time containing costs, monitoring performance, and enforcing security and compliance policies. […]
The post VMware Aria: If You Can’t Beat Public Cloud, Maybe You Can Manage It appeared first on Packet Pushers.
For the last four years I’ve worked on Network Functions Virtualization (NFV) projects at a couple of European Cloud Service Providers (CSPs). The implementation of these projects has proven to be messy (messiness is part of human nature, after all), and I wanted to share some of the lessons I’ve learned.
The post Human Challenges Of Network Virtualization – Lessons Learned From NFV Projects appeared first on Packet Pushers.
Hello my friend,
We use Proxmox in our Karneliuk Lab Cloud (KLC), which is the driving power behind our Network Automation and Nornir trainings. It allows us to run the vast majority of VMs with network operating systems out of the box: Cisco IOS or Cisco IOS XR, Arista EOS, Nokia SR OS, Nvidia Cumulus, and many others. However, when we recently needed to emulate a customer’s data centre built with Cisco Nexus 9000 switches, it transpired that this is not that straightforward, and we had to spend quite some time to find a working solution. Moreover, we figured out that there are no public guides explaining how to do it. As such, we decided to create this blog post.
No part of this blogpost may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise, for commercial purposes without the prior permission of the author.
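To give a sense of the moving parts before diving in, here is a rough sketch of creating such a VM with Proxmox’s qm CLI. The VM ID, storage name, image filename, and resource sizes are illustrative assumptions rather than our tested recipe; the general shape (UEFI boot, a SATA system disk, E1000 NICs, serial console) reflects what Nexus 9300v images commonly expect on KVM-based hypervisors:

    # Illustrative values throughout (VM ID 900, storage 'local-lvm')
    qm create 900 --name n9kv --memory 10240 --cores 4 --cpu host \
      --machine q35 --bios ovmf --serial0 socket --vga serial0
    qm set 900 --efidisk0 local-lvm:1,format=raw
    qm importdisk 900 nexus9300v.qcow2 local-lvm
    qm set 900 --sata0 local-lvm:vm-900-disk-0 --boot order=sata0
    qm set 900 --net0 e1000,bridge=vmbr0    # mgmt0 maps to the first NIC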
A lot of network automation trainings worldwide imply that a student has to build a lab on his/her own. Such an approach, obviously, is the easiest for… Continue reading