We all had been wondering what VMware would look like when it became part of Broadcom’s massive universe following the semiconductor giant’s $69 billion acquisition of the virtualization juggernaut. …
VMware Wants To Redefine Private Cloud With VCF 9 was written by Jeffrey Burt at The Next Platform.
Broadcom’s $69 billion acquisition of virtualization stalwart VMware was not an easy proposition. …
Cloud Foundation Updates Reflect The New VMware By Broadcom was written by Jeffrey Burt at The Next Platform.
Some networking vendors realized that one way to gain mindshare is to make their network operating systems available as free-to-download containers or virtual machines. That’s the right way to go; I love their efforts and point out who went down that path whenever possible (as well as others, like Cisco, who try to make our lives miserable).
However, those virtual machines better work out of the box, or you’ll get frustrated engineers who will give up and never touch your warez again, or as someone said in a LinkedIn comment to my blog post describing how Junos vPTX consistently rejects its DHCP-assigned IP address: “If I had encountered an issue like this before seeing Ivan’s post, I would have definitely concluded that I am doing it wrong.”
I have built my lab for VXLAN on the Nexus9300v platform. Since I have a leaf and spine topology, there are ECMP routes towards the spines for the other leafs’ loopbacks. When performing labs though, I noticed that I didn’t have any ECMP routes in the forwarding table (FIB). They are in the RIB, though:
Leaf1# show ip route 203.0.113.4
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

203.0.113.4/32, ubest/mbest: 2/0
    *via 192.0.2.1, Eth1/1, [110/81], 1w0d, ospf-UNDERLAY, intra
    *via 192.0.2.2, Eth1/2, [110/81], 1w0d, ospf-UNDERLAY, intra
There is only one entry in the FIB, though:
Leaf1# show forwarding route 203.0.113.4?
  A.B.C.D      Display single longest match route
  A.B.C.D/LEN  Display single exact match route
Leaf1# show forwarding route 203.0.113.4/32

slot  1
=======

IPv4 routes for table default/base

------------------+-----------------------------------------+----------------------+-----------------+-----------------
Prefix            | Next-hop                                 | Interface            | Labels          | Partial Install
------------------+-----------------------------------------+----------------------+-----------------+-----------------
203.0.113.4/32    192.0.2.1                                  Ethernet1/1
This seemed strange to me and I was concerned that maybe something was Continue reading
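The excerpt cuts off before the resolution, but a quick way to spot this kind of RIB/FIB mismatch across many prefixes is to compare the next-hop sets reported by the two commands. The following is a hypothetical helper script (not from the original post) that assumes the outputs of show ip route and show forwarding route have been saved to text files:

import re
from pathlib import Path

def rib_next_hops(text):
    # 'show ip route' lists each ECMP path on its own '*via <ip>, <intf>, ...' line
    return set(re.findall(r"\*via (\d+\.\d+\.\d+\.\d+)", text))

def fib_next_hops(text):
    # 'show forwarding route' table rows contain '<next-hop>  <interface>' pairs
    return set(re.findall(r"(\d+\.\d+\.\d+\.\d+)\s+(?:Ethernet|Eth|Vlan|Po)\S*", text))

rib = rib_next_hops(Path("show_ip_route.txt").read_text())
fib = fib_next_hops(Path("show_forwarding_route.txt").read_text())

missing = rib - fib
if missing:
    print("Next hops in the RIB but not programmed in the FIB:", sorted(missing))
else:
    print("RIB and FIB next-hop sets match")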
Our choice of words in founding this publication nearly a decade ago was no accident. …
The post Everybody Tries To Leverage VMware To Their Advantage first appeared on The Next Platform.
Everybody Tries To Leverage VMware To Their Advantage was written by Timothy Prickett Morgan at The Next Platform.
Xcitium is an Endpoint Detection and Response (EDR) vendor that sells client software that uses multiple methods to protect endpoints. Methods include anti-virus, a host firewall, a Host Intrusion Protection System (HIPS), and a technique it calls ZeroDwell Containment. The first three components are straightforward. The AV software relies on signatures to detect known malware. […]
The post Xcitium’s Endpoint Virtual Jail Aims To Lock Up Mystery Malware appeared first on Packet Pushers.
The hype generated by the “VMware supports DPU offload” announcement has already resulted in fascinating misunderstandings. Here’s what I got from a System Architect:
We are dealing with an interesting scenario where a customer had limited data center space, but applications demand more resources. We are evaluating whether we could offload ESXi processing to DPUs (Pensando) to use existing servers as bare-metal servers. Would it be a use case for DPU?
First of all, congratulations to whichever vendor marketer managed to put that guy in that state of mind. Well done, sir, well done. Now for a dose of reality.
After VMware launched DPU-based acceleration for VMware NSX, marketing-focused websites frantically started discussing the benefits of DPUs. Although I’ve been writing about SmartNICs and DPUs for years, it’s time for another closer look at the emperor’s clothes.
DPU (Data Processing Unit) is a fancier name for the network adapter formerly known as a SmartNIC – a server repackaged into an interface-card form factor. We’ve had them for decades (does anyone remember iSCSI offload adapters?).
Sometimes a painfully troublesome networking problem can have a complicated and brain-twisting root cause, one which you dread having to explain to peers and managers. However, sometimes the root cause is dead simple and makes you feel silly for how long it took you to find it. Today, I had one of the latter and […]
The post Linux Bonding, LLDP, and MAC Flapping appeared first on Packet Pushers.
VMware announced an ambitious project, VMware Aria, at VMware Explore 2022. Aria offers multi-cloud management for enterprises that use services in more than one public cloud. The speed and sprawl of cloud adoption has become a problem for enterprises. Companies are having a hard time containing costs, monitoring performance, and enforcing security and compliance policies. […]
The post VMware Aria: If You Can’t Beat Public Cloud, Maybe You Can Manage It appeared first on Packet Pushers.
For the last four years I’ve worked on Network Functions Virtualization (NFV) projects at a couple of European Cloud Service Providers (CSPs). The implementation of these projects has proven to be messy (messiness is part of human nature, after all), and I wanted to share some of the lessons I’ve learned.
The post Human Challenges Of Network Virtualization – Lessons Learned From NFV Projects appeared first on Packet Pushers.
Hello my friend,
We use Proxmox in our Karneliuk Lab Cloud (KLC), which is the driving power behind our Network Automation and Nornir trainings. It allows us to run, out of the box, the vast majority of VMs with network operating systems: Cisco IOS or Cisco IOS XR, Arista EOS, Nokia SR OS, Nvidia Cumulus, and many others. However, when we recently needed to emulate a customer’s data centre, which is built using Cisco Nexus 9000 switches, it turned out that this is not that straightforward, and we had to spend quite some time finding a working solution. Moreover, we figured out that there are no public guides explaining how to do it. As such, we decided to create this blog post.
No part of this blogpost could be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise, for commercial purposes without the prior permission of the author.
A lot of network automation trainings worldwide assume that students have to build their own labs. Such an approach, obviously, is the easiest for Continue reading
If you have ever used Proxmox, you know it’s a capable and robust open-source hypervisor. When coupled with Ceph, the two can provide a powerful hyperconverged infrastructure (HCI) platform, rivaling mainstream closed-source solutions like those from Dell, Nutanix, VMware, etc., and all based on free (paid support available) and open-source software. The distributed nature of HCI […]
The post Proxmox/Ceph – Full Mesh HCI Cluster w/ Dynamic Routing appeared first on Packet Pushers.
On March 30th, 2022, AWS announced automatic recovery of EC2 instances. Does that mean that AWS got feature parity with VMware High Availability, or that VMware got it right from the very start? No and no.
Reading the AWS documentation (as opposed to the feature announcement) quickly reveals a caveat or two. The automatic recovery is performed if an instance becomes impaired because of an underlying hardware failure or a problem that requires AWS involvement to repair.
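Before this simplified automatic behaviour existed (and still today, for cases it does not cover), recovery was triggered explicitly by a CloudWatch alarm on the StatusCheckFailed_System metric with the EC2 recover action. The following is a minimal sketch of that alarm using boto3; the region and instance ID are placeholders:

import boto3

REGION = "eu-west-1"                 # placeholder region
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

cloudwatch = boto3.client("cloudwatch", region_name=REGION)

# Alarm on the system status check and attach the EC2 'recover' action,
# which relaunches the instance on healthy underlying hardware.
cloudwatch.put_metric_alarm(
    AlarmName=f"recover-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[f"arn:aws:automate:{REGION}:ec2:recover"],
)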
Virtualization has been an engine of efficiency in the IT industry over the past two decades, decoupling workloads from the underlying hardware and thus allowing multiple workloads to be consolidated into a single physical system as well as moved around relatively easily with live migration of virtual machines. …
Will Open Compute Backing Drive SIOV Adoption? was written by Daniel Robinson at The Next Platform.
Iwan Rahabok’s open-source VMware Operations Guide is now also available in Markdown-on-GitHub format. Networking engineers supporting vSphere/NSX infrastructure might be particularly interested in the Network Metrics chapter.
Got this question from one of my readers:
When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere or KVM or Openstack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?
Short answer: you don’t.
Kubernetes was designed in a way that made worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart. From the purely academic perspective, there’s no reason to migrate VMs running Kubernetes.
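Instead, planned hypervisor maintenance is handled the Kubernetes way: cordon and drain the worker node, let the cluster reschedule the pods, and bring the node (or a replacement) back afterwards. A minimal, hypothetical sketch using the official Kubernetes Python client (the node name and the simplified drain logic are my assumptions, not part of the original answer):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

NODE = "worker-03"  # hypothetical node name

# Cordon: mark the node unschedulable so no new pods land on it
v1.patch_node(NODE, {"spec": {"unschedulable": True}})

# Drain (simplified): evict every pod on the node except DaemonSet pods
pods = v1.list_pod_for_all_namespaces(field_selector=f"spec.nodeName={NODE}")
for pod in pods.items:
    owners = pod.metadata.owner_references or []
    if any(ref.kind == "DaemonSet" for ref in owners):
        continue
    eviction = client.V1Eviction(
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace
        )
    )
    v1.create_namespaced_pod_eviction(
        pod.metadata.name, pod.metadata.namespace, eviction
    )

# After maintenance, uncordon the node:
# v1.patch_node(NODE, {"spec": {"unschedulable": False}})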
Server virtualization has been around a long time, has come to different classes of machines and architectures over the decades to drive efficiency increases, and has seemingly reached a level of maturity that means we don’t have to give it a lot of thought. …
A Complete Rethinking Of Server Virtualization Hypervisors was written by Timothy Prickett Morgan at The Next Platform.