Archive

Category Archives for "Virtualization"

Famous Last Words: I’m Too Stupid for That

Some networking vendors realized that one way to gain mindshare is to make their network operating systems available as free-to-download containers or virtual machines. That’s the right way to go; I love their efforts and point out who went down that path whenever possible (as well as others like Cisco who try to make our lives miserable).

However, those virtual machines had better work out of the box, or you’ll get frustrated engineers who will give up and never touch your warez again. As someone said in a LinkedIn comment to my blog post describing how Junos vPTX consistently rejects its DHCP-assigned IP address: “If I had encountered an issue like this before seeing Ivan’s post, I would have definitely concluded that I am doing it wrong.”

Nexus9000v and “Missing” Routes

I have built my VXLAN lab on the Nexus 9300v platform. Since I have a leaf-and-spine topology, there are ECMP routes towards the spines for the other leafs’ loopbacks. When working through the labs, however, I noticed that I didn’t have any ECMP routes in the forwarding table (FIB). They are in the RIB, though:

Leaf1# show ip route 203.0.113.4
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

203.0.113.4/32, ubest/mbest: 2/0
    *via 192.0.2.1, Eth1/1, [110/81], 1w0d, ospf-UNDERLAY, intra
    *via 192.0.2.2, Eth1/2, [110/81], 1w0d, ospf-UNDERLAY, intra

There is only one entry in the FIB, though:

Leaf1# show forwarding route 203.0.113.4?
  A.B.C.D      Display single longest match route
  A.B.C.D/LEN  Display single exact match route

Leaf1# show forwarding route 203.0.113.4/32

slot  1
=======


IPv4 routes for table default/base

------------------+-----------------------------------------+----------------------+-----------------+-----------------
Prefix            | Next-hop                                | Interface            | Labels          | Partial Install 
------------------+-----------------------------------------+----------------------+-----------------+-----------------
203.0.113.4/32       192.0.2.1                                 Ethernet1/1         

This seemed strange to me and I was concerned that maybe something was Continue reading
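
For anyone reproducing this lab: before blaming the FIB, it’s worth ruling out a routing-protocol path limit first. A minimal sanity-check sketch, reusing the OSPF process tag from the output above (the path count of 8 is an arbitrary assumption):

Leaf1# show running-config ospf | include maximum-paths
Leaf1# configure terminal
Leaf1(config)# router ospf UNDERLAY
Leaf1(config-router)# maximum-paths 8

In this case the RIB already holds both paths, so the limit clearly isn’t the culprit; checking it merely narrows the problem down to the forwarding side.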

Xcitium’s Endpoint Virtual Jail Aims To Lock Up Mystery Malware

Xcitium is an Endpoint Detection and Response (EDR) vendor that sells client software that uses multiple methods to protect endpoints. Methods include anti-virus, a host firewall, a Host Intrusion Protection System (HIPS), and a technique it calls ZeroDwell Containment. The first three components are straightforward. The AV software relies on signatures to detect known malware. […]

The post Xcitium’s Endpoint Virtual Jail Aims To Lock Up Mystery Malware appeared first on Packet Pushers.

DPU Hype Considered Harmful

The hype generated by the “VMware supports DPU offload” announcement has already resulted in fascinating misunderstandings. Here’s what I got from a System Architect:

We are dealing with an interesting scenario where a customer had limited data center space, but applications demand more resources. We are evaluating whether we could offload ESXi processing to DPUs (Pensando) to use existing servers as bare-metal servers. Would it be a use case for DPU?

First of all, congratulations to whichever vendor marketer managed to put that guy in that state of mind. Well done, sir, well done. Now for a dose of reality.

Are DPUs Any Good?

After VMware launched DPU-based acceleration for VMware NSX, marketing-focused websites frantically started discussing the benefits of DPUs. Although I’ve been writing about SmartNICs and DPUs for years, it’s time for another closer look at the emperor’s clothes.

What Is a DPU?

A DPU (Data Processing Unit) is a fancier name for the network adapter formerly known as a SmartNIC – a server repackaged into an interface-card form factor. We’ve had them for decades (does anyone remember iSCSI offload adapters?).

Ubuntu 20.04 Docker image – Python For Network Engineers

This is an updated Docker image of Python For Network Engineers (PFNE) based on Ubuntu 20.04 (minimal server distro). It contains all the necessary tools for network/DevOps engineers to test automation and learn Python: Openssl, Net-tools, IPutils, IProute, IPerf, TCPDump, NMAP, Python 2, Python 3, Paramiko, Netmiko, Ansible, Pyntc, NAPALM, Netcat, and Socat. If you notice a missing package which could be a value added for the scope of the … Continue reading Ubuntu 20.04 Docker image – Python For Network Engineers
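
A minimal usage sketch, assuming the image has already been built or pulled locally (the image tag below is a placeholder, not the published name):

# start an interactive shell in the PFNE container (image name is a placeholder)
docker run -it --rm --name pfne pfne:ubuntu-20.04 /bin/bash

# inside the container, verify the bundled automation libraries import cleanly
python3 -c "import paramiko, netmiko, napalm; print('PFNE toolchain OK')"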

Linux Bonding, LLDP, and MAC Flapping

Sometimes a painfully troublesome networking problem can have a complicated and brain-twisting root cause, one which you dread having to explain to peers and managers. However, sometimes the root cause is dead simple and makes you feel silly for how long it took you to find it. Today, I had one of the latter and […]

The post Linux Bonding, LLDP, and MAC Flapping appeared first on Packet Pushers.
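
For context, here is roughly what the Linux side of such a setup looks like with iproute2 (interface names are assumptions). Because bond slaves share the bond’s MAC address, anything that transmits independently on each physical port, LLDP daemons included, can look to the upstream switch like a single MAC flapping between two ports:

# create an LACP (802.3ad) bond and enslave two NICs (names are assumptions)
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up

# both slaves now report the same MAC address as bond0
ip -brief link show bond0 eth0 eth1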

VMware Aria: If You Can’t Beat Public Cloud, Maybe You Can Manage It

VMware announced an ambitious project, VMware Aria, at VMware Explore 2022. Aria offers multi-cloud management for enterprises that use services in more than one public cloud. The speed and sprawl of cloud adoption have become a problem for enterprises. Companies are having a hard time containing costs, monitoring performance, and enforcing security and compliance policies. […]

The post VMware Aria: If You Can’t Beat Public Cloud, Maybe You Can Manage It appeared first on Packet Pushers.

Human Challenges Of Network Virtualization – Lessons Learned From NFV Projects

For the last four years I’ve worked on Network Functions Virtualization (NFV) projects at a couple of European Cloud Service Providers (CSPs). The implementation of these projects has proven to be messy (messiness is part of human nature, after all), and I wanted to share some of the lessons I’ve learned.

The post Human Challenges Of Network Virtualization – Lessons Learned From NFV Projects appeared first on Packet Pushers.

Infrastructure 4. How to Run Cisco Nexus 9000v in Proxmox to Lab Cisco Data Centre

Hello my friend,

We use Proxmox in our Karneliuk Lab Cloud (KLC), which is the driving power behind our Network Automation and Nornir trainings. It allows us to run, out of the box, the vast majority of VMs with network operating systems: Cisco IOS or Cisco IOS XR, Arista EOS, Nokia SR OS, Nvidia Cumulus, and many others. However, when we recently faced a need to emulate a customer’s data centre, which is built using Cisco Nexus 9000 switches, it transpired that this is not that straightforward, and we had to spend quite some time finding a working solution. Moreover, we figured out that there are no public guides explaining how to do it. As such, we decided to create this blog post.
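
The full recipe follows in the post; as a rough sketch of where it ends up, the Nexus 9000v image typically wants UEFI firmware, a SATA-attached disk, e1000 NICs, and a serial console, which translates into Proxmox commands roughly like these (the VM ID, storage pool, and image file name are assumptions):

# create the VM shell: OVMF (UEFI) firmware and a serial console
qm create 9000 --name n9kv --memory 10240 --cores 4 --cpu host \
  --machine q35 --bios ovmf --serial0 socket --vga serial0

# import the downloaded qcow2 image and attach it as a SATA disk
qm importdisk 9000 nexus9300v.qcow2 local-lvm
qm set 9000 --sata0 local-lvm:vm-9000-disk-0 --boot order=sata0

# add NICs as e1000 adapters (the first one becomes mgmt0)
qm set 9000 --net0 e1000,bridge=vmbr0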


How Does KLC Help with Automation?

A lot of network automation trainings worldwide imply that a student has to build a lab on his/her own. Such an approach, obviously, is the easiest for Continue reading

Proxmox/Ceph – Full Mesh HCI Cluster w/ Dynamic Routing

If you have ever used Proxmox, you know it’s a capable and robust open-source hypervisor. When coupled with Ceph, the two can provide a powerful hyperconverged infrastructure (HCI) platform, rivaling mainstream closed-source solutions like those from Dell, Nutanix, VMware, etc., and all based on free (paid support available) and open-source software. The distributed nature of HCI […]

The post Proxmox/Ceph – Full Mesh HCI Cluster w/ Dynamic Routing appeared first on Packet Pushers.
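
For a taste of the dynamic-routing part, the usual open-source approach (also described in the Proxmox docs for full-mesh Ceph networks) is FRR running OpenFabric on the mesh links. A minimal frr.conf sketch for one node, with the address, NET identifier, and interface names all being assumptions:

! /etc/frr/frr.conf on one cluster node (addresses and names are assumptions)
interface lo
 ip address 10.15.15.1/32
 ip router openfabric 1
 openfabric passive
!
interface ens19
 ip router openfabric 1
!
interface ens20
 ip router openfabric 1
!
router openfabric 1
 net 49.0001.1111.1111.1111.00

Remember that the fabricd daemon must also be enabled in /etc/frr/daemons before this configuration takes effect.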

AWS Automatic EC2 Instance Recovery

On March 30th 2022, AWS announced automatic recovery of EC2 instances. Does that mean that AWS got feature-parity with VMware High Availability, or that VMware got it right from the very start? No and No.

Automatic Instance Recovery Is Not High Availability

Reading the AWS documentation (as opposed to the feature announcement) quickly reveals a caveat or two. The automatic recovery is performed if an instance becomes impaired because of an underlying hardware failure or a problem that requires AWS involvement to repair.
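
For comparison, the mechanism AWS has offered for years is an explicit CloudWatch alarm with the EC2 recover action attached; a minimal sketch, with the instance ID and region being placeholders:

# recover the instance when the system status check fails for three minutes
aws cloudwatch put-metric-alarm \
  --alarm-name autorecover-i-0123456789abcdef0 \
  --namespace AWS/EC2 --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum --period 60 --evaluation-periods 3 \
  --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:recover

Note that the recover action only handles underlying hardware impairment; it does nothing for failures inside the guest, which is exactly why it’s not high availability.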

Will Open Compute Backing Drive SIOV Adoption?

Virtualization has been an engine of efficiency in the IT industry over the past two decades, decoupling workloads from the underlying hardware and thus allowing multiple workloads to be consolidated into a single physical system as well as moved around relatively easily with live migration of virtual machines.

Will Open Compute Backing Drive SIOV Adoption? was written by Daniel Robinson at The Next Platform.

Running BGP between Virtual Machines and Data Center Fabric

Got this question from one of my readers:

When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere or KVM or Openstack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?

Short answer: you don’t.

Kubernetes was designed in a way that made worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart. From the purely academic perspective, there’s no reason to migrate VMs running Kubernetes.
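
If you nonetheless want a ToR switch to accept BGP sessions from whichever nodes happen to live below it, rather than chasing individual peers around, BGP dynamic neighbors are the usual tool. A hedged FRR sketch, with the AS numbers and subnet being assumptions:

! ToR running FRR: accept sessions from any worker in the attached subnet
router bgp 65001
 neighbor WORKERS peer-group
 neighbor WORKERS remote-as 65101
 bgp listen range 192.0.2.0/24 peer-group WORKERS
 address-family ipv4 unicast
  neighbor WORKERS activate

The ToR never has to know which VMs are present; any worker that starts a session from the listen range is accepted, and a migrated VM simply peers with its new ToR.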

A Complete Rethinking Of Server Virtualization Hypervisors

Server virtualization has been around a long time, has come to different classes of machines and architectures over the decades to drive efficiency increases, and has seemingly reached a level of maturity that means we don’t have to give it a lot of thought.

A Complete Rethinking Of Server Virtualization Hypervisors was written by Timothy Prickett Morgan at The Next Platform.
