
Category Archives for "Virtualization"

Infrastructure 4. How to Run Cisco Nexus 9000v in Proxmox to Lab Cisco Data Centre

Hello my friend,

We use Proxmox in our Karneliuk Lab Cloud (KLC), the driving power behind our Network Automation and Nornir trainings. Out of the box it can run the vast majority of VMs with network operating systems: Cisco IOS or Cisco IOS XR, Arista EOS, Nokia SR OS, Nvidia Cumulus, and many others. However, when we recently needed to emulate a customer’s data centre, which is built using Cisco Nexus 9000 switches, it transpired that this is not that straightforward, and we had to spend quite some time finding a working solution. Moreover, we figured out that there are no public guides explaining how to do it. As such, we decided to create this blog.

No part of this blogpost could be reproduced, stored in a
retrieval system, or transmitted in any form or by any
means, electronic, mechanical or photocopying, recording,
or otherwise, for commercial purposes without the
prior permission of the author.

How Does KLC Help with Automation?

A lot of network automation trainings worldwide assume that students have to build a lab on their own. Such an approach, obviously, is the easiest for Continue reading

Proxmox/Ceph – Full Mesh HCI Cluster w/ Dynamic Routing

If you have ever used Proxmox, you know it’s a capable and robust open-source hypervisor. When coupled with Ceph, the two can provide a powerful hyperconverged infrastructure (HCI) platform, rivaling mainstream closed-source solutions like those from Dell, Nutanix, VMware, etc., and all based on free (paid support available) and open-source software. The distributed nature of HCI […]

The post Proxmox/Ceph – Full Mesh HCI Cluster w/ Dynamic Routing appeared first on Packet Pushers.

AWS Automatic EC2 Instance Recovery

On March 30th 2022, AWS announced automatic recovery of EC2 instances. Does that mean that AWS got feature-parity with VMware High Availability, or that VMware got it right from the very start? No and No.

Automatic Instance Recovery Is Not High Availability

Reading the AWS documentation (as opposed to the feature announcement) quickly reveals a caveat or two. The automatic recovery is performed if an instance becomes impaired because of an underlying hardware failure or a problem that requires AWS involvement to repair.
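For context, before this announcement the same behavior had to be configured explicitly with a CloudWatch alarm carrying a recover action. A sketch of that older approach (the instance ID and region are placeholders, not values from the post):

```shell
# Sketch: CloudWatch alarm that recovers an EC2 instance when the
# system status check fails (i.e., an underlying hardware problem).
# Instance ID and region below are placeholders.
aws cloudwatch put-metric-alarm \
  --alarm-name recover-i-0123456789abcdef0 \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:eu-west-1:ec2:recover
```

Note that the recover action, like the newer automatic recovery, only handles the hardware-failure case the documentation describes; it does nothing for a crashed guest OS.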

Will Open Compute Backing Drive SIOV Adoption?

Virtualization has been an engine of efficiency in the IT industry over the past two decades, decoupling workloads from the underlying hardware and thus allowing multiple workloads to be consolidated into a single physical system as well as moved around relatively easily with live migration of virtual machines.

Will Open Compute Backing Drive SIOV Adoption? was written by Daniel Robinson at The Next Platform.

Running BGP between Virtual Machines and Data Center Fabric

Got this question from one of my readers:

When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere or KVM or Openstack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?

Short answer: you don’t.

Kubernetes was designed in a way that made worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart. From the purely academic perspective, there’s no reason to migrate VMs running Kubernetes.
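For illustration, the "BGP on the VM" model the reader describes usually boils down to a small routing-daemon config on the worker node peering with its directly attached ToR. A sketch using FRR (the daemon choice, ASNs, and addresses are assumptions, not details from the question):

```
! Sketch: FRR config on a worker node advertising its loopback
! to the directly connected ToR switch. All values are placeholders.
router bgp 65101
 bgp router-id 192.0.2.11
 neighbor 10.1.1.1 remote-as 65001
 !
 address-family ipv4 unicast
  network 192.0.2.11/32
 exit-address-family
```

Because the session is tied to the directly connected ToR, migrating the VM to another host would break the peering, which is exactly why the short answer below is "you don’t."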

A Complete Rethinking Of Server Virtualization Hypervisors

Server virtualization has been around a long time, has come to different classes of machines and architectures over the decades to drive efficiency increases, and has seemingly reached a level of maturity that means we don’t have to give it a lot of thought.

A Complete Rethinking Of Server Virtualization Hypervisors was written by Timothy Prickett Morgan at The Next Platform.

Marvell’s OCTEON 10 Challenges All Comers For DPU Supremacy

This article was originally posted on the Packet Pushers Ignition site on July 9, 2021. The ascendance of Software Defined Networking (SDN) has catalyzed a renaissance in specialized hardware designed to accelerate and offload workloads from general-purpose CPUs. Decoupling network transport and services via software-defined abstraction layers lets a new generation of programmable networking hardware […]

The post Marvell’s OCTEON 10 Challenges All Comers For DPU Supremacy appeared first on Packet Pushers.

MikroTik CHR – Breaking the 100G barrier

Introduction

The world is a strange place today. Despite the Covid-19 crisis all over the world, most ISPs are fighting a daily battle to deliver more bandwidth.

  • Work from home
  • Online schools
  • Increasing content consumption

All of these pushed ISPs to their bandwidth limits, leaving them no option but to look for upgrades to meet ever-growing bandwidth demands. There they run into another set of problems in this completely new and strange world: chip shortages, logistics, and labor health issues have caused higher prices and poor stock availability. Here at IP ArchiTechs, we spend a lot of our time finding good solutions for our customers and helping them overcome these hard times. Whether in our regular team meetings or just chatting with colleagues, on almost any occasion someone mentions a new solution to improve capacity and performance for our customers.

Starting with a thought about what is available as a platform today, and of course ready to be shipped immediately after you check out and pay, one thing was obviously right in front of me: x86 servers, dozens of them. They were left over from the time when we were buying new hardware just because a new generation Continue reading

Infrastructure 3. Deploying High Performance Pure Virtual Linux Router – 6WIND

Hello my friend,

Network Function Virtualisation (NFV) is not a new topic. There are numerous blogposts and articles, including on our blog, that review it. Yet there is much more we can cover. Today we’ll share some insights on one of the very interesting products on the market today: the 6WIND vRouter Turbo Router. We have a limited number of days to write a few articles under our evaluation license, hence we’ll focus only on the most critical elements.

Is Linux Suitable for Automation?

It absolutely is. In fact, Linux is the real home of automation systems, as in many cases it hosts the tools you create in Ansible, Python, Bash, Go or any other language. At the same time, to work effectively with Linux, you need to know how to automate the management and operation of the Linux operating system itself. And you will be absolutely capable of doing that once you attend our Continue reading

Circular Dependencies Considered Harmful

A while ago my friend Nicola Modena sent me another intriguing curveball:

Imagine a CTO who has invested millions in a super-secure data center and wants to consolidate all compute workloads. If you were asked to run a BGP Route Reflector as a VM in that environment, and would like to bring OSPF or ISIS to that box to enable BGP ORR, would you use a GRE tunnel to avoid a dedicated VLAN or boring other hosts with routing protocol hello messages?
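For context, the GRE option in the question would amount to a point-to-point tunnel from the route-reflector VM to a fabric router, with the IGP running over the tunnel instead of a shared VLAN. A rough sketch on a Linux VM (addresses, interface name, and TTL are placeholders):

```shell
# Sketch: GRE tunnel from the RR VM to a fabric router, so OSPF/IS-IS
# hellos stay off the shared segment. All values are placeholders.
ip tunnel add gre1 mode gre local 198.51.100.10 remote 198.51.100.1 ttl 64
ip addr add 10.0.0.2/30 dev gre1
ip link set gre1 up
```

The OSPF or IS-IS adjacency would then be configured on gre1 as an ordinary point-to-point link.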

While there might be good reasons for doing that, my first knee-jerk reaction was:

Infrastructure 2. Building Multi Server Cloud with Proxmox (Debian Linux) and Local Storage

Hello my friend,

In the previous blogpost we covered the installation of Proxmox as a core platform for building an open source virtualisation environment. Today we’ll continue this discussion and show how to create a multi-server cloud in order to better spread the load and provide resiliency for your applications.

How to Automate Infrastructure?

In many cases, Linux is a major driving power behind modern clouds. In fact, if you look across all the current big clouds, such as Amazon Web Services, Google Cloud Platform and Microsoft Azure, you will see Linux everywhere: on servers and on network devices (e.g., data centre switches). Therefore, knowing how to deal with Linux and how to automate it is crucial for success in automating current IT systems.

At our trainings, advanced network automation and automation with Nornir (the second step after advanced network automation), we give you detailed knowledge of all the relevant technologies:

Infrastructure 1. Building Virtualized Environment with Debian Linux and Proxmox on HP and Supermicro

Hello my friend,

Just last week we finished our Zero-to-Hero Network Automation Training, which was very intensive and very interesting. One could think: it is time for vacation now!.. Not quite yet. We decided to use the time wisely and upgrade our lab to let customers use it. A lab upgrade means a major infrastructure project, which involves bringing in new hardware, changing the topology and deploying new software to simplify its management. Sounds interesting? Jump to the details!

What is Infrastructure Automation?

Each and every element of your entire IT landscape requires two actions: it shall be monitored and it shall be managed. Being managed means that the element shall be configured, and this is the first step for all sorts of automation. Configuration management is a perfect use case to start automating your infrastructure, which spans servers, network devices, VMs, containers and much more. And we are here to help you do Continue reading

Implementing Layer-2 Networks in a Public Cloud

A few weeks ago I got an excited tweet from someone working at Oracle Cloud Infrastructure: they launched full-blown layer-2 virtual networks in their public cloud to support customers migrating existing enterprise spaghetti mess into the cloud.

Let’s skip the usual does everyone using the applications now have to pay for Oracle licenses and I wonder what the lock in might be when I migrate my workloads into an Oracle cloud jokes and focus on the technical aspects of what they claim they implemented. Here’s my immediate reaction (limited to the usual 280 characters, because that’s the absolute upper limit of consumable content these days):

VMware After Gelsinger: Integrating Fiefdoms For A Post-Hypervisor World

VMware's next CEO has two tasks: to construct a narrative about VMware's role and value as a company in a post-hypervisor world, and to integrate its various fiefdoms into a cohesive set of products that can provide greater utility when used together than when used individually.

The post VMware After Gelsinger: Integrating Fiefdoms For A Post-Hypervisor World appeared first on Packet Pushers.

Repost: VMware Fault Tolerance Woes

I always claimed that VMware Fault Tolerance makes no sense. After all, the only thing it does is protect a VM against a server hardware failure… in a world where software crashes are way more common, and fat fingers cause most of the outages.

But wait, it gets worse, the whole thing is incredibly complex – you might like this description Minh Ha left as a comment to my Fifty Shades of High Availability blog post.

Making LLDP Work with Linux Bridge

Last week I described how I configured PVLAN on a Linux bridge. After checking the desired partial connectivity with ios_ping I wanted to verify it with LLDP neighbors. Ansible ios_facts module collects LLDP neighbor information, and it should be really easy using those facts to check whether port isolation works as expected.

Ansible playbook displaying LLDP neighbors on selected interface
---
- name: Display LLDP neighbors on selected interface
  hosts: all
  gather_facts: true
  vars:
    target_interface: GigabitEthernet0/1
  tasks:
  - name: Display neighbors gathered with ios_facts
    debug:
      var: ansible_net_neighbors[target_interface]

Alas, none of the routers saw any neighbors on the target interface.
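For what it’s worth, the usual culprit in this situation is that a Linux bridge drops IEEE link-local multicast frames, including the LLDP destination address 01:80:C2:00:00:0E, by default. Forwarding them can be enabled per bridge through group_fwd_mask; a sketch, assuming the bridge is named br0:

```shell
# Bit 0x4000 (decimal 16384) of group_fwd_mask corresponds to the
# 01:80:C2:00:00:0E group address that LLDP frames are sent to.
# The bridge name br0 is an assumption.
echo 16384 > /sys/class/net/br0/bridge/group_fwd_mask
```

With that mask set, the bridge floods LLDP frames between its ports instead of consuming them, and the routers attached to it should start seeing each other as neighbors.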

VMware TKGI – Deployment of Harbor Container Registry fails with error

This is an article from the VMware from Scratch series. During the preparation to install Tanzu Kubernetes Grid Integrated Edition (TKGI v1.8) on vSphere with NSX-T Data Center (v3.0.2), one of the steps is to use Ops Manager to deploy the Harbor Container Registry (in this case v2.1.0). The deployment process ended with a Harbor error several times, so I’m sharing my solution here to ease things for you, given that I didn’t come across any solution while googling around. In the process, the Harbor Registry product tile is downloaded from the VMware Tanzu network portal, imported

The post VMware TKGI – Deployment of Harbor Container Registry fails with error appeared first on How Does Internet Work.
