If you have ever used Proxmox, you know it’s a capable and robust open-source hypervisor. When coupled with Ceph, the two provide a powerful hyperconverged infrastructure (HCI) platform, rivaling mainstream closed-source solutions from Dell, Nutanix, VMware, and others, all based on free and open-source software (with paid support available). The distributed nature of HCI […]
The post Proxmox/Ceph – Full Mesh HCI Cluster w/ Dynamic Routing appeared first on Packet Pushers.
On March 30th 2022, AWS announced automatic recovery of EC2 instances. Does that mean that AWS got feature-parity with VMware High Availability, or that VMware got it right from the very start? No and No.
Reading the AWS documentation (as opposed to the feature announcement) quickly reveals a caveat or two. The automatic recovery is performed if an instance becomes impaired because of an underlying hardware failure or a problem that requires AWS involvement to repair.
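For context, before this announcement you could get essentially the same behavior by attaching a CloudWatch alarm with the EC2 recover action to the system status check. A minimal CloudFormation sketch (resource names and thresholds are illustrative, not taken from AWS documentation):

# Hypothetical CloudFormation snippet: recover the instance when the
# system status check fails (the do-it-yourself approach that predates
# the March 2022 default behavior)
Resources:
  SystemRecoveryAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Recover the instance on a failed system status check
      Namespace: AWS/EC2
      MetricName: StatusCheckFailed_System
      Dimensions:
        - Name: InstanceId
          Value: !Ref WebServerInstance    # hypothetical EC2 instance resource
      Statistic: Minimum
      Period: 60
      EvaluationPeriods: 5
      Threshold: 1
      ComparisonOperator: GreaterThanOrEqualToThreshold
      AlarmActions:
        - !Sub arn:aws:automate:${AWS::Region}:ec2:recover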
Virtualization has been an engine of efficiency in the IT industry over the past two decades, decoupling workloads from the underlying hardware and thus allowing multiple workloads to be consolidated into a single physical system as well as moved around relatively easily with live migration of virtual machines. …
Got this question from one of my readers:
When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere, KVM, or OpenStack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?
Short answer: you don’t.
Kubernetes was designed in a way that made worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart. From the purely academic perspective, there’s no reason to migrate VMs running Kubernetes.
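For what it’s worth, CNIs like Calico handle the “peer with whatever ToR is closest” part with per-rack peer definitions selected by node labels, so a rebuilt node simply peers with the ToR of whatever rack it lands in. A rough sketch (peer IP, AS number, and label are made up):

# Illustrative Calico BGPPeer: every node labeled rack == 'rack1'
# peers with that rack's ToR switch
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rack1-tor
spec:
  nodeSelector: rack == 'rack1'
  peerIP: 192.0.2.1
  asNumber: 64512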
Server virtualization has been around a long time, has come to different classes of machines and architectures over the decades to drive efficiency increases, and has seemingly reached a level of maturity that means we don’t have to give it a lot of thought. …
When I finally managed to get SR Linux running with netsim-tools, I wanted to test how it interacts with Cumulus VX and FRR in an OSPF+BGP lab… and failed. Jeroen Van Bemmel quickly identified the culprit: MTU. Yeah, it’s always the MTU (or DNS, or BGP).
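If you hit the same problem, pinning the interface MTU in the lab topology is an easy workaround; here’s a netsim-tools topology sketch along those lines (assuming your release supports the link-level mtu attribute; device names and values are illustrative):

# Hypothetical netsim-tools topology forcing a common MTU on the link
defaults.device: srlinux
module: [ ospf, bgp ]
nodes:
  srl:
  cumulus:
    device: cumulus
links:
- srl:
  cumulus:
  mtu: 1500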
I never experienced a similar problem, so of course I had to identify the root cause:
This article was originally posted on the Packet Pushers Ignition site on July 9, 2021. The ascendance of Software Defined Networking (SDN) has catalyzed a renaissance in specialized hardware designed to accelerate and offload workloads from general-purpose CPUs. Decoupling network transport and services via software-defined abstraction layers lets a new generation of programmable networking hardware […]
The post Marvell’s OCTEON 10 Challenges All Comers For DPU Supremacy appeared first on Packet Pushers.
The world is a strange place today. Despite the Covid-19 crisis gripping the globe, most ISPs are fighting a daily battle to deliver more bandwidth.
All of this pushed ISPs to their bandwidth limits, leaving them no option but to look for upgrades to meet ever-growing bandwidth demands. There they run into another set of problems in this completely new and strange world: chip shortages, logistics issues, and labor health issues have caused higher prices and no stock availability. Here at IP ArchiTechs, we spend a lot of our time finding good solutions for our customers and helping them overcome these hard times. Whether it’s a regular team meeting or just a chat with colleagues, on almost any occasion someone mentions a new solution to improve capacity and performance for our customers.
Starting with a thought about what is available as a platform today (and, of course, ready to ship immediately after you check out and pay), one thing was obviously right in front of me: x86 servers, dozens of them, left over from the time when we were buying new hardware just because a new generation …
Hello my friend,
Network Function Virtualisation (NFV) is not a new topic. There are numerous blogposts and articles, including on our blog, that review it. Yet there is much more we can cover. Today we’ll share some insights on one of the very interesting products on the market today: the 6WIND vRouter Turbo Router. We have a limited number of days to write a few articles under our evaluation license, hence we’ll focus only on the most critical elements.
It absolutely is. In fact, Linux is the real home for automation systems, as in many cases it hosts the tools you create in Ansible, Python, Bash, Go, or any other language. At the same time, in order to work effectively with Linux, you need to know how to automate the management and operation of the Linux operating system itself. And you will be absolutely capable of doing that once you attend our …
A while ago my friend Nicola Modena sent me another intriguing curveball:
Imagine a CTO who has invested millions in a super-secure data center and wants to consolidate all compute workloads. If you were asked to run a BGP route reflector as a VM in that environment, and would like to bring OSPF or IS-IS to that box to enable BGP ORR, would you use a GRE tunnel to avoid a dedicated VLAN or to avoid boring other hosts with routing-protocol hello messages?
While there might be good reasons for doing that, my first knee-jerk reaction was:
Hello my friend,
In the previous blogpost we covered the installation of Proxmox as a core platform for building an open-source virtualisation environment. Today we’ll continue this discussion and show how to create a multi-server cloud in order to better spread the load and provide resiliency for your applications.
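Under the hood, forming the cluster boils down to pvecm create on the first node and pvecm add on every additional node. A rough Ansible sketch of that flow (group names, cluster name, and IP address are placeholders; pvecm add normally prompts for the peer node’s root credentials, which this sketch glosses over):

# Rough sketch: form a Proxmox VE cluster with pvecm
- name: Create the cluster on the first node
  hosts: pve_first
  become: true
  tasks:
    - name: Initialise the cluster
      ansible.builtin.command: pvecm create lab-cluster
      args:
        creates: /etc/pve/corosync.conf    # skip if already clustered

- name: Join the remaining nodes
  hosts: pve_others
  become: true
  tasks:
    - name: Join the cluster through the first node
      ansible.builtin.command: pvecm add 192.0.2.11
      args:
        creates: /etc/pve/corosync.conf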
In many cases, Linux is a major driving power behind modern clouds. In fact, if you look across all the current big clouds, such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure, you will see Linux everywhere: on servers and on network devices (e.g., data centre switches). Therefore, knowing how to deal with Linux and how to automate it is crucial for successfully automating current IT systems.
Hello my friend,
Just last week we finished our Zero-to-Hero Network Automation Training, which was very intensive and very interesting. One could think: it’s time for a vacation now!.. Not quite yet. We decided to use the time wisely and upgrade our lab so that customers can use it as well. A lab upgrade means a major infrastructure project, which involves bringing in new hardware, changing the topology, and adding new software to simplify its management. Sounds interesting? Jump to the details!
Each and every element of your IT landscape requires two things: it must be monitored and it must be managed. Being managed means the element must be configured, and that is the first step for all sorts of automation. Configuration management is a perfect use case to start automating your infrastructure, which spans servers, network devices, VMs, containers, and much more. And we are here to help you to do …
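To make that concrete, a first configuration-management playbook can be as small as this (a minimal sketch; the host group and file paths are assumptions):

# Minimal configuration-management sketch: keep NTP configuration consistent
- name: Manage NTP configuration on Linux servers
  hosts: linux_servers
  become: true
  tasks:
    - name: Install chrony
      ansible.builtin.apt:
        name: chrony
        state: present
    - name: Deploy the standard chrony.conf
      ansible.builtin.copy:
        src: files/chrony.conf
        dest: /etc/chrony/chrony.conf
      notify: Restart chrony
  handlers:
    - name: Restart chrony
      ansible.builtin.service:
        name: chrony
        state: restarted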
A few weeks ago I got an excited tweet from someone working at Oracle Cloud Infrastructure: they launched full-blown layer-2 virtual networks in their public cloud to support customers migrating their existing enterprise spaghetti mess into the cloud.
Let’s skip the usual “does everyone using the applications now have to pay for Oracle licenses” and “I wonder what the lock-in might be when I migrate my workloads into an Oracle cloud” jokes, and focus on the technical aspects of what they claim they implemented. Here’s my immediate reaction (limited to the usual 280 characters, because that’s the absolute upper limit of consumable content these days):
VMware's next CEO has two tasks: to construct a narrative about VMware's role and value as a company in a post-hypervisor world, and to integrate its various fiefdoms into a cohesive set of products that can provide greater utility when used together than when used individually.
The post VMware After Gelsinger: Integrating Fiefdoms For A Post-Hypervisor World appeared first on Packet Pushers.
I always claimed that VMware Fault Tolerance makes no sense. After all, the only thing it does is protect a VM against a server hardware failure… in a world where software crashes are way more common and fat fingers cause most of the outages.
Last week I described how I configured PVLAN on a Linux bridge. After checking the desired partial connectivity with ios_ping, I wanted to verify it with LLDP neighbors. The Ansible ios_facts module collects LLDP neighbor information, and it should be really easy to use those facts to check whether port isolation works as expected.
---
- name: Display LLDP neighbors on selected interface
  hosts: all
  gather_facts: true
  vars:
    target_interface: GigabitEthernet0/1
  tasks:
  - name: Display neighbors gathered with ios_facts
    debug:
      var: ansible_net_neighbors[target_interface]
Alas, none of the routers saw any neighbors on the target interface.
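One quick way to narrow such a problem down is to bypass fact gathering and ask the routers directly (a sanity-check sketch, not the root cause from the original article):

# Sanity check: pull LLDP neighbor details straight from the CLI
- name: Show LLDP neighbors from the CLI
  hosts: all
  gather_facts: false
  tasks:
    - name: Collect LLDP neighbor details
      cisco.ios.ios_command:
        commands:
          - show lldp neighbors GigabitEthernet0/1 detail
      register: lldp_raw

    - name: Display the raw output
      ansible.builtin.debug:
        var: lldp_raw.stdout_lines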
This is an article from the VMware from Scratch series. While preparing to install Tanzu Kubernetes Grid Integrated Edition (TKGI v1.8) on vSphere with NSX-T Data Center (v3.0.2), one of the steps is to use Ops Manager to deploy the Harbor Container Registry (in this case v2.1.0). The deployment ended with a Harbor error several times, so I’m sharing my solution here to make things easier for you, given that I didn’t come across any solution while googling around. In the process, the Harbor Registry product tile is downloaded from the VMware Tanzu network portal, imported …
The post VMware TKGI – Deployment of Harbor Container Registry fails with error appeared first on How Does Internet Work.
I wanted to test routing protocol behavior (IS-IS in particular) on partially meshed multi-access layer-2 networks like private VLANs or Carrier Ethernet E-Tree service. I recently spent plenty of time creating a Vagrant/libvirt lab environment on my Intel NUC running Ubuntu 20.04, and I wanted to use that environment in my tests.
Challenge-of-the-day: How do you implement private VLAN functionality with Vagrant using libvirt plugin?
There might be interesting KVM/libvirt options I’ve missed, but so far I have figured out two ways of connecting Vagrant-controlled virtual machines in a libvirt environment: