I wanted to test routing protocol behavior (IS-IS in particular) on partially meshed multi-access layer-2 networks like private VLANs or Carrier Ethernet E-Tree service. I recently spent plenty of time creating a Vagrant/libvirt lab environment on my Intel NUC running Ubuntu 20.04, and I wanted to use that environment in my tests.
Challenge-of-the-day: How do you implement private VLAN functionality with Vagrant using the libvirt plugin?
There might be interesting KVM/libvirt options I’ve missed, but so far I’ve figured out two ways of connecting Vagrant-controlled virtual machines in a libvirt environment (sketched below):
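For example, a minimal vagrant-libvirt sketch of two such wiring options might look like this (box name, network name, and all addresses are my assumptions, not the author's actual lab config):

# Vagrantfile sketch: two common vagrant-libvirt options for wiring VMs together
Vagrant.configure("2") do |config|
  config.vm.define "r1" do |node|
    node.vm.box = "generic/ubuntu2004"

    # Option 1: attach the VM to a shared libvirt virtual network,
    # i.e., a Linux bridge that behaves like a regular multi-access LAN
    node.vm.network :private_network,
      ip: "172.16.1.1",
      libvirt__network_name: "core_lan"

    # Option 2: a point-to-point UDP tunnel to another VM,
    # bypassing the Linux bridge entirely
    node.vm.network :private_network,
      ip: "10.1.0.1",
      libvirt__tunnel_type: "udp",
      libvirt__tunnel_local_ip: "127.127.1.1",
      libvirt__tunnel_local_port: 10001,
      libvirt__tunnel_ip: "127.127.1.2",
      libvirt__tunnel_port: 10002
  end
end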
Remember my rants about VMware and firewall vendors promoting crazy solutions that work best in PowerPoint and cause more headaches than anything else (excluding increased vendor margins and sales team bonuses, of course)?
Here’s another we-don’t-need-all-that-complexity real-life story coming from one of my long-term subscribers:
Every now and then I call someone’s baby ugly (or maybe it was their third cousin’s baby and they nonetheless feel offended). In such cases a common response is to cite business or market needs to prove how ignorant and clueless I am. Here’s a sample LinkedIn comment about my ignorance of the need for smart NICs:
The rise of custom silicon by Presando [sic], Mellanox, Amazon, Intel and others confirms there is a real market need.
Now let’s get something straight: while there are good reasons to use tons of different things that might look inappropriate, irrelevant, or plain stupid to an outsider, I don’t believe the real market need argument justifies anything without supporting technical facts (tell me why you need that stuff and prove to me that using it is the best way of solving the problem).
Several engineers formerly working for a large virtualization vendor were pretty upset with me when I claimed that virtualization consultants promote “disaster recovery using stretched VLANs” designs instead of alternatives that would implement proper separation of failure domains.
Guess what… it’s even worse than I thought.
Here’s a sequence of comments I received after reposting one of my “disaster recovery doesn’t need stretched VLANs” blog posts on LinkedIn sometime in late 2019:
Got this question from one of ipSpace.net subscribers:
Do we really need those intelligent datacenter switches for underlay now that we have NSX in our datacenter? Now that we have taken a lot of the intelligence out of our underlying network, what must the underlying network really provide?
Reading the marketing white papers, the answer would be IP connectivity… but keep in mind that building your infrastructure based on information from vendor white papers usually gives you the results your gullibility deserves.
Read more ...

Antidote is the network emulator that runs the labs on the Network Reliability Labs web site. You may install a standalone version of Antidote on your personal computer using the Vagrant virtual environment provisioning tool.
In this post, I show you how to run Antidote on a Linux system with KVM, instead of VirtualBox, on your local PC to achieve better performance — especially on older hardware.
Antidote runs emulated network nodes inside a host virtual machine. If these emulated nodes must also run on a hypervisor, as most commercial router images require, then they are running as nested virtual machines inside the host virtual machine. Unless you can pass through your computer’s hardware support for virtualization to the nested virtual machines, they will run slowly.
VirtualBox offers only limited support for nested virtualization. If you are using a Linux system, you can get better performance if you use Libvirt and KVM, which provide native support for nested virtualization.
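For example, on an Intel host you can check and enable nested virtualization like this (a sketch; AMD hosts use the kvm_amd module and its matching /sys path instead):

# Check whether nested virtualization is enabled ("Y" or "1" means yes)
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently, then reload the module
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel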
If you plan to run Antidote on a Mac or a PC, you should use Antidote’s standard installation with VirtualBox. Vagrant and VirtualBox are both cross-platform, open-source tools.
A Docker networking rant coming from my good friend Marko Milivojević triggered a severe case of Deja-Moo, resulting in a flood of unpleasant memories caused by too-successful “disruptive” IT vendors.
Imagine you’re working for a startup creating a cool new product in the IT infrastructure space (if you have an oversized ego, you would call yourself a “disruptive thought leader” on your LinkedIn profile) but nobody is taking you seriously. How about some guerrilla warfare: advertising your product to people who hate IT operations (today we’d call that Shadow IT)?
Read more ...

I want to show you how to configure a host server so that, when it shuts down, it executes a script that runs commands on any running virtual machines before the host tries to stop them. The host server will wait until the script finishes configuring the virtual machines before continuing with the shutdown process, shutting down the virtual machines, and eventually powering off.
I had to learn how Systemd service unit configuration files work and some more details about how Libvirt is configured in different Linux distributions. Read on to see the solution, plus some details about how to test the solution in Ubuntu and CentOS.
Create a new Systemd service named graceful-shutdown that runs a script when the host system shuts down, but before Libvirt shuts down any virtual machines. Ensure that the libvirt-guests service is already started and enabled, and is configured appropriately.
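A minimal way to satisfy that prerequisite might look like this (a sketch assuming standard libvirt-guests packaging on Ubuntu and CentOS):

# Make sure libvirt-guests manages the VMs at host shutdown
sudo systemctl enable --now libvirt-guests

# Tune its behavior in /etc/default/libvirt-guests (Ubuntu) or
# /etc/sysconfig/libvirt-guests (CentOS), for example:
#   ON_SHUTDOWN=shutdown    # shut guests down instead of suspending them
#   SHUTDOWN_TIMEOUT=120    # seconds to wait for guests to stop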
Create a new Systemd unit configuration file named graceful-shutdown.service and save it in /etc/systemd/system, the recommended directory for custom unit files.
For example:
# vi /etc/systemd/system/graceful-shutdown.service
Enter the following text into the file, then save it:
Continue reading
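The original file contents sit behind the Continue reading link above; as a hedged sketch of what such a unit might contain (the ExecStop= script path is a hypothetical placeholder, not the author's actual script):

[Unit]
Description=Run guest-preparation script before libvirt-guests stops VMs
Requires=libvirtd.service
# Systemd stops units in the reverse of their start order: because this
# unit starts after libvirt-guests, it is stopped *before* libvirt-guests
# at shutdown, so ExecStop= runs while the VMs are still up.
After=libvirtd.service libvirt-guests.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Nothing to do at boot time
ExecStart=/bin/true
# Hypothetical script that sends commands to the running VMs
ExecStop=/usr/local/bin/prepare-guests-for-shutdown.sh

[Install]
WantedBy=multi-user.target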
I want to thank Bhushan Pai and Matt Karnowski, who joined VMware from the Avi Networks acquisition, for helping with the Avi Networks setup in my VMware Cloud on AWS lab and with some of the details in this blog.
Humair Ahmed, Sr. Technical Product Manager, VMware NSBU
Bhushan Pai, Sr. Technical Product Manager, VMware NSBU
Matt Karnowski, Product Line Manager, VMware NSBU
With the recent acquisition of Avi Networks, VMware now offers a complete solution with advanced load balancing and Application Delivery Controller (ADC) capabilities. In addition to load balancing, these capabilities include global server load balancing, web application firewall (WAF), and advanced analytics and monitoring.
In this blog, we walk through an example of how the Avi Networks load balancer can be leveraged within a VMware Cloud on AWS software-defined data center (SDDC).
Check out my latest book co-authored with my colleagues Gilles Chekroun (@twgilles) and Nico Vibert (@nic972) on VMware NSX networking and security in VMware Cloud on AWS. Thank you Tom Gillis (@_tomgillis), Senior Vice President/General Manager, Networking and Security Business Unit for writing the foreword and providing some great insight.
I’ve been very fortunate to have the opportunity to publish my second VMware Press book. My first book was VMware NSX Multi-site Solutions and Cross-vCenter NSX Design: Day 1 Guide. That book focused very much on NSX on-prem and across multiple sites. In my latest book with Gilles and Nico, the focus is on NSX networking and security in the cloud and in hybrid cloud solutions.
You can download the free ebook here:
In this book you’ll learn how VMware Cloud on AWS with NSX networking and security provides a robust cloud and hybrid cloud solution. With VMware Cloud on AWS, extending or moving to the cloud is no longer a daunting task. We discuss use cases and solutions while also providing a detailed walkthrough of Continue reading
By Bruce Davie, CTO, Asia Pacific & Japan
As I’m currently preparing my breakout session for VMworld 2019, I’ve been spending plenty of time looking into what’s new in the world of networking. A lot of what’s currently happening in networking is driven by the requirements of modern applications, and in that context it’s hard to miss the rise of service mesh. I see service mesh as a novel approach to meeting the networking needs of applications, although there is rather more to it than just networking.
There are about a dozen talks at VMworld this year that either focus on service mesh or at least touch on it – including mine – so I thought it would be timely to comment on why I think this technology has appeared and what it means for networking.
To be clear, there are a lot of different ways to implement a service mesh today, of which Istio – an open-source project started at Google – is probably the most well-known. Indeed some people use Istio as a synonym for service mesh, but the broader use of the term rather than a particular implementation is my Continue reading
By Tom Gillis, SVP/GM of Networking and Security BU
When we first announced our intent to acquire Avi Networks, the excitement within our customer base, with industry watchers and within our own business was overwhelming. IDC analysts wrote, “In announcing its intent to acquire software ADC vendor Avi Networks, VMware both enters the ADC market and transforms its NSX datacenter and multicloud network-virtualization overlay (NVO) into a Layer 2-7 full-stack SDN fabric (1).”
Avi possesses exceptional alignment with VMware’s view of where the network is going, and how data centers must evolve to operate like public clouds to help organizations reach their full digital potential. It’s for these reasons that I am happy to announce VMware has closed the acquisition of Avi Networks and they are now officially part of the VMware family going forward.
I’ve heard Pat Gelsinger say many times that VMware wants to aggressively “automate everything.” With Avi, we’re one step closer to meeting this objective. The VMware and Avi Networks teams will work together to advance our Virtual Cloud Network vision, build out our full stack L2-7 services, and deliver the public cloud experience for on-prem environments. We will introduce the Avi platform Continue reading
The latest version of VMware Cloud on AWS SDDC (SDDC Version 1.7) was released recently and is being rolled out to customers. In this post, I’ll discuss the new NSX Networking and Security features.
Looking at the features released in VMware Cloud on AWS SDDC 1.7, we can see they can be grouped into three categories: Connectivity, Services, and Operations. Further below, I go into more detail on each of these NSX features. For a complete list of all new features in VMware Cloud on AWS SDDC 1.7, check out the release notes here. Continue reading
In this post I will introduce security groups and show how they can be used to enable certain scenarios.
Security groups allow fine-grained access control to - and from - the oVirt VMs attached to external OVN networks.
The Networking API v2 defines security groups as a whitelist of rules: the user specifies which traffic is allowed. That means that when the rule list is empty, neither incoming nor outgoing traffic is allowed (from the VM's perspective).
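As a rough sketch of what those API calls look like against a Neutron-compatible endpoint (host name, port, and the omitted authentication headers are my assumptions, not oVirt-specific values), creating a group and allowing inbound SSH might go like this:

# Create a security group, then allow inbound SSH; with an otherwise
# empty rule list, all other traffic to the port stays blocked
curl -k -X POST https://ovirt-host:9696/v2.0/security-groups \
  -H "Content-Type: application/json" \
  -d '{"security_group": {"name": "ssh-only", "description": "allow ssh"}}'

curl -k -X POST https://ovirt-host:9696/v2.0/security-group-rules \
  -H "Content-Type: application/json" \
  -d '{"security_group_rule": {
        "security_group_id": "<group-uuid-from-previous-reply>",
        "direction": "ingress", "ethertype": "IPv4", "protocol": "tcp",
        "port_range_min": 22, "port_range_max": 22,
        "remote_ip_prefix": "0.0.0.0/0"}}'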
A demo recording of the security group feature can be found below.
This repo adds tools, and information on how to use them, to help manage security groups in oVirt, since there is currently no supported mechanism to provision security groups other than the REST API and ManageIQ. ManageIQ also doesn't fully support security groups, since it lacks a way to attach them to logical ports.
The following links also include playbooks that can be built upon to implement different types of scenarios.
An attendee in my Building Next-Generation Data Center online course was asked to deploy numerous relatively small OpenStack cloud instances and wanted to select the optimum virtual networking technology. Not surprisingly, every $vendor had just the right answer, including Arista:
We’re considering moving from hypervisor-based overlays to ToR-based overlays using Arista’s CVX for approximately 2000 VLANs.
As I explained in Overlay Virtual Networking, Networking in Private and Public Clouds, and Designing Private Cloud Infrastructure (plus several presentations), you have three options for implementing virtual networking in private clouds:
Read more ...