In oVirt 4.2 we have introduced support for the Link Layer Discovery Protocol (LLDP). Network devices use it to advertise their identity and capabilities to neighbors on a LAN, and the information gathered by the protocol can be used for better network configuration. Learn more about LLDP.
When adding a host to an oVirt cluster, the network administrator usually needs to attach various networks to it. However, a modern host can have multiple interfaces, each with its own non-descriptive name.
In the screenshot below, taken from the Administration Portal, a network administrator has to know which interface to attach the network named m2 with VLAN_ID 162 to. Should it be interface enp4s0, ens2f0, or even ens2f1? With oVirt 4.2, the administrator can hover over enp4s0, see that this interface is connected to peer switch rack01-sw03-lab4, and learn that this peer switch does not support VLAN 162 on this interface. By looking at every interface, the administrator can choose the right one for network m2.
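For readers who want to see the raw data behind that tooltip, here is a minimal sketch of querying LLDP neighbor information directly from a host's shell. It assumes the lldpad service is available on the host; the interface name enp4s0 is simply the one from the example above.

```
# Make sure the LLDP agent is running on the host.
systemctl start lldpad

# Allow the interface to send and receive LLDP frames.
lldptool set-lldp -i enp4s0 adminStatus=rxtx

# Print the neighbor TLVs: chassis ID, port ID, system name, VLAN names, etc.
lldptool get-tlv -n -i enp4s0
```

The System Name and VLAN Name TLVs in the output roughly correspond to the peer switch name and supported VLANs surfaced in the Administration Portal.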
A similar situation arises with the configuration of mode 4 bonding (LACP). Configuring LACP usually starts with the network administrator defining a port group Continue reading
On behalf of oVirt and the Xen Project, we are excited to announce that the call for proposals is now open for the Virtualization & IaaS devroom at the upcoming FOSDEM 2018.
This year will mark FOSDEM’s 18th anniversary as one of the longest-running free and open source software developer events, attracting thousands of developers and users from all over the world. FOSDEM will take place in Brussels, Belgium, February 3 & 4, 2018.
Also coming up is DEVCONF, the 10th annual free community conference for developers, admins, and users of free and open source Linux and JBoss technologies. DEVCONF will take place in Brno, Czech Republic, January 26-28, 2018.
This Virtualization & IaaS devroom at FOSDEM is a collaborative effort, organized by dedicated folks from projects such as OpenStack, Xen Project, oVirt, QEMU, and Foreman. Featured sessions will cover open source hypervisors and virtual machine managers such as Xen Project, KVM, bhyve, and VirtualBox, as well as Infrastructure-as-a-Service projects such as Apache CloudStack, OpenStack, oVirt, QEMU, OpenNebula, and Ganeti.
This devroom will host presentations that focus on topics of shared interest, such as KVM; libvirt; shared storage; virtualized networking; cloud security; clustering and high availability; interfacing with Continue reading
In every SDDC workshop I tried to persuade the audience that virtual appliances (particularly per-application instances of virtual appliances) are the way to go. I usually got questions along the lines of “who will manage and audit all these instances?” but once someone asked “and how will we upgrade them?”
Short answer: you won’t.
Read more ...

I originally wrote this post on my personal blog at HumairAhmed.com. You can also see many of my prior blogs on multisite and Cross-vCenter NSX here on the VMware Network Virtualization blog site. This post expands on my prior post, Multi-site Active-Active Solutions with NSX-V and F5 BIG-IP DNS. Specifically, in this post, deploying applications in an Active-Active model across data centers is demonstrated where ingress/egress is always at the data center local to the client; in other words, localized ingress/egress. Continue reading
Packet is a hardware-as-a-service vendor that provides dedicated servers on demand at very low cost. For me and my readers, Packet offers a solution to the problem of using cloud services to run complex network emulation scenarios that require hardware-level support for virtualization. Packet users may access powerful servers that let them run workloads they could not run on a normal personal computer.
In this post, I will describe the procedure to set up an on-demand bare metal server and to create and maintain persistent data storage for applications. I will describe a generic procedure that can be applied to any application and that works for users who access Packet services from a laptop computer running any of the common operating systems: Windows, Mac, and Linux. In a future post, I will describe how I run network emulation scenarios on a Packet server.
Bringing high performance virtual machines to oVirt!
Introducing a new VM type in oVirt 4.2.0 Alpha. A newly added option in the all-new Administration Portal delivers the highest possible virtual machine performance, very close to bare metal.
Some of the magic includes:
For the full feature set, see the very detailed High Performance VM feature page
Simple. Go to the Administration Portal and from the vertical menu select Compute > Virtual machines. Click the New VM tab to open up the New Virtual Machine dialog box. In the General tab next to the Optimized for field, click the drop down menu and select High Performance. Click OK. Depending on your current configurations, a smart pop-up may open with a list of additional recommended manual configurations, specific to your setup. To address these recommended changes, click Cancel.
New Virtual Machine dialog box with the High Performance VM type highlighted
Starting up a virtual machine (VM) is not an easy task: there is a lot going on hidden from plain sight, and studying it alone is challenging. The goal of this post is to simplify the process of learning how the oVirt hypervisor works. The concept of oVirt is illustrated through the process of starting up a VM, which covers everything from top to bottom.
Disclaimer: I am an engineer working close to the host part of oVirt, therefore my knowledge of the engine is limited.
Let me start by explaining the main parts of the oVirt hypervisor architecture. It will help a lot in understanding the overall process of the VM startup flow. The following is the simplified architecture, in which I omit auxiliary components and support scripts. This allows me to focus on the core concept without distractions.
The architecture comprises three main components: 1) web UI and engine, 2) VDSM, 3) guest agent.
The web UI is where the user makes the first contact with the oVirt hypervisor. The web UI is the main command center of the engine and allows a user to control all aspects and states of Continue reading
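As a side note, a quick way to poke at the VDSM component mentioned above is from the host itself; this is just an illustrative sketch and assumes the vdsm-client tool shipped with VDSM is installed on the host.

```
# Ask VDSM which virtual machines it is currently running on this host.
vdsm-client Host getVMList

# Dump the capability report that VDSM exposes to the engine
# (CPU model, network devices, supported features, ...).
vdsm-client Host getCapabilities
```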
On September 28, the oVirt project released version 4.2.0 Alpha, available for Red Hat Enterprise Linux 7.4, CentOS Linux 7.4, or similar.
This pre-release version should not be used in production, and is not feature complete.
oVirt is the open source virtualization solution that provides an awesome KVM management interface for multi-node virtualization. This pre-release brings some nice new features.
Here's an overview of the new main features:
The Administration Portal has been redesigned from scratch using Patternfly, a widely adopted standard in web application design that promotes consistency and usability across IT applications, through UX patterns and widgets. The result is a cleaner, more intuitive and user-friendly user interface. The old horizontal menu has been replaced by a two-level vertical menu. The system tree is gone, and its functionality has been integrated into the vertical menus. Here are some screenshots:
Dashboard
Virtual Machines View
Adding a New Virtual Machine
Storage View
An all-new VM Portal for non-admin users - designed with a React-based UI and Patternfly principles - replaces the existing User Portal. Built with performance and ease of use in mind, Continue reading
VMware started talking about VMware Cloud on AWS a while ago, and my first response was “yeah, it’s just vCloud Air but they wanted to get rid of CapEx, so it’s running on someone else’s servers.”
Last week Frank Denneman published a technical overview of the solution, and I was mostly correct.
Read more ...

DPDK (Data Plane Development Kit) provides high-performance packet processing libraries and user space drivers. Open vSwitch can use the DPDK libraries to operate entirely in user space. According to Intel ONP performance tests, using OVS with DPDK can provide a huge performance enhancement, increasing network packet throughput and reducing latency.
OVS-DPDK has been added to the oVirt Master branch as an experimental feature. The following post describes the OVS-DPDK installation and configuration procedures.
Please note: Accessing the OVS-DPDK feature requires installing the oVirt Master version. In addition, OVS-DPDK cannot access any features located within the Linux kernel. This includes Linux bridge, tun/tap devices, iptables, etc.
In order to achieve the best performance, please follow the instructions at: http://dpdk.org/doc/guides-16.11/linux_gsg/nic_perf_intel_platform.html
The network card must be on the supported vendor matrix, located here: http://dpdk.org/doc/nics
Ensure that your system supports 1GB hugepages:
grep -q pdpe1gb /proc/cpuinfo && echo "1GB supported"
Then add the following parameters to the kernel command line, replacing N with the number of 1GB hugepages to reserve:
iommu=pt intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=N
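As a rough sanity check (not part of the original instructions), after rebooting with the parameters above you can confirm that the hugepages were actually reserved and that hugetlbfs is mounted; the mount point name below is arbitrary.

```
# HugePages_Total should match the N you passed on the kernel command line,
# and Hugepagesize should report 1048576 kB.
grep Huge /proc/meminfo

# Mount hugetlbfs for 1GB pages if your distribution has not done so already.
mkdir -p /dev/hugepages1G
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
```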
In the event that 1GB Continue reading

Building a platform is hard enough, and there are very few companies that can build something that scales, supports a diversity of applications, and, in the case of either cloud providers or software or whole system sellers, can be suitable for tens of thousands, much less hundreds of thousands or millions, of customers.
But if building a platform is hard, keeping it relevant is even harder, and those companies who demonstrate the ability to adapt quickly and to move to new ground while holding old ground are the ones that get to make money and wield influence in the datacenter. …
VMware’s Platform Revolves Around ESXi, Except Where It Can’t was written by Timothy Prickett Morgan at The Next Platform.
|  | Openstack | Amazon AWS | VMware (VSwitch / DVSwitch) |
| --- | --- | --- | --- |
| Virtual Network |  |  |  |

Continue reading
We’re almost done with our data center infrastructure optimization journey. In this step, we’ll virtualize the network services.
The recent post, An Introduction to Ansible Roles, discussed the new roles that were introduced in the oVirt 4.1.6 release. This follow-up post will explain how to set up and use Ansible roles, using either Ansible Galaxy or oVirt Ansible Roles RPM.
To make life easier, Ansible Galaxy stores multiple Ansible roles, including the oVirt Ansible roles. To install the roles, perform the following steps:
To install roles on your local machine, run the following command:
$ ansible-galaxy install ovirt.ovirt-ansible-roles
This will install the roles into the /etc/ansible/roles/ovirt.ovirt-ansible-roles/ directory.
By default, Ansible only searches for roles in the /etc/ansible/roles/ directory and your current working directory. To change the directories where Ansible looks for roles, modify the roles_path option in the [defaults] section of the ansible.cfg configuration file. The default location of this file is /etc/ansible/ansible.cfg.
$ sed -i 's|#roles_path = /etc/ansible/roles|roles_path = /etc/ansible/roles:/etc/ansible/roles/ovirt.ovirt-ansible-roles/roles|' /etc/ansible/ansible.cfg
For more information on changing the directories where Ansible searches for roles, see the Ansible documentation pages.
Copy one of the examples from the /etc/ansible/roles/ovirt.ovirt-ansible-roles/examples/ directory into your working directory, then modify the needed variables and run the playbook, as sketched below.
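A minimal sketch of that workflow follows. The playbook name ovirt_infra.yml is hypothetical; use whichever example file your installation actually provides under examples/ and adjust the variables (engine URL, credentials, and so on) before running it.

```
# Copy an example playbook into the current working directory.
cp /etc/ansible/roles/ovirt.ovirt-ansible-roles/examples/ovirt_infra.yml .

# Edit the variables at the top of the playbook, then run it.
vi ovirt_infra.yml
ansible-playbook ovirt_infra.yml
```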
In the latest oVirt repositories Continue reading
When a radically different technology comes along it usually takes time before we figure out how to apply it. When we had steam engines running factories there was one engine in each factory with a giant driveshaft running through the whole factory. When the electric engine came along people started replacing the giant steam engine with a giant electric motor. It took time before people understood that they could deploy several small motors in different parts of the factory and connect electric cables rather than having a common driveshaft. It takes time to understand the technology and its applicability.
The situation with unikernels is similar. We have this new thing and to some extent we’re using it to replace some general purpose operating system workloads. But we’re still very much limited by how we think about operating systems and computers.
Unikernels are radically different. Naturally, the question of the killer app has come up on a number of occasions. As unikernels are quite different from the dominant operating systems of today, it isn’t as easy to spot what it will be. Here I’ll try to answer why it’s hard to spot the killer app.
Let’s start Continue reading
As a developer, one drawback of using Google Web Toolkit (GWT) for the oVirt Administration Portal (aka webadmin) is that the GWT compile process takes an exceptionally long time. If you make a change in some code and rebuild the ovirt-engine project using make install-dev ..., you'll be waiting several minutes to test your change. In practice, such a long delay in the usual code-compile-refresh-test cycle would be unbearable.
Luckily, we can use GWT Super Dev Mode ("SDM") to start up a quick refresh-capable instance of the application. With SDM running, you can make a change in GWT and test the refreshed change within seconds.
If you want to step through code and use the Chrome debugger, oVirt and SDM don't work well together for debugging due to the oVirt Administration Portal's code and source map size. Therefore, below we demonstrate how to disable source maps.
Open a terminal, build the engine normally, and start it.
```
make clean install-dev PREFIX=$HOME/ovirt-engine DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.cssResourceStyle=pretty -Dgwt.userAgent=safari" BUILD_UT=0 DEV_EXTRA_BUILD_FLAGS="-Dgwt.compiler.localWorkers=1"
…
$HOME/ovirt-engine/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py start
```
In a second terminal, run:
Chrome:
make gwt-debug DEV_BUILD_GWT_SUPER_DEV_MODE=1 DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.userAgent=safari"
or
Firefox:
make gwt-debug DEV_BUILD_GWT_SUPER_DEV_MODE=1 DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.userAgent=gecko1_8"
Wait about two minutes Continue reading
Network optimization is an incredibly important component of scalability and efficiency. Without solid network optimization, an organization will be confronted with quickly building overhead and vastly reduced efficiency. Network optimization helps a business make the most of its technology, reducing costs and even improving security. Through virtualization, businesses can leverage their technology more effectively — they just need to follow a few virtual networking best practices.
There are certainly applications that are optional, but there are others that are critical. The most important applications on a network are the ones that need to be prioritized in terms of system resources. These are generally cyber security suites, firewalls, and monitoring services. Optional applications may still be preferred for business operations, but because they aren’t critical, they can be allowed to run slowly in the event of system-wide issues.
Prioritizing security applications is especially important as there are many cyber security exploits that operate with the express purpose of flooding the system until security elements fail. When security apps are prioritized, the risk of this type of exploit is greatly reduced.
Application monitoring services will be able to automatically detect when Continue reading
At times I have trouble focusing on writing articles for some of the presentations I am exposed to at Tech Field Day. Because of that, I really wanted to try something different. This article is more of my free-formed thoughts about NSX and why I’m excited to deploy it at my current $job. From the time I heard that the NSX team was going to be presenting at TFD15 for 4 hours, I knew that I would be writing this article. Unfortunately, it took me far too long to gather up this half-formed thought.
I love the concept of Micro-Segmentation that NSX enables. Think of NSX as a virtual distributed firewall that is integrated with your hypervisor, but it really is so much more. It allows you to connect a security policy directly to the vNIC of your guest VMs. Attaching the policy to the VM allows it to follow the VM anywhere and everywhere it goes. You don’t have to worry about inter- or intra-VLAN segmentation, as all of that is done on each vNIC. On top of that, NSX’s firewall is PCI DSS 3.2 compliant! Another rather compelling Continue reading
Hi folks, one of the many things that I’ve been working on behind the scenes has finally seen the light of day: Best Practices for Red Hat Virtualization 4. This takes over where the product documentation leaves off. What I mean by that is this:
The product documentation is (mostly) great about telling you “how” to do the many activities related to deploying Red Hat Virtualization.
This new document tells you “why” to do many of the activities related to deploying Red Hat Virtualization. It does NOT have code examples, but it DOES have lots of things to consider. Things like:
In other words, when you go to plan out your deployment, this is the document that you want to read before you paint yourself into a corner. Many of the items are best practices, like “don’t turn off SELinux”. Others are more considerations and implications, like “NAS or SAN”.
If this is something you’re interested in, you can download it here:
Best Practices for Red Hat Virtualization 4
Hope this helps,
Captain KVM
The post Best Practices for RHV Continue reading