In the 4.2 release we have split our oVirt Ansible roles into separate RPM packages and separate git repositories, so a user can install a specific role either from Ansible Galaxy or as an RPM package.
The main reason for splitting the roles into separate packages and git repositories was their usage from AWX/Ansible Tower. Since Ansible Galaxy only integrates with GitHub, each Ansible role must live in its own git repository in order to appear as a separate role in Galaxy. Previously we stored all the roles in one single repository, and because of that, manual configuration was required to make those roles usable in AWX/Ansible Tower. So, as you can see in the image below, we now have many roles in Ansible Galaxy under the oVirt user:
There are still two ways to install the roles: either using Ansible Galaxy or using the RPM packages available from the oVirt repositories.
You are now able to install just a single role, and not necessarily all of them at once as in previous versions. For example, to install just the oVirt cluster upgrade role, you have to run Continue reading
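As a rough sketch (the exact role and package names below are assumptions based on the per-role naming scheme described above, so double-check them on Galaxy or in the oVirt repositories), installing only the cluster upgrade role would look something like this:

```bash
# Install the role from Ansible Galaxy (roles are published under the oVirt user):
ansible-galaxy install ovirt.cluster-upgrade

# ...or install it as an RPM package from the oVirt repositories:
yum install ovirt-ansible-cluster-upgrade
```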
Hi folks, recently my friend and colleague Tony James prepared and delivered an excellent webinar internally at Red Hat on how to configure Open Virtual Network (OVN) in Red Hat Virtualization. For those of you who are unfamiliar with OVN, or with what it offers, allow me to provide you with the proper illumination.
Way back in the dark ages, the only way that mere mortals could get encapsulation, segmentation, and the other benefits of SDN in RHV was via third-party integration, or by tapping into an existing OpenStack deployment through the RHV Neutron integration. Recently, though, native SDN (via OVN) has arrived as a Tech Preview in RHV 4.1, and I'm going to spend the next few posts going over the basics.
NOTE – Tech Preview is Red Hat's way of providing the software bits for folks to try out, but there is no support for software in Tech Preview. The official statement is here. In short, the more interest shown and bugs filed against a Tech Preview feature, the sooner it moves to fully supported, production status.
The current fully supported virtual networking in RHV is built around “Linux Bridging”. It’s solid and it’s simple. That is to say that Continue reading
Now oVirt features a simple way to prevent a host from scanning and then activating logical volumes that are not required directly by the host. In particular, the solution addresses logical volumes on shared storage managed by oVirt, and logical volumes created by a guest in oVirt raw volumes. Why is a solution required? Because scanning and activating other logical volumes may cause data corruption, slow boot, and other issues.
The solution is configuring an LVM filter on each host, which allows LVM on a host to scan only the logical volumes required directly by the host. To achieve this, we have introduced a vdsm-tool command, config-lvm-filter, that will configure the host for you.
The new command, vdsm-tool config-lvm-filter, analyzes the current LVM configuration to decide whether a filter should be configured. Then, if the LVM filter has yet to be configured, the command generates an LVM filter option for this host and adds the option to the LVM configuration.
On a host yet to be configured, the command automatically configures the LVM once the user confirms the operation:
# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:
logical volume: /dev/mapper/vg0-lv_home
Continue reading
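For illustration only, the option the tool adds whitelists the devices the host itself mounts and rejects everything else; the device path in this sketch is hypothetical, and the real one is derived from the host's mounted logical volumes:

```bash
# Inspect the generated option after the tool has run:
grep '^filter' /etc/lvm/lvm.conf
# Hypothetical output, accepting only the host's own PV and rejecting the rest:
# filter = [ "a|^/dev/sda2$|", "r|.*|" ]
```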
In oVirt 4.2 we have introduced support for the Link Layer Discovery Protocol (LLDP). It is used by network devices to advertise their identity and capabilities to neighbors on a LAN. The information gathered by the protocol can be used for better network configuration. Learn more about LLDP.
When adding a host into an oVirt cluster, the network administrator usually needs to attach various networks to it. However, a modern host can have multiple interfaces, each with its own non-descriptive name.
In the screenshot below, taken from the Administration Portal, a network administrator has to know to which interface to attach the network named m2 with VLAN_ID 162. Should it be interface enp4s0, ens2f0 or even ens2f1? With oVirt 4.2, the administrator can hover over enp4s0 and see that this interface is connected to peer switch rack01-sw03-lab4, and learn that this peer switch does not support VLAN 162 on this interface. By looking at every interface, the administrator can choose which interface is the right option for network m2.
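The same neighbor details can also be inspected directly on the host with lldptool from the lldpad package; a minimal sketch (the interface name is only an example, and the exact TLVs shown depend on the peer switch):

```bash
# Enable LLDP receive/transmit on the interface (lldpad must be running):
lldptool set-lldp -i enp4s0 adminStatus=rxtx

# Print the TLVs advertised by the neighbor: system name, port, VLANs, ...
lldptool get-tlv -n -i enp4s0
```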
A similar situation arises with the configuration of mode 4 bonding (LACP). Configuring LACP usually starts with the network administrator defining a port group Continue reading
On November 8, oVirt 4.2 saw the introduction of an important behind-the-scenes enhancement.
The change is associated with the exchange of information between the engine and the VDSM. It addresses the issue of multiple abstraction layers, with each layer needing to convert its input into a suitably readable format in order to report to the next layer.
This change improves data communication between the engine and Libvirt - the tool that manages platform virtualization.
Previously, the configuration file for a newly created virtual machine (VM) originated in the engine as a map or dictionary. Then, in the VDSM, it was converted into an XML file that was readable by Libvirt. This process required a greater coding effort which in turn slowed down the development process.
Now, this map or dictionary has been replaced by engine XML, an XML configuration file that complies with the Libvirt API. VDSM now simply routes this Libvirt-readable file to/from Libvirt, in VM lifecycle (virt) related flows.
For oVirt users, it's business as usual.
However, if you are a developer dealing with debugging issues that involve running a VM, simply be aware that the Domain XML is now generated by ovirt-engine, Continue reading
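If you want to inspect that engine-generated domain XML while debugging, one place to look is the VDSM log on the host, where the XML is written out when the VM is started; a quick sketch using the default log path (adjust it to your setup):

```bash
# Find the engine-generated libvirt domain XML for a starting VM:
grep -n '<domain' /var/log/vdsm/vdsm.log
```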
Assume you have an oVirt cluster with hundreds of VM networks. Now you add a new host to the cluster. In order for it to move to the Operational state, it must have all the required networks attached to it. The easiest way to do this is to attach the networks to a label, and then place that label on a NIC of the added host. However, if there are too many networks, Engine could fail to set them all up at once. This is caused by a slow VDSM setupNetworks call that is not able to finish within the 180-second vdsTimeout of Engine.
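As a sketch of that label-based flow via the REST API (the IDs are placeholders, and the networklabels sub-collections reflect my reading of the v4 API; verify the paths against your engine's API documentation):

```bash
# Attach the label "prod" to a logical network...
curl -k -u admin@internal:PASSWORD \
  -H 'Content-Type: application/xml' \
  -d '<network_label id="prod"/>' \
  'https://engine.example.com/ovirt-engine/api/networks/NETWORK_ID/networklabels'

# ...then place the same label on a NIC of the new host, which asks Engine to
# attach every network carrying that label to this NIC in a single operation:
curl -k -u admin@internal:PASSWORD \
  -H 'Content-Type: application/xml' \
  -d '<network_label id="prod"/>' \
  'https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/nics/NIC_ID/networklabels'
```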
The VDSM performance changes will be included in ovirt-4.2 and are currently in ovirt-master.
The initscripts performance patch is targeted for EL 7.5.
The following table shows the maximal number of networks that can be handled within the vdsTimeout. The measured setupNetworks command handles one network with a static IP and N VLAN+bridge networks with no IP. The edit case covers a move of all the networks from one NIC to another.
Please note that the given numbers are for reference only.
| installed | N | add | edit | del |
|---|---|---|---|---|
| ovirt-4.2 | 190 | 180s | 127s | 67s |
| ovirt-4.2 and patched initscripts | 350 | 138s | 176s | 89s |
| ovirt-4.1 | 150 | Continue reading |
On October 31st, the oVirt project released version 4.2.0 Beta, available for Red Hat Enterprise Linux 7.4, CentOS Linux 7.4, or similar.
Since the release of oVirt 4.2.0 Alpha a month ago, a substantial number of stabilization fixes have been introduced.
Support for LLDP, a protocol used by network devices to advertise their identity and capabilities to neighbors on a LAN, has been added. LLDP information can now be displayed both in the UI and via the API. The information gathered by the protocol can be used for better network configuration.
oVirt 4.2.0 Beta features Gluster 3.12.
oVirt's hyperconverged solution now enables a single replica Gluster deployment.
OVN (Open Virtual Network) is now fully supported and recommended for isolated overlay networks. OVN is automatically deployed on the host, and made available for VM connectivity.
Snapshots can now be uploaded and downloaded via the REST API (and the SDKs).
An improvement has been introduced to the self-hosted engine. Now, the self-hosted-engine will connect to all IPs discovered, allowing both higher performance via multiple paths as well as high availability in the event that one of the targets fails.
A new Continue reading
On behalf of oVirt and the Xen Project, we are excited to announce that the call for proposals is now open for the Virtualization & IaaS devroom at the upcoming FOSDEM 2018.
This year will mark FOSDEM’s 18th anniversary as one of the longest-running free and open source software developer events, attracting thousands of developers and users from all over the world. FOSDEM will take place in Brussels, Belgium, February 3 & 4, 2018.
Also coming up is DEVCONF, the 10th annual free community conference for developers, admins, and users of free and open source Linux and JBoss technologies. DEVCONF will take place in Brno, Czech Republic, January 26-28, 2018.
The Virtualization & IaaS devroom at FOSDEM is a collaborative effort, organized by dedicated folks from projects such as OpenStack, Xen Project, oVirt, QEMU, and Foreman. Featured sessions will include topics such as open source hypervisors and virtual machine managers such as Xen Project, KVM, bhyve, and VirtualBox, and Infrastructure-as-a-Service projects such as Apache CloudStack, OpenStack, oVirt, QEMU, OpenNebula, and Ganeti.
This devroom will host presentations that focus on topics of shared interest, such as KVM; libvirt; shared storage; virtualized networking; cloud security; clustering and high availability; interfacing with Continue reading
In every SDDC workshop I tried to persuade the audience that virtual appliances (particularly per-application instances of virtual appliances) are the way to go. I usually got questions along the lines of “who will manage and audit all these instances?”, but once someone asked “and how will we upgrade them?”
Short answer: you won’t.
Read more ...
I originally wrote this post on my personal blog at HumairAhmed.com. You can also see many of my prior blogs on multisite and Cross-vCenter NSX here on the VMware Network Virtualization blog site. This post expands on my prior post, Multi-site Active-Active Solutions with NSX-V and F5 BIG-IP DNS. Specifically, in this post I demonstrate deploying applications in an Active-Active model across data centers where ingress/egress always occurs at the data center local to the client; in other words, localized ingress/egress. Continue reading
Packet is a hardware-as-a-service vendor that provides dedicated servers on demand at very low cost. For me and my readers, Packet offers a solution to the problem of using cloud services to run complex network emulation scenarios that require hardware-level support for virtualization. Packet users may access powerful servers that empower them to perform activities they could not run on a normal personal computer.
In this post, I will describe the procedure to set up an on-demand bare metal server and to create and maintain persistent data storage for applications. I will describe a generic procedure that can be applied to any application and that works for users who access Packet services from a laptop computer running any of the common operating systems: Windows, Mac, and Linux. In a future post, I will describe how I run network emulation scenarios on a Packet server.
Bringing high performance virtual machines to oVirt!
Introducing a new VM type in oVirt 4.2.0 Alpha. A newly added checkbox in the all-new Administration Portal delivers the highest possible virtual machine performance, very close to bare metal.
Some of the magic includes:
For the full feature set, see the very detailed High Performance VM feature page
Simple. Go to the Administration Portal and from the vertical menu select Compute > Virtual machines. Click the New VM tab to open the New Virtual Machine dialog box. In the General tab, next to the Optimized for field, click the drop-down menu and select High Performance. Click OK. Depending on your current configuration, a smart pop-up may open with a list of additional recommended manual configurations, specific to your setup. To address these recommended changes, click Cancel.
New Virtual Machine dialog box with the High Performance VM type highlighted
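Outside the UI, the same VM type can be set through the REST API; a sketch, assuming the 4.2 API accepts high_performance as a VM type (the VM name, cluster, and template below are placeholders):

```bash
# Create a new VM of the High Performance type via the REST API:
curl -k -u admin@internal:PASSWORD \
  -H 'Content-Type: application/xml' \
  -d '<vm>
        <name>hp-vm-01</name>
        <type>high_performance</type>
        <cluster><name>Default</name></cluster>
        <template><name>Blank</name></template>
      </vm>' \
  'https://engine.example.com/ovirt-engine/api/vms'
```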
Starting up a virtual machine (VM) is not an easy task: a lot is going on hidden from plain sight, and studying it on your own is challenging. The goal of this post is to simplify the process of learning how the oVirt hypervisor works. The concept of oVirt is illustrated through the process of starting up a VM, which covers everything from top to bottom.
Disclaimer: I am an engineer working close to the host part of oVirt, so my knowledge of the engine is limited.
Let me start by explaining the main parts of the oVirt hypervisor architecture. It will help a lot in understanding the overall VM startup flow. The following is the simplified architecture, where I omit auxiliary components and support scripts. This allows me to focus on the core concept without distractions.
The architecture comprises three main components: 1) the web UI and engine, 2) VDSM, and 3) the guest agent.
The web UI is where the user makes the first contact with the oVirt hypervisor. The web UI is the main command center of the engine and allows a user to control all aspects and states of Continue reading
On September 28, the oVirt project released version 4.2.0 Alpha, available for Red Hat Enterprise Linux 7.4, CentOS Linux 7.4, or similar.
This pre-release version should not be used in production, and is not feature complete.
oVirt is the open source virtualization solution that provides an awesome KVM management interface for multi-node virtualization. This pre-release gives an early look at some nice new features.
Here's an overview of the new main features:
The Administration Portal has been redesigned from scratch using Patternfly, a widely adopted standard in web application design that promotes consistency and usability across IT applications, through UX patterns and widgets. The result is a cleaner, more intuitive and user-friendly user interface. The old horizontal menu has been replaced by a two-level vertical menu. The system tree is gone, and its functionality has been integrated into the vertical menus. Here are some screenshots:
Dashboard
Virtual Machines View
Adding a New Virtual Machine
Storage View
An all-new VM Portal for non-admin users - designed with a React-based UI and Patternfly principles - replaces the existing User Portal. Built with performance and ease of use in mind, Continue reading
VMware started talking about VMware Cloud on AWS a while ago, and my first response was “yeah, it’s just vCloud Air but they wanted to get rid of CapEx, so it’s running on someone else’s servers”
Last week Frank Denneman published a technical overview of the solution and I was mostly correct.
Read more ...
DPDK (Data Plane Development Kit) provides high-performance packet processing libraries and user space drivers. Open vSwitch uses the DPDK libraries to run its packet processing entirely in user space. According to Intel ONP performance tests, using OVS with DPDK can provide a huge performance enhancement, increasing network packet throughput and reducing latency.
OVS-DPDK has been added to the oVirt Master branch as an experimental feature. The following post describes the OVS-DPDK installation and configuration procedures.
Please note: Accessing the OVS-DPDK feature requires installing the oVirt Master version. In addition, OVS-DPDK cannot access any features located within the Linux kernel. This includes Linux bridge, tun/tap devices, iptables, etc.
In order to achieve the best performance, please follow the instructions at: http://dpdk.org/doc/guides-16.11/linux_gsg/nic_perf_intel_platform.html
The network card must be on the supported vendor matrix, located here: http://dpdk.org/doc/nics
Ensure that your system supports 1GB hugepages:
grep -q pdpe1gb /proc/cpuinfo && echo "1GB supported"
If 1GB pages are supported, reserve them by adding the following kernel boot parameters, where N is the number of 1GB hugepages to allocate:
iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=N
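A sketch of applying those parameters on EL7 with grubby and verifying the reservation after a reboot (the hugepage count of 8 is just an example; size it to your host's memory):

```bash
# Add the IOMMU and 1GB hugepage arguments to all installed kernels:
grubby --update-kernel=ALL \
  --args="iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=8"

# After rebooting, confirm that the hugepages were reserved:
grep Huge /proc/meminfo
```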
In the event that 1GB Continue reading
Building a platform is hard enough, and there are very few companies that can build something that scales, supports a diversity of applications, and, in the case of either cloud providers or software or whole system sellers, can be suitable for tens of thousands, much less hundreds of thousands or millions, of customers.
But if building a platform is hard, keeping it relevant is even harder, and those companies who demonstrate the ability to adapt quickly and to move to new ground while holding old ground are the ones that get to make money and wield influence in the datacenter. …
VMware’s Platform Revolves Around ESXi, Except Where It Can’t was written by Timothy Prickett Morgan at The Next Platform.
|  | Openstack | Amazon AWS | VMware (VSwitch / DVSwitch) |
|---|---|---|---|
| Virtual Network | Continue reading |