Archive

Category Archives for "Virtualization"

Build oVirt Reports Using Grafana

Grafana, the open platform for beautiful analytics and monitoring, recently added support for PostgreSQL.

It is now possible to connect Grafana to the oVirt DWH in order to visualize and monitor the oVirt environment.

Grafana dashboard example

If you wish to create dashboards to monitor your oVirt environment, you will need to install Grafana.

Grafana automatically creates an admin user and password.

You will need to add a PostgreSQL data source that connects to the DWH database.

For example, the data source should point at the DWH history database and use a read-only account.

You may want to add a read-only user for connecting to the history database - see Allowing read only access to the history database. A minimal sketch follows.
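The sketch below creates such a read-only user with psql; the role name, the password, and the assumption that the DWH database uses the default name ovirt_engine_history should be adapted to your environment.

# sudo -u postgres psql -d ovirt_engine_history <<'SQL'
-- create a login role for Grafana (placeholder name and password)
CREATE ROLE grafana_ro WITH LOGIN PASSWORD 'changeme';
-- allow it to connect and read, but not modify, the history data
GRANT CONNECT ON DATABASE ovirt_engine_history TO grafana_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_ro;
SQL

The PostgreSQL data source in Grafana then points at the DWH database host (typically port 5432) using this role.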

Now you can start creating your dashboard widgets.

Go to Dashboards -> + New.

Graph panel example:

To add a Graph-type panel, use the Row controls menu on the left side: go to + Add Panel and pick Graph.

Query example for the Five Most Utilized Hosts by Memory / CPU panel:

SELECT DISTINCT
    min(time) AS time,
    MEM_Usage,
    host_name || 'MEM_Usage' as metric
FROM (
    SELECT
        stats_hosts.host_id,
        CASE
            WHEN delete_date IS NULL
                THEN host_name
            ELSE
                host_name
                ||
                ' (Removed on '
                ||
                CAST ( CAST ( delete_date AS date ) AS varchar )
Continue reading

VMware Cloud on AWS with NSX: Communicating with Native AWS Resources

If you haven’t already, please read my prior two blogs on VMware Cloud on AWS: VMware SDDC with NSX Expands to AWS and VMware Cloud on AWS with NSX – Connecting SDDCs Across Different AWS Regions; both are also posted on my personal blog at humairahmed.com. The prior blogs provide a good introduction and information on some of the functionality and advantages of the service. In this blog post I expand the discussion to the advantages of VMware Cloud on AWS being able to communicate with native AWS resources. This is something you would want if you have native AWS EC2 instances that VMware Cloud on AWS workloads need to communicate with, or if you want to leverage other native AWS services like an AWS S3 VPC Endpoint or RDS. Continue reading

OVN and Red Hat Virtualization: Installing OVN

Hi folks, in the last article I provided an overview and introduction to OVN. This time around, I’ll provide a walkthrough on how to actually install it in your RHV environment. My colleague Tony created an Ansible playbook to automate the installation, and I’ll share the link to that at the end. Let’s get started.

Necessary Channels and Subscriptions

One of the first things that Tony covers in the demo is that he used the standard channels for both RHV-M (engine) and Hosts (hypervisors) – nothing special is needed from a subscription standpoint, as all of the packages are now included in RHV 4.1. Using the `ovs-vsctl show` command, we see that even though the openvswitch package is pulled in as part of the host install, nothing is configured by default.
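On a freshly installed host this looks roughly like the sketch below; the database UUID and the version string are illustrative.

# ovs-vsctl show
5c4f3a2b-1d2e-4f60-9a7b-0c1d2e3f4a5b
    ovs_version: "2.7.2"

No bridges or ports are listed, confirming that nothing has been configured yet.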

Automate the Installation with Ansible

Next, we see Tony’s Ansible playbook. It contains two plays: one for the engine (RHV-M) and one for the hosts. Not only does it install the packages, but it also configures firewalld. Specifically, the playbook does the following (a rough sketch of the playbook appears after these summaries):

On the Engine:

It installs the “ovirt-provider-ovn” package, then starts/restarts multiple services and sets the northbound and southbound connections.

On the Hosts:

It installs the “ovirt-provider-ovn-driver” package. Continue reading
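The sketch below shows the rough shape of such a playbook; it omits the firewalld and north/southbound connection configuration, and the inventory group names and service name are assumptions rather than Tony’s actual playbook.

# cat > ovn-provider-sketch.yml <<'EOF'
---
- name: Install the OVN provider on the engine
  hosts: engine
  become: true
  tasks:
    - name: Install the ovirt-provider-ovn package
      yum:
        name: ovirt-provider-ovn
        state: present
    - name: Restart and enable the provider service
      service:
        name: ovirt-provider-ovn
        state: restarted
        enabled: true

- name: Install the OVN driver on the hosts
  hosts: hypervisors
  become: true
  tasks:
    - name: Install the ovirt-provider-ovn-driver package
      yum:
        name: ovirt-provider-ovn-driver
        state: present
EOF
# ansible-playbook -i inventory ovn-provider-sketch.yml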

VMware Cloud on AWS with NSX: Connecting SDDCs Across Different AWS Regions

I previously shared this post on the LinkedIn publishing platform and my personal blog at HumairAhmed.com. In my prior blog post, I discussed how, with VMware Cloud on AWS (VMC on AWS), customers get the best of both worlds for their move to a Software Defined Data Center (SDDC) – the leading compute, storage, and network virtualization stack for enterprises, deployed on dedicated, elastic, bare-metal, and highly available AWS infrastructure. Another benefit of VMC on AWS, and the focus of this post, is that you can easily have a global footprint by deploying multiple VMC SDDCs in different regions. Continue reading

oVirt 4.2 Is Now Generally Available

We are delighted to announce the general availability of oVirt 4.2, as of December 19, 2017, for Red Hat Enterprise Linux 7.4, CentOS Linux 7.4, or similar.

oVirt 4.2 is an altogether more powerful and flexible open source virtualization solution. The release is a major milestone for the project, encompassing over 1000 individual changes and a wide range of enhancements spanning storage, network, engine, user interface, and analytics.

What’s new in oVirt 4.2?

The big new features:

The Administration Portal has been redesigned using Patternfly, a widely adopted standard in web application design that promotes consistency and usability across IT applications. The result is a more intuitive and user-friendly user interface, featuring improved performance. Here is a screenshot of the Administration Portal dashboard:

A new VM Portal for non-admin users. Built with performance and ease of use in mind, the new VM portal delivers a more streamlined experience.

A High Performance VM type has been added to the existing "Server" and "Desktop" types. The new type enables administrators to easily optimize a VM for high performance workloads.

The oVirt Metrics Store is a real-time monitoring solution, providing complete infrastructure visibility for decision making Continue reading

Monitor Your oVirt Environment with oVirt Metrics Store

The oVirt project now includes a unified metrics and logs real-time monitoring solution for the oVirt environment.

Using Elasticsearch - a search and analytics engine - and its native visualization layer, Kibana, we now provide oVirt project users with a fully functional monitoring solution.

The solution includes self-service dashboards for creating your own dashboards and reports, plus log analysis for both the engine and VDSM logs.

The Kibana dashboard

Combining Elasticsearch and Kibana - both deployed on top of the OpenShift Container Platform (OCP) - with the collectd and fluentd client-side daemons results in a powerful end-to-end solution.

For additional details, including how to set up the oVirt Metrics Store, please see the oVirt Metrics Store Feature page.

VMware SDDC with NSX Expands to AWS

I previously shared this post on the LinkedIn publishing platform and my personal blog at HumairAhmed.com. There has been a lot of interest in the VMware Cloud on AWS (VMC on AWS) service since its announcement and general availability. The response I received to this brief introductory post confirmed the interest and value consumers see in the new service, and I hope to share more details in several follow-up posts.

VMware Software Defined Data Center (SDDC) technologies like vSphere ESXi, vCenter, vSAN, and NSX have been leveraged by thousands of customers globally to build reliable, flexible, agile, and highly available data center environments running thousands of workloads. I’ve also discussed previously how partners leverage VMware vSphere products and NSX to offer cloud environments/services to customers. In the VMworld session NET1188BU: Disaster Recovery Solutions with NSX, I discussed how VMware Cloud Providers like iLand and IBM use NSX to provide cloud services like DRaaS. In 2016, VMware and AWS announced a strategic partnership, and at VMworld this year, general availability of VMC on AWS was announced; this new service, and how NSX is an integral component of it, is the focus of this post.

Continue reading

Enable nested virtualization on Google Cloud

Google Cloud Platform introduced nested virtualization support in September 2017. Nested virtualization is especially interesting for network emulation research since it allows users to run unmodified versions of popular network emulation tools like GNS3, EVE-NG, and Cloonix on a cloud instance.

Google Cloud supports nested virtualization using the KVM hypervisor on Linux instances. It does not support other hypervisors like VMware ESX or Xen, and it does not support nested virtualization for Windows instances.

In this post, I show how I set up nested virtualization in Google Cloud and I test the performance of nested virtual machines running on a Google Cloud VM instance.
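As a rough sketch of the approach: you create an image with the VMX license attached and boot instances from it. The disk, zone, and image names below are illustrative, and the license URL should be checked against current Google documentation.

# gcloud compute images create nested-vm-image \
    --source-disk example-disk --source-disk-zone us-central1-b \
    --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

Inside an instance booted from that image, a non-zero count from grep -cw vmx /proc/cpuinfo indicates that VMX is exposed to the guest and KVM can be used.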

Create Google Cloud account

Sign up for a free trial on Google Cloud. Google offers a generous three-hundred-dollar credit that is valid for one year, so you pay nothing until you have either consumed $300 worth of services or one year has passed. I have been hacking on Google Cloud for one month, using relatively large VMs, and I have consumed only 25% of my credits.

If you already use Google services like Gmail, then you already have a Google account, and adding Google Cloud to your account is easy. Continue reading

Customizing the host deploy process

In the 4.2 release we have introduced the possibility to customize the host-deploy process by running Ansible post-tasks after the host-deploy process successfully finishes.

The reason

Prior to the oVirt 4.2 release, administrators could customize a host's firewall rules using the engine-config option IPTablesConfigSiteCustom. Unfortunately, writing custom iptables rules into a string value to be used with engine-config was very user-unfriendly, and using engine-config to provide custom firewalld rules would be even worse. Because of this, we have introduced Ansible integration as part of the host-deploy flow, which allows administrators to add custom actions that are executed on the host during host deployment.
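To illustrate why this was awkward, the old approach meant packing raw iptables rules into a single configuration value, roughly like this (the rule itself is only an illustrative assumption):

# engine-config -s "IPTablesConfigSiteCustom=-A INPUT -p tcp --dport 12345 -j ACCEPT"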

Special tasks file

As part of this role we also include additional tasks, which can be written by the user, to modify the host-deploy process - for example, to open additional firewalld ports.

Those additional tasks can be added to the following file:

/etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml

This post-tasks file is executed as part of the host-deploy process, just before the network setup invocation.
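A minimal sketch of such a post-tasks file, assuming you want to open one extra firewalld port (the port number is illustrative):

# cat > /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml <<'EOF'
---
- name: Open an additional firewalld port on the host
  firewalld:
    port: 12345/tcp
    permanent: yes
    immediate: yes
    state: enabled
EOF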

Example

An example post-tasks file is provided by the ovirt-engine installation at the following location:

/etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml.example

This is just an example file; to add tasks to the host-deploy flow, you need to create the below-mentioned file Continue reading

oVirt roles Ansible Galaxy integration

In the 4.2 release we have split our oVirt Ansible roles into separate RPM packages and separate git repositories, so it is possible for a user to install a specific role either from Ansible Galaxy or as an RPM package.

The reason

The reason for splitting the roles into separate packages and git repositories was mainly their usage from AWX/Ansible Tower. Since Ansible Galaxy is only integrated with GitHub, you need to store each Ansible role in its own git repository in order to have it as a separate role in Galaxy. Previously we used one single repository where we stored all the roles, but because of that, manual configuration was required to make those roles usable in AWX/Ansible Tower. So, as you can see in the image below, we now have many roles in Ansible Galaxy under the oVirt user:

oVirt roles in Ansible Galaxy

How to install the roles

There are still two ways to install the roles: either using Ansible Galaxy or using the RPM packages available from the oVirt repositories.

Ansible Galaxy

You are now able to install just a single role, and not necessarily all of them at once as in previous versions. For example, to install just the oVirt cluster upgrade role, you have to run Continue reading
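A plausible form of that command, assuming the role is published on Galaxy under the oVirt user as ovirt.cluster-upgrade (check the exact role name in Galaxy):

# ansible-galaxy install ovirt.cluster-upgrade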

Introduction to OVN and Red Hat Virtualization

Hi folks, recently my friend and colleague Tony James prepared and delivered an excellent webinar internally at Red Hat on how to configure Open Virtual Network (OVN) in Red Hat Virtualization. For those of you who are unfamiliar with OVN, or what it offers, allow me to provide you with the proper illumination.

Background

Way back in the dark ages, the only way that mere mortals could get encapsulation, segmentation, and other benefits of SDN in RHV was via third-party integration, or via an OpenStack deployment that could be tapped into through the RHV Neutron integration. Recently though, native SDN (via OVN) entered Tech Preview in RHV 4.1, and I’m going to spend the next few posts going over the basics.

NOTE – Tech Preview is Red Hat’s way of providing the software bits for folks to try out, but there is no support for software in Tech Preview. The official statement is here. In short, the more interest and bugs filed against Tech Preview, the sooner it gets put in production.

The current fully supported virtual networking in RHV is built around “Linux Bridging”. It’s solid and it’s simple. That is to say that Continue reading

LVM Configuration the Easy Way

Now oVirt features a simple way to prevent a host from scanning and then activating logical volumes that are not required directly by the host. In particular, the solution addresses logical volumes on shared storage managed by oVirt, and logical volumes created by a guest in oVirt raw volumes. Why is a solution required? Because scanning and activating other logical volumes may cause data corruption, slow boot, and other issues.

The solution is configuring an LVM filter on each host, which allows LVM on a host to scan only the logical volumes required directly by the host. To achieve this, we have introduced a vdsm-tool command, config-lvm-filter, that will configure the host for you.

The new command, vdsm-tool config-lvm-filter, analyzes the current LVM configuration to decide whether a filter should be configured. Then, if the LVM filter has yet to be configured, the command generates an LVM filter option for this host, and adds the option to the LVM configuration.
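For reference, the generated option lands in /etc/lvm/lvm.conf and looks roughly like the sketch below; the accepted device path is an illustrative assumption, since the tool derives it from the devices the host actually needs.

# grep filter /etc/lvm/lvm.conf
filter = [ "a|^/dev/sda2$|", "r|.*|" ]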

Scenario 1: An Unconfigured Host

On a host that has yet to be configured, the command configures the LVM filter automatically once the user confirms the operation:

# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/vg0-lv_home
Continue reading

LLDP Information Now Available via the Administration Portal

In oVirt 4.2 we have introduced support for the Link Layer Discovery Protocol (LLDP). It is used by network devices for advertising their identity and capabilities to neighbors on a LAN. The information gathered by the protocol can be used for better network configuration. Learn more about LLDP.

Why do you need LLDP?

When adding a host to an oVirt cluster, the network administrator usually needs to attach various networks to it. However, a modern host can have multiple interfaces, each with its own non-descriptive name.

Examples

In the screenshot below, taken from the Administration Portal, a network administrator has to know to which interface to attach the network named m2 with VLAN_ID 162. Should it be interface enp4s0, ens2f0, or even ens2f1? With oVirt 4.2, the administrator can hover over enp4s0 and see that this interface is connected to peer switch rack01-sw03-lab4, and learn that this peer switch does not support VLAN 162 on this interface. By looking at every interface, the administrator can choose which interface is the right option for network m2.

Administration Portal screenshot

A similar situation arises with the configuration of mode 4 bonding (LACP). Configuring LACP usually starts with the network administrator defining a port group Continue reading

Engine XML Brings a Smoother Flow of Data Into oVirt

On November 8, oVirt 4.2 saw the introduction of an important behind-the-scenes enhancement.

The change is associated with the exchange of information between the engine and the VDSM. It addresses the issue of multiple abstraction layers, with each layer needing to convert its input into a suitably readable format in order to report to the next layer.

This change improves data communication between the engine and Libvirt - the tool that manages platform virtualization.

Background

Previously, the configuration file for a newly created virtual machine (VM) originated in the engine as a map or dictionary. Then, in the VDSM, it was converted into an XML file that was readable by Libvirt. This process required extra coding effort, which in turn slowed down development.

What's changed?

Now, this map or dictionary has been replaced by engine XML, an XML configuration file that complies with the Libvirt API. VDSM now simply routes this Libvirt-readable file to/from Libvirt, in VM lifecycle (virt) related flows.
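One hedged way to inspect the domain XML that a running VM ended up with on a host is a read-only virsh connection; the VM name below is a placeholder.

# virsh -r dumpxml my-vm | head -n 20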

As an oVirt user, it’s business as usual.

However, if you are a developer dealing with debugging issues that involve running a VM, simply be aware that the Domain XML is now generated by ovirt-engine, Continue reading

Setting up multiple networks is going to be much faster in oVirt 4.2

Assume you have an oVirt cluster with hundreds of VM networks. Now you add a new host to the cluster. In order for it to move to the Operational state, it must have all required networks attached to it. The easiest way to do this is to attach the networks to a label, and then place that label on a NIC of the added host. However, if there are too many networks, the Engine could fail to set them all up at once. This is caused by a slow VDSM setupNetworks call that is not able to finish within the Engine's 180-second vdsTimeout.

The VDSM performance changes will be included in ovirt-4.2 and are currently in ovirt-master.

The initscripts performance patch is targeted for EL 7.5.

The following table shows the maximal number of networks that can be handled within the vdsTimeout. The measured setupNetworks command handles one network with a static IP and N VLAN+bridge networks with no IP. The edit case covers moving all networks from one NIC to another.

Please note that the given numbers are for reference only.

installed                            N     add     edit    del
ovirt-4.2                            190   180s    127s    67s
ovirt-4.2 and patched initscripts    350   138s    176s    89s
ovirt-4.1                            150   Continue reading

Introducing oVirt 4.2.0 Beta

On October 31st, the oVirt project released version 4.2.0 Beta, available for Red Hat Enterprise Linux 7.4, CentOS Linux 7.4, or similar.

Since the release of oVirt 4.2.0 Alpha a month ago, a substantial number of stabilization fixes have been introduced.

What's new in this release?

Support for LLDP, a protocol used by network devices to advertise their identity and capabilities to neighbors on a LAN. LLDP information can now be displayed both in the UI and via the API. The information gathered by the protocol can be used for better network configuration.

oVirt 4.2.0 Beta features Gluster 3.12.

oVirt's hyperconverged solution now enables a single replica Gluster deployment.

OVN (Open Virtual Network) is now fully supported and recommended for isolated overlay networks. OVN is automatically deployed on the host and made available for VM connectivity.

Snapshots can now be uploaded and downloaded via the REST API (and the SDKs).

An improvement has been introduced to the self-hosted engine. Now, the self-hosted engine will connect to all discovered IPs, allowing both higher performance via multiple paths and high availability in the event that one of the targets fails.

A new Continue reading
