Hi folks, I’m finally getting around to the high availability for RHV-M (hosted engine) walk through demo that I promised. The truth is that due to unforeseen circumstances, I had to go to “plan b”. The end result is still the same, and the workflows are almost identical, but the “in betweens” are just a bit different.
Allow me to illuminate..
So when I last left off, I was explaining the virtues of both the lightweight virtualization host (RHVH) as well as the hosted engine configuration for use as a means of providing high availability for RHV-M, the management piece for RHV. Hosted engine can support either RHVH or RHEL hosts (not both at the same time) as the hypervisor nodes.. While I really wanted to show you how to get things up and running with RHVH first, I’m going to show you the “RHEL way” first. I’ll come back around to RHVH, I promise.
The workflow for getting things up and ready is very similar when comparing RHVH and RHEL – hosts, networks, and storage all get set up. DNS (forward and reverse, FQDNs for hosts and RHV-M) is configured. Subscriptions are set and hosts are updated. The biggest differences are Continue reading
oVirt's development is continuing on pace, as the calendar year draws to a close and we get ready for a new year of development, evangelism, and making virtual machine management a simple process for everyone.
Here's what happened in November of 2016.
oVirt 4.0.6 Third Release Candidate is now available
oVirt 4.1.0 First Beta Release is now available for testing
Testing ovirt-engine changes without a real cluster
Request for oVirt Ansible modules testing feedback
Important Open Source Cloud Products [German]
Red Hat IT runs OpenShift Container Platform on Red Hat Virtualization and Ansible
Keynote: Blurring the Lines: The Continuum Between Containers and VMs [Video]
Quick Guide: How to Plan Your Red Hat Virtualization 4.0 Deployment
Next week HPE will host more than 10,000 top IT executives, architects, engineers, partners and thought-leaders from across Europe at Discover 2016 London, November 29th – December 1st in London.
Come visit Docker in Booth #208 to learn how Docker’s Containers-as-a-Service platform is transforming modern application infrastructures, allowing businesses to benefit from a more agile development environment.
Docker experts will be on hand for in-booth demos, hands-on labs, breakout sessions, and Transformation Zone sessions to demonstrate how Docker’s infrastructure platform provides businesses with a unifying framework to embrace hybrid infrastructures and optimize resource utilization across legacy and modern Linux and Windows applications.
Not attending Discover London? Don’t miss a thing and “Save the Date” for the live streaming of keynotes and top sessions, beginning November 29th at 11:00 GMT and running through the duration of the event.
Be sure to add these key Docker sessions to your HPE Discover London agenda:
Ongoing: Transformation Zone Hours Show Floor
DEMO315: HPE IT Docker success stories
Supercharge your container deployments on bare metal and VMs by orchestrating large workloads using simple Docker mechanisms. See how the HPE team automated Continue reading
After introducing the fundamentals of Docker networking, Dinesh Dutt focused on various Docker networking options, including multi-host networking with overlays.
After watching the video, you might also want to listen to Episode 49 of Software Gone Wild with Brent Salisbury, Dave Tucker and Madhu Venugopal.
Welcome to Technology Short Take #73. Sorry for the long delay since the last Technology Short Take; personal matters have been taking quite the toll (if you follow me on Twitter, you’ll know to what personal matters I’m referring). In any case, enough of that—here’s some data center-related content that I hope you find useful!
I got a long list of VXLAN-related questions from one of my subscribers. It started with an easy one:
Does Cisco ACI use VXLAN inside the fabric or is something else used instead of VXLAN?
ACI uses VXLAN but not in a way that would be (AFAIK) interoperable with any non-Cisco product. While they do use some proprietary tagging bits, the real challenge is the control plane.
Read more ...

Hi folks! After plowing through my home lab, I’m ready to walk you through setting up RHV-M in a “self-hosted engine” (HA) configuration. I’ve talked about this in some previous articles if you need to familiarize yourself with its significance or why someone might want to go with this approach over a standard deployment.
Let’s get to it.
Pre-Setup
Sounds funny, right? “Pre-setup”.. like you’re going to set up before you set up? But really, that’s what you need to do. In this case, everything needs to be in place before you dive into the deep end of the lake, or you’re going to hit rocks. What I mean is that your underlying environment needs to be right, or things will not go smoothly at all.
Specifically, you’re going to need to pay attention to the requirements of the hosts and the RHV-M software.. the specs are well published. For example, you need to have fully qualified domain names for all of your hosts and RHV-M, and they need to resolve (forward and reverse!) in some form of DNS. Just using “/etc/hosts” isn’t going to cut it here.. Don’t have running DNS in your lab? Don’t sweat it, look Continue reading
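A quick sanity check for that forward-and-reverse requirement, using getent so it exercises the same resolver path your services will use. The name and address below are hypothetical placeholders; substitute your own:

```shell
# Hypothetical FQDN and address for your RHV-M; substitute your own.
FQDN="rhvm.lab.example.com"
IP="192.0.2.10"
# Forward (A) lookup: the name must resolve to an address.
getent hosts "$FQDN" || echo "forward lookup failed for $FQDN"
# Reverse (PTR) lookup: the address must resolve back to the name.
getent hosts "$IP" || echo "reverse lookup failed for $IP"
```

If either direction prints a failure, fix your DNS before going any further.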
The ovirt-engine component of oVirt is the brain of oVirt and is responsible for managing attached systems; providing the webadmin UI and REST interfaces; and other core tasks. The process of setting up a real cluster on which to deploy the project is a time-consuming task that greatly increases patch turnaround time and can provide a significant barrier to entry for those wanting to contribute to the project.
There are a couple of preparation steps you must take to create your development environment. I am using CentOS 7 as my development machine, so I will use that system to describe everything, but it should be pretty straightforward to adapt the article to Fedora.
We first need the source code for the ovirt-engine itself. You can get it from the project's code review tool: gerrit.ovirt.org. Just execute the following command and wait for it to finish:
# git clone git://gerrit.ovirt.org/ovirt-engine.git
You will also need a directory for the development deployments, so create a directory somewhere. Mine is in ~/Applications/ovirt-engine-prefix. I have set the $OVIRT_PREFIX environment variable to point to that path, so when you see it used throughout this article, substitute the path for your own Continue reading
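For example, creating the directory and setting the variable (the path here is the one used above; substitute your own):

```shell
# Create the deployment prefix and point $OVIRT_PREFIX at it.
mkdir -p "$HOME/Applications/ovirt-engine-prefix"
export OVIRT_PREFIX="$HOME/Applications/ovirt-engine-prefix"
echo "$OVIRT_PREFIX"
```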
I got into an interesting discussion with Johannes Luther on the need for VRFs and he wrote:
If VRF = L3 virtualization technologies, then I saw that link. However, VRFs are again just a tiny piece of the whole story.
Of course he’s right, but it turns out that VRFs are the fundamental building block of most L3 virtualization technologies using a shared infrastructure.
Read more ...

This post provides a basic introduction to the VirtualBox CLI (command-line interface) tool, vboxmanage. This post does not attempt to replace the comprehensive documentation; rather, its purpose is to help users who are new to vboxmanage (such as myself, having recently adopted VirtualBox for my Vagrant environments) get somewhat up to speed as quickly and as painlessly as possible.
Let’s start with some basic operations. Here are a few to get you started:
To list all the registered VMs, simply run vboxmanage list vms. Note that if you are using Vagrant with VirtualBox, this command will also show VirtualBox VMs that have been instantiated by Vagrant. Similarly, if you are using Docker Machine with VirtualBox, this command will show you VMs created by Docker Machine.
To list all the running VMs, use vboxmanage list runningvms.
To start a VM, run vboxmanage startvm <name or UUID>. You can optionally specify a --type parameter to control how the VM is started. Using --type gui will show it via the host GUI; using --type headless means you’ll need to interact over the network (typically via SSH). To emulate Vagrant/Docker Machine-like behavior, you’d use --type headless.
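Pulling those commands together, a typical session might look like the following sketch ("testvm" is a hypothetical VM name, and the guard simply keeps the snippet harmless on machines without VirtualBox installed):

```shell
# A typical vboxmanage session; no-op where VirtualBox is not installed.
if command -v vboxmanage >/dev/null 2>&1; then
  vboxmanage list vms          # every registered VM (name and UUID)
  vboxmanage list runningvms   # only the VMs currently running
  # "testvm" is a hypothetical name; substitute one from "list vms":
  # vboxmanage startvm testvm --type headless
  # vboxmanage controlvm testvm poweroff
else
  echo "vboxmanage not installed here; commands shown for reference"
fi
```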
Once a VM is Continue reading
One of my readers sent me an interesting question a while ago:
Isn’t IS-IS a better fit for building L3-only networks than BGP, particularly considering that IS-IS already has a protocol to communicate with the end systems (ES-IS)?
In theory, he’s correct (see also this blog post).
Read more ...

Hi folks, here’s another “pre” post. What I mean by that is that in the process of creating a demo and the surrounding article, I found I needed to create a sidebar article in order to show how to configure an important component. In this case, the requirement to fulfill forward and reverse name server resolution in RHV has led me to create a basic DNS server, and “dnsmasq” is a perfect solution…
Let me be clear here: I am NOT recommending dnsmasq for production DNS. For production I would recommend deploying BIND, Red Hat IdM, or something else. I’m using dnsmasq because I need something for my home lab and I think you might benefit from the configuration I’m using in your home or test lab. I don’t have that many systems, and a lightweight service like dnsmasq will work nicely.
Background
The RHV 4 documentation is very clear about the requirement for FQDN and fully functional DNS. Simply relying on “/etc/hosts” isn’t going to cut it anymore. Dnsmasq will provide a great and simple solution for small labs. For the uninitiated, dnsmasq provides DHCP, TFTP, DNS, and DNS forwarding. We’ll really only be concerned with the DNS Continue reading
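For a small lab like the one described, a minimal dnsmasq configuration can carry both the forward and reverse records. This is a sketch; the domain, host names, and addresses below are hypothetical placeholders for your own:

```
# /etc/dnsmasq.d/lab.conf -- minimal lab DNS (all names/addresses hypothetical)
domain-needed                # never forward plain (unqualified) names upstream
bogus-priv                   # never forward private-range reverse lookups upstream
domain=lab.example.com
expand-hosts                 # append the domain to simple names from /etc/hosts
local=/lab.example.com/      # answer for this domain locally, never forward
# host-record creates the A record and the matching PTR record in one line
host-record=rhvm.lab.example.com,192.168.0.5
host-record=rhvh1.lab.example.com,192.168.0.11
```

After dropping the file in place, restart the service (systemctl restart dnsmasq) and verify both directions with getent or dig before pointing your hosts at it.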
oVirt offers not only its own internal networking, but also an API for external network providers. This API enables using external network management software inside environments managed by oVirt and takes advantage of their extended capabilities. One such solution is OVN: Open Virtual Network. OVN is an OVS (Open vSwitch) extension that brings Software Defined Networking to OVS.
OVN enables support for virtual networks abstraction by adding native OVS support for virtual L2 and L3 overlays. This allows the user to create as many VM networks as required, without troubling the administrator with VLAN requests or infrastructure changes.
The oVirt provider for OVN consists of two parts:
* The oVirt OVN driver
* The oVirt OVN provider
The oVirt OVN driver is the Virtual Interface Driver placed on oVirt hosts that handles the wiring of VM NICs to OVN networking.
The driver allows Vdsm, libvirt, and OVN to interact whenever a NIC is plugged in such a way that the VM NIC is added to an appropriate OVN Logical Switch and the appropriate OVN overlays on all the hosts in the oVirt environment.
The oVirt OVN driver rpm is now available for testing. The latest version Continue reading
Here is a quick post for you guys. I’m in the midst of creating a follow-up to one of my other articles and it dawned on me that I need to do this particular post first.. A post within a post, or before a post, or something. In either case, I need to provide an update to configuring NFS to poke through a firewall in RHEL 7 for the purpose of RHV in a home lab.. or other use cases. Read on, if you will…
Background
In some older posts, I show you how to configure NFSv3 to use predictable ports in RHEL so that it is more iptables friendly. You don’t want to shut your firewall down and leave your security wide open. And if your firewall is also doing other work for you, like port forwarding, then you ~really~ can’t shut it down…
So here’s the skinny: I’m in the process of setting up new systems for “RHV w/ Hosted Engine”, and I’m using an NFS server for the storage. It’s a home lab, so I’m not exactly worried about performance. I really don’t recommend using a Linux server for production NFS in virtualization, but again, this Continue reading
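As a sketch of where the predictable-ports setup lands (the port numbers below are commonly used picks, not mandates), the NFSv3 helper daemons get pinned in /etc/sysconfig/nfs and the firewall is then opened for exactly those ports:

```
# /etc/sysconfig/nfs -- pin the NFSv3 helper daemons to predictable ports
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662

# Then (as root) open the firewall for NFS, rpcbind, mountd, and the pinned ports:
#   firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
#   firewall-cmd --permanent --add-port=32803/tcp --add-port=32769/udp
#   firewall-cmd --permanent --add-port=662/tcp --add-port=662/udp --add-port=875/tcp
#   firewall-cmd --reload
```

Restart the NFS services after changing the file so the daemons pick up the fixed ports.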
The oVirt community is made up of a diverse mix of individuals using and contributing to all aspects of the project from all over the world, and we want to make sure that the community is a safe and friendly place for everyone.
This code of conduct applies equally to founders, mentors, and those seeking help and guidance. It applies to all spaces managed by the oVirt project, including IRC, mailing lists, GitHub, Gerrit, oVirt events, and any other forums created by the project team which the community uses for communication.
While we have contribution guidelines for specific tools, we expect all members of our community to follow these general guidelines and be accountable to the community. This isn’t an exhaustive list of things that you can’t do. Rather, take it in the spirit in which it’s intended—a guide to make it easier to enrich all of us and the technical communities in which we participate.
To that end, some members of the oVirt community have put together a new Community Code of Conduct to help guide everyone through what it means to be respectful and tolerant in a global community like the oVirt Project.
We're not looking for a Continue reading
Hi folks, some time ago (years?) I wrote about how to put together high availability for RHV-M. At the time, the actual configuration that I proposed was solid, if a little unorthodox. Still, it certainly left room for improvement. In this week’s post, I’m updating the configuration with something that Red Hat fully supports. They refer to the configuration as Self-Hosted Engine.
Why Hosted Engine?
The primary benefits to using the Self-Hosted Engine, or “HE”, is that it provides a fully supported HA configuration for RHV-M as well as a smaller overall footprint as compared to a traditional deployment of RHV. Also, RHV-M is delivered as an appliance for the HE configuration, so the entire process is streamlined. Who doesn’t like that?
Let’s go back to the smaller footprint statement for a moment though.. First off, in a traditional deployment of RHV, you have RHV-M plus hosts. That deployment of RHV-M may be on a bare-metal host or it may be on a VM in a different virtualization environment. Regardless, you’re already using up resources and software subscriptions that you may not want to use. Not to mention the fact that it may cause you to cross-deploy resources across Continue reading
After explaining the basics of Linux containers, Dinesh Dutt moved on to the basics of Docker networking, starting with an in-depth explanation of how a container communicates with other containers on the same host, with containers residing on other hosts, and the outside world.
We did several podcasts describing how one could get stellar packet forwarding performance on x86 servers reimplementing the whole forwarding stack outside of kernel (Snabb Switch) or bypassing the Linux kernel and moving the packet processing into userspace (PF_Ring).
Now let’s see if it’s possible to improve the Linux kernel forwarding performance. Thomas Graf, one of the authors of Cilium claims it can be done and explained the intricate details in Episode 64 of Software Gone Wild.
Read more ...

Welcome to Technology Short Take #72. Normally, I try to publish these on Fridays, but some personal travel prevented that this time around so I’m publishing on a Monday instead. Enough of that, though…bring on the content! As usual, here’s my random collection of links, articles, and thoughts about various data center technologies.