Archive

Category Archives for "Virtualization"

VMware NSX in Redundant L3-only Data Center Fabric

During the Networking in Private and Public Clouds webinar I got an interesting question: “Is it possible to run VMware NSX on redundantly-connected hosts in a pure L3 data center fabric?”

TL&DR: I thought the answer was still No, but after a very helpful discussion with Anthony Burke it seems it has changed to Yes (even though the NSX Design Guide never explicitly says Yes, it’s OK and here’s how you do it).

Read more ...

Up and Running with oVirt 4.1 and Gluster Storage

Last month, the oVirt Project shipped version 4.1 of its open source virtualization management system. With a new release comes an update to this howto for running oVirt together with Gluster storage using a trio of servers to provide for the system's virtualization and storage needs, in a configuration that allows you to take one of the three hosts down at a time without disrupting your running VMs.

If you're looking instead for a simpler, single-machine option for trying out oVirt, your best bet is the oVirt Live ISO. This is a LiveCD image that you can burn onto a blank CD or copy onto a USB stick to boot from and run oVirt. This is probably the fastest way to get up and running, but once you're up, this is definitely a low-performance option, and not suitable for extended use or expansion.

Read on to learn about my favorite way of running oVirt.

oVirt, Glusterized

Prerequisites

Hardware: You’ll need three machines with 16GB or more of RAM and processors with hardware virtualization extensions. Physical machines are best, but you can test oVirt using nested KVM as well. I've written this howto using VMs running on my "real" oVirt+Gluster Continue reading
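If you're not sure whether a candidate machine has the required hardware virtualization extensions, a quick check like the following will tell you (a minimal sketch, assuming a stock Linux install):

# count CPU threads advertising Intel VT-x (vmx) or AMD-V (svm)
grep -cE 'vmx|svm' /proc/cpuinfo

# confirm the KVM modules are loaded (kvm_intel or kvm_amd)
lsmod | grep kvm

A count of zero from the first command means the extensions are either absent or disabled in the machine's firmware.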


Say Hello to oVirt 4.1.1

On March 22, the oVirt project released version 4.1.1, available for Red Hat Enterprise Linux 7.3, CentOS Linux 7.3, or similar.

oVirt is the open source virtualization solution that provides an awesome KVM management interface for multi-node virtualization. This maintenance release is super stable and brings some nice new features.

So what's new in oVirt 4.1.1?

Storage Team

  • LUNs can be removed from a block data domain, provided that there is enough free space on the other domain devices to contain the data stored on the LUNs being removed.
  • Support for NFS version 4.2 connections (when supported by storage); a quick way to verify 4.2 support outside oVirt is shown below.
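If you want to confirm that a storage server actually speaks NFS 4.2, a manual mount along these lines will do it (the server, export, and mount point names here are illustrative):

# explicitly request protocol version 4.2 when mounting the export
mount -t nfs -o vers=4.2 nfs-server.example.com:/export/data /mnt/test

# check the negotiated NFS version in the active mounts
grep /mnt/test /proc/mounts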

Integration Team

  • oVirt-hosted-engine-setup now works with NetworkManager enabled.

Network Team

  • NetworkManager keeps running when a host is added to oVirt. This allows users to review networking configuration in Cockpit whenever they want.

Infra Team

  • A new tool, engine-vacuum, performs a vacuum on the PostgreSQL database in order to reclaim disk space for the operating system. It also updates table statistics and removes garbage from tables (a rough equivalent is sketched after this list).
  • Alerts for all data centers and clusters that are not upgraded to the highest compatibility version.
  • Time zones are now shown in log records to make it easier to correlate Continue reading
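For context, engine-vacuum wraps standard PostgreSQL maintenance; a rough, hand-run equivalent would look like this (assuming the default engine database name, "engine"):

# run VACUUM ANALYZE against the engine database by hand;
# this reclaims space from dead rows and refreshes planner statistics
su - postgres -c 'psql -d engine -c "VACUUM ANALYZE;"'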

Technology Short Take #81

Welcome to Technology Short Take #81! I have another collection of links, articles, and thoughts about key data center technologies, and hopefully I’ve managed to include something here that will prove useful or thought-provoking. Enjoy!

Networking

Technology Short Take #80

Welcome to Technology Short Take #80! This post is a week late (I try to publish these every other Friday), so my apologies for the delay. However, hopefully I’ve managed to gather together some articles with useful information for you. Enjoy!

Networking

  • Biruk Mekonnen has an introductory article on using Netmiko for network automation. It’s short and light on details, but it does provide an example snippet of Python code to illustrate what can be done with Netmiko.
  • Gabriele Gerbino has a nice write-up about Cisco’s efforts with APIs; his article includes a brief description of YANG data models and a comparison of working with network devices via SSH or via API.
  • Giuliano Bertello shares why it’s important to RTFM; or, how he fixed an issue with a Cross-vCenter NSX 6.2 installation caused by duplicate NSX Manager UUIDs.
  • Andrius Benokraitis provides a preview of some of the networking features coming soon in Ansible 2.3. From my perspective, Ansible has jumped out in front in the race among tools for network automation; I’m seeing more coverage and more interest in using Ansible for network automation.
  • Need to locate duplicate MAC addresses in your environment, possibly caused by cloning Continue reading

Container Namespaces – How to add networking to a docker container

I've discussed how we can network a docker container directly with the host's networking stack, bypassing docker0, the default bridge docker creates for you. That method involves asking docker to create a port on a user-defined bridge and, from inside the container, asking for an IP via DHCP. A more advanced way of achieving this is to bring up a docker container without networking and later configure the stack out-of-band of docker. This approach is one of the methods used by Calico, for example, to network containers, and I've spoken about that here.

Today, let's take a deep dive into adding interfaces to a container manually and, in turn, gain some insight into how all of this works. Since this discussion is going to revolve around network namespaces, I assume you have some background in that area. If you are new to the concept of namespaces and network namespaces, I recommend reading this.


Step 1: We will first bring up a docker container without networking. Per the docker docs, passing --network none when running a container leaves out interface creation for that instance. Although docker skips network interface creation, it brings up the container with Continue reading
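To preview where the post is heading, here is a condensed sketch of the whole manual-wiring flow (the container name, addresses, and interface names below are illustrative):

# step 1: start a container with no network interfaces except loopback
docker run -d --name demo --network none alpine sleep 3600

# find the container's PID and expose its network namespace to ip(8)
pid=$(docker inspect -f '{{.State.Pid}}' demo)
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/demo

# create a veth pair and push one end into the container's namespace
ip link add veth-host type veth peer name veth-demo
ip link set veth-demo netns demo

# address and enable the container-side interface, then the host side
ip netns exec demo ip addr add 172.16.0.2/24 dev veth-demo
ip netns exec demo ip link set veth-demo up
ip netns exec demo ip link set lo up
ip link set veth-host up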

Nutanix

Maximum Performance from Acropolis Hypervisor and Open vSwitch describes the network architecture within a Nutanix converged infrastructure appliance - see diagram above. This article will explore how the Host sFlow agent can be deployed to enable sFlow instrumentation in Open vSwitch (OVS) and deliver streaming network and system telemetry from nodes in a Nutanix cluster.
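As a rough sketch of what that deployment involves, the Host sFlow agent is typically pointed at a collector and told to manage OVS through its configuration file (the collector address is illustrative, and the exact syntax should be checked against the Host sFlow documentation):

# minimal /etc/hsflowd.conf sketch: stream samples to a collector
# and let the agent configure sFlow on Open vSwitch via OVSDB
cat > /etc/hsflowd.conf <<'EOF'
sflow {
  collector { ip=10.0.0.50 udpport=6343 }
  ovs { }
}
EOF
systemctl restart hsflowd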
This article is based on a single hardware node running Nutanix Community Edition (CE), built following the instructions in Part I: How to setup a three-node NUC Nutanix CE cluster. If you don't have hardware readily available, the article, 6 Nested Virtualization Resources To Get You Started With Community Edition, describes how to run Nutanix CE as a virtual machine.
The sFlow standard is widely supported by network equipment vendors, which, combined with sFlow from each Nutanix appliance, delivers end-to-end visibility in the Nutanix cluster. The following screen captures from the free sFlowTrend tool are representative examples of the data available from the Nutanix appliance.
The Network > Top N chart displays the top flows traversing OVS. In this case an HTTP connection is responsible for most of the traffic. Inter-VM and external traffic flows traverse OVS and are efficiently Continue reading

RHV 4.1, Hosted Engine, Red Hat Summit

Hi folks, I’m still heads down on a lot of different things. The release of RHV 4.1 is right around the corner, as is a new product that involves RHV 4.1. I’ve also cut some new demos on Hosted Engine using RHVH – just like I promised I would several weeks ago. OK, a couple of months ago. You’ll just have to come see me at Red Hat Summit to see them… Or wait until just after Red Hat Summit. I still don’t have my “new” lab, but I did get my hands on some good gear that allows me to show you the goodness that is Hosted Engine, especially with RHVH (Red Hat Virtualization Host). Hopefully I’ll have the new lab soon…

As I mentioned in my last post, I’m presenting at Red Hat Summit again this year, focusing on providing HA for RHV – by way of Hosted Engine. Here are the session details if you’re going to be there:

Thursday, May 4, 3:30 PM – 4:15 PM – Room 152
Red Hat Summit, May 2-4, Boston, MA

I promise to give the full write-up and share the demos post-Summit.

Captain KVM


Container networking: What is it and how can it help your data center?

There has been a lot of buzz in the industry about containers and how they are streamlining organizational processes. In short, containers are a modern application sandboxing mechanism that is gaining popularity in all aspects of computing, from the home desktop to web-scale enterprises. In this post we’ll cover the basics: what is container networking and how can it help your data center? In the future, we’ll cover how you can optimize a web-scale network using Cumulus Linux and containers.

What is a container?

A container is an isolated execution environment on a Linux host that behaves much like a full-featured Linux installation with its own users, file system, processes and network stack. Running an application inside of a container isolates it from the host and other containers, meaning that even when the applications inside of them are running as root, they cannot access or modify the files, processes, users, or other resources of the host or other containers.
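That isolation is easy to see for yourself; a container's process table, for instance, contains only its own processes (a quick illustration using Docker, assuming the alpine image is available):

# on the host: the full process table, typically hundreds of entries
ps -e | wc -l

# inside a container: only the container's own processes are visible
# (here, just the ps command itself running as PID 1)
docker run --rm alpine ps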

Containers have become popular due to the way they simplify the process of installing and running an application on a Linux server. Applications can have a complicated web of dependencies. The newest version of an application may require a newer Continue reading

Technology Short Take #79

Welcome to Technology Short Take #79! There are lots of interesting links for you this time around.

Networking

  • I was sure I had mentioned Skydive before, but apparently not (a grep of all my blog posts found nothing), so let me rectify that first. Skydive is (in the project’s own words) an “open source real-time network topology and protocols analyzer.” The project’s GitHub repository is here, and documentation for Skydive is here.
  • OK, now that I’ve mentioned Skydive, I can talk about this article that provides an example of functional SDN testing with Terraform and Skydive. Terraform is used to turn up OpenStack infrastructure, and Skydive (via connections into Neutron and OpenContrail, in this example) is used to validate SDN functionality.
  • Tony Sangha took PowerNSX (a set of PowerShell cmdlets for interacting with NSX) and created a tool to help document the NSX Distributed Firewall configuration. This tool exports the DFW configuration and then converts it into Excel format, and is available on GitHub. (What’s that? You haven’t heard of PowerNSX before? See here.)

Servers/Hardware

Nothing this time around. Should I keep this section, or ditch it? Feel free to give me your feedback on Twitter.

Security

oVirt Gamification–The oVirt Game You Didn’t Know you Were Playing

Gamification is the concept of applying game mechanics and game design techniques to engage and motivate people to achieve their goals.

It taps into users' basic desires and impulses, which revolve around the ideas of status and achievement.

To put it in other words, it is turning day-to-day tasks, the kind you might do at home or work, into a game in which you can earn points and badges and compete with other people who are doing the same things.

oVirt & Gamification

You probably didn't know, but this isn't the first time gamification has been used in oVirt. A few years ago there was an initiative to use the oVirt UI plugin system to add gamification to the project; there was even a "space invaders" game written and available to play inside oVirt!

So What is New?

The oVirt infra team recently reached out to 'GetBadges', a company which provides 'Gamification as a Service'. Luckily for us, open source projects get to have a free game! So oVirt was rewarded with its own oVirt Open Source Game.

The game works automagically every time you contribute to the project. Current integrations are only active on specific projects like 'ovirt-engine' and Continue reading

Adding Metadata to the Arista vEOS Vagrant Box

This post addresses a (mostly) cosmetic issue with the current way that Arista distributes its Vagrant box for vEOS. I say “mostly cosmetic” because while the Vagrant box for vEOS is perfectly functional if you use it via Arista’s instructions, adding metadata as I explain here provides a small bit of additional flexibility should you need multiple versions of the vEOS box on your system.

If you follow Arista’s instructions, then you’ll end up with something like this when you run vagrant box list:

arista-veos-4.18.0    (virtualbox, 0)
bento/ubuntu-16.04    (virtualbox, 2.3.1)
centos/6              (virtualbox, 1611.01)
centos/7              (virtualbox, 1611.01)
centos/atomic-host    (virtualbox, 7.20170131)
coreos-stable         (virtualbox, 1235.9.0)
debian/jessie64       (virtualbox, 8.7.0)

Note that the version of the vEOS box is embedded in the name. Now, you could leave the version out of the name, but because there's no metadata, which is why that line shows (virtualbox, 0), you wouldn't have any way of knowing which version you had. Further, what happens when you want to have multiple versions of the vEOS box?

Fortunately, there’s an easy fix (inspired by the way CoreOS distributes their Vagrant box). Just create a file with the Continue reading
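For context, a box metadata file of the kind described is ordinarily a small JSON document; a minimal sketch, with the name, version, and file path below purely illustrative, looks like this:

# write a metadata file describing the box, then register it with Vagrant
cat > veos.json <<'EOF'
{
  "name": "arista/veos",
  "versions": [
    {
      "version": "4.18.0",
      "providers": [
        { "name": "virtualbox", "url": "file:///path/to/vEOS-4.18.0.box" }
      ]
    }
  ]
}
EOF
vagrant box add veos.json

After that, vagrant box list shows the box with a proper version instead of (virtualbox, 0), and multiple versions can coexist under one name.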

Test driving App Firewall with IPTables

With more and more applications moving to the cloud, web-based applications have become ubiquitous. They are ideal for providing access to applications sitting in the cloud (over HTTP through a standard web browser). This has removed the need to install specialized applications on the client system; all the client needs is a … Continue reading Test driving App Firewall with IPTables
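As a taste of what the post explores, iptables can do crude application-layer matching with its string module; a hedged example (the pattern and port here are illustrative):

# drop inbound HTTP requests whose payload contains a given string,
# using the Boyer-Moore matcher from the iptables string module
iptables -A INPUT -p tcp --dport 80 -m string --string "/etc/passwd" --algo bm -j DROP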

Technology Short Take #78

Welcome to Technology Short Take #78! Here’s another collection of links and articles from around the Internet discussing various data center-focused technologies.

Networking

Servers/Hardware

Nothing this time around, sorry!

Security

Using oVirt and Vagrant

Introducing oVirt virtual machine management via Vagrant.

In this short tutorial I'm going to give a brief introduction on how to use Vagrant to manage oVirt with the new community-developed oVirt v4 Vagrant provider.

Background

Vagrant is a tool for creating portable and reproducible environments. We can use it to provision and manage virtual machines in oVirt by managing a base box (small enough to fit in GitHub as an artifact) and a Vagrantfile. The Vagrantfile is the piece of configuration that defines everything about the virtual machines: memory, CPU, base image, and any other configuration that is specific to the hosting environment.

Prerequisites

  • A fully working and configured oVirt cluster of any size
  • A system capable of compiling and running the oVirt ruby SDK gem
  • Vagrant 1.8 or later
  • The oVirt vagrant plugin installed via $ vagrant plugin install vagrant-ovirt4

The Vagrantfile

To start off, I'm going to use this Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = 'ovirt4'
  config.vm.hostname = "test-vm"
  config.vm.box_url = 'https://github.com/myoung34/vagrant-ovirt4/blob/master/example_box/dummy.box?raw=true'

  config.vm.network :private_network,
    :ip => '192.168.56.100', :nictype => 'virtio', :netmask  Continue reading
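Once the Vagrantfile is complete, bringing the machine up should just be a matter of selecting the plugin's provider (assuming the provider name matches the plugin, vagrant-ovirt4):

# bring the VM up on the oVirt cluster using the plugin's provider
vagrant up --provider=ovirt4

# then connect over SSH as usual
vagrant ssh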

Speaking at Red Hat Summit 2017

Hi Folks, I know it’s been a few weeks but I assure you I’ve been heads down on good stuff. You’ll get to see much of it on the blog, but also at Red Hat Summit 2017 in Boston, MA if you’re so inclined.

So what will I (and my colleagues) be talking about at “Summit” this year? Well, there are several RHV- and KVM-specific activities at Summit that I’ll have something to do with, two directly and several indirectly:

Breakout Session – High Availability for Red Hat Virtualization Manager 
This will be my primary presentation on RHV, where I talk about and provide demos of RHV Hosted Engine, mostly in the context of HA (why and how), but also in the context of how it’s used in a new Red Hat product… (cue dramatic music…)

Breakout Session – Red Hat Virtualization and KVM Roadmaps
This is my colleagues’ session, and typically standing room only. I may help organize, but the Product Managers (Moran & Yaniv) will knock this out. It lays out the future of both Red Hat Virtualization and the core technology, KVM.

Lightning Talk – Reporting and Metrics Update
Again, my colleague’s session (Yaniv), but Continue reading
