Category Archives for "Virtualization"

Demo: Multi-site Active-Active with NSX, F5 Networks GSLB, and Palo Alto Networks Security

I originally wrote this post on my personal blog at HumairAhmed.com. You can also find many of my prior posts on multi-site and Cross-vCenter NSX here on the VMware Network Virtualization blog site. This post expands on my earlier post, Multi-site Active-Active Solutions with NSX-V and F5 BIG-IP DNS. Specifically, this post demonstrates deploying applications in an Active-Active model across data centers where ingress/egress is always at the data center local to the client; in other words, localized ingress/egress. Continue reading

Set up a dedicated virtualization server on Packet.net

Packet is a hardware-as-a-service vendor that provides dedicated servers on demand at very low cost. For me and my readers, Packet offers a solution to the problem of using cloud services to run complex network emulation scenarios that require hardware-level support for virtualization. Packet gives users access to powerful servers on which they can run workloads that would be impractical on a normal personal computer.

In this post, I will describe the procedure to set up an on-demand bare metal server and to create and maintain persistent data storage for applications. I will describe a generic procedure that can be applied to any application and that works for users who access Packet services from a laptop computer running any of the common operating systems: Windows, Mac, and Linux. In a future post, I will describe how I run network emulation scenarios on a Packet server.
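As a taste of the SSH key and remote access steps outlined in the table of contents below, here is a minimal sketch for a Linux or Mac laptop; the key file name and server address are placeholders (Packet assigns the real address when you deploy a server):

```
# Generate an SSH key pair locally; the public key gets uploaded to your Packet project:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/packet_rsa

# After deploying a server and adding ~/.ssh/packet_rsa.pub to the project,
# log in as root with X11 forwarding enabled (the address is a placeholder):
ssh -X -i ~/.ssh/packet_rsa root@147.75.x.x
```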

Table of Contents

  1. Packet.net
    1. Controlling costs when using bare metal servers
    2. Create a Packet account and Login
    3. Create a project
  2. Generate SSH Keys
    1. Windows
    2. Mac
    3. Linux
    4. Copy public key to Packet.net
  3. Deploy a Server
  4. SSH Server on local machine
    1. Windows
    2. Mac
    3. Linux
  5. Set up the remote server
    1. Test X11 forwarding
  6. Create block storage
    1. Create Continue reading

Introducing High Performance Virtual Machines

Bringing high performance virtual machines to oVirt!

Introducing a new VM type in oVirt 4.2.0 Alpha. A newly added "Optimized for" option in the all-new Administration Portal delivers the highest possible virtual machine performance, very close to bare metal.

What does it do?

Some of the magic includes:

  • Enable Pass-Through Host CPU
  • Enable IO Threads, Num Of IO Threads = 1
  • Set the IO and Emulator threads pinning topology

For the full feature set, see the very detailed High Performance VM feature page.
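For a rough sense of what those settings translate to on the host, you can inspect the libvirt domain XML of a running high performance VM. A minimal sketch, assuming shell access to the oVirt host and a VM named hp-vm (a placeholder):

```
# Read-only dump of the running domain, filtered for the relevant settings
# ("hp-vm" is a placeholder for your VM's libvirt domain name):
virsh -r dumpxml hp-vm | grep -E "host-passthrough|iothread"

# Expected fragments, roughly:
#   <cpu mode='host-passthrough' ...>
#   <iothreads>1</iothreads>
```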

Count me in! How do I set it up?

Simple. Go to the Administration Portal and, from the vertical menu, select Compute > Virtual Machines. Click New VM to open the New Virtual Machine dialog box. In the General tab, next to the Optimized for field, click the drop-down menu and select High Performance. Click OK. Depending on your current configuration, a smart pop-up may open with a list of additional recommended manual configurations specific to your setup. To go back and address these recommended changes manually, click Cancel.

New Virtual Machine dialog box with the High Performance VM type highlighted
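The same VM type can also be set outside the UI. Below is a hedged sketch of creating a high performance VM through the REST API; the URL, credentials, cluster, and template are placeholders, and the element names assume the oVirt 4.2 API:

```
# POST a new VM with type "high_performance" (all values are placeholders):
curl -k -u admin@internal:PASSWORD \
     -H "Content-Type: application/xml" \
     -d '<vm>
           <name>hp-vm</name>
           <type>high_performance</type>
           <cluster><name>Default</name></cluster>
           <template><name>Blank</name></template>
         </vm>' \
     https://engine.example.com/ovirt-engine/api/vms
```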

How a VM is started

Starting up a virtual machine (VM) is not an easy task: a lot happens hidden from plain sight, and studying it on your own is challenging. The goal of this post is to simplify the process of learning how the oVirt hypervisor works. The design of oVirt is illustrated through the process of starting up a VM, covering everything from top to bottom.

Disclaimer: I am an engineer working close to the host side of oVirt, so my knowledge of the engine is limited.

ovirt web admin

Architecture

Let me start by explaining the main parts of the oVirt hypervisor architecture; this will help a lot in understanding the overall VM startup flow. What follows is a simplified architecture in which I omit auxiliary components and support scripts, allowing me to focus on the core concept without distractions.

The architecture comprises three main components: 1) the web UI and engine, 2) VDSM, and 3) the guest agent.

The web UI is where the user makes first contact with the oVirt hypervisor. It is the main command center of the engine and allows a user to control all aspects and states of Continue reading
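Before diving in, the VDSM layer can be made a bit more concrete by querying it directly on a host. A minimal sketch, assuming the vdsm-client tool that ships with recent oVirt hosts:

```
# Run these on an oVirt host, not on the engine. The calls hit the same
# VDSM API the engine uses when it starts a VM:
vdsm-client Host getVMList        # IDs of the VMs this host currently runs
vdsm-client Host getCapabilities  # hardware/network details VDSM reports to the engine
```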

Introducing oVirt 4.2.0 Alpha

On September 28, the oVirt project released version 4.2.0 Alpha, available for Red Hat Enterprise Linux 7.4, CentOS Linux 7.4, or similar.

This pre-release version should not be used in production, and is not feature complete.

oVirt is the open source virtualization solution that provides an awesome KVM management interface for multi-node virtualization. This alpha release offers an early look at some nice new features.

What's new in oVirt 4.2.0?

Here's an overview of the new main features:

The Administration Portal has been redesigned from scratch using Patternfly, a widely adopted standard in web application design that promotes consistency and usability across IT applications, through UX patterns and widgets. The result is a cleaner, more intuitive and user-friendly user interface. The old horizontal menu has been replaced by a two-level vertical menu. The system tree is gone, and its functionality has been integrated into the vertical menus. Here are some screenshots:

Dashboard

Virtual Machines View

Adding a New Virtual Machine

Storage View

An all new VM Portal for non-admin users - designed with React-based UI and Patternfly principles - replaces the existing User Portal. Built with performance and ease of use in mind, Continue reading

oVirt Now Supports OVS-DPDK

DPDK (Data Plane Development Kit) provides high-performance packet processing libraries and user space drivers. Open vSwitch can use the DPDK libraries to process packets entirely in user space. According to Intel ONP performance tests, using OVS with DPDK can provide a huge performance enhancement, increasing network packet throughput and reducing latency.

OVS-DPDK has been added to the oVirt Master branch as an experimental feature. The following post describes the OVS-DPDK installation and configuration procedures.

Please note: Accessing the OVS-DPDK feature requires installing the oVirt Master version. In addition, OVS-DPDK cannot access any features located within the Linux kernel. This includes Linux bridge, tun/tap devices, iptables, etc.

Requirements

In order to achieve the best performance, please follow the instructions at: http://dpdk.org/doc/guides-16.11/linux_gsg/nic_perf_intel_platform.html

The network card must be on the supported vendor matrix, located here: http://dpdk.org/doc/nics

Ensure that your system supports 1GB hugepages: grep -q pdpe1gb /proc/cpuinfo && echo "1GB supported"

  • Hardware support: Make sure VT-d / AMD-Vi is enabled in BIOS
  • Kernel support: When adding a new host via the oVirt GUI, go to Hosts > Edit > Kernel and add the following parameters to the kernel command line: iommu=pt intel_iommu=on default_hugepagesz=1GB hugepagesz=1G hugepages=N. In the event that 1GB Continue reading
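Below is a hedged sketch of verifying and preparing the host after the reboot; the PCI address is a placeholder, and dpdk-devbind.py is assumed to come from the DPDK tools package:

```
# Confirm the 1GB hugepages were actually reserved at boot:
grep Huge /proc/meminfo

# Bind the DPDK-capable NIC to a userspace-friendly driver
# (0000:01:00.0 is a placeholder PCI address):
modprobe vfio-pci
dpdk-devbind.py --status
dpdk-devbind.py --bind=vfio-pci 0000:01:00.0
```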

VMware’s Platform Revolves Around ESXi, Except Where It Can’t

Building a platform is hard enough, and there are very few companies that can build something that scales, supports a diversity of applications, and, in the case of either cloud providers or software or whole system sellers, can be suitable for tens of thousands, much less hundreds of thousands or millions, of customers.

But if building a platform is hard, keeping it relevant is even harder, and those companies who demonstrate the ability to adapt quickly and to move to new ground while holding old ground are the ones that get to make money and wield influence in the datacenter.

VMware’s Platform Revolves Around ESXi, Except Where It Can’t was written by Timothy Prickett Morgan at The Next Platform.

VMWare Networking to Openstack Networking to AWS Networking

Infrastructure and the management of infrastructure have come a long way in the past few years. Buzzwords today (2016-17) are private clouds, public clouds and, more recently, hybrid clouds and containers (Docker, Kubernetes, et al.). Datacenter design is also changing rapidly, with some companies expanding their server footprint and building massive private clouds while others reduce theirs by adopting hybrid cloud strategies. Networking also has to be rethought and reworked, both by the public cloud providers and by their customers who move towards a hybrid cloud approach. One major area of investment by cloud adopters is to mimic and apply the network policies and topologies present in a private data center onto one or more public cloud providers.

Understanding the networking constructs in OpenStack, VMware, and AWS will help in making these networking design decisions. I will try to compare and equate network constructs in these three cloud technologies below:

OpenStack | Amazon AWS | VMware (VSwitch / DVSwitch)

Virtual Network … Continue reading

How to use oVirt Ansible roles

The recent post, An Introduction to oVirt Ansible Roles, discussed the new roles that were introduced in the oVirt 4.1.6 release. This follow-up post explains how to set up and use the Ansible roles, using either Ansible Galaxy or the oVirt Ansible Roles RPM.

Ansible Galaxy

To make life easier, Ansible Galaxy hosts many Ansible roles, including the oVirt Ansible roles. To install them, perform the following steps:

To install roles on your local machine, run the following command:

$ ansible-galaxy install ovirt.ovirt-ansible-roles

This installs the roles into the directory /etc/ansible/roles/ovirt.ovirt-ansible-roles/.

By default, Ansible only searches for roles in the /etc/ansible/roles/ directory and in your current working directory.

To change the directories where Ansible looks for roles, modify the roles_path option in the [defaults] section of the ansible.cfg configuration file.

The default location of this file is in /etc/ansible/ansible.cfg.

$ sed -i 's|#roles_path    = /etc/ansible/roles|roles_path = /etc/ansible/roles:/etc/ansible/roles/ovirt.ovirt-ansible-roles/roles|'  /etc/ansible/ansible.cfg

For more information on changing the directories where Ansible searches for roles, see the Ansible documentation pages.

Copy one of the examples from the directory /etc/ansible/roles/ovirt.ovirt-ansible-roles/examples/ into your working directory, then modify the needed variables and run the playbook.
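As an illustration, a minimal sketch of that workflow might look like the following; the example file names are hypothetical, so use whichever playbook and variable files actually ship in the examples/ directory:

```
# Copy an example playbook and its variable file into the working directory
# (hypothetical file names -- check examples/ for the real ones):
cp /etc/ansible/roles/ovirt.ovirt-ansible-roles/examples/ovirt_infra.yml .
cp /etc/ansible/roles/ovirt.ovirt-ansible-roles/examples/passwords.yml .

# Edit the variables (engine URL, credentials, hosts, storage), then run the
# playbook; --ask-vault-pass is only needed if the variable file is vault-encrypted:
ansible-playbook ovirt_infra.yml --ask-vault-pass
```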

oVirt Ansible Roles RPM

In the latest oVirt repositories Continue reading

SecureNet: Simulating a Secure Network with Mininet

I have been working with OpenStack (devstack) for a while, and I must say it is quite convenient to bring up a test setup using devstack. At times, though, I still feel it is overkill to use devstack for a quick test to verify your understanding of the network/security rules/routing, etc. This is where Mininet shines. … Continue reading SecureNet: Simulating a Secure Network with Mininet
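To illustrate how lightweight that is, a single command can bring up a small switched topology and run a connectivity test; a minimal sketch, assuming Mininet is installed and run with root privileges:

```
# One switch, three hosts, then an all-pairs ping test; Mininet tears the
# topology down automatically when the test finishes:
sudo mn --topo single,3 --test pingall
```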

The search for the killer app of unikernels

When a radically different technology comes along, it usually takes time before we figure out how to apply it. When steam engines ran factories, there was one engine in each factory, with a giant driveshaft running through the whole building. When the electric motor came along, people started by replacing the giant steam engine with a giant electric motor. It took time before people understood that they could deploy several small motors in different parts of the factory and connect them with electric cables rather than a common driveshaft. It takes time to understand a technology and its applicability.

Steam engine

The situation with unikernels is similar. We have this new thing and to some extent we’re using it to replace some general purpose operating system workloads. But we’re still very much limited by how we think about operating systems and computers.

Unikernels are radically different. Naturally, the question of the killer app has come up on a number of occasions. As unikernels are quite different from the dominant operating systems of today, it isn't as easy to spot what that killer app will be. Here I'll try to answer why it's hard to spot the killer app.

Defining characteristics of unikernels

Let’s start Continue reading

oVirt Webadmin GWT Debug Quick Refresh

For a developer, one drawback of using Google Web Toolkit (GWT) for the oVirt Administration Portal (aka webadmin) is that the GWT compile process takes an exceptionally long time. If you make a change in some code and rebuild the ovirt-engine project using make install-dev ..., you'll be waiting several minutes to test your change. In practice, such a long delay in the usual code-compile-refresh-test cycle would be unbearable.

Luckily, we can use GWT Super Dev Mode ("SDM") to start up a quick refresh-capable instance of the application. With SDM running, you can make a change in GWT and test the refreshed change within seconds.

If you want to step through code and use the Chrome debugger, oVirt and SDM don't work well together for debugging due to the oVirt Administration Portal's code and source map size. Therefore, below we demonstrate how to disable source maps.

Demo (40 seconds)

demo

Steps

  1. Open a terminal, build the engine normally, and start it.

    ```
    make clean install-dev PREFIX=$HOME/ovirt-engine \
      DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.cssResourceStyle=pretty -Dgwt.userAgent=safari" \
      BUILD_UT=0 DEV_EXTRA_BUILD_FLAGS="-Dgwt.compiler.localWorkers=1"

    $HOME/ovirt-engine/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py start
    ```

  2. In a second terminal, run:

    Chrome:

    make gwt-debug DEV_BUILD_GWT_SUPER_DEV_MODE=1 DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.userAgent=safari"

    or

    Firefox:

    make gwt-debug DEV_BUILD_GWT_SUPER_DEV_MODE=1 DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.userAgent=gecko1_8"

    Wait about two minutes Continue reading

Virtual networking optimization and best practices

Network optimization is an incredibly important component of scalability and efficiency. Without solid network optimization, an organization will be confronted with quickly mounting overhead and vastly reduced efficiency. Network optimization helps a business make the most of its technology, reducing costs and even improving security. Through virtualization, businesses can leverage their technology more effectively; they just need to follow a few virtual networking best practices.

Prioritize the most important applications

There are certainly applications that are optional, but there are others that are critical. The most important applications on a network are the ones that need to be prioritized in terms of system resources; these are generally cyber security suites, firewalls, and monitoring services. Optional applications may still be preferred for business operations, but because they are not critical, they can be allowed to run slowly in the event of system-wide issues.

Prioritizing security applications is especially important as there are many cyber security exploits that operate with the express purpose of flooding the system until security elements fail. When security apps are prioritized, the risk of this type of exploit is greatly reduced.

Install application monitoring services

Application monitoring services will be able to automatically detect when Continue reading

VMware NSX is something something awesome

At times I have trouble focusing on writing articles for some of the presentations I am exposed to at Tech Field Day. Because of that, I really wanted to try something different. This article is more of my free-form thoughts about NSX and why I'm excited to deploy it at my current $job. From the time I heard that the NSX team was going to be presenting at TFD15 for four hours, I knew that I would be writing this article. Unfortunately, it took me far too long to gather up this half-formed thought.

First things first – NSX and Micro-Segmentation

I love the concept of Micro-Segmentation that NSX enables. Think of NSX as a virtual distributed firewall that is integrated with your hypervisor, but it really is so much more. It allows you to connect a security policy directly to the vNIC of your guest VMs. Attaching the policy to the VM allows it to follow the VM anywhere and everywhere it goes. You don't have to worry about inter- or intra-VLAN segmentation, as all of that is done on each vNIC. On top of that, NSX's firewall is PCI DSS 3.2 compliant! Another rather compelling Continue reading

Best Practices for RHV 4

Hi folks, one of the many things that I've been working on behind the scenes has finally seen the light of day: Best Practices for Red Hat Virtualization 4. This takes over where the product documentation leaves off. What I mean by that is this:

The product documentation is (mostly) great about telling you “how” to do the many activities related to deploying Red Hat Virtualization.

This new document tells you “why” to do many of the activities related to deploying Red Hat Virtualization. It does NOT have code examples, but it DOES have lots of things to consider. Things like:

  • “Standard” deployment or “Hosted Engine” deployment
  • RHEL host or RHV host
  • NAS or SAN
  • Lager or Ale (just kidding)

In other words, when you go to plan out your deployment, this is the document that you want to read before you paint yourself into a corner. Many of the items are best practices, like “don’t turn off SELinux”. Others are more considerations and implications, like “NAS or SAN”.

If this is something you’re interested in, you can download it here:

Best Practices for Red Hat Virtualization 4

Hope this helps,

Captain KVM

 

The post Best Practices for RHV Continue reading

An Introduction to oVirt Ansible Roles

Today I would like to share with you some of the integration work with Ansible 2.3 that was done in the latest oVirt 4.1 release. The Ansible integration work was quite extensive and included Ansible modules that can be utilized for automating a wide range of oVirt tasks, including tiered application deployment and virtualization infrastructure management.

While Ansible has multiple levels of integrations, I would like to focus this article on oVirt Ansible roles. As stated in the Ansible documentation: “Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions – they allow you to focus more on the big picture and only dive into the details when needed.”

We used the above logic as a guideline for developing the oVirt Ansible roles. We will cover three of the many Ansible roles available for oVirt:

For each example, I will describe the role's purpose and how it is used.

oVirt Infra

The purpose of this role is to automatically configure and manage an oVirt datacenter. It will take a newly deployed, but not yet configured, oVirt engine (RHV-M for RHV users), hosts, and storage and Continue reading

Hyper-converged infrastructure – Part 1 : Is it a real thing ?

Recently I was lucky enough to play with Cisco HyperFlex in a lab, and since it was fun to play with, I decided to write a basic blog post about the hyper-converged infrastructure concept (experts, you can move along and read something else). It has really piqued my interest. I know I may be […]

The post Hyper-converged infrastructure – Part 1 : Is it a real thing ? appeared first on VPackets.net.
