Archive

Category Archives for "Virtualization"

No surprise performance test

Recently I had a need to deploy a Python Flask-based application. Although Flask has a convenient built-in CLI to run your application while developing, the deployment documentation provides a bunch of production-ready deployment methods. After going through the various documentation and learning about the event-loop-based implementation of Gevent, I decided to … Continue reading No surprise performance test
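
For context, the gevent deployment the post alludes to boils down to swapping Flask's development server for gevent's WSGI server. A minimal sketch, assuming a trivial app (the app, route, and port below are illustrative placeholders, not the author's actual setup):

# Minimal sketch: serve a Flask app with gevent's event-loop based WSGI
# server instead of the built-in development CLI. App and port are
# illustrative placeholders.
from flask import Flask
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # Each request is handled in a greenlet on gevent's event loop, so slow
    # clients don't tie up the process the way they would with the
    # single-threaded development server.
    WSGIServer(("0.0.0.0", 8000), app).serve_forever()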

Automating NSX-T

An attendee of our Building Network Automation Solutions online course decided to automate his NSX-T environment and sent me this question:

I will be working on NSX-T quite a lot these days and I was wondering how I could automate my workflow (lab + production) to produce a certain consistency in my work.
I’ve seen that VMware relies a lot on PowerShell, and I haven’t invested a lot in that yet … and I would like to get more skills and become more proficient using Python right now.

Always select the most convenient tool for the job; regardless of personal preferences, PowerShell seems to be the one to use in this case.

Read more ...
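
If you do go down the Python route anyway, NSX-T also exposes a REST API that the requests library can drive directly. A minimal sketch, assuming basic authentication against the NSX-T Manager API (the manager FQDN, credentials, and endpoint path are assumptions to verify against your NSX-T version):

# Sketch: list logical switches via the NSX-T Manager REST API. The manager
# address, credentials, and endpoint are illustrative placeholders.
import requests

MANAGER = "https://nsx-manager.example.com"

resp = requests.get(
    f"{MANAGER}/api/v1/logical-switches",
    auth=("admin", "password"),  # placeholders; use real credentials
    verify=False,                # lab only; verify the manager CA in production
)
resp.raise_for_status()
for switch in resp.json().get("results", []):
    print(switch["display_name"], switch["id"])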

Test Driving transmission for multi-site file sync

As the industry moves toward more distributed deployment of services, syncing files across multiple locations is a problem that often needs to be solved. In the world of file syncing, two algorithms stand out. One is rsync, a very efficient tool for syncing files. It works great when you have … Continue reading Test Driving transmission for multi-site file sync
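
To see why delta-based syncing is so efficient, here is a purely conceptual sketch: this is not rsync's actual rolling-checksum algorithm, just a fixed-block illustration of transferring only the blocks that changed.

# Conceptual sketch of block-based delta sync (NOT rsync's real rolling
# checksum): hash fixed-size blocks on both sides and send only the blocks
# whose hashes differ.
import hashlib

BLOCK = 4096

def block_hashes(data):
    """Hash every fixed-size block of the file contents."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(local, remote_hashes):
    """Return the blocks the remote side is missing or has stale copies of."""
    delta = {}
    for idx, digest in enumerate(block_hashes(local)):
        if idx >= len(remote_hashes) or remote_hashes[idx] != digest:
            delta[idx] = local[idx * BLOCK:(idx + 1) * BLOCK]
    return delta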

Cross-vCenter NSX at the Center for Advanced Public Safety

Jason Foster is an IT Manager at the Center for Advanced Public Safety at the University of Alabama. The Center for Advanced Public Safety (CAPS) originally developed software that provided crash reporting and data analytics for the State of Alabama. Today, CAPS specializes in custom software, mostly in the realm of law enforcement and public safety. They have created systems for many states and government agencies across the country.

Bryan Salek, Networking and Security Staff Systems Engineer, spoke with Jason about network virtualization, what led the Center for Advanced Public Safety to choose VMware NSX Data Center, and what the future holds for their IT transformation.

The Need for Secure and Resilient Infrastructure

As part of a large data-center modernization initiative, the forward-thinking CAPS IT team began to investigate micro-segmentation. Security is a primary focus at CAPS because the organization develops large software packages for various state agencies. The applications that CAPS writes and builds are hosted together, but they contain confidential information and need to be segmented from one another.

Once CAPS rolled out the micro-segmentation use case, the IT team decided to leverage NSX Data Center for disaster recovery purposes as Continue reading

Last Week on ipSpace.net (2019W4)

The crazy pace of webinar sessions continued last week. Howard Marks continued his deep dive into Hyper-Converged Infrastructure, this time focusing on go-to-market strategies, failure resiliency with replicas and local RAID, and the eternal debate (if you happen to be working for a certain $vendor) over whether it’s better to run your HCI code in a VM rather than in the hypervisor kernel like your competitor does. He concluded with a description of what the major players (VMware vSAN, Nutanix, and HPE SimpliVity) do.

On Thursday I started my Ansible 2.7 Updates saga, describing how the network_cli plugin works, how the generic CLI modules are implemented, how to use SSH keys or usernames and passwords for authentication (and how to keep them secure), and how to execute commands on network devices (including an introduction to the gory details of parsing text, JSON, or XML outputs).

The last thing I managed to cover was the cli_command module and how you can use it to execute any command on a network device… and then I ran out of time. We’ll continue with sample playbooks and network device configurations on February 12th.

You can get access to both webinars with a Standard ipSpace.net subscription.

Federate oVirt engine authentication to OpenID Connect infrastructure

In this post I will show how to integrate OIDC with the oVirt engine using Keycloak and LDAP user federation.

Prerequisites: I assume you have already set up the 389ds directory server, but the solution is very similar for any other LDAP provider. As OIDC is not integrated into oVirt directly, we use Apache to do the OIDC authentication for us. The mod_auth_openidc module nicely covers all needed functionality.

Overview

We integrate with an external OpenID Connect Identity Provider (IDP) to provide Single Sign-On (SSO) across products that use the IDP for authenticating users. oVirt currently ships its own SSO to provide unified authentication across the Administration and VM portals. The oVirt engine SSO also provides tokens for REST API clients and supports bearer authentication, so clients can reuse tokens to access the oVirt engine REST API. With external IDP integration, the internal oVirt SSO is disabled and browser users are redirected to the external IDP for authentication. After successful authentication, users can access both the Admin and VM portals as they normally do. REST API clients don't have to change: they can still obtain a token from engine SSO and use it for bearer authentication to access the oVirt engine REST API. Engine SSO acts as a proxy, obtaining the Continue reading
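
That last point is easy to check from a REST client's perspective. A minimal sketch, assuming the standard engine SSO token endpoint (the engine FQDN and credentials below are placeholders; verify the flow against your oVirt version):

# Sketch: obtain a token from oVirt engine SSO, then reuse it as a Bearer
# token against the REST API. Hostname and credentials are placeholders.
import requests

ENGINE = "https://engine.example.com"
CA = "/etc/pki/ovirt-engine/ca.pem"  # engine CA certificate

token = requests.post(
    f"{ENGINE}/ovirt-engine/sso/oauth/token",
    data={"grant_type": "password", "scope": "ovirt-app-api",
          "username": "admin@internal", "password": "secret"},
    headers={"Accept": "application/json"},
    verify=CA,
).json()["access_token"]

api = requests.get(
    f"{ENGINE}/ovirt-engine/api",
    headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    verify=CA,
)
print(api.status_code)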

oVirt and OKD

This is a series of posts to demonstrate how to install OKD 3.11 on oVirt and what you can do with it. Part I - How to install OKD 3.11 on oVirt

How to install OKD 3.11 on oVirt (4.2 and up)

Installing OKD or Kubernetes on oVirt has many advantages, and it's also gotten a lot easier these days. Admins and users who want to take container platform management for a spin on oVirt will be encouraged by this.
A few of the advantages are:

  • Virtualizing the control plane for Kubernetes - providing HA/backup/affinity capabilities to the controllers and allowing hardware maintenance cycles
  • Providing persistent volumes for containers via the IaaS, without the need for an additional storage array dedicated to Kubernetes
  • Allowing a quick method to build up/tear down Kubernetes clusters, providing a hard tenancy model between clusters via VMs

The installation uses openshift-ansible and, specifically, the openshift_ovirt Ansible role. The integration between OpenShift and oVirt is tighter these days and provides storage integration: if you need persistent volumes for your containers, you can get them directly from oVirt using ovirt-volume-provisioner and ovirt-flexvolume-driver.
For the sake of simplicity, this example covers an all-in-one OpenShift cluster on a single VM.
Continue reading
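
For orientation, the overall flow with openshift-ansible usually comes down to running two playbooks against your inventory. A rough sketch, driven from Python here only for consistency with the other examples (the inventory path is a placeholder, and its openshift_ovirt variables are elided; see the full post):

# Rough sketch: run the openshift-ansible 3.11 playbooks against an
# inventory. The inventory path is a placeholder; its contents (including
# the openshift_ovirt variables) come from the full post.
import subprocess

INVENTORY = "/path/to/inventory"  # placeholder

for playbook in ("playbooks/prerequisites.yml", "playbooks/deploy_cluster.yml"):
    subprocess.run(
        ["ansible-playbook", "-i", INVENTORY, playbook],
        cwd="openshift-ansible",  # a release-3.11 checkout of openshift-ansible
        check=True,               # stop if a playbook fails
    )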

VMworld 2018 Europe Sessions on NSX Networking and Security in VMware Cloud on AWS

VMworld 2018 Europe in Barcelona is a week away. Want to learn more about NSX Networking and Security in VMware Cloud on AWS, how you can easily deploy and secure workloads in the cloud, or how to build hybrid cloud solutions with the familiarity and capabilities of vSphere? Make sure to attend the sessions below at VMworld 2018 Europe next week. We will take a deep dive into all the functionality and show how VMware Cloud on AWS is being used by customers. Continue reading

VMware NSX: The Good, the Bad and the Ugly

After four live sessions, we finished the VMware NSX Technical Deep Dive webinar yesterday. I still have to edit the materials, but the whole thing is already over six hours long, and there are two more guest speaker sessions to come.

Anyway, in the previous sessions we covered all the good parts of NSX and a few of the bad ones. All that was left for yesterday were the ugly parts.

Read more ...

Test Driving Inter Regional VPC peering in AWS

Connect AWS VPCs hosted in different regions. An AWS Virtual Private Cloud (VPC) provides a way to isolate a tenant’s cloud infrastructure. To a tenant, a VPC provides a view of their own virtual infrastructure in the cloud that is completely isolated and has its own compute, storage, network connectivity, security settings, etc. In the physical world, Amazon’s … Continue reading Test Driving Inter Regional VPC peering in AWS
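
As a taste of what the post walks through, here is a minimal boto3 sketch of the inter-region peering handshake (region names and VPC IDs are placeholders; route table and security group updates are omitted):

# Sketch: inter-region VPC peering with boto3. IDs and regions are
# placeholders; route-table and security-group changes are omitted.
import boto3

requester = boto3.client("ec2", region_name="us-east-1")
accepter = boto3.client("ec2", region_name="eu-west-1")

peering = requester.create_vpc_peering_connection(
    VpcId="vpc-11111111",      # requester VPC in us-east-1
    PeerVpcId="vpc-22222222",  # accepter VPC in eu-west-1
    PeerRegion="eu-west-1",    # what makes this an inter-region peering
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The request must be accepted from the accepter's region (it may take a
# moment to propagate across regions before it can be accepted).
accepter.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)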

Custom VPC and Internet Access in AWS

Create your VPC, launch EC2 instances, and get internet access with a public IP. With a Virtual Private Cloud (VPC), tenants can create their own cloud-based infrastructure in AWS. While AWS provides a default VPC for a new tenant, there are always use cases that require creating a custom VPC. While exploring custom VPCs, I found … Continue reading Custom VPC and Internet Access in AWS
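
The moving parts of a "public" custom VPC are easy to enumerate: a VPC, a subnet, an internet gateway, and a default route. A minimal boto3 sketch (CIDRs and region are placeholders; tagging and error handling are omitted):

# Sketch: custom VPC with internet access via boto3. CIDRs and region are
# placeholders; tagging and error handling are omitted.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# An internet gateway plus a default route is what makes the subnet public
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id,
                 DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)

# Auto-assign public IPs to instances launched in this subnet
ec2.modify_subnet_attribute(SubnetId=subnet_id,
                            MapPublicIpOnLaunch={"Value": True})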

VXLAN and EVPN on Hypervisor Hosts

One of my readers sent me a series of questions regarding a new cloud deployment where the cloud implementers want to run VXLAN and EVPN on the hypervisor hosts:

I am currently working on a leaf-and-spine VXLAN+EVPN PoC. At the same time, the systems team in my company is working on building a CloudStack platform and is insisting on using VXLAN on the compute nodes, even to the point of using BGP for inter-VXLAN traffic on the nodes.

Using VXLAN (or GRE) encap/decap on the hypervisor hosts is nothing new. That’s how NSX and many OpenStack implementations work.

Read more ...
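
To make the encapsulation concrete, here is a scapy sketch of the frame a hypervisor VTEP puts on the wire: outer Ethernet/IP/UDP between the hypervisors, a VXLAN header carrying the VNI, and the original VM-to-VM frame inside (all addresses and the VNI are made up):

# Sketch: the shape of a VXLAN-encapsulated frame emitted by a hypervisor
# VTEP. Addresses and VNI are illustrative.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

frame = (
    Ether() /
    IP(src="192.0.2.1", dst="192.0.2.2") /  # outer: VTEP to VTEP (hypervisors)
    UDP(dport=4789) /                       # IANA-assigned VXLAN port
    VXLAN(vni=5000) /                       # virtual network identifier
    Ether(src="02:00:00:00:00:01",          # inner: the original VM frame
          dst="02:00:00:00:00:02") /
    IP(src="10.0.0.1", dst="10.0.0.2")
)
frame.show()  # print the layered structure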

oVirt SAML with Keycloak using 389ds user federation

In this post I will show how simple it is to integrate SAML with oVirt using Keycloak and LDAP user federation.

Prerequisites: I assume you have already set up the 389ds directory server, but the solution is very similar for any other LDAP provider. As SAML is not integrated into oVirt directly, we use Apache to do the SAML authentication for us. The mod_auth_mellon module nicely covers all needed functionality.

mod_auth_mellon configuration

First we need to configure oVirt's Apache. SSH to the oVirt engine and create a directory where we'll store all SAML-related certificates.

ssh root@<engine_FQDN>           # log in to the oVirt engine host
yum install -y mod_auth_mellon   # install the Apache SAML SP module
mkdir -p /etc/httpd/saml2        # directory for SAML certificates and metadata

When we install the mod_auth_mellon package, it will create /etc/httpd/conf.d/auth_mellon.conf. We need to modify this file to suit our needs, as follows:

<Location />
    MellonEnable "info"
    MellonDecoder "none"
    MellonVariable "cookie"
    MellonSecureCookie On
    MellonSessionDump On
    MellonSamlResponseDump On
    MellonSessionLength 86400

    MellonUser "NAME_ID"
    MellonEndpointPath /saml2

    MellonSPCertFile /etc/httpd/saml2/ovirtsp-cert.cert
    MellonSPPrivateKeyFile /etc/httpd/saml2/ovirtsp-key.key
    MellonSPMetadataFile /etc/httpd/saml2/ovirtsp-metadata.xml
    MellonIdPMetadataFile /etc/httpd/saml2/idp-metadata.xml

    RewriteEngine On
    RewriteCond %{LA-U:REMOTE_USER} ^(.*)$
    RewriteRule ^(.*)$ - [L,NS,P,E=REMOTE_USER:%1]
    RequestHeader set X-Remote-User %{REMOTE_USER}s
</Location>

<LocationMatch ^/ovirt-engine/sso/(interactive-login-negotiate|oauth/token-http-auth)|^/ovirt-engine/api>
  <If "req('Authorization') !~ /^(Bearer| Continue reading

Skydive With oVirt

Skydive is an open source, real-time network topology and protocols analyzer that provides a comprehensive way of understanding what is happening in your network infrastructure. Common use cases include troubleshooting, monitoring, SDN integration, and much more. It has features such as:

  • Topology capturing - captures the network topology: interfaces, bridges, and more
  • Flow capture - distributed probes, an L2-L4 classifier, and GRE, VXLAN, Geneve, MPLS/GRE, and MPLS/UDP tunnelling support
  • Extendable - supports external SDN controllers and container-based infrastructure such as OpenStack, and supports extensions through its API

Benefit to oVirt users

Skydive lets oVirt administrators see the network configuration and topology of their oVirt cluster. Administrators can capture traffic from one VM to another or monitor the traffic between VMs or hosts. Skydive can also generate traffic between two running VMs on different hosts and then analyze it. Administrators can create alerts in the Skydive UI to be notified when traffic is disconnected or down.
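
The topology can also be queried programmatically. A minimal sketch against the analyzer's Gremlin topology API (the analyzer address, port, and exact endpoint follow upstream documentation but should be verified for your Skydive version):

# Sketch: query the Skydive analyzer topology API with a Gremlin expression.
# Analyzer address/port are placeholders; endpoint details may vary by
# Skydive version.
import requests

ANALYZER = "http://skydive-analyzer.example.com:8082"

resp = requests.post(
    f"{ANALYZER}/api/topology",
    json={"GremlinQuery": "G.V().Has('Type', 'ovsbridge')"},  # all OVS bridges
)
resp.raise_for_status()
for node in resp.json():
    print(node)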

Installation steps

  1. git clone https://github.com/skydive-project/skydive.git
  2. Create an inventory file:

     [skydive:children]
     analyzers
     agents
    
     [skydive:vars]
     skydive_listen_ip=0.0.0.0
     skydive_fabric_default_interface=ovirtmgmt
    
     skydive_os_auth_url=https://<ovn_provider_FQDN>:35357/v2.0
     skydive_os_service_username=<ovn_provider_username>
     skydive_os_service_password=<ovn_provider_password>
     skydive_os_service_tenant_name=service
     skydive_os_service_domain_name=Default
     skydive_os_service_region_name=RegionOne
    
     [analyzers]
     <analyzer_FQDN> ansible_ssh_user=root ansible_ssh_pass=<ssh_password>
    
     [agents]
     <agent_FQDN> ansible_ssh_user=root  Continue reading
