Archive

Category Archives for "Systems"

ANSIBLE COMMUNITY – 2017 YEAR IN REVIEW

2017 Community Post

It's time again for the annual Ansible community review. Let's start, as we do every year, with a quick look at the numbers.

Debian Popcon

Debian’s Popularity Contest is an opt-in way for Debian users to share information about the software they’re running on their systems.

As with every year, caveats abound with this graph -- but even though it represents only a small sample of the Linux distro world, it’s useful because it’s one of the few places where we can see an apples-to-apples comparison of install bases of various automation tools. Because Ansible is agentless, we compare the Ansible package to the server packages of other configuration management tools. (Chef does not make a Debian package available for Chef server.)

We see that Ansible has continued its steady growth in 2017, increasing its user base here by approximately 50% in the past year.

GitHub Metrics

2017 was a busy year for Ansible on the GitHub front, and in 2017 we caught the notice of GitHub itself. Ansible now has its own top level topic for GitHub searches, and that search reveals over 5000 repositories of Ansible content. We also made the 2017 GitHub Octoverse report, placing Continue reading

Using Ansible to manage RHEL 5 yesterday, today and tomorrow

Ansible and RHEL

With the release of Ansible 2.4, we now require that managed nodes have a Python version of at least 2.6. Most notably, this leaves RHEL 5 users asking how to manage RHEL 5 systems going forward, since RHEL 5 only provides Python 2.4.

Background

With the release of Ansible 2.4 in September 2017, we have moved to support Python 2.6 or higher on the managed nodes. This means previous support for Python-2.4 or Python-2.5 is no longer available:

Support for Python-2.4 and Python-2.5 on the managed system's side was dropped. If you need to manage a system that ships with Python-2.4 or Python-2.5, you'll need to install Python-2.6 or better on the managed system.
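One way to perform that installation with Ansible itself is to bootstrap the newer interpreter using the raw module, which does not require Python on the target, and then point Ansible at the new interpreter. The following is a minimal sketch only; it assumes a python26 package is available to your RHEL 5 systems (for example from EPEL or a vendor channel), and the rhel5 group name is hypothetical:

- hosts: rhel5
  become: true
  gather_facts: false
  vars:
    # Point Ansible at the interpreter we are about to install
    ansible_python_interpreter: /usr/bin/python2.6
  tasks:
    - name: Bootstrap Python 2.6 (raw does not need Python on the target)
      raw: yum -y install python26

    - name: Regular modules work once the interpreter is in place
      ping: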

This was bound to happen at some point in time because Python 2.6 was released almost 10 years ago, and most systems in production these days are based on 2.6 or a newer version. Furthermore, Python 3 is getting more and more traction, and in the long term we need to be able to support it. However, as the official Python documentation shows, code that runs on both Python 2. Continue reading

Getting Started: LDAP Authentication in Ansible Tower

Ansible Getting Started LDAP

Next in the Getting Started series, we're covering the basics of configuring Red Hat Ansible Tower to allow users to log in with LDAP credentials. In this post, we'll also share a few troubleshooting tips to help narrow down problems and correct them. As long as you have a map of your LDAP tree/forest, this post should help get users logging in with their LDAP credentials.

CONFIGURATION SETTINGS

To configure your Ansible Tower for LDAP authentication, navigate to Settings (the gear icon) and to the "Configure Tower" section. The area within these configuration settings we're focusing on is "Authentication", and the subcategory should be set to "LDAP".


The fields that will be the primary focus are:

  • LDAP server URI
  • Bind DN and password
  • User/group searches

The other fields will allow you to refine your LDAP searches to reduce the resources used in production or map your organization.

The LDAP URI is simply the IP or hostname of your LDAP server prepended with the protocol (ldap://).


The Bind DN is the distinguished name of an account, typically the user followed by its group and domain components, together with that account's password; this account needs permission to read the LDAP structure.
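For illustration, typical values for these fields might look like the following (the hostname and DNs here are hypothetical placeholders):

LDAP Server URI:     ldap://ldap.example.com:389   (or ldaps://ldap.example.com:636)
LDAP Bind DN:        CN=tower-bind,OU=Service Accounts,DC=example,DC=com
LDAP Bind Password:  the password for that bind account
LDAP User Search:    ["OU=Users,DC=example,DC=com", "SCOPE_SUBTREE", "(sAMAccountName=%(user)s)"]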

REFINING USER SEARCH

With Ansible Tower able to connect to the LDAP Continue reading

Running OVS on Fedora Atomic Host

In this post, I’d like to share the results of some testing I’ve been doing to run Open vSwitch (OVS) in containers on a container-optimized Linux distribution such as Atomic Host (Fedora Atomic Host, specifically). I’m still relatively early in my exploration of this topic, but I felt like sharing what I’ve found so far might be helpful to others, and might help spark conversations within the relevant communities about how this experience might be improved.

The reason for the use of Docker containers in this approach is twofold:

  1. Many of the newer container-optimized Linux distributions—CoreOS Container Linux (soon to be part of Red Hat in some fashion), Project Atomic, etc.—eschew “traditional” package management solutions in favor of containers.
  2. Part of the reason behind my testing was to help the OVS community better understand what it would look like to run OVS in containers so as to help make OVS a better citizen on container-optimized Linux distributions.

In this post, I’ll be using Fedora 27 Atomic Host (via Vagrant with VirtualBox). If you use a different version or release of Atomic Host, your results may differ somewhat. For the OVS containers, I’m using the excellent keldaio/ovs Docker containers.
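If you want to follow along, spinning up the environment is as simple as the following (a sketch that assumes the fedora/27-atomic-host box published on Vagrant Cloud):

vagrant init fedora/27-atomic-host
vagrant up --provider=virtualbox
vagrant ssh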

Continue reading

Docker for Windows Desktop… Now With Kubernetes!

Today we are excited to announce that the beta for Docker for Windows Desktop with integrated Kubernetes is now available in the edge channel! This release includes Kubernetes 1.8, just like Docker for Mac and Docker Enterprise Edition, and it will allow you to develop Linux containers.

The easiest way to get Kubernetes on your desktop is here.

Simply check the box and go

Windows containers Kubernetes

What Can You Do with Kubernetes on Your Desktop?

Docker for Mac and Docker for Windows are the most popular way to configure a Docker dev environment, and are each used every day by millions of developers to build, test, and debug containerized apps. The beauty of building with Docker for Mac or Windows is that you can deploy the exact same set of Docker container images on your desktop as you do on your production systems with Docker EE.

Docker for Mac and Docker for Windows are used for building, testing and preparing to ship applications, whereas Docker EE provides the ability to secure and manage your applications in production at scale. You eliminate the “it worked on my machine” problem because you run the same Docker containers on the same Docker engines in development, testing, and production environments, along with the Continue reading

Dynamic Inventory: Past, Present & Future

In Red Hat Ansible Engine 2.4, we made some changes to how inventory works. We introduced a new CLI tool and added an inventory plugin type.

The goal of this plugin type was, well, to make Ansible Engine even more pluggable. All kidding aside, we wanted to provide Ansible Engine content authors and users a new way to visualize their target inventory. Using the ansible-inventory command, the targeted inventory is listed with the details of the hosts in that inventory and their host groups.
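The inventory source used in the example below, prod.aws_ec2.yml, is a plugin configuration file rather than an executable script. A minimal sketch of such a file, assuming the aws_ec2 inventory plugin and AWS credentials available from the environment, might look like this:

plugin: aws_ec2
regions:
  - us-west-2
filters:
  # Only return running instances (illustrative filter)
  instance-state-name: running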

For example:

[thaumos@ecb51a545078 /]# ansible-inventory -i ~/Development/playbooks/inventory/prod.aws_ec2.yml --list
{
    "_meta": {
        "hostvars": {
            "ec2-5x-xx-x-xxx.us-west-2.compute.amazonaws.com": {
                "AmiLaunchIndex": 2,
                "Architecture": "x86_64",
                "BlockDeviceMappings": [
                    {
                        "DeviceName": "/dev/sda1",
                        "Ebs": {
                            "AttachTime": "2017-12-13T15:40:19+00:00",
                            "DeleteOnTermination": false,
                            "Status": "attached",
                            "VolumeId": "vol-0514xxx"
                        }
                    }
                ],
                "ClientToken": "",
                "EbsOptimized": false,
                "Hypervisor": "xen",
                "ImageId": "ami-0c2aba6c",
                "InstanceId": "i-009xxxx3",
                "InstanceType": "t2.micro",
                "KeyName": "blogKey",
                "LaunchTime": "2017-12-13T15:40:18+00:00",
                "Monitoring": {
                    "State": "disabled"
                },
                "NetworkInterfaces": [
                    {
                        "Association": {
                            "IpOwnerId": "amazon",
                            "PublicDnsName": "ec2-5x-xx-x-xxx.us-west-2.compute.amazonaws.com",
                            "PublicIp": "5x.xx.x.xxx"
                        },
                        "Attachment": {
                            "AttachTime": "2017-12-13T15:40:18+00:00",
                            "AttachmentId": "eni-attach-97c4xxxx",
                            "DeleteOnTermination": true,
                            "DeviceIndex": 0,
                            "Status": "attached"
                        },
                        "Description": "",
                        "Groups": [
                            {
                                "GroupId": "sg-e63xxxxd",
                                "GroupName": "blogGroup"
                            }
                        ],
                        "Ipv6Addresses":  Continue reading

Meltdown, Spectre and Security Automation

Keeping computer systems secure is one of those never-ending tasks; you could be forgiven for thinking of it as "painting the Forth Bridge". Most of the time it's a matter of putting new software in place, and you're good. Every now and then it's, well, a bit more complicated.

The first week of January saw two flaws announced, called “Meltdown” and “Spectre.” Both involved the hardware at the heart of more or less every computing device on the planet – the processor. There is a great in-depth review of the two flaws here. You can also find some additional information in this blog by Red Hatter Jon Masters.

In the complex world of IT, keeping on top of security can be less painful with the help of an easy automation tool. One of our Ansible engineers, Sam Doran, has written a couple of Ansible plays to patch systems. While Meltdown and Spectre are not completely mitigated, we'd like to share these plays with you to demonstrate how to easily deploy the patches that are available; you can find them here:

If you make any improvements to them we'd welcome pull requests!
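For illustration only (this is not one of the plays linked above), the general approach on RPM-based systems boils down to updating the kernel and CPU microcode packages and rebooting; a minimal sketch might look like this:

- hosts: all
  become: true
  tasks:
    - name: Apply the latest kernel and microcode updates
      yum:
        name:
          - kernel
          - microcode_ctl
        state: latest

    - name: Schedule a reboot so the new kernel and microcode take effect
      command: shutdown -r +1 "Rebooting to apply Meltdown/Spectre updates"

As the post notes, OS patches alone do not completely mitigate these flaws, so treat a play like this as one part of a broader response.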

Using Docker Machine with Azure

I’ve written about using Docker Machine with a number of different providers, such as with AWS, with OpenStack, and even with a local KVM/Libvirt daemon. In this post, I’ll expand that series to show using Docker Machine with Azure. (This is a follow-up to my earlier post on experimenting with Azure.)

As with most of the other Docker Machine providers, using Docker Machine with Azure is reasonably straightforward. Run docker-machine create -d azure --help to get an idea of some of the parameters you can use when creating VMs on Azure using Docker Machine. A full list of the various parameters and options for the Azure driver is also available.

The only required parameter is --azure-subscription-id, which specifies your Azure subscription ID. If you don’t know this, or want to obtain it programmatically, you can use this Azure CLI command:

az account show --query "id" -o tsv

If you have more than one subscription, you’ll probably need to modify this command to filter it down to the specific subscription you want to use.
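For example, if you know the display name of the subscription, a query like the following (the subscription name is a placeholder) narrows the output to just that subscription's ID:

az account list --query "[?name=='My Subscription'].id" -o tsv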

Additional parameters that you can supply include (but aren’t limited to):

  • Use the --azure-image parameter to specify the VM image you’d like to Continue reading

An Update on Using Docker Machine with Vagrant

As part of a project on which I’m working, I’ve been spending some time working with Docker Machine and Vagrant over the last few days. You may recall that I first wrote about using these two tools together back in August 2015. As a result of spending some additional time with these tools—which I chose because I felt like they streamlined some work around this project—I’ve uncovered some additional information that I wanted to share with readers.

As a brief recap to the original article, I showed how you could use Vagrant to quickly and easily spin up a VM, then use Docker Machine’s generic driver to add it to Docker Machine, like this:

docker-machine create -d generic \
--generic-ssh-user vagrant \
--generic-ssh-key ~/.vagrant.d/insecure_private_key \
--generic-ip-address <IP address of VM> \
<name of VM>

This approach works fine if the Vagrant-created VM is reachable without port forwarding. What do I mean? In the past, the VMware provider for Vagrant used functionality in VMware Fusion or VMware Workstation to provide an RFC 1918-addressed network that had external access via network address translation (NAT). In Fusion, for example, this was the default “Share with my Mac” network. Thus, when Continue reading

Role-based Access Control for Kubernetes with Docker EE

Last week we released the latest beta for Docker Enterprise Edition. Without a doubt one of the most significant features in this release is providing a single management control plane for both Swarm and Kubernetes-based clusters – including clusters made up of both Swarm and Kubernetes workers. This offers customers unparalleled choice in how they manage both their traditional and cloud native applications.

When we were looking at doing this release we knew we couldn’t just slap a GUI on top of Kubernetes and call it good. We wanted to find areas where we could simplify and secure the deployment of applications onto Kubernetes nodes.

One such area is role-based access control (RBAC). Docker EE 17.06 introduced an enhanced RBAC solution that provided flexible and granular access controls across multiple teams and users. Kubernetes first introduced a basic RBAC solution with the 1.6 release; in this upcoming release, we extend Docker EE’s existing RBAC support to cover Kubernetes primitives.
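For reference, and independent of Docker EE, the plain Kubernetes RBAC primitives being referred to look like this; the resource names and the user jane are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io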

(If you’re not familiar with how RBAC works in Docker EE, please read my blog post from August 2017.)

In addition to the five predefined authentication roles in Docker EE (view only, full control, none, etc.) there are Continue reading

Getting Started: Adding Proxy Support


Getting Started with Adding Proxy Support

There are many reasons why proxies are implemented into an environment. Some can be put in place for security, others as load balancers for your systems. No matter the use, if you have a proxy in place, Red Hat Ansible Tower may need to utilize it. For a more in-depth look at what we will be doing in this post, you can visit our docs specifically on Proxy Support within Ansible Tower here.

Adding a Load Balancer (Reverse Proxy)

In some instances, you might have Ansible Tower behind a load balancer and need that information added to your instance. Sessions in Ansible Tower are associated with an IP address upon creation, and Ansible Tower’s policy requires that any use of the session match the original IP address.

To allow for support of a proxy, you will have to make a few changes to your Ansible Tower configuration. Previously, this would have been done in a settings.py file found on your Ansible Tower host, but as of 3.2 you can now make these changes in the UI. To make these edits, you must be an admin on the instance and navigate to Settings, and then Continue reading
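The setting you end up adjusting is REMOTE_HOST_HEADERS. When Ansible Tower sits behind a load balancer or reverse proxy that sets the X-Forwarded-For header, a typical value (a sketch; adjust for your environment) looks like this:

REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']

Tower works through this list in order until it finds an IP address, so placing HTTP_X_FORWARDED_FOR first tells Tower to trust the client IP reported by the proxy.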

Technology Short Take 93

Welcome to Technology Short Take 93! Today I have another collection of data center technology links, articles, thoughts, and rants. Here’s hoping you find something useful!

Networking

Servers/Hardware

Nothing this time around. Feel free to hit me up on Twitter if you have links you think I should include next time!

Security

Cloud Computing/Cloud Management

Beta for Docker Enterprise Edition with Kubernetes Integration Now Available

Beta for Docker EE with Kubernetes

Today we are excited to launch the public beta for Docker Enterprise Edition (Docker EE), our container management platform. First announced at DockerCon Europe, this release features Kubernetes integration as an optional orchestration solution, running side-by-side with Docker Swarm. With this solution, organizations will be able to deploy applications with either Swarm or fully-conformant Kubernetes while maintaining the consistent developer-to-IT workflow users have come to expect from Docker, especially when combined with the recent edge release of Docker for Mac with Kubernetes support. In addition to Kubernetes, this release includes enhancements to Swarm and to Docker Trusted Registry (DTR) which can be tested during the beta period.

Due to the high interest in this beta, license keys will be rolled out in batches over the next few weeks. Individuals who signed up for beta at www.docker.com/kubernetes will receive instructions on how to access this release and where to submit feedback. We also encourage our partners to use this time to test and validate their Docker and Kubernetes solutions against this release. Registrations will remain open throughout this beta testing period.

Explore the New Features

At DockerCon Europe, we demonstrated the management integration of Kubernetes within Docker EE. You can Continue reading

DockerCon 2018: Registration, CFP and first speakers

DockerCon 2018

Good news: Our favorite time of the year is less than 6 months away! After Seattle and Austin, we’re excited to bring DockerCon US 2018 back to San Francisco. With more than 6000 attendees gathering at Moscone Center on June 12-15, 2018, this year’s DockerCon US should be the biggest one to date.

In case you’ve never attended this conference before, DockerCon is the original container conference and the largest community and industry event for companies looking to define or refine their container platform strategy or cloud initiatives. If containers are important to your daily workflow or your business initiatives, you and your team (10% off registration for groups of 4 or more) should attend to learn about the latest updates from the Docker container platform, customer use cases and innovation coming from the Docker and cloud native ecosystems.

With 8 tracks, workshops, official Docker training, exec fireside chat, panels, community theaters and hands-on labs, attending DockerCon is one of the most effective ways to learn Docker no matter your level of systems expertise. We’ll have plenty of learning materials for everyone including developers, IT Professionals, Architects and Executives.

DockerCon Schedule

DockerCon 2018

DockerCon CFP and speakers

Don’t forget the deadline for CFP submission is Continue reading

Ansible Tower Feature Spotlight: Custom Credentials

RH-Ansible-Tower

One of the new features we added in Red Hat Ansible Tower 3.2 was support for custom credentials. The idea of custom credentials is pretty simple. In addition to the built-in credentials supported by Ansible Tower such as SSH keys, username/password combos, or cloud provider credentials, we now let the user define their own credentials for storing securely in Ansible Tower and exposing to playbooks.

However, this simplicity belies real power, opening up many new ways to automate. Let's dive into the details.

HOW CUSTOM CREDENTIALS WORK

To set up a custom credential, first you must define the custom Credential Type.

Credential types consist of two key concepts - "inputs" and "injectors".

Inputs define the value types that are used for this credential - such as a username, a password, a token, or any other identifier that's part of the credential.

Injectors describe how these credentials are exposed for Ansible to use - this can be Ansible extra variables, environment variables, or templated file content.
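As a sketch of how those two pieces fit together (the field and variable names here are hypothetical), a credential type for a simple API token might define its inputs and injectors like this:

Inputs:
  fields:
    - id: api_token
      type: string
      label: API Token
      secret: true
  required:
    - api_token

Injectors:
  env:
    MY_API_TOKEN: '{{ api_token }}'
  extra_vars:
    api_token: '{{ api_token }}'

With that in place, a playbook can read the token from the api_token extra variable or the MY_API_TOKEN environment variable, without the secret value ever living in the playbook itself.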

Once you've defined a new Credential Type, you can then create any number of credentials that use that type. We'll go through examples to shed light on how this Continue reading

Experimenting with Azure

I’ve been experimenting with Microsoft Azure recently, and I thought it might be useful to share a quick post on using some of my favorite tools with Azure. I’ve found it useful to try to leverage existing tools whenever I can, and so as I’ve been experimenting with Azure I’ve been leveraging familiar tools like Docker Machine and Vagrant.

The information here isn’t revolutionary or unique, but hopefully it will still be useful to others, even if only as a “quick reference”-type of post.

Launching an Instance on Azure Using Docker Machine

To launch an instance on Azure and provision it with Docker using docker-machine:

docker-machine create -d azure \
--azure-subscription-id $(az account show --query "id" -o tsv) \
--azure-ssh-user azureuser \
--azure-size "Standard_B1ms" azure-test

The first time you run this you’ll probably need to allow Docker Machine access to your Azure subscription (you’ll get prompted to log in via a browser and allow access). This will create a service principal that is visible via az ad sp list. Note that you may be prompted for authentication for future uses, although it will re-use the existing service principal once it is created.

Launching an Instance Using the Azure Provider Continue reading

Cisco Now Reselling Docker Enterprise Edition

Today we are excited to announce the expansion of our partnership with the availability of Docker Enterprise Edition (EE), our container management platform, on the Cisco Global Price List (GPL) and the release of the latest Cisco Validated Design (CVD):

Cisco UCS Infrastructure with Contiv and Docker Enterprise Edition for Container Management


Now customers can purchase Docker EE directly from Cisco and their joint resellers to jumpstart their new year’s resolution for a more modern application architecture, reduce IT costs, and redirect savings to innovation projects. And with our latest CVD for Cisco UCS compute infrastructure with Contiv, a secure container networking fabric, we’ve provided a roadmap on how to get started so customers and partners can achieve a faster, more reliable, and more predictable implementation of Docker EE.

For enterprises looking to use Docker’s container management platform but not sure where to start, we can help you take the first step. The Migrating Traditional Applications (MTA) Program, designed for IT operations teams, helps enterprises modernize existing legacy .NET Windows or Java Linux applications, without modifying source code or re-architecting the application, in just five days with Docker and Cisco Advanced Services. The results have been incredible, with customers saving over 50% on infrastructure costs and Continue reading

Issue with VMware-Formatted Cumulus VX Vagrant Box

I recently had a need to revisit the use of Cumulus VX (the Cumulus Networks virtual appliance running Cumulus Linux) in a Vagrant environment, and I wanted to be sure to test what I was doing on multiple virtualization platforms. Via Vagrant Cloud, Cumulus distributes VirtualBox and Libvirt versions of Cumulus VX, and there is a slightly older version that also provides a VMware-formatted box. Unfortunately, there’s a simple error in the VMware-formatted box that prevents it from working. Here’s the fix.

The latest version of Cumulus VX (as of this writing) is 3.5.0, and for this version both VirtualBox-formatted and Libvirt-formatted boxes are provided. For a VMware-formatted box, the latest version is 3.2.0, which you can install with this command:

vagrant box add CumulusCommunity/cumulus-vx --box-version 3.2.0

When this Vagrant box is installed using the above command, what actually happens is something like this (at a high level):

  1. The *.box file for the specific box, platform, and version is downloaded. This .box file is nothing more than a TAR archive with specific files included (see here for more details).

  2. The *.box file is expanded into the ~/.vagrant.d/boxes directory Continue reading
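You can see this structure for yourself by listing the contents of the downloaded archive and the expanded box directory (the archive filename below is illustrative):

tar -tvf cumulus-vx-3.2.0-vmware.box
ls ~/.vagrant.d/boxes/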

Using Your Own Private Registry with Docker Enterprise Edition

docker trusted registry

One of the things that makes Docker really cool, particularly compared to using virtual machines, is how easy it is to move around Docker images. If you’ve already been using Docker, you’ve almost certainly pulled images from Docker Hub. Docker Hub is Docker’s cloud-based registry service and has tens of thousands of Docker images to choose from. If you’re developing your own software and creating your own Docker images though, you’ll want your own private Docker registry. This is particularly true if you have images with proprietary licenses, or if you have a complex continuous integration (CI) process for your build system.

Docker Enterprise Edition includes Docker Trusted Registry (DTR), a highly available registry with secure image management capabilities which was built to run either inside of your own data center or on your own cloud-based infrastructure. In the next few weeks, we’ll go over how DTR is a critical component of delivering a secure, repeatable and consistent software supply chain.  You can get started with it today through our free hosted demo or by downloading and installing the free 30-day trial. The steps to get started with your own installation are below.
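Once DTR is up and running, the day-to-day workflow is the same push/pull you already use with Docker Hub, just aimed at your own hostname. A sketch, assuming a DTR instance at dtr.example.com and an engineering organization (both hypothetical):

docker login dtr.example.com
docker tag my-app:1.0 dtr.example.com/engineering/my-app:1.0
docker push dtr.example.com/engineering/my-app:1.0
docker pull dtr.example.com/engineering/my-app:1.0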

Setting Up Docker Enterprise Edition

Docker Trusted Registry runs on Continue reading

Docker for Mac with Kubernetes

Docker Community Edition

You heard about it at DockerCon Europe and now it is here: we are proud to announce that Docker for Mac with beta Kubernetes support is now publicly available as part of the Edge release channel. We hope you are as excited as we are!

With this release you can now run a single node Kubernetes cluster right on your Mac and use both kubectl commands and docker commands to control your containers.
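Once Kubernetes is enabled (see the notes below), a quick way to check that everything is wired up, assuming the default context name created by this release, is:

kubectl config use-context docker-for-desktop
kubectl get nodes
kubectl run hello-nginx --image nginx
docker ps    # the pod's containers are also visible to the Docker CLI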

First, a few things to keep in mind:

  • Docker for Mac required
    Kubernetes features are only accessible on macOS for now; Docker for Windows and Docker Enterprise Edition betas will follow at a later date. If you need to install a new copy of Docker for Mac you can download it from the Docker Store.
  • Edge channel required
    Kubernetes support is still considered experimental with this release, so to enable the download and use of Kubernetes components you must be on the Edge channel. The Docker for Mac version should be 17.12.0-ce-mac45 or later after updating.
  • Already using other Kubernetes tools?
    If you are already running a version of kubectl pointed at another environment, for example minikube, you will want to follow the activation Continue reading