Long-time readers know that my wife, Crystal, has been running this thing called Spousetivities for a few (OK, eight) years now. While Spousetivities originally started out as a VMworld thing, it rapidly expanded, and this year Spousetivities will be at a number of events. That includes the spring OpenStack Summit in Austin, TX!
If you’re planning to attend the Summit in Austin, why not bring your spouse/fiancé/partner/significant other with you? I can tell you from personal experience that having him or her there makes the conference experience more pleasant. In this particular case, Austin is a great place to visit in April, and it’s very affordable. Besides, Spousetivities has a great set of activities planned to keep your traveling companion(s) entertained while you’re at the conference.
Here’s a quick look at some of what’s planned for that week:
On the Spousetivities Continue reading

This is the first in a series of posts about how Ansible and Ansible Tower enable you to manage your infrastructure simply, securely, and efficiently.
When we talk about Tower, we often talk in terms of Control, Knowledge, and Delegation. But what does that mean? In this series of blog posts, we'll describe some of the ways you can use Ansible and Ansible Tower to manage your infrastructure.
The first step of controlling your infrastructure is to define what it is actually supposed to be. For example, you may want to apply available updates - here's a basic playbook that does that.
---
- hosts: all
  gather_facts: true
  become: true  # needed so become_method/become_user actually escalate privileges
  become_method: sudo
  become_user: root
  tasks:
    - name: Apply any available updates
      yum:
        name: "*"
        state: latest
        update_cache: yes
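Assuming the playbook is saved as update.yml and you have an inventory file named inventory (both placeholder names), you’d run it with something like:
ansible-playbook -i inventory update.yml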
Or you may have more detailed configuration needs. Here’s an example playbook for basic system configuration. This playbook:
Configures some users
Installs and configures chrony, sudo, and rsyslog remote logging
Sets some SELinux parameters
Normally, we’d organize our configuration into Ansible roles for reusability, but for the purpose of this exercise we're just going to use one long playbook.
We'd want to apply this as part of our standard system configuration.
Continue reading
In this post I’m going to share how to add some Git and Docker Machine “awareness” to your OS X Bash prompt. This isn’t anything new; these tricks are things that Bash users have been employing for years, especially on Linux. For most OS X users, though, I think these are tricks/tools that aren’t particularly well-known so I wanted to share them here.
I’ll divide this post into two sections: one covering Git and one covering Docker Machine.
Please note that I’ve only tested these on El Capitan (OS X 10.11), but it should work similarly for most recent versions of OS X.
Before I get started, allow me to explain what I mean by “awareness.” For Git, it means your Bash prompt displays information about the current repository, such as the active branch. For Docker Machine, it means your prompt shows which Docker Machine environment is active (set via eval $(docker-machine env <name>)), and that you get tab completion for most Docker Machine commands and machines.
Ready? Let’s get started!
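As a preview of the end result, here’s a minimal sketch for ~/.bash_profile. It assumes Git’s git-prompt.sh and the docker-machine-prompt.bash script from the Docker Machine contrib scripts are installed at the paths shown; those paths are examples and vary by system and install method:
# Load the Git prompt helper that ships with Git (path is an example)
source /usr/local/etc/bash_completion.d/git-prompt.sh
# Load the Docker Machine prompt helper from docker/machine's contrib scripts (path is an example)
source /usr/local/etc/bash_completion.d/docker-machine-prompt.bash
# __git_ps1 inserts the current branch; __docker_machine_ps1 inserts the active machine
export PS1='\u@\h \W$(__git_ps1 " (%s)")$(__docker_machine_ps1 " [%s]")\$ '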
To add some Continue reading
In this post I’m going to talk about how to use Docker Machine to build a Docker Swarm cluster on Amazon Web Services (AWS). This post is an adaptation of this Docker documentation post that shows how to build a Swarm cluster using VirtualBox.
This post builds on the earlier post I wrote on using Docker Machine with AWS, so feel free to refer back to that post for more information or more details anywhere along the way.
At a high level, the process looks like this:
Let’s take a look at these steps in a bit more detail.
There are at least a couple of ways to do this, but they pretty much all involve a Linux VM running the Swarm Docker image. It’s up to you exactly how you want to do this: you can use a local VM, or you can use an AWS instance. The Docker documentation tutorial uses a local VM with the VirtualBox driver:
docker-machine create -d virtualbox local
eval $(docker-machine env local)
docker run swarm create
The first command above creates a VirtualBox VM (named “local”) and Continue reading
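The post is truncated here, but for the AWS variation the pattern is the same once you have a discovery token from docker run swarm create. As a hedged sketch (the region, machine names, and <token> value are all placeholders):
# Create the Swarm manager on EC2
docker-machine create -d amazonec2 --amazonec2-region us-west-2 \
  --swarm --swarm-master --swarm-discovery token://<token> swarm-master
# Create a node that joins the same cluster
docker-machine create -d amazonec2 --amazonec2-region us-west-2 \
  --swarm --swarm-discovery token://<token> swarm-node-01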

Knowing the members of our Ansible community is important to us, and we want you to get to know the members of our team in (and outside of!) the Ansible office. Stay tuned to the blog to learn more about the people who are helping to bring Ansible to life.
This week we’d like to introduce Jason McKerr, who joined Red Hat in January as the director of the Ansible Core team. Jason has been in this space before as the VP of Engineering at Puppet. Before Puppet he worked at SocialCode (The Washington Post Company) and MyWebGrocer as both a software architect and a manager. And back in the day, he was the first Operations Manager at the Open Source Lab at OSU.
What’s your role at Ansible?
The title says “director, Ansible Core team” but the role is really about working with all of the various user groups and communities around Ansible. The first priority is getting new features, bug and security fixes, and releases out the door - and to that end we published our first public roadmap for the 2.1 release. Additionally, I am really focused on getting Ansible into Red Hat product development cycles.
One of the projects that I started last year was my GitHub “learning-tools” repository, in which I store tools (of various sorts) to help with learning new technologies. Many of these tools are Vagrant environments, but some are sample templates for other tools like Terraform. I recently made some updates to a couple of the tools in this repo, so I wanted to briefly update my readers.
This area of the repository was already present, but the repo’s main README.md included a note indicating that it wasn’t fully functional. After having to work through some other issues (issues that resulted in this blog post), I was finally able to create the tools and assets to make this environment easily repeatable. So, if you’d like to work with Docker using IPVLAN interfaces in L2 mode, have a look in the docker-ipvlan folder of the repository. The folder-specific README.md is pretty self-explanatory, but if you run into any problems feel free to open a GitHub issue.
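If you just want a feel for what the environment exercises, creating an IPVLAN network in L2 mode uses Docker’s ipvlan network driver (experimental at the time of writing). A sketch, with the subnet, gateway, and parent interface as placeholder values:
# Create a Docker network using the ipvlan driver in L2 mode
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o ipvlan_mode=l2 -o parent=eth0 ipvlan-net
# Launch a container attached to that network
docker run --rm -it --net=ipvlan-net alpine sh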
This is an entirely new area of the repo. Thanks in part to being able to complete Continue reading

When you first start using Ansible, you go from writing bash scripts that you upload and run on machines to running playbooks that describe a desired end state. You go from a write-once, read-never set of scripts to easily readable and updatable YAML. Life is good.
Fast forward to when you become an Ansible power user. You’re now:
Writing playbooks that run on multiple distros
Breaking down your complex Ansible project into multiple bite-sized roles
Using variables like a boss: host vars, group vars, include variable files
Tagging every possible task and role so you can jump to any execution point and control the execution flow
Sharing your playbooks with colleagues, who have started contributing back
As you gain familiarity with Ansible, you inevitably end up doing more and more stuff, which in turn makes the playbooks and roles that you’re creating and maintaining longer and a bit more complex. The side effect is that you may feel that development begins to move a bit slower as you manually take the time to verify variable permutations. When you find yourself in this situation, it’s time to start testing. Here’s how to get started by using Docker and Ansible to automatically test Continue reading
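The walkthrough itself is truncated here, but the core pattern is worth sketching. One hedged example (the image name and playbook path are placeholders, not the post’s actual setup): mount your project into a throwaway container that has Ansible installed, syntax-check the playbook, then apply it to the container itself to verify the end state.
# Syntax-check the playbook inside a disposable container
docker run --rm -v "$(pwd)":/ansible -w /ansible my-ansible-test-image \
  ansible-playbook -i 'localhost,' -c local --syntax-check site.yml
# Then actually apply it to the container to verify the end state
docker run --rm -v "$(pwd)":/ansible -w /ansible my-ansible-test-image \
  ansible-playbook -i 'localhost,' -c local site.yml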
As part of a broader effort (see the post on my 2016 projects) to leverage public cloud resources more than I have in the past, some Docker Engine-related testing I’ve been conducting recently has been done using AWS EC2 instances instead of VMs in my home lab. Along the way, I’ve found Docker Machine to be quite a handy tool, and in this post I’ll share how to use Docker Machine with AWS.
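To give a sense of the end result before digging into the individual flags below, a typical invocation looks something like this (the region and machine name are just examples):
# Create a Docker host on EC2; credentials come from the environment or the AWS CLI config
docker-machine create -d amazonec2 --amazonec2-region us-west-2 aws-test01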
By and large, using Docker Machine with AWS is pretty straightforward. You can get an idea of what information Docker Machine needs by running docker-machine create -d amazonec2 --help. (You can also view the documentation for the AWS driver specifically.) The key pieces of information you need are:
--amazonec2-access-key: This is your AWS access key. Docker Machine can read it from the $AWS_ACCESS_KEY_ID environment variable, or—if you have the AWS CLI installed—Docker Machine can read it from there.
--amazonec2-secret-key: This is your AWS secret key. As with the AWS access key, Docker Machine can read this from an environment variable ($AWS_SECRET_ACCESS_KEY) or from the AWS CLI credentials file (by default, found in ~/.aws/credentials).
--amazonec2-region: The AWS driver defaults to Continue reading

In what has been a fairly classic “yak shaving” exercise, I’ve been working on getting Ubuntu 15.10 “Wily Werewolf” running with Vagrant so that I can perform some testing with some other technologies that need a Linux kernel version of at least 4.2 (which comes with Ubuntu 15.10 by default). Along the way, I ran smack into a problem with Ubuntu 15.10’s networking configuration when used with Vagrant, and in this post I’m going to explain what’s happening here and provide a workaround.
The issue (described here on GitHub, among other places) involves a couple of changes in Ubuntu Linux (and upstream Debian GNU/Linux as well, although I haven’t personally tested it). One of the changes concerns how network interfaces are named: instead of the “old” eth0 or eth1 naming convention, Ubuntu 15.10 now uses persistent interface names like ens32 or ens33. Additionally, an update to the “ifupdown” package now returns an error where an error apparently wasn’t returned before.
The end result is that when you try to create a Vagrant VM with multiple network interfaces, it fails. Using a single network interface is fine; the issue only rears its Continue reading
Welcome to Technology Short Take #63. I’ve managed to (mostly) get back to my Friday publishing schedule, though I’m running much later in the day this time around than usual. I’ll try to correct that for the next one. In any case, here’s another collection of links and articles from around the Net on the major data center technology areas. Have fun reading!
While talking with customers over the past couple of weeks during a multi-country/multi-continent trip, one phrase that kept coming up is “lock-in”, as in “we’re trying to avoid lock-in” or “this approach doesn’t have any lock-in”. While I’m not a huge fan of memes, this phrase always brings to mind The Princess Bride, Vizzini’s use of the word “inconceivable,” and Inigo Montoya’s famous response. C’mon, you know you want to say it: “You keep using that word. I do not think it means what you think it means.” I feel the same way about lock-in, and here’s why.
Lock-in, as I understand how it’s viewed, is an inability to migrate from your current solution to some other solution. For example, you might feel “locked in” to Microsoft (via Office or Windows) or “locked in” to Oracle (via their database platform or applications), or even “locked in” to VMware through vCenter and vSphere. Although these solutions/platforms/products might be the right fit for your particular problem/need, the fact that you can’t migrate is a problem. Here you are, running a product or solution or platform that is the right fit for your needs, but because you may not be able Continue reading