After quite a bit of deliberation and planning, I’m excited (and nervous!) to announce the launch of a new podcast, titled “The Full Stack Journey Podcast”. Here are the details, structured in a Q&A format.
What topics does this new podcast cover?
The focus of The Full Stack Journey Podcast is the journey to becoming a full-stack engineer. That term is a bit loaded—some people like it, some people don’t, and there’s some disagreement over exactly what it means. I use it to describe someone who can work at multiple layers of the modern data center stack, crossing between different silos. This isn’t to say that a full-stack engineer is an expert in all of these areas, but it does mean having at least some knowledge and experience in each of them, with deeper expertise and experience in at least one. The podcast aims to provide real, relevant, practical information to help people with their “full-stack journey.”
Why is the idea of becoming a full-stack engineer important enough to warrant launching a podcast?
I strongly believe the future of the infrastructure engineer does not lie in being… Continue reading
Almost six years ago I shared my (then) current Getting Things Done (GTD) setup, in which I described how I used various tools, techniques, and applications to try to maximize my productivity. I’d been toying with updating that post, but I wasn’t sure that anyone would find it useful. However, a recent e-mail from a reader indicated that there probably is some interest; with that in mind, then, here’s an update on my GTD-like setup, circa early 2016.
Before I dive into the details, a couple of quick notes:
If you read the 2010 post, you may recall that I think of my workflow as having three “layers” of applications:
In early 2015, I posted a look ahead at my planned 2015 projects, in which I outlined some of the self-development projects I had set for myself for the year. In this post, I’m going to review my progress on those projects.
The 2015 projects were as follows:
So, how well did I do? Let’s take a look.
Complete a new book: Technically, I haven’t (fully) completed a new book, but given that my new book project with Jason Edelman and Matt Oswalt on network automation is available now as an Early Access edition, I suppose this should count for something. Strangely enough, this wasn’t the book project I had in mind at the start of 2015, but sometimes things like this take unexpected turns. Grade: C
Make more open source contributions: I expected this one to be easy, but it turns out that this is the area where my performance is the worst. I submitted a pull request to Terraform (for a docs update), but I did not make the contributions… Continue reading
There’s no question that the networking industry is undergoing significant changes. Sparked in part by software-defined networking (SDN), this sea change now includes an expanded focus on application programming interfaces (APIs), automation frameworks and toolkits, and improved manageability. As the industry undergoes this change, though, networking engineers must also undergo a change.
To help address this change, I’m very excited to announce a new book project targeting “next-generation network engineering skills.” I’ve joined forces with two folks that I really admire—Jason Edelman and Matt Oswalt—to write a new book focusing on the skills we believe are essential for the next-generation network engineer:
Network Programmability and Automation: Skills for the Next-Generation Network Engineer
The Early Access edition of the book is available now. If you’re familiar with O’Reilly’s Early Access program, you know that this is an incomplete version right now, but you’ll get regular updates and the final version of the book once it is complete. Plus, you get to provide feedback to us (the authors) while we write, which in turn helps improve the book. (And we greatly desire your feedback!)
So what’s in this book? Here’s a quick look at some of the topics we’re tackling:
This post will expand on some previous posts—one showing you how to set up and use an SSH bastion host and a second describing one use case for an SSH bastion host—to show how the popular configuration management tool Ansible can be used through an SSH bastion host.
The configuration/setup required to run Ansible through an SSH bastion host is actually reasonably straightforward, but I saw a lot of incomplete articles out there as I was working through this myself. My hope is to supplement the existing articles, as well as the Ansible documentation, to make this sort of configuration easier for others to embrace and understand.
There are two key concepts involved here that you’ll want to be sure you understand before you proceed:
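To give a sense of where this ends up, here is a minimal sketch of the sort of configuration involved. It uses Ansible’s ssh_args setting in ansible.cfg to pass a ProxyCommand through to the underlying OpenSSH client; the bastion hostname and user are illustrative, and the -W flag assumes OpenSSH 5.4 or later:

[ssh_connection]
ssh_args = -o ProxyCommand="ssh -W %h:%p -q admin@bastion.example.com"

With something like this in place, Ansible transparently tunnels its SSH sessions to the managed hosts through the bastion.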
Welcome to Technology Short Take #58. This will be the last Technology Short Take of 2015, as next week is Christmas and the following week is the New Year’s holiday. Before I present this episode’s collection of links, articles, and thoughts on various data center technologies, allow me to first wish all of my readers a very merry and very festive holiday season. Now, on to the content!
This post describes a method for using cloud-init to register a cloud instance into Consul on provisioning. I tested this on OpenStack, but it should work on any cloud platform that supports metadata services that can be leveraged by cloud-init.
I worked out the details for this method because I was interested in using Consul as a means to provide a form of “dynamic DNS” for OpenStack instances. (You can think of it as service registration and discovery for OpenStack instances.) As I’ll point out later in this post, there are a number of problems with this approach, but—if for no other reason—it was helpful as a learning exercise.
The idea was to automatically register OpenStack instances into Consul as they were provisioned. Since Consul offers a DNS interface, other instances and/or workloads could use DNS to look up these nodes’ registration. Consul offers an HTTP API (see here for details), so I started there. I used Paw (a tool I described here) to explore Consul’s HTTP API, building the necessary curl commands along the way. Once I had the right curl commands, the next step was to build a shell script that would pull the current… Continue reading
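As a rough sketch of the kind of call such a script builds (the node name, address, and Consul endpoint here are illustrative), registering a node through the HTTP API looks something like this:

curl -X PUT \
  -d '{"Node": "instance-01", "Address": "10.0.0.15"}' \
  http://consul.example.com:8500/v1/catalog/register

Once registered, the node becomes resolvable through Consul’s DNS interface (e.g., instance-01.node.consul).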
In this post I’d like to discuss a potential (minor) issue with modifying OpenStack security groups with Terraform. I call it a “potential minor” issue because there is an easy workaround, which I’ll detail in this post. I wanted to bring it to my readers’ attention, though, because as of this writing the issue had not yet been documented.
As you probably already know if you read my recent introduction to Terraform blog post, Terraform is a way to create configurations that automate the creation or configuration of infrastructure components, possibly across a number of different providers and/or platforms. In the introductory blog post, I showed you how to write a Terraform configuration that would create an OpenStack logical network and subnet, create a logical router and attach it to the logical network, and then create an OpenStack instance and associate a floating IP. In that example, I used a key part of Terraform, known as interpolation.
Broadly speaking, interpolation allows Terraform to reference variables or attributes of other objects created by Terraform. For example, how does one refer to a network that he or she has just created? Here’s an example taken from the introductory blog post:
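As a minimal sketch of that example (the resource names and CIDR here are illustrative), a subnet resource references the ID of the just-created network:

resource "openstack_networking_network_v2" "tf_net" {
  name           = "tf_net"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "tf_subnet" {
  name       = "tf_subnet"
  network_id = "${openstack_networking_network_v2.tf_net.id}"
  cidr       = "192.168.200.0/24"
  ip_version = 4
}

The ${openstack_networking_network_v2.tf_net.id} expression is the interpolation: Terraform resolves it to the actual network ID at apply time, and it also tells Terraform that the subnet depends on the network.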
Welcome to Technology Short Take #57. I hope you find something useful here!
…Vagrantfile and turns up the environment. Parts 4 and 5 walk through auto-configured OSPF and manually-configured OSPF, respectively. I’m looking forward to more posts in this series!

In this post, I’m going to discuss how to configure and use SSH multiplexing. This is yet another aspect of Secure Shell (SSH), the “Swiss Army knife” for administering and managing Linux (and other UNIX-like) workloads.
Generally speaking, multiplexing is the ability to carry multiple signals over a single connection (see this Wikipedia article for a more in-depth discussion). Similarly, SSH multiplexing is the ability to carry multiple SSH sessions over a single TCP connection. This Wikibook article goes into more detail on SSH multiplexing; in particular, I would call your attention to the table under the “Advantages of Multiplexing” section to better understand the idea of carrying multiple SSH sessions over a single TCP connection.
One of the primary advantages of using SSH multiplexing is that it speeds up certain operations that rely on or occur over SSH. For example, let’s say that you’re using SSH to regularly execute a command on a remote host. Without multiplexing, every time that command is executed your SSH client must establish a new TCP connection and a new SSH session with the remote host. With multiplexing, you can configure SSH to establish a single TCP connection that is kept alive for a specific period… Continue reading
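As a minimal ~/.ssh/config sketch (the host pattern and timeout here are illustrative):

Host *.example.com
    # Create a shared master connection automatically on first use
    ControlMaster auto
    # Where to place the control socket (this directory must already exist)
    ControlPath ~/.ssh/sockets/%r@%h:%p
    # Keep the master connection open for 10 minutes after the last session exits
    ControlPersist 10m

Subsequent SSH sessions to a matching host reuse the existing TCP connection, which noticeably reduces connection setup time.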
In this post, I’m going to explore one specific use case for using an SSH bastion host. I described this configuration and how to set it up in a previous post; in this post, though, I’d like to focus on one practical use case.
This use case is actually one I depicted graphically in my earlier post:
[Diagram: instances on a private network accessed via an SSH bastion host]
This diagram could represent a couple of different examples. For example, perhaps this is an AWS VPC. Security best practices suggest that you should limit access from the Internet to your instances as much as possible; unless an instance needs to accept traffic from the Internet, don’t assign a public IP address (or an Elastic IP address). However, without a publicly accessible IP address, how does one connect to and manage the instance? You can’t SSH to it without a publicly accessible IP address—unless you use an SSH bastion host.
Or perhaps this diagram represents an OpenStack private cloud, where users can deploy instances in a private tenant network. In order for those instances to be accessible externally (where “externally” means external to the OpenStack cloud), the tenant must assign each instance a floating IP address. Security may not be as much of a concern… Continue reading
In this post, I’m going to provide a quick introduction to Terraform, a tool that is used to provision and configure infrastructure. Terraform allows you to define infrastructure configurations and then have those configurations implemented/created by Terraform automatically. In this respect, you could compare Terraform to similar solutions like OpenStack Heat, AWS CloudFormation, and others.
Before I continue, though, allow me to first address this question: why Terraform?
This is a fair question, and one that you should be asking. After all, if Terraform is considered similar to OpenStack Heat or AWS CloudFormation, then why use Terraform instead of one of the comparable solutions? I believe there are a couple (related) reasons why you might consider Terraform over a similar solution:
Within a single Terraform definition, you can orchestrate across multiple cloud services. For example, you could create instances with a cloud provider (AWS, DigitalOcean, etc.), create DNS records with a DNS provider, and register key/value entries in Consul. Heat and CloudFormation are, quite naturally, designed to work almost exclusively with OpenStack and AWS, respectively. (Astute readers will know that Heat supports CloudFormation templates, but you get the idea.) Therefore, one reason to use Terraform… Continue reading
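To make the multi-provider point concrete, here is a minimal sketch (the AMI ID, region, and Consul endpoint are illustrative) that creates an AWS instance and publishes its address into Consul from a single configuration:

provider "aws" {
  region = "us-west-2"
}

provider "consul" {
  address = "consul.example.com:8500"
}

# Create a compute instance with one provider...
resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

# ...and record its address with another
resource "consul_keys" "web_address" {
  key {
    name  = "address"
    path  = "service/web/address"
    value = "${aws_instance.web.private_ip}"
  }
}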
A while ago, I wrote an article about bootstrapping servers into Ansible—in other words, how to prepare servers to be managed via Ansible. In order for a server to be managed via Ansible, you usually must first create a user account for Ansible, populate the appropriate SSH keys, and grant the new Ansible user sudo permissions. The process I described in my earlier blog post works great for manually-built servers (physical or virtual), but I recently needed to revisit this process for cloud instances. Was it possible to use the process I’d found to bootstrap cloud instances into Ansible?
Cloud instances are a slightly different beast than manually-built servers primarily because password authentication isn’t an option—generally speaking, you’re required to use SSH keys when working with cloud instances. Ansible is SSH-based, as you probably already know, so this shouldn’t be an issue, but it was still something I hadn’t tested or verified. After a bit of testing, I found the bootstrap process I described in my earlier post can be easily adapted for cloud instances.
For reference, here’s the command I use when bootstrapping manually-built servers into Ansible:
ansible-playbook bootstrap.yml -k -K \
  --extra-vars "hosts=newhost.domain.com user=admin"
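Adapting this for a cloud instance is mostly a matter of dropping the -k flag (no password authentication) and pointing Ansible at the instance’s injected SSH key and default user instead. A sketch, assuming an Ubuntu-based instance with illustrative hostnames and key paths:

ansible-playbook bootstrap.yml -K \
  --user ubuntu --private-key ~/.ssh/cloud_key.pem \
  --extra-vars "hosts=newinstance.domain.com user=admin"

If the default cloud user has passwordless sudo (as is typical), the -K flag can often be dropped as well.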
Secure Shell, or SSH, is something of a “Swiss Army knife” when it comes to administering and managing Linux (and other UNIX-like) workloads. In this post, I’m going to explore a very specific use of SSH: the SSH bastion host. In this sort of arrangement, SSH traffic to servers that are not directly accessible via SSH is instead directed through a bastion host, which proxies the connection between the SSH client and the remote servers.
At first, it may sound like the use of an SSH bastion host is a pretty specialized use case. In reality, though, I believe this is a design pattern that can actually be useful in a variety of situations. I plan to explore the use cases for an SSH bastion host in a future blog post.
This diagram illustrates the concept of using an SSH bastion host to provide access to Linux instances running inside some sort of cloud network (like an OpenStack Neutron tenant network or an AWS VPC):
[Diagram: an SSH bastion host proxying SSH traffic to instances inside a cloud network]
Let’s take a closer look at the nuts and bolts of actually setting up an SSH bastion host.
First, you’ll want to ensure you have public key authentication properly configured, both on the bastion host… Continue reading
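To give a sense of where this is headed, here is a minimal client-side ~/.ssh/config sketch (the hostnames and private subnet are illustrative; the -W flag requires OpenSSH 5.4 or later):

# The bastion host itself is directly reachable
Host bastion
    HostName bastion.example.com
    User admin

# Instances on the private network are reached via the bastion
Host 10.0.50.*
    ProxyCommand ssh -W %h:%p bastion

With this configuration, ssh 10.0.50.7 transparently opens a connection through the bastion host.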
Welcome to Technology Short Take #56! In this post, I’ve collected a few links on various data center technologies, news, events, and trends. I hope you find something useful here.
In this post I’m going to share with you an OS X graphical application I found that makes it easier to work with RESTful APIs. The topic of RESTful APIs has come up here before (see this post on using cURL to interact with RESTful APIs), and RESTful APIs have been a key part of a number of other posts (like my recent post on using jq to work with JSON). Unlike these previous posts—which were kind of geeky and focused on the command line—this time around I’m going to show you an application called Paw, which provides a graphical interface for working with APIs.
Before I start talking about Paw, allow me to first explain why I’m talking about working with APIs using this application. I firmly believe that the future of “infrastructure engineers”—that is, folks who today are focused on managing servers, hypervisors, VMs, storage, networks, and firewalls—lies in becoming the “full-stack engineer,” someone who has knowledge and skills across multiple areas, including automation/orchestration. In order to gain those skills in automation/orchestration, it’s pretty likely that you’re going to end up having to work with APIs. Hence, why I’m talking about this stuff, and why… Continue reading
I wanted to call out a couple of software packages whose vendors I’ve worked with recently and who, I felt, provided really good customer service. The software packages are BusyCal (from BusyCal, LLC) and Textual (from Codeux Software, LLC).
As many of you know, the Mac App Store (MAS) recently suffered an issue due to an expired security certificate, and this caused many MAS apps to have to be re-downloaded or, in limited cases, to stop working (I’m looking at you, Tweetbot 1.6.2). This incident just underscored an uncomfortable feeling I’ve had for a while about using MAS apps (for a variety of reasons that I won’t discuss here because that isn’t the focus of this post). I’d already started focusing on purchasing new software licenses outside of the MAS, but I still had (and have) a number of MAS apps on my Macs.
As a result of this security certificate snafu, I started looking for ways to migrate from MAS apps to non-MAS apps. BusyCal (an OS X Calendar replacement) and Textual (an IRC client) were two apps I really wanted to keep using, but both were MAS apps. The alternatives were dismal, at best.
While I was at Kubecon this past week, one of the presenters showed off a handy CLI tool for working with JSON. It’s called jq, and in this post I’m going to show you a few ways to use jq. For the source of JSON output, I’ll use the OpenStack APIs.
If you’re not familiar with JSON, I suggest having a look at this non-programmer’s introduction to JSON. Also, refer to this article on using cURL to interact with a RESTful API for a bit more background on some of the commands I’m going to use in this post.
Let’s start by getting an authorization token for OpenStack, using the following curl command:
curl -d '{"auth": {"passwordCredentials":
    {"username": "admin", "password": "secret"},
    "tenantName": "customer-A"}}' \
  -H "Content-Type: application/json" \
  http://192.168.100.100:5000/v2.0/tokens
This will return a pretty fair amount of JSON in the response, and it presents the first opportunity to use jq. Let’s say you only wanted the authorization token, and not all the other output. Simply add the following jq command to the end of your curl request:
curl -d '{"auth": {"passwordCredentials":
    {"username": "admin", "password": "secret"},
    "tenantName": "customer-A"}}' \
  -H "Content-Type: application/json" \
  http://192.168.100.100:5000/v2.0/tokens | …
Continue reading
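For reference, assuming the standard Keystone v2.0 response structure (where the token lives at .access.token.id), a jq filter along these lines pulls out just the token ID:

curl -d '<same auth JSON as above>' \
  -H "Content-Type: application/json" \
  http://192.168.100.100:5000/v2.0/tokens | jq -r '.access.token.id'

The -r flag tells jq to emit the raw string rather than a JSON-quoted value, which makes the token easier to reuse in a shell variable.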
This is a liveblog of the opening keynote at the inaugural Kubernetes conference, Kubecon, taking place this week at the Palace Hotel in San Francisco. Brendan Burns, Senior Staff Software Engineer at Google, is delivering the opening keynote. Burns is a co-founder of the Kubernetes project.
Burns starts out with a quick review of a bit of Kubernetes history, and reviews the broad diversity of submitters that are participating in the development of Kubernetes. He doesn’t spend much time there, though, and quickly transitions into a “where are we going?” discussion.
He says that Kubernetes wasn’t really about containers, or scheduling; it was really about making reliable, scalable, agile distributed systems a CS101 exercise. Kubernetes is really about making it easier to build distributed systems, to scale distributed systems, to update distributed systems, and to make distributed systems more reliable. Burns demonstrates how Kubernetes makes this easier by showing a recorded demo of scaling Nginx web servers up to handle 1 million requests per second, and then updating the Nginx application while still under load.
After the demo completes, Burns takes a few minutes to break down the architecture behind the demonstration. “Loadbots,” managed by a Kubernetes replication controller… Continue reading
Generally speaking, when launching instances in a cloud environment (such as AWS or an OpenStack-based cloud), the preferred/default way of accessing that instance is via SSH using an injected SSH key pair. There are times, though, when—for whatever reason—this approach won’t work. (I’ll describe one such situation below.) In such instances, it’s possible to configure cloud-init, the same tool used to inject SSH keys, to change passwords for user accounts. Here’s how.
Please note that this is a total hack. (Do NOT use this for any sort of production workload!) That being said, sometimes things like this are necessary to complete preliminary evaluations of a new technology, new product, or new architecture. In my case, I had a demo environment (using DevStack) that I needed to get up and running, and the instances would not have any external connectivity. This meant I was limited to console access only—hence, SSH keys are useless. The only means of access would be via password login through the console. So, I found this snippet of cloud-init code:
#cloud-config
chpasswd:
  list: |
    user1:password1
    user2:password2
    user3:password3
  expire: False
For this particular use case, I needed to change the default user on the Ubuntu… Continue reading
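Adapting the snippet for that purpose is straightforward; a minimal sketch, assuming the stock ubuntu account (the password shown is, of course, illustrative):

#cloud-config
chpasswd:
  list: |
    ubuntu:Sup3rS3cret!
  expire: False

Setting expire: False prevents cloud-init from forcing a password change at first login, which matters when console access is the only way in.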