Ahhh, a new year.
While 2015 was certainly a big year for us as we joined the Red Hat family, in many ways we’re still right at home with our roots deeply planted in the ways of open source. That means we’re listening (as we always do) to our customers and community members about what they see as their problems to solve and goals to achieve in the year ahead.
Here’s a bit of what we see:
DevOps! It’s everywhere! If ever there was a buzzword to officially deserve the “jumped the shark” label, this might just be it. General understanding of DevOps as a practice that can potentially accelerate IT project delivery has permeated most IT departments, from the smallest of businesses to the most daunting of large enterprises, sometimes from the grassroots level, and sometimes from the top down.
Thankfully, along with this recognition, people increasingly understand that DevOps isn’t simply about tools -- that building a healthy organizational culture is a significant part of the journey. Many organizations are beginning to realize that it’s not a light switch to flip, or a flat-out reorg. The idea that small wins can matter when bringing DevOps practices into your Continue reading
In this post I’d like to show you how to use a JSON-formatted data file to create and configure multi-machine Vagrant environments. This isn’t a new idea, and certainly not anything that I came up with or created. I’m simply presenting it here as an alternative option to the approach of using YAML with Vagrant for multi-machine environments (some people may prefer JSON over YAML).
If you’re unfamiliar with Vagrant, I’d start with my introduction to Vagrant. Then I’d recommend reviewing my original article on using YAML with Vagrant, followed by the updated/improved method that addresses a shortcoming with the original approach. These earlier posts will provide some basics that I’ll build on in this post.
To use a JSON-formatted data file as an external data source for Vagrant, the code in the Vagrantfile looks really similar to the code you’d use for YAML:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Specify minimum Vagrant version and Vagrant API version
Vagrant.require_version '>= 1.6.0'
VAGRANTFILE_API_VERSION = '2'
# Require JSON module
require 'json'
# Read JSON file with box details
servers = JSON.parse(File.read(File.join(File.dirname Continue reading
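For reference, here is what the full pattern might look like when assembled end to end. This is a minimal sketch rather than the exact code from the original post: the servers.json file name, the key names ("name", "box", "ram"), and the VirtualBox provider settings are all assumptions for illustration.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Specify minimum Vagrant version and Vagrant API version
Vagrant.require_version '>= 1.6.0'
VAGRANTFILE_API_VERSION = '2'

# Require JSON module
require 'json'

# Read JSON file with box details (the file name is assumed here; it should
# point at whatever data file you actually use). The file is expected to be
# an array of objects, e.g.:
#   [ { "name": "srv-1", "box": "ubuntu/trusty64", "ram": 512 } ]
servers = JSON.parse(File.read(File.join(File.dirname(__FILE__), 'servers.json')))

# Create and configure the VMs
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Iterate through entries parsed from the JSON file
  servers.each do |server|
    config.vm.define server['name'] do |srv|
      srv.vm.box = server['box']
      srv.vm.hostname = server['name']
      # Provider-specific settings (VirtualBox shown purely as an example)
      srv.vm.provider :virtualbox do |vb|
        vb.name = server['name']
        vb.memory = server['ram']
      end
    end
  end
end

Because the data file is parsed into plain Ruby hashes, the rest of the Vagrantfile is effectively identical whether the source is JSON or YAML; only the require statement and the parsing line change.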
Welcome to Technology Short Take #59, the first Technology Short Take of 2016. As we start a new year, here’s a collection of links and articles from around the web. Here’s hoping you find something useful to you!
In this post, I’d like to share with you an improved way to use YAML with Vagrant. I first discussed the use of YAML with Vagrant in a post on simplifying multi-machine Vagrant environments, where I simply factored out variable data into an external YAML file. The original approach I described had (at least) one significant drawback, though, which this new approach addresses.
(By the way, this “improved” way is probably just a matter of better coding. I’m not an expert with Ruby, so Ruby experts may look at this and find it to be quite obvious.)
Here’s the original snippet of a Vagrantfile that I shared in that first Vagrant/YAML post:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Specify minimum Vagrant version and Vagrant API version
Vagrant.require_version ">= 1.6.0"
VAGRANTFILE_API_VERSION = "2"
# Require YAML module
require 'yaml'
# Read YAML file with box details
servers = YAML.load_file('servers.yaml')
# Create boxes
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Iterate through entries in YAML file
  servers.each do |servers|
    config.vm.define servers["name"] do |srv|
Continue reading
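The excerpt above cuts off partway through the definition block. Purely for context, here is a hedged sketch of how a loop like this is commonly completed; the key names and the VirtualBox provider settings are assumptions for illustration, not necessarily what the original post used:

      srv.vm.box = servers["box"]
      srv.vm.hostname = servers["name"]
      # Provider-specific settings (VirtualBox shown purely as an example)
      srv.vm.provider :virtualbox do |vb|
        vb.name = servers["name"]
        vb.memory = servers["ram"]
      end
    end   # config.vm.define
  end     # servers.each
end       # Vagrant.configure

Note that inside the block, the parameter servers shadows the outer array, so it refers to a single entry from the YAML file rather than the full list; naming the block parameter something like server would make that distinction clearer.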
After a year of work, we are extremely proud to announce that Ansible 2.0 ("Over the Hills and Far Away") has been released and is now generally available. This is by far one of the most ambitious Ansible releases to date, and it reflects an enormous amount of work by the community, which continues to amaze me. Approximately 300 users have contributed code to what has been known as “v2” for some time, and 500 users have contributed code to modules since the last major Ansible release.
There are many pitfalls to refactoring software, so why did we decide to tackle such a major project? At the time we started the work on v2, Ansible was approximately three years old and had recently crossed the 1,000 contributor mark. This rapid growth also resulted in a degree of technical debt in the code, which was beginning to show as we continued to add features.
Ultimately, we decided it was worth it to take a step back and rework some aspects of the codebase which had been prone to having features bolted on without a clear-cut architectural vision. We also rewrote from scratch much of the code which was responsible Continue reading
I’m extremely honored to have the opportunity to help support VMware User Group (VMUG) meetings all over the world. I will be speaking at a few upcoming events; if you’re going to be at one of these events, I’d love to meet you, say hi, and chat for a bit. Here are the details.
Tuesday, February 23, 2016 I’m really excited to be back in Sydney again for an opportunity to speak at the Sydney VMUG UserCon (see the event page for full details).
Thursday, February 25, 2016 Two days after the Sydney event I’ll be in Melbourne to help support the Melbourne VMUG UserCon. (More details here.)
First week in March 2016 The dates for these events are still being finalized (I’ll update this post when I have more details), but I’ll be in South Africa for a series of VMUG events there as well (Johannesburg, Durban, and Cape Town). This will be my first time in South Africa, and I’m really looking forward to meeting and talking with customers there.
Aside from these VMUG events, if you’re in one of these regions, are a current (or potential) customer of VMware, and you’d like to meet to talk Continue reading
In this first-ever episode of the Full Stack Journey podcast, I talk with Bart Smith (old GitHub account migrating to new GitHub account, YouTube channel). Bart shares some details about his journey from being a Microsoft-centric infrastructure engineer to what he calls a cloud-native full-stack engineer. Here are some notes from our conversation, along with some additional resources Bart wanted to share with readers/listeners. Enjoy!
The podcast audio recording is available on Soundcloud.
After quite a bit of deliberation and planning, I’m excited (and nervous!) to announce the launch of a new podcast, titled “The Full Stack Journey Podcast”. Here are the details, structured in a Q&A format.
What topics does this new podcast cover?
The focus of The Full Stack Journey Podcast is to talk about the journey to becoming a full-stack engineer. That term is a bit loaded: some people like it, some people don’t, and there’s some disagreement over exactly what it means. I use the term to describe someone who can work at multiple layers of the modern data center stack, crossing between different silos. This isn’t to say that a full-stack engineer is an expert in all these areas, but it does mean that a full-stack engineer has at least some knowledge and experience in all these areas, with expertise and experience in at least one of them. The podcast aims to provide real, relevant, practical information to help people on their “full-stack journey.”
Why is the idea of becoming a full-stack engineer important enough to warrant launching a podcast?
I strongly believe the future of the infrastructure engineer does not lie in being Continue reading
Almost six years ago I shared my (then) current Getting Things Done (GTD) setup, in which I described how I used various tools, techniques, and applications to try to maximize my productivity. I’d been toying with updating that post, but I wasn’t sure that anyone would find it useful. However, a recent e-mail from a reader indicated that there probably is some interest; with that in mind, then, here’s an update on my GTD-like setup, circa early 2016.
Before I dive into the details, a couple quick notes:
If you read the 2010 post, you may recall that I think of my workflow as having three “layers” of applications:
In early 2015, I posted a look ahead at my planned 2015 projects, where I took a quick look at some of the self-development projects I set out for myself over the course of 2015. In this post, I’m going to review my progress on those 2015 projects.
The 2015 projects were as follows:
So, how well did I do? Let’s take a look.
Complete a new book: Technically, I haven’t (fully) completed a new book, but given that my new book project with Jason Edelman and Matt Oswalt on network automation is available now as an Early Access edition, I suppose this should count for something. Strangely enough, this wasn’t the book project I had in mind at the start of 2015, but sometimes things like this take unexpected turns. Grade: C
Make more open source contributions: I expected this one to be easy, but it turns out that this is the area where my performance is the worst. I submitted a pull request to Terraform (for a docs update), but I did not make the contributions Continue reading