I recently needed to find a simple way of switching between Kubernetes contexts. I already use powerline-go (here’s the GitHub repo), which allows me to display the Kubernetes context in the prompt so I always know which context is the active (current) context. However, switching between contexts using kubectl config use-context <name> isn’t the easiest approach; not to mention it requires merging multiple config files into a single file (which is itself a bit of a task). So, I set out to create a simple Kubernetes context switcher; here are the initial results of my efforts.
Before I go any further, I’d like to stress 2 important points. First, I’m not a programmer, so keep that in mind. Second, this is a simple Kubernetes context switcher—it’s not meant to address any and every possible use case out there, nor do I claim any sort of sophistication in the code.
With those disclaimers out of the way, allow me to introduce kcs: the simple Kubernetes context switcher. kcs is built on the idea that it’s easiest to manage Kubernetes contexts in their own files, rather than trying to merge config files. So, it makes the assumption that you’ll store your Continue reading
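To illustrate the idea of keeping each context in its own file (a minimal sketch of the approach, not kcs syntax; the file layout here is hypothetical), you can point kubectl at a specific kubeconfig file and then confirm which context is active:

# Hypothetical layout: one kubeconfig file per cluster/context
export KUBECONFIG=$HOME/.kube/configs/cluster-a.yaml
# Verify which context kubectl will now use
kubectl config current-context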
The etcd distributed key-value store is an integral part of Kubernetes. I first wrote about etcd back in 2014 in this post, but haven’t really discussed it in any great detail since then. However, as part of my recent efforts to dive much deeper into Kubernetes, I needed to revisit etcd. In this post, I wanted to share how to bootstrap a new etcd cluster with TLS certificates using kubeadm.
Before I go on, I feel compelled to state that this is certainly not the only way to bootstrap an etcd cluster with TLS certificates. I feel I must also state that nothing in what I’m about to share is new, novel, revolutionary, or unusual. In fact, a fair amount of it is based on these instructions, although this post will focus on using systemd unit files instead of static pods under Kubernetes. I’m simply documenting it here in the hopes of getting the information more broadly disseminated, and to help document my own journey of learning.
Before you bootstrap the etcd cluster, you’ll first need to prepare the nodes for the process. Although I’ll list the steps manually below, in practice you’ll want to Continue reading
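As a rough sketch of the certificate-related portion of that preparation (this assumes a recent kubeadm, which exposes these steps under kubeadm init phase certs; older releases used kubeadm alpha phase certs instead, and the config file path below is only a placeholder):

# Generate the etcd CA, then this node's serving, peer, and
# health check client certificates (config path is a placeholder)
sudo kubeadm init phase certs etcd-ca
sudo kubeadm init phase certs etcd-server --config=/tmp/kubeadmcfg.yaml
sudo kubeadm init phase certs etcd-peer --config=/tmp/kubeadmcfg.yaml
sudo kubeadm init phase certs etcd-healthcheck-client --config=/tmp/kubeadmcfg.yaml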
I was recently working on a blog post involving the use of TLS certificates for encryption and authentication, and was running into errors. I’d checked all the “usual suspects”: AWS security groups, host-level firewall rules (via iptables), and the application configuration itself. Still, I couldn’t get it to work. When I did finally find the error, I figured it was probably worth sharing the commands I used in the event others might find it helpful.
The error manifested itself in that I was able to successfully connect to the application (with TLS) on the loopback address, but not on the IP address assigned to the network adapter. Using ss -lnt, I verified that the application was listening on all IP addresses (not just loopback), and as I mentioned earlier I had also verified that AWS security groups and the host-level firewall weren’t in play. This led me to believe that there was something wrong with my TLS configuration.
Since the application’s error message was extremely vague (and not even remotely TLS-related), I decided to try using curl to verify that TLS was working correctly. First I ran this command:
curl --cacert /path/to/CA/certificate https://127.0.0.1 -v
After some output, curl Continue reading
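A natural follow-up check (hypothetical on my part; the IP address below is only a placeholder) is to repeat the same request against the address assigned to the network adapter and compare the verbose TLS handshake output with what you saw on loopback:

# Same request, but against the NIC's address (placeholder value)
curl --cacert /path/to/CA/certificate https://10.11.12.13 -v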
Welcome to Technology Short Take 103, where I’m back yet again with a collection of links and articles from around the World Wide Web (Ha! Bet you haven’t seen that term used in a while!) on various technology areas. Here’s hoping I’ve managed to include something useful to you!
Nothing this time around, sorry!
For the last several years, I’ve organized a brief morning prayer time at VMworld. This year, I won’t be at the conference, but I’d like to help coordinate a time for believers to meet nevertheless. So, if you’re a Christian interested in gathering together with other Christians for a brief time of prayer, here are the details.
What: A brief time of prayer
Where: Mandalay Bay Convention Center, level 1 (same level as the food court), at the bottom of the escalators heading upstairs (over near the business center)
When: Monday 8/27 through Thursday 8/30 at 7:45am (this should give everyone enough time to grab breakfast before the keynotes start at 9am)
Who: All courteous attendees are welcome, but please note this will be a distinctly Christian-focused and Christ-centric activity (I encourage believers of other faiths/religions to organize equivalent activities)
Why: To spend a few minutes in prayer over the day, the conference, the attendees, and each other
You don’t need to RSVP or anything like that, although you’re welcome to if you’d like (just hit me up on Twitter). As I mentioned, I won’t be at the conference, so I’ll ask folks who have attended prayer time in Continue reading
I recently tweeted that I was about to undertake a new pet project where I was, in my words, “probably going to fall flat on my face”. Later, I asked on Twitter if I should share some of the learning that will occur (is occurring) as a result of this new project, and a number of folks indicated that I should. So, with that in mind, I’m announcing that this project I’ve undertaken is a software development project aimed at helping me bolster my software development skills, and that I’ll be blogging about it along the way so that others can benefit from my mistakes…er, learning.
Readers may recall that my 2018 project list included a project to learn to write code in Golang. At the time, I indicated I’d use Kubernetes and related projects, along with my goal of making more open source contributions, as a vehicle for helping to accomplish that goal. In retrospect, that was quite ambitious, and I’ve since come to the realization that there are a number of “baby steps” that I need to take before I am ready to use a large software project like Kubernetes as a means to help improve my coding skills. Continue reading
I’ve recently started playing around with Ballerina, and upon the suggestion of some folks on Twitter I wanted to clone down some of the “official” Ballerina GitHub repositories, which provide code examples and guides that would assist in my learning. Upon attempting to do so, however, I found myself needing to clone down 39 different repositories (all under a single organization), and so I asked on Twitter if there was an easy way to do this. Here’s what I found.
Fairly quickly after I posted my tweet asking about a solution, a follower responded indicating that I should be able to get the list of repositories via the GitHub API. He was, of course, correct:
curl -s https://api.github.com/orgs/ballerina-guides/repos
This returns a list of the repositories in JSON format. Now, if you’ve been paying attention to my site, you know there’s a really handy way of parsing JSON data at the CLI (namely, the jq utility). However, to use jq, you need to know the overall structure of the data. What if you don’t know the structure? No worries, this post outlines another tool, jid, that allows us to interactively explore the data. So, I ran:
curl Continue reading
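For reference, one way to turn that repository list into actual clones (a sketch, not necessarily the exact command from the post; per_page=100 keeps all 39 repositories on a single page of results) is to extract each clone_url with jq and hand the list to git:

# List every repository's clone URL, then clone each one
curl -s 'https://api.github.com/orgs/ballerina-guides/repos?per_page=100' | \
jq -r '.[].clone_url' | \
xargs -n 1 git clone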
In case there was any question whether Spousetivities would be present at VMworld 2018, let this settle it for you: Spousetivities will be there! In fact, registration for Spousetivities at VMworld 2018 is already open. If previous years are any indication, there’s a really good possibility these activities will sell out. Better get your tickets sooner rather than later!
This year’s activities are funded in part by the generous and community-minded support of Veeam, ActualTech Media, Datrium, and VMUG.
Here’s a brief peek at what’s planned for VMworld in Las Vegas this August:
Monday, August 27
Tuesday, August 28
I don’t know if “additive” is the right word, but it was the best word I could come up with to describe the sort of configuration I recently needed to address in Ansible. In retrospect, the solution seems pretty straightforward, but I’ll include it here just in case it proves useful to someone else. If nothing else, it will at least show some interesting things that can be done with Ansible and Jinja2 templates.
First, allow me to explain the problem I was trying to solve. As you may know, Kubernetes 1.11 was recently released, and along with it a new version of kubeadm, the tool for bootstrapping Kubernetes clusters. As part of the new release, the Kubernetes community released a new setup guide for using kubeadm to create a highly available cluster. This setup guide uses new functionality in kubeadm to allow you to create “stacked masters” (control plane nodes running both the Kubernetes components as well as the etcd key-value store). Because of the way etcd clusters work, and because of the way you create HA control plane members, the process requires that you start with a single etcd node, then add the second node, and Continue reading
Welcome to Technology Short Take 102! I normally try to get these things published biweekly (every other Friday), but this one has taken quite a bit longer to get published. It’s no one’s fault but my own! In any event, I hope that you’re able to find something useful among the links below.
network-engine command parser to parse the output of commands on network devices. It looks like there will be a follow-up to this article as well, so you may want to check back on Ajay’s site.

In late 2015 I wrote a post about a command-line tool named jq, which is used for parsing JSON data. Since that time I’ve referenced jq in a number of different blog posts (like this one). However, jq is not the only game in town for parsing JSON data at the command line. In this post, I’ll share a couple more handy CLI tools for working with JSON data.
(By the way, if you’re new to JSON, check out this post for a gentle introduction.)
jp
JMESPath is used by both Amazon Web Services (AWS) in their AWS CLI as well as by Microsoft in the Azure CLI. For examples of JMESPath in action, see the AWS CLI documentation on the --query functionality, which makes use of server-side JMESPath queries to reduce the amount of data returned by an AWS CLI command (as opposed to filtering on the client side).

However, you can also use JMESPath on the client side through the jp command-line utility. As a client-side parsing tool, jp is similar in behavior to jq, but I find the JMESPath query language to be a bit easier to use than jq in Continue reading
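As a quick illustration of how jp behaves (the sample data here is made up), it reads JSON on standard input and evaluates a JMESPath expression against it; the [].name expression below projects just the name field from each element of the array:

# Returns the two name values from the sample array
echo '[{"name":"web","port":80},{"name":"db","port":5432}]' | jp '[].name'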
This post provides a (very) basic introduction to the AWS CLI (command-line interface) tool. It’s not intended to be a deep dive, nor is it intended to serve as a comprehensive reference guide (the AWS CLI docs nicely fill that need). I also assume that you already have a basic understanding of the key AWS concepts and terminology, so I won’t bore you with defining an instance, VPC, subnet, or security group.
For the purposes of this introduction, I’ll structure it around launching an EC2 instance. As it turns out, there’s a fair amount of information you need before you can launch an AWS instance using the AWS CLI. So, let’s look at how you would use the AWS CLI to help get the information you need in order to launch an instance using the AWS CLI. (Tool inception!)
To launch an instance, you need five pieces of information:
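As a hedged sketch of where this is headed (every ID below is a placeholder, and the five inputs shown reflect my assumption of the usual ones: an AMI ID, an instance type, a key pair, a security group, and a subnet), the eventual run-instances call ties those pieces together:

# All IDs are placeholders; gather real values with the describe-* commands
aws ec2 run-instances --image-id ami-0abcd1234 \
--instance-type t2.micro --key-name my_keypair \
--security-group-ids sg-0abcd1234 --subnet-id subnet-0abcd1234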
While exploring some of the intricacies around the use of X.509v3 certificates in Kubernetes, I found myself wanting to be able to view the details of a certificate embedded in a kubeconfig file. (See this page if you’re unfamiliar with what a kubeconfig file is.) In this post, I’ll share with you the commands I used to accomplish this task.
First, you’ll want to extract the certificate data from the kubeconfig file. For the purposes of this post, I’ll use a kubeconfig file named config and found in the .kube subdirectory of your home directory. Assuming there’s only a single certificate embedded in the file, you can use a simple grep statement to isolate this information:
grep 'client-certificate-data' $HOME/.kube/config
Combine that with awk to isolate only the certificate data:
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}'
This data is Base64-encoded, so we decode it (I’ll wrap the command using backslashes for readability now that it has grown a bit longer):
grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d
You could, at this stage, redirect the output into a file (like certificate.crt) if so desired; the data you have is Continue reading
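From there, one common way to inspect the decoded certificate (I’m assuming openssl here, though its x509 subcommand is the standard tool for the job) is to pipe the output straight into it; the -text -noout flags print the subject, issuer, validity dates, and extensions without writing anything to disk:

# Decode the embedded certificate and display its details
grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | \
openssl x509 -text -noout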
I’ve been working to deepen my Terraform skills recently, and one avenue I’ve been using to help in this area is expanding my use of Terraform modules. If you’re unfamiliar with the idea of Terraform modules, you can liken them to Ansible roles: a re-usable abstraction/function that is heavily parameterized and can be called/invoked as needed. Recently I wanted to add support for tagging AWS instances in a module I was building, and I found out that you can’t use variable interpolation in the normal way for AWS tags. Here’s a workaround I found in my research and testing.
Normally, variable interpolation in Terraform would allow one to do something like this (this is taken from the aws_instance resource):
tags {
  Name = "${var.name}-${count.index}"
  role = "${var.role}"
}
This approach works, creating tags whose keys are “Name” and “role” and whose values are the interpolated variables. (I am, in fact, using this exact snippet of code in some of my Terraform modules.) Given that this works, I decided to extend it in a way that would allow the code calling the module to supply both the key as well as the value, thus providing more flexibility Continue reading
In October 2016 I wrote about a triple-provider Vagrant environment I’d created that worked with VirtualBox, AWS, and the VMware provider (tested with VMware Fusion). Since that time, I’ve incorporated Linux (Fedora, specifically) into my computing landscape, and I started using the Libvirt provider for Vagrant (see my write-up here). With that in mind, I updated the triple-provider environment to add support for Libvirt and make it a quadruple-provider environment.
To set expectations, I’ll start out by saying there isn’t a whole lot here that is dramatically different than the triple-provider setup that I shared back in October 2016. Obviously, it supports more providers, and I’ve improved the setup so that no changes to the Vagrantfile are needed (everything is parameterized).
With that in mind, let’s take a closer look. First, let’s look at the Vagrantfile itself:
# Specify minimum Vagrant version and Vagrant API version
Vagrant.require_version '>= 1.6.0'
VAGRANTFILE_API_VERSION = '2'
# Require 'yaml' module
require 'yaml'
# Read YAML file with VM details (box, CPU, and RAM)
machines = YAML.load_file(File.join(File.dirname(__FILE__), 'machines.yml'))
# Create and configure the VMs
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# Always use Vagrant's Continue reading
Welcome to Technology Short Take #101! I have (hopefully) crafted an interesting and varied collection of links for you today, spanning all the major areas of modern data center technology. Now you have some reading material for this weekend!
command modules for network devices.

I recently started using kubeadm more extensively than I had in the past to serve as the primary tool by which I stand up Kubernetes clusters. As part of this process, I also discovered the kubeadm alpha phase subcommand, which exposes different sections (phases) of the process that kubeadm init follows when bootstrapping a cluster. In this blog post, I’d like to kick off a series of posts that explore how one could use the kubeadm alpha phase command to better understand the different components within Kubernetes, the relationships between components, and some of the configuration items involved.
Before I go any further, I’d like to point readers to this URL that provides an overview of kubeadm and using it to bootstrap a cluster. If you’re new to kubeadm, go read that before continuing on here.
Quick side note: it’s my understanding that at some point the intent is to move kubeadm alpha phase out of alpha, at which point the command might look more like kubeadm phase or similar (that hasn’t been fully determined yet as far as I know). If you’re reading this at some point in the future, just make note that this was written back Continue reading
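As a small taste of what that exploration looks like (the exact subcommand layout varies by kubeadm release, so treat this as a sketch; on newer versions these phases live under kubeadm init phase, and the config path below is a placeholder):

# List the phases that kubeadm exposes
kubeadm alpha phase --help
# Run a single phase in isolation, e.g. generating just the kubeconfig files
kubeadm alpha phase kubeconfig all --config=/tmp/kubeadm.yaml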
As part of my 2018 projects, I committed to reading and reviewing more technical books this year. As part of that effort, I recently finished reading Infrastructure as Code, authored by Kief Morris and published in September 2015 by O’Reilly (more details here). Infrastructure as code is very relevant to my current job function and is an area of great personal interest, and I’d been half-heartedly working my way through the book for some time. Now that I’ve completed it, here are my thoughts.
Overall, Morris does a great job of crisply defining infrastructure as code (a somewhat vague and amorphous term at times) and outlining the key principles that are involved. Morris also does a really good job of staying high-level as he works through the various aspects of infrastructure as code and discusses some of the considerations, patterns (and anti-patterns), and recommended practices in each aspect.
The book’s high-level focus is, however, both its greatest strength as well as its greatest weakness. Because infrastructure as code can be implemented in a variety of ways with a variety of tools, the book must necessarily be high-level and somewhat abstract. As I mentioned, Morris does a really Continue reading
Wow! This marks 100 posts in the Technology Short Take series! For almost eight years (Technology Short Take #1 was published in August 2010), I’ve been collecting and sharing links and articles from around the web related to major data center technologies. Time really flies when you’re having fun! Anyway, here is Technology Short Take 100…I hope you enjoy!
Also, a quick note that I removed the “Servers/Hardware” and “Storage” sections this time around, as I didn’t have any useful content to share. I’ll continue to evaluate whether I will/should include those sections moving forward (your feedback is welcome; hit me up on Twitter).