While hanging out in the Kubernetes Slack community, one question I’ve seen asked multiple times involves switching a Kubernetes cluster from a non-HA control plane (single control plane node) to an HA control plane (multiple control plane nodes). As far as I am aware, this isn’t documented upstream, so I thought I’d walk readers through what this process looks like.
I’m making the following assumptions:
The existing cluster was bootstrapped using kubeadm. (This means we’ll use kubeadm to add the additional control plane nodes.)
I’d also like to point out that there are a lot of different configurations and variables that come into play with a process like this. It’s (nearly) impossible to cover them all in a single blog post, so this post attempts to address what I believe to be the most common situations.
With those assumptions and that caveat in mind, the high-level overview of the process looks like this:
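A central piece of that process, shown here only as a minimal sketch (all values are placeholders, and the exact flags vary by kubeadm version), is joining each new node with kubeadm join and the --control-plane flag:

# On an existing control plane node: upload the control plane certificates and print a certificate key
kubeadm init phase upload-certs --upload-certs

# On each new node: join as an additional control plane member
kubeadm join <api-endpoint>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>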
Welcome to Technology Short Take #117! Here’s my latest gathering of links and articles from around the World Wide Web (an “old school” reference for you right there). I’ve got a little bit of something for most everyone, except for the storage nerds (I’m leaving that to my friend J Metz this time around). Here’s hoping you find something useful!
Today I came across this article, which informed me that (as of the 18.09 release) you can use SSH to connect to a Docker daemon remotely. That’s handy! The article uses docker-machine (a useful but underrated tool, I think) to demonstrate, but the first question in my mind was this: can I do this through an SSH bastion host? Read on for the answer.
If you’re not familiar with the concept of an SSH bastion host, it is a (typically hardened) host through which you, as a user, would proxy your SSH connections to other hosts. For example, you may have a bunch of EC2 instances in an AWS VPC that do not have public IP addresses. (That’s reasonable.) You could use an SSH bastion host—which would require a public IP address—to enable SSH access to otherwise inaccessible hosts. I wrote a post about using SSH bastion hosts back in 2015; give that post a read for more details.
The syntax for connecting to a Docker daemon via SSH looks something like this:
docker -H ssh://user@host <command>
So, if you wanted to run docker container ls to list the containers running on a remote system, you’d Continue reading
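For the curious, nothing Docker-specific is needed for the bastion case: docker -H ssh:// shells out to your local SSH client, so a standard ProxyJump entry in ~/.ssh/config is enough. A minimal sketch (host names, user, and IP are placeholders):

# ~/.ssh/config
Host docker-host
    HostName 10.0.1.10
    User ubuntu
    ProxyJump bastion.example.com

# With that entry in place, the Docker CLI connects through the bastion transparently:
docker -H ssh://docker-host container ls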
In part 1 of this series, we looked at operators overall, and what they do in OpenShift/Kubernetes. We peeked at the Operator SDK, and why you'd want to use an Ansible Operator rather than other kinds of operators provided by the SDK. We also explored how Ansible Operators are structured and the relevant files created by the Operator SDK when building Kubernetes Operators with Ansible.
In this second part of the deep dive series, we'll:
We start by creating a new project in OpenShift, which we'll simply call test:
$ oc new-project test --display-name="Testing Ansible Operator"
Now using project "test" on server "https://ec2-xx-yy-zz-1.us-east-2.compute.amazonaws.com:8443".
We won't delve too much into this role; however, the basic operation is:
Recently, while troubleshooting a separate issue, I had a need to get more information about the token used by Kubernetes Service Accounts. In this post, I’ll share a quick command-line that can fully decode a Service Account token.
Service Account tokens are stored as Secrets in the “kube-system” namespace of a Kubernetes cluster. To retrieve just the token portion of the Secret, use -o jsonpath like this (replace “sa-token” with the appropriate name for your environment):
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}'
The output is Base64-encoded, so just pipe it into base64:
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode
The result you’re seeing is a JSON Web Token (JWT). You could use the JWT web site to decode the token, but given that I’m a fan of the CLI I decided to use this JWT CLI utility instead:
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode | \
jwt decode -
The final -, for those who may not be familiar, is the syntax to tell the jwt utility to look at STDIN for the JWT it needs to Continue reading
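For a legacy Service Account token, the decoded payload looks roughly like this (identifying values elided):

{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "kube-system",
  "kubernetes.io/serviceaccount/secret.name": "sa-token",
  "kubernetes.io/serviceaccount/service-account.name": "...",
  "kubernetes.io/serviceaccount/service-account.uid": "...",
  "sub": "system:serviceaccount:kube-system:..."
}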
We all know what OpenWrt is: the amazing Linux distro built specifically for embedded devices.
What you can achieve with a rather cheap router running OpenWrt is mind-boggling.
OpenWrt also gives you great control over its build system. For normal cases, you probably don’t need to build OpenWrt from source yourself. That has already been done for you; all you need to do is download the appropriate compiled firmware image and upload it to your router.
But for more advanced use cases, you may find yourself needing to build OpenWrt images yourself. This could be due to wanting to make some changes to the code, add some device-specific options, etc.
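If you do need to build your own image, the documented flow is short; here is a rough sketch, assuming the usual build prerequisites are already installed:

git clone https://git.openwrt.org/openwrt/openwrt.git
cd openwrt
./scripts/feeds update -a      # fetch the package feeds
./scripts/feeds install -a     # make feed packages selectable in menuconfig
make menuconfig                # choose the target, device profile, and packages
make -j$(nproc)                # build the image (the toolchain builds on the first run)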
Building OpenWrt from source is easy, well-documented, and works great. That is, until you start using opkg to install some new packages.
By default, opkg will fetch new packages from the official repository (as one might expect), but depending on the package, the installation may fail.
In this post, I’m going to walk you through how to add a name (specifically, a Subject Alternative Name) to the TLS certificate used by the Kubernetes API server. Updating the certificate to include a name that wasn’t part of the original could be useful in a few different scenarios. A couple of situations come to mind, such as adding a load balancer in front of the control plane, or using a new or different URL/hostname to access the API server (both situations taking place after the cluster was bootstrapped).
This process does assume that the cluster was bootstrapped using kubeadm. This could’ve been a simple kubeadm init with no customization, or it could’ve been using a configuration file to modify the behavior of kubeadm when bootstrapping the cluster. This process also assumes your Kubernetes cluster is using the default certificate authority (CA) created by kubeadm when bootstrapping a cluster. Finally, this process assumes you are using a non-HA (single control plane node) configuration.
Before getting into the details of how to update the certificate, I’d like to first provide a bit of background on why this is important.
The Kubernetes API server uses digital certificates to both Continue reading
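To give a sense of where this is headed, here is a hedged sketch of the core steps: describe the extra name in a kubeadm configuration file, move the existing API server certificate out of the way, and regenerate it. The hostname, file names, and API version below are placeholder assumptions that depend on your environment and kubeadm release.

# kubeadm.yaml (sketch; only the relevant fields shown)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs:
  - "lb.example.com"

# On the control plane node: move the old certificate and key aside, then regenerate with the new SAN
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /tmp/
kubeadm init phase certs apiserver --config kubeadm.yaml
# (Restart the kube-apiserver static pod afterward so it picks up the new certificate.)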
This deep dive series assumes the reader has access to a Kubernetes test environment. A tool like minikube is an acceptable platform for the purposes of this article. If you are an existing Red Hat customer, another option is spinning up an OpenShift cluster through cloud.redhat.com. This SaaS portal makes trying OpenShift a turnkey operation.
In this part of the deep dive series, we'll:
For those who may not be very familiar with Kubernetes, it is, in its most simplistic description, a resource manager. Users specify how much of a given resource they want, and Kubernetes manages those resources to achieve the state the user specified. These resources can be pods (which contain one or more containers), persistent volumes, or even custom resources defined by users.
This makes Kubernetes useful for managing resources that don't contain any state (like Continue reading
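As a tiny illustration of that model (the deployment name and image here are arbitrary placeholders), you declare a desired state and let Kubernetes converge on it:

kubectl create deployment web --image=nginx   # declare the desired workload
kubectl scale deployment web --replicas=3     # declare the desired replica count
kubectl delete pod -l app=web                 # delete the pods; the Deployment controller recreates them to restore the declared state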
Ansible became popular largely because we adopted some key principles early, and stuck to them.
The first key principle was simplicity: simple to install, simple to use, simple to find documentation and examples, simple to write playbooks, and simple to make contributions.
The second key principle was modularity: Ansible functionality could be easily extended by writing modules, and anyone could write a module and contribute it back to Ansible.
The third key principle was “batteries included”: all of the modules for Ansible would be built-in, so you wouldn’t have to figure out where to get them. They’d just be there.
We’ve come a long way by following these principles, and we intend to stick to them.
Recently though, we’ve been reevaluating how we might better structure Ansible to support these principles. We now find ourselves dealing with problems of scale that are becoming more challenging to solve. Jan-Piet Mens, who has continued to be a close friend to Ansible since our very earliest days, recently described those problems quite succinctly from his perspective as a long-time contributor -- and I think his analysis of the problems we face is quite accurate. Simply, we’ve become victims of our own success.
Success Continue reading
Every day, I’m in awe of what Ansible has grown to be. The incredible growth of the community and the viral adoption of the technology have resulted in a content management challenge for the project.
I don’t want to echo a lot of what’s been said by our dear friend Jan-Piet Mens or our incredible Community team, but give me a moment to take a shot at it.
Our main challenge is rooted in the ability to scale. The volume of pull requests and issues we see day to day severely outweighs the ability of the Ansible community to keep up with that rate of change.
As a result, we are embarking on a journey. This journey is one that we know the community, both our content creators and content consumers, will be interested in hearing about.
This New World Order (tongue in cheek), as we’ve been calling it, is a model that will allow us to empower the community of contributors of Ansible content (read: modules, plugins, and roles) to provide their content at their own pace.
To do this, we have made some changes to how Ansible leverages content that is not “shipped” with it. In short, Continue reading
For the last several years, I’ve organized a brief morning prayer time at VMworld. I didn’t attend the conference last year, but organized a prayer time nevertheless (and was able to join one morning for prayer). This year, now that I’m back at VMware (via the Heptio acquisition) and speaking at the conference, I’d once again like to coordinate a time for believers to meet. So, if you’re a Christian interested in gathering together with other Christians for a brief time of prayer, here are the details.
What: A brief time of prayer
Where: Yerba Buena Gardens behind Moscone North (near the waterfall)
When: Monday 8/26 through Thursday 8/29 at 7:45am (this should give everyone enough time to grab breakfast before keynotes/sessions start at 9am)
Who: All courteous attendees are welcome, but please note this will be a distinctly Christian-focused and Christ-centric activity (note that I encourage believers of other faiths/religions to organize equivalent activities)
Why: To spend a few minutes in prayer over the day, the conference, the attendees, and each other
As in previous years, you don’t need to RSVP or anything like that, although you’re welcome to if you’d like (just hit me up on Twitter).
This blog is part two in a series covering how Red Hat Ansible Automation can integrate with ticket automation. This time we’ll cover dynamically adding a set of network facts from your switches and routers and into your ServiceNow tickets. If you missed Part 1 of this blog series, you can refer to it via the following link: Ansible + ServiceNow Part 1: Opening and Closing Tickets.
Suppose there was a certain network operating system software version that contained an issue you knew was always causing problems and making your uptime SLA suffer. How could you convince your management to finance an upgrade project? How could you justify to them that the fix would be well worth the cost? Better yet, how would you even know?
A great start would be having metrics that you could track. The ability to data mine your tickets would show just how many of them involved hardware running that buggy software version. In this blog, I’ll show you how to automate adding a set of facts to all of your tickets going forward. Indisputable facts can then be pulled directly from the device with no chance of mistakes or accidentally being overlooked Continue reading
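As a rough sketch of what that can look like (the modules, fields, and variable names here are illustrative assumptions, not the exact playbook from the post), a play might gather facts from a device and push one of them into an incident record:

- name: Add device facts to a ServiceNow incident (sketch)
  hosts: routers
  connection: network_cli
  gather_facts: no
  tasks:
    - name: Gather facts from the device
      ios_facts:

    - name: Record the OS version on the incident
      snow_record:
        state: present
        table: incident
        number: "{{ incident_number }}"
        data:
          description: "Device {{ inventory_hostname }} runs {{ ansible_net_version }}"
      delegate_to: localhost
      # (ServiceNow instance/username/password arguments omitted for brevity)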