As I mentioned back in May in this post on creating a sandbox for learning Pulumi, I’ve been using Pulumi for more and more of my infrastructure-as-code needs. I did switch from JavaScript to TypeScript (which I know compiles down to JavaScript on the back-end, but the strong typing helps a new programmer like me). Recently I had a need to create some resources in AWS using Pulumi, and—for reasons I’ll explain shortly—many of the “canned” Pulumi examples didn’t cut it for my use case. In this post, I’ll share how I created tagged subnets across AWS availability zones (AZs) using Pulumi.
In this particular case, I was using Pulumi to create all the infrastructure necessary to spin up an AWS-integrated Kubernetes cluster. That included a new VPC, subnets in the different AZs for that region, an Internet gateway, route tables and route table associations, security groups, an ELB for the control plane, and EC2 instances. As I’ve outlined in my latest post on setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm, these resources on AWS require specific AWS tags to be assigned in order for the AWS cloud provider to work.
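To give a concrete sense of what that tagging looks like, here is the sort of tag the AWS cloud provider expects on a subnet, shown with the AWS CLI purely for illustration (the subnet ID and cluster name below are hypothetical):

# tag a subnet so the AWS cloud provider associates it with a specific cluster
# (subnet ID and cluster name are placeholders)
aws ec2 create-tags --resources subnet-0abc123def4567890 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared

In the Pulumi code, the same key/value pairs simply get passed in via each resource’s tags input.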
As I started working on this, Continue reading
LuCi is a very popular OpenWrt web interface. For an average user, LuCi is probably one of the main deciding factors between giving OpenWrt a try in the first place and moving on to another, more user-friendly firmware like DD-WRT.
If you’re an advanced user, however, most of the time you’ll find yourself adjusting settings either through UCI or by editing the config files manually. In fact, at some point you may realize you’re not using LuCi at all and it’s just sitting there idle: a component that’s not only using resources, but also providing an extra attack surface.
Now, one could just disable uHTTPd to address some of these concerns, but LuCi installs a lot of dependencies, and cluttering a router with things you’ll hardly ever use is not the best use of the very limited storage space available on most routers.
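For reference, disabling the web server is just a matter of stopping the service and removing it from startup:

/etc/init.d/uhttpd stop
/etc/init.d/uhttpd disable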
Another method some use to “remove” LuCi is to issue:
opkg --autoremove remove luci
This may seem to work, but in reality LuCi packages are not really removed this way and the related files will only be masked by OverlayFS. This is because the packages are built into the firmware itself.
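You can verify this for yourself on a squashfs-based image: the read-only firmware remains mounted under /rom, and the “removal” only adds masking entries to the overlay (the paths below assume a default squashfs + overlayfs build):

# the LuCi files are still present in the read-only squashfs
ls /rom/usr/lib/lua/luci
# the overlay's upper layer merely contains entries that mask them
ls -l /overlay/upper/usr/lib/lua/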
While OpenWrt Continue reading
If you’ve used kubeadm to bootstrap a Kubernetes cluster, you probably know that at the end of the kubeadm init command to bootstrap the first node in the cluster, kubeadm prints out a bunch of information: how to copy over the admin Kubeconfig file, and how to join both control plane nodes and worker nodes to the cluster you just created. But what if you didn’t write these values down after the first kubeadm init command? How does one go about reconstructing the proper kubeadm join command?
Fortunately, the values needed for a kubeadm join command are relatively easy to find or recreate. First, let’s look at the values that are needed.
Here’s the skeleton of a kubeadm join command for a control plane node:
kubeadm join <endpoint-ip-or-dns>:<port> \
--token <valid-bootstrap-token> \
--discovery-token-ca-cert-hash <ca-cert-sha256-hash> \
--control-plane \
--certificate-key <certificate-key>
And here’s the skeleton of a kubeadm join command for a worker node:
kubeadm join <endpoint-ip-or-dns>:<port> \
--token <valid-bootstrap-token> \
--discovery-token-ca-cert-hash <ca-cert-sha256-hash>
As you can see, the information needed for the worker node is a subset of the information needed for a control plane node.
Here’s how to find or recreate all the various pieces of information you need:
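As a quick sketch of the general approach (assuming kubeadm defaults, and running on an existing control plane node), the various values can be regenerated like this:

# create a fresh bootstrap token (or list existing ones with 'kubeadm token list')
kubeadm token create
# compute the CA certificate hash used for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
# re-upload the control plane certificates and print a new --certificate-key
kubeadm init phase upload-certs --upload-certs
# or, for worker nodes, simply have kubeadm print a complete join command
kubeadm token create --print-join-command

The endpoint and port are whatever the API server was made reachable on at kubeadm init time (typically port 6443, or the address of a load balancer in an HA setup).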
AnsibleFest Atlanta is September 24th - 26th at the Hilton Atlanta, a few short blocks from Centennial Olympic Park. This year is going to be bigger and better than ever. As AnsibleFest continues to grow, so do its offerings. We are excited to offer more breakout sessions, more hands-on workshops, and more Ask an Expert sessions. This year we have expanded our AnsibleFest programming to offer 10 different tracks. We are also introducing the Open Lounge, which is a place to network, relax and recharge. It provides a great opportunity to meet and connect with passionate Ansible users, developers, and industry partners.
The AnsibleFest Agenda is live. Thank you to everyone who answered the call for submissions. It was a challenge to narrow down the sessions from the record-setting number of submissions we received. We love our community, customers, and partners, and appreciate everyone who contributed.
For those who are not familiar with AnsibleFest, or have not attended the event before, below are a few highlights of AnsibleFest that you won’t want to miss.
General Sessions
We have some amazing general sessions planned this year. The opening keynote at AnsibleFest will feature talks from Red Hat Ansible Automation Continue reading
In this post, I’d like to walk through setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm. Over the last year or so, the power and utility of kubeadm has vastly improved (thank you to all the contributors who have spent countless hours!), and it is now—in my opinion, at least—at a point where setting up a well-configured, highly available Kubernetes cluster is pretty straightforward.
This post builds on the official documentation for setting up a highly available Kubernetes 1.15 cluster. This post also builds upon previous posts I’ve written about setting up Kubernetes clusters with the AWS cloud provider:
All of these posts are focused on Kubernetes releases prior to 1.15, and given the changes in kubeadm in the 1.14 and 1.15 releases, I felt it would be helpful to revisit the process again for 1.15. For now, I’m focusing on the in-tree AWS cloud provider; however, in the very near future I’ll look at using the new external AWS cloud provider.
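As a rough sketch of what the in-tree provider involves on the kubeadm side (the version and values below are placeholders, and the full configuration covers more than this):

# minimal kubeadm configuration fragments that enable the in-tree AWS cloud provider
cat > kubeadm.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
EOF
sudo kubeadm init --config kubeadm.yaml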
As pointed out in the “original” Continue reading
While hanging out in the Kubernetes Slack community, I’ve seen one question asked multiple times: how to switch a Kubernetes cluster from a non-HA control plane (single control plane node) to an HA control plane (multiple control plane nodes). As far as I am aware, this isn’t documented upstream, so I thought I’d walk readers through what this process looks like.
I’m making the following assumptions:
The cluster was bootstrapped using kubeadm. (This means we’ll use kubeadm to add the additional control plane nodes.)

I’d also like to point out that there are a lot of different configurations and variables that come into play with a process like this. It’s (nearly) impossible to cover them all in a single blog post, so this post attempts to address what I believe to be the most common situations.
With those assumptions and that caveat in mind, the high-level overview of the process looks like this:
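At its core, the conversion involves repointing the cluster at a load balanced API endpoint and then joining the additional control plane nodes. As a minimal sketch of the first piece (the endpoint shown is hypothetical, and kubeadm defaults are assumed):

# edit the kubeadm-config ConfigMap and set controlPlaneEndpoint under
# ClusterConfiguration to the load balancer's address, e.g. "api.example.com:6443"
kubectl -n kube-system edit configmap kubeadm-config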
Welcome to Technology Short Take #117! Here’s my latest gathering of links and articles from around the World Wide Web (an “old school” reference for you right there). I’ve got a little bit of something for most everyone, except for the storage nerds (I’m leaving that to my friend J Metz this time around). Here’s hoping you find something useful!
Today I came across this article, which informed me that (as of the 18.09 release) you can use SSH to connect to a Docker daemon remotely. That’s handy! The article uses docker-machine (a useful but underrated tool, I think) to demonstrate, but the first question in my mind was this: can I do this through an SSH bastion host? Read on for the answer.
If you’re not familiar with the concept of an SSH bastion host, it is a (typically hardened) host through which you, as a user, would proxy your SSH connections to other hosts. For example, you may have a bunch of EC2 instances in an AWS VPC that do not have public IP addresses. (That’s reasonable.) You could use an SSH bastion host—which would require a public IP address—to enable SSH access to otherwise inaccessible hosts. I wrote a post about using SSH bastion hosts back in 2015; give that post a read for more details.
The syntax for connecting to a Docker daemon via SSH looks something like this:
docker -H ssh://user@host <command>
So, if you wanted to run docker container ls to list the containers running on a remote system, you’d Continue reading
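Since the Docker CLI hands the connection off to your local ssh client, the bastion question largely comes down to ordinary SSH configuration. A minimal sketch (host names and user are hypothetical):

# ~/.ssh/config entry that routes connections to the private host through the bastion
cat >> ~/.ssh/config <<'EOF'
Host private-docker-host
    HostName 10.0.1.10
    User ubuntu
    ProxyJump bastion.example.com
EOF
# the Docker CLI honors the SSH configuration, so this now tunnels through the bastion
docker -H ssh://ubuntu@private-docker-host container ls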
In part 1 of this series, we looked at operators overall, and what they do in OpenShift/Kubernetes. We peeked at the Operator SDK, and why you'd want to use an Ansible Operator rather than other kinds of operators provided by the SDK. We also explored how Ansible Operators are structured and the relevant files created by the Operator SDK when building Kubernetes Operators with Ansible.
In this, the second part of the deep dive series, we'll:
We start by creating a new project in OpenShift, which we'll simply call test:
$ oc new-project test --display-name="Testing Ansible Operator"
Now using project "test" on server "https://ec2-xx-yy-zz-1.us-east-2.compute.amazonaws.com:8443".
We won't delve too much into this role; however, the basic operation is:
Recently, while troubleshooting a separate issue, I had a need to get more information about the token used by Kubernetes Service Accounts. In this post, I’ll share a quick command-line that can fully decode a Service Account token.
Service Account tokens are stored as Secrets in the “kube-system” namespace of a Kubernetes cluster. To retrieve just the token portion of the Secret, use -o jsonpath like this (replace “sa-token” with the appropriate name for your environment):
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}'
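If you aren’t sure of the Secret’s name, you can look it up from the Service Account itself (shown here for a hypothetical Service Account named “my-sa”):

# get the name of the token Secret associated with a Service Account
kubectl -n kube-system get serviceaccount my-sa \
-o jsonpath='{.secrets[0].name}'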
The output is Base64-encoded, so just pipe the output into base64:
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode
The result you’re seeing is a JSON Web Token (JWT). You could use the JWT web site to decode the token, but given that I’m a fan of the CLI I decided to use this JWT CLI utility instead:
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode | \
jwt decode -
The final -, for those who may not be familiar, is the syntax to tell the jwt utility to look at STDIN for the JWT it needs to Continue reading
We all know what OpenWrt is. The amazing Linux distro built specifically for embedded devices.
What you can achieve with a rather cheap router running OpenWrt is mind-boggling.
OpenWrt also gives you great control over its build system. For normal cases, you probably don’t need to build OpenWrt from source yourself. That has already been done for you, and all you need to do is download the appropriate compiled firmware image and then upload it to your router1.
But for more advanced usage, you may find yourself needing to build OpenWrt images yourself. This could be due to wanting to make some changes to the code, add some device-specific options, etc.
Building OpenWrt from source is easy and well-documented, and it works great. That is, until you start using opkg to install some new packages.
opkg will by default fetch new packages from the official repository (as one might expect), but depending on the package, the installation may or may not fail.
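For context, the standard from-source build flow is roughly the following (per the official build system documentation; target, device profile, and package selection all happen in menuconfig):

# fetch the source and the package feeds
git clone https://git.openwrt.org/openwrt/openwrt.git
cd openwrt
./scripts/feeds update -a
./scripts/feeds install -a
# select target, device profile, and packages, then build
make menuconfig
make -j"$(nproc)"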
In this post, I’m going to walk you through how to add a name (specifically, a Subject Alternative Name) to the TLS certificate used by the Kubernetes API server. Updating the certificate to include a name that wasn’t originally included could be useful in a few different scenarios. A couple of situations come to mind, such as adding a load balancer in front of the control plane, or using a new or different URL/hostname to access the API server (both situations taking place after the cluster was bootstrapped).
This process does assume that the cluster was bootstrapped using kubeadm. This could’ve been a simple kubeadm init with no customization, or it could’ve been using a configuration file to modify the behavior of kubeadm when bootstrapping the cluster. This process also assumes your Kubernetes cluster is using the default certificate authority (CA) created by kubeadm when bootstrapping a cluster. Finally, this process assumes you are using a non-HA (single control plane node) configuration.
Before getting into the details of how to update the certificate, I’d like to first provide a bit of background on why this is important.
The Kubernetes API server uses digital certificates to both Continue reading
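For reference, a rough sketch of the general approach with a kubeadm-managed cluster looks something like this (default paths assumed; the backup location is illustrative):

# pull the cluster's kubeadm configuration so the new name can be added under apiServer.certSANs
kubectl -n kube-system get configmap kubeadm-config \
-o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
# after editing kubeadm.yaml, move the old certificate aside and regenerate it
sudo mv /etc/kubernetes/pki/apiserver.{crt,key} /tmp/
sudo kubeadm init phase certs apiserver --config kubeadm.yaml
# finally, restart the kube-apiserver so it picks up the new certificate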