In 2018, after finding a dearth of information on setting up Kubernetes with AWS integration/support, I set out to try to establish some level of documentation on this topic. That effort resulted in a few different blog posts, but ultimately culminated in this post on setting up an AWS-integrated Kubernetes cluster using kubeadm. Although originally written for Kubernetes 1.15, the process described in that post is still accurate for newer versions of Kubernetes. With the release of Kubernetes 1.22, though, the in-tree AWS cloud provider—the provider used and described in the post linked above—has been deprecated in favor of the external cloud provider. In this post, I’ll show how to set up an AWS-integrated Kubernetes cluster using the external AWS cloud provider.
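For a sense of what moving to the external provider looks like in practice, here is a minimal sketch of the kubeadm configuration involved. This is a hedged illustration, not the exact configuration from the post: the node name is a placeholder, the v1beta3 kubeadm config API is assumed, and the AWS cloud controller manager still has to be installed separately once the cluster is bootstrapped.

```yaml
# Hedged sketch: kubeadm config fragments for an external cloud provider.
# The node name below is a placeholder; with the AWS provider, node names
# generally need to match the EC2 instance's private DNS name.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: ip-10-11-12-13.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: external
controllerManager:
  extraArgs:
    cloud-provider: external
```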
In addition to the post I linked above, there were a number of other articles I published on this topic:
Combining kustomize with Cluster API (CAPI) is a topic I’ve touched on several times over the last 18-24 months, starting in November 2019 with a post on using kustomize with CAPI manifests. A short while later, I discovered a way to change the configurations for the kustomize transformers to make kustomize easier to use with CAPI. That resulted in two posts on changing the kustomize transformers: one for v1alpha2 and one for v1alpha3 (since there were changes to the API between versions). In this post, I’ll revisit kustomize transformer configurations again, this time for CAPI v1beta1 (the API version corresponding to the CAPI 1.0 release).
In the v1alpha2 post (the first post on modifying kustomize transformer configurations), I mentioned that changes were needed to the NameReference and CommonLabels transformers. In the v1alpha3 post, I noted that the changes to the CommonLabels transformer had become largely optional: if you plan on adding additional labels to MachineDeployments, the change to the CommonLabels transformer is required, but otherwise you can probably get by without it.
For v1beta1, the necessary changes are very similar to v1alpha3, and (for the most part) are Continue reading
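As a rough illustration of the kind of transformer configuration change involved, here is a sketch of a nameReference entry that tells kustomize to keep name references inside a CAPI Cluster object in sync when the referenced objects are renamed. The specific kinds and field paths are assumptions for illustration, not necessarily the exact entries the v1beta1 configuration needs.

```yaml
# namereference.yaml (sketch): teach kustomize where CAPI objects are
# referenced by name, so transformers like namePrefix update those
# references too. Kinds and paths here are illustrative assumptions.
nameReference:
- kind: AWSCluster
  group: infrastructure.cluster.x-k8s.io
  fieldSpecs:
  - path: spec/infrastructureRef/name
    kind: Cluster
- kind: KubeadmControlPlane
  group: controlplane.cluster.x-k8s.io
  fieldSpecs:
  - path: spec/controlPlaneRef/name
    kind: Cluster
```

A kustomization.yaml would then point at this file via its configurations field.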
Welcome to Technology Short Take #146! Over the last couple of weeks, I’ve gathered a few technology-related links for you all. There’s some networking stuff, a few security links, and even a hardware-related article. But enough with the introduction—let’s get into the content!
dnspeep, a tool to help folks understand how DNS works.
In this post, I’m going to walk you through how to install Cilium onto a Cluster API-managed workload cluster using a ClusterResourceSet. It’s reasonable to consider this post a follow-up to my earlier post that walked you through using a ClusterResourceSet to install Calico. There’s no need to read the earlier post, though, as this post includes all the information (or links to the information) you need. Ready? Let’s jump in!
If you aren’t already familiar with Cluster API—hereafter just referred to as CAPI—then I would encourage you to read my introduction to Cluster API post. Although it is a bit dated (it was written in the very early days of the project, which recently released version 1.0) and some of the commands referenced in that post have changed, the underlying concepts remain valid. If you’re not familiar with Cilium, check out their introduction to the project for more information. Finally, if you’re not familiar at all with the idea of ClusterResourceSets, you can read my earlier post or check out the ClusterResourceSet CAEP document.
If you want to install Cilium via a ClusterResourceSet, the process looks something like this:
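Broadly, it involves putting the rendered Cilium manifests into a ConfigMap, creating a ClusterResourceSet that selects clusters by label, and labeling the workload cluster accordingly. The sketch below (hedged, with hypothetical names and an assumed v1beta1 API group for ClusterResourceSets) shows the ClusterResourceSet piece.

```yaml
# Hedged sketch: apply the Cilium manifests (stored in a ConfigMap named
# "cilium-manifests") to every workload cluster labeled cni=cilium.
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: cilium-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: cilium
  resources:
  - kind: ConfigMap
    name: cilium-manifests
```

The ConfigMap itself would typically be created from rendered Cilium manifests (for example, output from helm template), and workload clusters opt in by carrying the matching label.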
Welcome to Technology Short Take #145! What will you find in this Tech Short Take? Well, let’s see…stuff on Envoy, network automation, network designs, M1 chips (and potential open source variants!), a bevy of security articles (including a couple on very severe vulnerabilities), Kubernetes, AWS IAM, and so much more! I hope that you find something useful here. Enjoy!
Welcome to Technology Short Take #144! I have a fairly diverse set of links for readers this time around, covering topics from microchips to improving your writing, with stops along the way in topics like Kubernetes, virtualization, Linux, and the popular JSON-parsing tool jq. I hope you find something useful!
I use Pulumi to manage my lab infrastructure on AWS (I shared some of the details in this April 2020 blog post published on the Pulumi site). Originally I started with TypeScript, but later switched to Go. Recently I had a need to add some VPC peering relationships to my lab configuration. I was concerned that this may pose some problems—due entirely to the way I structure my Pulumi projects and stacks—but as it turned out it was more straightforward than I expected. In this post, I’ll share some example code and explain what I learned in the process of writing it.
First, let me share some background on how I structure my Pulumi projects and stacks.
It all starts with a Pulumi project that manages my base AWS infrastructure—VPC, subnets, route tables and routes, Internet gateways, NAT gateways, etc. I use a separate stack in this project for each region where I need base infrastructure.
All other projects build on “top” of this base project, referencing the resources created by the base project in order to create their own resources. Referencing the resources created by the base project is accomplished via a Pulumi StackReference.
In my Continue reading
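To show the StackReference pattern in Go, here is a hedged sketch; the project/stack names, output keys, and the peer VPC ID are hypothetical, not the values from my actual configuration.

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ec2"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Reference the base infrastructure stack for this region
		// (project and stack names here are hypothetical).
		base, err := pulumi.NewStackReference(ctx, "myorg/base-infra/us-west-2", nil)
		if err != nil {
			return err
		}

		// Pull the VPC ID exported by the base stack (output name assumed).
		baseVpcID := base.GetStringOutput(pulumi.String("vpcId"))

		// Peer this project's VPC with the base VPC; assumes both VPCs are
		// in the same account and region, so the peering can auto-accept.
		_, err = ec2.NewVpcPeeringConnection(ctx, "lab-peering", &ec2.VpcPeeringConnectionArgs{
			VpcId:      baseVpcID,
			PeerVpcId:  pulumi.String("vpc-0123456789abcdef0"), // hypothetical peer VPC
			AutoAccept: pulumi.Bool(true),
		})
		return err
	})
}
```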
To conduct some testing, I recently needed to spin up a group of Kubernetes clusters on AWS. Generally speaking, my “weapon of choice” for something like this is Cluster API (CAPI) with the AWS provider. Normally this would be enormously simple. In this particular case—for reasons that I won’t bother going into here—I needed to spin up all these clusters in a single VPC. This presents a problem for the Cluster API Provider for AWS (CAPA), as it currently doesn’t add some required tags to existing AWS infrastructure (see this issue). The fix is to add the tags manually, so in this post I’ll share how I used the AWS CLI to add the necessary tags.
Without the necessary tags, the AWS cloud provider—which is responsible for the integration that creates Elastic Load Balancers (ELBs) in response to the creation of a Service of type LoadBalancer, for example—won’t work properly. Specifically, the following tags are needed:

```
kubernetes.io/cluster/<cluster-name>
kubernetes.io/role/elb
kubernetes.io/role/internal-elb
```
The latter two tags are mutually exclusive: the former should be assigned to public subnets to tell the AWS cloud provider where to place public-facing ELBs, while the latter is assigned to private subnets Continue reading
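For reference, adding these tags with the AWS CLI looks roughly like this; the region, cluster name, and subnet IDs are placeholders.

```sh
# Tag a public subnet: the cluster tag uses a value of "shared" because the
# subnet belongs to pre-existing infrastructure rather than to the cluster,
# and the elb role tag tells the cloud provider to put public ELBs here.
aws ec2 create-tags --region us-west-2 \
  --resources subnet-0123456789abcdef0 \
  --tags "Key=kubernetes.io/cluster/workload-1,Value=shared" \
         "Key=kubernetes.io/role/elb,Value=1"

# Tag a private subnet with the internal-elb role instead.
aws ec2 create-tags --region us-west-2 \
  --resources subnet-0fedcba9876543210 \
  --tags "Key=kubernetes.io/cluster/workload-1,Value=shared" \
         "Key=kubernetes.io/role/internal-elb,Value=1"
```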
Welcome to Technology Short Take #143! I have what I think is an interesting list of links to share with you this time around. Since taking my new job at Kong, I’ve been spending more time with Envoy, so you’ll see some Envoy-related content showing up in this Technology Short Take. I hope this collection of links has something useful for you!
In late June of this year, I wrote a piece on using WireGuard on macOS via the CLI, where I walked readers through how to configure and use the WireGuard VPN from the terminal (as opposed to using the GUI client, which I discussed here). In that post, I briefly mentioned that I was planning to explore how to have macOS’ launchd automatically start WireGuard interfaces. In this post, I’ll show you how to do exactly that.
These instructions borrow heavily from this post showing how to use macOS as a WireGuard VPN server. These instructions also assume that you’ve already walked through installing the necessary WireGuard components, and that you’ve already created the configuration file(s) for your WireGuard interface(s). Finally, I wrote this using my M1-based MacBook Pro, so my example files and instructions will reference the default Homebrew prefix of /opt/homebrew. If you’re on an Intel-based Mac, change this to /usr/local.
The first step is to create a launchd job definition. This file should be named <label>.plist, and it will need to be placed in a specific location. The <label> value is taken from the name given to the job Continue reading
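To make the shape of that job definition concrete, here is a hedged sketch; the label, interface name, and log path are hypothetical, and the paths assume the /opt/homebrew prefix from an M1-based Mac.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- The label matches the filename, minus the .plist extension. -->
    <key>Label</key>
    <string>com.wireguard.wg0</string>
    <!-- Bring the tunnel up by running wg-quick against the config file. -->
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/wg-quick</string>
        <string>up</string>
        <string>/opt/homebrew/etc/wireguard/wg0.conf</string>
    </array>
    <!-- wg-quick shells out to other tools, so give it a usable PATH. -->
    <key>EnvironmentVariables</key>
    <dict>
        <key>PATH</key>
        <string>/opt/homebrew/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
    </dict>
    <!-- Start the job when it is loaded (e.g., at boot). -->
    <key>RunAtLoad</key>
    <true/>
    <key>StandardErrorPath</key>
    <string>/tmp/wireguard.wg0.err</string>
</dict>
</plist>
```

Loading the job with launchctl (from /Library/LaunchDaemons, if it should run system-wide at boot) would then bring the interface up automatically.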
I’ve written a fair amount about kubeadm, which was my preferred way of bootstrapping Kubernetes clusters until Cluster API arrived. Along the way, I’ve also discussed using kubeadm to assist with setting up etcd, the distributed key-value store leveraged by the Kubernetes control plane (see here, here, and here). In this post, I’d like to revisit the topic of using kubeadm to set up an etcd cluster, this time looking at an alternative to the official documentation’s approach for generating the necessary TLS certificates.
There is absolutely nothing wrong with the process the official documentation describes (I’m referring to this page, by the way); this alternative process just creates slightly “cleaner” certificates. What do I mean by “cleaner” certificates? The official documentation uses a series of kubeadm configuration files, one for each etcd cluster member, to control how the utility creates the necessary certificates and configuration files. The user is instructed to use these configuration files on a single system to generate the certificates for all the cluster members. This works fine, with one caveat: each of the certificates will have an extra hostname—the hostname of the system being used Continue reading
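For context, the officially documented flow (the one that produces that extra hostname in each certificate) drives kubeadm’s certificate phases with one configuration file per etcd member, roughly like this; the file paths below are placeholders.

```sh
# Generate the etcd CA first (done once, on the host holding the CA).
kubeadm init phase certs etcd-ca

# Then generate each member's certificates from that member's kubeadm
# configuration file (placeholder path shown for a single member).
kubeadm init phase certs etcd-server --config=/tmp/etcd0/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/etcd0/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/etcd0/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/etcd0/kubeadmcfg.yaml
```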
Welcome to Technology Short Take #142! This time around, the Networking section is a bit light, but I’ve got plenty of cloud computing links and articles for you to enjoy, along with some stuff on OSes and applications, programming, and soft skills. Hopefully there’s something useful here for you!
Recently, I needed to deploy a Kubernetes cluster via Cluster API (CAPI) into a pre-existing AWS VPC. As I outlined in this post from September 2019, this entails modifying the CAPI manifest to include the VPC ID and any associated subnet IDs, as well as referencing existing security groups where needed. I knew that I could use the kustomize tool to make these changes in a declarative way, as I’d explored using kustomize with Cluster API manifests some time ago. This time, though, I needed to add a list of items, not just modify an existing value. In this post, I’ll show you how I used a JSON 6902 patch with kustomize to add a list of items to a CAPI manifest.
By the way, if you’re not familiar with kustomize, you may find my introduction to kustomize post to be helpful. Also, for those readers who are unfamiliar with JSON 6902 patches, the associated RFC is useful, as is this site.
In this particular case, the addition of the VPC ID and the subnet IDs were easily handled with a strategic merge patch that referenced the AWSCluster object. More challenging, though, was the reference to the existing security Continue reading
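To give a flavor of what the JSON 6902 approach looks like, here is a hedged sketch of appending subnet entries to an AWSCluster: a kustomization.yaml that targets the object, plus the patch file it references. The API version, field path, object name, and subnet IDs are assumptions for illustration.

```yaml
# kustomization.yaml (sketch): apply a JSON 6902 patch to the AWSCluster.
resources:
- cluster-template.yaml
patchesJson6902:
- target:
    group: infrastructure.cluster.x-k8s.io
    version: v1alpha3
    kind: AWSCluster
    name: workload-1
  path: add-subnets.json
```

```json
[
  { "op": "add", "path": "/spec/networkSpec/subnets/-",
    "value": { "id": "subnet-0123456789abcdef0" } },
  { "op": "add", "path": "/spec/networkSpec/subnets/-",
    "value": { "id": "subnet-0fedcba9876543210" } }
]
```

Each op whose path ends in /- appends to an existing list; if the list doesn’t exist yet, the patch would instead target /spec/networkSpec/subnets with a list value.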
I’ve written a few different posts on WireGuard, the “simple yet fast and modern VPN” (as described by the WireGuard web site) that aims to supplant tools like IPSec and OpenVPN. My first post on WireGuard showed how to configure WireGuard on Linux, both on the client side as well as on the server side. After that, I followed it up with posts on using the GUI WireGuard app to configure WireGuard on macOS and—most recently—making WireGuard from Homebrew work on an M1-based Mac. In this post, I’m going to take a look at using WireGuard on macOS again, but this time via the CLI.
Some of this information is also found in this WireGuard quick start. Here I’ll focus only on using macOS as a WireGuard client, not as a server; refer to the WireGuard docs (or to my earlier post) for information on setting up a WireGuard server. I’ll also assume that you’ve installed WireGuard via Homebrew.
The first step is to generate the public/private keys you’ll need. If the /usr/local/etc/wireguard directory (or the /opt/homebrew/etc/wireguard directory for users on an M1-based Mac) doesn’t exist, you’ll need to first create that directory. (It didn’t exist Continue reading
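The key-generation step itself is just the standard wg commands; a minimal sketch for an M1-based Mac (swap in /usr/local for an Intel-based Mac) might look like this.

```sh
# Create the WireGuard config directory under the Homebrew prefix.
mkdir -p /opt/homebrew/etc/wireguard
cd /opt/homebrew/etc/wireguard

# Restrict permissions on the key files about to be created.
umask 077

# Generate a private key and derive the matching public key from it.
wg genkey | tee privatekey | wg pubkey > publickey
```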
The Kuma community recently released version 1.2.0 of the open source Kuma service mesh, and along with it a corresponding version of kumactl, the command-line utility for interacting with Kuma. To make it easy for macOS users to get kumactl, the Kuma community maintains a Homebrew formula for the CLI utility. That includes providing M1-native (ARM64) macOS binaries for kumactl. Unfortunately, installing an earlier version of kumactl on an M1-based Mac using Homebrew is somewhat less than ideal. Here’s one way—probably not the only way—to work around some of the challenges.
Note that this post really only applies to users of M1-based Macs; users of Intel-based Macs can extract the kumactl binary from the release archive linked from the Kuma install docs. (The same goes for users of Linux distributions running on Intel-based hardware.) On the Kuma website, simply select the desired version of Kuma from the drop-down in the upper left, check the page for the direct download link, and off you go. This doesn’t work for M1-based Macs because, at the time this post was written, the Kuma community was not providing ARM64 binaries. This leaves Homebrew as the only way (aside Continue reading
After writing the post on using WireGuard on macOS (using the official WireGuard GUI app from the Mac App Store), I found the GUI app’s behavior to be less than ideal. For example, tunnels marked as on-demand would later show up as no longer configured as an on-demand tunnel. When I decided to set up WireGuard on my M1-based MacBook Pro (see my review of the M1 MacBook Pro), I didn’t want to use the GUI app. Fortunately, Homebrew has formulas for WireGuard. Unfortunately, the WireGuard tools as installed by Homebrew on an M1-based Mac won’t work. Here’s how to fix that.
The key issues with WireGuard as installed by Homebrew on an M1-based Mac are:
On an M1-based Mac, Homebrew uses the /opt/homebrew prefix. By comparison, Homebrew uses /usr/local on Intel-based Macs. Some of the WireGuard-related scripts are hard-coded to use /usr/local as the Homebrew prefix. Because the prefix has changed, though, these scripts now don’t work on an M1-based Mac.
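As one possible workaround (not necessarily the exact fix described in the full post), the hard-coded /usr/local paths can be satisfied with symlinks that point back to the M1 Homebrew prefix; which binaries need linking depends on which scripts complain.

```sh
# Illustrative only: recreate the expected /usr/local locations and point
# them at the binaries Homebrew actually installed under /opt/homebrew.
sudo mkdir -p /usr/local/bin
sudo ln -s /opt/homebrew/bin/wg /usr/local/bin/wg
sudo ln -s /opt/homebrew/bin/wireguard-go /usr/local/bin/wireguard-go
```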
I recently came across something that wasn’t immediately intuitive with regard to terminating HTTPS traffic on an AWS Elastic Load Balancer (ELB) when using Kubernetes on AWS. At least, it wasn’t intuitive to me, and I’m guessing that it may not be intuitive to some other readers as well. Kudos to my teammates Hart Hoover and Brent Yarger for identifying the resolution, which I’m going to call out in this post.
This AWS Premium Support post outlines the basic scenario:
Consider the following YAML, taken directly from the previously-referenced AWS Premium Support article:
```yaml
apiVersion: v1
kind: Service
metadata:
  name:
```

Continue reading
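The general shape of such a Service (a hedged reconstruction, not the exact manifest from the article; the name, selector, and certificate ARN are placeholders) is a LoadBalancer Service whose annotations tell the AWS cloud provider to terminate TLS at the ELB:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # ACM certificate the ELB should present for TLS (placeholder ARN).
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:123456789012:certificate/example
    # Terminate TLS on port 443 at the ELB...
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # ...and use plain HTTP between the ELB and the backend pods.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: https
    port: 443
    targetPort: 80
```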
Welcome to Technology Short Take #141! This is the first Technology Short Take compiled, written, and published entirely on my M1-based MacBook Pro (see my review here). The collection of links shared below covers a fairly wide range of topics, from old Sun hardware to working with serverless frameworks in the public cloud. I hope that you find something useful here. Enjoy!
As part of an ongoing effort to refine my work environment, several months ago I switched to a Logitech Ergo K860 ergonomic keyboard. While I’m not a “keyboard snob,” I am somewhat particular about the feel of my keyboard, so I wasn’t sure how I would like the K860. In this post, I’ll provide my feedback, and provide some information on how well the keyboard works with both Linux and macOS.
Setting up the K860 is remarkably easy. The first system I tried to pair it with was an older Mac Pro workstation, and apparently the Bluetooth hardware on that particular workstation wasn’t new enough to support the K860 (Logitech indicates that Bluetooth 5.0 is needed; more on that in a moment). Instead, I popped in the USB-A wireless receiver, and was up and running with the K860 less than a minute later. This was using macOS, but the Mac Pro also dual-booted Linux, so I rebooted into Linux and found that the K860 with the Logitech-supplied USB receiver continued to work without any issues.
The key takeaway regarding Linux is this: if you’re interested in getting the K860 for use with Continue reading
I hadn’t done a personal hardware refresh in a while; my laptop was a 2017-era MacBook Pro (with the much-disliked butterfly keyboard) and my tablet was a 2014-era iPad Air 2. Both were serviceable but starting to show their age, especially with regard to battery life. So, a little under a month ago, I placed an order for some new Apple equipment. Included in that order was a new 2020 13" MacBook Pro with the Apple-designed M1 CPU. In this post, I’d like to provide a brief review of the 2020 M1-based MacBook Pro based on the past month of usage.
The “TL;DR” of my review is this: the new M1-based MacBook Pro offers impressive performance and even more impressive battery life. While the raw performance may not “blow away” its 2020 Intel-based counterpart—at least, it didn’t in my real-world usage—the M1-based MacBook Pro offered consistently responsive performance with a battery life that easily blew past any other laptop I’ve ever used, bar none.
Read on for more details.
The build quality is really good, with a significant improvement in keyboard quality relative to the earlier butterfly keyboard models (such as my 2017-era MacBook Pro). However, the overall design Continue reading