Author Archives: Scott Lowe
I am by no means a developer (not by a long shot!), but I have been learning lots of development-related things over the last several years and trying to incorporate those into my workflows. One of these is the idea of test-driven development (see Wikipedia for a definition and some additional information), in which one writes tests to validate functionality before writing the code to implement said functionality (pardon the paraphrasing). In this post, I’ll discuss how to use conftest to (loosely) implement test-driven development for Kustomize overlays.
If you’re unfamiliar with Kustomize, then this introductory article I wrote will probably be useful.
For the discussion around using the principles of test-driven development for Kustomize overlays, I’ll pull in a recent post I did on creating reusable YAML for installing Kuma. In that post, I pointed out four changes that needed to be made to the output of kumactl install control-plane to make it reusable; among those changes was handling the caBundle value for all of the webhooks.
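To make that a bit more concrete, here is a rough sketch (not taken from the Kuma post itself) of how a Kustomize overlay might overwrite a webhook’s caBundle value; the file names, the webhook configuration name, and the placeholder value are all illustrative assumptions.

```yaml
# kustomization.yaml -- illustrative sketch only; the resource file and the
# webhook configuration name are placeholders, not values from the Kuma manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - kuma-control-plane.yaml           # previously captured kumactl output
patches:
  - target:
      kind: MutatingWebhookConfiguration
      name: kuma-admission-webhooks   # placeholder name
    patch: |-
      - op: replace
        path: /webhooks/0/clientConfig/caBundle
        value: PLACEHOLDER_BASE64_CA_BUNDLE
```

The rendered output of an overlay like this is exactly the sort of thing conftest can evaluate before the changes are ever applied to a cluster.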
Welcome to Technology Short Take #150! This is the last Technology Short Take of 2021, so hopefully I’ll close the year out “with a bang” with this collection of links and articles on various technology areas. Bring on the content!
About six months ago I purchased an OWC Thunderbolt 3 Dock to replace my Anker PowerElite Thunderbolt 3 Dock (see my review here). While there was nothing necessarily wrong with the Anker PowerElite, it lacked a digital audio port that I could use to send audio to a soundbar positioned under my monitor. (I’d grown accustomed to using a soundbar when my 2012 Mac Pro was my primary workstation.) In this post, I’ll provide a brief review of the OWC Thunderbolt 3 Dock.
Note that I’m posting this as a customer. I paid for the dock with my own money, and I have not received any compensation of any kind from anyone for this review.
The OWC Thunderbolt 3 Dock feels well-built, but is larger than the Anker PowerElite. To be frank, I think I prefer the smaller footprint of the Anker PowerElite, but the added ports available on the OWC Thunderbolt Dock sealed the deal for me. Your priorities may be different, of course.
As one might expect, setup was truly “plug-and-play.” I connected all my peripherals to the dock—see below for the list of what I use on a regular basis—and then plugged Continue reading
Welcome to Technology Short Take #149! I’ll have one more Technology Short Take in 2021, scheduled for three weeks from now (on the last day of the year!). For now, though, I have a small collection of articles and links for your reading pleasure—not as many as I usually include in a Technology Short Take, but better than nothing at all (I hope!). Enjoy!
Welcome to Technology Short Take #148, aka the Thanksgiving Edition (at least, for US readers). I’ve been scouring RSS feeds and various social media sites, collecting as many useful links and articles as I can find: from networking hardware and networking CI/CD pipelines to Kernel TLS and tricks for improving your working memory. That’s quite the range! I hope that you find something useful here.
The pwru tool, which aims to help with tracing network packets in the Linux kernel. It seems like it may be a bit too debug-level to be useful to the average person, but I have yet to lay hands on it myself and find out for sure.
…the requests module to work with REST APIs. Good stuff here.
I’ve been using Kustomize with Cluster API (CAPI) to manage my AWS-based Kubernetes clusters for quite a while (along with Pulumi for managing the underlying AWS infrastructure). For all the time I’ve been using this approach, though, I’ve been unhappy with the overlay-based structure that had evolved as a way of managing multiple workload clusters. With the recent release of CAPI 1.0 and the v1beta1 API, I took the opportunity to see if there was a better way. I found a different way—time will tell if it is a better way. In this post, I’ll share how I’m using Kustomize components to help streamline managing multiple CAPI workload clusters.
Before continuing, I feel it’s important to point out that while the bulk of the Kustomize API is reasonably stable at v1beta1, the components portion of the API is still in early days (v1alpha1). So, if you adopt this functionality, be aware that it may change (or even get dropped).
More information on Kustomize components can be found in the Kustomize components KEP or in this demo document. The documentation on Kustomize components is somewhat helpful as well. I won’t try to rehash information found in those sources here, but Continue reading
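To give a sense of what this looks like in practice, here is a minimal sketch (with illustrative names, not my actual CAPI directory layout) of a component and of an overlay that opts in to it:

```yaml
# components/extra-machine-labels/kustomization.yaml -- a Kustomize component;
# note the v1alpha1 apiVersion and the Component kind
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
  - path: machinedeployment-labels-patch.yaml
---
# clusters/cluster-a/kustomization.yaml -- an overlay that opts in to the component
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
components:
  - ../../components/extra-machine-labels
```

Because each overlay pulls in only the components it needs, a common tweak can be defined once and then mixed into individual workload cluster definitions instead of being duplicated across overlays.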
Welcome to Technology Short Take #147! The list of articles is a bit shorter than usual this time around, but I’ve still got a good collection of articles and posts covering topics in networking, hardware (mostly focused on Apple’s processors), cloud computing, and virtualization. There’s bound to be something in here for most everyone! (At least, I hope so.) Enjoy your weekend reading!
The ext_authz filter is what allows Envoy to check with an authorization service to see if a request is permitted or denied.
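For reference, a stripped-down sketch of enabling this filter in an Envoy HTTP filter chain might look something like the following; the cluster name is an assumption and simply needs to point at your authorization service.

```yaml
# excerpt from an HttpConnectionManager configuration -- minimal sketch only
http_filters:
  - name: envoy.filters.http.ext_authz
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
      transport_api_version: V3
      failure_mode_allow: false        # deny requests if the authorization service is unreachable
      grpc_service:
        envoy_grpc:
          cluster_name: ext-authz      # assumed name of the authorization service cluster
        timeout: 0.25s
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```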
The Kubernetes Cluster API (CAPI) project—which recently released v1.0—can, if you wish, help manage the underlying infrastructure associated with a cluster. (You’re also fully able to have CAPI use existing infrastructure.) Speaking specifically of AWS, this means that the Cluster API Provider for AWS is able to manage VPCs, subnets, routes and route tables, gateways, and—of course—EC2 instances. These EC2 instances are booted from a set of AMIs (Amazon Machine Images, definitely pronounced “ay-em-eye” with three syllables) that are prepared and maintained by the CAPI project. In this short and simple post, I’ll show you how to influence the AMI selection process that CAPI’s AWS provider uses.
There are a couple different ways to influence AMI selection, and all of them have to do with settings within the AWSMachineSpec, which controls the configuration of an AWSMachine object. (An AWSMachine object is an infrastructure-specific implementation of a logical Machine object.) Specifically, there are these options for influencing AMI selection:
The ami field. (If this field is set, the other options do not apply.)
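As a quick illustration (the names and AMI ID below are placeholders), explicitly pinning the AMI looks something like this; if spec.template.spec.ami.id is left unset, CAPA falls back to its AMI lookup behavior instead.

```yaml
# AWSMachineTemplate with an explicitly pinned AMI -- illustrative values only
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: md-0
  namespace: default
spec:
  template:
    spec:
      ami:
        id: ami-0123456789abcdef0   # placeholder; setting this bypasses AMI lookup
      instanceType: t3.large
      sshKeyName: default
```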
Using CLI tools—instead of a “wall of YAML”—to install things onto Kubernetes is a growing trend, it seems. Istio and Cilium, for example, each have a CLI tool for installing their respective project. I get the reasons why; you can build logic into a CLI tool that you can’t build into a YAML file. Kuma, the open source service mesh maintained largely by Kong (and a CNCF Sandbox project), takes a similar approach with its kumactl tool. In this post, however, I’d like to take a look at creating reusable YAML to install Kuma, instead of using the CLI tool every time you install.
You might be wondering, “Why?” That’s a fair question. Currently, the kumactl tool, unless configured otherwise, will generate a set of TLS assets to be used by Kuma (and embeds some of those assets in the YAML regardless of the configuration). Every time you run kumactl, it will generate a new set of TLS assets. This means that the command is not declarative, even if the output is. Unfortunately, you can’t reuse the output, as that would result in duplicate TLS assets across installations. That brings me to the point of this Continue reading
In 2018, after finding a dearth of information on setting up Kubernetes with AWS integration/support, I set out to try to establish some level of documentation on this topic. That effort resulted in a few different blog posts, but ultimately culminated in this post on setting up an AWS-integrated Kubernetes cluster using kubeadm. Although originally written for Kubernetes 1.15, the process described in that post is still accurate for newer versions of Kubernetes. With the release of Kubernetes 1.22, though, the in-tree AWS cloud provider—which is what is used/described in the post linked above—has been deprecated in favor of the external cloud provider. In this post, I’ll show how to set up an AWS-integrated Kubernetes cluster using the external AWS cloud provider.
In addition to the post I linked above, there were a number of other articles I published on this topic. Most of the information in these posts, if not all of it, is found in the latest iteration, but I wanted to include these links here for some additional context. Also, Continue reading
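As a rough sketch of the kubeadm side of this (the node name and cluster name below are placeholders), the key change from the in-tree approach is telling the kubelet to use an external cloud provider:

```yaml
# kubeadm configuration sketch for the external AWS cloud provider;
# the node name and cluster name are placeholders
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: ip-10-0-0-10.us-west-2.compute.internal   # must match the EC2 instance's private DNS name
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: my-cluster   # matches the kubernetes.io/cluster/<name> tags on the AWS resources
```

The AWS cloud controller manager itself then has to be installed into the cluster separately (typically as a DaemonSet or via Helm), since with the external provider the AWS-specific logic no longer runs inside kube-controller-manager.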
Combining kustomize with Cluster API (CAPI) is a topic I’ve touched on several times over the last 18-24 months. I first touched on it in November 2019 with a post on using kustomize with CAPI manifests. A short while later, I discovered a way to change the configurations for the kustomize transformers to make them easier to use with CAPI. That resulted in two posts on changing the kustomize transformers: one for v1alpha2 and one for v1alpha3 (since there were changes to the API between versions). In this post, I’ll revisit the kustomize transformer configurations again, this time for CAPI v1beta1 (the API version corresponding to the CAPI 1.0 release).
In the v1alpha2 post (the first post on modifying kustomize transformer configurations), I mentioned that changes were needed to the NameReference and CommonLabels transformers. In the v1alpha3 post, I mentioned that the changes to the CommonLabels transformer became largely optional; if you are planning on adding additional labels to MachineDeployments, then the change to the CommonLabels transformer is required, but otherwise you could probably get by without it.
For v1beta1, the necessary changes are very similar to v1alpha3, and (for the most part) are Continue reading
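For readers who haven’t seen the earlier posts, the transformer configuration is just a YAML file that gets referenced from a kustomization.yaml via the configurations field. A heavily truncated, illustrative sketch of a NameReference entry for CAPI objects might look something like this (see the linked posts for the actual, complete configuration):

```yaml
# namereference.yaml -- partial, illustrative sketch of a NameReference
# transformer configuration for CAPI; the real configuration has more entries
nameReference:
  - kind: Cluster
    group: cluster.x-k8s.io
    fieldSpecs:
      - path: spec/clusterName
        kind: MachineDeployment
      - path: spec/template/spec/clusterName
        kind: MachineDeployment
```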
Welcome to Technology Short Take #146! Over the last couple of weeks, I’ve gathered a few technology-related links for you all. There’s some networking stuff, a few security links, and even a hardware-related article. But enough with the introduction—let’s get into the content!
dnspeep, a tool to help folks understand how DNS works.
In this post, I’m going to walk you through how to install Cilium onto a Cluster API-managed workload cluster using a ClusterResourceSet. It’s reasonable to consider this post a follow-up to my earlier post that walked you through using a ClusterResourceSet to install Calico. There’s no need to read the earlier post, though, as this post includes all the information (or links to the information) you need. Ready? Let’s jump in!
If you aren’t already familiar with Cluster API—hereafter just referred to as CAPI—then I would encourage you to read my introduction to Cluster API post. It is a bit dated (it was written in the very early days of the project, which recently released version 1.0), and some of the commands referenced in that post have changed, but the underlying concepts remain valid. If you’re not familiar with Cilium, check out their introduction to the project for more information. Finally, if you’re not familiar at all with the idea of ClusterResourceSets, you can read my earlier post or check out the ClusterResourceSet CAEP document.
If you want to install Cilium via a ClusterResourceSet, the process looks something like this:
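The steps themselves are in the full post, but the object at the heart of the approach is worth sketching here. This is illustrative only; the names, the label, and the ConfigMap are assumptions rather than values from the post.

```yaml
# a ClusterResourceSet that applies Cilium manifests (stored in a ConfigMap)
# to any workload cluster carrying a matching label -- illustrative names only
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: cilium-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: cilium              # workload Cluster objects opt in with this label
  resources:
    - name: cilium-manifests   # ConfigMap holding the rendered Cilium YAML
      kind: ConfigMap
```

Any Cluster object whose labels match the selector gets the referenced resources applied to it automatically by the ClusterResourceSet controller.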
Welcome to Technology Short Take #145! What will you find in this Tech Short Take? Well, let’s see…stuff on Envoy, network automation, network designs, M1 chips (and potential open source variants!), a bevy of security articles (including a couple on very severe vulnerabilities), Kubernetes, AWS IAM, and so much more! I hope that you find something useful here. Enjoy!
Welcome to Technology Short Take #144! I have a fairly diverse set of links for readers this time around, covering topics from microchips to improving your writing, with stops along the way in topics like Kubernetes, virtualization, Linux, and the popular JSON-parsing tool jq. I hope you find something useful!
I use Pulumi to manage my lab infrastructure on AWS (I shared some of the details in this April 2020 blog post published on the Pulumi site). Originally I started with TypeScript, but later switched to Go. Recently I had a need to add some VPC peering relationships to my lab configuration. I was concerned that this may pose some problems—due entirely to the way I structure my Pulumi projects and stacks—but as it turned out it was more straightforward than I expected. In this post, I’ll share some example code and explain what I learned in the process of writing it.
First, let me share some background on how I structure my Pulumi projects and stacks.
It all starts with a Pulumi project that manages my base AWS infrastructure—VPC, subnets, route tables and routes, Internet gateways, NAT gateways, etc. I use a separate stack in this project for each region where I need base infrastructure.
All other projects build on “top” of this base project, referencing the resources created by the base project in order to create their own resources. Referencing the resources created by the base project is accomplished via a Pulumi StackReference.
In my Continue reading
To conduct some testing, I recently needed to spin up a group of Kubernetes clusters on AWS. Generally speaking, my “weapon of choice” for something like this is Cluster API (CAPI) with the AWS provider. Normally this would be enormously simple. In this particular case—for reasons that I won’t bother going into here—I needed to spin up all these clusters in a single VPC. This presents a problem for the Cluster API Provider for AWS (CAPA), as it currently doesn’t add some required tags to existing AWS infrastructure (see this issue). The fix is to add the tags manually, so in this post I’ll share how I used the AWS CLI to add the necessary tags.
Without the necessary tags, the AWS cloud provider—which is responsible for the integration that creates Elastic Load Balancers (ELBs) in response to the creation of a Service of type LoadBalancer, for example—won’t work properly. Specifically, the following tags are needed:
kubernetes.io/cluster/<cluster-name>
kubernetes.io/role/elb
kubernetes.io/role/internal-elb
The latter two tags are mutually exclusive: the former should be assigned to public subnets to tell the AWS cloud provider where to place public-facing ELBs, while the latter is assigned to private subnets Continue reading
Welcome to Technology Short Take #143! I have what I think is an interesting list of links to share with you this time around. Since taking my new job at Kong, I’ve been spending more time with Envoy, so you’ll see some Envoy-related content showing up in this Technology Short Take. I hope this collection of links has something useful for you!
In late June of this year, I wrote a piece on using WireGuard on macOS via the CLI, where I walked readers through how to configure and use the WireGuard VPN from the terminal (as opposed to using the GUI client, which I discussed here). In that post, I briefly mentioned that I was planning to explore how to have macOS' launchd automatically start WireGuard interfaces. In this post, I’ll show you how to do exactly that.
These instructions borrow heavily from this post showing how to use macOS as a WireGuard VPN server. They also assume that you’ve already walked through installing the necessary WireGuard components, and that you’ve already created the configuration file(s) for your WireGuard interface(s). Finally, I wrote this using my M1-based MacBook Pro, so my example files and instructions will be referencing the default Homebrew prefix of /opt/homebrew. If you’re on an Intel-based Mac, change this to /usr/local instead.
The first step is to create a launchd job definition. This file should be named <label>.plist, and it will need to be placed in a specific location. The <label> value is taken from the name given to the job Continue reading
I’ve written a fair amount about kubeadm, which was my preferred way of bootstrapping Kubernetes clusters until Cluster API arrived. Along the way, I’ve also discussed using kubeadm to assist with setting up etcd, the distributed key-value store leveraged by the Kubernetes control plane (see here, here, and here). In this post, I’d like to revisit the topic of using kubeadm to set up an etcd cluster once again, this time taking a look at an approach to generating the necessary TLS certificates that differs from what the official documentation describes.
There is absolutely nothing wrong with the process the official documentation describes (I’m referring to this page, by the way); the approach I’ll describe here just creates slightly “cleaner” certificates. What do I mean by “cleaner” certificates? The official documentation uses a series of kubeadm configuration files, one for each etcd cluster member, to control how the utility creates the necessary certificates and configuration files. The user is instructed to use these configuration files on a single system to generate the certificates for all the cluster members. This works fine, with one caveat: each of the certificates will have an extra hostname—the hostname of the system being used Continue reading
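For context, the per-member configuration files in the official approach look roughly like the sketch below (the hostname and IP address are placeholders, and the official files include additional settings as well). kubeadm adds the SANs listed here, plus the hostname of the machine doing the generating, to the certificates it creates, which is where the caveat above comes from.

```yaml
# kubeadmcfg.yaml -- a sketch of a per-member configuration used only for
# generating etcd certificates; the hostname and IP address are placeholders
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
      - "etcd-01.example.com"
      - "10.11.12.101"
    peerCertSANs:
      - "etcd-01.example.com"
      - "10.11.12.101"
```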