In late 2023, I added some Go code for use with Pulumi to stand up an Amazon Elastic Kubernetes Service (EKS) cluster “from scratch,” meaning without using any prebuilt Pulumi components (like the AWSX VPC component or the EKS component). The code is largely illustrative for newer users, written to show how to stitch together all the components needed for an EKS cluster. In this post, I’ll show you how to modify that code to use Bottlerocket OS as the node OS for your EKS cluster—and share some information on installing Cilium into (onto?) the cluster.
The example code can be found in the `pulumi/eks-from-scratch` folder in my “learning-tools” GitHub repository. As I mentioned, it’s written in Go, and the associated README file has full instructions for how to use that code in your own environment. Since the code was intended to be illustrative, I have tried to provide enough comments in the code for readers to be able to decode what’s happening without too much difficulty.
To use Bottlerocket OS on the EKS nodes in your cluster, you’ll have to modify the `main.go` file. Specifically, changes are needed in the section of code that creates a Continue reading
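For a flavor of what’s involved, one building block is looking up the Bottlerocket AMI ID from the public SSM parameter that AWS publishes. The sketch below is illustrative rather than taken from the repository; the Kubernetes version and architecture in the parameter path are assumptions, as is the exported output.

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/ssm"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// AWS publishes Bottlerocket AMI IDs as public SSM parameters; the
		// Kubernetes version (1.29) and architecture (x86_64) here are assumptions.
		param, err := ssm.LookupParameter(ctx, &ssm.LookupParameterArgs{
			Name: "/aws/service/bottlerocket/aws-k8s-1.29/x86_64/latest/image_id",
		})
		if err != nil {
			return err
		}
		// The resulting AMI ID would then be referenced by the launch template
		// used for the node group; exporting it here is just for illustration.
		ctx.Export("bottlerocketAmiId", pulumi.String(param.Value))
		return nil
	})
}
```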
Welcome to Technology Short Take #183! Fall is in the air; the nights and mornings are cooler and the leaves are turning (or have already turned in some areas!). I’ve got a slightly smaller collection of links for you this time around, but I do hope that you’ll find something shared here useful. Enjoy!
Welcome to Technology Short Take #182! I have a slightly bulkier list of links for you today, bolstered by some recent additions to my RSS feeds and supplemented by some articles I found through social media. There should be enough here to keep folks entertained this weekend—enjoy!
The Image Builder project is a set of tools aimed at automating the creation of Kubernetes disk images—such as VM templates or Amazon Machine Images (AMIs). (Interesting side note: Image Builder is the evolution of a much older Heptio project where I was a minor contributor.) I recently had a need to build a custom AMI with some extra container images preloaded, and in this post I’ll share with you how to configure Image Builder to preload additional container images.
Image Builder isn’t a single binary; it’s a framework built on top of other tools such as Packer and Ansible. Although in this post I’m discussing Image Builder in the context of building an AMI, it’s not limited to use with AWS. You can use Image Builder for a pretty wide collection of platforms (check the Image Builder web site for more details).
To have Image Builder preload additional images into your disk image, there are three changes needed. All three of these changes belong in the `images/capi/packer/config/additional_components.json` file:

1. Set `load_additional_components` to `true`. (The default value is `false`.)
2. Set `additional_registry_images` to `true`. (This also defaults to `false`.)
3. Set `additional_registry_images_list` to a comma-delimited list of fully-qualified image Continue reading

Pulumi, like Terraform and OpenTofu, has the ability to store its state in a supported backend. You can store the state in one of the blob/object storage services offered by the major cloud providers, via Pulumi’s SaaS offering (called Pulumi Cloud), or even locally. It’s this last option I’ll explore a little bit in this post, where I’ll show you how to configure Pulumi to store the state in the project directory instead of somewhere else.
Let me start with this disclaimer: If you’re working with a team of folks on IaC for your project or employer, don’t do this. Storing project state locally with your project will just make life difficult for you. Instead, just accept that you need to store the state somewhere your whole team can access it. However, if you are a “team of one” then you might find this interesting or useful.
Pulumi supports a “local” backend, which means storing stack state information locally on the same system where Pulumi is running. By default, Pulumi will store the state information in the `${HOME}/.pulumi` folder.
It’s possible to configure the location the local backend uses with the `PULUMI_BACKEND_URL` environment variable (see this page for Continue reading
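One way to do this, for example, is to set `PULUMI_BACKEND_URL` to a `file://` URL pointing at the project directory (such as `file://$(pwd)`), or to run `pulumi login file://.` from within the project; Pulumi then keeps stack state in a `.pulumi` subdirectory of that location rather than in `${HOME}/.pulumi`.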
I’ve recently had the opportunity to start using a Lenovo ThinkPad X1 Carbon (X1C) Gen11 as my primary work system. Since I am not a Windows person—I don’t think I’ve used Windows as a daily driver since before the turn of the century—I’m running Linux on the X1C Gen11. Now that I’ve had a few weeks of regular use, in this post I’ll provide my review of this laptop.
This is my second ThinkPad X1 Carbon; my first was a Gen 5 that I received when I joined Heptio in 2018 (see my review of the X1C Gen5). I loved that laptop; my experience with the Gen5 was what made me choose the X1C Gen11 when given the opportunity. What I’ve found is that the Gen11 improves upon the X1C experience in some ways, but falls short in other ways.
Before getting into the details, here’s a quick rundown on the specifications:
As with the Gen5, I’m happy with the build quality and subjective “feel” of the laptop; Continue reading
Welcome to Technology Short Take #181! The summer of 2024 is nearly over, and Labor Day rapidly approaches. Take heart, though; here is some reading material for your weekend. From networking to security and from hardware to the cloud, there’s something in here for just about everyone. Enjoy!
I was first introduced to SOPS at a platform engineering event hosted in Denver last year. SOPS, which is an acronym for Secrets OPerationS, describes itself as “an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault, age, and PGP” (taken directly from the project’s GitHub repository). In this post, I’ll explore using Pulumi with SOPS—and I’ll also touch upon whether this combination of tools offers value for users or not.
SOPS is a command-line tool written primarily in Go; users can download binaries from the SOPS GitHub repository or, for Linux users, install it via their distribution’s package manager. Depending upon the encryption mechanism you’re going to use, you may also need to install other tools (for example, if you’re going to use AWS KMS then you’ll need the appropriate AWS tools installed and configured). I chose to use Rage, a Rust-based implementation of Age, as my encryption mechanism.
One of the things I find really interesting about SOPS is the fact that it supports data formats like YAML and JSON. What does this mean, exactly? When you Continue reading
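As a speculative sketch of how the two tools might meet in code (not necessarily the approach from this post, and assuming a recent SOPS release that lives under the `github.com/getsops` import path), a Pulumi Go program can decrypt a SOPS-encrypted file at deploy time using the SOPS `decrypt` package:

```go
package main

import (
	"fmt"

	"github.com/getsops/sops/v3/decrypt"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// decrypt.File reads the SOPS metadata embedded in the file and uses the
		// configured key (age/Rage, AWS KMS, etc.) to return the plaintext.
		// The file name "secrets.yaml" is a placeholder.
		plaintext, err := decrypt.File("secrets.yaml", "yaml")
		if err != nil {
			return fmt.Errorf("decrypting secrets.yaml: %w", err)
		}
		// The decrypted values could then be parsed and wrapped with
		// pulumi.ToSecret so they are also encrypted in the Pulumi state.
		ctx.Export("decryptedBytes", pulumi.Int(len(plaintext)))
		return nil
	})
}
```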
For those that aren’t aware, Talos Linux is a purpose-built Linux distribution designed for running Kubernetes. Bootstrapping a Talos Linux cluster is normally done via the Talos API, but this requires direct network access to the Talos Linux nodes. What happens if you don’t have direct network access to the nodes? In this post, I’ll share with you how to bootstrap a Talos Linux cluster over SSH.
In all honesty, if you can establish direct network access to the Talos Linux nodes then that’s the recommended approach. Consider what I’m going to share here as a workaround—a backup plan, if you will—in the event you can’t establish direct network access. I figured out how to bootstrap a Talos Linux cluster over SSH only because I was not able to establish direct network access.
Before getting into the details, I think it’s useful to point out that while I’m talking about using SSH port forwarding (SSH tunneling) here, Talos Linux itself doesn’t support SSH (as in, you can’t SSH into a Talos Linux system). In this case, you can do the same thing I did and use an SSH bastion host to handle the port forwarding.
There are two parts to bootstrapping Continue reading
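To sketch the idea: the Talos API (apid) listens on TCP port 50000, so a local forward such as `ssh -L 50000:<node-ip>:50000 user@bastion` lets `talosctl` reach a node by pointing `--endpoints` at `127.0.0.1`, and a similar forward of port 6443 can cover access to the Kubernetes API once the cluster is bootstrapped. Treat this as illustrative rather than the exact commands.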
Back in March of this year, I talked about how I started using `markdownlint-cli` to perform linting against the Markdown source files that are used by Hugo to generate this site. At the same time, I also started exploring the use of similar tools to check (or lint, if you will) my writing itself. In this post, I’ll share with you how I started using Vale to perform some checks against my writing.
More details on my use of `markdownlint-cli` are available here for reference. `markdownlint-cli` checks the structure and formatting of Markdown files, but it doesn’t do any “higher level” checks regarding the writing itself. For that, I needed to add a second tool, and I opted to use Vale, an open source tool specifically aimed at “linting your prose.” Among other things, what I liked about Vale was that it offers integration with graphical editors like Visual Studio Code (what I use when I’m on macOS) and Sublime Text (what I use when I’m on Linux), but it can also be run directly from the command line. And, if you are so inclined, there’s a GitHub Action for Vale, too. Nice!
The first step is Continue reading
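For orientation, a typical Vale setup (not necessarily the exact steps described here) involves creating a `.vale.ini` file that defines a `StylesPath` and the styles or packages to apply, running `vale sync` to download those packages, and then pointing `vale` at the files to be checked.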
Welcome to Technology Short Take #180! It’s hard to believe that July is almost over, and that 2024 is flying past us. It’s probably time that you, my readers, took some time to slow down and read more technical blogs. To help with that, I just happen to have a little collection of links to share. Enjoy!
Although I’m sure that I’d seen or read about Git commit templates previously, the idea of using a Git commit template was brought to the forefront recently with the release of version 11 of Tower for Mac. When I’m using macOS, Tower is my graphical Git client of choice. However, most of my Git operations are via the terminal, and I’m not always using macOS (I also spend a fair amount of time using Linux). For that reason, I wanted to implement a more “platform-neutral” solution to Git commit templates. In this post, I’ll share with you what I learned about using a Git commit template.
Let me start with explaining why I wanted to start using a Git commit template. Some time ago, I was exposed to the Conventional Commits specification, and decided I wanted to adopt this for my personal projects. As with starting any new habit, it can be challenging to remember, and I found myself committing changes to personal repositories without following the specification. So when I was reminded of Git commit template functionality via its inclusion in Tower, I immediately realized this would be a great way to help build new habits (and Continue reading
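For reference, the underlying mechanism is small: Git reads the template from the `commit.template` setting, so something like `git config --global commit.template ~/.gitmessage` (the file path is just an example) causes `git commit` to pre-populate the commit message editor with the contents of that file.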
Welcome to Technology Short Take #179! I’m back with another set of links to articles on various data center- and IT-related topics. In the interest of full transparency, I’d like to give credit to Russ White for his “Weekend Reads” series of posts, which are similar in nature to my Technology Short Takes. If you aren’t reading Russ’ “Weekend Reads” posts, you’re missing out on a good source of useful information. Several of the links included below are taken from recent posts by Russ. Thanks, Russ—and to all the other content creators and content curators referenced here—for your great work! Now, on to the content.
`netlab` just reminds me that I really should spend some quality time with it. Need more than 24 hours in a day…

Welcome to Technology Short Take #178! This one is notably shorter than many of the Technology Short Takes I publish; I’m still trying to fine-tune my collection of RSS feeds (such a useful technology that seems to have fallen out of favor), removing inactive feeds and looking for new feeds to replace them. Regardless, I have managed to collect a few links for your reading pleasure this weekend. Enjoy!
While performing some testing with CiliumNetworkPolicies, I came across a behavior that was unintuitive and unexpected to me. The behavior centers around how an endpoint selector behaves in a CiliumNetworkPolicy when Kubernetes namespaces are involved. (If you didn’t understand a bit of what I just said, I’ll provide some additional explanation shortly—stay with me!) After chatting through the behavior with a few folks, I realized the behavior is essentially “correct” and expected. However, if I was confused by the behavior then there’s a good chance others might be confused as well, so I thought a quick blog post might be a good idea. Keep reading to get more details on the interaction between endpoint selectors and Kubernetes namespaces in CiliumNetworkPolicies.
Before digging into the behavior, let me first provide some definitions or explanations of the various things involved here:
By Matthew Jones, Chief Architect, Ansible Automation at Red Hat
Back in 2013, a small team of engineers worked for over a year to make the first commercial release of Ansible Tower (before we expanded and evolved to Ansible Automation Platform) and during that time we put down the foundation of an application that I’m immensely proud of.
We, the original architects of Tower, were trying to find the best way to create a system that would allow running Ansible at scale for hundreds of thousands of servers. We wanted a way not just to manage those servers but also to store the results of that automation and provide auditability and traceability. It needed to make Ansible functional for large teams, and it succeeded.
Today, we’re not just talking about hundreds of thousands. We’re thinking in the millions and tens of millions; we’re managing automation for some of the largest IT organizations in the world. And we’re not just managing servers. In the intervening years we’ve been automating containers, cloud platforms, network devices, storage, IoT devices, and PLCs (among other things). One of the main challenges that we’re facing is that some of the architectural decisions we made Continue reading
I recently had a need to get Barrier—an open source project aimed at enabling mouse/keyboard sharing across multiple computers, aka a “software KVM”—running between Arch Linux and Ubuntu 22.04. Unfortunately, the process for getting Barrier working isn’t as intuitive as it should be, so I’m posting this information in the hopes it will prove useful to others who find themselves in a similar situation. Below, I’ll share how I got Barrier working between an Arch Linux system and an Ubuntu system.
Although this post specifically mentions Arch Linux and Ubuntu, the process for getting Barrier running should be pretty similar (if not identical) for other Linux distributions and for macOS. I don’t have any Windows-based systems on which to test these instructions, but they should be adaptable to Windows as well. Note that there may be slight differences in the flags for the commands listed here when they are run on platforms other than Linux.
Both Arch and Ubuntu 22.04 have the latest release of Barrier, version 2.4.0, available in their repositories, so the installation is straightforward.
For Arch, just install with `pacman`:

```sh
pacman -S barrier
```
There’s also a “barrier-headless” package in Continue reading
Welcome to Technology Short Take #177! Wow, is it the middle of May already? The year seems to be flying by—much in the same way that all these technical articles keep flying by my Inbox, occasionally getting caught and included here! In this Technology Short Take, I have links on things ranging from physical network designs to running retro operating systems as virtual machines. Surely there will be something useful in here for you!
As a sort of follow-up to my previous post on using the AWS CLI to track the specific Elastic Network Interfaces (ENIs) used by Amazon Elastic Kubernetes Service (EKS) cluster nodes, this post focuses on the EC2 instances themselves. I feel this is less of a “problem” than tracking ENIs, but I wanted to share this information nevertheless. In this post, I’ll show you which AWS CLI command to use to list all the EC2 instances associated with a particular EKS cluster.
If you read the previous post on tracking ENIs used by EKS, you might think that you could use a very similar AWS CLI command (`aws ec2 describe-instances` instead of `aws ec2 describe-network-interfaces`) to track the EC2 instances in a cluster—and you’d be mostly correct. Like the ENIs, EKS does add a cluster-specific tag to all EC2 instances in the cluster. However, just to make life interesting, the tag used for EC2 instances is not the same as the tag used for ENIs. (If someone at AWS knows of a technical reason why these tags are different, I’d love to hear it.)
Instead of using the `cluster.k8s.amazonaws.com/name` tag that is used Continue reading
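Whatever the tag turns out to be, the AWS CLI pattern is the same as with ENIs: filter on it, e.g. `aws ec2 describe-instances --filters "Name=tag:<key>,Values=<cluster-name>"`, substituting the EKS-specific tag key (instances launched by EKS managed node groups carry an `eks:cluster-name` tag, for example).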
I’ve recently been spinning up lots of Amazon Elastic Kubernetes Service (EKS) clusters (using Pulumi, of course) in order to test various Cilium configurations. Along the way, I’ve wanted to verify the association and configuration of Elastic Network Interfaces (ENIs) being used by the EKS cluster. In this post, I’ll share a couple of AWS CLI commands that will help you track the ENIs used by an EKS cluster.
When I first set out to find the easiest way to track the ENIs used by the nodes in an EKS cluster, I thought that AWS resource tags might be the key. I was right—but not in the way I expected. In the Pulumi program (written in Go) that I use to create EKS clusters, I made sure to tag all the resources.
For example, when defining the EKS cluster itself I assigned tags:
```go
eksCluster, err := eks.NewCluster(ctx, "eks-cluster", &eks.ClusterArgs{
	Name: pulumi.Sprintf("%s-test", regionNames[awsRegion]),
	// Some code omitted here for brevity
	Tags: pulumi.StringMap{
		"Name":  pulumi.Sprintf("%s-test", regionNames[awsRegion]),
		"owner": pulumi.String(ownerTag),
```

Continue reading
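Given tags like the ones above, one approach is to filter on the `cluster.k8s.amazonaws.com/name` tag that the AWS VPC CNI applies to the ENIs it manages. As a rough sketch (using the AWS SDK for Go v2 rather than the AWS CLI, with a placeholder cluster name), listing those ENIs might look like this:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := ec2.NewFromConfig(cfg)

	// Filter on the tag applied to the ENIs associated with the cluster; the
	// cluster name "my-eks-cluster" is a placeholder.
	out, err := client.DescribeNetworkInterfaces(context.TODO(), &ec2.DescribeNetworkInterfacesInput{
		Filters: []types.Filter{
			{
				Name:   aws.String("tag:cluster.k8s.amazonaws.com/name"),
				Values: []string{"my-eks-cluster"},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, eni := range out.NetworkInterfaces {
		fmt.Println(aws.ToString(eni.NetworkInterfaceId), aws.ToString(eni.PrivateIpAddress))
	}
}
```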