I was first introduced to SOPS at a platform engineering event hosted in Denver last year. SOPS, which is an acronym for Secrets OPerationS, describes itself as “an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault, age, and PGP” (taken directly from the project’s GitHub repository). In this post, I’ll explore using Pulumi with SOPS—and I’ll also touch upon whether this combination of tools offers value to users or not.
SOPS is a command-line tool written primarily in Go; users can download binaries from the SOPS GitHub repository or, for Linux users, install it via their distribution’s package manager. Depending upon the encryption mechanism you’re going to use, you may also need to install other tools (for example, if you’re going to use AWS KMS then you’ll need the appropriate AWS tools installed and configured). I chose to use Rage, a Rust-based implementation of Age, as my encryption mechanism.
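For reference, here’s a minimal sketch of what the age/Rage workflow looks like; the key file location, the recipient placeholder, and the file names are just examples I picked for illustration, not a prescription.

```bash
# Generate an age keypair with rage-keygen; the public key ("age1...") is printed
# to stdout and also recorded as a comment in the key file.
rage-keygen -o key.txt

# Encrypt a YAML file with sops, naming the age public key as the recipient.
# sops encrypts the values while leaving the keys readable.
sops --encrypt --age <age-public-key> secrets.yaml > secrets.enc.yaml

# Point sops at the private key to decrypt (or edit in place with `sops secrets.enc.yaml`).
export SOPS_AGE_KEY_FILE=key.txt
sops --decrypt secrets.enc.yaml
```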
One of the things I find really interesting about SOPS is the fact that it supports data formats like YAML and JSON. What does this mean, exactly? When you Continue reading
For those that aren’t aware, Talos Linux is a purpose-built Linux distribution designed for running Kubernetes. Bootstrapping a Talos Linux cluster is normally done via the Talos API, but this requires direct network access to the Talos Linux nodes. What happens if you don’t have direct network access to the nodes? In this post, I’ll share with you how to bootstrap a Talos Linux cluster over SSH.
In all honesty, if you can establish direct network access to the Talos Linux nodes then that’s the recommended approach. Consider what I’m going to share here as a workaround—a backup plan, if you will—in the event you can’t establish direct network access. I figured out how to bootstrap a Talos Linux cluster over SSH only because I was not able to establish direct network access.
Before getting into the details, I think it’s useful to point out that I’m talking about using SSH port forwarding (SSH tunneling) here, even though Talos Linux itself doesn’t support SSH (as in, you can’t SSH into a Talos Linux system). Since the tunnel can’t terminate on the nodes themselves, you could do the same thing I did and use an SSH bastion host with network access to the nodes to handle the port forwarding.
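To give a rough idea of the shape of this, here’s a sketch; the addresses and file names are hypothetical, and it assumes the Talos API is listening on its default port (50000).

```bash
# Forward a local port through the bastion host to the Talos API on the target node.
ssh -N -L 50000:10.0.0.10:50000 user@bastion.example.com &

# With the tunnel up, talosctl can be pointed at the local end of the tunnel.
# Applying a machine config to a node still in maintenance mode might look like this:
talosctl apply-config --insecure --nodes 127.0.0.1 --file controlplane.yaml

# Bootstrapping works the same way once the node has its configuration (depending on
# the machine config, 127.0.0.1 may need to be included in the API certificate SANs):
talosctl bootstrap --nodes 127.0.0.1 --endpoints 127.0.0.1 --talosconfig ./talosconfig
```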
There are two parts to bootstrapping Continue reading
Back in March of this year, I talked about how I started using markdownlint-cli to perform linting against the Markdown source files that are used by Hugo to generate this site. At the same time, I also started exploring the use of similar tools to check (or lint, if you will) my writing itself. In this post, I’ll share with you how I started using Vale to perform some checks against my writing.
More details on my use of markdownlint-cli are available here for reference. markdownlint-cli checks the structure and formatting of Markdown files, but it doesn’t do any “higher level” checks regarding the writing itself. For that, I needed to add a second tool, and I opted to use Vale, an open source tool specifically aimed at “linting your prose.” Among other things, what I liked about Vale was that it offers integration with graphical editors like Visual Studio Code (what I use when I’m on macOS) and Sublime Text (what I use when I’m on Linux), but it can also be run directly from the command line. And, if you are so inclined, there’s a GitHub Action for Vale, too. Nice!
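As a rough illustration, getting Vale running from the command line might look something like this; the style package and the content path are assumptions for the example, not necessarily what this site uses.

```bash
# Install Vale (Homebrew on macOS; Linux users can grab a package or a release binary).
brew install vale

# A minimal .vale.ini: where downloaded styles live, which package to pull in,
# and which styles apply to Markdown files.
cat > .vale.ini <<'EOF'
StylesPath = .vale/styles
MinAlertLevel = suggestion
Packages = proselint

[*.md]
BasedOnStyles = Vale, proselint
EOF

# Download the package(s) named in the config, then lint the Markdown sources.
vale sync
vale content/
```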
The first step is Continue reading
Welcome to Technology Short Take #180! It’s hard to believe that July is almost over, and that 2024 is flying past us. It’s probably time that you, my readers, took some time to slow down and read more technical blogs. To help with that, I just happen to have a little collection of links to share. Enjoy!
Although I’m sure that I’d seen or read about Git commit templates previously, the idea of using a Git commit template was brought to the forefront recently with the release of version 11 of Tower for Mac. When I’m using macOS, Tower is my graphical Git client of choice. However, most of my Git operations are via the terminal, and I’m not always using macOS (I also spend a fair amount of time using Linux). For that reason, I wanted to implement a more “platform-neutral” solution to Git commit templates. In this post, I’ll share with you what I learned about using a Git commit template.
Let me start with explaining why I wanted to start using a Git commit template. Some time ago, I was exposed to the Conventional Commits specification, and decided I wanted to adopt this for my personal projects. As with starting any new habit, it can be challenging to remember, and I found myself committing changes to personal repositories without following the specification. So when I was reminded of Git commit template functionality via its inclusion in Tower, I immediately realized this would be a great way to help build new habits (and Continue reading
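To make that concrete, here’s a sketch of the platform-neutral setup I’m describing; the template text below is just an example skeleton of the Conventional Commits format, not necessarily the exact template I settled on.

```bash
# Create a commit message template. Lines starting with '#' are stripped from the
# final commit message, so they work as an in-editor reminder of the format.
cat > ~/.gitmessage <<'EOF'
# <type>(<optional scope>): <short summary>
#
# <body: what changed and why>
#
# <footer: BREAKING CHANGE, issue references, etc.>
# Common types: feat, fix, docs, style, refactor, perf, test, chore
EOF

# Tell Git to pre-populate the editor with this template for every commit.
git config --global commit.template ~/.gitmessage
```

Because the template lives in the global Git configuration, the same setup works from the terminal on both macOS and Linux, independent of any particular Git client.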
Welcome to Technology Short Take #179! I’m back with another set of links to articles on various data center- and IT-related topics. In the interest of full transparency, I’d like to give credit to Russ White for his “Weekend Reads” series of posts, which are similar in nature to my Technology Short Takes. If you aren’t reading Russ’ “Weekend Reads” posts, you’re missing out on a good source of useful information. Several of the links included below are taken from recent posts by Russ. Thanks, Russ—and to all the other content creators and content curators referenced here—for your great work! Now, on to the content.
netlab just reminds me that I really should spend some quality time with it. Need more than 24 hours in a day…

Welcome to Technology Short Take #178! This one is notably shorter than many of the Technology Short Takes I publish; I’m still trying to fine-tune my collection of RSS feeds (such a useful technology that seems to have fallen out of favor), removing inactive feeds and looking for new feeds to replace them. Regardless, I have managed to collect a few links for your reading pleasure this weekend. Enjoy!
While performing some testing with CiliumNetworkPolicies, I came across a behavior that was unintuitive and unexpected to me. The behavior centers around how an endpoint selector behaves in a CiliumNetworkPolicy when Kubernetes namespaces are involved. (If you didn’t understand a bit of what I just said, I’ll provide some additional explanation shortly—stay with me!) After chatting through the behavior with a few folks, I realized it is essentially “correct” and expected. However, if I was confused by it, then there’s a good chance others might be confused as well, so I thought a quick blog post might be a good idea. Keep reading to get more details on the interaction between endpoint selectors and Kubernetes namespaces in CiliumNetworkPolicies.
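To make the discussion a bit more concrete before the definitions, here’s a hypothetical policy of the kind involved; the namespaces and labels are made up for illustration.

```bash
# A CiliumNetworkPolicy is a namespaced resource; its endpointSelector only matches
# endpoints in the policy's own namespace. Selecting peers in a *different* namespace
# requires matching the namespace label explicitly, as in the fromEndpoints block below.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: backend
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
    - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: frontend
            app: web
EOF
```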
Before digging into the behavior, let me first provide some definitions or explanations of the various things involved here:
By Matthew Jones, Chief Architect, Ansible Automation at Red Hat
Back in 2013, a small team of engineers worked for over a year to make the first commercial release of Ansible Tower (before we expanded and evolved to Ansible Automation Platform) and during that time we put down the foundation of an application that I’m immensely proud of.
We, the original architects of Tower, were trying to find the best way to create a system that would allow running Ansible at scale for hundreds of thousands of servers. We wanted there to be a way to not just manage those servers but also store the results of that automation and provide auditability and traceability. It needed to make Ansible functional for large teams, and it succeeded.
Today, we’re not just talking about hundreds of thousands. We’re thinking in the millions and tens of millions; we’re managing automation for some of the largest IT organizations in the world. And we’re not just managing servers. In the intervening years we’ve been automating containers, cloud platforms, network devices, storage, IoT devices and PLCs (among other things). One of the main challenges that we’re facing is that some of the architectural decisions we made Continue reading
I recently had a need to get Barrier—an open source project aimed at enabling mouse/keyboard sharing across multiple computers, aka a “software KVM”—running between Arch Linux and Ubuntu 22.04. Unfortunately, the process for getting Barrier working isn’t as intuitive as it should be, so I’m posting this information in the hopes it will prove useful to others who find themselves in a similar situation. Below, I’ll share how I got Barrier working between an Arch Linux system and an Ubuntu system.
Although this post specifically mentions Arch Linux and Ubuntu, the process for getting Barrier running should be pretty similar (if not identical) for other Linux distributions and for macOS. I don’t have any Windows-based systems on which to test these instructions, but they should be adaptable to Windows as well. Note that there may be slight differences in the flags for the commands listed here when they are run on platforms other than Linux.
Both Arch and Ubuntu 22.04 have the latest release of Barrier, version 2.4.0, available in their repositories, so the installation is straightforward.
For Arch, just install with pacman:
```bash
pacman -S barrier
```
There’s also a “barrier-headless” package in Continue reading
Welcome to Technology Short Take #177! Wow, is it the middle of May already? The year seems to be flying by—much in the same way that all these technical articles keep flying by my Inbox, occasionally getting caught and included here! In this Technology Short Take, I have links on things ranging from physical network designs to running retro operating systems as virtual machines. Surely there will be something useful in here for you!
As a sort of follow-up to my previous post on using the AWS CLI to track the specific Elastic Network Interfaces (ENIs) used by Amazon Elastic Kubernetes Service (EKS) cluster nodes, this post focuses on the EC2 instances themselves. I feel this is less of a “problem” than tracking ENIs, but I wanted to share this information nevertheless. In this post, I’ll show you which AWS CLI command to use to list all the EC2 instances associated with a particular EKS cluster.
If you read the previous post on tracking ENIs used by EKS, you might think that you could use a very similar AWS CLI command (aws ec2 describe-instances instead of aws ec2 describe-network-interfaces) to track the EC2 instances in a cluster—and you’d be mostly correct. Like the ENIs, EKS does add a cluster-specific tag to all EC2 instances in the cluster. However, just to make life interesting, the tag used for EC2 instances is not the same as the tag used for ENIs. (If someone at AWS knows of a technical reason why these tags are different, I’d love to hear it.)
Instead of using the cluster.k8s.amazonaws.com/name tag that is used Continue reading
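The excerpt above cuts off before naming the instance-side tag, so I won’t repeat it here; the general shape of the command, though, is a tag filter on aws ec2 describe-instances, with the tag key left as a placeholder in this sketch.

```bash
# List EC2 instances carrying the cluster-identifying tag; <instance-tag-key> and
# my-cluster are placeholders for the tag key and cluster name discussed in the post.
aws ec2 describe-instances \
  --filters "Name=tag:<instance-tag-key>,Values=my-cluster" \
  --query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,AZ:Placement.AvailabilityZone}' \
  --output table
```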
I’ve recently been spinning up lots of Amazon Elastic Kubernetes Service (EKS) clusters (using Pulumi, of course) in order to test various Cilium configurations. Along the way, I’ve wanted to verify the association and configuration of Elastic Network Interfaces (ENIs) being used by the EKS cluster. In this post, I’ll share a couple of AWS CLI commands that will help you track the ENIs used by an EKS cluster.
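Skipping ahead slightly, the end result is a tag-based filter; here’s a sketch, assuming the ENIs carry the cluster.k8s.amazonaws.com/name tag mentioned in the related post above, and using a made-up cluster name.

```bash
# List the ENIs tagged with a given EKS cluster's name.
aws ec2 describe-network-interfaces \
  --filters "Name=tag:cluster.k8s.amazonaws.com/name,Values=my-cluster" \
  --query 'NetworkInterfaces[].{ID:NetworkInterfaceId,IP:PrivateIpAddress,Subnet:SubnetId}' \
  --output table
```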
When I first set out to find the easiest way to track the ENIs used by the nodes in an EKS cluster, I thought that AWS resource tags might be the key. I was right—but not in the way I expected. In the Pulumi program (written in Go) that I use to create EKS clusters, I made sure to tag all the resources.
For example, when defining the EKS cluster itself I assigned tags:
```go
eksCluster, err := eks.NewCluster(ctx, "eks-cluster", &eks.ClusterArgs{
    Name: pulumi.Sprintf("%s-test", regionNames[awsRegion]),
    // Some code omitted here for brevity
    Tags: pulumi.StringMap{
        "Name":  pulumi.Sprintf("%s-test", regionNames[awsRegion]),
        "owner": pulumi.String(ownerTag),
```

Continue reading

Welcome to Technology Short Take #176! This Tech Short Take is a bit heavy on security-related links, but there’s still some additional content in a number of other areas, so you should be able to find something useful—or at least interesting—in here. Thanks for reading!
It’s no secret I’m a fan of Markdown. The earliest mention of Markdown on this site is all the way back in 2011, and it was only a couple years after that when I migrated this site from WordPress to Markdown. Back then, the site was generated from Markdown using Jekyll (via GitHub Pages); today it is generated from Markdown sources using Hugo. One thing I’ve not done, though, is perform linting (checking for errors or potential errors) of the Markdown source files. That’s all about to change! In this post, I’ll share with you how I started linting my Markdown files.
To handle the linting, there are (at least) a couple different options: markdownlint-cli and markdownlint-cli2.
Both of these use the same markdownlint library under the hood. They’re both available as a CLI tool or as a Docker container; markdownlint-cli2 is also available as a GitHub Action. In both cases, the CLI tool is installed via npm install (typically globally with --global or -g). The key difference between the two is that markdownlint-cli2 is configuration-driven, whereas markdownlint-cli offers the ability to use either a configuration file or command-line flags. I Continue reading
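For illustration, here’s roughly what getting started with markdownlint-cli2 looks like; the glob pattern and the rule override below are examples I picked, not necessarily this site’s configuration.

```bash
# Install markdownlint-cli2 globally via npm.
npm install --global markdownlint-cli2

# Rule overrides live in .markdownlint-cli2.jsonc at the repository root.
cat > .markdownlint-cli2.jsonc <<'EOF'
{
  "config": {
    // MD013 is the line-length rule
    "MD013": false
  }
}
EOF

# Lint the Markdown sources; quote the glob so markdownlint-cli2 expands it, not the shell.
markdownlint-cli2 "content/**/*.md"
```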
Welcome to Technology Short Take #175! Here’s your weekend reading—a collection of links and articles from around the internet on a variety of data center- and cloud-related topics. I hope you find something useful here!
Welcome to Technology Short Take #174! For your reading pleasure, I’ve collected links on topics ranging from Kubernetes Gateway API to recent AWS attack techniques to some geeky Linux and Git topics. There’s something here for most everyone, I’d say! But enough of my rambling, let’s get on to the good stuff. Enjoy!
While Amazon Web Services has first mover advantage when it comes to building a compute and storage cloud, it would be a mistake to believe that the division of the world’s largest online retailer can rest on its laurels. …
A Tale Of Three Cloud Builders, All Seeking Dominance was written by Timothy Prickett Morgan at The Next Platform.
For folks using AWS in their day-to-day jobs, it comes as no secret that AWS’ Managed NAT Gateway—responsible for providing outbound Internet connectivity to otherwise private subnets—is an expensive proposition. While the primary concern for large organizations is the data processing fee, the concern for smaller organizations or folks like me who run a cloud-based lab instead of a hardware-based home lab is the per-hour cost. In this post, I’ll show you how to use Pulumi to deploy a NAT instance for outbound Internet connectivity instead of a Managed NAT Gateway.
For a bit more about why Managed NAT Gateways aren’t ideal for larger organizations, I’d recommend this article by Corey Quinn. For smaller organizations or cloud-based labs, data processing fees probably aren’t the main concern (although I could be wrong); it would be the ~$32/mo per Managed NAT Gateway. Since many tools configure a Managed NAT Gateway per availability zone, now you’re talking more like $96/mo—and you haven’t even spun up any real workloads yet! Running your own NAT instance can dramatically reduce but not eliminate this expense.
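Regardless of whether the plumbing is done with Pulumi or by hand, a NAT instance comes down to a couple of AWS-side pieces; here’s a rough sketch of those pieces in AWS CLI terms, with placeholder IDs, separate from the actual Pulumi code the post walks through.

```bash
# A NAT instance forwards traffic that isn't addressed to it, so the EC2
# source/destination check on the instance has to be disabled.
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check

# The private subnets' route table then sends Internet-bound traffic to the
# instance instead of to a Managed NAT Gateway.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --instance-id i-0123456789abcdef0

# The instance's OS also needs IP forwarding and NAT (masquerading) enabled,
# or an AMI that ships with that already configured.
```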
Now that I’ve established why running a NAT instance can be beneficial, let’s review what you’ll need to have installed in Continue reading
Some patterns are very hard to break. From the very early days of the systems business as we know it, which started six decades ago, the fourth quarter of the calendar year has been the money maker for companies like IBM, and the second quarter has been a relatively big one for those who could not get their budgets together before the end of the prior year. …
Big Blue Bucks The Datacenter Server Recession was written by Timothy Prickett Morgan at The Next Platform.