A few weeks ago, I published a post on HA Kubernetes clusters on AWS with Cluster API v1alpha2. That post was itself a follow-up to a post I wrote in September 2019 on setting up HA clusters using Cluster API v1alpha1. In this post, I’ll follow up on both of those posts with a look at setting up HA Kubernetes clusters on AWS using Cluster API v1alpha3. Although this post is similar to the v1alpha2 post, be aware there are some notable changes in v1alpha3, particularly with regard to the control plane.
If you’re not yet familiar with Cluster API, take a look at this high-level overview I wrote in August 2019. That post will provide an explanation of the project’s goals as well as provide some terminology.
In this post, I won’t discuss the process of establishing a management cluster; I’m assuming your Cluster API management cluster is already up and running. (I do have some articles in the content pipeline that discuss creating a management cluster.) Instead, this post will focus on creating a highly-available workload cluster. By “highly available,” I mean a cluster with multiple control plane nodes that are distributed across multiple availability zones (AZs). Continue reading
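As a rough sketch of where this is headed: with v1alpha3-era tooling, generating the manifests for a workload cluster with multiple control plane nodes typically starts with `clusterctl config cluster` (the cluster name, versions, and machine counts below are placeholders, not values from this post):

```shell
# Generate manifests for a workload cluster with three control plane
# nodes; the name, version, and counts here are illustrative only
export AWS_REGION=us-west-2
clusterctl config cluster workload-cluster-1 \
  --kubernetes-version v1.17.3 \
  --control-plane-machine-count 3 \
  --worker-machine-count 3 > workload-cluster-1.yaml
```

One of the notable v1alpha3 changes alluded to above is that the new KubeadmControlPlane object takes over managing the control plane Machines, including spreading them across the available failure domains (AZs).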
Welcome to Technology Short Take #125, where I have a collection of articles about various data center and cloud technologies collected from around the Internet. I hope I have managed to find a few useful things for you! (If not, contact me on Twitter and tell me how I can make this more helpful for you.)
Nothing this time around. I’ll try hard to find something useful for the next Technology Short Take.
I’ll admit right up front that this post is more “science experiment” than practical, everyday use case. It all started when I was trying some Cluster API-related stuff that leveraged KinD (Kubernetes in Docker). Obviously, given the name, KinD relies on Docker, and when running Docker on macOS you generally would use Docker Desktop. At the time, though, I was using Docker Machine, and as it turns out KinD doesn’t like Docker Machine. In this post, I’ll show you how to make KinD work with Docker Machine.
By the way, it’s worth noting that, per the KinD maintainers, this isn’t a tested configuration. Proceed at your own risk, and know that while this may work for some use cases it won’t necessarily work for all use cases.
These instructions assume you’ve already installed both KinD and Docker Machine, along with an associated virtualization solution. I’ll be using VirtualBox, but this should be largely the same for VMware Fusion or Parallels (or even HyperKit, if you somehow manage to get that working). I’m also assuming that you have `jq` installed; if not, get it here.
Follow the steps below to make Continue reading
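To give a sense of the shape of the workaround (the machine name `default`, the cluster name, and the container/port details below are assumptions for illustration, not the exact steps from the full post):

```shell
# Point the Docker CLI (and therefore KinD) at the Docker Machine VM
eval "$(docker-machine env default)"

# Create the KinD cluster; KinD writes a kubeconfig pointing at
# 127.0.0.1, which is wrong when the Docker daemon runs inside a VM
kind create cluster

# Find the VM's IP and the host port mapped to the API server, using
# jq to dig the port out of the container's metadata, then point the
# kubeconfig at the VM instead of localhost
VM_IP=$(docker-machine ip default)
API_PORT=$(docker inspect kind-control-plane | \
  jq -r '.[0].NetworkSettings.Ports["6443/tcp"][0].HostPort')
kubectl config set-cluster kind-kind \
  --server="https://${VM_IP}:${API_PORT}" \
  --insecure-skip-tls-verify=true
```

Skipping TLS verification is part of what makes this a science experiment rather than a supported configuration; the API server's certificate isn't valid for the VM's IP.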
A few days ago I wrote an article on configuring `kustomize` transformers for use with Cluster API (CAPI), in which I explored how users could configure the `kustomize` transformers—the parts of `kustomize` that actually modify objects—to be a bit more CAPI-aware. By doing so, using `kustomize` with CAPI manifests becomes much easier. Since that post, the CAPI team released v1alpha3. In working with v1alpha3, I realized my `kustomize` transformer configurations were incorrect. In this post, I will share CAPI v1alpha3 configurations for `kustomize` transformers.
In the previous post, I referenced changes to both `namereference.yaml` (to configure the nameReference transformer) and `commonlabels.yaml` (to configure the commonLabels transformer). CAPI v1alpha3 has changed the default way labels are used with MachineDeployments, so for v1alpha3 you may be able to get away with only changes to `namereference.yaml`. (If you know you are going to want/need additional labels on your MachineDeployment, then plan on changes to `commonlabels.yaml` as well.)

Here are the CAPI v1alpha3 changes needed to `namereference.yaml`:
```yaml
- kind: Cluster
  group: cluster.x-k8s.io
  version: v1alpha3
  fieldSpecs:
  - path: spec/clusterName
    kind: MachineDeployment
  - path: spec/template/spec/clusterName
    kind: MachineDeployment
- kind: AWSCluster
  group: infrastructure.cluster.x-k8s.io
```

Continue reading
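Once modified, these transformer configuration files get wired into a `kustomization.yaml` via the `configurations` field, something like this (the resource file name and prefix here are hypothetical):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: blue-
resources:
- base-cluster.yaml
configurations:
- namereference.yaml
- commonlabels.yaml
```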
In November 2019 I wrote an article on using `kustomize` with Cluster API (CAPI) manifests. The idea was to use `kustomize` to simplify the management of CAPI manifests for clusters that are generally similar but have minor differences (like the AWS region in which they are running, or the number of Machines in a MachineDeployment). In this post, I’d like to show a slightly different way of using `kustomize` with Cluster API that involves configuring the `kustomize` transformers.

If you aren’t familiar with `kustomize`, I’d recommend having a look at the `kustomize` web site and/or reading my introductory post. A transformer in `kustomize` is the part that is responsible for modifying a resource, or gathering information about a resource, over the course of a `kustomize build` process. This page has some useful terminology definitions.

Looking back at the earlier article on using `kustomize` with CAPI, you can see that—due to the links/references between objects—modifying the name of the AWSCluster object also means modifying the reference to the AWSCluster object from the Cluster object. The same goes for the KubeadmConfigTemplate and AWSMachineTemplate objects referenced from a MachineDeployment. Out of the box, the `namePrefix` transformer will change the names of these Continue reading
After attempting (and failing) to get Sublime Text to have some of the same “intelligence” that Visual Studio Code has with certain languages, I finally stopped trying to make Sublime Text work for me and just went back to using Code full-time. As I mentioned in this earlier post, now that I’ve finally solved how Code handles wrapping text in brackets and braces and the like I’m much happier. (It’s the small things in life.) Now I’ve moved on to tackling how to update Code’s Kubernetes API awareness.
Code’s awareness of the Kubernetes API comes via the YAML extension by Red Hat, and uses the `yaml-language-server` project found in this GitHub repository. (This is the same language server I was trying to get working with Sublime Text via LSP.) Note that I tested this procedure with version 0.7.2 of the extension; files may be in different locations or have different contents in other versions.
The information in this post is based heavily on this article by Josh Rosso, who some of you may know from his guest appearances on TGIK. I’ve adapted the information in Josh’s post to apply to the macOS version of Continue reading
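For context on where this is headed: the extension's Kubernetes awareness is ultimately switched on by a setting along these lines in Code's `settings.json` (the glob pattern is just an example; adjust it to match your manifests):

```json
{
  "yaml.schemas": {
    "kubernetes": "*.yaml"
  }
}
```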
Right at the end of 2019 I announced that in early 2020 I was temporarily relocating to Tokyo, Japan, for a six month work assignment. It’s now March, and I’m still in Colorado. So what’s up with that Tokyo assignment, anyway? Since I’ve had several folks ask, I figured it’s probably best to post something here.
First, based on all the information available to me, it looks like Tokyo is still going to happen. It will just be delayed a bit—although the exact extent of the delay is still unclear.
Why the delays? A few things have affected the timing:
It looks like #1 and #2 have been sorted, but it’ll still Continue reading
There are two things I’ve missed since I switched from Sublime Text to Visual Studio Code (I switched in 2018). First, the speed. Sublime Text is so much faster than Visual Studio Code; it’s insane. But, the team behind Visual Studio Code is working hard to improve performance, so I’ve mostly resigned myself to it. The second thing, though, was the behavior of wrapping selected text in brackets (or parentheses, curly braces, quotes, etc.). That part has annoyed me for two years, until this past weekend I’d finally had enough. Here’s how I modified Visual Studio Code’s bracketing behaviors.
Before I get into the changes, allow me to explain what I mean here. With Sublime Text, I used a package (similar to an extension in the VS Code world) called Bracketeer, and one of the things it allowed me to do was highlight a section of text, wrap it in brackets/braces/parentheses, and then—this part is the missing bit—it would deselect the selected text and place my insertion point outside the closing character. Now, this doesn’t sound like a big deal, but let me assure you that it is. I didn’t have to use any extra keystrokes to keep Continue reading
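As a hint of the approach (the key chord below is arbitrary), Code can replicate this behavior with a custom entry in `keybindings.json` that wraps the selection via a snippet and then uses the snippet's final tab stop to drop the cursor after the closing character:

```json
{
  "key": "ctrl+shift+9",
  "command": "editor.action.insertSnippet",
  "when": "editorHasSelection",
  "args": {
    "snippet": "(${TM_SELECTED_TEXT})$0"
  }
}
```

The `$0` after the closing parenthesis is the piece that deselects the text and places the insertion point outside the wrapped region.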
About six months ago, I wrote a post on how to use Cluster API (specifically, the Cluster API Provider for AWS) to establish highly available Kubernetes clusters on AWS. That post was written with Cluster API (CAPI) v1alpha1 in mind. Although the concepts I presented there worked with v1alpha2 (released shortly after that post was written), I thought it might be helpful to revisit the topic with CAPI v1alpha2 specifically in mind. So, with that, here’s how to establish highly available Kubernetes clusters on AWS using CAPI v1alpha2.
By the way, it’s worth pointing out that CAPI is a fast-moving project, and release candidate versions of CAPI v1alpha3 are already available for some providers. Keep that in mind if you decide to start working with CAPI. I will write an updated version of this post for v1alpha3 once it has been out for a little while.
To be sure we’re all speaking the same language, here are some terms/acronyms that I’ll use in this post:
Welcome to Technology Short Take #124! It seems like the natural progression of the Tech Short Takes is moving toward monthly articles, since it’s been about a month since my last one. In any case, here’s hoping that I’ve found something useful for you. Enjoy! (And yes, normally I’d publish this on a Friday, but I messed up and forgot. So, I decided to publish on Monday instead of waiting for Friday.)
Interacting directly with the AWS APIs—using a tool like Postman (or, since I switched back to macOS, an application named Paw)—is something I’ve been doing off and on for a little while as a way of gaining a slightly deeper understanding of the APIs that tools like Terraform, Pulumi, and others are calling when automating AWS. For a while, I struggled with AWS authentication, and after seeing Mark Brookfield’s post on using Postman to authenticate to AWS I thought it might be helpful to share what I learned as well.
The basis of Mark’s post (I highly encourage you to go read it) is that he was having a hard time getting authenticated to AWS in order to automate the creation of some Route 53 DNS records. The root of his issue, as it turns out, was a mismatch between the region specified in his request and the API endpoint for Route 53. I know this because I ran into the exact same issue (although with a different service).
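To see why the region matters, it helps to know that the region is baked into the Signature Version 4 signing key itself. The derivation below is the standard SigV4 chain (the secret key and date are made up); sign for the wrong region and the resulting signature simply won't match what the endpoint expects:

```shell
# Standard SigV4 signing-key derivation; Route 53 requests must be
# signed as us-east-1 regardless of where you run them from
secret="wJalrXUtnFEMIexamplekey"   # made-up secret access key
date="20200309"                    # made-up request date
region="us-east-1"                 # Route 53's expected signing region
service="route53"

# HMAC-SHA256 helper: reads data on stdin, key via $1, emits hex
hmac_hex() { openssl dgst -sha256 -mac HMAC -macopt "$1" -hex | sed 's/^.* //'; }

k_date=$(printf '%s' "$date" | hmac_hex "key:AWS4${secret}")
k_region=$(printf '%s' "$region" | hmac_hex "hexkey:${k_date}")
k_service=$(printf '%s' "$service" | hmac_hex "hexkey:${k_region}")
k_signing=$(printf '%s' "aws4_request" | hmac_hex "hexkey:${k_service}")
echo "$k_signing"
```

Because the region is folded into `k_region`, a request signed for, say, `us-west-2` produces a completely different signing key than one signed for `us-east-1`, which is exactly the mismatch described above.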
The secret to uncovering this mismatch can be found in this “AWS General Reference” PDF. Specifically with regard to Route 53, check out this quote from the document:
Using Cluster API allows users to create new Kubernetes clusters easily using manifests that define the desired state of the new cluster (also referred to as a workload cluster; see here for more terminology). But how does one go about accessing this new workload cluster once it’s up and running? In this post, I’ll show you how to retrieve the Kubeconfig file for a new workload cluster created by Cluster API.
(By the way, there’s nothing new or revolutionary about the information in this post; I’m simply sharing it to help folks who may be new to Cluster API.)
Once CAPI has created the new workload cluster, it will also create a new Kubeconfig for accessing this cluster. This Kubeconfig will be stored as a Secret (a specific type of Kubernetes object). In order to use this Kubeconfig to access your new workload cluster, you’ll need to retrieve the Secret, decode it, and then use it with `kubectl`.

First, you can see the secret by running `kubectl get secrets` in the namespace where your new workload cluster was created. If the name of your workload cluster was “workload-cluster-1”, then the name of the Secret created by Cluster API would Continue reading
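For the impatient, the retrieval ends up looking something like this (assuming a workload cluster named "workload-cluster-1" in the current namespace; CAPI names the Secret by appending "-kubeconfig" and stores the Kubeconfig under the `value` key):

```shell
# Retrieve the Secret, decode the embedded Kubeconfig, and use it
kubectl get secret workload-cluster-1-kubeconfig \
  --output jsonpath='{.data.value}' | base64 --decode > workload-1.kubeconfig
kubectl --kubeconfig workload-1.kubeconfig get nodes
```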
Credit for this post goes to Christian Del Pino, who created this content and was willing to let me publish it here.
The topic of setting up Kubernetes on AWS (including the use of the AWS cloud provider) is a topic I’ve tackled a few different times here on this site (see here, here, and here for other posts on this subject). In this post, I’ll share information provided to me by a reader, Christian Del Pino, about setting up Kubernetes on AWS with `kubeadm` but using manual certificate distribution (in other words, not allowing `kubeadm` to distribute certificates among multiple control plane nodes). As I pointed out above, all this content came from Christian Del Pino; I’m merely sharing it here with his permission.

This post specifically builds upon this post on setting up an AWS-integrated Kubernetes 1.15 cluster, but is written for Kubernetes 1.17. As mentioned above, this post will also show you how to manually distribute the certificates that `kubeadm` generates to other control plane nodes.

What this post won’t cover are the details on some of the prerequisites for making the AWS cloud provider function properly; specifically, this post won’t discuss:
In this post, I’m going to explore what’s required in order to build an isolated—or Internet-restricted—Kubernetes cluster on AWS with full AWS cloud provider integration. Here the term “isolated” means “no Internet access.” I initially was using the term “air-gapped,” but these aren’t technically air-gapped so I thought isolated (or Internet-restricted) may be a better descriptor. Either way, the intent of this post is to help guide readers through the process of setting up a Kubernetes cluster on AWS—with full AWS cloud provider integration—using systems that have no Internet access.
At a high level, the process looks something like this:
`kubeadm`.

It’s important to note that this guide does not replace my earlier post on setting up an AWS-integrated Kubernetes cluster on AWS (written for 1.15, but valid for 1.16 and 1.17). All the requirements in that post still apply here. If you haven’t read that post or aren’t familiar with the requirements for setting up a Kubernetes cluster with the AWS Continue reading
In this post, I’d like to show readers how to use Pulumi to create a VPC endpoint on AWS. Until recently, I’d heard of VPC endpoints but hadn’t really taken the time to fully understand what they were or how they might be used. That changed when I was presented with a requirement for the AWS EC2 APIs to be available within a VPC that did not have Internet access. As it turns out—and as many readers are probably already aware—this is one of the key use cases for a VPC endpoint (see the VPC endpoint docs). The sample code I’ll share below shows how to programmatically create a VPC endpoint for use in infrastructure-as-code use cases.
For those that aren’t familiar, Pulumi allows users to use one of a number of different general-purpose programming languages and apply them to infrastructure-as-code scenarios. In this example, I’ll be using TypeScript, but Pulumi also supports JavaScript and Python (and Go is in the works). (Side note: I intend to start working with the Go support in Pulumi when it becomes generally available as a means of helping accelerate my own Go learning.)
Here’s a snippet of TypeScript code that Continue reading
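The full snippet is in the post itself; as a rough sketch, creating an interface endpoint for the EC2 API with Pulumi's TypeScript SDK looks something like this (the VPC, subnet, CIDRs, and names below are placeholders):

```typescript
import * as aws from "@pulumi/aws";

// A VPC with no Internet gateway; instances inside it have no
// route to the public EC2 API endpoint
const vpc = new aws.ec2.Vpc("isolated-vpc", { cidrBlock: "10.0.0.0/16" });
const subnet = new aws.ec2.Subnet("isolated-subnet", {
    vpcId: vpc.id,
    cidrBlock: "10.0.1.0/24",
});

// An interface VPC endpoint that makes the EC2 API reachable
// from inside the VPC without Internet access
const ec2Endpoint = new aws.ec2.VpcEndpoint("ec2-endpoint", {
    vpcId: vpc.id,
    serviceName: "com.amazonaws.us-west-2.ec2",
    vpcEndpointType: "Interface",
    privateDnsEnabled: true,
    subnetIds: [subnet.id],
});
```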
I recently had a need to manually load some container images into a Linux system running `containerd` (instead of Docker) as the container runtime. I say “manually load some images” because this system was isolated from the Internet, and so simply running a container and having `containerd` automatically pull the image from an image registry wasn’t going to work. The process for working around the lack of Internet access isn’t difficult, but didn’t seem to be documented anywhere that I could readily find using a general web search. I thought publishing it here may help individuals seeking this information in the future.

For an administrator/operations-minded user, the primary means of interacting with `containerd` is via the `ctr` command-line tool. This tool uses a command syntax very similar to Docker, so users familiar with Docker should be able to be productive with `ctr` pretty easily.

In my specific example, I had a bastion host with Internet access, and a couple of hosts behind the bastion that did not have Internet access. It was the hosts behind the bastion that needed the container images preloaded. So, I used the `ctr` tool to fetch and prepare the images on the bastion, then Continue reading
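The workflow sketched below captures the general idea (the image name, tag, and file paths are illustrative):

```shell
# On the bastion host (has Internet access): pull the image and
# export it to a tarball
ctr image pull docker.io/library/nginx:1.17
ctr image export nginx-1.17.tar docker.io/library/nginx:1.17

# Copy the tarball to the isolated host (scp, rsync, etc.), then
# on the isolated host: import the image into containerd
ctr image import nginx-1.17.tar
ctr image ls
```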
In July of 2018 I talked about Polyglot, a very simple project I’d launched whose only purpose was simply to bolster my software development skills. Work on Polyglot has been sporadic at best, coming in fits and spurts, and thus far focused on building a model for the APIs that would be found in the project. Since I am not a software engineer by training (I have no formal training in software development), all of this is new to me, and I’ve found myself encountering lots of questions about API design along the way. In the interest of helping others who may be in a similar situation, I thought I’d share a bit here.
I initially approached the API in terms of how I would encode (serialize?) data on the wire using JSON (I’d decided on using a RESTful API with JSON over HTTP). Starting with how I anticipated storing the data in the back-end database, I created a representation of how a customer’s information would be encoded (serialized) in JSON:
```json
{
  "customers": [
    {
      "customerID": "5678",
      "streetAddress": "123 Main Street",
      "unitNumber": "Suite 123",
      "city": "Anywhere",
      "state": "CO",
      "postalCode": "80108",
      "telephone": "3035551212",
      "primaryContactFirstName": "Scott",
      "primaryContactLastName": "Lowe"
    }
  ]
}
```

Continue reading
Welcome to Technology Short Take #123, the first of 2020! I hope that everyone had a wonderful holiday season, but now it’s time to jump back into the fray with a collection of technical articles from around the Internet. Here’s hoping that I found something useful for you!
`crypt32.dll`, a core Windows cryptographic component) that was rumored Continue reading

Recently, I’ve been working to remove unnecessary complexity from my work environment. I wouldn’t say that I’m going full-on minimalist (not that there’s anything wrong with that), but I was beginning to feel like maintaining this complexity was taking focus, willpower, and mental capacity away from other, more valuable, efforts. Additionally, given the challenges I know lie ahead of me this year (see here for more information), I suspect I’ll need all the focus, willpower, and mental capacity I can get!
When I say “unnecessary complexity,” by the way, I’m referring to added complexity that doesn’t bring any real or significant benefit. Sometimes there’s no getting around the complexity, but when that complexity doesn’t create any value, it’s unnecessary in my definition.
Primarily, this “reduction in complexity” shows up in three areas:
Readers who have followed me for more than a couple years know that I migrated away from macOS for about 9 months in 2017 (see here for a wrap-up of that effort), then again in 2018 when I joined Heptio (some details are available in this update). Since switching to Fedora on a Lenovo Continue reading
As has been my custom over the last five years or so, in the early part of the year I like to share with my readers a list of personal projects for the upcoming year (here’s the 2019 list). Then, near the end of that same year or very early in the following year, I evaluate how I performed against that list of personal projects (for example, here’s my project report card for 2018). In this post, I’ll continue that pattern with an evaluation of my progress against my 2019 project list.
For reference, here’s the list of projects I set out for myself for 2019 (you can read the associated blog post, if you like, for additional context):
Here’s how I Continue reading