Archive

Category Archives for "blog.scottlowe.org"

Installing MultiMarkdown 6 on Ubuntu 19.10

Markdown is a core part of many of my workflows. For quite a while, I’ve used Fletcher Penney’s MultiMarkdown processor (available on GitHub) on my various systems. Fletcher offers binary builds for Windows and macOS, but no Linux binary. Three years ago, I wrote a post on how to compile MultiMarkdown 6 for a Fedora-based system. In this post, I’ll share how to compile it on an Ubuntu-based system.

Just as in the Fedora post, I used Vagrant with the Libvirt provider to spin up a temporary build VM.
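For reference, a minimal Vagrantfile for such a build VM might look like the sketch below. This is a sketch only; it assumes the vagrant-libvirt plugin is installed, and the "generic/ubuntu1910" box name is an assumption on my part, not necessarily the box used here:

    # Vagrantfile: a minimal sketch for a temporary Ubuntu 19.10 build VM
    # (box name and resource sizing are assumptions)
    Vagrant.configure("2") do |config|
      config.vm.box = "generic/ubuntu1910"
      config.vm.provider :libvirt do |lv|
        lv.memory = 2048
        lv.cpus = 2
      end
    end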

In this clean build VM, I perform the following steps to build a multimarkdown binary:

  1. Install the necessary packages with this command:

    sudo apt install gcc make cmake git build-essential
    
  2. Clone the source code repository:

    git clone https://github.com/fletcher/MultiMarkdown-6
    
  3. Switch into the directory where the repository was cloned and run these commands to build the binary:

    make
    cd build
    make
    
  4. Once the second make command is done, you’re left with a multimarkdown binary. Copy that binary to the host system (scp works fine; see the example below). Use vagrant destroy to clean up the temporary build VM once you’ve copied the binary to your host system.
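As an illustration of that last step, something along these lines works (a sketch; it assumes the VM is named "default" and that the repository was cloned into the vagrant user's home directory):

    # generate an SSH configuration for scp to use, then copy the binary out
    vagrant ssh-config > ssh-config.tmp
    scp -F ssh-config.tmp default:MultiMarkdown-6/build/multimarkdown .
    # tear down the temporary build VM
    vagrant destroy -f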

And with that, you’re good to go!

Setting up etcd with Kubeadm, containerd Edition

In late 2018, I wrote a couple of blog posts on using kubeadm to set up an etcd cluster. The first one was this post, which used kubeadm only to generate the TLS certs but ran etcd as a systemd service. I followed that up a couple of months later with this post, which used kubeadm to run etcd as a static Pod on each system. It’s that latter post—running etcd as a static Pod on each system in the cluster—that I’ll be revisiting in this post, only this time using containerd as the container runtime instead of Docker.

This post assumes you’ve already created the VMs/instances on which etcd will run, that an appropriate version of Linux is installed (I’ll be using Ubuntu 18.04.4 LTS), and that the appropriate packages have been installed. This post also assumes that you’ve already made sure that the correct etcd ports have been opened between the VMs/instances, so that etcd can communicate properly.
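The crux of the difference from a Docker-based setup is telling kubeadm to use containerd’s CRI socket. A hedged sketch of the relevant kubeadm configuration fragment (socket path assumed to be the containerd default):

    # fragment of a kubeadm configuration file; only the criSocket line
    # differs from a Docker-based setup
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    nodeRegistration:
      criSocket: /run/containerd/containerd.sock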

Finally, this post builds upon the official Kubernetes documentation on setting up an etcd cluster using kubeadm. The official guide assumes the use of Docker, whereas this post will focus on using containerd as the container Continue reading

HA Kubernetes Clusters on AWS with Cluster API v1alpha3

A few weeks ago, I published a post on HA Kubernetes clusters on AWS with Cluster API v1alpha2. That post was itself a follow-up to a post I wrote in September 2019 on setting up HA clusters using Cluster API v1alpha1. In this post, I’ll follow up on both of those posts with a look at setting up HA Kubernetes clusters on AWS using Cluster API v1alpha3. Although this post is similar to the v1alpha2 post, be aware there are some notable changes in v1alpha3, particularly with regard to the control plane.
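Chief among those control plane changes is the new KubeadmControlPlane type, which manages the control plane nodes as a unit. A minimal sketch of such an object (names, versions, and replica counts are placeholders):

    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    metadata:
      name: cluster-1-control-plane
    spec:
      replicas: 3
      version: v1.17.3
      infrastructureTemplate:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSMachineTemplate
        name: cluster-1-control-plane
      kubeadmConfigSpec: {}  # kubeadm settings elided in this sketch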

If you’re not yet familiar with Cluster API, take a look at this high-level overview I wrote in August 2019. That post explains the project’s goals and introduces some of the associated terminology.

In this post, I won’t discuss the process of establishing a management cluster; I’m assuming your Cluster API management cluster is already up and running. (I do have some articles in the content pipeline that discuss creating a management cluster.) Instead, this post will focus on creating a highly available workload cluster. By “highly available,” I mean a cluster with multiple control plane nodes that are distributed across multiple availability zones (AZs). Continue reading

Technology Short Take 125

Welcome to Technology Short Take #125, where I have a collection of articles about various data center and cloud technologies collected from around the Internet. I hope I have managed to find a few useful things for you! (If not, contact me on Twitter and tell me how I can make this more helpful for you.)

Networking

Servers/Hardware

Nothing this time around. I’ll try hard to find something useful for the next Technology Short Take.

Security

Using KinD with Docker Machine on macOS

I’ll admit right up front that this post is more “science experiment” than practical, everyday use case. It all started when I was trying some Cluster API-related stuff that leveraged KinD (Kubernetes in Docker). Obviously, given the name, KinD relies on Docker, and when running Docker on macOS you generally would use Docker Desktop. At the time, though, I was using Docker Machine, and as it turns out KinD doesn’t like Docker Machine. In this post, I’ll show you how to make KinD work with Docker Machine.

By the way, it’s worth noting that, per the KinD maintainers, this isn’t a tested configuration. Proceed at your own risk, and know that while this may work for some use cases it won’t necessarily work for all use cases.

Prerequisites/Assumptions

These instructions assume you’ve already installed both KinD and Docker Machine, along with an associated virtualization solution. I’ll be using VirtualBox, but this should be largely the same for VMware Fusion or Parallels (or even HyperKit, if you somehow manage to get that working). I’m also assuming that you have jq installed; if not, get it here.

Making KinD work with Docker Machine

Follow the steps below to make Continue reading
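To give away the punchline, the general approach (a sketch, not necessarily the post’s exact steps; the machine name and IP address are assumptions) is to point the Docker client at the Docker Machine VM and then tell KinD to issue certificates for, and listen on, the VM’s IP address:

    # point the Docker client at the Docker Machine VM (name assumed)
    eval "$(docker-machine env default)"
    docker-machine ip default   # suppose this returns 192.168.99.100

    # tell KinD to use that address for the API server
    # (config API version varies by KinD release)
    cat > kind-config.yaml <<EOF
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    networking:
      apiServerAddress: "192.168.99.100"
    EOF
    kind create cluster --config kind-config.yaml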

Kustomize Transformer Configurations for Cluster API v1alpha3

A few days ago I wrote an article on configuring kustomize transformers for use with Cluster API (CAPI), in which I explored how users could configure the kustomize transformers—the parts of kustomize that actually modify objects—to be a bit more CAPI-aware. By doing so, using kustomize with CAPI manifests becomes much easier. Since that post, the CAPI team released v1alpha3. In working with v1alpha3, I realized my kustomize transformer configurations were incorrect. In this post, I will share CAPI v1alpha3 configurations for kustomize transformers.

In the previous post, I referenced changes to both namereference.yaml (to configure the nameReference transformer) and commonlabels.yaml (to configure the commonLabels transformer). CAPI v1alpha3 has changed the default way labels are used with MachineDeployments, so for v1alpha3 you may be able to get away with only changes to namereference.yaml. (If you know you are going to want/need additional labels on your MachineDeployment, then plan on changes to commonlabels.yaml as well.)
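If you do find yourself needing those labels, a commonlabels.yaml along these lines tells the commonLabels transformer where to write labels on a MachineDeployment (a sketch; the field paths are my assumption for v1alpha3):

    commonLabels:
    - path: spec/selector/matchLabels
      create: true
      kind: MachineDeployment
    - path: spec/template/metadata/labels
      create: true
      kind: MachineDeployment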

Here are the CAPI v1alpha3 changes needed to namereference.yaml:

- kind: Cluster
  group: cluster.x-k8s.io
  version: v1alpha3
  fieldSpecs:
  - path: spec/clusterName
    kind: MachineDeployment
  - path: spec/template/spec/clusterName
    kind: MachineDeployment

- kind: AWSCluster
  group: infrastructure.cluster.x-k8s.io
   Continue reading

Configuring Kustomize Transformers for Cluster API

In November 2019 I wrote an article on using kustomize with Cluster API (CAPI) manifests. The idea was to use kustomize to simplify the management of CAPI manifests for clusters that are generally similar but have minor differences (like the AWS region in which they are running, or the number of Machines in a MachineDeployment). In this post, I’d like to show a slightly different way of using kustomize with Cluster API that involves configuring the kustomize transformers.

If you aren’t familiar with kustomize, I’d recommend having a look at the kustomize web site and/or reading my introductory post. A transformer in kustomize is the part responsible for modifying a resource, or for gathering information about a resource, over the course of a kustomize build process. This page has some useful terminology definitions.

Looking back at the earlier article on using kustomize with CAPI, you can see that—due to the links/references between objects—modifying the name of the AWSCluster object also means modifying the reference to the AWSCluster object from the Cluster object. The same goes for the KubeadmConfigTemplate and AWSMachineTemplate objects referenced from a MachineDeployment. Out of the box, the namePrefix transformer will change the names of these Continue reading
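As a preview of where this all lands, transformer configurations get wired into a kustomization.yaml via the configurations field; a minimal sketch (file and resource names assumed):

    namePrefix: blue-
    resources:
    - cluster.yaml
    - machinedeployment.yaml
    configurations:
    - namereference.yaml
    - commonlabels.yaml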

Updating Visual Studio Code’s Kubernetes API Awareness

After attempting (and failing) to get Sublime Text to have some of the same “intelligence” that Visual Studio Code has with certain languages, I finally stopped trying to make Sublime Text work for me and just went back to using Code full-time. As I mentioned in this earlier post, now that I’ve finally sorted out how Code handles wrapping text in brackets and braces and the like, I’m much happier. (It’s the small things in life.) Now I’ve moved on to tackling how to update Code’s Kubernetes API awareness.

Code’s awareness of the Kubernetes API comes via the YAML extension by Red Hat, and uses the yaml-language-server project found in this GitHub repository. (This is the same language server I was trying to get working with Sublime Text via LSP.) Note that I tested this procedure with version 0.7.2 of the extension; files may be in different locations or have different contents in other versions.

The information in this post is based heavily on this article by Josh Rosso, who some of you may know from his guest appearances on TGIK. I’ve adapted the information in Josh’s post to apply to the macOS version of Continue reading
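A related mechanism, separate from the file-replacement procedure this post describes, is the YAML extension’s yaml.schemas setting, which maps a JSON schema to file patterns in your user settings. A hedged example (the schema URL is illustrative only):

    {
      "yaml.schemas": {
        "https://raw.githubusercontent.com/instrumenta/kubernetes-json-schema/master/v1.17.0-standalone-strict/all.json": "*.yaml"
      }
    }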

An Update on the Tokyo Assignment

Right at the end of 2019 I announced that in early 2020 I was temporarily relocating to Tokyo, Japan, for a six month work assignment. It’s now March, and I’m still in Colorado. So what’s up with that Tokyo assignment, anyway? Since I’ve had several folks ask, I figured it’s probably best to post something here.

First, based on all the information available to me, it looks like Tokyo is still going to happen. It will just be delayed a bit—although the exact extent of the delay is still unclear.

Why the delays? A few things have affected the timing:

  1. As part of an acquisition at work, some new folks who hadn’t previously been involved got pulled in, and it took them some time to understand what this assignment was all about. This is totally understandable, but we hadn’t accounted for the extra time needed to bring the new stakeholders up to speed.
  2. The finance folks got involved and made everyone do some necessary legwork (properly allocating costs to the appropriate budgets) that hadn’t previously been done. So, that took some time.
  3. Finally, there’s this virus thing going around…

It looks like #1 and #2 have been sorted, but it’ll still Continue reading

Modifying Visual Studio Code’s Bracketing Behavior

There are two things I’ve missed since I switched from Sublime Text to Visual Studio Code (I switched in 2018). First, the speed. Sublime Text is so much faster than Visual Studio Code; it’s insane. But, the team behind Visual Studio Code is working hard to improve performance, so I’ve mostly resigned myself to it. The second thing, though, was the behavior of wrapping selected text in brackets (or parentheses, curly braces, quotes, etc.). That part annoyed me for two years, until this past weekend, when I finally had enough. Here’s how I modified Visual Studio Code’s bracketing behavior.

Before I get into the changes, allow me to explain what I mean here. With Sublime Text, I used a package (similar to an extension in the VS Code world) called Bracketeer, and one of the things it allowed me to do was highlight a section of text, wrap it in brackets/braces/parentheses, and then—this part is the missing bit—it would deselect the selected text and place my insertion point outside the closing character. Now, this doesn’t sound like a big deal, but let me assure you that it is. I didn’t have to use any extra keystrokes to keep Continue reading
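The shape of the eventual fix is a set of custom keybindings that wrap the selection via a snippet and then drop the insertion point after the closing character. A sketch of one such keybindings.json entry (the key chord and snippet are illustrative, not necessarily the exact ones I ended up with):

    [
      {
        "key": "cmd+9",
        "command": "editor.action.insertSnippet",
        "when": "editorTextFocus && editorHasSelection",
        "args": { "snippet": "(${TM_SELECTED_TEXT})$0" }
      }
    ]

The $0 tab stop is what places the cursor outside the closing parenthesis after the wrap.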

HA Kubernetes Clusters on AWS with Cluster API v1alpha2

About six months ago, I wrote a post on how to use Cluster API (specifically, the Cluster API Provider for AWS) to establish highly available Kubernetes clusters on AWS. That post was written with Cluster API (CAPI) v1alpha1 in mind. Although the concepts I presented there worked with v1alpha2 (released shortly after that post was written), I thought it might be helpful to revisit the topic with CAPI v1alpha2 specifically in mind. So, with that, here’s how to establish highly available Kubernetes clusters on AWS using CAPI v1alpha2.

By the way, it’s worth pointing out that CAPI is a fast-moving project, and release candidate versions of CAPI v1alpha3 are already available for some providers. Keep that in mind if you decide to start working with CAPI. I will write an updated version of this post for v1alpha3 once it has been out for a little while.

To be sure we’re all speaking the same language, here are some terms/acronyms that I’ll use in this post:

  • CAPI = Cluster API (the provider-independent parts)
  • CAPA = Cluster API Provider for AWS
  • CABPK = Cluster API Bootstrap Provider for Kubeadm (new for v1alpha2)
  • Management cluster = a Kubernetes cluster that has the Continue reading

Technology Short Take 124

Welcome to Technology Short Take #124! It seems like the natural progression of the Tech Short Takes is moving toward monthly articles, since it’s been about a month since my last one. In any case, here’s hoping that I’ve found something useful for you. Enjoy! (And yes, normally I’d publish this on a Friday, but I messed up and forgot. So, I decided to publish on Monday instead of waiting for Friday.)

Networking

Servers/Hardware

  • This article is about hardware, just not the hardware I’d typically talk about in this section—instead, it’s about Philips Hue light bulbs. Continue reading

Region and Endpoint Match in AWS API Requests

Interacting directly with the AWS APIs—using a tool like Postman (or, since I switched back to macOS, an application named Paw)—is something I’ve been doing off and on for a little while as a way of gaining a slightly deeper understanding of the APIs that tools like Terraform, Pulumi, and others are calling when automating AWS. For a while, I struggled with AWS authentication, and after seeing Mark Brookfield’s post on using Postman to authenticate to AWS I thought it might be helpful to share what I learned as well.

The basis of Mark’s post (I highly encourage you to go read it) is that he was having a hard time getting authenticated to AWS in order to automate the creation of some Route 53 DNS records. The root of his issue, as it turns out, was a mismatch between the region specified in his request and the API endpoint for Route 53. I know this because I ran into the exact same issue (although with a different service).
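To make the mismatch concrete: Route 53 is a global service with a single endpoint, and Signature Version 4 requests against it must be signed as if for us-east-1. The credential scope in the Authorization header therefore needs to look something like this (values are illustrative):

    Host: route53.amazonaws.com
    Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20200310/us-east-1/route53/aws4_request, ...

Sign for any other region against that endpoint and the API returns a credential scope error.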

The secret to uncovering this mismatch can be found in this “AWS General Reference” PDF. Specifically with regard to Route 53, check out this quote from the document:

Continue reading

Retrieving the Kubeconfig for a Cluster API Workload Cluster

Using Cluster API allows users to create new Kubernetes clusters easily using manifests that define the desired state of the new cluster (also referred to as a workload cluster; see here for more terminology). But how does one go about accessing this new workload cluster once it’s up and running? In this post, I’ll show you how to retrieve the Kubeconfig file for a new workload cluster created by Cluster API.

(By the way, there’s nothing new or revolutionary about the information in this post; I’m simply sharing it to help folks who may be new to Cluster API.)

Once CAPI has created the new workload cluster, it will also create a new Kubeconfig for accessing this cluster. This Kubeconfig will be stored as a Secret (a specific type of Kubernetes object). In order to use this Kubeconfig to access your new workload cluster, you’ll need to retrieve the Secret, decode it, and then use it with kubectl.

First, you can see the secret by running kubectl get secrets in the namespace where your new workload cluster was created. If the name of your workload cluster was “workload-cluster-1”, then the name of the Secret created by Cluster API would Continue reading
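As a sketch of the retrieval, assuming the common CAPI convention of a Secret named “<cluster-name>-kubeconfig” with the kubeconfig stored under the “value” key:

    # retrieve and decode the kubeconfig (use base64 -D on older macOS)
    kubectl get secret workload-cluster-1-kubeconfig \
      -o jsonpath='{.data.value}' | base64 --decode > workload-cluster-1.kubeconfig

    # then point kubectl at the new workload cluster
    kubectl --kubeconfig=workload-cluster-1.kubeconfig get nodes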

Setting up K8s on AWS with Kubeadm and Manual Certificate Distribution

Credit for this post goes to Christian Del Pino, who created this content and was willing to let me publish it here.

The topic of setting up Kubernetes on AWS (including the use of the AWS cloud provider) is a topic I’ve tackled a few different times here on this site (see here, here, and here for other posts on this subject). In this post, I’ll share information provided to me by a reader, Christian Del Pino, about setting up Kubernetes on AWS with kubeadm but using manual certificate distribution (in other words, not allowing kubeadm to distribute certificates among multiple control plane nodes). As I pointed out above, all this content came from Christian Del Pino; I’m merely sharing it here with his permission.

This post specifically builds upon this post on setting up an AWS-integrated Kubernetes 1.15 cluster, but is written for Kubernetes 1.17. As mentioned above, this post will also show you how to manually distribute the certificates that kubeadm generates to other control plane nodes.
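For reference, the files kubeadm generates on the first control plane node that need to be copied to each additional control plane node are the shared CAs and the service account keys, along these lines (standard kubeadm paths; a sketch, not the full procedure):

    /etc/kubernetes/pki/ca.crt
    /etc/kubernetes/pki/ca.key
    /etc/kubernetes/pki/sa.key
    /etc/kubernetes/pki/sa.pub
    /etc/kubernetes/pki/front-proxy-ca.crt
    /etc/kubernetes/pki/front-proxy-ca.key
    /etc/kubernetes/pki/etcd/ca.crt
    /etc/kubernetes/pki/etcd/ca.key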

What this post won’t cover are the details on some of the prerequisites for making the AWS cloud provider function properly; specifically, this post won’t discuss:

Building an Isolated Kubernetes Cluster on AWS

In this post, I’m going to explore what’s required in order to build an isolated—or Internet-restricted—Kubernetes cluster on AWS with full AWS cloud provider integration. Here the term “isolated” means “no Internet access.” I initially was using the term “air-gapped,” but these clusters aren’t technically air-gapped, so I thought isolated (or Internet-restricted) might be a better descriptor. Either way, the intent of this post is to help guide readers through the process of setting up a Kubernetes cluster on AWS—with full AWS cloud provider integration—using systems that have no Internet access.

At a high-level, the process looks something like this:

  1. Build preconfigured AMIs that you’ll use for the instances running Kubernetes.
  2. Stand up your AWS infrastructure, including necessary VPC endpoints for AWS services (see the sketch after this list).
  3. Preload any additional container images, if needed.
  4. Bootstrap your cluster using kubeadm.
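As one illustration of step 2, creating an interface endpoint for the EC2 API with the AWS CLI might look like this (a hypothetical example; all IDs and the region are placeholders):

    aws ec2 create-vpc-endpoint \
      --vpc-id vpc-0123456789abcdef0 \
      --vpc-endpoint-type Interface \
      --service-name com.amazonaws.us-west-2.ec2 \
      --subnet-ids subnet-0123456789abcdef0 \
      --security-group-ids sg-0123456789abcdef0 \
      --private-dns-enabled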

It’s important to note that this guide does not replace my earlier post on setting up an AWS-integrated Kubernetes cluster on AWS (written for 1.15, but valid for 1.16 and 1.17). All the requirements in that post still apply here. If you haven’t read that post or aren’t familiar with the requirements for setting up a Kubernetes cluster with the AWS Continue reading

Creating an AWS VPC Endpoint with Pulumi

In this post, I’d like to show readers how to use Pulumi to create a VPC endpoint on AWS. Until recently, I’d heard of VPC endpoints but hadn’t really taken the time to fully understand what they were or how they might be used. That changed when I was presented with a requirement for the AWS EC2 APIs to be available within a VPC that did not have Internet access. As it turns out—and as many readers are probably already aware—this is one of the key use cases for a VPC endpoint (see the VPC endpoint docs). The sample code I’ll share below shows how to programmatically create a VPC endpoint for use in infrastructure-as-code use cases.

For those that aren’t familiar, Pulumi allows users to use one of a number of different general-purpose programming languages and apply them to infrastructure-as-code scenarios. In this example, I’ll be using TypeScript, but Pulumi also supports JavaScript and Python (and Go is in the works). (Side note: I intend to start working with the Go support in Pulumi when it becomes generally available as a means of helping accelerate my own Go learning.)

Here’s a snippet of TypeScript code that Continue reading
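The post’s own snippet falls outside this excerpt, but a minimal sketch of such TypeScript code might look like the following (resource names and IDs are placeholders, not values from the original post):

    import * as aws from "@pulumi/aws";

    // create an interface endpoint so instances in the VPC can reach
    // the EC2 API without Internet access
    const ec2Endpoint = new aws.ec2.VpcEndpoint("ec2-endpoint", {
        vpcId: "vpc-0123456789abcdef0",
        serviceName: "com.amazonaws.us-west-2.ec2",
        vpcEndpointType: "Interface",
        subnetIds: ["subnet-0123456789abcdef0"],
        securityGroupIds: ["sg-0123456789abcdef0"],
        privateDnsEnabled: true,
    });

    export const endpointId = ec2Endpoint.id;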

Manually Loading Container Images with containerD

I recently had a need to manually load some container images into a Linux system running containerd (instead of Docker) as the container runtime. I say “manually load some images” because this system was isolated from the Internet, and so simply running a container and having containerd automatically pull the image from an image registry wasn’t going to work. The process for working around the lack of Internet access isn’t difficult, but didn’t seem to be documented anywhere that I could readily find using a general web search. I thought publishing it here may help individuals seeking this information in the future.

For an administrator/operations-minded user, the primary means of interacting with containerd is via the ctr command-line tool. This tool uses a command syntax very similar to Docker, so users familiar with Docker should be able to be productive with ctr pretty easily.

In my specific example, I had a bastion host with Internet access, and a couple of hosts behind the bastion that did not have Internet access. It was the hosts behind the bastion that needed the container images preloaded. So, I used the ctr tool to fetch and prepare the images on the bastion, then Continue reading
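The workflow boils down to pulling and exporting on the connected host, then importing on the isolated host. A sketch (the image reference and file names are illustrative):

    # on the bastion (Internet-connected)
    ctr images pull docker.io/library/nginx:1.17
    ctr images export nginx.tar docker.io/library/nginx:1.17

    # ...copy nginx.tar to the isolated host (scp or similar)...

    # on the isolated host (add -n k8s.io if Kubernetes needs the images)
    ctr images import nginx.tar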

Thinking and Learning About API Design

In July of 2018 I talked about Polyglot, a very simple project I’d launched whose only purpose was to bolster my software development skills. Work on Polyglot has been sporadic at best, coming in fits and spurts, and thus far focused on building a model for the APIs that would be found in the project. Since I am not a software engineer by training, all of this is new to me, and I’ve found myself encountering lots of questions about API design along the way. In the interest of helping others who may be in a similar situation, I thought I’d share a bit here.

I initially approached the API in terms of how I would encode (serialize?) data on the wire using JSON (I’d decided on using a RESTful API with JSON over HTTP). Starting with how I anticipated storing the data in the back-end database, I created a representation of how a customer’s information would be encoded (serialized) in JSON:

{
    "customers": [
        {
            "customerID": "5678",
            "streetAddress": "123 Main Street",
            "unitNumber": "Suite 123",
            "city": "Anywhere",
            "state": "CO",
            "postalCode": "80108",
            "telephone": "3035551212",
            "primaryContactFirstName": "Scott",
            "primaryContactLastName": "Lowe"
        }
    ]
 Continue reading

Technology Short Take 123

Welcome to Technology Short Take #123, the first of 2020! I hope that everyone had a wonderful holiday season, but now it’s time to jump back into the fray with a collection of technical articles from around the Internet. Here’s hoping that I found something useful for you!

Networking

  • Eric Sloof mentions the NSX-T load balancing encyclopedia (found here), which intends to be an authoritative resource to NSX-T load balancing configuration and management.
  • David Gee has an interesting set of articles exploring service function chaining in service mesh environments (part 1, part 2, part 3, and part 4).

Servers/Hardware

Security

  • On January 13, Brian Krebs discussed the critical flaw (a vulnerability in crypt32.dll, a core Windows cryptographic component) that was rumored Continue reading