Archive

Technology Short Take 128

Welcome to Technology Short Take #128! It looks like I’m settling into a roughly monthly cadence with the Technology Short Takes. This time around, I’ve got a (hopefully) interesting collection of links. The collection seems a tad heavier than normal in the hardware and security sections, probably due to new exploits discovered in Intel’s speculative execution functionality. In any case, here’s what I’ve gathered for you. Enjoy!

Networking

Servers/Hardware

  • This article from Carlos Fenollosa talks about his experience with a new 2020 MacBook Pro compared to his 2013-era MacBook Air. While there is some Continue reading

Using kubectl via an SSH Tunnel

In this post, I’d like to share one way (not the only way!) to use kubectl to access your Kubernetes cluster via an SSH tunnel. In the future, I may explore some other ways (hit me up on Twitter if you’re interested). I’m sharing this information because I suspect it is not uncommon for folks deploying Kubernetes on the public cloud to want to deploy clusters in a way that does not expose them to the Internet. Given that the use of SSH bastion hosts is not uncommon, it seemed reasonable to show how one could use an SSH tunnel to reach a Kubernetes cluster behind an SSH bastion host.

If you’re unfamiliar with SSH bastion hosts, see this post for an overview.

To use kubectl via an SSH tunnel through a bastion host to a Kubernetes cluster, two steps are required (a brief sketch follows the list):

  1. The Kubernetes API server needs an appropriate Subject Alternative Name (SAN) on its certificate.
  2. The kubeconfig file needs to be updated to reflect the tunnel details.
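
As a preview, here’s a hedged sketch of what those two steps enable (all hostnames, IPs, and ports are illustrative, not taken from the post):

    # Forward a local port through the bastion to the API server's
    # private address (both addresses are hypothetical)
    ssh -L 6443:10.0.0.10:6443 user@bastion.example.com

    # Point kubectl at the local end of the tunnel; per step 1, the API
    # server's certificate must include a SAN matching the name used here
    kubectl --server https://127.0.0.1:6443 get nodes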

Ensuring an Appropriate SAN for the API Server

As is the case with just about any TLS-secured connection, if the destination to which you’re connecting with kubectl doesn’t match any of Continue reading

Making it Easier to Get Started with Cluster API on AWS

I’ve written a few articles about Cluster API (you can see a list of the articles here), but even though I strive to make my articles easy to understand and easy to follow along with, many of those articles make an implicit assumption: that readers are perhaps already somewhat familiar with Linux, Docker, tools like kind, and perhaps even Kubernetes. Today I was thinking, “What about folks who are new to this? What can I do to make it easier?” In this post, I’ll talk about the first idea I had: creating a “bootstrapper” AMI that enables new users to quickly and easily jump into the Cluster API Quick Start.

Normally, in order to use the Quick Start, there are some prerequisites that must be in place (these are all clearly listed on the Quick Start page):

  • You need kubectl installed
  • You need kind (which in turn requires Docker) or an existing Kubernetes cluster up and running

For Linux users (like myself), these prerequisites are pretty easy/simple to handle. But what if you’re a Windows or Mac user? Yes, you could use Docker Desktop and then install kind (or use docker-machine, if you’re feeling adventurous). Then you’d Continue reading
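
If it helps, here’s a hedged sketch of handling those prerequisites on a Linux system (the pinned versions and amd64 architecture are assumptions; check the upstream release pages for current versions):

    # Install kubectl (version pinned only as an example)
    curl -LO https://dl.k8s.io/release/v1.18.2/bin/linux/amd64/kubectl
    sudo install -m 0755 kubectl /usr/local/bin/kubectl

    # Install kind (again, the version is illustrative)
    curl -Lo kind https://kind.sigs.k8s.io/dl/v0.8.1/kind-linux-amd64
    sudo install -m 0755 kind /usr/local/bin/kind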

Creating a Multi-AZ NAT Gateway with Pulumi

I recently had a need to test a configuration involving the use of a single NAT Gateway servicing multiple private subnets across multiple availability zones (AZs) within a single VPC. While there are notable caveats with such a design (see the “Caveats” section at the bottom of this article), it could make sense in some use cases. In this post, I’ll show you how I used TypeScript with Pulumi to automate the creation of this design.

For the most part, if you’re familiar with Pulumi and using TypeScript with Pulumi, this will be pretty straightforward. The code I’ll show you makes a couple of assumptions:

  1. It assumes you’ve already created the VPC earlier in the code. I’ll reference the VPC object as vpc.
  2. It assumes you’ve already created subnets in said VPC, and that the subnet-to-AZ ratio is 1:1 (exactly one subnet of each type—public or private—in each AZ). The code will reference the subnet IDs as pubSubnetIds (for public subnets) or privSubnetIds (for private subnets). (How to create the subnets and capture the list of IDs is left as an exercise for the reader. If you’d be interested in seeing how I do it, let me know. Continue reading
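
Under those assumptions, a minimal TypeScript sketch of the single-NAT-gateway pattern might look like this (illustrative only, and it further assumes pubSubnetIds and privSubnetIds are plain arrays of subnet ID strings):

    import * as aws from "@pulumi/aws";

    // Allocate an Elastic IP and place a single NAT gateway in the
    // first public subnet (vpc, pubSubnetIds, and privSubnetIds are
    // assumed to exist, per the assumptions above)
    const natEip = new aws.ec2.Eip("nat-eip", { vpc: true });
    const natGw = new aws.ec2.NatGateway("nat-gw", {
        allocationId: natEip.id,
        subnetId: pubSubnetIds[0],
    });

    // Give each private subnet (one per AZ) a route table whose
    // default route points at the shared NAT gateway
    privSubnetIds.forEach((subnetId: string, i: number) => {
        const rt = new aws.ec2.RouteTable(`priv-rt-${i}`, {
            vpcId: vpc.id,
            routes: [{ cidrBlock: "0.0.0.0/0", natGatewayId: natGw.id }],
        });
        new aws.ec2.RouteTableAssociation(`priv-rta-${i}`, {
            subnetId: subnetId,
            routeTableId: rt.id,
        });
    });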

Review: Magic Mouse 2 and Magic Trackpad 2 on Fedora

I recently purchased a new Apple Magic Mouse 2 and an Apple Magic Trackpad 2—not to use with my MacBook Pro, but to use with my Fedora-powered laptop (a Lenovo 5th generation ThinkPad X1 Carbon; see my review). I know it seems odd to buy Apple accessories for a non-Apple laptop, and in this post I’d like to talk about why I bought these items as well as provide some (relatively early) feedback on how well they work with Fedora.

The Justification/Background

First, let me talk about the why behind my purchase of these items. Several years ago, I started simultaneously using both an external trackpad/touchpad and an external mouse with my macOS-based home office setup. I realize this is probably odd, but I adopted the practice as a way of eliminating “mouse finger” on my right hand. With this arrangement, I stopped trying to scroll with my right hand (either using a mouse wheel with older mice or using the scroll-enabled back of the Magic Mouse) and instead shifted scrolling to my left hand (using two-finger scrolling on the trackpad). This “division of labor” worked well. Because my existing Magic Mouse and Magic Trackpad—both earlier generations—don’t work with Continue reading

Using Unison Across Linux, macOS, and Windows

I recently wrapped up a project in which I needed to use the Unison file synchronization application across Linux, macOS, and Windows. While Unison is available for all three platforms and does work across (and among) systems running all three operating systems, I did encounter a few interoperability issues along the way. Here’s some information on those issues and how I worked around them. (Hopefully this information will help someone else.)

The use case here is to keep a subset of directories in sync between a MacBook Air running macOS “Catalina” 10.15.5 and a Surface Pro 6 running Windows 10. A system running Ubuntu 18.04.4 acted as the “server”; each “client” system (the MacBook Air and the Surface Pro) would synchronize with the Ubuntu system. I’ve used a nearly identical setup for several years to keep my systems synchronized.

One thing to know about Unison before I continue is that you need compatible versions of Unison on both systems in order for it to work. As I understand it, compatibility is not just based on version numbers, but also on the OCaml version with which it was compiled.
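
A quick way to check what you have on each system:

    # Prints the Unison version (and, in newer builds, the OCaml version
    # it was compiled with); the two ends must be compatible
    unison -version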

With that in mind, I already had Continue reading

Technology Short Take 127

Welcome to Technology Short Take #127! Let’s see what I’ve managed to collect for you this time around…

Networking

Servers/Hardware

Nothing this time around, but I’ll stay alert for items to include next time!

Security

Cloud Continue reading

Technology Short Take 126

Welcome to Technology Short Take #126! I meant to get this published last Friday, but completely forgot. So, I added a couple more links and instead have it ready for you today. I don’t have any links for servers/hardware or security in today’s Short Take, but hopefully there’s enough linked content in the other sections that you’ll still find something useful. Enjoy!

Networking

Servers/Hardware

Nothing this time around!

Security

I don’t have anything to include this time, but I’ll stay alert for content I can Continue reading

Setting up etcd with etcdadm

I’ve written a few different posts on setting up etcd. There’s this one on bootstrapping a TLS-secured etcd cluster with kubeadm, and there’s this one about using kubeadm to run an etcd cluster as static Pods. There’s also this one about using kubeadm to run etcd with containerd. In this article, I’ll provide yet another way of setting up a “best practices” etcd cluster, this time using a tool named etcdadm.

etcdadm is an open source project, originally started by Platform9 (here’s the blog post announcing the project being open sourced). As the README in the GitHub repository mentions, the user experience for etcdadm “is inspired by kubeadm.”

Getting etcdadm

The instructions in the repository indicate that you can use go get -u sigs.k8s.io/etcdadm, but I ran into problems with that approach (using Go 1.14). At the suggestion of one of the maintainers, I also tried Go 1.12, but it failed both on my main Ubuntu laptop and on a clean Ubuntu VM. However, running make etcdadm in a clone of the repository worked, and one of the maintainers indicated the documentation will be updated to reflect this approach Continue reading
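
For reference, a hedged sketch of that build-from-clone approach (the trailing etcdadm init line is typical usage, not taken from the post):

    # Build the binary from a clone of the repository
    git clone https://github.com/kubernetes-sigs/etcdadm.git
    cd etcdadm
    make etcdadm

    # Typical usage on the first node of a new cluster
    sudo ./etcdadm init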

Using External Etcd with Cluster API on AWS

If you’ve used Cluster API (CAPI), you may have noticed that workload clusters created by CAPI use, by default, a “stacked master” configuration—that is, the etcd cluster is running co-located on the control plane node(s) alongside the Kubernetes control plane components. This is a very common configuration and is well-suited for most deployments, so it makes perfect sense that this is the default. There may be cases, however, where you’ll want to use a dedicated, external etcd cluster for your Kubernetes clusters. In this post, I’ll show you how to use an external etcd cluster with CAPI on AWS.

The information in this blog post is based on this upstream document. I’ll be adding a little bit of AWS-specific information, since I primarily use the AWS provider for CAPI. This post is written with CAPI v1alpha3 in mind.

The key to this solution is building upon the fact that CAPI leverages kubeadm for bootstrapping cluster nodes. This puts the full power of the kubeadm API at your fingertips—which in turn means you have a great deal of flexibility. This is the mechanism whereby you can tell CAPI to use an external etcd cluster instead of creating a co-located etcd Continue reading
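
To give a sense of where this lands, here’s a hedged sketch of the relevant portion of a v1alpha3 KubeadmControlPlane pointing kubeadm at an external etcd cluster (the endpoints and file paths are illustrative):

    kind: KubeadmControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          etcd:
            external:
              endpoints:
                - https://10.0.0.11:2379
                - https://10.0.0.12:2379
                - https://10.0.0.13:2379
              caFile: /etc/kubernetes/pki/etcd/ca.crt
              certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
              keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key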

Using Existing AWS Security Groups with Cluster API

I’ve written before about how to use existing AWS infrastructure with Cluster API (CAPI), and I was recently able to help update the upstream documentation on this topic (the upstream documentation should now be considered the authoritative source). These instructions are perfect for placing a Kubernetes cluster into an existing VPC and associated subnets, but there’s one scenario that they don’t yet address: what if you need your CAPI workload cluster to be able to communicate with other EC2 instances or other AWS services in the same VPC? In this post, I’ll show you the CAPI functionality that makes this possible.

One of the primary mechanisms used in AWS to control communications among instances and services is the security group. I won’t go into any detail on security groups, but this page from AWS provides an explanation and overview of how security groups work.

In order to make a CAPI workload cluster able to communicate with other EC2 instances or other AWS services, you’ll need to somehow use security groups to make that happen. There are at least two—possibly more—ways to accomplish this:

  1. You could add other instances or services to the CAPI-created security groups. The Cluster API Provider Continue reading
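
One relevant piece of CAPA functionality worth sketching here: the AWSMachineTemplate spec accepts an additionalSecurityGroups field for attaching existing security groups to CAPI-managed machines. A hedged example (the name and group ID are hypothetical):

    kind: AWSMachineTemplate
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    metadata:
      name: example-md-0
    spec:
      template:
        spec:
          additionalSecurityGroups:
            - id: sg-0123456789abcdef0   # an existing, hypothetical security group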

Using Paw to Launch an EC2 Instance via API Calls

Last week I wrote a post on using Postman to launch an EC2 instance via API calls. Postman is a cross-platform application, so while my post was centered around Postman on Linux (Ubuntu, specifically) the steps should be very similar—if not exactly the same—when using Postman on other platforms. Users of macOS, however, have another option: a macOS-specific peer to Postman named Paw. In this post, I’ll walk through using Paw to issue API requests to AWS to launch an EC2 instance.

I’ll structure this post as a “diff,” if you will, outlining how using Paw to launch an EC2 instance via API calls differs from using Postman to do the same thing. Therefore, if you haven’t already read the Postman post from last week, I strongly recommend reviewing it before proceeding.

Prerequisites

This post assumes you’ve already installed Paw on your macOS system. It also assumes you are somewhat familiar with Paw; refer to the Paw documentation if not. Also, to support AWS authentication, please be sure to install the “AWS Signature 4 Auth Dynamic value” extension (see here or here). This extension is necessary in order to have the API requests sent Continue reading

Using Postman to Launch an EC2 Instance via API Calls

As I mentioned in this post on region and endpoint match in AWS API requests, exploring the AWS APIs is something I’ve been doing off and on for several months. There are a couple of reasons for this; I’ll go into them in a bit more detail shortly. In any case, I’ve been exploring the APIs using Postman (when on Linux) and Paw (when on macOS), and in this post I’ll share how to use Postman to launch an EC2 instance via API calls.

Before I get into the technical details, let me lay out a couple of reasons for spending some time on this. I’m pretty familiar with tools like Terraform and Pulumi (my current favorite), and I’m reasonably familiar with the AWS CLI itself. In looking at working directly with the APIs, I see this as adding a new perspective on how these other tools work. (I’ve found, in fact, that exploring the APIs has improved my usage of the AWS CLI.) Finally, as I try to deepen my knowledge of programming languages, I wanted to have a reasonable knowledge of the APIs before trying to program around the APIs (hopefully this will make the learning curve a bit less Continue reading
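
For context, what Postman is assembling here is an EC2 Query API request; a hedged sketch of its general shape (all parameter values are illustrative):

    # The EC2 Query API: a SigV4-signed request against the regional endpoint
    https://ec2.us-east-1.amazonaws.com/?Action=RunInstances
        &ImageId=ami-0123456789abcdef0
        &InstanceType=t2.micro
        &MinCount=1
        &MaxCount=1
        &Version=2016-11-15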

Making File URLs Work Again in Firefox

At some point in the last year or so—I don’t know exactly when it happened—Firefox, along with most of the other major browsers, stopped working with file:// URLs. This is a shame, because I like using Markdown for presentations (at least, when it’s a presentation where I don’t need to collaborate with others). However, using this sort of approach generally requires support for file:// URLs (or requires running a local web server). In this post, I’ll show you how to make file:// URLs work again in Firefox.

I tested this procedure using Firefox 74 on Ubuntu, but it should work on any platform on which Firefox is supported. Note that the location of the user.js file varies from OS to OS; see this MozillaZine Knowledge Base entry for more details.

Here’s the process I followed:

  1. Create the user.js file (it doesn’t exist by default) in the correct location for your Firefox profile. (Refer to the MozillaZine KB article linked above for exactly where that is on your OS.)

  2. In the user.js, add these entries:

    // Allow file:// links
    user_pref("capability.policy.policynames", "localfilelinks");
    user_pref("capability.policy.localfilelinks.sites", "file://");
    user_pref("capability.policy.localfilelinks.checkloaduri. Continue reading

Installing MultiMarkdown 6 on Ubuntu 19.10

Markdown is a core part of many of my workflows. For quite a while, I’ve used Fletcher Penny’s MultiMarkdown processor (available on GitHub) on my various systems. Fletcher offers binary builds for Windows and macOS, but not a Linux binary. Three years ago, I wrote a post on how to compile MultiMarkdown 6 for a Fedora-based system. In this post, I’ll share how to compile it on an Ubuntu-based system.

Just as in the Fedora post, I used Vagrant with the Libvirt provider to spin up a temporary build VM.
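
If you’d like to follow along, something like this should work (the box name is an assumption; any Ubuntu 19.10 box with libvirt support will do):

    # Spin up a throwaway Ubuntu 19.10 build VM with the Libvirt provider
    vagrant init generic/ubuntu1910
    vagrant up --provider=libvirt
    vagrant ssh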

In this clean build VM, I perform the following steps to build a multimarkdown binary:

  1. Install the necessary packages with this command:

    sudo apt install gcc make cmake git build-essential
    
  2. Clone the source code repository:

    git clone https://github.com/fletcher/MultiMarkdown-6
    
  3. Switch into the directory where the repository was cloned and run these commands to build the binary:

    make        # invokes CMake to create and populate the build directory
    cd build
    make        # compiles the multimarkdown binary
    
  4. Once the second make command is done, you’re left with a multimarkdown binary. Copy that to the host system (scp works fine). Use vagrant destroy to clean up the temporary build VM once you’ve copied the binary to your host system.

And with that, you’re good to go!

Setting up etcd with Kubeadm, containerd Edition

In late 2018, I wrote a couple of blog posts on using kubeadm to set up an etcd cluster. The first one was this post, which used kubeadm only to generate the TLS certs but ran etcd as a systemd service. I followed that up a couple of months later with this post, which used kubeadm to run etcd as a static Pod on each system. It’s that latter post—running etcd as a static Pod on each system in the cluster—that I’ll be revisiting in this post, only this time using containerd as the container runtime instead of Docker.

This post assumes you’ve already created the VMs/instances on which etcd will run, that an appropriate version of Linux is installed (I’ll be using Ubuntu 18.04.4 LTS), and that the appropriate packages have been installed. This post also assumes that you’ve already made sure that the correct etcd ports have been opened between the VMs/instances, so that etcd can communicate properly.

Finally, this post builds upon the official Kubernetes documentation on setting up an etcd cluster using kubeadm. The official guide assumes the use of Docker, whereas this post will focus on using containerd as the container Continue reading
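
One practical difference to keep in mind: with containerd there’s no docker CLI on the nodes, so inspecting the etcd containers means using a CRI-aware tool such as crictl. A hedged example (the socket path is containerd’s default):

    # List running containers via containerd's CRI socket
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps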

HA Kubernetes Clusters on AWS with Cluster API v1alpha3

A few weeks ago, I published a post on HA Kubernetes clusters on AWS with Cluster API v1alpha2. That post was itself a follow-up to a post I wrote in September 2019 on setting up HA clusters using Cluster API v1alpha1. In this post, I’ll follow up on both of those posts with a look at setting up HA Kubernetes clusters on AWS using Cluster API v1alpha3. Although this post is similar to the v1alpha2 post, be aware there are some notable changes in v1alpha3, particularly with regard to the control plane.

If you’re not yet familiar with Cluster API, take a look at this high-level overview I wrote in August 2019. That post will provide an explanation of the project’s goals as well as provide some terminology.

In this post, I won’t discuss the process of establishing a management cluster; I’m assuming your Cluster API management cluster is already up and running. (I do have some articles in the content pipeline that discuss creating a management cluster.) Instead, this post will focus on creating a highly-available workload cluster. By “highly available,” I mean a cluster with multiple control plane nodes that are distributed across multiple availability zones (AZs). Continue reading
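
As a preview of the control plane change mentioned above, v1alpha3 models the control plane declaratively via a KubeadmControlPlane object; a hedged sketch (names and version are illustrative):

    kind: KubeadmControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    metadata:
      name: example-control-plane
    spec:
      replicas: 3                  # spread across AZs by the AWS provider
      version: v1.18.2
      infrastructureTemplate:
        kind: AWSMachineTemplate
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        name: example-control-plane
      kubeadmConfigSpec: {}        # real manifests configure init/join settings here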

Technology Short Take 125

Welcome to Technology Short Take #125, where I have a collection of articles about various data center and cloud technologies collected from around the Internet. I hope I have managed to find a few useful things for you! (If not, contact me on Twitter and tell me how I can make this more helpful for you.)

Networking

Servers/Hardware

Nothing this time around. I’ll try hard to find something useful for the next Technology Short Take.

Security

Using KinD with Docker Machine on macOS

I’ll admit right up front that this post is more “science experiment” than practical, everyday use case. It all started when I was trying some Cluster API-related stuff that leveraged KinD (Kubernetes in Docker). Obviously, given the name, KinD relies on Docker, and when running Docker on macOS you generally would use Docker Desktop. At the time, though, I was using Docker Machine, and as it turns out KinD doesn’t like Docker Machine. In this post, I’ll show you how to make KinD work with Docker Machine.

By the way, it’s worth noting that, per the KinD maintainers, this isn’t a tested configuration. Proceed at your own risk, and know that while this may work for some use cases it won’t necessarily work for all use cases.

Prerequisites/Assumptions

These instructions assume you’ve already installed both KinD and Docker Machine, along with an associated virtualization solution. I’ll be using VirtualBox, but this should be largely the same for VMware Fusion or Parallels (or even HyperKit, if you somehow manage to get that working). I’m also assuming that you have jq installed; if not, get it here.

Making KinD work with Docker Machine

Follow the steps below to make Continue reading
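
As a hint at where this goes: kind talks to whatever Docker daemon the standard client environment variables point at, and docker-machine can emit those variables. A hedged starting point (the machine name default is an assumption):

    # Point the Docker client environment at the Docker Machine VM
    eval "$(docker-machine env default)"

    # kind will now talk to that remote daemon (further tweaks,
    # per the post, are still needed)
    echo "$DOCKER_HOST"
    kind create cluster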

Kustomize Transformer Configurations for Cluster API v1alpha3

A few days ago I wrote an article on configuring kustomize transformers for use with Cluster API (CAPI), in which I explored how users could configure the kustomize transformers—the parts of kustomize that actually modify objects—to be a bit more CAPI-aware. By doing so, using kustomize with CAPI manifests becomes much easier. Since that post, the CAPI team released v1alpha3. In working with v1alpha3, I realized my kustomize transformer configurations were incorrect. In this post, I will share CAPI v1alpha3 configurations for kustomize transformers.

In the previous post, I referenced changes to both namereference.yaml (to configure the nameReference transformer) and commonlabels.yaml (to configure the commonLabels transformer). CAPI v1alpha3 has changed the default way labels are used with MachineDeployments, so for v1alpha3 you may be able to get away with only changes to namereference.yaml. (If you know you are going to want/need additional labels on your MachineDeployment, then plan on changes to commonlabels.yaml as well.)

Here are the CAPI v1alpha3 changes needed to namereference.yaml:

- kind: Cluster
  group: cluster.x-k8s.io
  version: v1alpha3
  fieldSpecs:
  - path: spec/clusterName
    kind: MachineDeployment
  - path: spec/template/spec/clusterName
    kind: MachineDeployment

- kind: AWSCluster
  group: infrastructure.cluster.x-k8s.io
   Continue reading