Scott Lowe

Author Archives: Scott Lowe

Technology Short Take 123

Welcome to Technology Short Take #123, the first of 2020! I hope that everyone had a wonderful holiday season, but now it’s time to jump back into the fray with a collection of technical articles from around the Internet. Here’s hoping that I found something useful for you!

Networking

  • Eric Sloof mentions the NSX-T load balancing encyclopedia (found here), which aims to be an authoritative resource for NSX-T load balancing configuration and management.
  • David Gee has an interesting set of articles exploring service function chaining in service mesh environments (part 1, part 2, part 3, and part 4).

Servers/Hardware

Security

  • On January 13, Brian Krebs discussed the critical flaw (a vulnerability in crypt32.dll, a core Windows cryptographic component) that was rumored Continue reading

Removing Unnecessary Complexity

Recently, I’ve been working to remove unnecessary complexity from my work environment. I wouldn’t say that I’m going full-on minimalist (not that there’s anything wrong with that), but I was beginning to feel like maintaining this complexity was taking focus, willpower, and mental capacity away from other, more valuable efforts. Additionally, given the challenges I know lie ahead of me this year (see here for more information), I suspect I’ll need all the focus, willpower, and mental capacity I can get!

When I say “unnecessary complexity,” by the way, I’m referring to added complexity that doesn’t bring any real or significant benefit. Sometimes there’s no getting around the complexity, but when that complexity doesn’t create any value, it’s unnecessary by my definition.

Primarily, this “reduction in complexity” shows up in three areas:

  1. My computing environment
  2. My home office setup
  3. My lab resources

My Computing Environment

Readers who have followed me for more than a couple years know that I migrated away from macOS for about 9 months in 2017 (see here for a wrap-up of that effort), then again in 2018 when I joined Heptio (some details are available in this update). Since switching to Fedora on a Lenovo Continue reading

Looking Back: 2019 Project Report Card

As has been my custom over the last five years or so, in the early part of the year I like to share with my readers a list of personal projects for the upcoming year (here’s the 2019 list). Then, near the end of that same year or very early in the following year, I evaluate how I performed against that list of personal projects (for example, here’s my project report card for 2018). In this post, I’ll continue that pattern with an evaluation of my progress against my 2019 project list.

For reference, here’s the list of projects I set out for myself for 2019 (you can read the associated blog post, if you like, for additional context):

  1. Make at least one code contribution to an open source project. (Stretch goal: Make three code contributions to open source projects.)
  2. Add at least three new technology areas to my “learning-tools” repository. (Stretch goal: Add five new technology areas to the “learning-tools” repository.)
  3. Become more familiar with CI/CD solutions and patterns.
  4. Create at least three non-written content pieces. (Stretch goal: Create five pieces of non-written content.)
  5. Complete a “wildcard project” (if applicable).

Here’s how I Continue reading

New Year, New Adventure

I’ll skip the build-up and jump straight to the whole point of this post: a once-in-a-lifetime opportunity has come up and I’m embarking on a new adventure starting in early 2020. No, I’m not changing jobs…but I am changing time zones.

Sometime in the next month or two (dates are still being finalized), I’ll be temporarily relocating to Tokyo, Japan, to help build out VMware’s Cloud Native Field Engineering team to provide consulting and professional services around cloud-native technologies and modern application platforms for customers in Japan. Basically, my charter is to replicate the former Heptio Field Engineering team (now the Cloud Native Field Engineering Practice within VMware) in Japan.

Accomplishing this feat will involve a variety of responsibilities: a pretty fair amount of training/enablement, engaging customers on the pre-sales side, helping lead projects on the post-sales (delivery) side, mentoring team members, performing some project management, probably some people management, and the infamous “other duties as required.” All in about six months (the initial duration of my assignment), and all while learning Japanese! No big deal, right?

I’m simultaneously excited and scared. I’m excited by the idea of living in Tokyo, but let’s be honest—the language barrier is Continue reading

Technology Short Take 122

Welcome to Technology Short Take #122! Luckily I did manage to get another Tech Short Take squeezed in for 2019, just so all my readers could have some reading materials for the holidays. I’m kidding! No, I mean I really am kidding—don’t read stuff over the holidays. Spend time with your family instead. The investment in your family will pay off in later years, trust me.

Networking

Servers/Hardware

Security

Cloud Computing/Cloud Management

Technology Short Take 121

Welcome to Technology Short Take #121! This may possibly be the last Tech Short Take of 2019 (not sure if I’ll be able to squeeze in another one), so here’s hoping that you find something useful, helpful, or informative in the links that I’ve collected. Enjoy some light reading over your festive holiday season!

Networking

Technology Short Take 120

Welcome to Technology Short Take #120! Wow…hard to believe it’s been almost two months since the last Tech Short Take. Sorry about that! Hopefully something I share here in this Tech Short Take is useful or helpful to readers. On to the content!

Networking

Servers/Hardware

I don’t have anything to share this time around, but I’ll stay alert for content to include in future Tech Short Takes.

Security

KubeCon 2019 Day 3 and Event Summary

Keynotes

Bryan Liles kicked off the day 3 morning keynotes with a discussion of “finding Kubernetes’ Rails moment”—basically focusing on how Kubernetes enables folks to work on/solve higher-level problems. Key phrase from Bryan’s discussion (which, as usual, incorporated the humor I love to see from Bryan): “Kubernetes isn’t the destination. Kubernetes is the vehicle that takes us to the destination.” Ian Coldwater delivered a talk on looking at Kubernetes from the attacker’s point of view, and using that perspective to secure and harden Kubernetes. Two folks from Walmart also discussed their use case, which involves running Kubernetes clusters in retail locations to support a point-of-sale (POS) application at the check-out register. Finally, there was a discussion of chaos engineering from folks at Gremlin and Target.

No Breakout Sessions

Due to booth duty and my flight home, I wasn’t able to attend any breakout sessions today.

Event Summary

If I’m completely honest, I didn’t get as much out of the event as I’d hoped. I’m not yet sure if that is because I didn’t get to attend as many sessions as I’d hoped/planned (due to problems with sessions being moved/rescheduled or whatever), if my choice of sessions was just poor, Continue reading

KubeCon 2019 Day 2 Summary

Keynotes

This morning’s keynotes were, in my opinion, better than yesterday’s morning keynotes. (I missed the closing keynotes yesterday due to customer meetings and calls.) Only a couple of keynotes really stuck out. Vicki Cheung provided some useful suggestions for tools that are helping to “close the gap” on user experience, and there was an interesting (but a bit overly long) session with a live demo on running a 5G mobile core on Kubernetes.

Running Large-Scale Stateful Workloads

Due to some power outages at the conference venue resulting from rain in San Diego, the Prometheus session I had planned to attend got moved to a different time. As a result, I sat in this session by Lyft instead. The topic was about running large-scale stateful workloads, but the content was really about a custom solution Lyft built (called Flyte) that leveraged CRDs and custom controllers to help manage stateful workloads. While it’s awesome that companies like Lyft can extend Kubernetes to address their specific needs, this session isn’t helpful to more “ordinary” companies that are trying to figure out how to run their stateful workloads on Kubernetes. I’d really like the CNCF and the conference committee to try Continue reading

KubeCon 2019 Day 1 Summary

This week I’m in San Diego for KubeCon + CloudNativeCon. Instead of liveblogging each session individually, I thought I might instead attempt a “daily summary” post that captures highlights from all the sessions each day. Here’s my recap of day 1 at KubeCon + CloudNativeCon.

Keynotes

KubeCon + CloudNativeCon doesn’t have “one” keynote; it uses a series of shorter keynotes by various speakers. This has advantages and disadvantages; one key advantage is that there is more variety, and the attendees are more likely to stay engaged. I particularly enjoyed Bryan Liles’ CNCF project updates; I like Bryan’s sense of humor, and getting updates on some of the CNCF projects is always useful. As for some of the other keynotes, those that were thinly disguised vendor sales pitches were generally pretty poor.

Introduction to the Virtual Kubelet

I was running late for the start of this session due to booth duty, and I guess the stuff I needed most was presented in that portion I missed. Most of what I saw was about Netflix Titus, and how the Netflix team ported Titus from Mesos to Virtual Kubelet. However, that information was so specific to Netflix’s particular use of Virtual Kubelet that it Continue reading

Using Kustomize with Cluster API Manifests

A topic that’s been in the back of my mind since writing the Cluster API introduction post is how someone could use kustomize to modify the Cluster API manifests. Fortunately, this is reasonably straightforward. It doesn’t require any “hacks” like those needed to use kustomize with kubeadm configuration files, but similar to modifying kubeadm configuration files you’ll generally need to use the patching functionality of kustomize when working with Cluster API manifests. In this post, I’d like to take a fairly detailed look at how someone might go about using kustomize with Cluster API.

By the way, readers who are unfamiliar with kustomize should probably read this introductory post first, and then read the post on using kustomize with kubeadm configuration files. I suggest reading the latter post because it provides an overview of how to use kustomize to patch a specific portion of a manifest, and you’ll use that functionality again when modifying Cluster API manifests.
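To make the patching idea concrete, here’s a minimal sketch of what a kustomize overlay for a Cluster API manifest might look like. The file names and the patched CIDR are hypothetical, and whether a strategic-merge patch or a JSON 6902 patch is the right mechanism for a given field is exactly the sort of detail the post digs into:

```yaml
# kustomization.yaml -- hypothetical file names
resources:
- cluster.yaml          # the base Cluster API manifest
patchesStrategicMerge:
- cluster-patch.yaml
---
# cluster-patch.yaml -- overrides only the pod CIDR; every other
# field is inherited from the base manifest
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["172.16.0.0/16"]
```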

A Fictional Use Case

For this post, I’m going to build out a fictional use case/scenario for the use of kustomize and Cluster API. Here are the key points to this fictional use case:

  1. Three different clusters on AWS are needed. The Continue reading

Programmatically Creating Kubernetes Manifests

A while ago I came across a utility named jk, which purported to be able to create structured text files—in JSON, YAML, or HCL—using JavaScript (or TypeScript that has been transpiled into JavaScript). One of the use cases was creating Kubernetes manifests. The GitHub repository for jk describes it as “a data templating tool”, and that’s accurate for simple use cases. In more complex use cases, the use of a general-purpose programming language like JavaScript in jk reveals that the tool has the potential to be much more than just a data templating tool—if you have the JavaScript expertise to unlock that potential.

The basic idea behind jk is that you could write some relatively simple JavaScript, and jk will take that JavaScript and use it to create some type of structured data output. I’ll focus on Kubernetes manifests here, but as you read keep in mind you could use this for other purposes as well. (I explore a couple other use cases at the end of this post.)

Here’s a very simple example:

const service = new api.core.v1.Service('appService', {
    metadata: {
        namespace: 'appName',
        labels: {
            app: 'appName',
            team: 'blue',
        },
    },
    spec: {
        selector:  Continue reading

Spousetivities in Barcelona at VMworld EMEA 2019

Barcelona is probably my favorite city in Europe—which works out well, since VMware seems to have settled on Barcelona as the destination for VMworld EMEA. VMworld is back in Barcelona again this year, and I’m fortunate enough to be able to attend. VMworld in Barcelona wouldn’t be the same without Spousetivities, though, and I’m happy to report that Spousetivities will be in Barcelona. In fact, registration is already open!

If you’re bringing along a spouse, significant other, boyfriend/girlfriend, or just some family members, you owe it to them to look into Spousetivities. You’ll be able to focus at the conference knowing that your loved one(s) are not only safe, but enjoying some amazing activities in and around Barcelona. Here’s a quick peek at what Crystal and her team have lined up this year:

  • A wine tour of the Penedes region (southwest of Barcelona)—attendees will get to see some amazing wineries not frequented by tourists!
  • A walking tour of Barcelona
  • A tapas cooking class
  • A fantastic walking tour of Costa Brava, Pals, and Girona
  • A sailing tour (it’s a 3 hour tour, but it won’t end up like Gilligan’s)

Lunch and private transportation are included for all activities, and all activities Continue reading

Using Kustomize with Kubeadm Configuration Files

Last week I had a crazy idea: if kustomize can be used to modify YAML files like Kubernetes manifests, then could one use kustomize to modify a kubeadm configuration file, which is also a YAML manifest? So I asked about it in one of the Kubernetes-related channels in Slack at work, and as it turns out it’s not such a crazy idea after all! So, in this post, I’ll show you how to use kustomize to modify kubeadm configuration files.

If you aren’t already familiar with kustomize, I recommend having a look at this blog post, which provides an overview of this tool. For the base kubeadm configuration files to modify, I’ll use kubeadm configuration files from this post on setting up a Kubernetes 1.15 cluster with the AWS cloud provider.

While the blog post linked above provides an overview of kustomize, it certainly doesn’t cover all the functionality kustomize provides. In this particular use case—modifying kubeadm configuration files—the functionality described in the linked blog post doesn’t get you where you need to go. Instead, you’ll have to use the patching functionality of kustomize, which allows you to overwrite specific fields within the YAML definition Continue reading
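As a rough illustration of what such a patch might target, consider overriding the pod subnet in a kubeadm ClusterConfiguration. This is a sketch only—the value is a placeholder, and the exact kustomize wiring needed to apply a patch to a kubeadm file is what the rest of the post walks through:

```yaml
# cluster-config-patch.yaml -- a hypothetical patch that overrides
# the pod subnet in a base kubeadm ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"
```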

Technology Short Take 119

Welcome to Technology Short Take #119! As usual, I’ve collected some articles and links from around the Internet pertaining to various data center- and cloud-related topics. This installment in the Tech Short Takes series is much shorter than usual, but hopefully I’ve managed to find something that proves to be helpful or informative! Now, on to the content!

Networking

  • Chip Zoller has a write-up on doing HTTPS ingress with Enterprise PKS. Normally I’d put something like this in a different section, but this is as much a write-up on how to configure NSX-T correctly as it is about configuring Ingress objects in Kubernetes.
  • I saw this headline, and immediately thought it was just “cloud native”-washing (i.e., tagging everything as “cloud native”). Fortunately, the diagrams illustrate that there is something substantive behind the headline. The “TL;DR” for those who are interested is that this solution bypasses the normal iptables layer involved in most Kubernetes implementations to load balance traffic directly to Pods in the cluster. Unfortunately, this appears to be GKE-specific.

Servers/Hardware

Nothing this time around. I’ll keep an eye out for content to include next time!

Security

  • The Kubernetes project recently underwent a security audit; more information on the Continue reading

Exploring Cluster API v1alpha2 Manifests

The Kubernetes community recently released v1alpha2 of Cluster API (a monumental effort, congrats to everyone involved!), and with it comes a number of fairly significant changes. Aside from the new Quick Start, there isn’t (yet) a great deal of documentation on Cluster API (hereafter just called CAPI) v1alpha2, so in this post I’d like to explore the structure of the CAPI v1alpha2 YAML manifests, along with links back to the files that define the fields for the manifests. I’ll focus on the CAPI provider for AWS (affectionately known as CAPA).

As a general note, any links back to the source code on GitHub will reference the v0.2.1 release for CAPI and the v0.4.0 release for CAPA, which are the first v1alpha2 releases for these projects.

Let’s start with looking at a YAML manifest to define a Cluster in CAPA (this is taken directly from the Quick Start):

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: capi-quickstart
spec:
  region: us-east-1
  sshKeyName: default

Right off the bat, Continue reading

An Introduction to Kustomize

kustomize is a tool designed to let users “customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is” (wording taken directly from the kustomize GitHub repository). Users can run kustomize directly, or—starting with Kubernetes 1.14—use kubectl -k to access the functionality (although the standalone binary is newer than the functionality built into kubectl as of the Kubernetes 1.15 release). In this post, I’d like to provide an introduction to kustomize.

In its simplest form/usage, kustomize is simply a set of resources (these would be YAML files that define Kubernetes objects like Deployments, Services, etc.) plus a set of instructions on the changes to be made to these resources. Similar to the way make leverages a file named Makefile to define its function or the way Docker uses a Dockerfile to build a container, kustomize uses a file named kustomization.yaml to store the instructions on the changes the user wants made to a set of resources.

Here’s a simple kustomization.yaml file:

resources:
- deployment.yaml
- service.yaml
namePrefix: dev-
namespace: development
commonLabels:
  environment: development

This article won’t attempt to explain all the various fields that could be Continue reading

Consuming Pre-Existing AWS Infrastructure with Cluster API

All the posts I’ve published so far about Kubernetes Cluster API (CAPI) assume that the underlying infrastructure needs to be created. This is fine, because generally speaking that’s part of the value of CAPI—it will create new cloud infrastructure for every Kubernetes cluster it instantiates. In the case of AWS, this includes VPCs, subnets, route tables, Internet gateways, NAT gateways, Elastic IPs, security groups, load balancers, and (of course) EC2 instances. But what if you didn’t want CAPA to create AWS infrastructure? In this post, I’ll show you how to consume pre-existing AWS infrastructure with Cluster API for AWS (CAPA).

Why would one not want CAPA to create the necessary AWS infrastructure? There are a variety of reasons, but the one that jumps to my mind immediately is that an organization may have established/proven expertise and a process around the use of infrastructure-as-code (IaC) tooling like Terraform, CloudFormation, or Pulumi. In cases like this, such organizations would very likely prefer to continue to use the tooling they already know and with which they are already familiar, instead of relying on CAPA. Further, the use of third-party IaC tooling may allow for greater customization of the infrastructure than CAPA Continue reading
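If memory of the CAPA API serves, consuming pre-existing infrastructure largely comes down to filling in the network details on the AWSCluster object instead of leaving them blank for CAPA to populate. Here’s a sketch, assuming the v1alpha2 `networkSpec` field—the IDs are obviously placeholders, and the field names should be verified against the CAPA types for your release:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: existing-infra
spec:
  region: us-east-1
  sshKeyName: default
  networkSpec:
    vpc:
      id: vpc-0123456789abcdef0      # pre-existing VPC
    subnets:
    - id: subnet-0123456789abcdef0   # pre-existing subnet
```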

Highly Available Kubernetes Clusters on AWS with Cluster API

In my previous post on Kubernetes Cluster API, I showed readers how to use the Cluster API provider for AWS (referred to as CAPA) to instantiate a Kubernetes cluster on AWS. Readers who followed through the instructions in that post may note CAPA places all the nodes for a given cluster in a single AWS availability zone (AZ) by default. While multi-AZ Kubernetes deployments are not without their own considerations, it’s generally considered beneficial to deploy across multiple AZs for higher availability. In this post, I’ll share how to deploy highly-available Kubernetes clusters—defined as having multiple control plane nodes distributed across multiple AZs—using Cluster API for AWS (CAPA).

This post assumes that you have already deployed a management cluster, so the examples may mention using kubectl to apply CAPA manifests against the management cluster to deploy a highly-available workload cluster. However, the information needed in the CAPA manifests would also work with clusterctl in order to deploy a highly-available management cluster. (Not familiar with what I mean when I say “management cluster” or “workload cluster”? Be sure to go read the introduction to Cluster API post first.)

Also, this post was written with CAPA v1alpha1 in mind; a new Continue reading
