Author Archives: Scott Lowe
Recently, I’ve been working to remove unnecessary complexity from my work environment. I wouldn’t say that I’m going full-on minimalist (not that there’s anything wrong with that), but I was beginning to feel like maintaining this complexity was taking focus, willpower, and mental capacity away from other, more valuable, efforts. Additionally, given the challenges I know lie ahead of me this year (see here for more information), I suspect I’ll need all the focus, willpower, and mental capacity I can get!
When I say “unnecessary complexity,” by the way, I’m referring to added complexity that doesn’t bring any real or significant benefit. Sometimes there’s no getting around the complexity, but when that complexity doesn’t create any value, it’s unnecessary in my definition.
Primarily, this “reduction in complexity” shows up in three areas:
Readers who have followed me for more than a couple years know that I migrated away from macOS for about 9 months in 2017 (see here for a wrap-up of that effort), then again in 2018 when I joined Heptio (some details are available in this update). Since switching to Fedora on a Lenovo Continue reading
As has been my custom over the last five years or so, in the early part of the year I like to share with my readers a list of personal projects for the upcoming year (here’s the 2019 list). Then, near the end of that same year or very early in the following year, I evaluate how I performed against that list of personal projects (for example, here’s my project report card for 2018). In this post, I’ll continue that pattern with an evaluation of my progress against my 2019 project list.
For reference, here’s the list of projects I set out for myself for 2019 (you can read the associated blog post, if you like, for additional context):
Here’s how I Continue reading
I’ll skip the build-up and jump straight to the whole point of this post: a once-in-a-lifetime opportunity has come up and I’m embarking on a new adventure starting in early 2020. No, I’m not changing jobs…but I am changing time zones.
Sometime in the next month or two (dates are still being finalized), I’ll be temporarily relocating to Tokyo, Japan, to help build out VMware’s Cloud Native Field Engineering team to provide consulting and professional services around cloud-native technologies and modern application platforms for customers in Japan. Basically, my charter is to replicate the former Heptio Field Engineering team (now the Cloud Native Field Engineering Practice within VMware) in Japan.
Accomplishing this feat will involve a variety of responsibilities: a pretty fair amount of training/enablement, engaging customers on the pre-sales side, helping lead projects on the post-sales (delivery) side, mentoring team members, performing some project management, probably some people management, and the infamous “other duties as required.” All in about six months (the initial duration of my assignment), and all while learning Japanese! No big deal, right?
I’m simultaneously excited and scared. I’m excited by the idea of living in Tokyo, but let’s be honest—the language barrier is Continue reading
Welcome to Technology Short Take #122! Luckily I did manage to get another Tech Short Take squeezed in for 2019, just so all my readers could have some reading materials for the holidays. I’m kidding! No, I mean I really am kidding—don’t read stuff over the holidays. Spend time with your family instead. The investment in your family will pay off in later years, trust me.
Welcome to Technology Short Take #121! This may possibly be the last Tech Short Take of 2019 (not sure if I’ll be able to squeeze in another one), so here’s hoping that you find something useful, helpful, or informative in the links that I’ve collected. Enjoy some light reading over your festive holiday season!
Welcome to Technology Short Take #120! Wow…hard to believe it’s been almost two months since the last Tech Short Take. Sorry about that! Hopefully something I share here in this Tech Short Take is useful or helpful to readers. On to the content!
Using mitmproxy to inspect kubectl traffic. I’m now inspired to go do this myself and see what knowledge I can gain.

I don’t have anything to share this time around, but I’ll stay alert for content to include in future Tech Short Takes.
firewalld as found in CentOS 8 may prove useful to some readers. I’ve been messing around with firewalld ever since Continue reading

Bryan Liles kicked off the day 3 morning keynotes with a discussion of “finding Kubernetes’ Rails moment”—basically focusing on how Kubernetes enables folks to work on/solve higher-level problems. Key phrase from Bryan’s discussion (which, as usual, incorporated the humor I love to see from Bryan): “Kubernetes isn’t the destination. Kubernetes is the vehicle that takes us to the destination.” Ian Coldwater delivered a talk on looking at Kubernetes from the attacker’s point of view, and using that perspective to secure and harden Kubernetes. Two folks from Walmart also discussed their use case, which involves running Kubernetes clusters in retail locations to support a point-of-sale (POS) application at the check-out register. Finally, there was a discussion of chaos engineering from folks at Gremlin and Target.
Due to booth duty and my flight home, I wasn’t able to attend any breakout sessions today.
If I’m completely honest, I didn’t get as much out of the event as I’d hoped. I’m not yet sure if that is because I didn’t get to attend as many sessions as I’d hoped/planned (due to problems with sessions being moved/rescheduled or whatever), if my choice of sessions was just poor, Continue reading
This morning’s keynotes were, in my opinion, better than yesterday’s morning keynotes. (I missed the closing keynotes yesterday due to customer meetings and calls.) Only a couple of keynotes really stuck out. Vicki Cheung provided some useful suggestions for tools that are helping to “close the gap” on user experience, and there was an interesting (but a bit overly long) session with a live demo on running a 5G mobile core on Kubernetes.
Due to some power outages at the conference venue resulting from rain in San Diego, the Prometheus session I had planned to attend got moved to a different time. As a result, I sat in this session by Lyft instead. The topic was about running large-scale stateful workloads, but the content was really about a custom solution Lyft built (called Flyte) that leveraged CRDs and custom controllers to help manage stateful workloads. While it’s awesome that companies like Lyft can extend Kubernetes to address their specific needs, this session isn’t helpful to more “ordinary” companies that are trying to figure out how to run their stateful workloads on Kubernetes. I’d really like the CNCF and the conference committee to try Continue reading
This week I’m in San Diego for KubeCon + CloudNativeCon. Instead of liveblogging each session individually, I thought I might instead attempt a “daily summary” post that captures highlights from all the sessions each day. Here’s my recap of day 1 at KubeCon + CloudNativeCon.
KubeCon + CloudNativeCon doesn’t have “one” keynote; it uses a series of shorter keynotes by various speakers. This has advantages and disadvantages; one key advantage is that there is more variety, and the attendees are more likely to stay engaged. I particularly enjoyed Bryan Liles’ CNCF project updates; I like Bryan’s sense of humor, and getting updates on some of the CNCF projects is always useful. As for some of the other keynotes, those that were thinly-disguised vendor sales pitches were generally pretty poor.
I was running late for the start of this session due to booth duty, and I guess the stuff I needed most was presented in that portion I missed. Most of what I saw was about Netflix Titus, and how the Netflix team ported Titus from Mesos to Virtual Kubelet. However, that information was so specific to Netflix’s particular use of Virtual Kubelet that it Continue reading
A topic that’s been in the back of my mind since writing the Cluster API introduction post is how someone could use kustomize to modify the Cluster API manifests. Fortunately, this is reasonably straightforward. It doesn’t require any “hacks” like those needed to use kustomize with kubeadm configuration files, but similar to modifying kubeadm configuration files you’ll generally need to use the patching functionality of kustomize when working with Cluster API manifests. In this post, I’d like to take a fairly detailed look at how someone might go about using kustomize with Cluster API.
By the way, readers who are unfamiliar with kustomize should probably read this introductory post first, and then read the post on using kustomize with kubeadm configuration files. I suggest reading the latter post because it provides an overview of how to use kustomize to patch a specific portion of a manifest, and you’ll use that functionality again when modifying Cluster API manifests.
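To give a rough sense of the shape of such a patch before diving into the details, here’s a minimal sketch. The file names and the modified CIDR value are purely illustrative, and the base cluster.yaml is assumed to look like the quick start manifest. First, a kustomization.yaml that references the base manifest and a patch:

resources:
- cluster.yaml          # base Cluster API manifest (assumed to match the quick start)
patchesStrategicMerge:
- cluster-patch.yaml    # patch applied on top of the base

Then a cluster-patch.yaml that overrides only the Pod CIDR range, matched to the base resource by kind and name:

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart              # must match the name in the base manifest
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["172.16.0.0/16"]  # illustrative replacement value

Running kustomize build against the directory containing these files emits the base manifest with only that field changed.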
For this post, I’m going to build out a fictional use case/scenario for the use of kustomize and Cluster API. Here are the key points to this fictional use case:
A while ago I came across a utility named jk, which purported to be able to create structured text files—in JSON, YAML, or HCL—using JavaScript (or TypeScript that has been transpiled into JavaScript). One of the use cases was creating Kubernetes manifests. The GitHub repository for jk describes it as “a data templating tool”, and that’s accurate for simple use cases. In more complex use cases, the use of a general-purpose programming language like JavaScript in jk reveals that the tool has the potential to be much more than just a data templating tool—if you have the JavaScript expertise to unlock that potential.
The basic idea behind jk is that you could write some relatively simple JavaScript, and jk will take that JavaScript and use it to create some type of structured data output. I’ll focus on Kubernetes manifests here, but as you read keep in mind you could use this for other purposes as well. (I explore a couple other use cases at the end of this post.)
Here’s a very simple example:
const service = new api.core.v1.Service('appService', {
  metadata: {
    namespace: 'appName',
    labels: {
      app: 'appName',
      team: 'blue',
    },
  },
  spec: {
    selector: Continue reading
Barcelona is probably my favorite city in Europe—which works out well, since VMware seems to have settled on Barcelona as the destination for VMworld EMEA. VMworld is back in Barcelona again this year, and I’m fortunate enough to be able to attend. VMworld in Barcelona wouldn’t be the same without Spousetivities, though, and I’m happy to report that Spousetivities will be in Barcelona. In fact, registration is already open!
If you’re bringing along a spouse, significant other, boyfriend/girlfriend, or just some family members, you owe it to them to look into Spousetivities. You’ll be able to focus at the conference knowing that your loved one(s) are not only safe, but enjoying some amazing activities in and around Barcelona. Here’s a quick peek at what Crystal and her team have lined up this year:
Lunch and private transportation are included for all activities, and all activities Continue reading
Last week I had a crazy idea: if kustomize can be used to modify YAML files like Kubernetes manifests, then could one use kustomize to modify a kubeadm configuration file, which is also a YAML manifest? So I asked about it in one of the Kubernetes-related channels in Slack at work, and as it turns out it’s not such a crazy idea after all! In this post, I’ll show you how to use kustomize to modify kubeadm configuration files.
If you aren’t already familiar with kustomize, I recommend having a look at this blog post, which provides an overview of this tool. For the base kubeadm configuration files to modify, I’ll use kubeadm configuration files from this post on setting up a Kubernetes 1.15 cluster with the AWS cloud provider.
While the blog post linked above provides an overview of kustomize, it certainly doesn’t cover all the functionality kustomize provides. In this particular use case—modifying kubeadm configuration files—the functionality described in the linked blog post doesn’t get you where you need to go. Instead, you’ll have to use the patching functionality of kustomize, which allows you to overwrite specific fields within the YAML definition Continue reading
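As a rough sketch of where this is headed (the file names and values here are hypothetical, not taken from the posts linked above), a kustomization.yaml could reference the base kubeadm configuration plus a patch file:

resources:
- kubeadm-config.yaml    # base kubeadm configuration file
patchesStrategicMerge:
- version-patch.yaml     # patch overriding selected fields

with version-patch.yaml overriding a single field, such as the Kubernetes version:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: kubeadm-config   # kustomize matches the patch to the base by kind and name
kubernetesVersion: v1.15.3

Note that kustomize identifies resources by kind and name, so this sketch assumes the base kubeadm file carries a metadata.name of kubeadm-config; whether kubeadm itself tolerates that extra field is worth verifying against your kubeadm version.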
Welcome to Technology Short Take #119! As usual, I’ve collected some articles and links from around the Internet pertaining to various data center- and cloud-related topics. This installment in the Tech Short Takes series is much shorter than usual, but hopefully I’ve managed to find something that proves to be helpful or informative! Now, on to the content!
A way to bypass the iptables layer involved in most Kubernetes implementations and load balance traffic directly to Pods in the cluster. Unfortunately, this appears to be GKE-specific.

Nothing this time around. I’ll stay tuned for content to include next time!
The Kubernetes community recently released v1alpha2 of Cluster API (a monumental effort, congrats to everyone involved!), and with it comes a number of fairly significant changes. Aside from the new Quick Start, there isn’t (yet) a great deal of documentation on Cluster API (hereafter just called CAPI) v1alpha2, so in this post I’d like to explore the structure of the CAPI v1alpha2 YAML manifests, along with links back to the files that define the fields for the manifests. I’ll focus on the CAPI provider for AWS (affectionately known as CAPA).
As a general note, any links back to the source code on GitHub will reference the v0.2.1 release for CAPI and the v0.4.0 release for CAPA, which are the first v1alpha2 releases for these projects.
Let’s start by looking at a YAML manifest to define a Cluster in CAPA (this is taken directly from the Quick Start):
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: capi-quickstart
spec:
  region: us-east-1
  sshKeyName: default
Right off the bat, Continue reading
kustomize is a tool designed to let users “customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is” (wording taken directly from the kustomize GitHub repository). Users can run kustomize directly, or—starting with Kubernetes 1.14—use kubectl -k to access the functionality (although the standalone binary is newer than the functionality built into kubectl as of the Kubernetes 1.15 release). In this post, I’d like to provide an introduction to kustomize.
In its simplest form/usage, kustomize is simply a set of resources (these would be YAML files that define Kubernetes objects like Deployments, Services, etc.) plus a set of instructions on the changes to be made to these resources. Similar to the way make leverages a file named Makefile to define its function or the way Docker uses a Dockerfile to build a container, kustomize uses a file named kustomization.yaml to store the instructions on the changes the user wants made to a set of resources.
Here’s a simple kustomization.yaml file:
resources:
- deployment.yaml
- service.yaml
namePrefix: dev-
namespace: development
commonLabels:
  environment: development
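To make the effect of those instructions concrete, here’s a hedged sketch. Suppose the (hypothetical) deployment.yaml defines a Deployment named nginx; running kustomize build against this directory would emit that Deployment transformed roughly as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-nginx             # namePrefix prepended to the original name
  namespace: development      # namespace assigned by the kustomization
  labels:
    environment: development  # label added by commonLabels

(The commonLabels entry is also injected into the Deployment’s selector and Pod template labels, which keeps the object self-consistent.)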
This article won’t attempt to explain all the various fields that could be Continue reading
All the posts I’ve published so far about Kubernetes Cluster API (CAPI) assume that the underlying infrastructure needs to be created. This is fine, because generally speaking that’s part of the value of CAPI—it will create new cloud infrastructure for every Kubernetes cluster it instantiates. In the case of AWS, this includes VPCs, subnets, route tables, Internet gateways, NAT gateways, Elastic IPs, security groups, load balancers, and (of course) EC2 instances. But what if you didn’t want the Cluster API provider for AWS (CAPA) to create that infrastructure? In this post, I’ll show you how to consume pre-existing AWS infrastructure with CAPA.
Why would one not want CAPA to create the necessary AWS infrastructure? There are a variety of reasons, but the one that jumps to my mind immediately is that an organization may have established/proven expertise and a process around the use of infrastructure-as-code (IaC) tooling like Terraform, CloudFormation, or Pulumi. In cases like this, such organizations would very likely prefer to continue to use the tooling they already know and with which they are already familiar, instead of relying on CAPA. Further, the use of third-party IaC tooling may allow for greater customization of the infrastructure than CAPA Continue reading
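To preview the general approach, here’s a hedged sketch of an AWSCluster manifest that points CAPA at pre-existing infrastructure. The VPC and subnet IDs are placeholders, and the networkSpec structure reflects my reading of the CAPA v1alpha2 API, so verify the field names against the release you’re using:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: existing-infra
spec:
  region: us-east-1
  sshKeyName: default
  networkSpec:
    vpc:
      id: vpc-0123456789abcdef0        # placeholder ID of an existing VPC
    subnets:
    - id: subnet-0123456789abcdef0     # placeholder IDs of existing subnets
    - id: subnet-0fedcba9876543210

With these IDs supplied, CAPA consumes the referenced VPC and subnets instead of creating new ones.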
In my previous post on Kubernetes Cluster API, I showed readers how to use the Cluster API provider for AWS (referred to as CAPA) to instantiate a Kubernetes cluster on AWS. Readers who followed through the instructions in that post may note CAPA places all the nodes for a given cluster in a single AWS availability zone (AZ) by default. While multi-AZ Kubernetes deployments are not without their own considerations, it’s generally considered beneficial to deploy across multiple AZs for higher availability. In this post, I’ll share how to deploy highly-available Kubernetes clusters—defined as having multiple control plane nodes distributed across multiple AZs—using Cluster API for AWS (CAPA).
This post assumes that you have already deployed a management cluster, so the examples may mention using kubectl to apply CAPA manifests against the management cluster to deploy a highly-available workload cluster. However, the information needed in the CAPA manifests would also work with clusterctl in order to deploy a highly-available management cluster. (Not familiar with what I mean when I say “management cluster” or “workload cluster”? Be sure to go read the introduction to Cluster API post first.)
Also, this post was written with CAPA v1alpha1 in mind; a new Continue reading
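As a rough sketch of the central idea under v1alpha1 (names, labels, and AZ values are placeholders, and the provider-specific apiVersion and field names should be checked against the CAPA v1alpha1 release), each control plane Machine pins its own availability zone:

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: controlplane-0
  labels:
    cluster.k8s.io/cluster-name: capi-ha   # ties the Machine to its cluster
    set: controlplane
spec:
  providerSpec:
    value:
      apiVersion: awsprovider/v1alpha1     # verify against your CAPA release
      kind: AWSMachineProviderSpec
      instanceType: t3.large
      availabilityZone: us-east-1a         # pin this Machine to a specific AZ

Repeating this with us-east-1b and us-east-1c for the remaining control plane Machines spreads the control plane across three AZs.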
Last week at VMworld, I had the opportunity to meet with Lightbits Labs, a relatively new startup working on what they called “disaggregated storage.” As it turns out, their product is actually quite interesting, and has relevance not only in “traditional” VMware vSphere environments but also in environments more focused on cloud-native technologies like Kubernetes.
So what is “disaggregated storage”? It’s one of the first questions I asked the Lightbits team. The basic premise behind Lightbits’ solution is that by taking the storage out of nodes—by decoupling storage from compute and memory—they can provide more efficient scaling. Frankly, it’s the same basic premise behind storage area networks (SANs), although I think Lightbits wants to distance themselves from that terminology.
Instead of Fibre Channel, Fibre Channel over Ethernet (FCoE), or iSCSI, Lightbits uses NVMe over TCP. This provides good performance over 25, 50, or 100Gbps links with low latency (typically less than 300 microseconds). Disks appear “local” to the node, which allows for some interesting concepts when used in conjunction with hyperconverged platforms (more on that in a moment).
Lightbits has their own operating system, LightOS, which runs on industry-standard x64 servers from Dell, HP, Lenovo, etc. To Continue reading