Barcelona is probably my favorite city in Europe—which works out well, since VMware seems to have settled on Barcelona as the destination for VMworld EMEA. VMworld is back in Barcelona again this year, and I’m fortunate enough to be able to attend. VMworld in Barcelona wouldn’t be the same without Spousetivities, though, and I’m happy to report that Spousetivities will be there. In fact, registration is already open!
If you’re bringing along a spouse, significant other, boyfriend/girlfriend, or just some family members, you owe it to them to look into Spousetivities. You’ll be able to focus on the conference knowing that your loved one(s) are not only safe but also enjoying some amazing activities in and around Barcelona. Here’s a quick peek at what Crystal and her team have lined up this year:
Lunch and private transportation are included for all activities, and all activities Continue reading
Last week I had a crazy idea: if kustomize can be used to modify YAML files like Kubernetes manifests, then could one use kustomize to modify a kubeadm configuration file, which is also a YAML manifest? So I asked about it in one of the Kubernetes-related channels in Slack at work, and as it turns out it’s not such a crazy idea after all! So, in this post, I’ll show you how to use kustomize to modify kubeadm configuration files.
If you aren’t already familiar with kustomize, I recommend having a look at this blog post, which provides an overview of this tool. For the base kubeadm configuration files to modify, I’ll use kubeadm configuration files from this post on setting up a Kubernetes 1.15 cluster with the AWS cloud provider.
While the blog post linked above provides an overview of kustomize, it certainly doesn’t cover all the functionality kustomize provides. In this particular use case—modifying kubeadm configuration files—the functionality described in the linked blog post doesn’t get you where you need to go. Instead, you’ll have to use the patching functionality of kustomize, which allows you to overwrite specific fields within the YAML definition Continue reading
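To make that patching idea a bit more concrete, here is a minimal sketch (not taken from the post itself; the file names, the metadata.name value, and the patched field are all hypothetical, and it assumes the kubeadm configuration file has been given a metadata.name so kustomize has a target to match):

# Hedged sketch: file names and values are placeholders. The kubeadm file
# (kubeadm.yaml) is assumed to contain a ClusterConfiguration with
# metadata.name set to "kubeadm-clusterconfig".
cat > kustomization.yaml <<'EOF'
resources:
- kubeadm.yaml
patchesJson6902:
- target:
    group: kubeadm.k8s.io
    version: v1beta2
    kind: ClusterConfiguration
    name: kubeadm-clusterconfig
  path: cluster-patch.yaml
EOF

# The patch overwrites a single field (here, the Kubernetes version).
cat > cluster-patch.yaml <<'EOF'
- op: replace
  path: /kubernetesVersion
  value: v1.15.3
EOF

# Render the patched kubeadm configuration to stdout.
kustomize build .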
Welcome to Technology Short Take #119! As usual, I’ve collected some articles and links from around the Internet pertaining to various data center- and cloud-related topics. This installment in the Tech Short Takes series is much shorter than usual, but hopefully I’ve managed to find something that proves to be helpful or informative! Now, on to the content!
the iptables layer involved in most Kubernetes implementations to load balance traffic directly to Pods in the cluster. Unfortunately, this appears to be GKE-specific.

Nothing this time around. I’ll stay tuned for content to include next time!
The Kubernetes community recently released v1alpha2 of Cluster API (a monumental effort, congrats to everyone involved!), and with it comes a number of fairly significant changes. Aside from the new Quick Start, there isn’t (yet) a great deal of documentation on Cluster API (hereafter just called CAPI) v1alpha2, so in this post I’d like to explore the structure of the CAPI v1alpha2 YAML manifests, along with links back to the files that define the fields for the manifests. I’ll focus on the CAPI provider for AWS (affectionately known as CAPA).
As a general note, any links back to the source code on GitHub will reference the v0.2.1 release for CAPI and the v0.4.0 release for CAPA, which are the first v1alpha2 releases for these projects.
Let’s start with looking at a YAML manifest to define a Cluster in CAPA (this is taken directly from the Quick Start):
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: capi-quickstart
spec:
  region: us-east-1
  sshKeyName: default
Right off the bat, Continue reading
kustomize is a tool designed to let users “customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is” (wording taken directly from the kustomize GitHub repository). Users can run kustomize directly, or—starting with Kubernetes 1.14—use kubectl -k to access the functionality (although the standalone binary is newer than the functionality built into kubectl as of the Kubernetes 1.15 release). In this post, I’d like to provide an introduction to kustomize.
In its simplest form/usage, kustomize is simply a set of resources (these would be YAML files that define Kubernetes objects like Deployments, Services, etc.) plus a set of instructions on the changes to be made to these resources. Similar to the way make leverages a file named Makefile to define its function or the way Docker uses a Dockerfile to build a container, kustomize uses a file named kustomization.yaml to store the instructions on the changes the user wants made to a set of resources.
Here’s a simple kustomization.yaml file:
resources:
- deployment.yaml
- service.yaml
namePrefix: dev-
namespace: development
commonLabels:
  environment: development
This article won’t attempt to explain all the various fields that could be Continue reading
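Once a kustomization.yaml like the one above is in place, generating and applying the customized output is a one-liner (the overlay path shown here is hypothetical):

# Render the customized manifests to stdout and pipe them into kubectl...
kustomize build ./overlays/development | kubectl apply -f -

# ...or, on Kubernetes 1.14 or later, let kubectl handle it directly.
kubectl apply -k ./overlays/development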
All the posts I’ve published so far about Kubernetes Cluster API (CAPI) assume that the underlying infrastructure needs to be created. This is fine, because generally speaking that’s part of the value of CAPI—it will create new cloud infrastructure for every Kubernetes cluster it instantiates. In the case of AWS, this includes VPCs, subnets, route tables, Internet gateways, NAT gateways, Elastic IPs, security groups, load balancers, and (of course) EC2 instances. But what if you didn’t want CAPA to create AWS infrastructure? In this post, I’ll show you how to consume pre-existing AWS infrastructure with Cluster API for AWS (CAPA).
Why would one not want CAPA to create the necessary AWS infrastructure? There are a variety of reasons, but the one that jumps to my mind immediately is that an organization may have established/proven expertise and a process around the use of infrastructure-as-code (IaC) tooling like Terraform, CloudFormation, or Pulumi. In cases like this, such organizations would very likely prefer to continue to use the tooling they already know and with which they are already familiar, instead of relying on CAPA. Further, the use of third-party IaC tooling may allow for greater customization of the infrastructure than CAPA Continue reading
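To give a rough idea of what this looks like in practice, here is a hedged sketch (not the exact manifests from the post; the VPC and subnet IDs are placeholders) showing how an AWSCluster object can point CAPA at existing network infrastructure via its networkSpec field rather than having new infrastructure created:

# Hedged sketch: IDs are placeholders, and the layout assumes the v1alpha2
# AWSCluster API, where networkSpec can reference existing resources.
cat > aws-cluster.yaml <<'EOF'
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: existing-infra-cluster
spec:
  region: us-east-1
  sshKeyName: default
  networkSpec:
    vpc:
      id: vpc-0123456789abcdef0
    subnets:
    - id: subnet-0123456789abcdef0
    - id: subnet-0123456789abcdef1
EOF

# Apply the manifest against the management cluster.
kubectl apply -f aws-cluster.yaml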
In my previous post on Kubernetes Cluster API, I showed readers how to use the Cluster API provider for AWS (referred to as CAPA) to instantiate a Kubernetes cluster on AWS. Readers who followed the instructions in that post may have noticed that CAPA places all the nodes for a given cluster in a single AWS availability zone (AZ) by default. While multi-AZ Kubernetes deployments are not without their own considerations, it’s generally considered beneficial to deploy across multiple AZs for higher availability. In this post, I’ll share how to deploy highly-available Kubernetes clusters—defined as having multiple control plane nodes distributed across multiple AZs—using Cluster API for AWS (CAPA).
This post assumes that you have already deployed a management cluster, so the examples may mention using kubectl to apply CAPA manifests against the management cluster to deploy a highly-available workload cluster. However, the information needed in the CAPA manifests would also work with clusterctl in order to deploy a highly-available management cluster. (Not familiar with what I mean when I say “management cluster” or “workload cluster”? Be sure to go read the introduction to Cluster API post first.)
Also, this post was written with CAPA v1alpha1 in mind; a new Continue reading
Last week at VMworld, I had the opportunity to meet with Lightbits Labs, a relatively new startup working on what they called “disaggregated storage.” As it turns out, their product is actually quite interesting, and has relevance not only in “traditional” VMware vSphere environments but also in environments more focused on cloud-native technologies like Kubernetes.
So what is “disaggregated storage”? It’s one of the first questions I asked the Lightbits team. The basic premise behind Lightbits’ solution is that by taking the storage out of nodes—by decoupling storage from compute and memory—they can provide more efficient scaling. Frankly, it’s the same basic premise behind storage area networks (SANs), although I think Lightbits wants to distance themselves from that terminology.
Instead of Fibre Channel, Fibre Channel over Ethernet (FCoE), or iSCSI, Lightbits uses NVMe over TCP. This provides good performance over 25, 50, or 100Gbps links with low latency (typically less than 300 microseconds). Disks appear “local” to the node, which allows for some interesting concepts when used in conjunction with hyperconverged platforms (more on that in a moment).
Lightbits has their own operating system, LightOS, which runs on industry-standard x64 servers from Dell, HP, Lenovo, etc. To Continue reading
Yesterday I published a high-level overview of Cluster API (CAPI) that provides an introduction to some of the concepts and terminology in CAPI. In this post, I’d like to walk readers through actually using CAPI to bootstrap a Kubernetes cluster on AWS.
It’s important to note that all of the information shared here is also found in the “Getting Started” guide in the AWS provider’s GitHub repository. My purpose here is to provide an additional walkthrough that supplements that official documentation rather than supplanting it, and to spread the word about how the process works.
Four basic steps are involved in bootstrapping a Kubernetes cluster on AWS using CAPI:
The following sections take a look at each of these steps in a bit more detail. First, though, I think it’s important to mention that CAPI is still in its early days (it’s currently at v1alpha1). As such, it’s possible that commands may (will) change, and API specifications may (will) change as further development Continue reading
In this post, I’d like to provide a high-level introduction to the Kubernetes Cluster API. The aim of Cluster API (CAPI, for short) is, as outlined in the project’s GitHub repository, “a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management”. This high-level introduction serves to establish some core terminology and concepts upon which I’ll build in future posts about CAPI.
First, let’s start with some terminology:
Bootstrap cluster: The bootstrap cluster is a temporary cluster used by CAPI. It’s used to create a more permanent cluster that is CAPI-enabled (the management cluster). Typically, the bootstrap cluster is created locally using kind (other options are possible), and is destroyed once the management cluster is up and running.
Management cluster: The CAPI-enabled cluster created by the temporary bootstrap cluster is the management cluster. The management cluster is long-lived, is running the CAPI provider components, and understands the CAPI Custom Resource Definitions (CRDs). Typically, users would use the management cluster to create and manage the lifecycle of one or more workload clusters.
Workload cluster: This is a cluster whose lifecycle is managed by CAPI via the management cluster, but isn’t actually CAPI-enabled itself and doesn’t manage Continue reading
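To make the bootstrap cluster definition above a bit more concrete, here is how such a throwaway cluster is typically created with kind and then discarded once it has served its purpose (the cluster name is arbitrary):

# Create a temporary bootstrap cluster using kind...
kind create cluster --name capi-bootstrap

# ...and delete it once the management cluster is up and running.
kind delete cluster --name capi-bootstrap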
This is the liveblog from the day 1 general session at VMworld 2019. This year the event is back at Moscone Center in San Francisco, and VMware has already released some juicy news (see here, here, and here) in advance of the keynote this morning, foreshadowing what Pat is expected to talk about.
The keynote kicks off with the usual inspirational video, this one incorporating themes and references from a number of high-tech movies, including “The Matrix” and “Inception,” among others. As the video concludes, Pat Gelsinger takes the stage promptly at 9am.
Gelsinger speaks briefly of his 7 years at VMware (this is his 8th VMworld), then jumps into the content of his presentation with the theme of this morning’s session: “Tech in the Age of Any”. Along those lines, Gelsinger talks about the diversity of the VMworld audience, welcomes the attendees in Klingon, and speaks very quickly to the Pivotal and Carbon Black acquisitions that were announced only a few days ago.
Shifting gears, Gelsinger talks about “digital life” and how that translates into millions of applications and billions of devices and billions of users. He talks about how 5G, Edge, and AI are going Continue reading
Welcome to Technology Short Take #118! Next week is VMworld US in San Francisco, CA, and I’ll be there live-blogging and meeting up with folks to discuss all things Kubernetes. If you’re going to be there, look me up! Otherwise, I leave you with this list of links and articles from around the Internet to keep you busy. Enjoy!
As I mentioned back in May in this post on creating a sandbox for learning Pulumi, I’ve started using Pulumi for more and more of my infrastructure-as-code needs. I did switch from JavaScript to TypeScript (which I know compiles to JavaScript on the back-end, but the strong typing helps a new programmer like me). Recently I had a need to create some resources in AWS using Pulumi, and—for reasons I’ll explain shortly—many of the “canned” Pulumi examples didn’t cut it for my use case. In this post, I’ll share how I created tagged subnets across AWS availability zones (AZs) using Pulumi.
In this particular case, I was using Pulumi to create all the infrastructure necessary to spin up an AWS-integrated Kubernetes cluster. That included a new VPC, subnets in the different AZs for that region, an Internet gateway, route tables and route table associations, security groups, an ELB for the control plane, and EC2 instances. As I’ve outlined in my latest post on setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm, these resources on AWS require specific AWS tags to be assigned in order for the AWS cloud provider to work.
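As a quick illustration of that tagging requirement (shown here with the AWS CLI rather than Pulumi; the cluster name and subnet ID are placeholders), the AWS cloud provider looks for a cluster ownership tag on subnets, and public subnets additionally need a role tag so the provider knows where it can place public-facing load balancers:

# Tag a subnet as shared with a cluster named "my-cluster"...
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared

# ...and mark a public subnet as usable for external load balancers.
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/role/elb,Value=1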
As I started working on this, Continue reading
If you’ve used kubeadm to bootstrap a Kubernetes cluster, you probably know that at the end of the kubeadm init command to bootstrap the first node in the cluster, kubeadm prints out a bunch of information: how to copy over the admin Kubeconfig file, and how to join both control plane nodes and worker nodes to the cluster you just created. But what if you didn’t write these values down after the first kubeadm init command? How does one go about reconstructing the proper kubeadm join command?
Fortunately, the values needed for a kubeadm join command are relatively easy to find or recreate. First, let’s look at the values that are needed.
Here’s the skeleton of a kubeadm join command for a control plane node:
kubeadm join <endpoint-ip-or-dns>:<port> \
--token <valid-bootstrap-token> \
--discovery-token-ca-cert-hash <ca-cert-sha256-hash> \
--control-plane \
--certificate-key <certificate-key>
And here’s the skeleton of a kubeadm join command for a worker node:
kubeadm join <endpoint-ip-or-dns>:<port> \
--token <valid-bootstrap-token> \
--discovery-token-ca-cert-hash <ca-cert-sha256-hash> \
As you can see, the information needed for the worker node is a subset of the information needed for a control plane node.
Here’s how to find or recreate all the various pieces of information you need:
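As a hedged sketch, these are the commands commonly used to recover each piece (paths assume a default kubeadm installation on an existing control plane node):

# List existing bootstrap tokens, or create a new one if they have expired.
kubeadm token list
kubeadm token create

# Recompute the SHA-256 hash of the cluster's CA certificate.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'

# Re-upload the control plane certificates and print a fresh certificate key
# (only needed when joining additional control plane nodes).
sudo kubeadm init phase upload-certs --upload-certs

# Or, for worker nodes, have kubeadm print a complete join command directly.
kubeadm token create --print-join-command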
In this post, I’d like to walk through setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm. Over the last year or so, the power and utility of kubeadm have vastly improved (thank you to all the contributors who have spent countless hours!), and it is now—in my opinion, at least—at a point where setting up a well-configured, highly available Kubernetes cluster is pretty straightforward.
This post builds on the official documentation for setting up a highly available Kubernetes 1.15 cluster. This post also builds upon previous posts I’ve written about setting up Kubernetes clusters with the AWS cloud provider:
All of these posts are focused on Kubernetes releases prior to 1.15, and given the changes in kubeadm in the 1.14 and 1.15 releases, I felt it would be helpful to revisit the process again for 1.15. For now, I’m focusing on the in-tree AWS cloud provider; however, in the very near future I’ll look at using the new external AWS cloud provider.
As pointed out in the “original” Continue reading
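To give a sense of what the AWS integration involves, here is a minimal sketch of a kubeadm configuration that enables the in-tree provider (values such as the cluster name, version, and control plane endpoint are placeholders, and the full configuration in the post covers additional fields):

# Hedged sketch of a kubeadm configuration enabling the in-tree AWS cloud
# provider; the cluster name, endpoint, and version are placeholders.
cat > kubeadm.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
clusterName: my-cluster
controlPlaneEndpoint: cp-lb.example.com:6443
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
EOF

# Bootstrap the first control plane node with this configuration.
sudo kubeadm init --config kubeadm.yaml --upload-certs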
While hanging out in the Kubernetes Slack community, one question I’ve seen asked multiple times involves switching a Kubernetes cluster from a non-HA control plane (single control plane node) to an HA control plane (multiple control plane nodes). As far as I am aware, this isn’t documented upstream, so I thought I’d walk readers through what this process looks like.
I’m making the following assumptions:
The existing cluster was bootstrapped using kubeadm. (This means we’ll use kubeadm to add the additional control plane nodes.)

I’d also like to point out that there are a lot of different configurations and variables that come into play with a process like this. It’s (nearly) impossible to cover them all in a single blog post, so this post attempts to address what I believe to be the most common situations.
With those assumptions and that caveat in mind, the high-level overview of the process looks like this:
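Without reproducing the full procedure, the following is a rough, command-level sketch of the kinds of steps involved (a hedged illustration only; endpoint names, paths, and join parameters are placeholders, and a load balancer fronting the control plane is assumed to already exist):

# Point the cluster at a stable control plane endpoint (the load balancer)
# by editing the ClusterConfiguration stored in the kubeadm-config ConfigMap.
kubectl -n kube-system edit configmap kubeadm-config

# Regenerate the API server certificate so it includes the new endpoint
# (the existing certificate and key must be moved aside first).
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /tmp/
sudo kubeadm init phase certs apiserver --config kubeadm.yaml

# Upload the control plane certificates and note the certificate key...
sudo kubeadm init phase upload-certs --upload-certs

# ...then join the additional control plane nodes.
sudo kubeadm join cp-lb.example.com:6443 --token <valid-bootstrap-token> \
  --discovery-token-ca-cert-hash <ca-cert-sha256-hash> \
  --control-plane --certificate-key <certificate-key>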
Welcome to Technology Short Take #117! Here’s my latest gathering of links and articles from around the World Wide Web (an “old school” reference for you right there). I’ve got a little bit of something for most everyone, except for the storage nerds (I’m leaving that to my friend J Metz this time around). Here’s hoping you find something useful!
Today I came across this article, which informed me that (as of the 18.09 release) you can use SSH to connect to a Docker daemon remotely. That’s handy! The article uses docker-machine (a useful but underrated tool, I think) to demonstrate, but the first question in my mind was this: can I do this through an SSH bastion host? Read on for the answer.
If you’re not familiar with the concept of an SSH bastion host, it is a (typically hardened) host through which you, as a user, would proxy your SSH connections to other hosts. For example, you may have a bunch of EC2 instances in an AWS VPC that do not have public IP addresses. (That’s reasonable.) You could use an SSH bastion host—which would require a public IP address—to enable SSH access to otherwise inaccessible hosts. I wrote a post about using SSH bastion hosts back in 2015; give that post a read for more details.
The syntax for connecting to a Docker daemon via SSH looks something like this:
docker -H ssh://user@host <command>
So, if you wanted to run docker container ls to list the containers running on a remote system, you’d Continue reading
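Because docker -H ssh:// shells out to the local ssh client, it honors ~/.ssh/config, and that is where the bastion host comes into play. Here is a hedged sketch (host names, users, and addresses are all hypothetical):

# Route connections to the private Docker host through the bastion by
# adding an entry to ~/.ssh/config.
cat >> ~/.ssh/config <<'EOF'
Host docker-target
    HostName 10.100.15.20
    User ubuntu
    ProxyJump bastionuser@bastion.example.com
EOF

# The ssh client that docker invokes honors the ProxyJump directive, so
# this command runs against the otherwise inaccessible host.
docker -H ssh://docker-target container ls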
Recently, while troubleshooting a separate issue, I had a need to get more information about the token used by Kubernetes Service Accounts. In this post, I’ll share a quick command-line that can fully decode a Service Account token.
Service Account tokens are stored as Secrets in the “kube-system” namespace of a Kubernetes cluster. To retrieve just the token portion of the Secret, use -o jsonpath like this (replace “sa-token” with the appropriate name for your environment):
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}'
The output is Base64-encoded, so just pipe the output into base64:
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode
The result you’re seeing is a JSON Web Token (JWT). You could use the JWT web site to decode the token, but given that I’m a fan of the CLI I decided to use this JWT CLI utility instead:
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode | \
jwt decode -
The final -, for those who may not be familiar, is the syntax to tell the jwt utility to look at STDIN for the JWT it needs to Continue reading