Welcome to Technology Short Take #124! It seems like the natural progression of the Tech Short Takes is moving toward monthly articles, since it’s been about a month since my last one. In any case, here’s hoping that I’ve found something useful for you. Enjoy! (And yes, normally I’d publish this on a Friday, but I messed up and forgot. So, I decided to publish on Monday instead of waiting for Friday.)
Interacting directly with the AWS APIs—using a tool like Postman (or, since I switched back to macOS, an application named Paw)—is something I’ve been doing off and on for a little while as a way of gaining a slightly deeper understanding of the APIs that tools like Terraform, Pulumi, and others are calling when automating AWS. For a while, I struggled with AWS authentication, and after seeing Mark Brookfield’s post on using Postman to authenticate to AWS I thought it might be helpful to share what I learned as well.
The basis of Mark’s post (I highly encourage you to go read it) is that he was having a hard time getting authenticated to AWS in order to automate the creation of some Route 53 DNS records. The root of his issue, as it turns out, was a mismatch between the region specified in his request and the API endpoint for Route 53. I know this because I ran into the exact same issue (although with a different service).
The secret to uncovering this mismatch can be found in this “AWS General Reference” PDF. Specifically with regard to Route 53, check out this quote from the document:
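The gist, from that reference, is that Route 53 is a global service: its API endpoint is route53.amazonaws.com, but requests must be signed as if they were destined for the us-east-1 region—hence the mismatch if you specify any other region when signing. As a rough illustration of what the signing side involves, here's a sketch of the Signature Version 4 signing-key derivation using openssl (the secret key and date below are made-up example values, not real credentials):

```shell
# Example (non-working) values for illustration only
secret="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
date="20150830"
region="us-east-1"   # Route 53 signs against us-east-1, regardless of where you are
service="route53"

# HMAC-SHA256 helper; $1 is an openssl key spec (key:... or hexkey:...), message on stdin
hmac_hex() {
  openssl dgst -sha256 -mac HMAC -macopt "$1" | sed 's/^.* //'
}

# SigV4 signing-key derivation chain
kDate=$(printf '%s' "$date" | hmac_hex "key:AWS4${secret}")
kRegion=$(printf '%s' "$region" | hmac_hex "hexkey:${kDate}")
kService=$(printf '%s' "$service" | hmac_hex "hexkey:${kRegion}")
kSigning=$(printf '%s' "aws4_request" | hmac_hex "hexkey:${kService}")
echo "$kSigning"
```

Tools like Postman and Paw do this derivation for you, which is exactly why a wrong region value fails silently—the signature is computed correctly, just for the wrong region.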
Using Cluster API allows users to create new Kubernetes clusters easily using manifests that define the desired state of the new cluster (also referred to as a workload cluster; see here for more terminology). But how does one go about accessing this new workload cluster once it’s up and running? In this post, I’ll show you how to retrieve the Kubeconfig file for a new workload cluster created by Cluster API.
(By the way, there’s nothing new or revolutionary about the information in this post; I’m simply sharing it to help folks who may be new to Cluster API.)
Once CAPI has created the new workload cluster, it will also create a new Kubeconfig for accessing this cluster. This Kubeconfig will be stored as a Secret (a specific type of Kubernetes object). In order to use this Kubeconfig to access your new workload cluster, you’ll need to retrieve the Secret, decode it, and then use it with kubectl.
First, you can see the secret by running kubectl get secrets in the namespace where your new workload cluster was created. If the name of your workload cluster was “workload-cluster-1”, then the name of the Secret created by Cluster API would Continue reading
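At the time of writing, Cluster API follows the naming convention `<cluster-name>-kubeconfig` for this Secret, with the Kubeconfig stored under the data key `value`. A sketch of the retrieve-and-decode flow (the cluster name is the example one from above; the offline portion uses a mock Kubeconfig so the decode step can be seen end to end):

```shell
# Against a real management cluster, the flow would be:
#   kubectl get secret workload-cluster-1-kubeconfig \
#     -o jsonpath='{.data.value}' | base64 -d > workload-cluster-1.kubeconfig
#   kubectl --kubeconfig workload-cluster-1.kubeconfig get nodes

# Offline illustration of the decode step, using a mock Kubeconfig:
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64 | tr -d '\n')
printf '%s' "$encoded" | base64 -d > workload-cluster-1.kubeconfig
head -n 1 workload-cluster-1.kubeconfig
```

The `base64 -d` step is necessary because, like all Kubernetes Secrets, the data is stored Base64-encoded.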
Credit for this post goes to Christian Del Pino, who created this content and was willing to let me publish it here.
The topic of setting up Kubernetes on AWS (including the use of the AWS cloud provider) is a topic I’ve tackled a few different times here on this site (see here, here, and here for other posts on this subject). In this post, I’ll share information provided to me by a reader, Christian Del Pino, about setting up Kubernetes on AWS with kubeadm but using manual certificate distribution (in other words, not allowing kubeadm to distribute certificates among multiple control plane nodes). As I pointed out above, all this content came from Christian Del Pino; I’m merely sharing it here with his permission.
This post specifically builds upon this post on setting up an AWS-integrated Kubernetes 1.15 cluster, but is written for Kubernetes 1.17. As mentioned above, this post will also show you how to manually distribute the certificates that kubeadm generates to other control plane nodes.
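To give a sense of what "manual certificate distribution" means in practice, here's a sketch of the copy step, assuming the standard kubeadm PKI layout under /etc/kubernetes/pki and hypothetical hostnames (cp-2, cp-3) for the additional control plane nodes:

```shell
# Certificates and keys kubeadm expects on every control plane node
pki=/etc/kubernetes/pki
certs="ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key"

# Copy from the first control plane node to each additional one;
# the scp is commented out here since the hostnames are placeholders
for node in cp-2 cp-3; do
  for f in $certs; do
    : # scp "${pki}/${f}" "${node}:${pki}/${f}"
  done
done
echo "$certs" | wc -w
```

Note that only the CA and service account key material needs copying; kubeadm generates the node-specific certificates (API server, kubelet client, etc.) locally on each node.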
What this post won’t cover are the details on some of the prerequisites for making the AWS cloud provider function properly; specifically, this post won’t discuss:
In this post, I’m going to explore what’s required in order to build an isolated—or Internet-restricted—Kubernetes cluster on AWS with full AWS cloud provider integration. Here the term “isolated” means “no Internet access.” I initially was using the term “air-gapped,” but these aren’t technically air-gapped so I thought isolated (or Internet-restricted) may be a better descriptor. Either way, the intent of this post is to help guide readers through the process of setting up a Kubernetes cluster on AWS—with full AWS cloud provider integration—using systems that have no Internet access.
At a high-level, the process looks something like this:
kubeadm.

It’s important to note that this guide does not replace my earlier post on setting up an AWS-integrated Kubernetes cluster on AWS (written for 1.15, but valid for 1.16 and 1.17). All the requirements in that post still apply here. If you haven’t read that post or aren’t familiar with the requirements for setting up a Kubernetes cluster with the AWS Continue reading
In this post, I’d like to show readers how to use Pulumi to create a VPC endpoint on AWS. Until recently, I’d heard of VPC endpoints but hadn’t really taken the time to fully understand what they were or how they might be used. That changed when I was presented with a requirement for the AWS EC2 APIs to be available within a VPC that did not have Internet access. As it turns out—and as many readers are probably already aware—this is one of the key use cases for a VPC endpoint (see the VPC endpoint docs). The sample code I’ll share below shows how to programmatically create a VPC endpoint for use in infrastructure-as-code use cases.
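For comparison with the programmatic approach, the same resource can be created imperatively with the AWS CLI. A sketch (the region is an example, and all resource IDs are placeholders to be substituted with real ones):

```shell
region="us-east-1"                          # example region
service_name="com.amazonaws.${region}.ec2"  # interface endpoint for the EC2 API

# Placeholder IDs; substitute real ones before running:
#   aws ec2 create-vpc-endpoint \
#     --vpc-id vpc-0123456789abcdef0 \
#     --vpc-endpoint-type Interface \
#     --service-name "$service_name" \
#     --subnet-ids subnet-0123456789abcdef0 \
#     --security-group-ids sg-0123456789abcdef0 \
#     --private-dns-enabled
echo "$service_name"
```

The service name encodes the region, which is why an interface endpoint only serves the regional API it was created for.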
For those who aren’t familiar, Pulumi lets users apply any of a number of general-purpose programming languages to infrastructure-as-code scenarios. In this example, I’ll be using TypeScript, but Pulumi also supports JavaScript and Python (and Go is in the works). (Side note: I intend to start working with the Go support in Pulumi when it becomes generally available as a means of helping accelerate my own Go learning.)
Here’s a snippet of TypeScript code that Continue reading
I recently had a need to manually load some container images into a Linux system running containerd (instead of Docker) as the container runtime. I say “manually load some images” because this system was isolated from the Internet, and so simply running a container and having containerd automatically pull the image from an image registry wasn’t going to work. The process for working around the lack of Internet access isn’t difficult, but didn’t seem to be documented anywhere that I could readily find using a general web search. I thought publishing it here may help individuals seeking this information in the future.
For an administrator/operations-minded user, the primary means of interacting with containerd is via the ctr command-line tool. This tool uses a command syntax very similar to Docker, so users familiar with Docker should be able to be productive with ctr pretty easily.
In my specific example, I had a bastion host with Internet access, and a couple of hosts behind the bastion that did not have Internet access. It was the hosts behind the bastion that needed the container images preloaded. So, I used the ctr tool to fetch and prepare the images on the bastion, then Continue reading
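The overall flow looks something like this sketch (the image reference and tarball name are hypothetical examples; the ctr commands are commented out since they require a running containerd):

```shell
image="docker.io/library/nginx:latest"  # hypothetical image reference
tarball="nginx.tar"

# On the bastion (which has Internet access):
#   ctr images pull "$image"
#   ctr images export "$tarball" "$image"
# Copy the tarball to the isolated host (e.g., with scp), then there:
#   ctr images import "$tarball"
#   ctr images ls -q    # verify the image is now present
echo "$tarball"
```

Exporting produces an OCI image archive, so the same tarball can be imported on any host running containerd, regardless of Internet access.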
In July of 2018 I talked about Polyglot, a very simple project I’d launched whose sole purpose was to bolster my software development skills. Work on Polyglot has been sporadic at best, coming in fits and starts, and thus far focused on building a model for the APIs that would be found in the project. Since I am not a software engineer by training (I have no formal training in software development), all of this is new to me, and I’ve found myself encountering lots of questions about API design along the way. In the interest of helping others who may be in a similar situation, I thought I’d share a bit here.
I initially approached the API in terms of how I would encode (serialize?) data on the wire using JSON (I’d decided on using a RESTful API with JSON over HTTP). Starting with how I anticipated storing the data in the back-end database, I created a representation of how a customer’s information would be encoded (serialized) in JSON:
{
  "customers": [
    {
      "customerID": "5678",
      "streetAddress": "123 Main Street",
      "unitNumber": "Suite 123",
      "city": "Anywhere",
      "state": "CO",
      "postalCode": "80108",
      "telephone": "3035551212",
      "primaryContactFirstName": "Scott",
      "primaryContactLastName": "Lowe"
    }
  ]
}

Continue reading
Welcome to Technology Short Take #123, the first of 2020! I hope that everyone had a wonderful holiday season, but now it’s time to jump back into the fray with a collection of technical articles from around the Internet. Here’s hoping that I found something useful for you!
crypt32.dll, a core Windows cryptographic component) that was rumored Continue reading

Recently, I’ve been working to remove unnecessary complexity from my work environment. I wouldn’t say that I’m going full-on minimalist (not that there’s anything wrong with that), but I was beginning to feel like maintaining this complexity was taking focus, willpower, and mental capacity away from other, more valuable, efforts. Additionally, given the challenges I know lie ahead of me this year (see here for more information), I suspect I’ll need all the focus, willpower, and mental capacity I can get!
When I say “unnecessary complexity,” by the way, I’m referring to added complexity that doesn’t bring any real or significant benefit. Sometimes there’s no getting around the complexity, but when that complexity doesn’t create any value, it’s unnecessary in my definition.
Primarily, this “reduction in complexity” shows up in three areas:
Readers who have followed me for more than a couple years know that I migrated away from macOS for about 9 months in 2017 (see here for a wrap-up of that effort), then again in 2018 when I joined Heptio (some details are available in this update). Since switching to Fedora on a Lenovo Continue reading
As has been my custom over the last five years or so, in the early part of the year I like to share with my readers a list of personal projects for the upcoming year (here’s the 2019 list). Then, near the end of that same year or very early in the following year, I evaluate how I performed against that list of personal projects (for example, here’s my project report card for 2018). In this post, I’ll continue that pattern with an evaluation of my progress against my 2019 project list.
For reference, here’s the list of projects I set out for myself for 2019 (you can read the associated blog post, if you like, for additional context):
Here’s how I Continue reading
I’ll skip the build-up and jump straight to the whole point of this post: a once-in-a-lifetime opportunity has come up and I’m embarking on a new adventure starting in early 2020. No, I’m not changing jobs…but I am changing time zones.
Sometime in the next month or two (dates are still being finalized), I’ll be temporarily relocating to Tokyo, Japan, to help build out VMware’s Cloud Native Field Engineering team to provide consulting and professional services around cloud-native technologies and modern application platforms for customers in Japan. Basically, my charter is to replicate the former Heptio Field Engineering team (now the Cloud Native Field Engineering Practice within VMware) in Japan.
Accomplishing this feat will involve a variety of responsibilities: a pretty fair amount of training/enablement, engaging customers on the pre-sales side, helping lead projects on the post-sales (delivery) side, mentoring team members, performing some project management, probably some people management, and the infamous “other duties as required.” All in about six months (the initial duration of my assignment), and all while learning Japanese! No big deal, right?
I’m simultaneously excited and scared. I’m excited by the idea of living in Tokyo, but let’s be honest—the language barrier is Continue reading
Welcome to Technology Short Take #122! Luckily I did manage to get another Tech Short Take squeezed in for 2019, just so all my readers could have some reading materials for the holidays. I’m kidding! No, I mean I really am kidding—don’t read stuff over the holidays. Spend time with your family instead. The investment in your family will pay off in later years, trust me.
Welcome to Technology Short Take #121! This may possibly be the last Tech Short Take of 2019 (not sure if I’ll be able to squeeze in another one), so here’s hoping that you find something useful, helpful, or informative in the links that I’ve collected. Enjoy some light reading over your festive holiday season!
Welcome to Technology Short Take #120! Wow…hard to believe it’s been almost two months since the last Tech Short Take. Sorry about that! Hopefully something I share here in this Tech Short Take is useful or helpful to readers. On to the content!
mitmproxy to inspect kubectl traffic. I’m now inspired to go do this myself and see what knowledge I can gain.

I don’t have anything to share this time around, but I’ll stay alert for content to include in future Tech Short Takes.
firewalld as found in CentOS 8 may prove useful to some readers. I’ve been messing around with firewalld ever since Continue reading

Bryan Liles kicked off the day 3 morning keynotes with a discussion of “finding Kubernetes’ Rails moment”—basically focusing on how Kubernetes enables folks to work on/solve higher-level problems. Key phrase from Bryan’s discussion (which, as usual, incorporated the humor I love to see from Bryan): “Kubernetes isn’t the destination. Kubernetes is the vehicle that takes us to the destination.” Ian Coldwater delivered a talk on looking at Kubernetes from the attacker’s point of view, and using that perspective to secure and harden Kubernetes. Two folks from Walmart also discussed their use case, which involves running Kubernetes clusters in retail locations to support a point-of-sale (POS) application at the check-out register. Finally, there was a discussion of chaos engineering from folks at Gremlin and Target.
Due to booth duty and my flight home, I wasn’t able to attend any breakout sessions today.
If I’m completely honest, I didn’t get as much out of the event as I’d hoped. I’m not yet sure if that is because I didn’t get to attend as many sessions as I’d hoped/planned (due to problems with sessions being moved/rescheduled or whatever), if my choice of sessions was just poor, Continue reading
This morning’s keynotes were, in my opinion, better than yesterday’s morning keynotes. (I missed the closing keynotes yesterday due to customer meetings and calls.) Only a couple of keynotes really stuck out. Vicki Cheung provided some useful suggestions for tools that are helping to “close the gap” on user experience, and there was an interesting (but a bit overly long) session with a live demo on running a 5G mobile core on Kubernetes.
Due to some power outages at the conference venue resulting from rain in San Diego, the Prometheus session I had planned to attend got moved to a different time. As a result, I sat in this session by Lyft instead. The topic was about running large-scale stateful workloads, but the content was really about a custom solution Lyft built (called Flyte) that leveraged CRDs and custom controllers to help manage stateful workloads. While it’s awesome that companies like Lyft can extend Kubernetes to address their specific needs, this session isn’t helpful to more “ordinary” companies that are trying to figure out how to run their stateful workloads on Kubernetes. I’d really like the CNCF and the conference committee to try Continue reading
This week I’m in San Diego for KubeCon + CloudNativeCon. Instead of liveblogging each session individually, I thought I might instead attempt a “daily summary” post that captures highlights from all the sessions each day. Here’s my recap of day 1 at KubeCon + CloudNativeCon.
KubeCon + CloudNativeCon doesn’t have “one” keynote; it uses a series of shorter keynotes by various speakers. This has advantages and disadvantages; one key advantage is that there is more variety, and the attendees are more likely to stay engaged. I particularly enjoyed Bryan Liles’ CNCF project updates; I like Bryan’s sense of humor, and getting updates on some of the CNCF projects is always useful. As for some of the other keynotes, those that were thinly-disguised vendor sales pitches were generally pretty poor.
I was running late for the start of this session due to booth duty, and I guess the stuff I needed most was presented in that portion I missed. Most of what I saw was about Netflix Titus, and how the Netflix team ported Titus from Mesos to Virtual Kubelet. However, that information was so specific to Netflix’s particular use of Virtual Kubelet that it Continue reading
A topic that’s been in the back of my mind since writing the Cluster API introduction post is how someone could use kustomize to modify the Cluster API manifests. Fortunately, this is reasonably straightforward. It doesn’t require any “hacks” like those needed to use kustomize with kubeadm configuration files, but similar to modifying kubeadm configuration files you’ll generally need to use the patching functionality of kustomize when working with Cluster API manifests. In this post, I’d like to take a fairly detailed look at how someone might go about using kustomize with Cluster API.
By the way, readers who are unfamiliar with kustomize should probably read this introductory post first, and then read the post on using kustomize with kubeadm configuration files. I suggest reading the latter post because it provides an overview of how to use kustomize to patch a specific portion of a manifest, and you’ll use that functionality again when modifying Cluster API manifests.
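To make the patching pattern concrete, here's a minimal sketch of what a kustomize overlay for a Cluster API manifest might look like. The file names, resource names, and replica count are hypothetical, and the API version reflects the v1alpha2-era Cluster API manifests current when this was written:

```yaml
# kustomization.yaml
resources:
- cluster.yaml
- machinedeployment.yaml
patchesStrategicMerge:
- replicas.yaml
```

```yaml
# replicas.yaml -- patch only the field that varies between environments
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: workload-cluster-1-md-0
spec:
  replicas: 5
```

Running `kustomize build .` would then emit the base manifests with the replica count overridden, leaving the original Cluster API YAML untouched.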
For this post, I’m going to build out a fictional use case/scenario for the use of kustomize and Cluster API. Here are the key points to this fictional use case:
A while ago I came across a utility named jk, which purported to be able to create structured text files—in JSON, YAML, or HCL—using JavaScript (or TypeScript that has been transpiled into JavaScript). One of the use cases was creating Kubernetes manifests. The GitHub repository for jk describes it as “a data templating tool”, and that’s accurate for simple use cases. In more complex use cases, the use of a general-purpose programming language like JavaScript in jk reveals that the tool has the potential to be much more than just a data templating tool—if you have the JavaScript expertise to unlock that potential.
The basic idea behind jk is that you could write some relatively simple JavaScript, and jk will take that JavaScript and use it to create some type of structured data output. I’ll focus on Kubernetes manifests here, but as you read keep in mind you could use this for other purposes as well. (I explore a couple other use cases at the end of this post.)
Here’s a very simple example:
const service = new api.core.v1.Service('appService', {
  metadata: {
    namespace: 'appName',
    labels: {
      app: 'appName',
      team: 'blue',
    },
  },
  spec: {
    selector: Continue reading