Author Archives: Bill Mills

Designing Your First App in Kubernetes: An Overview

Kubernetes is a powerful system that has been establishing itself as IT architects’ container orchestrator of choice. But Kubernetes’ power comes at a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but knowing how to actually fly it is not so simple. That complexity can overwhelm a lot of people approaching the system for the first time.

I recently wrote a blog series that walks you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. The posts go into quite a bit of detail, so I’ve provided an abbreviated version here, with links to the original posts.

Part 1: Getting Started 

Just Enough Kube

With a machine as powerful as Kubernetes, I like to identify the absolute minimum set of things we’ll need to understand in order to be successful; there’ll be time to learn about all the other bells and whistles another day, after we master the core ideas. No matter where your application runs, in Kubernetes or anywhere else, there are four concerns we are going to have to address:

  1. Processes: what needs to run, and how should those processes be arranged into pods and controllers?
  2. Networking: how will those processes communicate reliably with each other and with the outside world?
  3. Configuration: how do we provide environment-specific configuration without rebuilding our containers?
  4. Storage: how do we persist data beyond the lifecycle of any individual container?

Designing Your First Application in Kubernetes, Part 5: Provisioning Storage

In this blog series on Kubernetes, we’ve already covered:

  1. The basic setup for building applications in Kubernetes
  2. How to set up processes using pods and controllers
  3. Configuring Kubernetes networking services to allow pods to communicate reliably
  4. How to identify and manage the environment-specific configurations to make applications portable between environments

In this series’ final installment, I’ll explain how to provision storage to a Kubernetes application. 

Step 4: Provisioning Storage

The final component we want to think about when we build applications for Kubernetes is storage. Remember, a container’s filesystem is transient, and any data kept there is at risk of being deleted along with your container if that container ever exits or is rescheduled. If we want to guarantee that data lives beyond the short lifecycle of a container, we must write it out to external storage.

Any container that generates or collects valuable data should be pushing that data out to stable external storage. In our web app example, the database tier should be pushing its on-disk contents out to external storage so they can survive a catastrophic failure of our database pods.
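
As a concrete illustration of that pattern, here is a minimal sketch of how a database tier might claim external storage with a PersistentVolumeClaim and mount it into its pods. The object names, image, storage size, and the use of the cluster’s default storage class are my own illustrative assumptions, not manifests from the series:

```yaml
# Hypothetical claim for the database tier's data; relies on the
# cluster's default StorageClass to provision the underlying volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# The database pod template mounts the claim, so the on-disk contents
# live in external storage and survive the loss of any individual pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:13
          env:
            - name: POSTGRES_PASSWORD   # in real use, source this from a Secret
              value: example-only
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: db-data
```

If this pod is ever rescheduled, its replacement re-attaches the same claim, so the data outlives any individual container.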

Similarly, any container that requires the provisioning of a lot of data should be getting Continue reading

Designing Your First Application in Kubernetes, Part 4: Configuration

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In part 3, I explained how to configure networking services in Kubernetes to allow pods to communicate reliably with each other. In this installment, I’ll explain how to identify and manage the environment-specific configurations expected by your application to ensure its portability between environments.

Factoring out Configuration

One of the core design principles of any containerized app must be portability. We absolutely do not want to reengineer our containers, or even the controllers that manage them, for every environment. One of the most common reasons an application works in one place but not another is a problem with the environment-specific configuration expected by that app.

A well-designed application should treat configuration like an independent object, separate from the containers themselves, that’s provisioned to them at runtime. That way, when you move your app from one environment to another, you don’t need to rewrite any of your containers or controllers; you simply provide a configuration object appropriate to this new environment, leaving everything else untouched.
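
To make that idea concrete, here is a minimal sketch of configuration factored out into a ConfigMap and injected at runtime as environment variables. The object names, keys, values, and image are illustrative assumptions on my part:

```yaml
# Hypothetical ConfigMap holding environment-specific settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  DATABASE_HOST: db.staging.internal
  LOG_LEVEL: debug
---
# The pod template consumes the ConfigMap at runtime, so the container
# image and controller stay unchanged from one environment to the next.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: example/webapp:1.0   # hypothetical image name
          envFrom:
            - configMapRef:
                name: webapp-config
```

Moving this app to production would then mean supplying a different ConfigMap (or a Secret for sensitive values) while everything else stays untouched.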

When we design applications, we need to identify what Continue reading

Designing Your First Application in Kubernetes, Part 3: Communicating via Services

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In this post, I’ll explain how to configure networking services in Kubernetes to allow pods to communicate reliably with each other.

Setting up Communication via Services 

At this point, we’ve deployed our workloads as pods managed by controllers, but there’s no reliable, practical way for pods to communicate with each other, nor is there any way for us to reach a network-facing pod from outside the cluster. The Kubernetes networking model says that, by default, any pod can reach any other pod at the target pod’s IP, but discovering those IPs by hand, and maintaining that list while pods are rescheduled and assigned entirely new IPs, would be a lot of tedious, fragile work.

Instead, we need to think about Kubernetes services when we’re ready to start building the networking part of our application. Kubernetes services provide reliable, simple networking endpoints for routing traffic to pods via the fixed metadata defined in the controller that created them, rather than via unreliable pod IPs. For simple applications, Continue reading
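
For example, a minimal Service for a web tier might look like the sketch below; the name, labels, and ports are assumptions for illustration. The Service selects pods by label, so it keeps routing correctly no matter how often the underlying pods are rescheduled:

```yaml
# Hypothetical Service fronting the web tier. Traffic sent to the
# Service's stable name and cluster IP is routed to whichever pods
# currently match the selector.
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp        # matches the labels set in the controller's pod template
  ports:
    - port: 80         # stable port exposed by the Service
      targetPort: 8080 # port the container actually listens on
```

Other pods in the cluster can then reach the web tier at the stable DNS name `webapp` on port 80, regardless of which pod IPs currently back it.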

Designing Your First App in Kubernetes, Part 2: Setting up Processes

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series. In this post, I’ll explain how to use pods and controllers to create scalable processes for managing your applications.

Processes as Pods & Controllers in Kubernetes

The heart of any application is its running processes, and in Kubernetes we fundamentally create processes as pods. Pods are a bit fancier than individual containers, in that they can schedule whole groups of containers, co-located on a single host, which brings us to our first decision point:

Decision #1: How should our processes be arranged into pods?

The original idea behind a pod was to emulate a logical host – not unlike a VM. The containers in a pod will always be scheduled on the same Kubernetes node, and they’ll be able to communicate with each other via localhost, making pods a good representation of clusters of processes that need to work together closely. 

A pod can contain one or more containers, but containers in the pod must scale together.

But there’s an important consideration: it’s not possible to scale individual containers in a pod separately from each other. If you need to scale Continue reading
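
As a sketch of what such a pod can look like, here is a hypothetical two-container pod that co-locates a web server with a log-shipping sidecar; the names and images are illustrative assumptions. Both containers are scheduled on the same node, share the pod’s network namespace (so they can talk over localhost), and always scale together as one unit:

```yaml
# Hypothetical pod co-locating a web server with a logging sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-sidecar
  labels:
    app: webapp
spec:
  containers:
    - name: webapp
      image: example/webapp:1.0        # hypothetical image name
      ports:
        - containerPort: 8080
    - name: log-shipper
      image: example/log-shipper:1.0   # hypothetical sidecar image
```

In practice you would usually create pods like this through a controller such as a Deployment rather than directly, so they can be rescheduled and scaled as a group.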

Designing Your First App in Kubernetes, Part 1: Getting Started

Image credit: Evan Lovely

Kubernetes: Always Powerful, Occasionally Unwieldy

Kubernetes’ gravity as the container orchestrator of choice continues to grow, and for good reason: It has the broadest capabilities of any container orchestrator available today. But all that power comes at a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but how to actually fly the thing is not obvious.

Kubernetes’ complexity is overwhelming for a lot of people jumping in for the first time. In this blog series, I’m going to walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. I’m not, however, going to spend much time reviewing 12-factor design principles and microservice architecture; there are some excellent ideas in those sorts of strategic discussions with which anyone designing an application should be familiar, but here on the Docker Training Team I like to keep the focus on concrete, hands-on-keyboard implementation as much as possible.

Furthermore, while my focus is on application architecture, I would strongly encourage DevOps engineers and developers building for Kubernetes to follow along, in addition to readers in application architecture Continue reading