Archive

Category Archives for "Systems"

Converting Kubernetes to an HA Control Plane

While hanging out in the Kubernetes Slack community, one question I’ve seen asked multiple times involves switching a Kubernetes cluster from a non-HA control plane (single control plane node) to an HA control plane (multiple control plane nodes). As far as I am aware, this isn’t documented upstream, so I thought I’d walk readers through what this process looks like.

I’m making the following assumptions:

  • The existing single control plane node was bootstrapped using kubeadm. (This means we’ll use kubeadm to add the additional control plane nodes.)
  • The existing single control plane node is using a “stacked configuration,” in which both etcd and the Kubernetes control plane components are running on the same nodes.

I’d also like to point out that there are a lot of different configurations and variables that come into play with a process like this. It’s (nearly) impossible to cover them all in a single blog post, so this post attempts to address what I believe to be the most common situations.

With those assumptions and that caveat in mind, the high-level overview of the process looks like this:

  1. Create a load balancer for the control plane.
  2. Update the API server’s certificate.
  3. Update the kubelet Continue reading
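
As a rough sketch of where this process lands (assuming kubeadm v1.15 or later, which supports the --control-plane and --certificate-key flags; all values below are placeholders), joining an additional control plane node looks something like this:

# On the existing control plane node, upload the shared certificates
# and note the certificate key this command prints:
sudo kubeadm init phase upload-certs --upload-certs

# On each new control plane node, join through the load balancer:
sudo kubeadm join <load-balancer-address>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--control-plane --certificate-key <key-from-upload-certs>

Generating the token and hash, and repointing existing components at the load balancer, are covered in the full post.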

See Docker Enterprise 3.0 in Action in Our Upcoming Webinar Series

Docker Enterprise 3.0 represents a significant milestone for the industry-leading enterprise container platform. It is the only end-to-end solution for Kubernetes and modern applications that spans from the desktop to the cloud.  With Docker Enterprise 3.0, organizations can build, share, and run modern applications of any language or technology stack, on their choice of infrastructure and operating system.
To showcase all of the capabilities of the platform and highlight what is new in this release, we invite you to join our 5-part webinar series exploring the technologies that make up Docker Enterprise 3.0. You’ll see several demos of the platform and gain a better understanding of how Docker can help your organization deliver high-velocity innovation while providing the choice and security you need. We designed the webinars both for those new to containers and Kubernetes and for those who are just here to learn more about what’s new. We’re excited to share what we’ve been working on.
Here’s an overview of what we’ll be covering in each session.

Part 1: Content Management

Tuesday, August 13, 2019 @ 11am PDT / 2pm EDT
This webinar will cover the Continue reading

Docker Release Party Recap

We Celebrated the Launch of Docker Enterprise 3.0 and Docker 19.03 Last Week

Last week, Docker Captain Bret Fisher hosted a 3-day Release Party for Docker 19.03 and Docker Enterprise 3.0. Captains and the Docker team demonstrated some of their favorite new features and answered live audience questions. Here are the highlights (you can check out the full release party here).

Docker Desktop Enterprise

To kick things off, Docker Product Manager Ben De St Paer-Gotch shared Docker Desktop Enterprise. Docker Desktop Enterprise ships with the Enterprise Engine and includes a number of features that make enterprise development easier and more productive. For example, version packs allow developers to switch between Docker Engine versions and Kubernetes versions, all from the desktop.

For admins, Docker Desktop Enterprise includes the ability to lock down the settings of Docker Desktop, so developers’ machines stay aligned with corporate requirements. Ben also demonstrated Docker Application Designer, a feature that allows users to create new Docker applications by using a library of templates, making it easier for developers in the enterprise to get updated app templates – or “gold standard” versions like the right environment variable settings, custom code, custom editor settings, Continue reading

Technology Short Take 117

Welcome to Technology Short Take #117! Here’s my latest gathering of links and articles from around the World Wide Web (an “old school” reference for you right there). I’ve got a little bit of something for most everyone, except for the storage nerds (I’m leaving that to my friend J Metz this time around). Here’s hoping you find something useful!

Networking

Servers/Hardware

Security

Accessing the Docker Daemon via an SSH Bastion Host

Today I came across this article, which informed me that (as of the 18.09 release) you can use SSH to connect to a Docker daemon remotely. That’s handy! The article uses docker-machine (a useful but underrated tool, I think) to demonstrate, but the first question in my mind was this: can I do this through an SSH bastion host? Read on for the answer.

If you’re not familiar with the concept of an SSH bastion host, it is a (typically hardened) host through which you, as a user, would proxy your SSH connections to other hosts. For example, you may have a bunch of EC2 instances in an AWS VPC that do not have public IP addresses. (That’s reasonable.) You could use an SSH bastion host—which would require a public IP address—to enable SSH access to otherwise inaccessible hosts. I wrote a post about using SSH bastion hosts back in 2015; give that post a read for more details.

The syntax for connecting to a Docker daemon via SSH looks something like this:

docker -H ssh://user@host <command>

So, if you wanted to run docker container ls to list the containers running on a remote system, you’d Continue reading
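
To answer the bastion question up front: because the Docker CLI invokes your system’s ssh client for ssh:// URLs, the usual ~/.ssh/config mechanisms apply. Here’s a minimal sketch (host names, users, and addresses are hypothetical; ProxyJump requires OpenSSH 7.3 or later):

Host bastion
    HostName bastion.example.com
    User admin

Host docker-host
    HostName 10.0.1.10
    User admin
    ProxyJump bastion

With something like that in place, docker -H ssh://docker-host container ls transparently proxies the connection through the bastion host.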

Kubernetes Operators with Ansible Deep Dive: Part 2

In part 1 of this series, we looked at operators overall, and what they do in OpenShift/Kubernetes. We peeked at the Operator SDK, and why you'd want to use an Ansible Operator rather than other kinds of operators provided by the SDK. We also explored how Ansible Operators are structured and the relevant files created by the Operator SDK when building Kubernetes Operators with Ansible.

In this second part of the deep dive series, we’ll:

  1. Create an OpenShift project and deploy a Galera Operator.
  2. Check the MySQL cluster, then set up and test a Galera cluster.
  3. Test scaling down and disaster recovery, and demonstrate cleaning up.

Creating the project and deploying the operator

We start by creating a new project in OpenShift, which we'll simply call test:

$ oc new-project test --display-name="Testing Ansible Operator"
Now using project "test" on server "https://ec2-xx-yy-zz-1.us-east-2.compute.amazonaws.com:8443".
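
From here, deploying the operator is a matter of creating the resources the Operator SDK generated for us (the file names below assume the SDK’s default deploy/ layout and are illustrative):

oc create -f deploy/crds/galera_crd.yaml
oc create -f deploy/service_account.yaml
oc create -f deploy/role.yaml
oc create -f deploy/role_binding.yaml
oc create -f deploy/operator.yaml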

We won't delve too much into this role; however, the basic operation is:

  1. Use set_fact to generate variables using the k8s lookup plugin or other variables defined in defaults/main.yml.
  2. Determine if any corrective action needs to be taken based on the above variables. For example, one Continue reading

5 Things to Try with Docker Desktop WSL 2 Tech Preview

We are pleased to announce the availability of our Technical Preview of Docker Desktop for WSL 2! 

As a refresher, this preview makes use of the new Windows Subsystem for Linux (WSL) version that Microsoft recently made available on the Windows Insider fast ring. It has allowed us to improve file system sharing and boot time, and to give Docker Desktop users access to some new features. 

To do this, we have changed quite a bit about how we interact with the operating system compared to Docker Desktop on Windows today. 

To learn more about the full feature set, have a look at our previous blog posts: Get Ready for Tech Preview of Docker Desktop for WSL 2 and Docker WSL 2 – The Future of Docker Desktop for Windows.

Want to give it a go?

  1. Get set up on a Windows machine on the latest Windows Insider build. The first step is heading over to Microsoft and getting set up as a Windows Insider: https://insider.windows.com/en-gb/getting-started/ 

  2. You’ll need to install the latest release branch (at least build version 18932) and you will then want to enable the WSL 2 feature in Windows: https://docs.microsoft. Continue reading
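
Once you’re on the right build, the conversion itself is quick. A sketch of the relevant commands (run from an elevated PowerShell; the distro name is an example):

# Enable the VM platform that WSL 2 depends on
Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform

# Convert an existing distro to WSL 2 and make version 2 the default
wsl --set-version Ubuntu-18.04 2
wsl --set-default-version 2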

Write Maintainable Integration Tests with Docker

Testcontainers is an open source community focused on making integration tests easier across many languages. Gianluca Arbezzano is a Docker Captain, SRE at InfluxData, and the maintainer of the Golang implementation of Testcontainers, which uses the Docker API to expose a test-friendly library that you can use in your test cases. 

Photo by Markus Spiske on Unsplash.
The popularity of microservices and the use of third-party services for non-business-critical features has drastically increased the number of integrations that make up the modern application. These days, it is commonplace to use MySQL, Redis as a key value store, MongoDB, Postgres, and InfluxDB – and that is all just for the databases – let alone the multiple services that make up other parts of the application.

All of these integration points require different layers of testing. Unit tests increase how fast you write code because you can mock all of your dependencies, set the expectation for your function and iterate until you get the desired transformation. But, we need more. We need to make sure that the integration with Redis, MongoDB or a microservice works as expected, not just that the mock works as we wrote it. Both are Continue reading

Decoding a Kubernetes Service Account Token

Recently, while troubleshooting a separate issue, I had a need to get more information about the token used by Kubernetes Service Accounts. In this post, I’ll share a quick command line that can fully decode a Service Account token.

Service Account tokens are stored as Secrets in the “kube-system” namespace of a Kubernetes cluster. To retrieve just the token portion of the Secret, use -o jsonpath like this (replace “sa-token” with the appropriate name for your environment):

kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}'

The output is Base64-encoded, so just pipe the output into base64:

kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode

The result you’re seeing is a JSON Web Token (JWT). You could use the JWT web site to decode the token, but given that I’m a fan of the CLI I decided to use this JWT CLI utility instead:

kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode | \
jwt decode -

The final -, for those who may not be familiar, is the syntax to tell the jwt utility to look at STDIN for the JWT it needs to Continue reading
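
If you don’t have a JWT-specific tool handy, you can approximate the same result with standard utilities by splitting the token on its dots and decoding the payload (the second field). One caveat: JWTs use unpadded base64url encoding, so base64 --decode may complain about padding even while emitting usable output:

kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode | \
cut -d '.' -f 2 | base64 --decode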

How to compile OpenWrt and still use the official repository

Overview

We all know what OpenWrt is: the amazing Linux distro built specifically for embedded devices.

What you can achieve with a rather cheap router running OpenWrt is mind-boggling.

OpenWrt also gives you great control over its build system. For normal cases, you probably don’t need to build OpenWrt from source yourself. That has been done for you already, and all you need to do is download the appropriate compiled firmware image and then upload it to your router.

But for more advanced usages, you may find yourself needing to build OpenWrt images yourself. This could be due to wanting to make some changes to the code, add some device-specific options, etc.

Building OpenWrt from source is easy, well-documented, and works great. That is, until you start using opkg to install some new packages.

opkg will by default fetch new packages from the official repository (as one might expect), but depending on the package, the installation may fail.

If you only want to add/remove some packages from a firmware, building OpenWrt from scratch is overkill. You want to use OpenWrt Image Builder instead. OpenWrt Image Builder also does not suffer from Continue reading
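
To give a sense of scale: a typical Image Builder run is a single make invocation from the unpacked Image Builder archive, naming the device profile and the packages to add or (with a leading minus) remove. The profile and package names here are examples:

make image PROFILE="tl-wr1043nd-v4" PACKAGES="tcpdump -ppp"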

Top 4 Tactics To Keep Node.js Rockin’ in Docker

This is a guest post from Docker Captain Bret Fisher, a long-time DevOps sysadmin and speaker who teaches container skills through his popular Docker Mastery courses (including Docker Mastery for Node.js) and weekly YouTube Live shows, and who consults with companies adopting Docker. Join Bret for an online meetup on August 28th, where he’ll give demos and Q&A on Node.js and Docker topics.

Foxy, my Docker Mastery mascot, is a fan of Node and Docker
We’ve all got our favorite languages and frameworks, and Node.js is tops for me. I’ve run Node.js in Docker since the early days for mission-critical apps. I’m on a mission to educate everyone on how to get the most out of this framework and its tools like npm, Yarn, and nodemon with Docker.

There’s a ton of info out there on using Node.js with Docker, but so much of it is years out of date, and I’m here to help you optimize your setups for Node.js 10+ and Docker 18.09+. If you’d rather watch my DockerCon 2019 talk that covers these topics and more, check it out on YouTube.

Let’s go through 4 steps Continue reading
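
To give a flavor of the kind of optimizations involved, here’s a generic sketch reflecting widely shared Node.js-in-Docker recommendations (not necessarily the exact steps from the talk; server.js is a placeholder): pin an exact Node version rather than :latest, copy the package manifests and run npm ci before copying source so dependency layers cache well, and run as the built-in non-root node user.

FROM node:10.16-slim
ENV NODE_ENV=production
WORKDIR /app
# Copy only the manifests first so dependency installs are cached
COPY package.json package-lock.json ./
RUN npm ci --only=production
# Now copy the application source
COPY . .
# Drop root privileges using the user baked into the official image
USER node
CMD ["node", "server.js"]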

Adding a Name to the Kubernetes API Server Certificate

In this post, I’m going to walk you through how to add a name (specifically, a Subject Alternative Name) to the TLS certificate used by the Kubernetes API server. Updating the certificate to include a name that wasn’t originally present is useful in a few different scenarios: adding a load balancer in front of the control plane, for example, or using a new or different URL/hostname to access the API server (both after the cluster was bootstrapped).

This process does assume that the cluster was bootstrapped using kubeadm. This could’ve been a simple kubeadm init with no customization, or it could’ve been using a configuration file to modify the behavior of kubeadm when bootstrapping the cluster. This process also assumes your Kubernetes cluster is using the default certificate authority (CA) created by kubeadm when bootstrapping a cluster. Finally, this process assumes you are using a non-HA (single control plane node) configuration.

Before getting into the details of how to update the certificate, I’d like to first provide a bit of background on why this is important.

Background

The Kubernetes API server uses digital certificates to both Continue reading
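
Skipping ahead to the punchline (assuming a kubeadm configuration file, here called kubeadm.yaml, that lists the new name under apiServer.certSANs), regenerating the certificate comes down to moving the old certificate and key aside and re-running the relevant kubeadm phase:

# Move the existing certificate and key out of the way
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /tmp/

# Regenerate the API server certificate with the updated SAN list
sudo kubeadm init phase certs apiserver --config kubeadm.yaml

After that, the kube-apiserver container needs to be restarted so it picks up the new certificate.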

Accelerate Application Delivery with Application Templates in Docker Desktop Enterprise


The Application Templates interface.
Docker Enterprise 3.0, now generally available, includes several new features that make it simpler and faster for developers to build and deliver modern applications in the world of Docker containers and Kubernetes. One such feature is the new Application Templates interface that is included with Docker Desktop Enterprise.
Application Templates enable developers to build modern applications using a library of predefined and organization-approved application and service templates, without requiring prior knowledge of Docker commands. By providing re-usable “scaffolding” for developing modern container-based applications, Application Templates accelerate developer onboarding and improve productivity.
The Application Templates themselves include many of the discrete components required for developing a new application, including the Dockerfile, custom base images, common compose service YAML, and application parameters (external ports and upstream image versions). They can even include boilerplate code and code editor configs.
With Application Templates, development leads, application architects, and security and operations teams can customize and share application and service templates that align to corporate standards. As a developer, you know you’re starting from pre-approved templates that  eliminate time-consuming configuration steps and error-prone manual setup. Instead, you have the freedom to customize and experiment so you can focus on Continue reading

Kubernetes Operators with Ansible Deep Dive: Part 1

This deep dive series assumes the reader has access to a Kubernetes test environment. A tool like minikube is an acceptable platform for the purposes of this article. If you are an existing Red Hat customer, another option is spinning up an OpenShift cluster through cloud.redhat.com. This SaaS portal makes trying OpenShift a turnkey operation.

In this part of this deep dive series, we'll:

  1. Take a look at operators overall, and what they do in OpenShift/Kubernetes.
  2. Take a quick look at the Operator SDK, and why you'd want to use an Ansible operator rather than other kinds of operators provided by the SDK.
  3. Finally, look at how Ansible Operators are structured and the relevant files created by the Operator SDK.

What Are Operators?

For those who may not be very familiar with Kubernetes: in its most simplistic description, it is a resource manager. Users specify how much of a given resource they want and Kubernetes manages those resources to achieve the state the user specified. These resources can be pods (which contain one or more containers), persistent volumes, or even custom resources defined by users.

This makes Kubernetes useful for managing resources that don't contain any state (like Continue reading

Introducing the new Docker Technology Partner Program

We’re pleased to announce the launch of the Docker Technology Partner (DTP) program as a strong foundation for ongoing collaboration with our ecosystem partners. Through the new program, Docker and our partners will accelerate the delivery of proven, collaborative solutions to our enterprise customers. 
Our industry-leading container platform has become central to continuous, high-velocity innovation for more than 750 enterprises around the world. As such, we recognized the need to enhance our partner program to make it easier for customers to identify key partners from the ecosystem that will provide them with the most value. The DTP program is designed to ensure that Docker customers across a variety of company sizes and industries have access to our massive ecosystem of partners and are able to integrate Docker containers with other chosen technologies. This program provides clear insight into our formal partnerships, as well as the depth of joint product integration. 
Our partners also receive due recognition for their hard work in ensuring compatibility and support with Docker Enterprise. As always, we truly do appreciate the continued support of our partners, and are proud to showcase their accomplishments in integrating and validating with the Docker platform. Continue reading

Thoughts on Restructuring the Ansible Project

Ansible became popular largely because we adopted some key principles early, and stuck to them.

The first key principle was simplicity: simple to install, simple to use, simple to find documentation and examples, simple to write playbooks, and simple to make contributions.

The second key principle was modularity: Ansible functionality could be easily extended by writing modules, and anyone could write a module and contribute it back to Ansible.

The third key principle was “batteries included”: all of the modules for Ansible would be built-in, so you wouldn’t have to figure out where to get them. They’d just be there.

We’ve come a long way by following these principles, and we intend to stick to them.

Recently though, we’ve been reevaluating how we might better structure Ansible to support these principles. We now find ourselves dealing with problems of scale that are becoming more challenging to solve. Jan-Piet Mens, who has continued to be a close friend to Ansible since our very earliest days, recently described those problems quite succinctly from his perspective as a long-time contributor -- and I think his analysis of the problems we face is quite accurate. Simply, we’ve become victims of our own success.

Success Continue reading

The Future of Ansible Content Delivery

Every day, I’m in awe of what Ansible has grown to be. The incredible growth of the community and viral adoption of the technology have resulted in a content management challenge for the project.

I don’t want to echo a lot of what’s been said by our dear friend Jan-Piet Mens or our incredible Community team, but give me a moment to take a shot at it.

Our main challenge is rooted in the ability to scale. The volume of pull requests and issues we see day to day severely outweighs the ability of the Ansible community to keep up with that rate of change.

As a result, we are embarking on a journey. This journey is one that we know that the community, both our content creators and content consumers, will be interested in hearing about.

This New World Order (tongue in cheek), as we’ve been calling it, is a model that will allow us to empower the community of contributors of Ansible content (read: modules, plugins, and roles) to provide their content at their own pace.

To do this, we have made some changes to how Ansible leverages content that is not “shipped” with it. In short, Continue reading

VMworld 2019 Prayer Time

For the last several years, I’ve organized a brief morning prayer time at VMworld. I didn’t attend the conference last year, but organized a prayer time nevertheless (and was able to join one morning for prayer). This year, now that I’m back at VMware (via the Heptio acquisition) and speaking at the conference, I’d once again like to coordinate a time for believers to meet. So, if you’re a Christian interested in gathering together with other Christians for a brief time of prayer, here are the details.

What: A brief time of prayer

Where: Yerba Buena Gardens behind Moscone North (near the waterfall)

When: Monday 8/26 through Thursday 8/29 at 7:45am (this should give everyone enough time to grab breakfast before keynotes/sessions start at 9am)

Who: All courteous attendees are welcome, but please note this will be a distinctly Christian-focused and Christ-centric activity (note that I encourage believers of other faiths/religions to organize equivalent activities)

Why: To spend a few minutes in prayer over the day, the conference, the attendees, and each other

As in previous years, you don’t need to RSVP or anything like that, although you’re welcome to if you’d like (just hit me up on Twitter).

Continue reading

Announcing Docker Enterprise 3.0 General Availability


Today, we’re excited to announce the general availability of Docker Enterprise 3.0 – the only desktop-to-cloud enterprise container platform, enabling organizations to build and share any application and securely run it anywhere – from hybrid cloud to the edge.

Docker Enterprise 3.0 Demo

Leading up to GA, more than 2,000 people participated in the Docker Enterprise 3.0 public beta program to try it for themselves. We gathered feedback from some of these beta participants to find out what excites them most about the latest iteration of Docker Enterprise. Here are 3 things that customers are excited about and the features that support them:

Simplifying Kubernetes

Kubernetes is a powerful orchestration technology but due to its inherent complexity, many enterprises (including Docker customers) have struggled to realize the full value of Kubernetes on their own. Much of Kubernetes’ perceived complexity stems from a lack of intuitive security and manageability configurations that most enterprises expect and require for production-grade software. We’re addressing this challenge with Docker Kubernetes Service (DKS) – a Certified Kubernetes distribution that is included with Docker Enterprise 3.0. It’s the only offering that integrates Kubernetes from the developer desktop to production servers, with ‘sensible secure Continue reading

Ansible + ServiceNow Part 2: Parsing facts from network devices using PyATS/Genie

This blog is part two in a series covering how Red Hat Ansible Automation can integrate with ticket automation. This time we’ll cover dynamically adding a set of network facts from your switches and routers and into your ServiceNow tickets. If you missed Part 1 of this blog series, you can refer to it via the following link: Ansible + ServiceNow Part 1: Opening and Closing Tickets.

Suppose there was a certain network operating system software version that contained an issue you knew was always causing problems and making your uptime SLA suffer. How could you convince your management to finance an upgrade project? How could you justify to them that the fix would be well worth the cost? Better yet, how would you even know?

A great start would be having metrics that you could track. The ability to data mine against your tickets would prove just how many tickets were involved with hardware running that buggy software version. In this blog, I’ll show you how to automate adding a set of facts to all of your tickets going forward. Indisputable facts can then be pulled directly from the device with no chance of mistakes or accidentally being overlooked Continue reading
