I’ve written a few different posts on setting up etcd. There’s this one on bootstrapping a TLS-secured etcd cluster with kubeadm, and there’s this one about using kubeadm to run an etcd cluster as static Pods. There’s also this one about using kubeadm to run etcd with containerd. In this article, I’ll provide yet another way of setting up a “best practices” etcd cluster, this time using a tool named etcdadm.
etcdadm is an open source project, originally started by Platform9 (here’s the blog post announcing the project being open sourced). As the README in the GitHub repository mentions, the user experience for etcdadm “is inspired by kubeadm.”
The instructions in the repository indicate that you can use go get -u sigs.k8s.io/etcdadm, but I ran into problems with that approach (using Go 1.14). At the suggestion of one of the maintainers, I also tried Go 1.12, but that failed both on my main Ubuntu laptop and on a clean Ubuntu VM. However, running make etcdadm in a clone of the repository worked, and one of the maintainers indicated the documentation will be updated to reflect this approach Continue reading
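In case it’s useful, here is a hedged sketch of that build-from-source approach (the repository URL reflects the project’s GitHub home; the install location is my own choice, not from the post):
# Clone the etcdadm repository and build the binary with make
git clone https://github.com/kubernetes-sigs/etcdadm.git
cd etcdadm
make etcdadm
# Copy the resulting etcdadm binary somewhere on your PATH (output path may differ)
sudo cp etcdadm /usr/local/bin/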

After receiving many excellent CFP submissions, we are thrilled to finally announce the first round of speakers for DockerCon LIVE on May 28th starting at 9am PT / GMT-7. Check out the agenda here.
In order to maximize the opportunity to connect with speakers and learn from their experience, talks are pre-recorded and speakers are available for live Q&A throughout their session. From best practices and how-tos to new product features and use cases; from technical deep dives to open source projects in action, there are a lot of great sessions to choose from, like:

Simon Ferquel, Docker

Julie Lerman, The Data Farm

Lukonde Mwila, Entelect

Clemente Biondo, Engineering Ingegneria Informatica

Erika Heidi, Digital Ocean

Elton Stoneman, Container Consultant and Trainer

Brandon Mitchell, Boxboat
If you’ve used Cluster API (CAPI), you may have noticed that workload clusters created by CAPI use, by default, a “stacked master” configuration—that is, the etcd cluster is running co-located on the control plane node(s) alongside the Kubernetes control plane components. This is a very common configuration and is well-suited for most deployments, so it makes perfect sense that this is the default. There may be cases, however, where you’ll want to use a dedicated, external etcd cluster for your Kubernetes clusters. In this post, I’ll show you how to use an external etcd cluster with CAPI on AWS.
The information in this blog post is based on this upstream document. I’ll be adding a little bit of AWS-specific information, since I primarily use the AWS provider for CAPI. This post is written with CAPI v1alpha3 in mind.
The key to this solution is building upon the fact that CAPI leverages kubeadm for bootstrapping cluster nodes. This puts the full power of the kubeadm API at your fingertips—which in turn means you have a great deal of flexibility. This is the mechanism whereby you can tell CAPI to use an external etcd cluster instead of creating a co-located etcd Continue reading
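For reference, here is a hedged sketch of the external-etcd stanza that kubeadm understands; with CAPI this configuration ends up embedded in the KubeadmControlPlane object rather than being written to disk by hand, and the endpoint address and certificate paths below are placeholders:
# Scratch file purely to show the shape of the etcd.external fields kubeadm expects
cat <<'EOF' > external-etcd-snippet.yaml
etcd:
  external:
    endpoints:
      - https://10.0.0.10:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF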
I’ve written before about how to use existing AWS infrastructure with Cluster API (CAPI), and I was recently able to help update the upstream documentation on this topic (the upstream documentation should now be considered the authoritative source). These instructions are perfect for placing a Kubernetes cluster into an existing VPC and associated subnets, but there’s one scenario that they don’t yet address: what if you need your CAPI workload cluster to be able to communicate with other EC2 instances or other AWS services in the same VPC? In this post, I’ll show you the CAPI functionality that makes this possible.
One of the primary mechanisms used in AWS to control communications among instances and services is the security group. I won’t go into any detail on security groups, but this page from AWS provides an explanation and overview of how security groups work.
In order to make a CAPI workload cluster able to communicate with other EC2 instances or other AWS services, you’ll need to somehow use security groups to make that happen. There are at least two—possibly more—ways to accomplish this:
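One of those ways, sketched below with placeholder security group IDs (this is my own hedged sketch, not taken from the original post), is to authorize traffic from the workload cluster’s node security group into the security group protecting the other EC2 instances or services:
# Allow the workload cluster's nodes (sg-0123456789abcdef0) to reach an existing
# service's security group (sg-0fedcba9876543210) on TCP port 5432
aws ec2 authorize-security-group-ingress \
  --group-id sg-0fedcba9876543210 \
  --protocol tcp \
  --port 5432 \
  --source-group sg-0123456789abcdef0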
In today’s fast-paced development world, CTOs, dev managers and product managers demand quicker turnarounds for features and defect fixes. “No problem, boss,” you say. “We’ll just use containers.” And you would be right, but once you start digging in and looking at ways to get started with containers, well, quite frankly, it’s complex.
One of the biggest challenges is getting a toolset installed and set up where you can build images, run containers and duplicate a production Kubernetes cluster locally. And then shipping containers to the cloud, well, that’s a whole ’nother story.
Docker Desktop and Docker Hub are two of the foundational toolsets to get your images built and shipped to the cloud. In this two-part series, we’ll get Docker Desktop set up and installed, build some images and run them using Docker Compose. Then we’ll take a look at how we can ship those images to the cloud, set up automated builds, and deploy our code into production using Docker Hub.
Docker Desktop is the easiest way to get started with containers on your development machine. Docker Desktop comes with the Docker Engine, the Docker CLI, Docker Compose and Kubernetes. With Docker Desktop there Continue reading
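To give a feel for the workflow this series walks through, here is a hedged sketch of the basic build-and-ship loop (the image name and Docker Hub account are placeholders):
# Build an image from the Dockerfile in the current directory
docker build -t mydockerid/myapp:1.0 .
# Run it locally to make sure it works
docker run -d -p 8080:80 mydockerid/myapp:1.0
# Push it to Docker Hub so it can be deployed to the cloud
docker push mydockerid/myapp:1.0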
When IBM announced that it was acquiring Red Hat for $34 billion eighteen months ago, one of the things we said that Big Blue needed most and would get from taking over – but not messing with – the world’s largest commercial open source software company was a coherent story that it could tell to its customers about how IBM, which more than any other company helped define data processing, was still relevant to the future. …
The Next IBM Platform, Revisited was written by Timothy Prickett Morgan at The Next Platform.
The multistage builds feature in Dockerfiles enables you to create smaller container images with better caching and a smaller security footprint. In this blog post, I’ll show some more advanced patterns that go beyond copying files between a build and a runtime stage, allowing you to get the most out of the feature. If you are new to multistage builds, you probably want to start by reading the usage guide first.
The latest Docker versions come with a new opt-in builder backend, BuildKit. While all the patterns here work with the older builder as well, many of them run much more efficiently when the BuildKit backend is enabled. For example, BuildKit efficiently skips unused stages and builds stages concurrently when possible. I’ve marked these cases under the individual examples. If you use these patterns, enabling BuildKit is strongly recommended. All other BuildKit-based builders support these patterns as well.
• • •
Multistage builds added a couple of new syntax concepts. First of all, you can name a stage that starts with a FROM command with AS stagename and use the --from=stagename option in a COPY command to copy files from that stage. In fact, the FROM command and the --from flag Continue reading
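As a minimal sketch of that syntax (the base images and file names are placeholder assumptions), here is a named build stage plus a COPY --from, built with the opt-in BuildKit backend enabled:
# Write a two-stage Dockerfile: a named build stage and a slim runtime stage
cat <<'EOF' > Dockerfile
FROM golang:1.14 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

FROM alpine:3.11
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]
EOF

# Build with BuildKit enabled
DOCKER_BUILDKIT=1 docker build -t myapp .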
One of the most common challenges we hear from developers is how getting started with containers can sometimes feel daunting. It’s one of the needs Docker is focusing on in its commitment to developers and dev teams. Our two aims: teach developers and help accelerate their onboarding.
With the benefits of Docker so appealing, many developers are eager to get something up and running quickly. That’s why, with Docker Desktop Edge 2.2.3 Release, we have launched a brand new “Quick Start” guide which displays after first installation and shows users the Docker basics: how to quickly clone, build, run, and share an image to Docker Hub directly in Docker Desktop.
To keep everything in one place, we’ve crafted the guide with a built-in terminal so that you can paste commands directly — or type them out yourself. It’s a light-touch and integrated way to get something up and running.
You might expect that this new container you’ve spun up would be just a run-of-the-mill “hello world”. Instead, we’re providing you with a resource for further hands-on learning that you can do at your own pace.
This Docker tutorial, accessible on Continue reading
We recently released a new Edge version, 2.2.3.0, of Docker Desktop for Windows. This can be considered a release candidate for the next Stable version that will officially support WSL 2. With Windows 10 version 2004 in sight, we are putting the final touches on the next version of Docker Desktop to give you the best experience running Linux containers on Windows 10.
One of the great benefits is that with the next update of Windows 10 we will also support running Docker Desktop on Windows 10 Home. We worked closely with Microsoft during the last few months to make Docker Desktop and WSL 2 fit together.
In this blog post we look behind the scenes at how we set up new WSL 2-capable test machines to run automated tests in our CI pipeline.
Let’s keep in mind that all automation somehow starts with manual steps, and you evolve from there to get better and more automated. At the beginning of this project, back at KubeCon 2019, we were given a laptop with an early version of WSL 2.
With that single laptop our development team could start getting their Continue reading

The scale and complexity of modern infrastructures require not only that you be able to define a security policy for your systems, but also that you be able to apply that security policy programmatically or make changes in response to external events. As such, the proper automation tooling is a necessary building block to allow you to apply the appropriate actions in a fast, simple and consistent manner.
Check Point has a certified Ansible Content Collection of modules to help enable organizations to automate their response and remediation practices, and to embrace the DevOps model to accelerate application deployment with operational efficiency. The modules, based on the Check Point security management APIs*, are also available on Ansible Galaxy, in the upstream version of the Check Point Collection for the Management Server.
The operational flow is exactly the same for the API as it is for the Check Point security management GUI SmartConsole, i.e. Login > Get Session > Do changes > Publish > Logout.
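As a hedged sketch of what that flow can look like in a playbook (the inventory group, connection details, and the object being created are all placeholder assumptions; the module names come from the Check Point content mentioned above), you might write something like:
# Minimal playbook sketch: add a host object, then publish the session
cat <<'EOF' > checkpoint-demo.yml
- hosts: checkpoint_mgmt        # placeholder inventory group
  connection: httpapi           # httpapi/credential variables assumed to be set in inventory
  tasks:
    - name: Create a host object
      cp_mgmt_host:
        name: demo-host
        ip_address: 192.0.2.10
        state: present

    - name: Publish the changes
      cp_mgmt_publish: {}
EOF

ansible-playbook checkpoint-demo.yml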
Security professionals can leverage these modules to automate various tasks for the identification, search, and response to security events. Additionally, in combination with other modules that are part of Ansible security automation, existing Continue reading
Last week I wrote a post on using Postman to launch an EC2 instance via API calls. Postman is a cross-platform application, so while my post was centered around Postman on Linux (Ubuntu, specifically) the steps should be very similar—if not exactly the same—when using Postman on other platforms. Users of macOS, however, have another option: a macOS-specific peer to Postman named Paw. In this post, I’ll walk through using Paw to issue API requests to AWS to launch an EC2 instance.
I’ll structure this post as a “diff,” if you will, that outlines the differences between using Paw to launch an EC2 instance via API calls and using Postman to do the same thing. Therefore, if you haven’t already read the Postman post from last week, I strongly recommend reviewing it before proceeding.
This post assumes you’ve already installed Paw on your macOS system. It also assumes you are somewhat familiar with Paw; refer to the Paw documentation if not. Also, to support AWS authentication, please be sure to install the “AWS Signature 4 Auth Dynamic value” extension (see here or here). This extension is necessary in order to have the API requests sent Continue reading

Although March has come and gone, you can still take part in the awesome activities put together by the community to celebrate Docker’s 7th birthday.
Denise Rey and Captains Łukasz Lach, Marcos Nils, Elton Stoneman, Nicholas Dille, and Brandon Mitchell put together an amazing birthday challenge for the community to complete, and it is still available. If you haven’t checked out the hands-on learning content yet, go to the birthday page and earn your seven badges (and don’t forget to share them on Twitter).
Captain Bret Fisher hosted a 3-hour live Birthday Show with the Docker team and Captains. You can check out the whole thing on Docker’s YouTube channel, or skip ahead using the timestamps below:

And while many Community Leaders had to cancel in-person meetups due to the evolving COVID-19 situation, they and their communities still showed up and shared their #mydockerbday stories. There Continue reading
As I mentioned in this post on region and endpoint match in AWS API requests, exploring the AWS APIs is something I’ve been doing off and on for several months. There are a couple of reasons for this; I’ll go into those in a bit more detail shortly. In any case, I’ve been exploring the APIs using Postman (when on Linux) and Paw (when on macOS), and in this post I’ll share how to use Postman to launch an EC2 instance via API calls.
Before I get into the technical details, let me lay out a couple of reasons for spending some time on this. I’m pretty familiar with tools like Terraform and Pulumi (my current favorite), and I’m reasonably familiar with the AWS CLI itself. In looking at working directly with the APIs, I see this as adding a new perspective on how these other tools work. (I’ve found, in fact, that exploring the APIs has improved my usage of the AWS CLI.) Finally, as I try to deepen my knowledge of programming languages, I wanted to have a reasonable knowledge of the APIs before trying to program around the APIs (hopefully this will make the learning curve a bit less Continue reading
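For comparison, here is a hedged sketch of the AWS CLI equivalent of the RunInstances call built up in Postman (the AMI ID, key pair name, and region are placeholders); putting the two side by side is one way the API exploration feeds back into CLI usage:
# AWS CLI equivalent of an EC2 RunInstances API request (all values are placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --count 1 \
  --key-name my-keypair \
  --region us-west-2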

At Docker, we are always looking for ways to make developers’ lives easier either directly or by working with our partners. Improving developer productivity is a core benefit of using Docker products and recently one of our partners made an announcement that makes developing cloud-native apps easier.
AWS announced that its customers can now configure their Amazon Elastic Container Service (ECS) applications deployed in Amazon Elastic Compute Cloud (EC2) mode to access Amazon Elastic File System (EFS) file systems. This is good news for Docker developers who use Amazon ECS. It means that Amazon ECS now natively integrates with Amazon EFS to automatically mount shared file systems into Docker containers. This allows you to deploy workloads that require access to shared storage, such as machine learning workloads, containerized legacy apps, or internal DevOps workloads such as GitLab, Jenkins, or Elasticsearch.
The beauty of containerizing your applications is that it provides a better way to create, package, and deploy software across different computing environments in a predictable and easy-to-manage way. Containers were originally designed to be stateless and ephemeral (temporary). A stateless application is one that neither reads nor stores information about its state from one time that it is run Continue reading
Today, we will demonstrate how to migrate part of the existing Ansible content (modules and plugins) into a dedicated Ansible Collection. We will be using modules for managing DigitalOcean's resources as an example so you can follow along and test your development setup. But first, let us get the big question out of the way: Why would we want to do that?
Ansible on a Diet
In late March 2020, Ansible's main development branch lost almost all of its modules and plugins. Where did they go? Many of them moved to the ansible-collections GitHub organization. More specifically, the vast majority landed in the community.general GitHub repository that serves as their temporary home (refer to the Community overview README for more information).
The ultimate goal is to get as much content as possible in the community.general Ansible Collection “adopted” by a caring team of developers and moved into a dedicated upstream location, with a dedicated Galaxy namespace. Maintainers of the newly migrated Ansible Collection can then set up the development and release processes as they see fit, (almost) free from the requirements of the community.general collection. For more information about the future of Ansible content delivery, please Continue reading
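For anyone following along, here is a hedged sketch of the Galaxy tooling involved in carving content out into a dedicated collection (the namespace and collection name are placeholders; the tarball version comes from the generated galaxy.yml):
# Scaffold a new collection skeleton, then build and install it locally
ansible-galaxy collection init my_namespace.my_collection
cd my_namespace/my_collection
ansible-galaxy collection build
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz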

Docker is pleased to announce that we have created a new open community to develop the Compose Specification. This new community will be run with open governance, with input from all interested parties, allowing us to work together to create a new standard for defining multi-container apps that can be run from the desktop to the cloud.
Docker is working with Amazon Web Services (AWS), Microsoft and others in the open source community to extend the Compose Specification to more flexibly support cloud-native platforms like Kubernetes and Amazon Elastic Container Service (Amazon ECS) in addition to the existing Compose platforms. Opening the specification will allow innovation to flourish and deliver more choices to developers, accelerating how development teams build and ship applications.
Currently used by millions of developers and with over 650,000 Compose files on GitHub, Compose has been widely embraced by developers because it is a simple, cloud- and platform-agnostic way of defining multi-container based applications. Compose dramatically simplifies the code-to-cloud process and toolchain for developers by allowing them to define a complex stack in a single file and run it with a single command. This eliminates the need to build and start every container manually, saving development Continue reading
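As a minimal sketch of what that looks like in practice (the service names and images are placeholders), a two-service stack can be defined in one file and brought up with one command:
# A tiny Compose file defining a web front end and a Redis cache
cat <<'EOF' > docker-compose.yml
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF

# One command to start the whole stack
docker-compose up -d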
Last November we introduced Ansible security automation as our answer to the lack of integration across the IT security industry. Let’s have a closer look at one of the scenarios where Ansible can help with the typical operational challenges of security practitioners.
A big portion of security practitioners' daily activity is dedicated to investigative tasks. Enrichment is one of those tasks, and it can be both repetitive and time-consuming, making it a perfect candidate for automation. Streamlining these processes can free up analysts to focus on more strategic tasks, accelerate the response in time-sensitive situations, and reduce human errors. However, in many large organizations, the multiple security solutions involved in these activities are not integrated with each other. Hence, different teams may be in charge of different aspects of IT security, sometimes with no processes in common.
That often leads to manual work and interaction between people on different teams, which can be error-prone and, above all, slow. So when something suspicious happens and further attention is needed, security teams spend a lot of valuable time working across many different security solutions and coordinating work with other teams, instead of focusing on the suspicious activity directly.
In this blog post we Continue reading
At some point in the last year or so—I don’t know exactly when it happened—Firefox, along with most of the other major browsers, stopped working with file:// URLs. This is a shame, because I like using Markdown for presentations (at least, when it’s a presentation where I don’t need to collaborate with others). However, using this sort of approach generally requires support for file:// URLs (or requires running a local web server). In this post, I’ll show you how to make file:// URLs work again in Firefox.
I tested this procedure using Firefox 74 on Ubuntu, but it should work on any platform on which Firefox is supported. Note that the location of the user.js file will vary from OS to OS; see this MozillaZine Knowledge Base entry for more details.
Here’s the process I followed:
Create the user.js file (it doesn’t exist by default) in the correct location for your Firefox profile. (Refer to the MozillaZine KB article linked above for exactly where that is on your OS.)
In the user.js, add these entries:
// Allow file:// links
user_pref("capability.policy.policynames", "localfilelinks");
user_pref("capability.policy.localfilelinks.sites", "file://");
user_pref("capability.policy.localfilelinks.checkloaduri. Continue readingMarkdown is a core part of many of my workflows. For quite a while, I’ve used Fletcher Penny’s MultiMarkdown processor (available on GitHub) on my various systems. Fletcher offers binary builds for Windows and macOS, but not a Linux binary. Three years ago, I wrote a post on how to compile MultiMarkdown 6 for a Fedora-based system. In this post, I’ll share how to compile it on an Ubuntu-based system.
Just as in the Fedora post, I used Vagrant with the Libvirt provider to spin up a temporary build VM.
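In case it’s helpful, here is a hedged sketch of that step (the box name is an assumption, not necessarily the one used for this build):
# Spin up a throwaway Ubuntu VM with the Libvirt provider and connect to it
vagrant init generic/ubuntu1804
vagrant up --provider=libvirt
vagrant ssh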
In this clean build VM, I perform the following steps to build a multimarkdown binary:
Install the necessary packages with this command:
sudo apt install gcc make cmake git build-essential
Clone the source code repository:
git clone https://github.com/fletcher/MultiMarkdown-6
Switch into the directory where the repository was cloned and run these commands to build the binary:
make
cd build
make
Once the second make command is done, you’re left with a multimarkdown binary. Copy that binary to the host system (scp works fine), then use vagrant destroy to clean up the temporary build VM.
And with that, you’re good to go!
Docker Desktop is getting ready to celebrate its fourth birthday in June this year. We have come a long way from our first version and have big plans for what we would like to do next. As part of our future plans, we are going to be kicking off a new early access program for Docker Desktop called the Docker Desktop Developer Preview, and we need your help!
This program is for a small number of heavy Docker Desktop users who want to interact with the Docker team and impact the future of Docker Desktop for millions of users around the world.
As a member of this group, you will work with us to look at and experiment with our new features. You will get direct access to the people who are building Docker Desktop every day. You will meet with our engineering team, product manager, and community leads to share your feedback, tell us what is working in our new features and how we could improve, and also help us really dig in when something doesn’t work quite right.
On top of that, you will have a chance to Continue reading