If your organization has adopted DevOps culture, you're probably practicing some form of Infrastructure-as-Code (IaC) to manage and provision servers, storage, applications, and networking using human-readable, machine-consumable definitions and automation tools such as Ansible. It's also likely that Git is an essential part of your DevOps toolchain, not only for the development of applications and services, but also for managing your infrastructure definitions.
With the release of Red Hat Ansible Tower v3.6, part of the Red Hat Ansible Automation Platform, we introduced Automation Webhooks, which help enable easier and more intuitive Git-centric workflows natively.
In this post we'll explore how you can make use of Automation Webhooks, but first let's look at IaC for some context and how this feature can be used to your organization's benefit.
Infrastructure-as-Code (IaC) is the practice of managing and provisioning computing resources through human-readable, machine-consumable definition files, rather than through physical hardware configuration or interactive configuration tools. It is considered one of the essential practices of DevOps, along with continuous integration (CI), continuous delivery (CD), observability, and others, used to automate development and operations processes and create a faster release cycle with higher quality. Treating infrastructure like it is code and subjecting Continue reading
Using Cluster API allows users to create new Kubernetes clusters easily using manifests that define the desired state of the new cluster (also referred to as a workload cluster; see here for more terminology). But how does one go about accessing this new workload cluster once it’s up and running? In this post, I’ll show you how to retrieve the Kubeconfig file for a new workload cluster created by Cluster API.
(By the way, there’s nothing new or revolutionary about the information in this post; I’m simply sharing it to help folks who may be new to Cluster API.)
Once CAPI has created the new workload cluster, it will also create a new Kubeconfig for accessing this cluster. This Kubeconfig will be stored as a Secret (a specific type of Kubernetes object). In order to use this Kubeconfig to access your new workload cluster, you'll need to retrieve the Secret, decode it, and then use it with kubectl.
First, you can see the Secret by running kubectl get secrets in the namespace where your new workload cluster was created. If the name of your workload cluster was “workload-cluster-1”, then the name of the Secret created by Cluster API would Continue reading
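As a minimal sketch (assuming the Cluster API convention of storing the Kubeconfig in a Secret named “<cluster-name>-kubeconfig”, with the contents base64-encoded under the value key), retrieving and using it might look like this:
$ kubectl get secret workload-cluster-1-kubeconfig --output jsonpath='{.data.value}' | base64 --decode > workload-cluster-1.kubeconfig
$ kubectl --kubeconfig workload-cluster-1.kubeconfig get nodes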
In October of 2019, as part of Red Hat Ansible Engine 2.9, the Ansible Network Automation team introduced the concept of resource modules. These opinionated network modules make network automation easier and more consistent for those automating various network platforms in production. The goal for resource modules was to avoid creating overly complex Jinja2 templates for rendering network configuration. This blog post goes through the eos_vlans module for the Arista EOS network platform. I walk through several examples, describe the use cases for each state parameter, and explain how we envision these being used in real-world scenarios.
Before starting, let’s quickly explain the rationale behind the naming of the network resource modules. Notice that for resource modules that configure VLANs, there is a singular form (eos_vlan, ios_vlan, junos_vlan, etc.) and a plural form (eos_vlans, ios_vlans, junos_vlans). The new resource modules are the plural form that we are covering today; the singular form has been deprecated. This was done so that those using the existing network modules would not have their Ansible Playbooks stop working and would have sufficient time to migrate to the new network automation modules.
Let's start with an example of the eos_vlans Continue reading
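For context, a minimal sketch of an eos_vlans task using the merged state might look like the following (the VLAN IDs and names here are purely illustrative):
- name: Merge the provided VLAN configuration with the device configuration
  eos_vlans:
    config:
      - vlan_id: 100
        name: engineering
      - vlan_id: 200
        name: sales
    state: merged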
This is a guest post from Docker Captain Elton Stoneman, a Docker alumnus who is now a freelance consultant and trainer, helping organizations at all stages of their container journey. Elton is the author of the book Learn Docker in a Month of Lunches and numerous Pluralsight video training courses, including Managing Apps on Kubernetes with Istio and Monitoring Containerized Application Health with Docker.
Istio is a service mesh – a software component that runs in containers alongside your application containers and takes control of the network traffic between components. It’s a powerful architecture that lets you manage the communication between components independently of the components themselves. That’s useful because it simplifies the code and configuration in your app, removing all network-level infrastructure concerns like routing, load-balancing, authorization and monitoring – which all become centrally managed in Istio.
There’s a lot of good material for digging into Istio. My fellow Docker Captain Lee Calcote is the co-author of Istio: Up and Running, and I’ve just published my own Pluralsight course Managing Apps on Kubernetes with Istio. But it can be a difficult technology to get started with because you really need a solid background in Kubernetes Continue reading
Credit for this post goes to Christian Del Pino, who created this content and was willing to let me publish it here.
Setting up Kubernetes on AWS (including the use of the AWS cloud provider) is a topic I’ve tackled a few different times here on this site (see here, here, and here for other posts on this subject). In this post, I’ll share information provided to me by a reader, Christian Del Pino, about setting up Kubernetes on AWS with kubeadm, but using manual certificate distribution (in other words, not allowing kubeadm to distribute certificates among multiple control plane nodes). As I pointed out above, all this content came from Christian Del Pino; I’m merely sharing it here with his permission.
This post specifically builds upon this post on setting up an AWS-integrated Kubernetes 1.15 cluster, but is written for Kubernetes 1.17. As mentioned above, this post will also show you how to manually distribute the certificates that kubeadm generates to other control plane nodes.
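As a rough sketch of what manual distribution involves (the remote hostname and user are placeholders, and the exact file list should be verified against your kubeadm version), the shared certificates and keys generated on the first control plane node get copied to each additional control plane node:
$ sudo scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
    /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub \
    /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.key \
    user@cp-node-2:~/pki/
$ sudo scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key \
    user@cp-node-2:~/pki/etcd/
On the receiving node, the files would then be moved into /etc/kubernetes/pki/ before running kubeadm join.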
What this post won’t cover are the details on some of the prerequisites for making the AWS cloud provider function properly; specifically, this post won’t discuss:
Docker is proud and happy to announce the donation of our cnab-to-oci library to the CNAB project. This project was created last year after Microsoft and Docker moved the CNAB specification to the Linux Foundation’s Joint Development Foundation. At that time, the CNAB specification repository was moved from the deislab GitHub organization to the new cnabio organization. The reference implementations – cnab-go, the Golang library implementation of the specification, and duffle, the CLI reference implementation – were also moved.
Docker helped with the development of the CNAB specification and its reference implementations, and led the work on the cnab-to-oci library for sharing a CNAB bundle using an existing container registry. This library is now used by three CNAB tools (Docker App, Porter, and duffle) as well as by Docker Hub. It successfully demonstrated how to push, pull, and share a CNAB bundle using a registry. This work will be used as a foundation for the future CNAB Registries specification.
The transfer is already in effect, so starting now please refer to github.com/cnabio/cnab-to-oci in your Golang imports.
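For example, updating an import might look like this minimal sketch (the remotes subpackage and the old docker-organization path are assumptions for illustration; adjust to whichever packages you actually use):
package main

// Old import path (pre-transfer, assumed): github.com/docker/cnab-to-oci/remotes
// New import path after the move to the cnabio organization:
import (
    _ "github.com/cnabio/cnab-to-oci/remotes" // blank import just to illustrate the new path
)

func main() {}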
As you may know, Continue reading
On February 10th, the NRE Labs project launched four Ansible Network Automation exercises, made possible by Red Hat and Juniper Networks. This blog post covers the job responsibilities of an NRE, the goal of Juniper’s NRE Labs, and a quick overview of the new exercises and the concepts Red Hat and Juniper are jointly demonstrating. The intended audience for these initial exercises is someone new to Ansible Network Automation with limited experience with Ansible and network automation. The initial network topology for these exercises covers Ansible automating Juniper Junos OS and Cumulus VX virtual network instances.
Juniper has defined an NRE, or network reliability engineer, as someone who can help an organization with modern network automation. This concept goes by many different names, including DevOps for networks, NetDevOps, or simply network automation. Juniper and Red Hat realized that this skill set is new to many traditional network engineers and worked together to create online exercises to help folks get started with Ansible Network Automation. Specifically, Juniper worked with us through NRE Labs, a project they started and co-sponsor that offers a no-strings-attached, community-centered initiative to bring the skills of automation within reach Continue reading
Since its founding, Docker’s mission has been to help developers bring their ideas to life by conquering the complexity of app development. With millions of Docker developers worldwide, Docker is the de facto standard for building and sharing containerized apps.
So what is one source of ideas we use to simplify the lives of developers? It starts with being a company of software developers building products for software developers. One of the more creative ways Docker has been driving innovation internally is through hackathons. These hackathons have proven to be a great platform for Docker employees to showcase their talent and provide unique opportunities for teams across Docker’s business functions to come together. Our employees get to have fun while creating solutions to problems that simplify the lives of Docker developers.
At Docker, our engineers are always looking for ways to improve their own workflows so as to ship quality code faster. Hack Week gives us a chance to explore the boundaries of what’s possible, and the winning ‘hacks’ make their way into our products to benefit our global developer community.
-Scott Johnston, Docker CEO
With that context, let’s break down how Docker runs employee hackathons. Docker is Continue reading
Last Wednesday, the CNCF released the KubeCon Europe 2020 schedule. There are so many talks at KubeCon that it can be daunting just to decide which ones to see! Here are some talks by the team at Docker, plus some others we think will be particularly interesting. Looking forward to seeing you in Amsterdam!
Chris is an engineer in our Paris office and is also co-executive director of the CNAB project. CNAB (Cloud Native Application Bundle) is a specification for bundling up cloud-native applications, which can consist of multiple containers, into a single object that can be pushed to a registry. Open source projects using CNAB, like Docker App or Porter, allow you to package apps that would normally require multiple tools (like Terraform, Helm, and shell scripts) to deploy into a single, tooling-agnostic packaging format. These packages can then be shared using existing container registries and used with other CNAB-compliant tools. This can really simplify cloud-native development.
Did you know that you can store anything into a container registry? Continue reading
In this post, I’m going to explore what’s required in order to build an isolated—or Internet-restricted—Kubernetes cluster on AWS with full AWS cloud provider integration. Here the term “isolated” means “no Internet access.” I initially was using the term “air-gapped,” but these aren’t technically air-gapped so I thought isolated (or Internet-restricted) may be a better descriptor. Either way, the intent of this post is to help guide readers through the process of setting up a Kubernetes cluster on AWS—with full AWS cloud provider integration—using systems that have no Internet access.
At a high level, the process consists of several steps, ending with bootstrapping the cluster using kubeadm. It’s important to note that this guide does not replace my earlier post on setting up an AWS-integrated Kubernetes cluster on AWS (written for 1.15, but valid for 1.16 and 1.17). All the requirements in that post still apply here. If you haven’t read that post or aren’t familiar with the requirements for setting up a Kubernetes cluster with the AWS Continue reading
While many people know about Docker, not that many know its history and where it came from. Docker was started as a project in the dotCloud company, founded by Solomon Hykes, which provided a PaaS solution. The project became so successful that dotCloud renamed itself to Docker, Inc. and focused on Docker as its primary product.
As the “Docker project” grew from being a proof of concept shown off at various meetups and at PyCon in 2013 to a real community project, it needed a website where people could learn about it and download it. This is why the “dockerproject.org” and “dockerproject.com” domains were registered.
With the move from dotCloud to Docker, Inc. and the shift of focus onto the Docker product, it made sense to move everything to the “docker.com” domain. This is where you now find the company website and documentation, and of course the APT and YUM repositories, which have been hosted at download.docker.com since 2017.
On the 31st of March 2020, we will be shutting down the legacy APT and YUM repositories hosted at dockerproject.org and dockerproject.com. These repositories haven’t been updated with the latest releases of Docker and Continue reading
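If you still have the legacy repositories configured, switching to download.docker.com might look something like the following sketch for Ubuntu (the sources-list filename is an assumption; check where your legacy entry actually lives):
$ sudo rm /etc/apt/sources.list.d/docker.list
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
$ sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli containerd.io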
8 billion pulls! Yes, that’s billion with a B! This number represents a little-known level of activity and innovation happening across the community and ecosystem, all in just one average month. How do we know? From the number of pulls and most popular images to top architectures, data from Docker Hub and Docker Desktop provides a window into application development trends in the age of containers.
Today, we are sharing these findings in something we call the Docker Index – a look at developers’ preferences and trends, as told through anonymized data from five million Docker Hub and two million Docker Desktop users, as well as countless other developers engaging with content on Hub.
At Docker, we’re always looking for ways to make life easier for developers. Understanding the what, why and how behind these projects is imperative. As these trends evolve, we will continue to share updates on the findings.
Whether containers will become mainstream is no longer a topic of debate. As the Docker Index data suggests, containers have become a mainstay to how modern, distributed apps are built and shared so they can run anywhere.
Usage is showing no signs of slowing Continue reading
As a Docker Compose maintainer, my daily duty is to check for newly reported issues and try to help users through misunderstandings and possible underlying bugs. Sometimes issues are very well documented; sometimes they are nothing more than a “please help” message. And sometimes they look really weird and can lead to funny investigations. Here is the story of how we solved one such report…
An issue was reported as “docker-compose super slow on macOS Catalina” – no version, no details. How should I prioritize this? I don’t even know if the reporter is using the latest version of the tool – the opened issue doesn’t follow the bug reporting template. This is just a one-liner. But for some reason, I decided to take a look at it anyway and diagnose the issue.
Without any obvious explanation for the super-slowness, I decided to take a risk and upgrade my own MacBook to macOS Catalina. I was able to reproduce a significant slowdown in docker-compose execution, waiting seconds for the very first line to be printed on the console, even just to display usage for an invalid command.
In the meantime, some Continue reading
This is a guest post by Docker Captain Nicholas Dille, a blogger, speaker and author with 15 years of experience in virtualization and automation. He works as a DevOps Engineer at Haufe Group, a digital media company located in Freiburg, Germany. He is also a Microsoft Most Valuable Professional.
In this virtual meetup, I share how to improve image builds using the features in BuildKit. BuildKit is an alternative builder with great features like caching, concurrency and the ability to separate your image build into multiple stages – which is useful for separating the build environment from the runtime environment.
The default builder in Docker is the legacy builder, which is recommended when you need support for Windows. In nearly every other case, however, using BuildKit is recommended because of its faster build times, support for custom BuildKit front-ends, ability to build stages in parallel, and other features.
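As a quick sketch, BuildKit can be enabled per build with an environment variable, or by default in the daemon configuration (the daemon.json path shown assumes Linux):
$ DOCKER_BUILDKIT=1 docker build .
$ cat /etc/docker/daemon.json
{ "features": { "buildkit": true } }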
Catch the full replay below and view the slides to learn:
One of the most productive meetings I had at KubeCon in San Diego last November was a meeting with Docker, Amazon, and Microsoft to plan a collaboration around a new version of the CNCF project Notary. We held the Notary v2 kickoff meeting a few weeks later in Seattle in the Amazon offices.
Emphasising that this is a cross-industry collaboration, we had eighteen people in the room (with more dialed in) from Amazon, Microsoft, Docker, IBM, Google, Red Hat, Sylabs, and JFrog. This represented all the container registry providers and developers, other than the VMware Harbor developers, who unfortunately could not make it in person. (Sadly, we forgot to take a picture of everyone!)
The consensus and community are important because of the aims of Notary v2. But let’s go back a bit as some of you may not know what Notary is and what it is for.
The Notary project was originally started at Docker back in 2015 to provide a general signing Continue reading
In this post, I’d like to show readers how to use Pulumi to create a VPC endpoint on AWS. Until recently, I’d heard of VPC endpoints but hadn’t really taken the time to fully understand what they were or how they might be used. That changed when I was presented with a requirement for the AWS EC2 APIs to be available within a VPC that did not have Internet access. As it turns out—and as many readers are probably already aware—this is one of the key use cases for a VPC endpoint (see the VPC endpoint docs). The sample code I’ll share below shows how to programmatically create a VPC endpoint for use in infrastructure-as-code use cases.
For those who aren’t familiar, Pulumi allows users to apply one of a number of general-purpose programming languages to infrastructure-as-code scenarios. In this example, I’ll be using TypeScript, but Pulumi also supports JavaScript and Python (and Go is in the works). (Side note: I intend to start working with the Go support in Pulumi when it becomes generally available as a means of helping accelerate my own Go learning.)
Here’s a snippet of TypeScript code that Continue reading
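For illustration, a minimal sketch of creating an interface VPC endpoint for the EC2 APIs with Pulumi’s AWS provider might look like this (the region, VPC, subnet, and security group IDs are all placeholders):
import * as aws from "@pulumi/aws";

// Interface endpoint that makes the EC2 APIs reachable from inside
// the VPC without requiring Internet access
const ec2Endpoint = new aws.ec2.VpcEndpoint("ec2-endpoint", {
    vpcId: "vpc-0123456789abcdef0",
    serviceName: "com.amazonaws.us-west-2.ec2",
    vpcEndpointType: "Interface",
    subnetIds: ["subnet-0123456789abcdef0"],
    securityGroupIds: ["sg-0123456789abcdef0"],
    privateDnsEnabled: true,
});

export const endpointId = ec2Endpoint.id;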
I recently had a need to manually load some container images into a Linux system running containerd (instead of Docker) as the container runtime. I say “manually load some images” because this system was isolated from the Internet, and so simply running a container and having containerd automatically pull the image from an image registry wasn’t going to work. The process for working around the lack of Internet access isn’t difficult, but didn’t seem to be documented anywhere that I could readily find using a general web search. I thought publishing it here may help individuals seeking this information in the future.
For an administrator/operations-minded user, the primary means of interacting with containerd is via the ctr command-line tool. This tool uses a command syntax very similar to Docker, so users familiar with Docker should be able to be productive with ctr pretty easily.
In my specific example, I had a bastion host with Internet access, and a couple of hosts behind the bastion that did not have Internet access. It was the hosts behind the bastion that needed the container images preloaded. So, I used the ctr tool to fetch and prepare the images on the bastion, then Continue reading
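As a rough sketch of the workflow (the nginx image is just an example), the image is pulled and exported on the bastion, then imported on the isolated host:
$ ctr image pull docker.io/library/nginx:latest
$ ctr image export nginx.tar docker.io/library/nginx:latest
# ...copy nginx.tar to the isolated host, then on that host:
$ ctr image import nginx.tar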
In July of 2018 I talked about Polyglot, a very simple project I’d launched whose only purpose was to bolster my software development skills. Work on Polyglot has been sporadic at best, coming in fits and spurts, and thus far focused on building a model for the APIs that would be found in the project. Since I am not a software engineer by training and have no formal education in software development, all of this is new to me, and I’ve found myself encountering lots of questions about API design along the way. In the interest of helping others who may be in a similar situation, I thought I’d share a bit here.
I initially approached the API in terms of how I would encode (serialize?) data on the wire using JSON (I’d decided on using a RESTful API with JSON over HTTP). Starting with how I anticipated storing the data in the back-end database, I created a representation of how a customer’s information would be encoded (serialized) in JSON:
{
"customers": [
{
"customerID": "5678",
"streetAddress": "123 Main Street",
"unitNumber": "Suite 123",
"city": "Anywhere",
"state": "CO",
"postalCode": "80108",
"telephone": "3035551212",
"primaryContactFirstName": "Scott",
"primaryContactLastName": "Lowe"
}
]
Continue reading
One of the most requested features for the docker-compose tool is definitely support for building using BuildKit, an alternative builder with great capabilities like caching, concurrency, and the ability to use custom BuildKit front-ends, just to mention a few… Ahh, and with a nice blue output! And the good news is that Docker Compose 1.25.1 – which was just released in early January – includes BuildKit support!
BuildKit support for Docker Compose is actually achieved by redirecting the docker-compose build to the Docker CLI with a limited feature set.
To enable this, we have to align some stars.
First, it requires that the Docker CLI binary be present in your PATH:
$ which docker
/usr/local/bin/docker
Second, docker-compose has to be run with the environment variable COMPOSE_DOCKER_CLI_BUILD set to 1, as in:
$ COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build
This instruction tells docker-compose to use the Docker CLI when executing a build. You should see the same build output, but starting with the experimental warning.
As docker-compose passes its environment variables to the Docker CLI, we can also tell the CLI to use BuildKit instead of the default builder. To accomplish that, we can execute this:
$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build
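To avoid prefixing every invocation, both variables can also be exported once in the shell session (or in your shell profile):
$ export COMPOSE_DOCKER_CLI_BUILD=1
$ export DOCKER_BUILDKIT=1
$ docker-compose build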
We are excited to announce that we released a new Docker Desktop version today! Thanks to user feedback on the new features initially released in the Edge channel, we are now ready to publish them to Stable.
Before getting to each feature into detail, let’s see what’s new in Docker Desktop 2.2:
Back in July, we released on Edge the technical preview of Docker Desktop for WSL 2, where we included an experimental integration of Docker running on an existing user Linux distribution. We learned from our experience and re-architected our solution (covered in Simon’s blog).
This new architecture for WSL 2 allows users to: