Last week, during the Docker Community All-Hands, we announced the availability of a developer preview build of Docker Desktop for Macs running on M1, through the Docker Developer Preview Program. We already have more than 1,000 people testing these builds as of today. If you’re interested in joining the program for future releases, sign up today!
As I’m sure you know by now, Apple has recently shipped the first Macs based on the new Apple M1 chips. Last month my colleague Ben shared our roadmap for building a Docker Desktop that runs on this new hardware. And I’m delighted to tell you that today we have a public preview that you can download and try out.
Like many of you, we at Docker have been super excited to receive and code with these new computers: they just feel so fast! We also know that Docker Desktop is a key part of the development cycle for over 3M developers using Docker Desktop with over half of you on Macs. To support all our Mac users we’ve been working hard to get Docker Desktop ready to run on the new M1 hardware. It is not release quality yet, or Continue reading
Welcome to Technology Short Take #135! This will likely be the last Technology Short Take of 2020, so it’s a tad longer than usual. Sorry about that! You know me—I just want to make sure everyone has plenty of technical content to read during the holidays. And speaking of holidays…whatever holidays you do (or don’t) celebrate, I hope that the rest of the year is a good one for you. Now, on to the content!
One linked article dissects kube-proxy, a key part of Kubernetes networking, to expose its internals, and along the way exposes readers to a few different technologies. This is a good read if you’re trying to better understand some aspects of Kubernetes networking. Another covers the use of curl and jq when working with networking-related APIs.
Today, with the release of Docker Desktop 3.0.0, we’re launching several major improvements to the way we distribute Docker Desktop. From now on we will be providing all updates as deltas from the previous version, which will reduce the size of a typical update from hundreds of MB to tens of MB. We will also download the update in the background, so that all you need to do to benefit from it is restart Docker Desktop. Finally, we are removing the Stable and Edge channels and moving to a single release stream for all users.
Many of you have given us feedback that our updates are too large and too time-consuming to download and install. Until now, we only provided complete installers, which meant that each update required downloading a file of several hundred MB. From now on, we will provide all updates as deltas from the previous version, which will typically be only tens of MB per release.
We have also heard that updates are offered at an inconvenient time, when you launch Docker Desktop or when you reboot your machine, which are times that you want Continue reading
Back in April, we did a limited launch of the Desktop Developer Preview Program, an early access program set up to enable Docker power users to test, experiment with, and provide feedback on new, unreleased features of Docker Desktop. The aim was to empower the community to work in lock-step with Docker engineers and help shape our product roadmap.
For this first phase of the program, we limited the program to a small cohort of community members to test the waters and gather learnings, as we planned to roll out a full-fledged program later in the year.
Today, we’re thrilled to announce the official launch of the program, renaming it the Docker Developer Preview Program and broadening its scope to also include Docker Engine on Linux.
First and foremost, this is an opportunity for anyone in the community to help shape and improve the experience of millions of Docker users around the world. As a member, you get direct access to the people who are building our products every day: our engineering team, product managers, community leads, and more. Through the program’s private Slack channel, you get to share your feedback, tell us Continue reading
Hello and welcome to another introductory Ansible blog post, where we'll be covering a new command-line interface (CLI) tool, Ansible Builder. Please note that this article will cover some intermediate-level topics such as containers (Ansible Builder uses Podman by default), virtual environments, and Ansible Content Collections. If you have some familiarity with those topics, then read on to find out what Ansible Builder is, why it was developed, and how to use it.
This project is currently in development upstream on GitHub and is not yet part of the Red Hat Ansible Automation Platform product. As with all Red Hat software, our code is open and we have an open source development model for our enterprise software. The goal of this blog post is to show the current status of this initiative, and start getting the community and customers comfortable with our methodologies, thought process, and concept of Execution Environments. Feedback on this upstream project can be provided on GitHub via comments and issues, or provided via the various methods listed on our website. There is also a great talk on AnsibleFest.com, titled “Creating and Using Ansible Execution Environments,” available on-demand, which Continue reading
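To make the idea of an Execution Environment more concrete, here is a minimal sketch of the kind of definition file Ansible Builder consumes. This is illustrative only, based on the upstream project; the referenced requirements files are placeholders you would supply yourself:

```yaml
# execution-environment.yml -- input definition for ansible-builder (sketch)
---
version: 1
dependencies:
  galaxy: requirements.yml   # Ansible Content Collections to install
  python: requirements.txt   # Python package dependencies
  system: bindep.txt         # system-level (RPM/apt) dependencies
```

Running something like `ansible-builder build --tag my_ee` against this file would then assemble a container image (using Podman by default) with those dependencies baked in.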
We are pleased to announce that we have completed the next major release of Docker Engine: 20.10. This release continues Docker’s investment in our community Engine, adding multiple new features including support for cgroups v2, and moving multiple features out of experimental, including docker run --mount and rootless mode, along with a ton of other improvements to the API, client, and build experience. The full list of changes can be found in our change log.
Docker Engine is the underlying tooling/client that enables users to easily build, manage, share, and run their container objects on Linux. Docker Engine is made up of three core components: a long-running daemon process (dockerd), APIs that programs can use to talk to and instruct the daemon, and the docker command-line client.
For those who are curious about the recent questions about Docker Engine/K8s, please have a look at Dieu’s blog to learn more.
Along with this I want to give a huge thank you to everyone in the community and all of our maintainers who have also contributed towards this Engine release. Without their contribution, hard work and support we Continue reading
At Docker, we are committed to building a platform that enables collaboration and innovation within the development community. Last month, we announced the launch of a special program to expand our support for open source projects that use Docker. Eligible projects that meet the program’s requirements (i.e., they must be open source and non-commercial) can request to have their respective OSS namespaces whitelisted and see their data-storage and data-egress restrictions lifted.
The projects we’re supporting, and the organizations behind them, are as diverse as they are numerous, ranging from independent researchers developing frameworks for machine learning, to academic consortia collecting environmental data, to human rights NGOs building encryption tools. To date, we’re thrilled to see that more than 80 non-profit organizations, large and small, from the four corners of the world have joined the program.
Here are but a few diverse projects we’re supporting:
The Vaccine Impact Modelling Consortium aims to deliver a more sustainable, efficient, and transparent approach to generating disease-burden and vaccine-impact estimates.
farmOS is a web-based application for farm management, planning, and record keeping. It is developed by a community of farmers, developers, researchers, and organizations with the aim of providing a standard Continue reading
Here you will learn about NetBox at a high level, how it works to become a Source of Truth (SoT), and look into the use of the Ansible Content Collection, which is available on Ansible Galaxy. The goal is to show some of the capabilities that make NetBox a terrific tool and why you will want to use NetBox as your network Source of Truth for automation!
Why a Source of Truth? The Source of Truth is where you go to get the intended state of a device. There does not need to be a single Source of Truth for everything, but you should have a single Source of Truth per data domain, often referred to as the System of Record (SoR). For example, if you have a database that maintains your physical sites and is used by teams outside of the IT domain, it should be the Source of Truth on physical sites. You can aggregate the data from the physical-site Source of Truth into other data sources for automation. Just be aware that when it comes time to collect that data, it should come from its authoritative Source of Truth.
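As a small illustration of consuming NetBox as a Source of Truth, the Collection ships a dynamic inventory plugin that renders device data from NetBox as Ansible inventory. A minimal configuration might look like this (the endpoint URL is a made-up placeholder; the netbox.netbox Collection is assumed to be installed from Ansible Galaxy):

```yaml
# netbox_inventory.yml -- dynamic inventory sourced from NetBox (sketch)
plugin: netbox.netbox.nb_inventory
api_endpoint: https://netbox.example.com   # placeholder; point at your NetBox
# the API token can also be supplied via the NETBOX_TOKEN environment variable
group_by:
  - device_roles
  - sites
```

Running `ansible-inventory -i netbox_inventory.yml --list` would then pull the intended state straight from NetBox, grouped by device role and site.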
The first step in creating a network automation Continue reading
The latest version of Kubernetes, v1.20.0-rc.0, is now available. The Kubernetes project plans to deprecate Docker Engine support in the kubelet, and support for dockershim will be removed in a future release, probably late next year. The net/net: support for container images built with Docker tools is not being deprecated, and they will still work as before.
Even better news, however, is that Mirantis and Docker have agreed to partner to maintain the shim code standalone outside Kubernetes, as a conformant CRI interface for Docker Engine. We will start with the great initial prototype from Dims at https://github.com/dims/cri-dockerd and continue to make it available as an open source project at https://github.com/Mirantis/cri-dockerd. This means that you can continue to build Kubernetes based on Docker Engine as before, just switching from the built-in dockershim to the external one. Docker and Mirantis will work together to make sure it continues to work as well as before, passes all the conformance tests, and works just like the built-in version did. Docker will continue to ship this shim in Docker Desktop, as this gives a great developer experience, and Mirantis will be Continue reading
Openness and transparency are key pillars of a healthy open source community. We’re constantly exploring ways to better engage the Docker community, to better incorporate feedback and to better foster participation.
To this end, we’re very excited to host our first Community All-Hands on Thursday December 10th at 8am PST / 5pm CET. This one-hour event will be a unique opportunity for Docker staff and the broader Docker community to come together for company and product updates, live demos, community shout-outs and a Q&A.
The All-Hands will include updates from:
We’ll then dive into specific product updates around Docker Desktop, Hub and Developer Tooling, followed by two awesome live demos where we’ll show cool new features and integrations.
A Community All-Hands is not complete without a community update. We will announce new community initiatives and recognize outstanding contributors who have gone above and beyond to help push Docker Continue reading
Cluster API is, if you’re not already familiar, an effort to bring declarative Kubernetes-style APIs to Kubernetes cluster lifecycle management. (I encourage you to check out my introduction to Cluster API post if you’re new to Cluster API.) Given that it is using Kubernetes-style APIs to manage Kubernetes clusters, there must be a management cluster with the Cluster API components installed. But how does one establish that management cluster? This is a question I’ve seen pop up several times in the Kubernetes Slack community. In this post, I’ll walk you through one way of bootstrapping a Cluster API management cluster.
The process I’ll describe in this post is also described in the upstream Cluster API documentation (see the “Bootstrap & Pivot” section of this page).
At a high level, the process (following the upstream “Bootstrap & Pivot” flow) looks like this:
1. Create a temporary bootstrap cluster (kind works well here).
2. Initialize the bootstrap cluster with the Cluster API components.
3. Use the bootstrap cluster to create the cluster that will become the permanent management cluster.
4. Initialize the new cluster with the Cluster API components and move the Cluster API resources over to it.
5. Decommission the temporary bootstrap cluster.
The following sections describe each of these steps in a bit more detail.
The first step is Continue reading
For the last three years, the site has been largely unchanged with regard to structure and overall function, even as I continued to work to provide quality technical content. However, time was beginning to take its toll, and some “under the hood” work was needed. Over the Thanksgiving holiday, I spent some time updating the site, and there are a few changes I wanted to mention.
This is the busiest time of the year for developers targeting AWS. Just over a week ago we announced the GA of Docker Compose for AWS, and this week we’re getting ready to virtually attend AWS re:Invent. re:Invent is the annual gathering of the entire AWS community and ecosystem to learn what’s new, get the latest tips and tricks, and connect with peers from around the world. Instead of the traditional week-long gathering of 60,000 attendees in Las Vegas, the event has pivoted to a flexible three-week online conference. This year the event is free, and anyone can participate on their own schedule. This blog post covers highlights of the event so Docker developers can get the most from re:Invent.
In the kickoff keynote by CEO Andy Jassy, AWS announced a number of new features for container developers, including ECS Anywhere, a new capability that allows Amazon Elastic Container Service (ECS) to run on-prem as well as in the cloud to support hybrid computing workloads, and AWS Proton, an end-to-end pipeline to deliver containerized and microservices applications. Separately, AWS also announced a new public Elastic Container Registry (ECR) and gallery. We’re excited to see a Continue reading
AWS re:Invent kicks off this week and if you are anything like us, we are super geeked out to watch and attend all the talks that are lined up for the next three weeks.
To get ready for re:Invent, we’ve gathered some of our best resources and expert guidance to get the most out of the Docker platform when building apps for AWS. Check out these blogs, webinars and DockTalks from the past few weeks to augment your re:Invent experience over the next three weeks:
Given that Kubernetes is a primary focus of my day-to-day work, I spend a fair amount of time in the Kubernetes Slack community, trying to answer questions from users and generally be helpful. Recently, someone asked about assigning node labels while bootstrapping a cluster with kubeadm. I answered the question, but afterward started thinking that it might be a good idea to also share that same information via a blog post, my thinking being that others who had the same question aren’t likely to find my answer on Slack, but would be more likely to find a published blog post. So, in this post, I’ll show how to assign node labels while bootstrapping a Kubernetes cluster.
The “TL;DR” is that you can use the kubeletExtraArgs field in a kubeadm configuration file to pass the node-labels flag to the kubelet, which would allow you to assign node labels when kubeadm bootstraps the node. Read on for more details.
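For illustration, a kubeadm configuration file using this field might look like the following sketch (the label keys and values are made up for the example; the API version shown matches the v1beta2 kubeadm config of current releases):

```yaml
# kubeadm-config.yaml -- assigns node labels at bootstrap time (sketch)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-labels: "node-type=example,environment=test"
```

Passing this file via `kubeadm init --config kubeadm-config.yaml` (or the JoinConfiguration equivalent when joining worker nodes) should result in the kubelet registering the node with those labels.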
kind is a great tool for testing this sort of configuration, since kind uses kubeadm to bootstrap its nodes. If you aren’t familiar with kind, I encourage you to visit the kind website; in Continue reading
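Since kind uses kubeadm under the hood, the same idea can be tried out with a kind cluster configuration along these lines (a sketch; the label is illustrative):

```yaml
# kind-config.yaml -- passes kubeletExtraArgs through a kubeadm patch (sketch)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "node-type=example"
```

Running `kind create cluster --config kind-config.yaml` should then bring up a node carrying that label, which you can confirm with `kubectl get nodes --show-labels`.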
Cluster API is a topic I’ve discussed here in a number of posts. If you’re not already familiar with Cluster API (also known as CAPI), I’d encourage you to check out my introductory post on Cluster API first; you can also visit the official Cluster API site for more details. In this short post, I’m going to show you how to pause the reconciliation of Cluster API cluster objects, a task that may be necessary for a variety of reasons (including backing up the Cluster API objects in your management cluster).
Since CAPI leverages Kubernetes-style APIs to manage Kubernetes cluster lifecycle, the idea of reconciliation is very important—it’s a core Kubernetes concept that isn’t at all specific to CAPI. This article on level triggering and reconciliation in Kubernetes is a great article that helps explain reconciliation, as well as a lot of other key concepts about how Kubernetes works.
When reconciliation is active, the controllers involved in CAPI are constantly evaluating desired state and actual state, and then reconciling differences between the two. There may be times when you need to pause this reconciliation loop. Fortunately, CAPI makes this pretty easy: there is a paused field that allows users Continue reading
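As a sketch, the paused field lives in the Cluster object’s spec; setting it to true tells the CAPI controllers to skip reconciling that cluster (the cluster name is illustrative, and the API version shown reflects the current v1alpha3 release):

```yaml
# cluster.yaml -- a Cluster API Cluster object with reconciliation paused (sketch)
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  paused: true   # CAPI controllers will not reconcile this cluster while true
```

The same change can be made in place with something like `kubectl patch cluster my-cluster --type merge -p '{"spec":{"paused": true}}'`, and reversed by setting the field back to false.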
Welcome to Technology Short Take #134! I’m publishing a bit early this time due to the Thanksgiving holiday in the US. So, for all my US readers, here’s some content to peruse while enjoying some turkey (or whatever you’re having this year). For my international readers, here’s some content to peruse while enjoying dramatically lower volumes of e-mail because the US is on holiday. See, something for everyone!
As Red Hat Ansible Automation Platform expands its footprint with a growing customer base, security continues to be an important aspect of organizations’ overall strategy. Red Hat regularly reviews and enhances the foundational codebase to follow better security practices. As part of this effort, we are introducing FIPS 140-2 readiness enablement by means of a newly developed Ansible SSH connection plugin that uses the libssh library.
Since most network appliances don’t support, or have limited capability for, the local execution of third-party software, the Ansible network modules are not copied to the remote host as they are for Linux hosts; instead, they run on the control node itself. Hence, Ansible networking can’t use the typical Ansible SSH connection plugin that is used with Linux hosts. Furthermore, because of this behavior, the performance of the underlying SSH subsystem is critical. Not only does the new libssh connection plugin enable FIPS readiness, but it was also designed to be more performant than the existing Paramiko SSH subsystem.
The top-level network_cli connection plugin, provided by the ansible.netcommon Collection (specifically ansible.netcommon.network_cli), provides an SSH-based connection to the network appliance. It in turn calls the Continue reading
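As a sketch, switching network_cli from Paramiko to the libssh transport is a matter of setting a connection variable (the platform shown is illustrative; the ansible.netcommon Collection and the libssh Python bindings are assumed to be installed):

```yaml
# group_vars/network.yml -- use libssh instead of Paramiko for network_cli (sketch)
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios        # illustrative network platform
ansible_network_cli_ssh_type: libssh     # the default SSH subsystem is paramiko
```

With these variables in place, subsequent plays against hosts in that group should negotiate their SSH sessions through libssh rather than Paramiko.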
Today, we are thrilled to announce that Canonical will distribute its free and commercial software through Docker Hub as a Docker Verified Publisher. Canonical and Docker will partner together to ensure that hardened free and commercial Ubuntu images will be available to all developer software supply chains for multi-cloud app development.
Canonical is the publisher of the Ubuntu OS, and a global provider of enterprise open source software, for all use cases from cloud to IoT. Canonical Ubuntu is one of the most popular Docker Official Images on Docker Hub, with over one billion images pulled. With Canonical as a Docker Verified Publisher, developers who pull Ubuntu images from Docker Hub can be confident they get the latest images backed by both Canonical and Docker.
Canonical is the latest publisher to choose Docker Hub for globally sharing its container images. With millions of users, Docker Hub is the world’s largest container registry, ensuring Canonical can reach its developers regardless of where they build and deploy their applications.
This partnership covers both free and commercial Canonical LTS images, so developers can confidently pull the latest images straight from the source without concern Continue reading