Docker Captains are select members of the community who are both experts in their fields and passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Elton Stoneman, who has been a Docker Captain since 2016. He is a Container Consultant and Trainer based in Gloucestershire, United Kingdom.
I was consulting as an API Architect, building out the backend services for a new Android device. My role was all about .NET services running in Azure, but we worked as a single team – and the people working on the operating system were using Docker to simplify their build tools.
I started looking into their setup and I was just stunned at how you could run complex software with a single Docker command – and have it run the same way on any machine. That was way back in 2014, Continue reading
Comparing the current operational state of your IT infrastructure to your desired state is a common use case for IT automation. This allows automation users to identify drift or problem scenarios to take corrective actions and even proactively identify and solve problems. This blog post will walk through the automation workflow for validation of operational state and even automatic remediation of issues.
We will demonstrate how the Red Hat supported and certified Ansible content can be used to:
The recently released ansible.utils version 1.0.0 Collection has added support for the ansible.utils.cli_parse module, which converts text data into structured JSON format. The module can either execute the command on the remote endpoint and fetch the text response, or Continue reading
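As a sketch of the flow described above, the following minimal play runs a command and parses the response into structured facts; the host group, parser engine, and fact name are illustrative assumptions, not taken from the post:

```yaml
- name: Parse "show interface" output into structured facts
  hosts: nxos
  gather_facts: false
  tasks:
    - name: Run the command on the device and parse the text response
      ansible.utils.cli_parse:
        command: show interface
        parser:
          name: ansible.netcommon.pyats
        set_fact: interface_facts

    - name: Work with the structured JSON data
      ansible.builtin.debug:
        var: interface_facts
```

The same module can also parse text you already have (via its text option) instead of executing a command itself.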
Welcome to Technology Short Take #136, the first Short Take of 2021! The content this time around seems to be a bit more security-focused, but I’ve still managed to include a few links in other areas. Here’s hoping you find something useful!
The year 2020 will go down in the history books for so many reasons. For Docker, despite the challenges of our November 2019 restructuring, we were fortunate to see 70% growth in activity from our 11.3 million monthly active users sharing 7.9 million apps pulled 13.6 billion times per month. Thank you, Docker team, community, customers, and partners!
But with 2020 behind us it’s natural to ask, “What’s next?” Here in the second week of January, we couldn’t be more excited about 2021. Why? Because the step-function shift from offline to online of every dimension of human activity brought about by the global pandemic is accelerating opportunities and challenges for development teams. What are the key trends relevant to development teams in 2021? Here are our top picks:
The New Normal: Open, Distributed Collaboration
While already a familiar teamwork model for many open source projects and Internet companies, the global pandemic seemingly overnight drove all software development teams to adopt new ways of working together. In fact, our 2020 survey of thousands of Docker developers about their ways of working found that 51% prefer to work mostly remote and only sometimes in an office if/when given Continue reading
The Red Hat Ansible Network Automation engineering team is continually adding new resource modules to its supported network platforms. Ansible Network Automation resource modules are opinionated network modules that make network automation easier to manage and more consistent for those automating various network platforms in production. The goal for resource modules is to spare users from creating and maintaining overly complex Jinja2 templates for rendering and pushing network configuration, and from maintaining complex fact-gathering and parsing methodologies. For this blog post, we will cover standard return values that are the same across all supported network platforms (e.g. Arista EOS, Cisco IOS, NXOS, IOS-XR, and Juniper Junos) and all resource modules.
Before we get started, I wanted to call out three previous blog posts covering resource modules. If you are unfamiliar with resource modules, check any of these out:
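To illustrate what those standard return values look like in practice, here is a minimal play against a hypothetical Arista EOS device (the host group and VLAN values are placeholders); resource modules consistently report before, after, and commands keys in their results:

```yaml
- name: Merge VLAN configuration and inspect standard return values
  hosts: eos
  gather_facts: false
  tasks:
    - name: Configure VLANs with a resource module
      arista.eos.eos_vlans:
        config:
          - vlan_id: 100
            name: blue
        state: merged
      register: result

    - name: Show what changed on the device
      ansible.builtin.debug:
        msg:
          - "before: {{ result.before }}"
          - "after: {{ result.after }}"
          - "commands: {{ result.commands }}"
```

The before/after pair makes it easy to audit configuration drift, since both are structured data rather than raw device output.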
Cluster API (also known as CAPI) is, as you may already know, an effort within the upstream Kubernetes community to apply Kubernetes-style APIs to cluster lifecycle management—in short, to use Kubernetes to manage the lifecycle of Kubernetes clusters. If you’re unfamiliar with CAPI, I’d encourage you to check out my introduction to Cluster API before proceeding. In this post, I’m going to show you how to use Velero (formerly Heptio Ark) to backup and restore Cluster API objects so as to protect your organization against an unrecoverable issue on your Cluster API management cluster.
To be honest, this process is so straightforward it almost doesn’t need to be explained. In general, the process for backing up the CAPI management cluster looks like this:
In the event of catastrophic failure, the recovery process looks like this:
Let’s look at these steps in a bit more detail.
The process for pausing and resuming reconciliation of CAPI resources is outlined in this separate blog post. To summarize that post here for convenience, the Cluster Continue reading
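In rough outline, the pause, backup, resume flow might look like the following sketch; the cluster name, namespace, and backup name are placeholders, not from the post:

```shell
# Pause reconciliation of the workload cluster by setting spec.paused
# on its Cluster object in the management cluster.
kubectl patch cluster workload-1 --type merge -p '{"spec":{"paused":true}}'

# Take a Velero backup of the namespace holding the CAPI objects.
velero backup create capi-backup --include-namespaces capi-workload-ns

# Once the backup completes, resume reconciliation.
kubectl patch cluster workload-1 --type merge -p '{"spec":{"paused":false}}'
```

Restoring after a catastrophic failure is then a matter of running velero restore against that backup on a rebuilt management cluster before unpausing.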
As the calendar leaves 2020 in the rear view mirror, we’re looking forward to the year ahead. And as part of this, today we’re announcing the dates for DockerCon Live 2021. DockerCon Live will take place on May 27th, 2021. Sign up here to pre-register for the event!
Once again, DockerCon Live will be a free, online experience full of demos of products and innovation from Docker and our partners. You’ll get deep technical sessions from Docker experts, Docker Captains and luminaries from across the industry, along with a chance for the community to gather and connect with colleagues around the world.
Last year, DockerCon Live 2020 was one of the largest events in the app dev industry: over 80,000 developers from 193 countries registered to hear over 50 sessions focused on best practices, real-world techniques and how-to instruction for building containerized cloud-native solutions with Docker. Speakers joined from companies such as AWS, Google, Microsoft, Salesforce, Nginx, Snyk, Datadog and LaunchDarkly, among others.
For 2021, we are building on this format with a couple of new features including full day pre-conference technical workshops, additional content and more community activities. And there will be some surprises for everyone involved. If you Continue reading
Over the holiday break I made some time to work on my desk layout, something I’d been wanting to do for quite a while. I’d been wanting to “up my game,” so to speak, with regard to producing more content, including some video content. Inspired by—and heavily borrowing from—this YouTube video, I decided I wanted to create a similar arrangement for my desk. In this post, I’ll share more details on my setup.
I’ll start with the parts list, which contains links to everything I’m using in this new arrangement.
When I shared a picture of the desk layout on Twitter, a number of folks expressed interest in the various components that I used. To make it easier for others who may be interested in replicating their own variation of this setup, here are Amazon links for all the parts I used to build this setup (these are not affiliate links):
It’s end-of-year round-up time! The first post in this series covered the #10 through #6 most viewed Docker blog posts. If you were wondering what the #1 most viewed blog post of the year was, then keep reading. The suspense will soon be over…
5) How to Develop Inside a Container Using Visual Studio Code Remote Containers
VS Code is another beloved tool. This guest post from Docker Community Leader Jochen Zehnder included some handy tricks for the Visual Studio Code Remote Containers extension that allows you to develop inside a container.
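For illustration, a minimal .devcontainer/devcontainer.json might look like the following; the image, extension ID, port, and command are hypothetical examples, not taken from the post:

```json
{
  "name": "Python dev container",
  "image": "python:3.9",
  "extensions": ["ms-python.python"],
  "forwardPorts": [8000],
  "postCreateCommand": "pip install -r requirements.txt"
}
```

With a file like this in place, the Remote Containers extension can reopen the project inside the container, with the listed extensions installed there.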
4) How to deploy on remote Docker hosts with docker-compose
There was some solid Compose momentum this year. This how-to post showed how to access remote Docker hosts via the SSH and TCP protocols, aiming to cover a large number of use cases.
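As a hedged sketch of the idea (the host name and user are placeholders), pointing Compose at a remote host over SSH can be as simple as:

```shell
# Per-invocation, via the DOCKER_HOST environment variable:
DOCKER_HOST="ssh://user@remote-host" docker-compose up -d

# Or persistently, via a Docker context:
docker context create remote --docker "host=ssh://user@remote-host"
docker context use remote
docker-compose up -d
```

Either way, the Compose file stays unchanged; only the endpoint the Docker CLI talks to moves.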
3) How To Use the Official NGINX Docker Image
NGINX is super popular, so naturally this tutorial was too. It took a look at the NGINX Official Docker Image and how to use it.
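A typical first use of the official image, with placeholder paths and ports, might look like:

```shell
# Serve static content from a local directory on port 8080,
# mounted read-only into the image's default web root.
docker run --name my-nginx -d -p 8080:80 \
  -v "$(pwd)/site-content:/usr/share/nginx/html:ro" nginx
```

For a self-contained image, the same content can instead be baked in with a two-line Dockerfile (FROM nginx, then COPY the content into /usr/share/nginx/html).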
2) Containerized Python Development – Part 1
This post contained tips on how to containerize a Python service or tool and the best practices for doing so. Fun Continue reading
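As an illustrative sketch of the pattern (the file names are placeholders, not from the post), a simple Dockerfile for a Python service could be:

```dockerfile
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached
# between code-only changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code last.
COPY . .

CMD ["python", "server.py"]
```

Ordering the dependency install before the code copy is the key caching trick: rebuilding after a code change skips the pip step entirely.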
Today, we’re interviewing Gianluca Arbezzano, who has been a Docker Captain since 2016. He is a Senior Software Staff Engineer at Equinix Metal and is based in Italy.
How/when did you first discover Docker?
At this point, it is not easy to pick a date. Four years ago I was in Dublin, away from my home town near Turin. The Docker Meetup, along with many other meetups, was a great opportunity for nerds like me looking for new friends and to grab free pizza while having a good time. Back then I was working for a company that helped businesses move to the cloud, and I saw that Docker was a powerful tool to master. Not only was Docker a useful tool and led me Continue reading
2020 was some type of year…as we wrap up a year that undoubtedly will never be forgotten, we rounded up the most viewed Docker blog posts. The following posts are some of what you, the Docker community, found to be most interesting and useful. Which was your favorite?
10) Announcing the Compose Specification
Starting the list with a *bang* is a post highlighting that we created a new open community to develop the Compose Specification. This new community is run with open governance and with input from all interested parties, allowing us together to create a new standard for defining multi-container apps that can be run from the desktop to the cloud.
9) Advanced Dockerfiles: Faster Builds and Smaller Images Using BuildKit and Multistage Builds
This post showed some more advanced patterns that go beyond copying files between a build and a runtime stage, allowing one to get the most out of the multistage build feature. Who doesn’t want more efficient multistage Dockerfiles?
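For illustration, a minimal multistage Dockerfile of the kind the post discusses might look like this (the Go program and paths are hypothetical); with BuildKit enabled (DOCKER_BUILDKIT=1), only the stages the final image actually needs are built:

```dockerfile
# syntax=docker/dockerfile:1

# Build stage: full toolchain image.
FROM golang:1.15 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Runtime stage: ship only the compiled binary.
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

The result is a tiny final image that contains none of the build tools, sources, or intermediate artifacts.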
8) Containerized Python Development – Part 2
The second in a series, this post discussed how to set up and wire other components to a containerized Python service. It showed a good way to Continue reading
This is a guest post from Viktor Petersson, CEO of Screenly.io. Screenly is the most popular digital signage product for the Raspberry Pi. Find Viktor on Twitter @vpetersson.
For those not familiar with Qt, it is a cross-platform development framework that is used in a wide range of products, including cars (Tesla), digital signs (Screenly), and airplanes (Lufthansa). Needless to say, Qt is very powerful. One thing you cannot say about the Qt framework, however, is that it is easy to compile — at least for embedded devices. The countless blog posts, forum threads, and Stack Overflow posts on the topic reveal that compiling Qt is a common headache.
As long-term Qt users, we have had our fair share of battles with it at Screenly. We migrated to Qt for our commercial digital signage software a number of years ago, and since then, we have been very happy with both its performance and flexibility. Recently, we decided to migrate our open source digital signage software (Screenly OSE) to Qt as well. Since these projects share no code base, this was a greenfield opportunity that allowed us to start afresh and explore Continue reading
Recently our CEO Scott Johnston took a look back on all that Docker had achieved one year after selling the Enterprise business to Mirantis and refocusing solely on developers. We made significant investments to deliver value-enhancing features for developers, completed strategic collaborations with key ecosystem partners and doubled down on engaging our user community, resulting in a 70% year-over-year increase in Docker usage.
Even though we are winding down the calendar year, you wouldn’t know it based on the pace at which our engineering and product teams have been cranking out new features and tools for cloud-native development. In this post, I’ll add some context around all the goodness that we’ve released recently.
Recall that our strategy is to deliver simplicity, velocity and choice for dev teams going from code to cloud with Docker’s collaborative application development platform. Our latest releases, including Docker Desktop 3.0 and Docker Engine 20.10, accelerate the build, share, and run process for developers and teams.
Higher Velocity Docker Desktop Releases
With the release of Docker Desktop 3.0.0, we are totally changing the way we distribute Docker Desktop to developers. These changes allow for smaller, faster Docker Desktop Continue reading
At Microsoft Build in the first half of the year, Microsoft demonstrated some awesome new capabilities and improvements that were coming to Windows Subsystem for Linux 2 including the ability to share the host machine’s GPU with WSL 2 processes. Then in June Craig Loewen from Microsoft announced that developers working on the Windows insider ring machines could now make use of GPU for the Linux workloads. This support for NVIDIA CUDA enabled developers and data scientists to use their local Windows machines for inner-loop development and experimentation.
Last week, during the Docker Community All Hands, we announced the availability of a developer preview build of Docker Desktop for WSL 2 supporting GPU for our Developer Preview Program. More than 1,000 people have already joined us to help test preview builds of Docker Desktop for Windows (and Mac!). If you’re interested in joining the program for future releases, you should do so now!
Today we are excited to announce the general preview of Docker Desktop support for GPU with Docker in WSL2. There are over one and a half million users of Docker Desktop for Windows today and we saw in our roadmap how excited you Continue reading
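A quick way to verify GPU passthrough from a WSL 2 container (the image tag is illustrative) is to run nvidia-smi inside a CUDA image; the --gpus flag exposes the host GPU to the container:

```shell
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

If passthrough is working, the output lists the host GPU exactly as it would on the Windows side.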
Today we are excited to announce the release of a new experimental Docker Hub CLI tool, the hub-tool, and we would love to get your feedback on it. The new Hub CLI tool lets you explore, inspect and manage your content on Docker Hub, as well as work with your teams and manage your account.
The new tool is available as of today for Docker Desktop for Mac and Windows users and we will be releasing this for Linux in early 2021.
The hub-tool is designed to map as closely as possible to the top-level features we know people are using in Docker Hub, and to provide a new way for people to start interacting with and managing their content. Let’s start by taking a look at the top-level options we have.
We can see that we have the ability to jump into your account, your content, your orgs and your personal access tokens.
From here I can dive into one of my repos.
And from here I can then decide to list the tags in one of those repos. This also now lets me see when Continue reading
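An illustrative session (the Docker ID, org, and repo names are placeholders) following the walkthrough above might look like:

```shell
hub-tool login mydockerid     # authenticate against Docker Hub
hub-tool repo ls              # list my repositories
hub-tool tag ls myorg/myrepo  # list the tags in one of those repos
```

Each subcommand also accepts --help for the full set of options.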
Last week, we held our first Community All Hands and the response was phenomenal. A huge thank you to all 1,100+ people who joined. If you missed it, you can watch the recording here. You can also find answers here to the questions that came in towards the end that we didn’t have time to answer live.
This all-hands was an effort to further deepen our engagement with the community and bring users, contributors and staff together on a quarterly basis to share updates on what we’re working on and what our priorities are for 2021 and beyond. The event was also an opportunity to give the community direct access to Docker’s leadership and provide a platform to submit questions and upvote those that are most relevant and important to people.
The overwhelming piece of feedback we got from attendees was that the event was too short and people would have loved to see more demos. We certainly had a packed agenda and we did our best to squeeze as much as possible into an hour. For our next one (in February 2021!), we’ll aim to extend the event by 30 minutes and include more live demos. We’ll also try Continue reading
Sean Cavanaugh, Anshul Behl and I recently hosted a webinar entitled “Migrating to Ansible Collections” (link to YouTube on-demand webinar replay and link to PDF slides download). This webinar was focused on enabling the Ansible Playbook writers, looking to move to the wonderful world of Ansible Collections in existing Ansible 2.9 environments and beyond.
The webinar was much more popular than we expected, so much so we didn’t have enough time to answer all the questions, so we took all the questions and put them in this blog to make them available to everyone.
I would like to use Ansible to automate an application using a REST API (for example creating a new bitbucket project). Should I be writing a role or a custom module? And should I then publish that as a Collection?
It depends on how much investment you’d like to make into the module or role that you develop. For example, creating a role that references the built-in Ansible URI module can be evaluated versus creating an Ansible module written in Python. If you were to create a module, it can be utilized via a role developed by you or the playbook author. Continue reading
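As a sketch of the role-based approach (the URL, token variable, and payload below are illustrative assumptions, not from the webinar), a task calling the built-in uri module against a REST API might look like:

```yaml
- name: Create a Bitbucket project via the REST API
  ansible.builtin.uri:
    url: "https://bitbucket.example.com/rest/api/1.0/projects"
    method: POST
    headers:
      Authorization: "Bearer {{ bitbucket_token }}"
    body_format: json
    body:
      key: "NEWPROJ"
      name: "New Project"
    status_code: 201
```

Wrapping tasks like this in a role keeps the playbook readable, while a custom Python module would be the heavier-weight option when you need richer error handling or idempotence checks.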