
No matter what industry you’re in, your application modernization strategy matters. Overlooking or downplaying its importance is a quick way for customers to sour and competitors to gain an edge. It’s why 91% of executives believe their revenues will take a hit without successful digital transformation.
The good news is modern applications offer a clear path forward. Creating a roadmap for your modern application strategy is a critical step toward a more agile and continuous model of software development and delivery – one that’s centered on delivering perpetually expanding value and new experiences to customers.
This is the first in a series of blogs that will provide a roadmap for application modernization by looking at industry viewpoints, different approaches, underlying platforms and real-world stories foundational to successful modern application development.
The technology inventory at companies today is as diverse, distributed and complex as ever. It includes a variety of technology stacks, application frameworks, services and languages. During a modernization process, new open source technologies are often integrated with legacy solutions. Existing applications need to be maintained and enhanced, modern applications need to be Continue reading
This is a guest post by Alex Iankoulski, Docker Captain and full stack software and infrastructure architect at Shell New Energies. The views expressed here are his own and are neither opposed nor endorsed by Shell or Docker.
In this blog, I will show you how to use Docker Desktop for Mac or Windows to run Kubeflow. To make this easier, I used my Depend on Docker project, which you can find on GitHub.
Even though we are experiencing a tectonic shift of development workflows in the cloud era towards hosted and remote environments, a substantial amount of work and experimentation still happens on developers’ local machines. The ability to scale down allows us to mimic a cloud deployment locally and enables us to play, learn quickly, and make changes in a safe, isolated environment. A good example of this rationale is provided by Kubeflow and MiniKF.
Since Kubeflow was first released by Google in 2018, adoption has increased significantly, particularly in the data science world for orchestration of machine learning pipelines. There are various ways to deploy Kubeflow both on desktops and servers as described in Continue reading

We had the chance recently to sit down with the Liberty Mutual Insurance team at their Portsmouth, New Hampshire offices and talk about how they deliver better business outcomes with the cloud and containerization.
At this point, Liberty Mutual has moved about 30 percent of their applications to the cloud. One of the big improvements the team has seen with the cloud and Docker is the speed at which developers can develop and deploy their applications. That means better business outcomes for Liberty Mutual and its customers.
Here’s what they told us. You can also catch the highlights in this two-minute video:
Mark Cressey, SVP and GM, IT Hosting Services: Tech and the digitization it’s allowed has really enabled Liberty Mutual to get deeply ingrained in our customers’ lives and support them through their major life journeys. We’re able to be more predictive of our customers’ needs and get in front of them as a proactive step. How can we help? How can we assist you? Is this the right coverage? And even to the point where using real time information, we can warn them about approaching windstorms or warn our Continue reading

The Docker team is gearing up for another great KubeCon this year in San Diego, November 17-21. As a Platinum sponsor of this year’s event, we are excited to bring Docker employees, community members and Docker captains together to demonstrate and celebrate the combined impact of Docker and Kubernetes.
Stop by Booth P37 to learn how to leverage the Docker platform to securely build, share and run modern applications for any Kubernetes environment. We will demonstrate Docker Desktop Enterprise and how it accelerates container application development while supporting developer choice. Experts will be on hand to answer questions about Docker Kubernetes Services (DKS), a secure and production-ready Kubernetes environment. Or come to learn more about Docker’s contributions to Kubernetes while picking up some great Docker swag.
KubeCon will also provide a great opportunity to learn from industry experts and hear from people who run production applications on Kubernetes. Here’s a helpful guide from the Docker team of our recommended talks:
Monday, Nov 18
Tuesday, Nov 19

The Docker team will be on the show floor at Microsoft Ignite the week of November 4. We’ll be talking about the state of modern application development, how to accelerate innovation efforts, and the role containerization, Docker, and Microsoft Azure play in powering these initiatives.
Come by booth #2414 at Microsoft Ignite to check out the latest developments in the Docker platform. Learn why over 1.8 million developers build modern applications on Docker, and over 800 enterprises rely on Docker Enterprise for production workloads.
At Microsoft Ignite, we will be talking about:
Docker Enterprise 3.0 shipped back in April 2019, making it the first and only desktop-to-cloud container platform in the market that lets you build and share any application and securely run it anywhere – from hybrid cloud to the edge. At Microsoft Ignite, we’ll have demos that show how Docker Enterprise 3.0 simplifies Kubernetes for Azure Kubernetes Service (AKS) and enables companies to more easily build modern applications with Docker Desktop Enterprise and Docker Application.
Learn how to accelerate your journey to the cloud with Docker’s Dev Team Starter Bundle for Continue reading
Kubernetes has the broadest capabilities of any container orchestrator available today, which adds up to a lot of power and complexity. That can be overwhelming for a lot of people jumping in for the first time – enough to scare people off from getting started. There are a few reasons it can seem intimidating:

This is a guest post by Javier Ramírez, Docker Captain and IT Architect at Hopla Software. You can follow him on Twitter @frjaraur or on Github.
Docker began including Kubernetes with Docker Enterprise 2.0 last year. The recent 3.0 release includes CNCF Certified Kubernetes 1.14, which has many additional security features. In this blog post, I will review Pod Security Policies and Admission Controllers.
Pod Security Policies are rules created in Kubernetes to control security in pods. A pod will only be scheduled on a Kubernetes cluster if it passes these rules. These rules are defined in the “PodSecurityPolicy” resource and allow us to manage host namespace and filesystem usage, as well as privileged pod features. We can use the PodSecurityPolicy resource to make fine-grained security configurations, including:
The Docker Universal Control Plane (UCP) 3.2 provides two Pod Security Policies by default – which is helpful Continue reading
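To make the rule types concrete, here is a sketch, in plain JavaScript for illustration only, of the kind of check a Pod Security Policy encodes. The real enforcement happens inside the Kubernetes admission controller, not in user code; the pod fields mirror the actual pod spec (`securityContext.privileged`, `hostNetwork`), but the flat policy shape here is a simplification of the real PodSecurityPolicy resource.

```javascript
// Sketch of a policy check: reject pods that request privileged mode
// or host networking unless the (simplified) policy allows it.
function admitPod(pod, policy) {
  const containers = (pod.spec && pod.spec.containers) || [];
  for (const c of containers) {
    const sc = c.securityContext || {};
    if (sc.privileged && !policy.privileged) {
      return { allowed: false, reason: `container ${c.name} requests privileged mode` };
    }
  }
  if (pod.spec && pod.spec.hostNetwork && !policy.hostNetwork) {
    return { allowed: false, reason: 'pod requests host networking' };
  }
  return { allowed: true };
}
```

A pod is only scheduled if every such rule passes; a restrictive default policy like `{ privileged: false, hostNetwork: false }` rejects privileged pods outright.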
A while ago I came across a utility named jk, which purported to be able to create structured text files—in JSON, YAML, or HCL—using JavaScript (or TypeScript that has been transpiled into JavaScript). One of the use cases was creating Kubernetes manifests. The GitHub repository for jk describes it as “a data templating tool”, and that’s accurate for simple use cases. In more complex use cases, the use of a general-purpose programming language like JavaScript in jk reveals that the tool has the potential to be much more than just a data templating tool—if you have the JavaScript expertise to unlock that potential.
The basic idea behind jk is that you could write some relatively simple JavaScript, and jk will take that JavaScript and use it to create some type of structured data output. I’ll focus on Kubernetes manifests here, but as you read keep in mind you could use this for other purposes as well. (I explore a couple other use cases at the end of this post.)
Here’s a very simple example:
const service = new api.core.v1.Service('appService', {
  metadata: {
    namespace: 'appName',
    labels: {
      app: 'appName',
      team: 'blue',
    },
  },
  spec: {
    selector: Continue reading
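For a sense of the end result, here is a self-contained variant of the snippet above that runs under plain Node rather than jk (the `api.core.v1.Service` constructor is jk-specific, and the `selector` and `ports` values below are assumptions, since the original example is truncated). Because JSON is valid YAML, the serialized object is already a usable manifest:

```javascript
// Build the Service manifest as a plain JavaScript object and serialize it.
// Under jk you would hand the object to its output helpers instead.
const service = {
  apiVersion: 'v1',
  kind: 'Service',
  metadata: {
    name: 'appService',
    namespace: 'appName',
    labels: { app: 'appName', team: 'blue' },
  },
  spec: {
    selector: { app: 'appName' },           // assumed: match the app label
    ports: [{ port: 80, targetPort: 8080 }], // assumed: illustrative ports
  },
};

console.log(JSON.stringify(service, null, 2));
```

The payoff of using a general-purpose language is everything around the object literal: loops, functions, and shared constants that a pure templating syntax cannot express.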

We recognize the central role that Docker Hub plays in modern application development and are working on many enhancements around security and content. In this blog post we will share how we are implementing two-factor authentication (2FA).
Two-factor authentication increases the security of your accounts by requiring two different forms of validation. This helps ensure that you are the rightful account owner. For Docker Hub, that means providing something you know (your username and a strong password) and something you have in your possession. Since Docker Hub is used by millions of developers and organizations for storing and sharing content – sometimes company intellectual property – we chose to use one of the more secure models for 2FA: software token (TOTP) authentication.
TOTP authentication is more secure than SMS-based 2FA, which has many attack vectors and vulnerabilities. TOTP requires a little more upfront setup, but once enabled, it is just as simple as (if not simpler than) text message-based verification. It requires the use of an authenticator application, of which there are many available. These can be apps downloaded to your mobile device (e.g. Google Authenticator or Microsoft Authenticator) or it can Continue reading
Barcelona is probably my favorite city in Europe—which works out well, since VMware seems to have settled on Barcelona as the destination for VMworld EMEA. VMworld is back in Barcelona again this year, and I’m fortunate enough to be able to attend. VMworld in Barcelona wouldn’t be the same without Spousetivities, though, and I’m happy to report that Spousetivities will be in Barcelona. In fact, registration is already open!
If you’re bringing along a spouse, significant other, boyfriend/girlfriend, or just some family members, you owe it to them to look into Spousetivities. You’ll be able to focus at the conference knowing that your loved one(s) are not only safe, but enjoying some amazing activities in and around Barcelona. Here’s a quick peek at what Crystal and her team have lined up this year:
Lunch and private transportation are included for all activities, and all activities Continue reading
Red Hat is coming onto IBM’s books at just the right time, and to be honest, it might have been better for Big Blue if the deal to acquire the world’s largest supplier of support and packaging services for open source software had closed maybe one or two quarters ago. …
The Potential Of Red Hat Plus Power Is Larger Than Exascale was written by Timothy Prickett Morgan at The Next Platform.

From October through December, Docker User Groups all over the world are hosting a workshop for their local community! Join us for an Introduction to Docker for Developers, a hands-on workshop we run on Play with Docker.
This Docker 101 workshop for developers is designed to get you up and running with containers. You’ll learn how to build images, run containers, use volumes to persist data and mount in source code, and define your application using Docker Compose. We’ll even mix in a few advanced topics, such as networking and image-building best practices. There is definitely something for everyone!
Visit your local User Group page to see if there is a workshop scheduled in your area. Don’t see an event listed? Email the team by scrolling to the bottom of the chapter page and clicking the contact us button. Let them know you want to join in on the workshop fun!
Don’t see a user group in your area? Never fear, join the virtual meetup group for monthly meetups on all things Docker.
The #LearnDocker for #developers workshop series is coming to Continue reading
Last week I had a crazy idea: if kustomize can be used to modify YAML files like Kubernetes manifests, then could one use kustomize to modify a kubeadm configuration file, which is also a YAML manifest? So I asked about it in one of the Kubernetes-related channels in Slack at work, and as it turns out it’s not such a crazy idea after all! So, in this post, I’ll show you how to use kustomize to modify kubeadm configuration files.
If you aren’t already familiar with kustomize, I recommend having a look at this blog post, which provides an overview of this tool. For the base kubeadm configuration files to modify, I’ll use kubeadm configuration files from this post on setting up a Kubernetes 1.15 cluster with the AWS cloud provider.
While the blog post linked above provides an overview of kustomize, it certainly doesn’t cover all the functionality kustomize provides. In this particular use case—modifying kubeadm configuration files—the functionality described in the linked blog post doesn’t get you where you need to go. Instead, you’ll have to use the patching functionality of kustomize, which allows you to overwrite specific fields within the YAML definition Continue reading
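Conceptually, a patch is a small document overlaid on the base file: objects merge key by key, and everything else is replaced outright. The sketch below illustrates that idea in miniature (real kustomize strategic-merge patches carry more semantics, such as list merge keys, and the kubeadm field values here are only illustrative):

```javascript
// Recursively overlay `patch` onto `base`: plain objects merge key-by-key,
// scalars and arrays are replaced. This mirrors the spirit of a kustomize
// patch, not its full strategic-merge rules.
function patchObject(base, patch) {
  if (typeof base !== 'object' || base === null || Array.isArray(base) ||
      typeof patch !== 'object' || patch === null || Array.isArray(patch)) {
    return patch;
  }
  const out = { ...base };
  for (const [key, value] of Object.entries(patch)) {
    out[key] = patchObject(base[key], value);
  }
  return out;
}

// Example: override only the pod subnet in a kubeadm ClusterConfiguration,
// leaving the rest of the base document untouched.
const base = {
  apiVersion: 'kubeadm.k8s.io/v1beta2',
  kind: 'ClusterConfiguration',
  networking: { serviceSubnet: '10.96.0.0/12', podSubnet: '192.168.0.0/16' },
};
const patched = patchObject(base, { networking: { podSubnet: '10.244.0.0/16' } });
```

The appeal for kubeadm configurations is the same as for manifests: one shared base file, with small per-cluster patches instead of whole copied-and-edited configs.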
We’re continuing our celebration of Women in Tech Week into this week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.


Product Designer.
11 months.
The designer part, yes. But the software product part, not necessarily. My background is in architecture and industrial design and I imagined I would do physical product design. But I enjoy UX; the speed at which you can iterate is great for design.
To embrace discomfort. I don’t mean that in a bad way. A mentor once told me that the only time your brain is actually growing is when you’re uncomfortable. It has something to do with the dendrites being forced to grow because you’re forced to learn new things.

Kubernetes is a powerful container orchestrator and has been establishing itself as IT architects’ container orchestrator of choice. But Kubernetes’ power comes at a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but knowing how to actually fly it is not so simple. That complexity can overwhelm a lot of people approaching the system for the first time.
I wrote a blog series recently where I walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. The posts go into quite a bit of detail, so I’ve provided an abbreviated version here, with links to the original posts.
With a machine as powerful as Kubernetes, I like to identify the absolute minimum set of things we’ll need to understand in order to be successful; there’ll be time to learn about all the other bells and whistles another day, after we master the core ideas. No matter where your application runs, in Kubernetes or anywhere else, there are four concerns we are going to have to address:

The upcoming Red Hat Ansible Engine 2.9 release has some really exciting improvements, and the following blog highlights just a few of the notable additions. In typical Ansible fashion, development of Ansible Network enhancements is done in the open with the help of the community. You can follow along by watching the GitHub project board, as well as the roadmap for the Red Hat Ansible Engine 2.9 release via the Ansible Network wiki page.
As was recently announced, Red Hat Ansible Automation Platform now includes Ansible Tower, Ansible Engine, and all Ansible Network content. To date, many of the most popular network platforms are enabled via Ansible Modules. Here are just a few:
A full list of the platforms that are fully supported by Red Hat via an Ansible Automation subscription can be found at the following location: https://docs.ansible.com/ansible/2.9/modules/network_maintained.html#network-supported
In the last four years we’ve learned a lot about developing a platform for network automation. We’ve also learned a lot about how users apply these platform artifacts as consumed in end-user Ansible Playbooks and Roles. In the Continue reading
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.


SEG Engineer (Support Escalation Engineer).
4 months.
The SEG role is a combination that probably doesn’t exist as a general rule. I’ve always liked to support other engineers and work cross-functionally, as well as unravel hard problems, so it’s a great fit for me.
The only thing constant about a career in tech is change. When in doubt, keep moving. By that, I mean keep learning, keep weighing new ideas, keep trying new things.
In my first month at Docker, we hosted a summer cohort of students from Historically Black Colleges who were participating in a summer internship. As part of their visit Continue reading
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.


I work as a data engineer – building and maintaining data pipelines and delivery tools for the entire company.
2 years.
Not quite! As a teenager, I wanted to become a cryptographer and spent most of my time in undergrad and grad school on research in privacy and security. I eventually realized I liked working with data and was pretty good at dealing with databases, which pushed me into my current role.
Become acquainted with the entire data journey and try to pick up one tool or language for each phase. For example, you may choose to use Python to fetch and transform data from an API and load it Continue reading
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.


What is your job?
Senior Director of Product Marketing.
How long have you worked at Docker?
2 ½ years.
Is your current role one that you always intended on your career path?
Nope! I studied engineering and started in a technical role at a semiconductor company. I realized there that I really enjoyed helping others understand how technology works, and that led me to Product Marketing! What I love about the role is that it’s extremely cross-functional. You work closely with engineering, product management, sales and marketing, and it requires both left brain and right brain skills. My technical background helps me to understand our products, while my creative side helps me communicate our products’ core value propositions.
What is your advice for someone entering the field?
It’s always good to be self-aware. Know your strengths and weaknesses, and look Continue reading