According to a recent Stack Overflow report, the Docker platform is among the top 10 skills to learn if you want to advance a career in tech. So where do you go to start learning Docker, you may ask? Well, the good news is that we now have free workshops and hands-on labs included as part of your DockerCon 2018 ticket.

The conference workshops will focus on a range of subjects, from migrating .NET or Java apps to the Docker platform to deep dives on container monitoring and logging, networking, storage and security. Each workshop is designed to give you hands-on instruction on key container concepts, with guidance and mentoring from Docker engineers and Docker Captains. The workshops are a great opportunity to zoom in on specific aspects of the Docker platform. Here is the list of free workshops available (click on the links to see the full abstracts):
Roles are an essential part of Ansible, and help structure your automation content. The idea is to have clearly defined roles for dedicated tasks. In your automation code, roles are called from your Ansible Playbooks.
Since roles usually have a well-defined purpose, they make it easy to reuse your code, not only for yourself but also within your team. You can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.
So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:
tasks/
handlers/
files/
templates/
vars/
defaults/
meta/
Folders which contain Ansible code - like tasks, handlers, vars and defaults - each hold a main.yml file containing the relevant Ansible content. In the case of the tasks directory, that file often includes other YAML files within the same directory. Roles even provide ways to test your automation code - in Continue reading
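To make the structure above concrete, here is a minimal sketch of what a role and the playbook that calls it might look like. The role name "webserver" and the package and service names are illustrative, not taken from the post:

```yaml
# Hypothetical role "webserver": roles/webserver/tasks/main.yml
---
- name: Install Apache
  yum:
    name: httpd
    state: present

- name: Start and enable Apache
  service:
    name: httpd
    state: started
    enabled: true

# site.yml - calling the role from a playbook
---
- hosts: webservers
  roles:
    - webserver
```

Running `ansible-playbook site.yml` would then apply every task defined in the role to the hosts in the webservers group.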


“You are now Certified Kubernetes.” With this comment, Docker for Windows and Docker for Mac passed the Kubernetes conformance tests. Kubernetes has been available in Docker for Mac and Docker for Windows since January, having first been announced at DockerCon EU last year. But why is this important to the many of you who are using Docker for Windows and Docker for Mac?
Kubernetes is designed to be a platform that others can build upon. As with any similar project, the risk is that different distributions vary enough that applications aren’t really portable. The Kubernetes project has always been aware of that risk – and this led directly to forming the Conformance Working Group. The group owns a test suite that anyone distributing Kubernetes can run, submitting the results to attain official certification. This test suite checks that Kubernetes behaves like, well, Kubernetes; that the various APIs are exposed correctly and that applications built using the core APIs will run successfully. In fact, our enterprise container platform, Docker Enterprise Edition, achieved certification using the same test suite. You can find more about the test suite at https://github.com/cncf/k8s-conformance.
This is important for Docker for Windows and Docker for Continue reading
Welcome to the first installment of our Windows-specific Getting Started series!
Would you like to automate some of your Windows hosts with Red Hat Ansible Tower, but don’t know how to set everything up? Are you worried that Red Hat Ansible Engine won’t be able to communicate with your Windows servers without installing a bunch of extra software? Do you want to easily automate everyone’s best friend, Clippy? 
We can’t help with the last thing, but if you said yes to the other two questions, you've come to the right place. In this post, we’ll walk you through all the steps you need to take in order to set up and connect to your Windows hosts with Ansible Engine.
A few of the many things you can do for your Windows hosts with Ansible Engine include:
In addition to connecting to and automating Windows hosts using local or domain users, you’ll also be able to use runas to execute actions as the Administrator (the Windows alternative to Linux’s sudo or su), so Continue reading
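The setup described above typically comes down to a handful of inventory variables plus a quick connectivity check. This is a hedged sketch, not the post's exact configuration: the user name, Vault variable, and NTLM transport are assumptions, and certificate validation should only be disabled in a lab:

```yaml
# group_vars/windows.yml - connection settings for a "windows" inventory group
ansible_user: administrator
ansible_password: "{{ vault_win_password }}"   # store the real value in Ansible Vault
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore   # lab use only; validate certs in production

# win_ping.yml - verify Ansible Engine can reach the Windows hosts
---
- hosts: windows
  tasks:
    - name: Verify connectivity
      win_ping:
```

If `ansible-playbook win_ping.yml` reports the hosts as reachable, you are ready to start using the win_* modules against them.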

Last month the Linux Foundation announced the results of the 2018 Open Container Initiative (OCI) Technical Oversight Board (TOB) election. Members of the TOB then voted to elect our very own Michael Crosby as the new Chairman. The result of the election should not come as a surprise to anyone in the community, given Michael’s extensive contributions to the container ecosystem.
Back in February 2014, Michael led the development of libcontainer, a Go library that was developed to access the kernel’s container APIs directly, without any other dependencies. If you look at the first commit of libcontainer, you’ll see that the JSON spec is very similar to the latest version of the 1.0 runtime specification.
In the interview below, we take a closer look at Michael’s contributions to OCI, his vision for the future and how this benefits all Docker users.
I think that it is important to be part of the TOB to ensure that the specifications that have been created are generally useful and not specific to any one use case. I also feel it is important to ensure that the specifications are stable so that Continue reading
The Ansible Networking Team is excited about the release of Ansible 2.5. Back in February, I wrote about new Networking Features in Ansible 2.5, and one of the biggest areas of feedback was around the network_cli connection plugin. For more background on this connection plugin, please refer to the previous blog post.
In this post, I convert existing networking playbooks that use connection: local to use connection: network_cli. Please note that the passwords are in plain text for demonstration purposes only. Refer to the Ansible Networking documentation for recommendations on using Ansible Vault for secure password storage and usage.
To demonstrate, let’s use an existing GitHub repository with working playbooks using the legacy connection local method. NOTE: The connection local method will continue to be supported for quite some time and has not yet been deprecated. This repository has several examples using Ansible and NAPALM, but we are highlighting the Ansible Playbooks in this post. The GitHub repository can be found here.
Each networking platform has its specific *_config module, which makes backups easy within Ansible. For this playbook we are running the Ansible Playbook Continue reading
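The shape of the conversion described above can be sketched in a before/after pair. The host group, the `cli` provider variable, and the use of IOS modules are assumptions for illustration; with network_cli, the platform comes from the `ansible_network_os` inventory variable (for example `ansible_network_os=ios`) and credentials come from standard `ansible_user`/`ansible_password` variables:

```yaml
# Before: legacy connection local, credentials passed through a provider dict
---
- hosts: routers
  connection: local
  tasks:
    - name: Gather IOS facts (legacy style)
      ios_facts:
        provider: "{{ cli }}"

# After: connection network_cli (Ansible 2.5); no provider dict needed
---
- hosts: routers
  connection: network_cli
  tasks:
    - name: Gather IOS facts
      ios_facts:
```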

Docker believes in making technology easy to use and accessible, and that approach also extends to our enterprise-ready container platform. That means providing out-of-the-box integrations with key extensions of the platform that enterprise organizations require, while also making it possible to swap these built-in solutions for other tools as desired.
Docker Enterprise Edition 2.0 integrates Kubernetes into our platform and delivers the only Kubernetes platform that can be deployed across multiple clouds and multiple operating systems. As part of this release, we have included Project Calico by Tigera as the “batteries included” Kubernetes CNI plug-in for a highly scalable, industry-leading networking and routing solution.
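As a generic illustration of what a CNI plug-in like Calico enforces (not an example taken from Docker EE itself), a standard Kubernetes NetworkPolicy restricting traffic between two tiers might look like this; the app labels and port are hypothetical:

```yaml
# Allow only pods labeled app=frontend to reach app=backend pods on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Policies like this are part of the core Kubernetes API, but it is the CNI plug-in that actually enforces them on the wire.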
While we support our customers using their preferred CNI plug-in, we chose to integrate Project Calico for our built-in solution because it aligns well with our design objectives for Docker EE 2.0:
Earlier this morning, I asked on Twitter about good individuals to follow on Twitter for Kubernetes information. I received quite a few good responses (thank you!), and I though it might be useful to share the list of the folks that were recommended across all those responses.
The list I’ve compiled is clearly incomplete! If you think someone should be added to this list, feel free to hit me up on Twitter and let me know. Alternately, feel free to submit a pull request (PR) that adds them to this list. I’m not going to “vet” the list, so I’ll add any and all recommendations (unless they are clearly not related to Kubernetes, such as a news anchorman someone recommended to me—not sure about that one!).
Without further ado, here is the list I compiled from the responses to my tweet, in no particular order (I’ve included full name and employer, where that information is available):

Did you know that Docker Hub has millions of users pulling roughly one billion container images every two weeks — and it all runs on Docker Enterprise Edition?
Docker Enterprise Edition 2.0 is now available to commercial customers who require an enterprise-ready container platform, but the Docker operations team has already been using it in production for some time. As part of our commitment to delivering high quality software that is ready to support your mission-critical applications, we leverage Docker Enterprise Edition 2.0 as the platform behind Docker Hub and our other SaaS services, Docker Store and Docker Cloud.
Some organizations call it “dogfooding;” some call it “drinking your own champagne.” Whatever you call it, the importance of this program is to be fully invested in our own container platform and share in the same operational experiences as our customers.
One of the main features of this latest release is the integration of Kubernetes, so we wanted to make sure we were leveraging this capability. Working closely with our SaaS team leads, we chose a few services to migrate to Kubernetes while keeping others on Swarm.
For people already running Docker EE, Continue reading
We are excited to announce Docker Enterprise Edition 2.0 – a significant leap forward in our enterprise-ready container platform. Docker Enterprise Edition (EE) 2.0 is the only platform that manages and secures applications on Kubernetes in multi-Linux, multi-OS and multi-cloud customer environments. As a complete platform that integrates and scales with your organization, Docker EE 2.0 gives you the most flexibility and choice over the types of applications supported, orchestrators used, and where it’s deployed. It also enables organizations to operationalize Kubernetes more rapidly with streamlined workflows and helps you deliver safer applications through integrated security solutions. In this blog post, we’ll walk through some of the key new capabilities of Docker EE 2.0.
As containerization becomes core to your IT strategy, the importance of having a platform that supports choice becomes even more important. Being able to address a broad set of applications across multiple lines of business, built on different technology stacks and deployed to different infrastructures means that you have the flexibility needed to make changes as business requirements evolve. In Docker EE 2.0 we are expanding our customers’ choices in a few ways:
Just like with Windows and Linux servers, networking devices can be exploited by vulnerabilities found in their operating systems. Many IT organizations do not have a comprehensive strategy for mitigating security vulnerabilities that span multiple teams (networking, servers, storage, etc.). Since the majority of network operations is still manual, the need to mitigate quickly and reliably across multiple platforms consisting of hundreds of network devices becomes extremely important.
In Cisco’s March 2018 Semiannual Cisco IOS and IOS XE Software Security Advisory Bundled Publication, 22 vulnerabilities were detailed. While Red Hat does not report or keep track of individual networking vendors’ CVEs, Red Hat Ansible Engine can be used to quickly automate mitigation of CVEs based on instructions from networking vendors.
In this blog post we are going to walk through CVE-2018-0171 which is titled “Cisco IOS and IOS XE Software Smart Install Remote Code Execution Vulnerability.” This CVE is labeled as critical by Cisco, with the following headline summary:
“...a vulnerability in the Smart Install feature of Cisco IOS Software and Cisco IOS XE Software could allow an unauthenticated, remote attacker to trigger a reload of an affected device, resulting in a Continue reading
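Cisco's published guidance for devices that do not need Smart Install is to disable the feature with the `no vstack` command. A minimal, hedged sketch of automating that mitigation with Ansible might look like the following; the inventory group name is an assumption, and you should verify the mitigation against Cisco's advisory for your platform before rolling it out:

```yaml
# Disable the Smart Install client feature ("no vstack") across IOS devices.
---
- hosts: cisco_ios
  connection: network_cli
  tasks:
    - name: Disable the Smart Install feature
      ios_config:
        lines:
          - no vstack
```

Because ios_config is idempotent, re-running the playbook against hundreds of devices simply confirms the mitigation is still in place.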
DockerCon is the premier container conference where the IT industry comes together to learn, belong, and collaborate on the different phases of the containerization journey. This year, we’re focusing on helping our 6,000+ attendees define their containerization journeys. Whether you’re a developer just getting started with Docker or an Enterprise systems architect ready to scale and innovate, at DockerCon we’ll help you map out and implement a containerization strategy for you, your team and your company.

Throughout the four days, you’ll have the chance to design your own DockerCon journey – selecting from 7 different breakout session tracks, a collection of free hands-on labs and workshops, and our peer-to-peer networking Hallway Track.
This year at DockerCon we’re designing our conference around the containerization journey and providing opportunities for our attendees to create tailored learning and networking experiences for their particular needs.
We’ve identified four stages of the containerization journey that will be present at DockerCon 2018:

The event program is designed to be a “choose your own adventure,” allowing every attendee to find the content, people, trainings, and labs that are right for them. Maybe you’re new to the Docker platform and are looking for more information on Continue reading
The size, complexity and high rate of change in today’s IT environments can be overwhelming. Enabling the performance and availability of these modern microservice environments is a constant challenge for IT organizations.
One trend contributing to this rate of change is the adoption of IT automation for provisioning, configuration management and ongoing operations. For this blog, we want to highlight the repeatable and consistent outcomes allowed by IT automation, and explore what is possible when Ansible automation is extended to the application monitoring platform Dynatrace.
Thanks to Jürgen Etzlstorfer for giving us an overview of the Ansible and Dynatrace integration.
---
Considering the size, complexity and high rate of change in today's IT environments, traditional methods of monitoring application performance and availability are often necessary and commonplace in most operations teams. Application performance monitoring (APM) platforms are used to detect bottlenecks and problems that can impact the experience of your customers.
Monitoring alone, however, isn’t always enough to help keep your applications running at peak performance. When issues are detected, APM platforms are designed to alert the operator of the problem and its root-cause. The Ops team can then agree on a corrective action, and implement this Continue reading
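One common pattern for this kind of integration is to have the APM platform's problem notification launch an Ansible Tower job template, which runs a remediation playbook. The sketch below is entirely hypothetical: the extra variables (problem_id, affected_service) are assumptions about what a Dynatrace notification might pass in as extra_vars, and restarting a service is just one possible corrective action:

```yaml
# Hypothetical remediation playbook launched by an Ansible Tower job template
# in response to a Dynatrace problem notification.
---
- hosts: all
  become: true
  tasks:
    - name: Log the problem being remediated
      debug:
        msg: "Remediating Dynatrace problem {{ problem_id | default('unknown') }}"

    - name: Restart the affected service as a corrective action
      service:
        name: "{{ affected_service | default('myapp') }}"
        state: restarted
```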
As part of the transition into my new role at Heptio (see here for more information), I had to select a new corporate laptop. Given that my last attempt at running Linux full-time was thwarted due primarily to work-specific collaboration issues that would no longer apply (see here), and given that other members of my team (the Field Engineering team) are also running Linux full-time, I thought I’d give it another go. Accordingly, I’ve started working on a Lenovo ThinkPad X1 Carbon (5th generation). Here are my thoughts on this laptop.
This is now my second non-Apple laptop in the last year. My previous non-Apple laptop, a Dell Latitude E7370, was a pretty decent laptop (see my review). As good as the E7370 was, though, the X1 Carbon is better.
The X1 Carbon features a dual-core i7 7500U CPU, which (subjectively, anyway) outperforms the mobile CPU in the E7370. This makes the X1 Carbon feel quite snappy and responsive. CPU performance was an issue for me with the Dell—it didn’t take much to tax that mobile CPU. I haven’t seen that issue so far with the X1 Carbon. Coupled with 16GB of RAM, the X1 Carbon is no Continue reading

Last month, Docker turned five! In celebration of this milestone, we turned the spotlight on our amazing global community of customers, users, Community Leaders, Captains, mentors, partners and sponsors, and asked them to reflect on their Docker learning journey. Everyone came together to celebrate how far they had come, think about where they would like to go and take that next step towards reaching their goal.
We had a lot of fun during the #dockerbday with the Quebec #Docker community! Thanks to @ingeno for sponsoring the event, @tnazare for the cake and for being an awesome mentor! #dockerselfie #DockerQC pic.twitter.com/YZZNkWfWjq
— Julien Maitrehenry (@jmaitrehenry) March 23, 2018
We invite you to do the same. Whether you just want to test the waters, or want to dive right in, there are a variety of ways for you to take the next step on your Docker journey:
Just getting started and want to learn the basics? Check out the Play with Docker Classroom and work through our self paced labs to learn about containers and the Docker platform.
Want to learn about the latest update to Docker Enterprise Edition ? Join Docker and thousands of your peers for the Docker Continue reading
A significant number of Docker early adopters, advanced container users and open source lovers come to DockerCon to contribute to open source projects and collaborate on technical system implementations. Last year, these activities took place at the Moby Summit, scheduled on the last day of the conference. Listening to feedback from attendees who expressed interest in participating in such activities earlier in the week, we’ve decided to bring back the Contribute & Collaborate track to the main conference days!

The goal of this track is to raise awareness and educate users around the upstream components of the Docker Platform, provide a path for new contributors and unleash new opportunities for innovation and collaboration within the broader Cloud Native and Open Source communities.
This track is organized in 4 half days (one for each of the categories below). Each will start with a series of lightning talks during which maintainers will introduce their projects and give a brief demo. We’ll then break into smaller groups for roundtables and informal, interactive Birds-of-a-Feather discussions with maintainers. This will be a great opportunity to collaborate with peers who share the same interests, ask questions of maintainers, and get insights into project roadmaps Continue reading
Special thanks to Kylie Liang from the Microsoft Azure DevEx team for giving us a closer look at one of the new Azure module features.
---
For this blog entry, we wanted to share a step by step guide to using the Azure Container Instance module that has been included in Ansible 2.5.
The Container Instance service is a PaaS offering on Azure that is designed to let users run containers without managing any of the underlying infrastructure. The Ansible Azure Container Instance module allows users to create, update and delete an Azure Container Instance.
For the purposes of this blog, we’ll assume that you are new to Azure and Ansible and want to automate the Container Instance service. This tutorial will guide you through automating the following steps:

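Although the tutorial's step list is elided above, the core of the automation is a single task using the azure_rm_containerinstance module shipped in Ansible 2.5. The resource group, container group name, and image below are illustrative, and Azure credentials are assumed to be configured separately (for example via environment variables or a credentials file):

```yaml
# Create a public Azure Container Instance running an nginx container.
---
- hosts: localhost
  connection: local
  tasks:
    - name: Create an Azure Container Instance
      azure_rm_containerinstance:
        resource_group: myResourceGroup
        name: myContainerGroup
        os_type: linux
        ip_address: public
        ports:
          - 80
        containers:
          - name: web
            image: nginx:latest
            memory: 1.5
            ports:
              - 80
```

Changing `state: absent` on the same task (the module's delete path) would tear the instance back down, covering the create/update/delete lifecycle the post describes.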
Moving a monolithic application to a modern cloud architecture can be difficult and often results in a greenfield development effort. However, it is possible to move towards a cloud architecture using Docker Enterprise Edition (EE) with no code changes, and gain portability, security and efficiency in the process.

To conclude the series, in part 5 I use the message service’s REST endpoint to replace one part of the application UI with a JavaScript client. The original application client UI was written in Java Server Pages (JSP), so any UI changes required the application to be recompiled and redeployed. I can use modern web tools and frameworks such as React.js to write a new client interface. I’ll build the new client using a multi-stage build and deploy it by adding the container to the Docker Compose file. I’ll also show how to deploy the entire application from your development environment to Docker EE to make it available for testing.
Modernizing Java Apps for Developers shows how to take an existing Java N-tier application and run it in containers using the Docker platform to modernize the architecture. The source code for each part of this series is available on GitHub and Continue reading
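The "add the container to the Compose file" step mentioned above can be sketched as a small Compose addition. The service name, image tag, port mapping, and network are illustrative, not the series' actual values:

```yaml
# Compose addition for the new JavaScript client container.
version: "3.3"
services:
  messaging-client:
    image: myregistry/messaging-client:latest   # built with a multi-stage Dockerfile
    ports:
      - "8090:80"
    networks:
      - app-net
networks:
  app-net:
```

The same Compose file can then be deployed as a stack to Docker EE alongside the existing application services.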
We are excited to announce that the Docker Registry HTTP API V2 specification will be adopted in the Open Container Initiative (OCI), the organization under the Linux Foundation that provides the standards that fuel the containerization industry. The Docker team is proud to see another aspect of our technology stack become a de-facto standard. As we’ve done with our image format, we are happy to formally share and collaborate with the container ecosystem as part of the OCI community. Our distribution protocol is the underpinning of all container registries on the market and is so robust that it is leveraged over a billion times every two weeks as container content is distributed across the globe.
Putting the protocol into perspective, part of the core functionality of Docker is the ability to push and pull images. From the first “Hello, World” moment, this concept is introduced to every user and is a large part of the Docker experience. While we normally sit back in our armchairs and marvel at this magical occurrence, the amount of design and consideration that has gone into that simple capability can easily be overlooked.
When Docker was first released, the team Continue reading

DockerCon is a hub for the IT industry, bringing together members from all parts of our growing ecosystem and global community. By actively promoting inclusivity, our goal is to make DockerCon a safe place for everyone to learn, belong and collaborate. With the support of Docker and our DockerCon scholarship sponsor, the Open Container Initiative (OCI), we are excited to announce the launch of this year’s DockerCon Diversity Scholarship Program to provide members of the Docker community, who are traditionally underrepresented, a financial scholarship to attend DockerCon US 2018. This year, we are increasing the number of scholarships we are granting to ensure attending DockerCon is an option for all.
Deadline to Apply:
Wednesday, April 25, 2018 at 5:00 PM PST
Selection Process:
A committee of Docker community members will review and select the scholarship recipients. Recipients will be notified by the week of May 7, 2018.
What’s Included:
Full Access DockerCon Conference Pass
Requirements:
Must be able to attend DockerCon US 2018
Must be 18 years old or older to apply
Learn more about the DockerCon Diversity Scholarship here.
Have questions or concerns? Reach us at [email protected]
#DockerCon US Diversity Scholarship is now open! Learn more and Continue reading