
As you may have heard, the Notary project has been invited to join the Cloud Native Computing Foundation (CNCF). Much like its real-world namesake, Notary is a platform for establishing trust over pieces of content.
In life, certain important events such as buying a house are facilitated by a trusted third party called a “notary.” When buying a house, this person is typically employed by the lender to verify your identity and serve as a witness to your signatures on the mortgage agreement. The notary carries a special stamp and will also sign the documents as an affirmation that a notary was present and verified all the required information relating to the borrowers.
In a similar manner, the Notary project, initially sponsored by Docker, is designed to provide high levels of trust over digital content using strong cryptographic signatures. In addition to ensuring the provenance of the software, it also guarantees that the content is not modified without the approval of the author anywhere in the supply chain. This then allows higher-level systems like Docker Enterprise Edition (EE) with Docker Content Trust (which uses Notary) to establish clear policy on the usage of content. For instance, a Continue reading
If you live (or will be) in Denver next week—specifically, on Wednesday, November 1—I’ll be joining the Denver Network Programmability User Group (NPUG) to talk about network programmability and automation, as well as my recent book on the topic with Jason Edelman and Matt Oswalt. We’d love to have you join us!
Here are the meeting details:
When: Wednesday, November 1, at 4:00 Mountain Time
Where: GTRI, 990 S Broadway, Suite 300, Denver CO (free parking in and around GTRI)
What: Me joining the NPUG to share some thoughts on network programmability
Why: Because there will be food and drinks, and because you love talking about network programmability and automation
Who: You!
As I mentioned, food and beverages will be provided for attendees, so please take a few moments to RSVP (so that we can plan how much food and drink to provide).
I’d love to see you there!
As you’ve probably already heard, Red Hat announced the release of the AWX project at AnsibleFest in San Francisco. AWX is the open source project behind Red Hat® Ansible® Tower, offering developers access to the latest features, and the opportunity to directly collaborate with the Ansible Tower engineering team.
AWX is built to run on top of the Ansible project, enhancing the already powerful automation engine. AWX adds a web-based user interface, job scheduling, inventory management, reporting, workflow automation, credential sharing, and tooling to enable delegation.
Even if you’re only managing a small infrastructure, here are 5 things you can do with AWX. And we promise, they’ll make your job as a system administrator a whole lot easier:
Central to AWX is the ability to create users and group them into teams. You can then assign access and rules to inventory, credentials, and playbooks at the individual or team level. This makes it possible to set up push-button access to complex automation, and to control who can use it and where they can run it.
For example, when developers need to stand up a new environment, they don’t need to add another task to your already overbooked Continue reading
As some readers may already know, this site has been running on a static site generator since late 2014/early 2015, when I migrated from WordPress to Jekyll on GitHub Pages. I’ve since migrated again, this time to Hugo on S3/CloudFront. Along the way, I’ve taken an interest in using make and a Makefile to help automate certain tasks at the CLI. In this post, I’ll share how I’m using a Makefile to help with publishing blog articles.
If you’re not familiar with make or its use of a Makefile, have a look at this article I wrote on using a Makefile with Markdown documents, then come back here.
In general, the process for publishing a blog post using Hugo and S3/CloudFront looks like this:
1. Write the new post in Markdown. (Posts live in the content/post directory.)
2. Generate the site using hugo.

Some of these steps Continue reading
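The steps above can be tied together with a Makefile. Here's a minimal, hypothetical sketch; the target names, S3 bucket, and CloudFront distribution ID are illustrative placeholders, not taken from this site's actual setup:

```makefile
# Hypothetical publishing Makefile -- bucket and distribution ID are placeholders.
BUCKET  = s3://example-blog-bucket
DIST_ID = EDFDVBD6EXAMPLE

.PHONY: build deploy invalidate

# Generate the site into the public/ directory.
build:
	hugo

# Sync the generated site to S3, removing files deleted locally.
deploy: build
	aws s3 sync public/ $(BUCKET) --delete

# Invalidate the CloudFront cache so the new content is served immediately.
invalidate:
	aws cloudfront create-invalidation --distribution-id $(DIST_ID) --paths "/*"
```

With something like this in place, `make deploy` rebuilds and publishes the site in a single step.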
For the past five and a half years, which is not quite an eternity in the IT business but is something akin to half a generation or so, IBM’s revenues have been declining, quarter in and quarter out. As has happened many, many times in its more than a century of existence, Big Blue, which was a peddler of meat slicers, time clocks, scales, and punch card tabulators early in its history, has had to constantly evolve and reimagine itself.
The transformation that IBM had to undergo in the late 1980s and early 1990s was a near …
The IBM Transformation Can Gather Steam Now was written by Timothy Prickett Morgan at The Next Platform.
DockerCon Europe 2017 is coming to an end, and we’d like to thank all of the speakers, sponsors, and attendees for contributing to the success of these three amazing days in Copenhagen. All the slides will soon be published on our SlideShare account, and all the breakout session video recordings will soon be available on the Docker website.
On Tuesday, we announced that Docker will be delivering seamless integration of Kubernetes into the Docker platform. Adding Kubernetes support as an orchestration option (alongside Swarm) in both Docker Enterprise Edition and Docker for Mac and Windows will help simplify and advance the management of Kubernetes for enterprise IT and deliver the advanced capabilities of the Docker platform to a broader set of applications.

To try the latest version of Docker Enterprise Edition or Docker for Mac and Windows with built-in Kubernetes, sign up for the upcoming beta. Also, check out the detailed blog posts to learn how we’re bringing Kubernetes to:
You can also watch the video recording and slides of the day 1 keynote here:
This is a liveblog of the session titled “Looking Under the Hood: containerD”, presented by Scott Coulton with Puppet (and also a Docker Captain). It’s part of the Edge track here at DockerCon EU 2017, where I’m attending and liveblogging as many sessions as I’m able.
Coulton starts out by explaining the session (it will focus a bit more on how to consume containerD in your own software projects), and provides a brief background on himself. Then he reviews the agenda, and dives right into the content.
Up first, Coulton provides a bit of explanation around what containerd is and does. He notes that there is a CLI tool for containerd (the ctr tool), and that containerd exposes a gRPC API listening on a local UNIX socket. Coulton points out, though, that ctr is currently an unstable tool (it is changing too quickly). Next, Coulton talks about how containerd provides support for the OCI Image Spec and the OCI Runtime Spec (of which runC is an implementation), image push/pull support, and management of namespaces.
Coulton moves into a demo showing off some of containerd’s functionality, using the ctr tool.
After the demo, Coulton talks about some Continue reading
This is a liveblog of the session titled “Building a Secure Supply Chain,” part of the Using Docker track at DockerCon EU 2017 in Copenhagen. The speakers are Ashwini Oruganti (@ashfall on Twitter) and Andy Clemenko (@aclemenko on Twitter), both from Docker. This session was recommended in the Docker EE deep dive (see the liveblog for that session) as a way to get more information on Docker Content Trust (image signing). The Docker EE deep dive presenter only briefly discussed Content Trust, so I thought I’d drop into this session to get more information.
Oruganti starts the session by reviewing some of the steps in the software lifecycle: planning, development, testing, packaging/distribution, support/maintenance. From a security perspective, there are some additional concepts as well: code origins, automated builds, application signing, security scanning, and promotion/deployment. Within Docker EE, there are three features that help with the security aspects of the lifecycle: signing, scanning, and promotion. (Note that scanning and promotion were also discussed in the Docker EE deep dive, which I liveblogged; link is in the first paragraph).
Before getting into the Docker EE features, Clemenko reminds attendees how not to do it: manually. This approach doesn’t Continue reading
This is a liveblog of the session titled “Docker EE Deep Dive,” part of the Docker Best Practices track here at DockerCon EU 2017 in Copenhagen, Denmark. The speaker is Patrick Devine, a Product Manager at Docker. I had also toyed with the idea of attending the Cilium presentation in the Black Belt track, but given that I attended a version of that talk in Austin in April (liveblog is here), I figured I’d better stretch my boundaries and dig deeper into Docker EE.
Devine starts with a bit of information on his background, then provides an overview of the two editions (Community and Enterprise) of Docker. (Recall again that Docker is the downstream product resulting from the open source Moby upstream project.) Focusing a bit more on Docker EE, Devine outlines some of the features of Docker EE: integrated orchestration, stable releases for 1 year with support and maintenance, security patches and hotfixes backported to all supported versions, and enterprise-class support.
So what components are found in Docker EE? It starts with the Docker Engine, which has the core container runtime, orchestration, networking, volumes, plugins, etc. On top of that is Universal Control Plane (UCP), which Continue reading
This is a liveblog of the day 2 keynote/general session here in Copenhagen, Denmark, at DockerCon EU 2017. Yesterday’s keynote (see the liveblog here) featured the hotly-anticipated Kubernetes announcement (I shared some thoughts here), so it will be interesting to see what Docker has in store for today’s general session.
At 9:02am, the lights go down and Scott Johnston, COO of Docker (@scottcjohnston on Twitter), takes the stage. Johnston provides a brief recap of yesterday’s activities, from the keynote to the breakout sessions to the party last night, then dives into content focusing on modernizing traditional applications through partnerships. (If two themes have emerged from this year’s DockerCon EU, they are “Docker is a platform” and “Modernize traditional applications.”) Johnston shares statistics showing that 50% of customers consider leveraging hybrid cloud a priority, and that increasing major release frequency is also a priority for enterprise IT organizations. According to Johnston, 79% of customers say that increasing software release velocity is a goal for their organizations. Continuing with the statistics, Johnston shows a very familiar set of numbers stating that 80% of IT spend is on maintenance (I say familiar because these numbers Continue reading
Today at DockerCon EU, Docker announced that the next version of Docker (and its upstream open source project, the Moby Project) will feature integration with Kubernetes (see my liveblog of the day 1 general session). Customers will be able to choose whether they leverage Swarm or Kubernetes for container orchestration. In this post, I’ll share a few thoughts on this move by Docker.
First off, you may find it useful to review some details of the announcement via Docker’s blog post.
Done reviewing the announcement? Here are some thoughts; some of them are mine, some of them are from others around the Internet.
This is a liveblog of a Black Belt track session at DockerCon EU in Copenhagen. The session is named “Container-Relevant Kernel Developments,” and the presenter is Tycho Andersen.
Andersen first presents a disclaimer that the presentation is mostly a brain dump, and that he’s not personally responsible for a lot of the work presented here. In fact, none of the work Andersen will talk about is yet merged upstream in the Linux kernel, and he doesn’t expect that it will be accepted upstream and become available to average users anytime soon.
The first technology Andersen talks about is IMA (Integrity Measurement Architecture), which prevents user space from even opening files if they have been tampered with or modified in some fashion that violates policy. IMA is also responsible for allowing the Linux kernel to take advantage of a system’s Trusted Platform Module (TPM).
Pertinent to containers, Andersen talks about work that’s happening within the kernel development community around namespacing IMA. There are a number of challenges here, not all of which have been addressed or resolved yet, and Andersen refers attendees to the Linux Kernel mailing list (LKML) for more information.
Next, Andersen talks about the Linux audit log. Continue reading
This is a liveblog of the DockerCon EU session titled “LinuxKit Deep Dive”. The speakers are Justin Cormack and Rolf Neugebauer, both with Docker, and this session is part of the “Black Belt” track here at DockerCon.
So what is LinuxKit? It’s a toolkit, part of the Moby Project, that is used for building secure, portable, and lean operating systems for containers. It uses the moby tooling to build system images. LinuxKit uses YAML files to describe the complete system; these files are consumed by moby to assemble the boot image and verify the signature. On top of that is containerd, which runs on-boot containers, service containers, and shutdown containers. Think of on-boot and shutdown containers as one-time containers that perform some task, either when the system is booting or shutting down (respectively).
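As a rough sketch of what such a YAML file looks like, here is a hypothetical minimal example; the section names follow LinuxKit's documented format, but the image tags are placeholders, not values from the session:

```yaml
# Hypothetical LinuxKit system definition; image tags are placeholders.
kernel:
  image: linuxkit/kernel:<tag>       # the Linux kernel to boot
  cmdline: "console=tty0"
init:
  - linuxkit/init:<tag>              # base init system images
onboot:
  - name: dhcpcd                     # one-time container run at boot
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty                      # long-running service container
    image: linuxkit/getty:<tag>
```

A file like this is then fed to the moby tooling to assemble the bootable image.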
LinuxKit was first announced and open sourced in April 2017 at DockerCon in Austin. Major additions since it was announced include:
After reviewing the changes Continue reading
This is a liveblog of the session titled “Rock Stars, Builders, and Janitors: You’re Doing it Wrong”. The speaker is Alice Goldfuss (@alicegoldfuss) from GitHub. This session is part of the “Transform” track at DockerCon; I’m attending it because I think that cultural and operational transformation is key for companies to successfully embrace new technologies like containers and fully maximize the benefits of these technologies. (There’s probably a blog post in that sentence.)
Goldfuss starts out by asking the audience some questions about what they’ve been doing for the last 3 months, and then informs the attendees that they are, in fact, part of the problem.
Goldfuss now digs into the meat of the presentation by covering some terminology. First, what is a rock star? They’re the idea person, the innovator. They’re curious, open-minded, iterating faster, and always looking for the new things and the new ideas. They’re important to our companies, but they do have some weaknesses. They get bored easily, they have no patience for maintenance, and they’re not used to thinking about end user experience. Thus, according to Goldfuss, you can’t have a team of only rock stars.
Next, Goldfuss talks about builders. Builders Continue reading
This is a liveblog of the day 1 keynote/general session at DockerCon EU 2017 in Copenhagen, Denmark. Prior to the start of the keynote, attendees are “entertained” by occasional clips of some Monty Python-esque production.
At 9:02, the lights go down and another clip appears, the first of several clips that depict life “without Docker” and then again “with Docker” (where everything is better, of course). It’s humorous and a good introduction to the general session.
Steve Singh, CEO of Docker, now takes the stage to kick off the general session. Singh thanks the attendees for their time, discusses the growth of the Docker community and the Docker ecosystem, welcomes new members of the community (including himself), and positions Docker less as a container company and more as a platform company. (Singh comes to Docker from SAP, following SAP’s acquisition of Concur.) Singh pontificates for a few moments about his background, the changes occurring in the industry, and the “center stage front-row” seat that Docker has to witness—and affect/shape—these changes.
Singh pivots after a few minutes to talk about Docker growth in terms of specific metrics (21 million Docker hosts, for example). This allows him to return to the Continue reading
Today we’re announcing that the Docker platform is integrating support for Kubernetes so that Docker customers and developers have the option to use both Kubernetes and Swarm to orchestrate container workloads. Register for beta access and check out the detailed blog posts to learn how we’re bringing Kubernetes to:
Docker is a platform that sits between apps and infrastructure. By building apps on Docker, developers and IT operations get freedom and flexibility. That’s because Docker runs everywhere that enterprises deploy apps: on-prem (including on IBM mainframes, enterprise Linux and Windows) and in the cloud. Once an application is containerized, it’s easy to re-build, re-deploy and move around, or even run in hybrid setups that straddle on-prem and cloud infrastructure.
The Docker platform is composed of many components, assembled in four layers:
At DockerCon Europe, we announced that Docker will be delivering seamless integration of Kubernetes into the Docker platform. Bringing Kubernetes to Docker Enterprise Edition (EE) will simplify and advance the management of Kubernetes for enterprise IT and deliver the advanced capabilities of Docker EE to a broader set of applications.
Docker EE is an enterprise-grade container platform that includes a private image registry, advanced security features and centralized management for the entire container lifecycle. By including Kubernetes for container orchestration, customers will have the ability to run both Swarm and Kubernetes in the same Docker EE cluster while still leveraging the same secure software supply chain for building and deploying applications.

Figure 1. Docker EE Architecture with Multiple Orchestrators
This is possible because Docker EE has a modular architecture that is designed to support multiple orchestrators. The Linux nodes are both Swarm- and Kubernetes-ready, and application teams can decide which orchestrator to use at app deployment time.
When creating a new Stack in Docker EE, you are given the choice of deploying it as Swarm Services or as Kubernetes Workloads:

Figure 2. Selectable modes at app deployment time
Upon deployment, the Docker EE dashboard has a “Shared Resources” area Continue reading
Today, as part of our effort to bring Kubernetes support to the Docker platform, we’re excited to announce that we will also add optional Kubernetes to Docker Community Edition for Mac and Windows. We’re demoing previews at DockerCon (stop by the Docker booth!) and will have a beta program ready at the end of 2017. Sign up to be notified when the beta is ready.
With Kubernetes support in Docker CE for Mac and Windows, Docker Inc. can provide customers an end-to-end suite of container-management software and services that spans from developer workstations, through test and CI/CD, to production on-prem or in the cloud.
Docker for Mac and Windows are the most popular way to configure a Docker dev environment and are used every day by hundreds of thousands of developers to build, test, and debug containerized apps. Docker for Mac and Windows are popular because they’re simple to install, stay up-to-date automatically, and are tightly integrated with macOS and Windows respectively.
The Kubernetes community has built solid solutions for installing limited Kubernetes development setups on developer workstations, including Minikube (itself based partly on the docker-machine project that predated Docker for Mac and Windows). Common to these solutions however, Continue reading
Welcome to Technology Short Take #88! Travel is keeping me pretty busy this fall (so much for things slowing down after VMworld EMEA), and this has made it a bit more difficult to stick to my self-imposed biweekly schedule for the Technology Short Takes (heck, I couldn’t even get this one published on Friday!). Sorry about that! Hopefully the irregular schedule is outweighed by the value found in the content I’ve collected for you.
Today we start releasing a new video series in Docker’s Modernize Traditional Apps (MTA) program, aimed at IT pros who manage, maintain, and deploy Java apps. The video series shows you how to take a Java EE 7 application written to run on Wildfly 3, move it to a Windows Docker container, and deploy it to a scalable, highly available environment in the cloud – without any changes to the app.
These are the first four of a five-part video series in Docker’s Modernize Traditional Apps (MTA) program, aimed at Java IT pros. The video series shows you how to move a Java EE app on JBoss Wildfly to a Docker container and deploy it to a scalable, highly available environment in the cloud – without any changes to the app.

Part 1 introduces the series, explaining what is meant by “traditional” apps and the problems they present. Traditional apps are built to run on a server, rather than on a modern application platform. They have common traits, like being complex to manage and difficult to deploy. A portfolio of traditional applications tends to under-utilize its infrastructure, and over-utilize the humans who manage it. Docker Enterprise Edition (EE) fixes that, giving Continue reading