
A common developer workflow when using frameworks like Symfony or React is to edit the source code using a Windows IDE while running the app itself in a Docker container. The source is shared between the host and the container with a command like the following:
$ docker run -v C:\Users\me:/code -p 8080:8080 my-symfony-app
This allows the developer to edit the source code, save the changes and immediately see the results in their browser. This is where file sharing performance becomes critical.
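For teams that prefer Compose, the same bind mount can be expressed declaratively. Here’s a minimal sketch, assuming the hypothetical my-symfony-app image from the command above:

# docker-compose.yml - mirrors the docker run command above (illustrative)
version: '3.7'
services:
  web:
    image: my-symfony-app
    ports:
      - 8080:8080
    volumes:
      # Long-form bind mount; the Windows host path is shared into the container
      - type: bind
        source: C:\Users\me
        target: /code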
The latest Edge release of Docker Desktop for Windows, 2.1.7.0, has a completely new file sharing implementation that uses Filesystem in Userspace (FUSE) instead of Samba.
This improvement is available today in the Edge 2.1.7.0 release and will roll out to the stable Continue reading
Welcome to Technology Short Take #121! This may possibly be the last Tech Short Take of 2019 (not sure if I’ll be able to squeeze in another one), so here’s hoping that you find something useful, helpful, or informative in the links that I’ve collected. Enjoy some light reading over your festive holiday season!

Docker Application eases the packaging and the distribution of a Docker Compose application. The TICK stack – Telegraf, InfluxDB, Chronograf, and Kapacitor – is a good candidate to illustrate how this actually works. In this blog, I’ll show you how to deploy the TICK stack as a Docker App.
This application stack is mainly used to handle time-series data. That makes it a great choice for IoT projects, where devices send data (temperature, weather indicators, water level, etc.) on a regular basis.
Its name comes from its components:
– Telegraf
– InfluxDB
– Chronograf
– Kapacitor
The schema below illustrates the overall architecture, and outlines the role of each component.
Data are sent to Telegraf and stored in an InfluxDB database. Chronograf can query the database through a web interface. Kapacitor can process, monitor, and raise alerts based on the data.
The tick.yml file below defines the four components of the stack and the way they communicate with each other:
version: '3.7'
services:
  telegraf:
    image: telegraf
    configs:
      - source: telegraf-conf
        target: /etc/telegraf/telegraf.conf
    ports:
      - 8186:8186
  influxdb:
    image: influxdb
  chronograf: Continue reading
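To package this Compose file as a Docker App, it is bundled into a .dockerapp directory together with a metadata file (and optional parameters). A minimal sketch of that metadata, with illustrative values:

# tick.dockerapp/metadata.yml - illustrative values
name: tick
version: 0.1.0
description: TICK stack packaged as a Docker App
maintainers:
  - name: me
    email: me@example.com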
On November 25, 2019, AWS announced the release of AWS IoT Greengrass 1.10 allowing developers to package applications into Docker container images and deploy these to edge devices. Deploying and running Docker containers on AWS IoT Greengrass devices enables application portability across development environments, edge locations, and the cloud. Docker images can easily be stored in Docker Hub, private container registries, or with Amazon Elastic Container Registry (Amazon ECR).
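Greengrass 1.10’s Docker application deployment connector works from a Compose file. As a rough sketch of what such a file might look like (the service name and ECR image URI are illustrative):

# docker-compose.yml - illustrative Compose file for a Greengrass edge deployment
version: '3.7'
services:
  sensor-app:
    # Hypothetical image; it could equally be pulled from Docker Hub or a private registry
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/sensor-app:latest
    restart: always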

Docker is committed to working with cloud service provider partners such as AWS who offer Docker-compatible on-demand container infrastructure services for both individual containers and multi-container apps. To make it even easier for developers to benefit from the speed of these services without giving up app portability and infrastructure choice, Docker Hub will seamlessly integrate developers’ “build” and “share” workflows with the cloud “run” services of their choosing.
“Docker and AWS are collaborating on our shared vision of how workloads can be more easily deployed to edge devices. Docker’s industry-leading container technology including Docker Desktop and Docker Hub are integral to advancing developer workflows for modern apps and IoT solutions. Our customers can now deploy and run Docker containers seamlessly on AWS IoT Greengrass devices, enabling development Continue reading

In a previous blog I wrote about Getting Started with Automation Analytics, but now I want to expand on what data is collected and how to gain access to that data. I highly recommend reading the previous blog if you are new to Red Hat Ansible Automation Platform, Ansible Tower concepts, and our SaaS offerings. This is important to many customers because they have their own security concerns about what data leaves their premises, as well as obligations to their own customers and stakeholders to ensure that the data sent will not be compromised in any way.
unified_job_template_table.csv
Log in to the Ansible Tower host with Continue reading

In honor of Black Friday, America’s favorite shopping holiday, we’ve rounded up the best deals on Docker + Kubernetes learning materials from Docker Captains. Docker Captain is a distinction that Docker awards to select members of the community who are both experts in their field and committed to sharing their Docker knowledge with others.

Learn Docker in a Month of Lunches, Elton Stoneman (Save 40% with the code webdoc40).

Docker in Action, Second Edition (2019), Jeff Nickoloff (Save 50% with the code tsdocker).
Manning Publications is also offering half off when you spend $50 this week.

Nigel Poulton’s The Kubernetes Book and Docker Deep Dive ebook bundle is $7 (for both!) through December 1st with this link.

All of Bret Fisher’s courses are $9.99 through Friday, November 29th. Choose from Docker Mastery, Kubernetes Mastery, Swarm Mastery, and Docker for Node.js.

Elton Stoneman has a wealth of courses, from Handling Data and Stateful Applications in Docker to Modernizing .NET Framework Apps with Docker on Pluralsight. Get 40% off an annual or premium subscription through Friday, November 29th.

Nick Janetakis’ Dive into Docker and Build Web Applications with Flask and Docker Continue reading

As part of the release of Red Hat Ansible Automation Platform, we’re happy to announce the release of Red Hat Ansible Tower 3.6. Ansible Tower is the scalable execution framework of the Ansible Automation Platform, providing an API around automation that you can use to scale automation across your enterprise and integrate automation into your tools and processes.
Not all automation processes can proceed entirely without human input. In Ansible Tower 3.6, we’ve added pause and approval steps to Ansible Tower workflows to help enable more flexible automation. At any step in a workflow you can pause and wait for an approval from an administrator, or anyone else you delegate approval permissions to. Need to verify that your deployment was fully successful before updating the external DNS entries? Need to ensure that your developers won’t spin up 300 extra cloud servers when provisioning new dev environments? Now you can do that, integrated directly into Ansible Tower workflows.
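Approvals can also be granted programmatically through Tower’s REST API. The playbook below is a minimal sketch, assuming a hypothetical Tower URL and approval ID, and using the workflow approvals endpoint added with this feature:

- name: Approve a pending workflow approval node (sketch)
  hosts: localhost
  gather_facts: no
  vars:
    tower_host: https://tower.example.com  # hypothetical Tower URL
    approval_id: 42                        # hypothetical pending approval ID
  tasks:
    - name: POST to the workflow approval endpoint
      uri:
        url: "{{ tower_host }}/api/v2/workflow_approvals/{{ approval_id }}/approve/"
        method: POST
        user: admin
        password: "{{ lookup('env', 'TOWER_PASSWORD') }}"
        force_basic_auth: yes
        status_code: 204  # assumed success code; adjust if the API responds differently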
Notifications were introduced in Ansible Tower 3.0, allowing the status of any job to be reported out via email, Slack, IRC, and more. In Ansible Tower 3.6, we’ve made the content Continue reading
Welcome to Technology Short Take #120! Wow…hard to believe it’s been almost two months since the last Tech Short Take. Sorry about that! Hopefully something I share here in this Tech Short Take is useful or helpful to readers. On to the content!
Using mitmproxy to inspect kubectl traffic: I’m now inspired to go do this myself and see what knowledge I can gain.
I don’t have anything to share this time around, but I’ll stay alert for content to include in future Tech Short Takes.
firewalld as found in CentOS 8 may prove useful to some readers. I’ve been messing around with firewalld ever since Continue reading

Bryan Liles kicked off the day 3 morning keynotes with a discussion of “finding Kubernetes’ Rails moment”—basically focusing on how Kubernetes enables folks to work on/solve higher-level problems. Key phrase from Bryan’s discussion (which, as usual, incorporated the humor I love to see from Bryan): “Kubernetes isn’t the destination. Kubernetes is the vehicle that takes us to the destination.” Ian Coldwater delivered a talk on looking at Kubernetes from the attacker’s point of view, and using that perspective to secure and harden Kubernetes. Two folks from Walmart also discussed their use case, which involves running Kubernetes clusters in retail locations to support a point-of-sale (POS) application at the check-out register. Finally, there was a discussion of chaos engineering from folks at Gremlin and Target.
Due to booth duty and my flight home, I wasn’t able to attend any breakout sessions today.
If I’m completely honest, I didn’t get as much out of the event as I’d hoped. I’m not yet sure if that is because I didn’t get to attend as many sessions as I’d hoped/planned (due to problems with sessions being moved/rescheduled or whatever), if my choice of sessions was just poor, Continue reading
This morning’s keynotes were, in my opinion, better than yesterday’s morning keynotes. (I missed the closing keynotes yesterday due to customer meetings and calls.) Only a couple of keynotes really stuck out. Vicki Cheung provided some useful suggestions for tools that are helping to “close the gap” on user experience, and there was an interesting (but a bit overly long) session with a live demo on running a 5G mobile core on Kubernetes.
Due to some power outages at the conference venue resulting from rain in San Diego, the Prometheus session I had planned to attend got moved to a different time. As a result, I sat in this session by Lyft instead. The topic was about running large-scale stateful workloads, but the content was really about a custom solution Lyft built (called Flyte) that leveraged CRDs and custom controllers to help manage stateful workloads. While it’s awesome that companies like Lyft can extend Kubernetes to address their specific needs, this session isn’t helpful to more “ordinary” companies that are trying to figure out how to run their stateful workloads on Kubernetes. I’d really like the CNCF and the conference committee to try Continue reading

Automation is an essential part of modern IT. In this blog I focus on Ansible credential plugin integration with HashiCorp Vault, an API-addressable secrets engine that makes life easier for anyone wishing to handle secrets management and automation better. In order to automate effectively, modern systems require multiple secrets: certificates, database credentials, keys for external services, operating systems, networking. Understanding who is accessing secret credentials, and when, is difficult and often platform-specific; managing key rotation, secure storage, and detailed audit logging across a heterogeneous toolset is almost impossible. Red Hat Ansible Tower solves many of these issues on its own, but its integration with enterprise secret management solutions means it can utilize secrets on demand without human interaction.
In terms of secrets management, I will demonstrate how some of the risks associated with an automation service account can be mitigated by replacing password authentication with SSH certificate-based authentication. In the context of automation, a service account is used to provide authorised access into endpoints from a central location. Security best practices state that shared accounts can pose a risk. While Red Hat Ansible Tower has the ability to obfuscate passwords, private keys, etc. Continue reading
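As a concrete illustration of consuming a secret on demand, the playbook below is a minimal sketch using the hashi_vault lookup plugin (it requires the hvac Python library; the Vault URL and secret path are illustrative, and the Vault token is assumed to come from the VAULT_TOKEN environment variable):

- name: Read a secret from HashiCorp Vault at runtime (sketch)
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Fetch a KV v2 secret from a hypothetical path
      debug:
        msg: "{{ lookup('hashi_vault', 'secret=secret/data/automation:data url=https://vault.example.com:8200') }}"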
This week I’m in San Diego for KubeCon + CloudNativeCon. Instead of liveblogging each session individually, I thought I might instead attempt a “daily summary” post that captures highlights from all the sessions each day. Here’s my recap of day 1 at KubeCon + CloudNativeCon.
KubeCon + CloudNativeCon doesn’t have “one” keynote; it uses a series of shorter keynotes by various speakers. This has advantages and disadvantages; one key advantage is that there is more variety, and the attendees are more likely to stay engaged. I particularly enjoyed Bryan Liles’ CNCF project updates; I like Bryan’s sense of humor, and getting updates on some of the CNCF projects is always useful. As for some of the other keynotes, those that were thinly-disguised vendor sales pitches were generally pretty poor.
I was running late for the start of this session due to booth duty, and I guess the stuff I needed most was presented in that portion I missed. Most of what I saw was about Netflix Titus, and how the Netflix team ported Titus from Mesos to Virtual Kubelet. However, that information was so specific to Netflix’s particular use of Virtual Kubelet that it Continue reading

Ansible is an ideal tool for managing many different types of Kubernetes resources. Four key features really help; together, they enable repeatable deployment and management of applications and multiple Kubernetes clusters in a single role for every resource.
Since the last blog post on Kubernetes features for Ansible Engine 2.6, there have been a number of improvements to Ansible's Kubernetes capabilities. Let’s go over some of the improvements to the modules and libraries and other new features that have been added in the last year, and also highlight what is in the works.
The k8s module now accepts an apply parameter, which approximates the behavior of kubectl apply. When apply is set to True, the k8s module will store the last applied configuration in an annotation on the object. When the object already exists, instead of just sending the new manifest to the API server, the module will now do a 3-way merge, combining the existing cluster state, the Continue reading
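Here is a minimal sketch of what that looks like in a playbook; the Deployment name and image are illustrative:

- name: Apply a manifest with the k8s module (sketch)
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Create or 3-way-merge an nginx Deployment
      k8s:
        state: present
        apply: yes
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: example-nginx
            namespace: default
          spec:
            replicas: 2
            selector:
              matchLabels:
                app: example-nginx
            template:
              metadata:
                labels:
                  app: example-nginx
              spec:
                containers:
                  - name: nginx
                    image: nginx:1.17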

With the upcoming release of the Red Hat Ansible Automation Platform there are now included Software as a Service (SaaS) offerings, one of which is Automation Analytics. This application provides a visual dashboard, health notifications and organization statistics for your Ansible Automation. Automation Analytics works across multiple Ansible Tower clusters allowing holistic analytics across your entire infrastructure.
When talking to the community and our customers, a question that often comes up is: “How do we measure success?” Automation Analytics provides key data on Job Template usage, Ansible Module usage, organizational comparisons across your enterprise, and much more. This data can be used to assess usage, success criteria, and even chargebacks between different groups. This blog post will outline how to get started with Automation Analytics and start collecting data right away.
There are some terms used in this blog post that may be unfamiliar Continue reading

In the past, Ansible content such as roles, modules, and plugins was usually consumed in two ways: the modules were part of the Ansible package, and roles could be found in Galaxy. However, as time went on, this method of content distribution ran into challenges of scale for both contributors and consumers of Ansible content. Dylan described this in a blog post worth reading.
Recent releases of Ansible started a journey towards better content management. In previous Ansible releases, each and every module was strictly tied to the release schedule of Ansible and community, customer, and partner feedback demonstrated that the release schedule of content needed to evolve. Ansible content collections allow our Ansible contributors to create specialized content without being tied to a specific release cycle of the Ansible product, making it easier to plan and deliver. For Ansible newcomers, the collections come “pre-packaged” with modules and playbooks around common use cases like networking and security, making it easier to get off the ground with Ansible. If you want to learn more about Ansible content collections, check out our series about collections!
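As a quick illustration, collections can be declared in a requirements file and installed with ansible-galaxy collection install -r requirements.yml; the collection name and version below are illustrative:

# requirements.yml - illustrative collection pins
collections:
  - name: community.general
    version: 0.2.0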
The introduction of collections to the Ansible ecosystem solves a number of challenges for access to Continue reading

With the release of Red Hat Ansible Automation Platform, Ansible Content Collections are now fully supported. Ansible Content Collections, or collections, represent the new standard of distributing, maintaining and consuming automation. By combining multiple types of Ansible content (playbooks, roles, modules, and plugins), flexibility and scalability are greatly improved.
Everyone!
Traditionally, module creators have had to wait for their modules to be marked for inclusion in an upcoming Ansible release, or had to add them to roles, which made consumption and management more difficult. By shipping modules within Ansible Content Collections along with pertinent roles and documentation, and removing that barrier to entry, creators are now able to move as fast as the demand for their creations. For a public cloud provider, this means new functionality for an existing service, or a new service altogether, can be rolled out along with the ability to automate the new functionality.
For the automation consumer, this means that fresh content is continuously made available for consumption. Managing content in this manner also becomes easier as modules, plugins, roles, and docs are packaged and tagged with a collection version. Modules can be updated, renamed, improved upon; roles can be updated to Continue reading
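Once installed, versioned collection content is referenced in playbooks by its fully qualified collection name (FQCN); a minimal sketch, with an illustrative collection and module:

- name: Use content from an installed collection (sketch)
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Call a module by its fully qualified collection name
      community.general.ini_file:  # illustrative FQCN
        path: /tmp/example.ini
        section: app
        option: version
        value: "1.0"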

Today we start the next chapter in the Docker story, one that’s focused on developers. That we have the opportunity to write this next chapter is thanks to you, our community, for without you we wouldn’t be here. And while our focus on developers builds on recent history, it’s a focus also grounded in Docker’s beginning.
When Solomon Hykes, Docker’s founder, unveiled the Docker project in 2013, he succinctly stated the problem Docker aimed to solve: “for a developer, shipping code to the server is hard.” To address this, Docker abstracted away the OS kernel’s complex container primitives, provided a developer-friendly, CLI-based workflow, and defined an immutable, portable image format. The result transformed how developers work, making it much easier to build, ship, and run their apps on any server. So while container primitives had existed for decades, Docker democratized them and made them as easy to use as
docker run hello-world
The rest is history. Over the last six years, Docker containerization catalyzed the growth of microservices-based applications, enabled development teams to ship apps many times faster and accelerated the migration of apps from the data center to the cloud. Far from a Docker-only effort, a Continue reading
A topic that’s been in the back of my mind since writing the Cluster API introduction post is how someone could use kustomize to modify the Cluster API manifests. Fortunately, this is reasonably straightforward. It doesn’t require any “hacks” like those needed to use kustomize with kubeadm configuration files, but similar to modifying kubeadm configuration files you’ll generally need to use the patching functionality of kustomize when working with Cluster API manifests. In this post, I’d like to take a fairly detailed look at how someone might go about using kustomize with Cluster API.
By the way, readers who are unfamiliar with kustomize should probably read this introductory post first, and then read the post on using kustomize with kubeadm configuration files. I suggest reading the latter post because it provides an overview of how to use kustomize to patch a specific portion of a manifest, and you’ll use that functionality again when modifying Cluster API manifests.
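Before diving into the scenario, here is a minimal sketch of the pattern: a kustomization that layers a patch over a base Cluster API Cluster manifest. The file names, cluster name, and CIDR value are all illustrative:

# kustomization.yaml - illustrative file names
resources:
  - cluster.yaml
patchesStrategicMerge:
  - cluster-patch.yaml

# cluster-patch.yaml - overrides only the pod network CIDR of the base Cluster
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: workload-cluster-1
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 172.16.0.0/16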
For this post, I’m going to build out a fictional use case/scenario for the use of kustomize and Cluster API. Here are the key points to this fictional use case:
On Veterans Day, and every day, we give thanks to our veterans. We are fortunate to have Brent Salisbury, Siobhan Casey, and Johnny Gonzalez, as Docker colleagues who were in the United States Marine Corps Reserve, the United States Army Reserve, and the United States Marine Corps. Thank you all for your service, hard work, and dedication. As a thank you for their service, we’re profiling them on our blog.

Software Alliance Engineer.
4.5 years.
Data Networking has been my passion since college. Working at Docker has afforded me the opportunity to help usher in a new software paradigm in what can be achieved in host networking and security versus the traditional proprietary hardware models of the past.

It may sound cliché, but find your passion. Everyone in technology is Continue reading