DockerCon LIVE 2020 is about to kick off and there are over 64,000 community members, users and customers registered! Although we miss getting together in person, we’re excited to be able to bring even more people together to learn and share how Docker helps dev teams build great apps. Like DockerCons past, there is so much great content on the agenda for you to learn from and expand your expertise around containers and applications.
We’ve been very busy here at Docker. A couple of months ago, we outlined our refocused, developer-focused strategy. Since then, we’ve made great progress executing against it and remain focused on bringing simplicity to the app-building experience, embracing the ecosystem, and helping developers and developer teams bring code to cloud faster and easier than ever before. A few examples:
Red Hat Ansible Automation Platform introduces the automation services catalog, a new hosted service that lets Red Hat Ansible customers extend their automation in a controlled way to the various end users who need it. This is a deep dive into the capabilities and features of the offering.
The automation services catalog is designed to be a familiar experience, providing an easy and intuitive user interface for ordering products (automation resources).
Products to Order
The idea is that those using the automation services catalog may not know that what they are ordering is actually Ansible Automation. For example, a product could be a business function, like ordering a new OpenShift project or onboarding a user to a new platform.
Ordering a product will present the user with options to facilitate the order. This could be provisioning the datacenter or applying permissions for a Kubernetes project. Upon submitting the order, the user can see the progress in their order queue. Users can search for past orders and see those currently in progress indicated by statuses including: Order, Failed, Approval Pending and Completed. Orders that are pending approval can be compared with ordering a product from a website and seeing Continue reading
Do you remember the first time you used Docker? I do. It was about six years ago and like many folks at the time it looked like this:
docker run -it redis
I was not using Redis at the time but it seemed like a complicated enough piece of software to put this new technology through its paces. A quick Docker image pull and it was up and running. It seemed like magic. Shortly after that first Docker command I found my way to Docker Compose. At this point I knew how to run Redis and the docs had an example Python Flask application. How hard could it be to put the two together?
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis"
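For the web service’s build: . line to work, a Dockerfile is needed alongside the compose file. A minimal sketch (the base image, file names and entry point here are assumptions, not taken from the original docs) might look like:

```dockerfile
# Hypothetical Dockerfile for the "web" service ("build: ." above)
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

With docker-compose up, Compose builds this image and starts both containers on a shared network, so the Flask app can reach Redis simply by the hostname redis.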
I understood immediately how Docker could help me shorten my developer “commute.” All the time I spent doing something else just to get to the work I wanted to be doing. It was awesome!
As time passed, unfortunately my commute started to get longer again. Maybe I needed to collaborate with a colleague or get more resources than I had locally. Ok, I can run Docker in the cloud, let me see how Continue reading
We’re excited to announce the release of Ansible Tower 3.7, part of the Red Hat Ansible Automation Platform. Ansible Tower is the scalable execution framework of the Ansible Automation Platform, providing a REST API and UI framework that allows users to scale automation across their enterprise and integrate it into their processes and tools.
The focus of the Ansible Tower 3.7 release is on scalability of automation and improving the experience of our users.
These days automation often needs to work at large scale - both in terms of infrastructure and in terms of how many jobs are executed in parallel. With Ansible Tower 3.7 we have put in extra effort to handle job dependencies in a way that helps ensure your jobs aren’t blocked. By allowing project updates and inventory updates to happen while other jobs are running, we’ve eliminated many of the bottlenecks in job processing. This lets jobs proceed faster, without the need to wait for each other.
Scale is not only about the IT technology itself, but also about users. As our customers Continue reading
The Red Hat Ansible Automation Platform is continually offering enhancements through its hosted services on cloud.redhat.com. At Red Hat Summit 2020 the new automation services catalog took the spotlight, which provides lifecycle management, provisioning, retirement and cataloging of automation resources for your business. However, I also wanted to talk about the additional new enhancements coming to Automation Analytics! Specifically, there are two big things I want to talk about:
If you are unfamiliar with Automation Analytics, it is included as part of a Red Hat Ansible Automation Platform subscription and allows customers to analyze, aggregate, and report on data for their Red Hat Ansible Automation Platform deployments. Check out the previous blog I wrote about Getting Started with Automation Analytics, or if you have concerns about what type of data is being shared with Red Hat, check out my blog Automation Analytics: Part 2 - Looking at Data Collection.
I am super excited about this new feature of Automation Analytics. A lot of customers I get to meet with are trying to figure out how to Continue reading
Last year, we made some significant changes to Red Hat Ansible Automation, including what is offered with it and alongside it, to bring you the first version of Red Hat Ansible Automation Platform. That change allowed users to harness the power of Ansible automation under one roof, one subscription - one platform.
Today, we are pleased to announce updates to Red Hat Ansible Automation Platform, the latest version of our enterprise-grade solution for building and operating automation at scale. The next time you log in to cloud.redhat.com, you can start utilizing the powerful new tools at your disposal. We’ve put in place the automation services catalog, a venue for developers and business users to manage, provision and retire resources. Customers told us this was a necessity and we agreed, with the automation services catalog now giving you a much deeper insight into how automation is improving efficiencies. It also gives you visibility into redundant processes that may be costing you time and resources that you may want to sunset. Accentuate what is working and eliminate what is not. More information about automation services catalog can be found here.
Another enhancement we rolled out with the Continue reading
We are really excited that Docker Desktop was featured with WSL 2 in a breakout session titled “The Journey to One .NET” by Scott Hanselman at Microsoft Build. Earlier, in his keynote, we learned about the great new enhancements for GPU support in WSL 2, and we want to hear from our community about your interest in adding this functionality to Docker Desktop. If you are eager to see GPU support come to Docker Desktop, please let us know by voting up our roadmap item, and feel free to raise any new requests here as well.
With this announcement, the imminent launch of the Windows 10 2004 release, and Docker Desktop v2.3.0.2 bringing WSL 2 support to GA, we thought this would be a good time to reflect on how we got to where we are today with WSL 2.
Casting our minds back to 2019 (a very different time!), we first discussed WSL 2 with Microsoft in April. We were excited to get started and wanted to find a way to get a build as soon as possible.
It turned out the easiest way to do this was to collect a laptop Continue reading
We are really excited that Docker and Snyk are now partnering together to engineer container security scanning deeply into Docker Desktop and Docker Hub. Image vulnerability scanning has been one of your most requested items on our public roadmap.
Modern software uses a lot of third-party open source libraries; indeed, this is one of the things that has really raised productivity in coding, as we can reuse work to support new features in our products and save time writing implementations of APIs, protocols and algorithms. But this comes with the downside of having to work out whether there are security vulnerabilities in the code that you are using. You have all told us that scanning is one of the most important roadmap issues for you.
Recall the famously huge data breach caused by the use of an unpatched version of the Apache Struts library, due to CVE-2017-5638. The CVE was issued in March 2017, and according to the official statement, while the patch should have been applied within 48 hours, it was not, and during May 2017 the websites were hacked, with the attackers having access until late July. This is everyone’s nightmare now. How can we help Continue reading
With just two weeks to go until DockerCon LIVE goes live, we are humbled by the tremendous response from almost 50,000 Docker developers and community members, from beginner to expert, who have registered for the event.
DockerCon LIVE would not be complete without our ecosystem of partners who contribute to, and shape, the future of software development. They will be showcasing their products and solutions, and sharing the best practices they have accumulated in working with the best developers and organizations across the globe.
We are pleased to announce the agenda for our Container Ecosystem Track with sessions built just for devs. In addition to actionable takeaways, their sessions will feature interactive, live Q&A, and so much more. Check out the incredible lineup:
Access Logging Made Easy With Envoy and Fluent Bit – Carmen Puccio, Principal Solutions Architect | AWS
Docker Desktop + WSL 2 Integration Deep Dive – Simon Ferquel, Senior Software Developer | Docker | Microsoft
Experience Report: Running a Distributed System Across Kubernetes Clusters – Chris Seto, Software Engineer | Cockroach Labs
Securing Your Containerized Applications with NGINX – Kevin Jones, Senior Product Manager | NGINX
You Want To Kubernetes? You MUST Know Docker! – Angel Rivera, Continue reading
The world is currently a very different place than it was only a few months ago, and we have come up with some ideas on how we can help our community deal with this new reality. The Ansible team has started a “Here to Help” webinar series where other Ansible engineers and I spend time with smaller groups of people to try to help them with technical challenges: https://www.ansible.com/here-to-help-webinar-series. The goal of these webinars is strictly to help! Regardless of whether folks are only using open source technologies and not Red Hat products, we want to use this time to help them solve automation challenges, and to brainstorm use cases that can help others.
Another idea we recently implemented is integrating IBM’s World Community Grid into our workshops. World Community Grid enables anyone with a Linux, Windows or Mac computer (or an Android smartphone for some projects) to donate their unused computing power to advance scientific research on topics related to health and sustainability. In fact, one of their projects is specifically going to help combat COVID-19. This blog post will cover what our workshops are and how we can use idle CPU time to help Continue reading
The Dynatrace software intelligence platform automates the monitoring lifecycle. The OneAgent automatically discovers and instruments your applications, processes, containers, and log files. Smartscape topology provides real-time dependency mapping without any configuration. The Davis AI continuously analyzes metrics, traces, logs, dependencies, and more to automatically detect problems and determine the root cause. This automation helps enable organizations to monitor their IT portfolio more quickly and easily - without the headaches that can occur from manual configuration required by traditional monitoring tools.
Dynatrace is designed to work for any environment, but it’s generic. How can we automate the personalization of Dynatrace and enable monitoring as a self-service (MaaSS) with Red Hat Ansible Automation Platform? With the power of automation provided by Red Hat Ansible Automation Platform and the Dynatrace API, we can automate the onboarding of applications into Dynatrace in a way that’s tailored for application stakeholders.
Dynatrace automation begins when the OneAgent is deployed on your hosts. The rollout of the OneAgent can be automated on hosts that are managed by Red Hat Ansible Automation Platform. A playbook will download and execute the OneAgent installer on your Linux hosts (via SSH) and your Windows Continue reading
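As a sketch of what such a rollout playbook might contain for Linux hosts (the download URL pattern, variable names and installer flag below are assumptions; check the Dynatrace deployment API documentation for the exact endpoint for your environment):

```yaml
# Hypothetical playbook sketch: roll out the Dynatrace OneAgent to Linux hosts.
# The environment URL, API token variable, and installer option are assumptions.
- name: Deploy Dynatrace OneAgent
  hosts: linux_servers
  become: true
  tasks:
    - name: Download the OneAgent installer
      get_url:
        url: "https://{{ dynatrace_env }}.live.dynatrace.com/api/v1/deployment/installer/agent/unix/default/latest?Api-Token={{ dynatrace_api_token }}"
        dest: /tmp/Dynatrace-OneAgent.sh
        mode: '0755'

    - name: Run the installer
      command: /bin/sh /tmp/Dynatrace-OneAgent.sh --set-app-log-content-access=true
```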
Back in March, Justin Graham, our VP of Product, wrote about how partnering with the ecosystem is a key part of Docker’s strategy to help developers and development teams get from source code to public cloud runtimes in the easiest, most efficient and cloud-agnostic way. This post will take a brief look at some of the ways that Docker’s approach to partnering has evolved to support this broader refocused company strategy.
First, to deliver the best experience for developers Docker needs much more seamless integration with Cloud Service Providers (CSPs). Developers are increasingly looking to cloud runtimes for their applications as evidenced by the tremendous growth that the cloud container services have seen. We want to deliver the best developer experience moving forward from local desktop to cloud, and doing that includes tight integration with any and all clouds for cloud-native development. As a first step, we’ve already announced that we are working with AWS, Microsoft and others in the open source community to extend the Compose Specification to more flexibly support cloud-native platforms. You will see us continue to progress our activity in this direction.
The second piece of Docker’s partnership strategy is offering best in class Continue reading
Each year, AnsibleFest is one of our favorite events because it brings together our customers, partners, community members and Red Hatters to talk about the open source innovations and best practices that are enabling the future of enterprise technology and automation.
Because the safety of our attendees is our first priority, we have decided to make AnsibleFest 2020 a virtual experience. We are excited to connect with everyone virtually, and look forward to expanding our conversation across the globe. By changing our event platform, we hope to use this opportunity to collaborate, connect, and chat with more automation fans than ever before. It is exciting to think about how many more people will be able to join in on the automation conversation.
The AnsibleFest Virtual Experience will be a free, immersive multi-day event the week of October 12th, 2020, that will deliver timely and useful customer keynotes, breakout sessions, direct access to Ansible experts, and more. You will want to sign up to stay connected and up-to-date on all things AnsibleFest on the AnsibleFest page.
Call for proposals is open
We are still working through the details for the virtual event, but are very excited to announce that Continue reading
Managing a software-defined networking (SDN) solution in Ansible can be tricky. In most use cases, Ansible communicates with each managed node individually. However, in a SDN scenario, Ansible is most likely managing policy on a controller appliance, which ultimately may make changes to thousands of network endpoints behind it.
But what about these endpoints behind that controller abstraction? Wouldn't it be best if Ansible Tower had visibility to every node, in addition to the controller? Not to run playbooks directly against those nodes, but for the following reasons:
I wrote an Ansible inventory plugin that solves these issues Continue reading
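The core idea behind such a plugin can be sketched without Ansible itself: query the controller for its endpoints and emit the JSON shape Ansible expects from a dynamic inventory. The controller response format below is invented for illustration:

```python
import json


def build_inventory(controller_endpoints):
    """Convert a list of endpoint records (as a hypothetical SDN
    controller might return them) into Ansible dynamic-inventory JSON."""
    inventory = {"sdn_endpoints": {"hosts": []}, "_meta": {"hostvars": {}}}
    for ep in controller_endpoints:
        name = ep["name"]
        inventory["sdn_endpoints"]["hosts"].append(name)
        # Endpoints are not managed directly; record them for visibility only.
        inventory["_meta"]["hostvars"][name] = {
            "ansible_host": ep["ip"],
            "managed_by_controller": True,
        }
    return inventory


if __name__ == "__main__":
    endpoints = [{"name": "leaf1", "ip": "10.0.0.1"},
                 {"name": "leaf2", "ip": "10.0.0.2"}]
    print(json.dumps(build_inventory(endpoints), indent=2))
```

A real inventory plugin wraps this logic in Ansible's plugin API, but the visibility benefit comes from exactly this mapping: one controller query yields every endpoint as an inventory host.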
This is a guest post from Docker Captain Bret Fisher, a long-time DevOps sysadmin and speaker who teaches container skills with his popular courses Docker Mastery, Kubernetes Mastery, Docker for Node.js, and Swarm Mastery, as well as weekly YouTube Live shows. Bret also consults with companies adopting Docker. Join Bret and other Docker Captains at DockerCon LIVE 2020 on May 28th, where they’ll be live all day hanging out, answering questions and having fun.
When Docker announced in December that it was continuing its DockerCon tradition, albeit virtually, I was super excited and disappointed at the same time. It may sound cliché but truly, my favorite part of attending conferences is seeing old friends and fellow Captains, meeting new people, making new friends, and seeing my students in real life.
Can a virtual event live up to its in-person version? My friend Phil Estes was honest about his experience on Twitter and I agree… it’s not the same. Online events shouldn’t be one-way information dissemination. As attendees, we should be able to *do* something, not just watch.
Well, challenge accepted. We’ve been working hard for months to pull together a great event for you – and Continue reading
A crucial piece of automation is ensuring that it runs flawlessly. Automation Analytics can help by providing insight into health state and organizational statistics. However, there is often the need to monitor the current state of Ansible Tower. Luckily, Ansible Tower does provide metrics via the API, and they can easily be fed into Grafana.
This blog post will outline how to monitor Ansible Tower environments by feeding Ansible Tower and operating system metrics into Grafana by using node_exporter & Prometheus.
To reach that goal, we will configure Prometheus to collect Ansible Tower metrics so they can be viewed via Grafana, and we will use node_exporter to feed operating system metrics into an operating system (OS) dashboard in Grafana. Note that we use Red Hat Enterprise Linux 8 as the OS running Ansible Tower here. The data flow is outlined below:
As you can see, Grafana looks for data in Prometheus. Prometheus itself collects the data into its database by importing it from node_exporter and from the Ansible Tower API.
In this blog post we assume a cluster of three Ansible Tower instances and an external database. Also please note that this blog post assumes an already installed instance of Prometheus and Grafana.
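As a sketch, the Prometheus side of this setup could look like the following scrape configuration (the hostnames and token are placeholders; Ansible Tower exposes its metrics at /api/v2/metrics behind authentication):

```yaml
# Sketch of a Prometheus scrape config for this setup.
# Hostnames and the bearer token are placeholders/assumptions.
scrape_configs:
  - job_name: 'ansible-tower'
    metrics_path: /api/v2/metrics
    scheme: https
    bearer_token: TOWER_OAUTH_TOKEN
    static_configs:
      - targets:
          - tower1.example.com
          - tower2.example.com
          - tower3.example.com

  - job_name: 'node'
    static_configs:
      - targets:
          - tower1.example.com:9100
          - tower2.example.com:9100
          - tower3.example.com:9100
```

Grafana then only needs Prometheus added as a data source to build both the Tower and OS dashboards.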
Part 2 in the series on Using Docker Desktop and Docker Hub Together
In part 1 of this series, we took a look at installing Docker Desktop, building images, configuring our builds to use build arguments, running our application in containers, and finally, we took a look at how Docker Compose helps in this process.
In this article, we’ll walk through deploying our code to the cloud, how to use Docker Hub to build our images when we push to GitHub and how to use Docker Hub to automate running tests.
Docker Hub is the easiest way to create, manage, and ship your team’s images to your cloud environments, whether on-premises or in a public cloud.
The first thing you will want to do is create a Docker ID, if you do not already have one, and log in to Hub.
Once you’re logged in, let’s create a couple of repos to push our images to.
Click on “Repositories” in the main navigation bar and then click the “Create Repository” button at the top of the screen.
You should now see the “Create Repository” screen.
You can create repositories for your Continue reading
Docker Desktop WSL 2 backend has now been available for a few months for Windows 10 insider users and Microsoft just released WSL 2 on the Release Preview channel (which means GA is very close). We and our early users have accumulated some experience working with it and are excited to share a few best practices to implement in your Linux container projects!
Docker Desktop with the WSL 2 backend can be used as before from a Windows terminal. We focused on compatibility to keep you happy with your current development workflow.
But to get the most out of Windows 10 2004 we have some recommendations for you.
The first and most important best practice we want to share, is to fully embrace WSL 2. Your project files should be stored within your WSL 2 distro of choice, you should run the docker CLI from this distro, and you should avoid accessing files stored on the Windows host as much as possible.
For backward compatibility reasons, we kept the possibility to interact with Docker from the Windows CLI, but it is not the preferred option anymore.
Running docker CLI from WSL will bring you…
Ansible Content Collections are a new way of distributing content, including modules, for Ansible. For detailed information on how to use collections in general, please read Colin McNaughton’s blog post about the topic.
The AWX Collection allows Ansible Playbooks to interact with AWX and Ansible Tower. Much like interacting with AWX or Red Hat Ansible Tower via the web-based UI or the API, the modules provided by the AWX Collection are another way to create, update or delete objects as well as perform tasks such as run jobs, configure Ansible Tower and more. This article will discuss new updates regarding this collection, as well as an example playbook and details on how to run it successfully.
The AWX Collection awx.awx is the upstream community distribution available on Ansible Galaxy. The downstream supported Ansible Collection ansible.tower is being targeted for mid-May on Automation Hub alongside the release of Ansible Tower 3.7. For more details on the difference between Ansible Galaxy and Automation Hub please refer to Ajay Chenampara’s blog post.
This collection is a replacement for the Ansible Tower web modules which were previously housed and maintained directly in the Ansible repo. The modules were Continue reading
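As a hedged example of what a playbook using the collection might look like (the object names here are invented, and the module parameters should be checked against the collection documentation for your version):

```yaml
# Hypothetical example using the awx.awx collection to create and launch
# a job template; project, inventory and playbook names are assumptions.
- name: Configure Ansible Tower via the AWX Collection
  hosts: localhost
  connection: local
  collections:
    - awx.awx
  tasks:
    - name: Ensure a job template exists
      tower_job_template:
        name: "Demo Job Template"
        job_type: run
        project: "Demo Project"
        playbook: hello_world.yml
        inventory: "Demo Inventory"
        state: present

    - name: Launch the job template
      tower_job_launch:
        job_template: "Demo Job Template"
```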
“Build once, deploy anywhere” sounds really nice on paper, but if you want to use ARM targets to reduce your bill, such as Raspberry Pis and AWS A1 instances, or even keep using your old i386 servers, deploying everywhere can become a tricky problem, as you need to build your software for each of these platforms. To fix this problem, Docker introduced the principle of multi-arch builds, and we’ll see how to use it and put it into production.
To be able to use the docker manifest command, you’ll have to enable the experimental features.
On macOS and Windows, it’s really simple. Open the Preferences > Command Line panel and just enable the experimental features.
On Linux, you’ll have to edit ~/.docker/config.json and restart the engine.
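On Linux that means adding the experimental flag to the CLI configuration file, for example:

```json
{
  "experimental": "enabled"
}
```

followed by restarting the engine (for example, sudo systemctl restart docker on systemd-based distributions).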
OK, now we understand why multi-arch images are interesting, but how do we produce them? How do they work?
Each Docker image is represented by a manifest. A manifest is a JSON file containing all the information about a Docker image. This includes references to each of its layers, their corresponding sizes, the hash of the image, its size and also the platform it’s supposed to work on. Continue reading
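The platform-selection step that a manifest list enables can be sketched in a few lines of Python: given a simplified, hand-written manifest list, pick the digest that matches the client's platform. Real manifest lists carry more fields than shown here:

```python
def select_digest(manifest_list, architecture, os="linux"):
    """Pick the image digest matching the requested platform from a
    (simplified) Docker manifest list; returns None if no match."""
    for m in manifest_list["manifests"]:
        p = m["platform"]
        if p["architecture"] == architecture and p["os"] == os:
            return m["digest"]
    return None


# A hand-written, abbreviated manifest list for illustration only.
MANIFEST_LIST = {
    "schemaVersion": 2,
    "manifests": [
        {"digest": "sha256:aaa...", "platform": {"architecture": "amd64", "os": "linux"}},
        {"digest": "sha256:bbb...", "platform": {"architecture": "arm64", "os": "linux"}},
    ],
}

if __name__ == "__main__":
    # Prints the digest an arm64 client would pull.
    print(select_digest(MANIFEST_LIST, "arm64"))
```

This is essentially what the engine does on a pull: it fetches the manifest list, matches its own platform, then pulls the referenced image manifest.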