Last year we released the Docker Dashboard as part of Docker Desktop. Today we are excited to announce the next piece of the Dashboard for our community users: a new Images UI. We have expanded the existing UI for your local machine with the ability to interact with your Docker images both locally and on Docker Hub. This allows you to display your local images and manage them (run, inspect, delete) through an intuitive UI, without using the CLI. And for your images in Hub, you can now view your repos or your team's repos and pull images directly from the UI.
To get started, download the latest Docker Desktop release and load up the Dashboard (we are also excited that we have given the Dashboard its own icon).
You will see that we have also added a new sidebar to navigate between the two areas, and we are planning to add new sections here soon. To find out more about what's coming, or to give feedback on what you would like to see, check out our public roadmap.
Let’s jump in and have a look at what we can do…
From Continue reading
To deepen Docker’s investment in products that make developers successful, we’re pleased to announce that Donnie Berkholz will join the Docker team as VP of Products. Donnie has an extensive background as a practitioner, leader, and advisor on developer platforms and communities. He spent more than a decade as an open-source developer and leader at Gentoo Linux, and he recently served as a product and technology VP at CWT overseeing areas including DevOps and developer services. Donnie’s also spent time at RedMonk, 451 Research, and Scale Venture Partners researching and advising on product and market strategy for DevOps and developer products.
To get to know Donnie, we asked him a few questions about his background and where he plans to focus in his new role:
What got you the most excited about joining Docker?
I’ve been a big fan of Docker’s technology since the day it was announced. At the time, I was an industry analyst with RedMonk, and I could instantly sense the incredible impact that it would have in transforming the modern developer experience. Recent years have borne that out with the astonishing growth in popularity of containers and cloud-native development. With Docker’s renewed focus on developers, Continue reading
Now that I've startled you: no, the network CLI isn't going away anytime soon, nor are people going to start manipulating XML directly for their network configuration data. What I do want to help you understand is how Ansible can now be used as an interface for automating the pushing and pulling of configuration data (via NETCONF) in a structured way (via YANG data models) without having to truly learn either of these complex concepts. All you have to understand is how to use the Ansible Content Collection as shown below, which abstracts away the technical implementation details that have burdened network operators and engineers for years.
Before we even start talking about NETCONF and YANG, our overall goal is for the network to leverage configuration data in a structured manner. This makes network automation much more predictable and reliable when ensuring operational state. NETCONF and YANG are the low-level pieces of the puzzle, but we are making them easier to use via well-known Ansible means and methodologies.
What we believe as Ansible developers is that NETCONF and YANG aren't (and shouldn't be) quintessential or ultimate goals for network automation engineers. You should not need to Continue reading
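As a concrete sketch of what "using the Collection" can look like, here is a hypothetical playbook built on the ansible.netcommon collection's netconf_get module (the inventory group, connection details, and even the choice of collection are my assumptions, not necessarily the one this post goes on to cover):

```yaml
# Install first: ansible-galaxy collection install ansible.netcommon
- name: Pull structured configuration data over NETCONF
  hosts: netconf_routers            # hypothetical inventory group
  gather_facts: false
  connection: ansible.netcommon.netconf
  tasks:
    - name: Fetch the running configuration as JSON
      ansible.netcommon.netconf_get:
        source: running
        display: json
      register: running_config
```

The point is that the operator writes ordinary Ansible YAML; the NETCONF session handling and YANG-modeled data stay behind the module boundary.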
AnsibleFest 2020 will be here before we know it, and we cannot wait to connect with everyone in October. We have some great content lined up for this year’s virtual experience and that includes some amazing customer spotlights. This year you will get to hear from CarMax, Blue Cross Blue Shield of NC, T-Mobile, PRA International and CEPSA. These customers are using Ansible in a variety of ways, and we hope you connect to their incredible stories of teamwork and transformative automation.
Customer Spotlights
Benjamin Blizard, a Network Engineer at T-Mobile, will explore how T-Mobile transformed from a disparate organization with difficulty enforcing standards to a collaborative group of engineers working from repeatable templates and processes. T-Mobile, a major telecommunications provider, uses Ansible Automation Platform to standardize processes across their organization. Ben will show how automation supports T-Mobile’s compliance standards, data integrity, and produces speed and efficiency for network teams.
What Next?
Join us for AnsibleFest 2020 to hear more customers like T-Mobile talk about their automation journeys. Make sure to register today and check out the session catalog, which lists all the content we have prepared for you this year. We look Continue reading
The VMware Ansible modules as part of the current community.vmware Collection are extremely popular. According to GitHub, it's the second most forked Collection, just after community.general. The VMware modules and plugins for Ansible have benefited from a stream of contributions from dozens of users. Many IT infrastructure engineers rely on managing their VMware infrastructure by means of a simple Ansible Playbook. The vast majority of the current VMware modules are built on top of a dependent Python library called pyVmomi, the Python SDK for the vSphere (SOAP) API.
VMware has recently introduced the vSphere REST API for vSphere 6.0 and later, which will likely replace the existing SOAP SDK used in the community.vmware Collection.
Since the REST API's initial release, vSphere support for the REST API has only improved. Furthermore, there is no longer a need for any dependent Python packages. Rather than reworking the existing VMware modules in the community.vmware Collection, a set of modules specifically for interacting with the vSphere REST API is now available in the newly created vmware.vmware_rest Collection.
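As a sketch of what the new Collection looks like in practice, here is a hypothetical play using the vmware.vmware_rest.vcenter_vm_info module (the connection variables are placeholders you would define in inventory or vars):

```yaml
# Install first: ansible-galaxy collection install vmware.vmware_rest
- name: List VMs through the vSphere REST API
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Gather VM information from vCenter
      vmware.vmware_rest.vcenter_vm_info:
        vcenter_hostname: "{{ vcenter_hostname }}"
        vcenter_username: "{{ vcenter_username }}"
        vcenter_password: "{{ vcenter_password }}"
      register: all_vms
```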
If you compare modules used with the VMware vSphere Continue reading
Today we are open sourcing the code for the Amazon ECS and Microsoft ACI Compose integrations. This is the first time that Docker has made Compose available for the cloud, allowing developers to take the Compose projects they were running locally and deploy them to the cloud by simply switching context.
With Docker focusing on developers, we've been doubling down on the parts of Docker that developers love, like Desktop, Hub, and of course Compose. Millions of developers all over the world use Compose to develop their applications and love its simplicity, but there was no simple way to get these applications running in the cloud.
Docker is working to make it easier to get code running in the cloud in two ways. First, we moved the Compose specification into a community project. This will allow Compose to evolve with the community so that it may better solve more user needs and ensure that it is agnostic of runtime platform. Second, we've been working with Amazon and Microsoft on CLI integrations for Amazon ECS and Microsoft ACI that allow you to use docker compose up to deploy Compose applications directly to the cloud.
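As a rough sketch of the flow (the context name is made up; the commands prompt for your AWS credentials and settings):

```shell
# Create a Docker context backed by Amazon ECS
docker context create ecs myecscontext

# Switch to the ECS context and deploy the Compose project in the current directory
docker context use myecscontext
docker compose up
```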
While implementing these integrations, we wanted to Continue reading
In the first post of our series on CI/CD we went over some of the high-level best practices for using Docker. Today we are going to go a bit deeper and look at GitHub Actions.
We have just released a V2 of our GitHub Action, which makes using the cache easier as well! We also want to say a huge THANK YOU to @crazy-max (Kevin :D) for the work he put into the V2 of the action; we could not have done this without him!
Right now let’s have a look at what we can do!
To start, we will need to get a project set up. I am going to use one of my existing simple Docker projects to test this out:
The first thing I need to do is ensure that I will be able to access Docker Hub from any workflow I create. To do this, I will need to add my Docker ID and a Personal Access Token (PAT) as secrets in GitHub. I can get a PAT by going to https://hub.docker.com/settings/security and clicking 'New Access Token'; in this instance I will call my token 'whaleCI'.
I can then Continue reading
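To illustrate where those secrets end up, here is a hypothetical workflow snippet using the docker/login-action and docker/build-push-action actions (the secret names DOCKER_HUB_USERNAME and DOCKER_HUB_ACCESS_TOKEN and the image tag are my own choices, not fixed names):

```yaml
name: CI to Docker Hub
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Log in to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: mydockerid/myapp:latest
```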
According to the 2020 JetBrains developer survey, 44% of developers are now using some form of continuous integration and deployment with Docker containers. We know a ton of developers have this set up using Docker Hub as their container registry for part of their workflow, so we decided to dig out the best practices for doing this and provide some guidance on how to get started. To support this, we will be publishing a series of blog posts over the next few weeks to answer the common questions we see with the top CI providers.
We have also heard feedback that, given the changes Docker introduced relating to network egress and the number of pulls for free users, there are questions around the best way to use Docker Hub as part of CI/CD workflows without hitting these limits. This blog post covers best practices that improve your experience and encourage sensible consumption of Docker Hub, which will mitigate the risk of hitting these limits, and explains how to increase the limits depending on your use case.
To get started, one of the most important things when working with Docker and really any CI/CD is to work out when Continue reading
Molecule is a complete testing framework that helps you develop and test Ansible roles, allowing you to focus on role content instead of on managing testing infrastructure. In the first part of this series, we successfully installed, configured and used Molecule to set up new testing instances.
Now that the instances are running, let’s start developing the new role and apply Molecule to ensure it runs according to the specifications.
This basic role deploys a web application supported by the Apache web server. It must support Red Hat Enterprise Linux (RHEL) 8 and Ubuntu 20.04.
Molecule helps in the development stage by allowing you to "converge" the instances with the role content. You can test each step without worrying about managing the instances and test environment. It provides quick feedback, allowing you to focus on the role content and ensuring it works on all platforms.
In the first part of this series, we initialized a new role “mywebapp”. If you’re not there yet, switch to the role directory “mywebapp” and add the first task, installing the Apache package “httpd” using the “package” Ansible module. Edit the file “tasks/main.yaml” and include Continue reading
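As a starting sketch for "tasks/main.yaml", the first task described above might look like this:

```yaml
# tasks/main.yaml - first task of the mywebapp role (sketch)
- name: Ensure Apache is installed
  package:
    name: httpd
    state: present
```

Note that on Ubuntu the package is named apache2, so supporting both platforms typically means moving the package name into a per-platform variable later in the role's development.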
Community is a backbone of all sustainable open source projects and so at Docker, we’re particularly thrilled to announce that William Quiviger has joined the team as our new Head of Community.
William is a seasoned community manager based in Paris, having worked with open source communities for the past 15 years for a wide range of organizations including Mozilla Firefox, the United Nations and the Open Networking Foundation. His particular area of expertise is in nurturing, building and scaling communities, as well as developing mentorship and advocacy programs that help push leadership to the edges of a community.
To get to know William a bit more, we thought we’d ask him a few questions about his experience as a community manager and what he plans to focus on in his new role:
What motivated you most about joining Docker?
I started following Docker closely back in 2016 when I joined the Open Networking Foundation. There, I was properly introduced to cloud technologies and containerization and quickly realised how Docker was radically simplifying the lives of our developers and was the de-facto standard for anything deployed in the cloud. I was particularly impressed by the incredible passion Continue reading
Back in May we announced the partnership between Docker and Microsoft to make it easier to deploy containerized applications from the desktop to the cloud with Azure Container Instances (ACI). Then in June we shared the first version of this as part of a Desktop Edge release, which allowed users to run existing Docker CLI commands straight against ACI, making getting started running containers in the cloud simpler than ever.
We are now pleased to announce that the Docker and ACI integration has moved into Docker Desktop Stable 2.3.0.5, giving all Desktop users access to the simplest way to get containers running in the cloud.
To get going as a new starter, all you need to do is upgrade your existing Docker Desktop to the latest stable version (2.3.0.5), store your image on Docker Hub so you can deploy it (you can get started with Hub here), and lastly create an ACI context to deploy it to. For a simple example of getting started with ACI, see our initial blog post on the Edge experience.
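The ACI flow can be sketched as follows (the context name and image are hypothetical; the commands prompt for your Azure subscription and resource group):

```shell
# Authenticate Docker against Azure
docker login azure

# Create an ACI context to deploy to
docker context create aci myacicontext

# Run a container from a Docker Hub image in Azure Container Instances
docker context use myacicontext
docker run -d -p 80:80 mydockerid/myapp
```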
In July we announced a new strategic partnership with Amazon to integrate the Docker experience you already know and love with Amazon Elastic Container Service (ECS) with AWS Fargate. Over the last couple of months we have worked with the community on the beta experience in Docker Desktop Edge. Today we are excited to bring this experience to our entire community in Docker Desktop stable, version 2.3.0.5.
You can watch Carmen Puccio (Amazon) and me (Docker) walk through the original demo in the recording of our latest webinar here.
What started off in the beta as a Docker plugin experience, docker ecs, has been pulled into Docker directly as a familiar docker compose flow. This is just the beginning, and we could use your input, so head over to the Docker Roadmap and let us know what you want to see as part of this integration.
There is no better time to try it. Grab the latest Docker Desktop Stable, then check out my example application, which will walk you through everything you need to know to deploy a Python application locally in development and then again directly to Amazon ECS in minutes not Continue reading
Application delivery velocity can be tripped up when security vulnerabilities are discovered after an app is deployed into production. Nothing is more detrimental to shipping new features to customers than having to go back and address vulnerabilities discovered in an app or image you already released. At Docker, we believe the best way to balance the needs for speed and security is to shift security left in the app delivery cycle as an integral part of the development process.
Integrating security checks into Docker Scan was the driver behind the partnership with Snyk, one of the leading app security scan providers in the industry. This partnership, announced in May of this year, creates a vision for a simple and streamlined approach for developers to build and deploy secure containers. And today, I'm excited to share that the latest Docker Desktop Edge release includes Snyk vulnerability scanning. This allows Docker users to trigger local Dockerfile and local image scans directly from the Docker Desktop CLI. With the combination of Docker Scan and Snyk, developers gain visibility into open source vulnerabilities that can have a negative impact on the security of container images. Now you can extend your Continue reading
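With the Edge release installed, the entry point is the docker scan command; a minimal sketch (the image name is a placeholder):

```shell
# Scan a local image for known vulnerabilities (powered by Snyk)
docker scan myapp:latest

# Point the scan at the Dockerfile too, for more precise remediation advice
docker scan --file Dockerfile myapp:latest
```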
Security automation is an area that encompasses different practices, such as investigation and response, security compliance, and hardening. While security is a more prominent topic now than ever, all of these activities greatly benefit from automation.
For the second year at AnsibleFest, we will have a channel dedicated to security automation. We talked with channel Lead Massimo Ferrari to learn more about the security automation channel and the sessions within it.
Security Channel
The sessions in this channel will show you how to introduce and consume Red Hat Ansible Automation Platform in different stages of maturity of your security organization as well as using it to share processes through cross-functional teams. Sessions include guidance from customers, Red Hat subject matter experts and certified partners.
What will Attendees learn?
The target audience is security professionals who want to learn how Ansible can support and simplify their activities, and automation experts tasked with expanding the footprint of their automation practice and support security teams in their organization. This track is focused on customer stories and technical guidance on response & remediation, security operations and vulnerability management use cases.
Content is suitable for both automation veterans and Continue reading
In part I of this series, we learned about creating Docker images using a Dockerfile, tagging our images and managing images. Next we took a look at running containers, publishing ports, and running containers in detached mode. We then learned about managing containers by starting, stopping and restarting them. We also looked at naming our containers so they are more easily identifiable.
In this post, we’ll focus on setting up our local development environment. First, we’ll take a look at running a database in a container and how we use volumes and networking to persist our data and allow our application to talk with the database. Then we’ll pull everything together into a compose file which will allow us to setup and run a local development environment with one command. Finally, we’ll take a look at connecting a debugger to our application running inside a container.
Instead of downloading MongoDB, installing, configuring and then running the Mongo database as a service, we can use the Docker Official Image for MongoDB and run it in a container.
Before we run MongoDB in a container, we want to create a couple of volumes that Docker can manage to Continue reading
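The volume and network setup described here might be sketched like this (the volume, network, and container names are my own choices):

```shell
# Create managed volumes for the database files and its configuration
docker volume create mongodb
docker volume create mongodb_config

# Create a user-defined network so the app can reach the database by name
docker network create mongodb

# Run MongoDB from the Docker Official Image, attached to both
docker run --rm -d \
  -v mongodb:/data/db \
  -v mongodb_config:/data/configdb \
  --network mongodb \
  --name mongodb \
  -p 27017:27017 \
  mongo
```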
AnsibleFest 2020 is right around the corner and we could not be more excited. This year we have some great content in each of our channels. Here is a preview of what attendees can expect from the Operations channel at AnsibleFest.
Operations Channel
This channel will take operators on an automation journey through the technical operations lifecycle and show how the Ansible Automation Platform is the center of your automation goals. Learn how to get your automation moving with Certified Content Collections, then scale out with execution environments and tune performance. Once you are running at scale, we have tools to show you which teams are using automation and how much it is saving you, with some real-world examples and by using Analytics.
You should leave with some great examples and walkthroughs of infrastructure automation, from operating systems to public cloud, and of how you can leverage Ansible Automation Platform to foster cross-functional team collaboration and empower your whole organization with the automation it needs.
There will be something for everyone. You'll get to hear from customers, Red Hatters and our partners, and pick up some tips for your server deployments, performance and cluster management.
Operation Continue reading
As a developer, have you ever made a change that takes down an entire Kubernetes production cluster, requiring you to rebuild all YAML and automation scripts to get production back up? Have you ever wanted to create reproducible, self-contained environments that can be run locally or in production? Welcome to the new AnsibleFest Developer Channel! Here you can learn how Ansible is critical to the journey of the developer as an open-source software configuration management, provisioning and application-deployment tool that enables infrastructure as code.
Ansible Developer Channel
Many themes will be presented in the Ansible Developer Channel, including Kubernetes operations, Red Hat Ansible Automation Platform use cases, as well as execution speed and development efficiency considerations. You can learn how Ansible can streamline Kubernetes Day 2 Operations, where monitoring, maintenance and troubleshooting come into play and the application moves from a development project to an actual strategic advantage for your business. You will also learn how Ansible execution environments solve problems for developers using Ansible Automation Platform and how to create self-contained environments that can be run locally or in production Red Hat Ansible Tower deployments. In addition, you can learn how to optimize execution speed and Continue reading
As we adapt AnsibleFest into a free virtual experience this year, we wanted to share with our automation lovers what to expect. Seasoned pros and brand-new Ansiblings alike can find answers and guidance for Red Hat Ansible Automation Platform, the enterprise solution for building and operating automation at scale. We are giving our attendees an inside peek at exactly what to expect from each channel. Let's take a closer look at what is to come from the network-telco mini channel at AnsibleFest 2020.
Network-Telco Automation at AnsibleFest
Telecommunication service providers have extremely critical and complex workflows that require specialized attention for automation. The network is no longer isolated to the data center, but extends to the enterprise and now the edge, each of which has specific requirements.
This is the first time Telco as an industry or use case has been specifically highlighted as part of its own channel at AnsibleFest. Data center automation has long been a use case for Ansible automation, but as Telco workloads are moving to the edge, so does the need to automate the enterprise, branch-office and entry points for end-users.
Attendees can expect to hear about targeted use cases for Telecommunications Continue reading
A step-by-step guide to help you get started using Docker containers with your Node.js apps.
To complete this tutorial, you will need the following:
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.
With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
Let’s create a simple Node.js application that we’ll use as our example. Create a directory on your local machine named node-docker and follow the steps below to create a simple REST API.
$ cd [path to your node-docker directory]
$ npm init -y
$ npm install ronin-server ronin-mocks
$ Continue reading
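Once the app exists, containerizing it needs a Dockerfile; a minimal sketch might look like the following (the base image tag, port, and the server.js entrypoint name are my assumptions, not necessarily the tutorial's exact file):

```dockerfile
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 8000
CMD ["node", "server.js"]
```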
I’ve written a bit here and there about Cluster API (aka CAPI), mostly focusing on the Cluster API Provider for AWS (CAPA). If you’re not yet familiar with CAPI, have a look at my CAPI introduction or check the Introduction section of the CAPI site. Because CAPI interacts directly with infrastructure providers, it typically has to have some way of authenticating to those infrastructure providers. The AWS provider for Cluster API is no exception. In this post, I’ll show how to update the AWS credentials used by CAPA.
Why might you need to update the credentials being used by CAPA? Security professionals recommend that users rotate credentials on a regular basis, and when those credentials get rotated you’ll need to update what CAPA is using. There are other reasons, too; perhaps you started with one set of credentials but now want to move to a different set of credentials. Fortunately, the process for updating the CAPA credentials isn’t too terribly tedious.
CAPA stores the credentials it uses as a Secret in the "capa-system" namespace. You can run kubectl -n capa-system get secrets and you'll see the "capa-manager-bootstrap-credentials" Secret. The credentials themselves are stored under a key named credentials; you Continue reading
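Jumping ahead slightly, the rotation itself can be done with plain kubectl (the Deployment name capa-controller-manager and the local credentials file path are assumptions; check your own cluster's names):

```shell
# Decode and inspect the current credentials
kubectl -n capa-system get secret capa-manager-bootstrap-credentials \
  -o jsonpath='{.data.credentials}' | base64 --decode

# Replace the Secret with an updated local credentials file
kubectl -n capa-system create secret generic capa-manager-bootstrap-credentials \
  --from-file=credentials=./new-aws-credentials \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the controller so it picks up the new Secret
kubectl -n capa-system rollout restart deployment capa-controller-manager
```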