By using cloud platforms, we can take advantage of different resource configurations and compute capacities. However, deploying containerized applications on cloud platforms can prove quite challenging, especially for new users with no expertise on that particular platform. Because each platform may provide its own specific APIs, orchestrating the deployment of a containerized application can become a hassle.
Docker Compose is a very popular tool for managing containerized applications deployed on Docker hosts. Its popularity is perhaps due to the simplicity of defining an application and its components in a Compose file, and the compact set of commands for managing its deployment.
Since cloud platforms for containers have emerged, the ability to deploy a Compose application to them has been one of the most requested features among developers who use Docker Compose for their local development.
In this blog post, we discuss how to use Docker Compose to deploy containerized applications to Amazon ECS. We aim to show that the transition from deploying to a local Docker environment to deploying to Amazon ECS is effortless, with the application managed in the same way in both environments.
In order to exercise the examples in this blog post, the following tools need Continue reading
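To make the local-to-cloud transition concrete, here is a minimal sketch of the workflow, assuming a recent Docker CLI with the ECS integration and AWS credentials already configured (the context name below is hypothetical):

$ docker context create ecs myecscontext     # hypothetical context name; prompts for an AWS profile
$ docker context use myecscontext
$ docker compose up                          # deploys the same Compose file used locally onto ECS
$ docker compose ps                          # inspect the running services
$ docker compose down                        # tear everything down again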
Welcome to Technology Short Take #138. I have what I hope is an interesting and useful set of links to share with everyone this time around. I didn’t do so well on storage links; apologies to my storage-focused friends! However, there should be something for most everyone else. Enjoy!
#BeLikeIvan
A vim syntax plugin for TextFSM templates.

At Docker, we are incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. The following is a guest post by Docker community member Gabriel de Marmiesse. Are you working on something awesome with Docker? Send your contributions to William Quiviger (@william) on the Docker Community Slack and we might feature your work!
The most common way to call and control Docker is by using the command line.
With the increased usage of Docker, users want to call Docker from programming languages other than shell. One popular way to use Docker from Python has been to use docker-py. This library has had so much success that even docker-compose is written in Python, and leverages docker-py.
The goal of docker-py, though, is not to replicate the Docker client (written in Golang), but to talk to the Docker Engine HTTP API. The Docker client is extremely complex and hard to duplicate in another language. Because of this, a lot of features that were in the Docker client could not be available in Continue reading
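Since docker-py ultimately speaks to the Docker Engine HTTP API mentioned above, a quick way to see that API for yourself is to query it over the local Unix socket with curl. This is only an illustrative sketch; the API version in the path (v1.41 here) depends on your Engine:

$ curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json   # list running containers
$ curl --unix-socket /var/run/docker.sock http://localhost/v1.41/images/json       # list local images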
Businesses continue to have a high demand for automation. This is not limited to infrastructure components, but stretches across the entire IT landscape, including the platforms supporting application deployments. Here, Kubernetes is often the way to go, which is why in November we released the first Certified Content Collection for deploying and managing Kubernetes applications and services. Since then, the development and work in this area has only increased. That is why we recently released kubernetes.core 1.2.
In this blog post, we’ll go over what’s new and what’s changed in this release of our Kubernetes Collection.
We continue to build on our existing support for Helm 3, the Kubernetes package manager: we added the new helm_template module, which opens up access to Helm’s template command.
There are times when you may need to take an existing chart and do something with its content. For example, there might be existing objects that you would like to bring under Helm's control and you need to compare what's deployed with what's in the chart. The newly added helm_template module gives you access to the rendered YAML of a chart. Continue reading
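As a hedged sketch of what using the new module might look like (the playbook name and chart reference below are hypothetical, and assume the chart repository has already been added):

$ cat > render-chart.yml <<'EOF'
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Render a chart to YAML without installing it
      kubernetes.core.helm_template:
        chart_ref: bitnami/nginx        # hypothetical chart reference
      register: rendered

    - name: Show the rendered manifests
      ansible.builtin.debug:
        var: rendered.stdout
EOF
$ ansible-playbook render-chart.yml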
The latest Docker Desktop release, 3.2, includes support for iTerm2, a terminal emulator that is highly popular with macOS fans. From the Containers/Apps Dashboard, for a running container, you can click `CLI` to open a terminal and run commands on the container. With this latest release of Docker Desktop, if you have installed iTerm2 on your Mac, the CLI option opens an iTerm2 terminal. Otherwise, it opens the Terminal app on Mac or a Command Prompt on Windows.
Of note, this feature request to support additional terminals started from the Docker public roadmap. Daniel Rodriguez, one of our community members, submitted the request to the public roadmap. 180 people upvoted that request and we added it and prioritized it on our public roadmap.
The public roadmap is our source of truth for community feedback on prioritizing product updates and feature enhancements. Not everything submitted to the public roadmap will end up as a delivered feature, but the support for M1 chipsets, image vulnerability scanning and audit logging – all delivered within the last year – all started as issues submitted via the roadmap.
This is the easiest way for you to let us know Continue reading
Next week, on Thursday March 11th, 2021 (8am PST/5pm CET) we’ll be hosting our next quarterly Docker Community All-Hands. This free virtual event, open to everyone, is a unique opportunity for Docker staff and the broader Docker community to come together for company and product updates, live demos, community presentations and a live Q&A.
As luck would have it, this All-Hands will coincide almost to the day with Docker’s 8th birthday (yay!). To mark the occasion, we’re going to make this event extra special by introducing:
We’re also really excited about the new video platform we’ll be using that makes it much easier for attendees to engage/connect/share with each other.
Not too long ago I hosted an episode of TGIK8s, where I explored some features of Cluster API. One of the features I explored on the show was ClusterResourceSet, an experimental feature that allows users to automatically install additional components onto workload clusters when the workload clusters are provisioned. In this post, I’ll show how to deploy a CNI plugin automatically using a ClusterResourceSet.
A lot of this post is inspired by a similar post on installing Calico using a ClusterResourceSet. Although that post is for vSphere and this one focuses on AWS, most of the infrastructure differences are abstracted away by Kubernetes and Cluster API.
At a high level, using ClusterResourceSet to install a CNI plugin automatically looks like this:
The sections below describe each of these steps in more detail.
The preferred way to enable experimental features on your management cluster is to use a setting in the Continue reading
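As a rough sketch of the overall flow (the environment-variable approach shown here is one common way to enable the feature; resource names and labels are hypothetical, and the apiVersion reflects the experimental API at the time of writing):

# Enable the experimental feature before initializing the management cluster.
$ export EXP_CLUSTER_RESOURCE_SET=true
$ clusterctl init --infrastructure aws

# Wrap the CNI manifest in a ConfigMap, then point a ClusterResourceSet at it.
$ kubectl create configmap calico-cni --from-file=calico.yaml   # calico.yaml is the downloaded CNI manifest
$ cat <<'EOF' | kubectl apply -f -
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-cni
spec:
  clusterSelector:
    matchLabels:
      cni: calico          # applied to workload clusters carrying this label
  resources:
    - name: calico-cni
      kind: ConfigMap
EOF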
First off, a big thank you to all those who have already submitted a talk proposal for DockerCon LIVE 2021. We’re seeing some really excellent proposals and we look forward to reviewing many more! We opened the CFP on February 8th and with a few more weeks to go before we close the CFP on March 15th there’s still lots of time to submit a talk.
If you’re toying with the idea of submitting a talk but you’re still not sure whether or not your topic is interesting enough, or how to approach your topic, or if you just need a little push in the right direction to write up your proposal and click on “Send”, below are a few resources we thought you might find interesting.
Amanda Sopkin wrote a great article a few years ago that has now become a reference for conference organizers sharing tips on how to get a technical talk accepted for a conference. We also like Todd Lewis’ 13 tips on how to write an awesome talk proposal for a tech conference. Other interesting articles include:
Red Hat Virtualization (RHV) is a complete, and fully supported enterprise virtualization platform that is built upon a foundation of Red Hat Enterprise Linux (RHEL), oVirt virtualization management projects, and Kernel-based Virtual Machine (KVM) technology in order to virtualize resources, processes and applications.
With RHEL as the compute provider, RHV adds an intuitive web interface with a robust API, including SDKs for Java, Ruby, Python, JavaScript and Go, for management of the virtualization instances and resources that comprise a typical datacenter.
Interacting with an API through a full-fledged SDK may present a barrier to datacenter automation, since it requires knowledge of a programming language before getting started. This also means that collaboration may be stifled by a lack of people proficient in one of the available SDKs. Standardizing on Ansible for automating RHV allows all teams and individuals to create and maintain automation without knowledge of a programming language.
Red Hat Ansible Automation Platform allows for interacting with datacenter services in a cleanly formatted and human readable markup language that offers an on-ramp to automating the datacenter. By leveraging the newly released Ansible Content Collection for RHV, this gentle on-ramp to automation becomes more powerful by Continue reading
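As a hedged sketch of what automating RHV with a collection might look like, here is a minimal playbook using the upstream ovirt.ovirt collection (the Manager URL, VM name and password variable are placeholders):

$ ansible-galaxy collection install ovirt.ovirt
$ cat > start-vm.yml <<'EOF'
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Obtain an SSO token from the RHV Manager
      ovirt.ovirt.ovirt_auth:
        url: https://rhvm.example.com/ovirt-engine/api   # placeholder URL
        username: admin@internal
        password: "{{ rhv_password }}"

    - name: Ensure the VM is running
      ovirt.ovirt.ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: demo-vm                                    # placeholder VM name
        state: running
EOF
$ ansible-playbook start-vm.yml --extra-vars "rhv_password=<password>"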
One of the things that makes the Docker Platform so powerful is how easy it is to use images from a central location. Docker Hub is the premier Image Repository with thousands of Official Images ready for use. It’s also just as easy to push your own images to the Docker Hub registry so that everyone can benefit from your Dockerized applications.
But in certain scenarios, you might not want to push your images outside of your firewall. In this case you can set up a local registry using the open source software project Distribution. In this article we’ll take a look at setting up and configuring a local instance of the Distribution project where your teams can share images with docker commands they already know: docker push and docker pull.
To complete this tutorial, you will need the following:
The Distribution project has been packaged as an Official Image on Docker Hub. To run a version locally, execute the following command:
$ docker run -d -p 5000:5000 --name Continue reading
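Once the registry is listening on localhost:5000 (as in the command above), pushing and pulling works with the commands your team already uses; alpine is just an example image here:

$ docker pull alpine:latest
$ docker tag alpine:latest localhost:5000/alpine:latest
$ docker push localhost:5000/alpine:latest
$ docker pull localhost:5000/alpine:latest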
Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Nick Janetakis who has been a Docker Captain since 2016. He is a freelance full stack developer / teacher and is based in New York, United States.
I was doing freelance web development work and kept running into situations where it was painful to set up my development environment for web apps created with Ruby on Rails. Different apps had different Ruby version requirements as well as needing different PostgreSQL and Redis versions too.
I remember running a manually provisioned Linux VM on my Windows dev box and doing most of my development there. I even started to use LXC directly within that Linux VM.
That wasn’t too bad after investing a lot of time to Continue reading
This is a guest post from Viktor Petersson, CEO of Screenly.io. Screenly is the most popular digital signage product for the Raspberry Pi. Find Viktor on Twitter @vpetersson.
In the previous blog post, we talked about how we compile Qt for Screenly OSE using Docker’s nifty multi-stage and multi-platform features. In this article, we build on this topic further and zoom in on caching.
Docker does a great job with caching using layers. Each command (e.g., RUN, ADD, etc.) generates a layer, which Docker then reuses in future builds unless something changes. As always, there are exceptions to this process, but it is generally true. Another type of caching is caching for a particular operation, such as compiling source code, inside a container.
At Screenly, we created a Qt build environment inside a Docker container. We created this Qt build to ensure that the build process was reproducible and easy to share among developers. Since the Qt compilation process takes a long time, we leveraged ccache to speed up our Qt compilation. Implementing ccache requires volume mounting a folder from outside of the Docker environment.
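A rough sketch of what that volume mount might look like (the image name, build script and paths below are hypothetical):

$ mkdir -p "$HOME/.ccache"
$ docker run --rm \
    -v "$HOME/.ccache:/root/.ccache" \
    -e CCACHE_DIR=/root/.ccache \
    qt-build-env ./build-qt.sh          # hypothetical build image and script
$ docker run --rm \
    -v "$HOME/.ccache:/root/.ccache" \
    -e CCACHE_DIR=/root/.ccache \
    qt-build-env ccache -s              # show cache hit/miss statistics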
The above steps work well if you Continue reading
Seeking more streamlined access to AWS EC2 instances on private subnets, I recently implemented Wireguard for VPN access. Wireguard, if you’re not familiar, is a relatively new solution that is baked into recent Linux kernels. (There is also support for other OSes.) In this post, I’ll share what I learned in setting up Wireguard for VPN access to my AWS environments.
Since the configuration of the clients and the servers is largely the same (especially since both client and server are Linux), I haven’t separated out the two configurations. At a high level, the process looks like this:
The first thing to do, naturally, is install the necessary software.
On recent versions of Linux—I’m using Fedora (32 and 33) and Ubuntu 20.04—kernel support for Wireguard ships with the distribution. All that’s needed is to install the necessary userspace tools.
On Fedora, that’s done with dnf install wireguard-tools. On Ubuntu, the command is apt install wireguard-tools. (You can also install the wireguard meta-package, if you’d prefer.) Apple’s macOS, for example, Continue reading
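For completeness, here is a minimal sketch of generating keys and writing a client-side configuration (the addresses, keys and server endpoint are placeholders):

$ umask 077
$ wg genkey | tee privatekey | wg pubkey > publickey

$ sudo tee /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <contents of privatekey>
Address = 10.100.0.2/24

[Peer]
PublicKey = <server public key>
Endpoint = <server public IP>:51820
AllowedIPs = 10.100.0.0/24
EOF

$ sudo wg-quick up wg0       # bring the tunnel up
$ sudo wg-quick down wg0     # and tear it down again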
As automation becomes crucial for more and more business cases, there is an increased need to test the automation code itself. This is where ansible-test comes in: developers who want to run sanity, unit and integration tests against their Ansible Content Collections can use ansible-test to build testing workflows that integrate with source code repositories.
Both ansible-core and ansible-base come packaged with a CLI tool called ansible-test, which can be used by collection developers to test their Collection and its content. ansible-test knows how to perform a wide variety of testing-related tasks, from linting module documentation and code to running unit and integration tests.
We will cover different features of ansible-test in brief below.
With the general availability of Ansible Content Collections in Ansible 2.9, a user can run ansible-test inside a collection to test the collection itself. ansible-test must be run from the collection root or below in order to run tests on the Collection.
If you try to run ansible-test from outside these directories, it will throw an error like the one below:
root@root ~/.ansible/collections ansible-test sanity
ERROR: The current working directory must be at or below:
Continue reading
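As a quick sketch, running the various test types from inside a collection checkout looks like this (community.general is only an example collection path):

$ cd ~/.ansible/collections/ansible_collections/community/general
$ ansible-test sanity --docker default
$ ansible-test units --docker default
$ ansible-test integration --docker default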
The Ansible community team has announced the release of Ansible 3.0.0 and here are the questions about the release that we’ve heard from community members so far. If you have a question that is not answered below, let us know on the mailing lists or IRC.
The former became known as Continue reading
The Red Hat Ansible Product Team wanted to provide an update on the status and progress of Ansible’s foundational role as it pertains to the product, specifically as a deliverable for implementing automation as a language. That is, Ansible as provided by aggregated low-level command line binary executables leveraging Python, with a YAML-based user abstraction. Specifically, the packaged deliverable is currently named Ansible Base, but will be renamed Ansible Core later this year. When people refer to “Ansible,” this largely describes what they use directly as part of their day-to-day development efforts.
As an Ansible Automation Platform user, you may have noticed changes over the last year and a half to the Ansible open source project and downstream product in order to provide targeted solutions for each customer persona, focusing on enhancements to packaging, release cadence, and content development.
We’ve seen the community and enterprise user bases of Ansible continue to grow as different groups adopt Ansible due to its strengths and its ability to automate a broad set of IT domains (such as Linux, Windows, network, cloud, security, storage, etc.). But with this success it became apparent that there is no one size Continue reading
Version 3.0.0 of the Ansible community package marks the end of the restructuring of the Ansible ecosystem. This work culminates an effort that began in 2019 to restructure the Ansible project and shape how Ansible content is delivered. Starting with Ansible 3.0.0, the versioning and naming reflect the new structure of the project in the following ways:
First, a little history. In Ansible 2.9 and prior, every plugin and module was in the Ansible project (https://github.com/ansible/ansible) itself. When you installed the "ansible" package, you got the language, runtime, and all content (modules and other plugins). Over time, the overwhelming popularity of Ansible created scalability concerns. Users had to wait many months for updated content. Developers had to rely on Ansible maintainers to review and merge their content. These obvious bottlenecks needed to be addressed.
During the Ansible 2.10 development Continue reading
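Since modules and plugins now ship as collections, they can be installed and upgraded independently of the core package; community.general below is only an example collection:

$ ansible-galaxy collection install community.general
$ ansible-galaxy collection list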