Archive

Category Archives for "Systems"

Compiling Qt with Docker Using Caching

This is a guest post from Viktor Petersson, CEO of Screenly.io. Screenly is the most popular digital signage product for the Raspberry Pi. Find Viktor on Twitter @vpetersson.

In the previous blog post, we talked about how we compile Qt for Screenly OSE using Docker’s nifty multi-stage and multi-platform features. In this article, we build on this topic further and zoom in on caching. 

Docker does a great job with caching using layers. Each instruction (e.g., RUN, ADD) generates a layer, which Docker then reuses in future builds unless something changes. There are exceptions to this process, but generally speaking it holds true. Another type of caching is caching for a particular operation, such as compiling source code, inside a container.
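
As a minimal sketch of how this plays out in a Dockerfile (the base image and build commands are placeholders, not Screenly's actual build), ordering rarely-changing steps first keeps more layers reusable:

FROM debian:buster

# Changes rarely: this layer is served from cache on most builds
RUN apt-get update && apt-get install -y build-essential

# Changes often: copy sources last so the layers above stay cached
COPY . /src
RUN make -C /src

Editing a source file invalidates only the COPY layer and everything after it; the package installation layer is reused.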

At Screenly, we created a Qt build environment inside a Docker container. We created this Qt build to ensure that the build process was reproducible and easy to share among developers. Since the Qt compilation process takes a long time, we leveraged ccache to speed it up. Implementing ccache requires volume mounting a folder from outside the Docker environment.
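
A hedged sketch of what that mount can look like (the image name, paths, and build script are hypothetical; CCACHE_DIR is ccache's standard variable for relocating its cache):

# Persist the compiler cache on the host so repeated builds reuse it
docker run --rm \
  -v "$HOME/.ccache:/src/ccache" \
  -e CCACHE_DIR=/src/ccache \
  qt-builder:latest \
  ./build_qt.sh   # hypothetical build script inside the image

Because the cache lives on the host, rebuilding the container from scratch does not throw away previously compiled objects.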

The above steps work well if you Continue reading

Setting up Wireguard for AWS VPC Access

Seeking more streamlined access to AWS EC2 instances on private subnets, I recently implemented Wireguard for VPN access. Wireguard, if you’re not familiar, is a relatively new solution that is baked into recent Linux kernels. (There is also support for other OSes.) In this post, I’ll share what I learned in setting up Wireguard for VPN access to my AWS environments.

Since the configuration of the clients and the servers is largely the same (especially since both client and server are Linux), I haven’t separated the two configurations. At a high level, the process looks like this (a sketch of the key commands follows the list):

  1. Installing any necessary packages/software
  2. Generating Wireguard private and public keys
  3. Modifying the AWS environment to allow Wireguard traffic
  4. Setting up the Wireguard interface(s)
  5. Activating the VPN
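
A minimal sketch of steps 2, 4, and 5 (the interface name, addresses, port, and endpoint are placeholders, not values from my environment):

# Step 2: generate a private/public key pair
umask 077
wg genkey | tee privatekey | wg pubkey > publickey

A bare-bones /etc/wireguard/wg0.conf on the client side might then look like this:

[Interface]
Address = 10.100.0.2/24
PrivateKey = <contents of privatekey>

[Peer]
PublicKey = <server public key>
Endpoint = <server public IP>:51820
AllowedIPs = 10.100.0.0/24, 172.31.0.0/16   # tunnel subnet plus VPC CIDR (placeholders)

With that in place, wg-quick up wg0 brings the tunnel up (and wg-quick down wg0 tears it down).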

The first thing to do, naturally, is install the necessary software.

Installing Packages/Software

On recent versions of Linux—I’m using Fedora (32 and 33) and Ubuntu 20.04—kernel support for Wireguard ships with the distribution. All that’s needed is to install the necessary userspace tools.

On Fedora, that’s done with dnf install wireguard-tools. On Ubuntu, the command is apt install wireguard-tools. (You can also install the wireguard meta-package, if you’d prefer.) Apple’s macOS, for example, Continue reading

Introduction to ansible-test

As automation becomes crucial for more and more business cases, there is an increased need to test the automation code itself. This is where ansible-test comes in: developers who want to test their Ansible Content Collections for sanity, unit and integration tests can use ansible-test to achieve testing workflows that integrate with source code repositories.

Both ansible-core and ansible-base come packaged with a CLI tool called ansible-test, which collection developers can use to test their Collection and its content. ansible-test knows how to perform a wide variety of testing-related tasks, from linting module documentation and code to running unit and integration tests.

We will briefly cover the different features of ansible-test below.

How to run ansible-test

With the general availability of Ansible Content Collections in Ansible 2.9, a user can run ansible-test inside a collection to test the collection itself. ansible-test must be run from the collection root or a directory below it in order to run tests on the Collection.
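
For example (the collection path and namespace are illustrative), running the sanity tests from inside a collection checkout looks like this:

cd ~/.ansible/collections/ansible_collections/my_namespace/my_collection
ansible-test sanity --docker default   # run the tests in an isolated container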

If you try to run ansible-test from outside a collection directory, it will throw an error like the one below:

root@root ~/.ansible/collections ansible-test sanity
ERROR: The current working directory must be at or below:
 Continue reading

Ansible 3.0.0 Q&A

The Ansible community team has announced the release of Ansible 3.0.0 and here are the questions about the release that we’ve heard from community members so far. If you have a question that is not answered below, let us know on the mailing lists or IRC.

  • How can I stay up to date with changes in the Ansible community?

About the Ansible community package and ansible-base/ansible-core

  • Are there any changes to the Ansible language in 3.0.0?
    • There are no significant changes, since the Ansible 3.0.0 package depends on the same version of ansible-base as Ansible 2.10.x.
  • Why are the versions of ansible-base/ansible-core packages diverging from the Ansible package?
    • When the Ansible Community Team set out to restructure the Ansible project, Ansible was split into the following components: 
      • The core engine, modules and plugins
      • Community and partner supported Ansible Collections of modules and plugins

The former became known as Continue reading

Red Hat Ansible Automation Platform Product Status Update

The Red Hat Ansible Product Team wanted to provide an update on the status and progress of Ansible’s foundational role as it pertains to the product, specifically as a deliverable for implementing automation as a language: that is, Ansible as provided by aggregated low-level command line binary executables leveraging Python, with a YAML-based user abstraction. The packaged deliverable is currently named Ansible Base, but will be renamed Ansible Core later this year. When people refer to “Ansible,” this largely describes what they use directly as part of their day-to-day development efforts.

As an Ansible Automation Platform user, you may have noticed changes over the last year and a half to the Ansible open source project and downstream product in order to provide targeted solutions for each customer persona, focusing on enhancements to packaging, release cadence, and content development.

We’ve seen the community and enterprise user bases of Ansible continue to grow as different groups adopt Ansible due to its strengths and its ability to automate a broad set of IT domains (such as Linux, Windows, network, cloud, security, storage, etc.). But with this success it became apparent that there is no one size Continue reading

Announcing the Community Ansible 3.0.0 Package

Version 3.0.0 of the Ansible community package marks the end of the restructuring of the Ansible ecosystem. This work culminates an effort that began in 2019 to restructure the Ansible project and reshape how Ansible content is delivered. Starting with Ansible 3.0.0, the versioning and naming reflect the new structure of the project in the following ways: 

  1. The versioning methodology for the Ansible community package now adopts semantic versioning, and begins to diverge from the versions of the Ansible Core package (which contains the Ansible language and runtime); a short install sketch follows this list
  2. The forthcoming Ansible Core package will be renamed from ansible-base in version 2.10 to ansible-core in version 2.11 for consistency
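
To illustrate what the semantic versioning change means in practice (a sketch, assuming installation from PyPI; the version bounds are mine, not from the announcement):

# Any 3.x release should be backwards compatible under semver;
# a major bump (4.0.0) is allowed to break compatibility.
pip install 'ansible>=3.0.0,<4.0.0'

# The runtime is versioned separately and pulled in as a dependency
# (ansible-base 2.10.x for the Ansible 3.x package).
pip show ansible-base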

First, a little history. In Ansible 2.9 and prior, every plugin and module was in the Ansible project (https://github.com/ansible/ansible) itself. When you installed the "ansible" package, you got the language, runtime, and all content (modules and other plugins). Over time, the overwhelming popularity of Ansible created scalability concerns. Users had to wait many months for updated content. Developers had to rely on Ansible maintainers to review and merge their content. These obvious bottlenecks needed to be addressed. 

During the Ansible 2.10 development Continue reading

New Docker Desktop Preview for Apple M1 Released

This is just a quick update to let you know that we’ve released another preview of Docker Desktop for Apple M1 chips, which you can download from our Docker Apple M1 Tech Preview page. The most exciting change in this version is that Kubernetes now works.

First, a big thank you to everyone who tried out the previous preview and gave us feedback. We’re really excited to see how much enthusiasm there is for this, and also really grateful to you for reporting what doesn’t yet work and what your highest priorities are for quick fixes. In this post, we want to update you on what we’ve done and what we’re still working on.

Some of the biggest things we’ve been doing since the New Year are not immediately visible but are an essential part of eventually turning this into a supported product. The previous preview was built on a developer’s laptop from a private branch. Now all of the code is fully integrated into our main development branch. We’ve extended our CI suite to add several M1 machines, and we’ve extended our CI code to build and test Docker Desktop itself and all our dependencies for both architectures in Continue reading

How to Deploy GPU-Accelerated Applications on Amazon ECS with Docker Compose

Many applications can take advantage of GPU acceleration, in particular resource-intensive Machine Learning (ML) applications. The development time of such applications may vary based on the hardware of the machine we use for development. Containerization will facilitate development due to reproducibility and will make the setup easily transferable to other machines. Most importantly, a containerized application is easily deployable to platforms such as Amazon ECS, where it can take advantage of different hardware configurations.

In this tutorial, we discuss how to develop GPU-accelerated applications in containers locally and how to use Docker Compose to easily deploy them to the cloud (the Amazon ECS platform). We make the transition from the local environment to the cloud effortless: the GPU-accelerated application is packaged with all its dependencies in a Docker image and deployed in the same way regardless of the target environment.
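
As a minimal sketch (the service name and image are placeholders; the device reservation follows the Compose specification's GPU syntax as I understand it), a compose file requesting GPU access might look like this:

services:
  inference:
    image: my-gpu-app:latest   # placeholder image
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

Locally, docker compose up runs this with GPU access (given the NVIDIA container toolkit is installed); with an Amazon ECS context (docker context create ecs), the same file can be deployed to the cloud.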

Requirements

In order to follow this tutorial, we need the following tools installed locally:

For deploying to a cloud platform, we rely on the new Docker Compose implementation embedded into the Docker CLI binary. Therefore, when targeting a cloud Continue reading

Fast vs Easy: Benchmarking Ansible Operators for Kubernetes

With Kubernetes, you get a lot of powerful functionality that makes it relatively easy to manage and scale simple applications and API services right out of the box. These simple apps are generally stateless, so Kubernetes can deploy, scale, and recover from failures without any application-specific knowledge. But what if Kubernetes’ native capabilities are not enough?

Operators in Kubernetes and Red Hat OpenShift clusters are a common means for controlling the complete application lifecycle (deployment, updates, and integrations) for complex container-native deployments.

Initially, building and maintaining an Operator required deep knowledge of Kubernetes' internals. They were usually written in Go, the same language as Kubernetes itself. 

The Operator SDK, which is a Cloud Native Computing Foundation (CNCF) incubator project, makes managing Operators much easier by providing the tools to build, test, and package Operators. The SDK currently incorporates three options for building an Operator:

  • Go
  • Ansible
  • Helm

Go-based Operators are the most customizable, since you're working close to the underlying Kubernetes APIs with a full programming language. But they are also the most complex, because the plumbing is directly exposed. You have to know the Go language and Kubernetes internals to be able to maintain these operators.
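
By contrast, an Ansible-based Operator starts from generated scaffolding. A hedged sketch (the domain, group, and kind are placeholder values, and the flags reflect the Operator SDK 1.x Ansible plugin as I understand it):

# Scaffold a new Ansible-based Operator project
operator-sdk init --plugins=ansible --domain example.com

# Add a custom resource and an Ansible role that reconciles it
operator-sdk create api --group cache --version v1alpha1 --kind Memcached --generate-role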

Continue reading

Closing out the Tokyo Assignment

In late 2019, I announced that I would be temporarily relocating to Tokyo for a six-month assignment to build out a team focused on cloud-native services and offerings. A few months later, I was still in Colorado, and I explained what was happening in a status update on the Tokyo assignment. I’ve had a few folks ask me about it, so I thought I’d go ahead and share that the Tokyo assignment did not happen and will not happen.

So why didn’t it happen? In my March 2020 update, I mentioned that paperwork, approvals, and proper budget allocations had slowed down the assignment, but then the pandemic hit. Many folks, myself included, expected that the pandemic would work itself out, but—as we now clearly know—it did not. And as the pandemic dragged on (and continues to drag on), restrictions on travel and concerns over public health and safety continued to mean that the assignment was not going to happen. As many of you know all too well, travel restrictions still exist even today.

OK, but why won’t it happen in the future, when the pandemic is under control? At the time when the Tokyo assignment was offered to me, there Continue reading

How Developers Can Get Started with Python and Docker

Python started in 1991 with humble beginnings, focused on helping “automate the boring stuff.” But over the past few years, we’ve seen Python grow in popularity and become extremely useful not only for scripting but also for building modern web applications, machine learning, and data science. 

The TIOBE Index for February has Python ranked at number 3 on the list. Python has also been among the top 8 programming languages for the past 7 years. With such a popular and powerful programming language comes a vibrant and large community.

To that end, we are excited to announce that we are releasing a series of programming language-specific guides to help developers go from discovering the basics of Docker to delivering their images into a production environment and more.

The first in our series is a focus on the Python development ecosystem. We have created a series of tutorials, how-tos, and guides focused on the Python community with much more coming in the future. 

We are extremely excited to help Python developers become experts at developing and delivering the next generation of applications using the Docker platform. Below you will find a list of resources and our Python language-specific Continue reading

Technology Short Take 137

Welcome to Technology Short Take #137! I’ve got a wide range of topics for you this time around—eBPF, Falco, Snort, Kyverno, etcd, VMware Code Stream, and more. Hopefully one of these links will prove useful to you. Enjoy!

Networking

Servers/Hardware

  • I recently mentioned on Twitter that I was considering building out a new Linux PC to replace my aging Mac Pro (it’s a 2012 model, so going on 9 years old). Joe Utter shared with me his new lab build information, and now I’m sharing it with all of you. Sharing is caring, you know.

Security

Cloud Computing/Cloud Management

Save the Date! Next Docker Community All Hands

We’re excited to announce that our next Community All Hands will be on March 11th, 2021. This quarterly event is a unique opportunity for Docker staff and the broader Docker community to come together for live company updates, product updates, demos, community shout-outs and Q&A. We had more than 1,500 attendees for our last all-hands and we hope to double that this time.  

This all-hands will be particularly special because it will coincide with none other than… you guessed it… Docker’s 8th birthday! For this “birthday edition,” we’re going to make the event extra special.

We’ll start by extending the format from 1 hour to 3 hours to pack in more Docker goodness. The main piece of feedback we got from our last all-hands was that it was way too short: we had too much content that we tried to squeeze into 60 minutes. This longer format will give us plenty of time to cover everything we need to cover and let presenters catch their breath.

Another new feature of this all-hands will be integrated chat and multi-casting made possible by a new innovative video conferencing platform we’ll be using. This will give us the opportunity to present content Continue reading

Automating mixed Red Hat Enterprise Linux and Windows Environments

For a system administrator, a perfect world would consist of just one type of server to support and just one tool to do that work. Unfortunately, we don’t live in an ideal world. Many system admins are required to manage the day-to-day operations of very different servers with different operating systems. The complexity gets magnified when you start looking for tools to manage these distinct systems. Looking at how to automate these systems could lead you down a path of one automation tool per OS type. But why, when you can have one central automation platform that can be used for all servers? In this example, we are going to look at managing Red Hat Enterprise Linux (RHEL) and Windows servers in one data center by the same group of system administrators. While we are going to cover the use case of managing web servers on both RHEL and Windows in some technical detail, be aware that this method can be used for almost any typical operational task. 
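
As a hedged sketch of the idea (the inventory group names are hypothetical, and the choice of httpd and IIS is illustrative rather than taken from this post), a single playbook can drive both platforms:

- name: Manage the web service on RHEL hosts
  hosts: rhel_webservers          # hypothetical inventory group
  become: true
  tasks:
    - name: Ensure httpd is started and enabled
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

- name: Manage the web service on Windows hosts
  hosts: windows_webservers       # hypothetical inventory group
  tasks:
    - name: Ensure IIS (W3SVC) is started
      ansible.windows.win_service:
        name: W3SVC
        state: started

The same playbook run covers both operating systems; only the connection settings and modules differ per group.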

Scenario: Managing the web service on RHEL and Windows

In this scenario, we have a system administrator who is tired of getting calls from the network Continue reading
