Category Archives for "Systems"

Red Hat Ansible Automation Platform Product Status Update

The Red Hat Ansible Product Team wanted to provide an update on the status and progress of Ansible’s foundational role in the product, specifically as the deliverable for implementing automation as a language: Ansible as provided by low-level command line executables built on Python, with a YAML-based user abstraction. This packaged deliverable is currently named Ansible Base and will be renamed Ansible Core later this year. When people refer to “Ansible” in general terms, this is largely what they use directly in their day-to-day development efforts.

As an Ansible Automation Platform user, you may have noticed changes over the last year and a half to the Ansible open source project and downstream product in order to provide targeted solutions for each customer persona, focusing on enhancements to packaging, release cadence, and content development.

We’ve seen the community and enterprise user bases of Ansible continue to grow as different groups adopt Ansible for its strengths and its ability to automate a broad set of IT domains, such as Linux, Windows, network, cloud, security, and storage. But with this success it became apparent that there is no one size Continue reading

Announcing the Community Ansible 3.0.0 Package

Version 3.0.0 of the Ansible community package marks the end of the restructuring of the Ansible ecosystem. This work culminates an effort that began in 2019 to restructure the Ansible project and reshape how Ansible content is delivered. Starting with Ansible 3.0.0, the versioning and naming reflect the new structure of the project in the following ways:

  1. The versioning methodology for the Ansible community package now adopts semantic versioning, and begins to diverge from the versions of the Ansible Core package (which contains the Ansible language and runtime)
  2. The forthcoming Ansible Core package, named ansible-base in version 2.10, will be renamed to ansible-core in version 2.11 for consistency

First, a little history. In Ansible 2.9 and prior, every plugin and module was in the Ansible project (https://github.com/ansible/ansible) itself. When you installed the "ansible" package, you got the language, runtime, and all content (modules and other plugins). Over time, the overwhelming popularity of Ansible created scalability concerns. Users had to wait many months for updated content. Developers had to rely on Ansible maintainers to review and merge their content. These obvious bottlenecks needed to be addressed. 
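
To make the new layout concrete, here is a minimal sketch of a playbook written against the restructured packages; the community.general collection and the modules shown are illustrative choices on our part, not something prescribed by the announcement.

# A collection now installs separately from the core package, for example:
#   ansible-galaxy collection install community.general
---
- name: Use fully qualified collection names now that content ships separately
  hosts: all
  become: true
  tasks:
    - name: Module that ships with the core package (ansible-base / ansible-core)
      ansible.builtin.ping:

    - name: Module that ships in a community collection
      community.general.timezone:
        name: UTC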

During the Ansible 2.10 development Continue reading

New Docker Desktop Preview for Apple M1 Released

This is just a quick update to let you know that we’ve released another preview of Docker Desktop for Apple M1 chips, which you can download from our Docker Apple M1 Tech Preview page. The most exciting change in this version is that Kubernetes now works.

First, a big thank you to everyone who tried out the previous preview and gave us feedback. We’re really excited to see how much enthusiasm there is for this, and also really grateful to you for reporting what doesn’t yet work and what your highest priorities are for quick fixes. In this post, we want to update you on what we’ve done and what we’re still working on.

Some of the biggest things we’ve been doing since the New Year are not immediately visible but are an essential part of eventually turning this into a supported product. The previous preview was built on a developer’s laptop from a private branch. Now all of the code is fully integrated into our main development branch. We’ve extended our CI suite to add several M1 machines, and we’ve extended our CI code to build and test Docker Desktop itself and all our dependencies for both architectures in Continue reading

How to Deploy GPU-Accelerated Applications on Amazon ECS with Docker Compose

Many applications can take advantage of GPU acceleration, in particular resource-intensive Machine Learning (ML) applications. The development time of such applications may vary based on the hardware of the machine we use for development. Containerization will facilitate development due to reproducibility and will make the setup easily transferable to other machines. Most importantly, a containerized application is easily deployable to platforms such as Amazon ECS, where it can take advantage of different hardware configurations.

In this tutorial, we discuss how to develop GPU-accelerated applications in containers locally and how to use Docker Compose to deploy them easily to the cloud (the Amazon ECS platform). The transition from the local environment to the cloud becomes effortless: the GPU-accelerated application is packaged with all its dependencies in a Docker image and deployed in the same way regardless of the target environment.

Requirements

In order to follow this tutorial, we need the following tools installed locally:

For deploying to a cloud platform, we rely on the new Docker Compose implementation embedded into the Docker CLI binary. Therefore, when targeting a cloud Continue reading
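
As a rough sketch of the kind of Compose file the tutorial works toward (the service name, image, and GPU count below are placeholders rather than anything taken from the article), GPU devices can be reserved declaratively and the same file can then be deployed through the Compose CLI's ECS integration:

# docker-compose.yml (sketch): reserve an NVIDIA GPU for the service; with an
# Amazon ECS context configured, the same file can be brought up in the cloud.
services:
  training:
    image: my-gpu-app:latest        # placeholder image with the ML/CUDA dependencies baked in
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]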

Fast vs Easy: Benchmarking Ansible Operators for Kubernetes

With Kubernetes, you get a lot of powerful functionality that makes it relatively easy to manage and scale simple applications and API services right out of the box. These simple apps are generally stateless, so Kubernetes can deploy, scale, and recover them from failures without any application-specific knowledge. But what if Kubernetes’ native capabilities are not enough?

Operators in Kubernetes and Red Hat OpenShift clusters are a common means for controlling the complete application lifecycle (deployment, updates, and integrations) for complex container-native deployments.

Initially, building and maintaining an Operator required deep knowledge of Kubernetes internals, and Operators were usually written in Go, the same language as Kubernetes itself.

The Operator SDK, which is a Cloud Native Computing Foundation (CNCF) incubator project, makes managing Operators much easier by providing the tools to build, test, and package Operators. The SDK currently incorporates three options for building an Operator:

  • Go
  • Ansible
  • Helm

Go-based Operators are the most customizable, since you're working close to the underlying Kubernetes APIs with a full programming language. But they are also the most complex, because the plumbing is directly exposed. You have to know the Go language and Kubernetes internals to be able to maintain these operators.
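
By contrast, the Ansible option replaces most of that plumbing with a declarative mapping. A minimal sketch of an Ansible-based Operator's watches.yaml is shown below; the group, kind, and role names are the usual Operator SDK tutorial placeholders, not anything specific to this article.

# watches.yaml (sketch): the Operator SDK's Ansible operator maps a watched
# custom resource to an Ansible role or playbook instead of Go reconcile code.
---
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: memcached          # run this role whenever a Memcached resource changes
  reconcilePeriod: 1m      # optionally re-run the role on a schedule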

Continue reading

Closing out the Tokyo Assignment

In late 2019, I announced that I would be temporarily relocating to Tokyo for a six-month assignment to build out a team focused on cloud-native services and offerings. A few months later, I was still in Colorado, and I explained what was happening in a status update on the Tokyo assignment. I’ve had a few folks ask me about it, so I thought I’d go ahead and share that the Tokyo assignment did not happen and will not happen.

So why didn’t it happen? In my March 2020 update, I mentioned that paperwork, approvals, and proper budget allocations had slowed down the assignment, but then the pandemic hit. Many folks, myself included, expected that the pandemic would work itself out, but—as we now clearly know—it did not. And as the pandemic dragged on (and continues to drag on), restrictions on travel and concerns over public health and safety continued to mean that the assignment was not going to happen. As many of you know all too well, travel restrictions still exist even today.

OK, but why won’t it happen in the future, when the pandemic is under control? At the time when the Tokyo assignment was offered to me, there Continue reading

How Developers Can Get Started with Python and Docker


Python started in 1991 with humble beginnings focusing on helping “automate the boring stuff.” But over the past few years, we’ve seen Python grow in popularity and become extremely useful not only for scripting but for building modern web applications, machine learning and data science. 

The TIOBE Index for February has Python ranked at number 3 on the list. Python has also been among the top 8 ranked programming languages for the past 7 years. With such a popular and powerful programming language comes a vibrant and large community.

To that end, we are excited to announce that we are releasing a series of programming language-specific guides to help developers go from discovering the basics of Docker to delivering their images into a production environment and more.

The first in our series is a focus on the Python development ecosystem. We have created a series of tutorials, how-tos, and guides focused on the Python community with much more coming in the future. 

We are extremely excited to help Python developers become experts at developing and delivering the next generation of applications using the Docker platform. Below you will find a list of resources and our Python language-specific Continue reading

Technology Short Take 137

Welcome to Technology Short Take #137! I’ve got a wide range of topics for you this time around—eBPF, Falco, Snort, Kyverno, etcd, VMware Code Stream, and more. Hopefully one of these links will prove useful to you. Enjoy!

Networking

Servers/Hardware

  • I recently mentioned on Twitter that I was considering building out a new Linux PC to replace my aging Mac Pro (it’s a 2012 model, so going on 9 years old). Joe Utter shared with me his new lab build information, and now I’m sharing it with all of you. Sharing is caring, you know.

Security

Cloud Computing/Cloud Management

Save the Date! Next Docker Community All Hands

We’re excited to announce that our next Community All Hands will be on March 11th, 2021. This quarterly event is a unique opportunity for Docker staff and the broader Docker community to come together for live company updates, product updates, demos, community shout-outs and Q&A. We had more than 1,500 attendees for our last all-hands and we hope to double that this time.  

This all-hands will be particularly special because it will coincide with none other than….you guessed it…Docker’s 8th birthday! For this “birthday edition,” we’re going to make the event extra special.

We’ll start by extending the format from 1 hour to 3 hours to pack in more Docker goodness. The main piece of feedback we got from our last all-hands was that it was way too short. We had too much content that we tried to squeeze into 60 minutes. This longer format will give us plenty of time to cover everything we need to cover and let presenters catch their breath.

Another new feature of this all-hands will be integrated chat and multi-casting made possible by a new innovative video conferencing platform we’ll be using. This will give us the opportunity to present content Continue reading

Automating mixed Red Hat Enterprise Linux and Windows Environments

For a system administrator, a perfect world would consist of just one type of server to support and just one tool to do that work. Unfortunately, we don’t live in an ideal world. Many system admins are required to manage the day-to-day operations of very different servers with different operating systems. The complexity gets magnified when you start looking for tools to manage these distinct systems. Looking at how to automate these systems could lead you down a path of one automation tool per OS type. But why do that, when you can have one central automation platform for all of your servers? In this example, we are going to look at Red Hat Enterprise Linux (RHEL) and Windows servers in one data center being managed by the same group of system administrators. While we are going to cover the use case of managing web servers on both RHEL and Windows in some technical detail, be aware that this method can be used for almost any typical operational task.

 

Scenario: Managing the web service on RHEL and Windows

In this scenario, we have a system administrator who is tired of getting calls from the network Continue reading
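
As a hedged sketch of where a playbook for this scenario could end up (the inventory group names and web services below are assumptions, not details from the post), one play can target the RHEL web servers and another the Windows web servers, all driven from the same automation platform:

---
- name: Manage the web service on RHEL hosts
  hosts: rhel_webservers            # assumed inventory group
  become: true
  tasks:
    - name: Ensure Apache httpd is running and enabled
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

- name: Manage the web service on Windows hosts
  hosts: windows_webservers         # assumed inventory group
  tasks:
    - name: Ensure IIS (W3SVC) is running and starts automatically
      ansible.windows.win_service:
        name: W3SVC
        state: started
        start_mode: auto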

Docker Index Shows Continued Massive Developer Adoption and Activity to Build and Share Apps with Docker

It’s been one year since we started publishing the Docker Index (stats, trends and analysis from developers and dev teams based on anonymized data from millions of Docker users). At that time we saw how Docker was being used at an incredible scale to power application building globally. Today we are excited to share the latest edition of the Docker Index, this time with some yearly and quarterly comparisons. 

Every time we pull these user stats, we are blown away by the sheer volume and continued growth in activity happening across the Docker developer community. It’s clear to see that collaborative application development platforms are the foundation for developers who want to build, share, and run modern apps. We are also thrilled to see this type of growth more than one year after refocusing Docker on making developers’ lives easier. The Docker community has stayed with us and continues to grow at a tremendous pace, giving us very encouraging signals about the path that Docker is taking. 

To begin, there has now been a total of 318 billion all time pulls on Docker Hub, an increase of 145% year-over-year. That’s right, the total number of pulls has increased Continue reading

Network Functions Virtualisation (NFV) Automation

Red Hat Ansible Automation Platform acts as a single pane of glass to automate different manual tasks in heterogeneous cloud and virtualization environments, be it Red Hat OpenStack Platform, VMware vSphere, bare metal, or the next-generation Telco cloud-native platform.

To manage cloud infrastructures like Red Hat OpenStack Platform, we need to manage not just the individual cloud services (configuration), but also the interactions and relationships between them (orchestration). Ansible Content Collections for Red Hat OpenStack Platform allow automation and management of various OpenStack offerings powered by prominent Telco network vendors such as Ericsson, Huawei, and Nokia.
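
For example (a minimal sketch using the upstream openstack.cloud collection; the cloud name is a placeholder and nothing here is taken from the customer environment described below), the same modules can query any of the OpenStack clouds defined in clouds.yaml:

# Requires the OpenStack collection and SDK, for example:
#   ansible-galaxy collection install openstack.cloud
---
- name: Query an OpenStack cloud through the openstack.cloud collection
  hosts: localhost
  gather_facts: false
  tasks:
    - name: List compute instances on the selected cloud
      openstack.cloud.server_info:
        cloud: telco-openstack-1     # placeholder entry from clouds.yaml
      register: result

    - name: Show what the cloud returned
      ansible.builtin.debug:
        var: result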

Bringing the values and benefits of Ansible automation to day-to-day Telco NFV operations and deployments helps avoid a lot of manual work, saves time, improves consistency, and frees existing resources from repetitive tasks so they can focus on innovation.

The following is an example of an Asia Pacific Telco using Ansible to implement NFV automation.



Background

The Telco customer has multiple Red Hat OpenStack Platform clouds from different vendors (Ericsson, Huawei, Nokia, and VMware) and would like to have an automation tool that acts as a single pane of glass to fill some gaps in Continue reading

CFP for DockerCon LIVE 2021 Now Open!

Ahoy! You can now submit your talk proposal for DockerCon LIVE 2021!

Taking place May 27, 2021, DockerCon brings together the entire community of Docker developers, contributors and partners to share, teach, and collaborate to grow the understanding and capabilities of modern application developers. The Docker community is growing fast and is incredibly diverse, and our aim is to have a conference that reflects this growth and diversity. To that end, we’re announcing the CFP a bit earlier this year to substantially increase the number of submissions to review.

Share your know-how

Like last year’s edition, DockerCon LIVE will be 100% virtual. To allow for conversation and ensure a stress-free delivery for the speaker, session talks will be pre-recorded and played at a specific time during the conference. Speakers will be able to chat live with their audience while their recorded session is broadcast and be available to answer questions in real-time. We’re really excited about this format and we look forward to introducing a host of new interactive features that’ll ensure that speakers (new and seasoned) and attendees alike have an exceptional experience.

The theme of this year’s DockerCon is developer team collaboration in the new remote-first world.

Before Continue reading

Donating Docker Distribution to the CNCF

We are happy to announce that Docker has contributed Docker Distribution to the Cloud Native Computing Foundation (CNCF). Docker is committed to the Open Source community and open standards for many of our projects, and this move will ensure Docker Distribution has a broad group maintaining what is the foundation for many registries. 

What is Docker Distribution?

Distribution is the open source code that is the basis of the container registry that is part of Docker Hub, and also many other container registries. It is the reference implementation of a container registry and is extremely widely used, so it is a foundational part of the container ecosystem. This makes its new home in the CNCF highly appropriate.
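
To give a sense of how directly Distribution is consumed (a minimal sketch; the port, volume, and image tag below are ordinary defaults, not details from the post), the registry it implements can be run locally from the official registry image:

# docker-compose.yml (sketch): run the Distribution-based registry locally.
services:
  registry:
    image: registry:2                        # reference registry built from Distribution
    ports:
      - "5000:5000"                          # default registry port
    volumes:
      - registry-data:/var/lib/registry      # default blob storage location
volumes:
  registry-data: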

Docker Distribution was a major rewrite of the original Registry code, which was written in Python and based on a much earlier design that did not use content-addressed storage. The new version, written in Go, was designed as an extensible library so that different backends and subsystems could be built on it. Docker formed the Open Container Initiative (OCI) in 2015, in the Linux Foundation, in order to standardise the specifications for the container ecosystem, including the registry and image formats.

Why are we donating Docker Continue reading

Open Sourcing the Docker Hub CLI Tool

At Docker, we are committed to making developers’ lives easier, and to maintaining and extending our commitment to the Open Source community and open standards for many of our projects. We believe building new capabilities into the Docker Platform in partnership with our developer community and in full transparency leads to much better software.

Last December, we announced the release of a new experimental Docker Hub CLI tool, also known as hub-tool. This new CLI lets you explore, inspect and manage your content on Docker Hub as well as work with your teams and manage your account. We demonstrated it during the last Docker Community All Hands in December 2020.

This tool is already available with Docker Desktop, so if you are a Windows or Mac user you can try it now. For Linux users, we are pleased to announce that we open sourced the hub-tool code, and it can be found at https://github.com/docker/hub-tool. You can download the binary directly on the release page.

With the open sourcing of hub-tool we have also cut a new v0.3.0 release which includes the following new features:

  • Added an optional argument to the account info command to check the status of Continue reading

New Docker and JFrog Partnership Designed to Improve the Speed and Quality of App Development Processes

Today, Docker and JFrog announced a new partnership to ensure developers can benefit from integrated innovation across both companies’ offerings. This partnership sets the foundation for ongoing integration and support to help organizations increase both the velocity and quality of modern app development. 

The objective of this partnership is simple: how can we ensure developers can get the images they want and trust, and make sure they can access them in whatever development process they are using from a centralized platform? To this end, the new agreement between Docker and JFrog ensures that developers can take advantage of their Docker Subscription and Docker Hub Official Images in their Artifactory SaaS and on-premise environments so they can build, share and run apps with confidence.

At a high level, a solution based on the Docker and JFrog partnership looks like this: 

In this sample architecture, developers can build apps with images, including Docker Official Images and images from popular OSS projects and software companies, from Docker Hub. As images are requested, they are cached into JFrog Artifactory, where images can be managed by corporate policies, cached for high performance, and mirrored across an organization’s infrastructure. Also, the images in Artifactory can take Continue reading

New Docker Reporting Provides Teams with Tools for Higher Efficiency and Better Collaboration

Today, we are very excited to announce the release of Audit Log, a new capability that provides the administrators of Docker Team subscription accounts with a chronological report of their team activities. The Audit Log is an unbiased system of record, displaying all the status changes for Docker organizations, teams, repos, and tags. As a tracking tool for all team activities, it creates a central historical repository of actionable insights that can be used to diagnose incidents, provide a record of app lifecycle milestones and changes, and provide a view into events, creating audit trails for regulatory compliance reviews. The Audit Log is available for Team subscription accounts and, at this point, is not included with Free or Pro subscriptions.

Some typical scenarios where Audit Log will play a key role include:  

  • When several team members are collaborating on delivering a project, Audit Log creates a list of activities that becomes a ‘source of truth’ to validate which tags got deleted and which tags got pushed into repos, when these activities happened and which team members triggered them. 
  • Audit Log provides knowledge base continuity, delivering information on projects completed earlier when new team members need to familiarize themselves with work done Continue reading