Archive

Category Archives for "Systems"

Supporting Open Source Projects at Docker

At Docker, we are committed to building a platform that enables collaboration and innovation within the development community. Last month, we announced the launch of a special program to expand our support for Open Source projects that use Docker. Projects that meet the program’s requirements (i.e., they must be open source and non-commercial) can request to have their respective OSS namespaces whitelisted and see their data-storage and data-egress restrictions lifted.

The projects we’re supporting, and the organizations behind them, are as diverse as they are numerous, ranging from independent researchers developing frameworks for machine learning, to academic consortia collecting environmental data, to human rights NGOs building encryption tools. To date, we’re thrilled to see that more than 80 non-profit organizations, large and small, from the four corners of the world have joined the program.

Here are but a few diverse projects we’re supporting: 

The Vaccine Impact Modelling Consortium aims to deliver a more sustainable, efficient, and transparent approach to generating disease burden and vaccine impact estimates.

farmOS is a web-based application for farm management, planning, and record keeping. It is developed by a community of farmers, developers, researchers, and organizations with the aim of providing a standard Continue reading

Using NetBox for Ansible Source of Truth

Here you will learn about NetBox at a high level, how it works to become a Source of Truth (SoT), and look into the use of the Ansible Content Collection, which is available on Ansible Galaxy. The goal is to show some of the capabilities that make NetBox a terrific tool and why you will want to use NetBox as your network Source of Truth for automation!


Source of Truth

Why a Source of Truth? The Source of Truth is where you go to get the intended state of a device. There does not need to be a single Source of Truth for everything, but you should have a single Source of Truth per data domain, often referred to as the System of Record (SoR). For example, if you have a database that maintains your physical sites and is used by teams outside of the IT domain, that database should be the Source of Truth for physical sites. You can aggregate the data from the physical site Source of Truth into other data sources for automation. Just be aware that when it comes time to collect that data, it should come from its original Source of Truth.
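The idea of one Source of Truth per data domain, aggregated into a single inventory for automation, can be sketched in a few lines of Python. The domain names and records below are purely hypothetical, standing in for systems like a facilities database and NetBox:

```python
# Each data domain has exactly one authoritative source (its System of Record).
# Hypothetical data: sites come from a facilities database, devices from NetBox.
sites_sot = {"nyc": {"region": "us-east"}, "sfo": {"region": "us-west"}}
devices_sot = {"rtr1": {"site": "nyc", "role": "router"}}

def build_inventory(sites, devices):
    """Aggregate per-domain Sources of Truth into one automation inventory."""
    inventory = {}
    for name, device in devices.items():
        site = sites[device["site"]]  # site facts come only from the site SoR
        inventory[name] = {**device, "region": site["region"]}
    return inventory

print(build_inventory(sites_sot, devices_sot))
# {'rtr1': {'site': 'nyc', 'role': 'router', 'region': 'us-east'}}
```

The aggregate is convenient for automation, but each fact still has a single authoritative home.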

The first step in creating a network automation Continue reading

What developers need to know about Docker, Docker Engine, and Kubernetes v1.20

The latest version of Kubernetes, v1.20.0-rc.0, is now available. The Kubernetes project plans to deprecate Docker Engine support in the kubelet, and support for dockershim will be removed in a future release, probably late next year. The net/net is that support for your container images built with Docker tools is not being deprecated, and they will still work as before.

Even better news, however, is that Mirantis and Docker have agreed to partner to maintain the shim code standalone outside Kubernetes, as a conformant CRI interface for Docker Engine. We will start with the great initial prototype from Dims, at https://github.com/dims/cri-dockerd, and continue to make it available as an open source project at https://github.com/Mirantis/cri-dockerd. This means that you can continue to build Kubernetes based on Docker Engine as before, just switching from the built-in dockershim to the external one. Docker and Mirantis will work together to make sure it continues to work as well as before, passes all the conformance tests, and works just like the built-in version did. Docker will continue to ship this shim in Docker Desktop as this gives a great developer experience, and Mirantis will be Continue reading

Join Docker’s Community All-Hands

Openness and transparency are key pillars of a healthy open source community. We’re constantly exploring ways to better engage the Docker community, to better incorporate feedback and to better foster participation.

To this end, we’re very excited to host our first Community All-Hands on Thursday December 10th at 8am PST / 5pm CET. This one-hour event will be a unique opportunity for Docker staff and the broader Docker community to come together for company and product updates, live demos, community shout-outs and a Q&A. 

The All-Hands will include updates from:

  • Scott Johnston (CEO, Docker) who will go over Docker’s strategic vision and where the company is heading in 2021 and beyond
  • Donnie Berkholz (VP of Products, Docker) who will walk us through our product roadmap  
  • Jean-Laurent de Morlhon (VP of Engineering, Docker) who will provide an inside look at engineering

We’ll then dive into specific product updates around Docker Desktop, Hub and Developer Tooling, followed by two awesome live demos where we’ll show cool new features and integrations. 

A Community All-Hands is not complete without a community update. We will announce new community initiatives and recognize outstanding contributors who have gone above and beyond to help push Docker Continue reading

Bootstrapping a Cluster API Management Cluster

Cluster API is, if you’re not already familiar, an effort to bring declarative Kubernetes-style APIs to Kubernetes cluster lifecycle management. (I encourage you to check out my introduction to Cluster API post if you’re new to Cluster API.) Given that it is using Kubernetes-style APIs to manage Kubernetes clusters, there must be a management cluster with the Cluster API components installed. But how does one establish that management cluster? This is a question I’ve seen pop up several times in the Kubernetes Slack community. In this post, I’ll walk you through one way of bootstrapping a Cluster API management cluster.

The process I’ll describe in this post is also described in the upstream Cluster API documentation (see the “Bootstrap & Pivot” section of this page).

At a high level, the process looks like this:

  1. Create a temporary bootstrap cluster.
  2. Make the bootstrap cluster into a temporary management cluster.
  3. Use the temporary management cluster to establish a workload cluster (through Cluster API).
  4. Convert the workload cluster into a permanent management cluster.
  5. Remove the temporary bootstrap cluster.

The following sections describe each of these steps in a bit more detail.
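As a rough sketch, assuming the Docker infrastructure provider purely for illustration (cluster names and the Kubernetes version are placeholders, and exact clusterctl flags vary by provider and release), the flow looks something like this:

```shell
# 1. Create a temporary bootstrap cluster
kind create cluster --name bootstrap

# 2. Make it a temporary management cluster
clusterctl init --infrastructure docker

# 3. Use it to create the workload cluster that will become the
#    permanent management cluster
clusterctl config cluster mgmt --kubernetes-version v1.19.1 | kubectl apply -f -
clusterctl get kubeconfig mgmt > mgmt.kubeconfig

# 4. Install the Cluster API components there, then pivot the objects over
clusterctl init --kubeconfig mgmt.kubeconfig --infrastructure docker
clusterctl move --to-kubeconfig mgmt.kubeconfig

# 5. Remove the temporary bootstrap cluster
kind delete cluster --name bootstrap
```

The `clusterctl move` step is the "pivot": it transfers the Cluster API objects from the bootstrap cluster into the new permanent management cluster.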

Create a Temporary Bootstrap Cluster

The first step is Continue reading

Some Site Updates

For the last three years, the site has been largely unchanged with regard to the structure and overall function even while I continue to work to provide quality technical content. However, time was beginning to take its toll, and some “under the hood” work was needed. Over the Thanksgiving holiday, I spent some time updating the site, and there are a few changes I wanted to mention.

  1. The site has been updated to use a much more recent version of Hugo. This change is largely invisible to readers, but a couple of the site changes are related to this upgrade. Specifically…
  2. Although the main RSS feed for the site (found here) remains a full content feed, I ran into issues getting Hugo to use my custom RSS templates for generating the category and tag feeds (for example, the RSS feed for the “Tutorial” category, or the RSS feed for the “Kubernetes” tag). You’ll now find that the category and tag feeds are summary feeds only as opposed to full content feeds. I do intend to restore them to full content feeds as soon as possible.
  3. Finally, I’ve updated the “metadata line” when viewing a single article Continue reading

The Docker Developer Guide to AWS re:Invent

This is the busiest time of the year for developers targeting AWS. Just over a week ago we announced the GA of Docker Compose for AWS, and this week we’re getting ready to virtually attend AWS re:Invent. re:Invent is the annual gathering of the entire AWS community and ecosystem to learn what’s new, get the latest tips and tricks, and connect with peers from around the world. Instead of the traditional week-long gathering of 60,000 attendees in Las Vegas, the event has pivoted to a flexible three-week online conference. This year the event is free, and anyone can participate on their own schedule. This blog post covers highlights of the event so Docker developers can get the most from re:Invent.


In the kickoff keynote by CEO Andy Jassy, AWS announced a number of new features for container developers, including a new capability, ECS Anywhere, which allows Amazon Elastic Container Service (ECS) to run on-prem and in the cloud to support hybrid computing workloads as well as the launch of AWS Proton, an end-to-end pipeline to deliver containerized and microservices applications. Separately, AWS also announced a new public Elastic Container Registry (ECR) and gallery today. We’re excited to see a Continue reading

Docker and AWS Resources for Developers

AWS re:Invent kicks off this week and if you are anything like us, we are super geeked out to watch and attend all the talks that are lined up for the next three weeks.

To get ready for re:Invent, we’ve gathered some of our best resources and expert guidance to get the most out of the Docker platform when building apps for AWS. Check out these blogs, webinars and DockTalks from the past few weeks to augment your re:Invent experience over the next three weeks:

Expert Guidance from the Docker Team

  • AWS Howdy Partner
    • AWS Howdy Partner Twitch Show: Back in July, I (@pmckee) was a guest on the AWS Howdy Partner show hosted on Twitch. Follow along as we walk through deploying a multi-container application to AWS ECS using the Docker CLI.

Assigning Node Labels During Kubernetes Cluster Bootstrapping

Given that Kubernetes is a primary focus of my day-to-day work, I spend a fair amount of time in the Kubernetes Slack community, trying to answer questions from users and generally be helpful. Recently, someone asked about assigning node labels while bootstrapping a cluster with kubeadm. I answered the question, but afterward started thinking that it might be a good idea to also share that same information via a blog post—my thinking being that others who also had the same question aren’t likely to be able to find my answer on Slack, but would be more likely to find a published blog post. So, in this post, I’ll show how to assign node labels while bootstrapping a Kubernetes cluster.

The “TL;DR” is that you can use the kubeletExtraArgs field in a kubeadm configuration file to pass the node-labels argument to the kubelet, which allows you to assign node labels when kubeadm bootstraps the node. Read on for more details.
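For instance, a kubeadm configuration file along these lines (the API version matches kubeadm of that era; the label keys and values are purely illustrative) labels the node as it joins:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-labels: "environment=lab,rack=rack-1"
```

Pass this file to `kubeadm join --config`, and the kubelet registers the node with those labels already applied.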

Testing with Kind

kind is a great tool for testing this sort of configuration, since kind uses kubeadm to bootstrap its nodes. If you aren’t familiar with kind, I encourage you to visit the kind website; in Continue reading

Pausing Cluster API Reconciliation

Cluster API is a topic I’ve discussed here in a number of posts. If you’re not already familiar with Cluster API (also known as CAPI), I’d encourage you to check out my introductory post on Cluster API first; you can also visit the official Cluster API site for more details. In this short post, I’m going to show you how to pause the reconciliation of Cluster API cluster objects, a task that may be necessary for a variety of reasons (including backing up the Cluster API objects in your management cluster).

Since CAPI leverages Kubernetes-style APIs to manage Kubernetes cluster lifecycle, the idea of reconciliation is very important—it’s a core Kubernetes concept that isn’t at all specific to CAPI. This article on level triggering and reconciliation in Kubernetes is a great article that helps explain reconciliation, as well as a lot of other key concepts about how Kubernetes works.

When reconciliation is active, the controllers involved in CAPI are constantly evaluating desired state and actual state, and then reconciling differences between the two. There may be times when you need to pause this reconciliation loop. Fortunately, CAPI makes this pretty easy: there is a paused field that allows users Continue reading
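Setting that field on a Cluster object looks like this (the cluster name is a placeholder; `v1alpha3` was the Cluster API version at the time of writing):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: workload-cluster-1
spec:
  paused: true   # reconciliation of this cluster stops until the field is cleared
```

Clearing or removing `spec.paused` resumes reconciliation of the cluster and its associated objects.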

Technology Short Take 134

Welcome to Technology Short Take #134! I’m publishing a bit early this time due to the Thanksgiving holiday in the US. So, for all my US readers, here’s some content to peruse while enjoying some turkey (or whatever you’re having this year). For my international readers, here’s some content to peruse while enjoying dramatically lower volumes of e-mail because the US is on holiday. See, something for everyone!

Networking

Security

  • I’m glad to see this. Open source has become so critical to so many aspects of our computing infrastructure.
  • OpenCSPM looks like it could be quite a useful tool. I haven’t yet had time to dig in and get familiar with the details, but what I have seen so far looks good.
  • Uh oh…more hardware exploits.
  • The macOS OCSP fiasco generated quite a bit of attention and analysis (see here and here).

Cloud Computing/Cloud Management

New LibSSH Connection Plugin for Ansible Network Replaces Paramiko, Adds FIPS Mode Enablement

As Red Hat Ansible Automation Platform expands its footprint with a growing customer base, security continues to be an important aspect of organizations’ overall strategy. Red Hat regularly reviews and enhances the foundational codebase to follow better security practices. As part of this effort, we are introducing FIPS 140-2 readiness enablement by means of a newly developed Ansible SSH connection plugin that uses the libssh library. 

 

Ansible Network SSH Connection Basics

Since most network appliances don't support, or have limited capability for, local execution of third-party software, Ansible network modules are not copied to the remote host as they are for Linux hosts; instead, they run on the control node itself. Hence, Ansible networking can't use the typical Ansible SSH connection plugin that is used with Linux hosts. Furthermore, because of this behavior, the performance of the underlying SSH subsystem is critical. Not only does the new LibSSH connection plugin enable FIPS readiness, but it was also designed to be more performant than the existing Paramiko SSH subsystem.
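In practice, opting in to libssh for a network_cli connection is a matter of installing the `ansible-pylibssh` Python package on the control node and setting the `ansible_network_cli_ssh_type` variable. The hostnames and platform in this sketch are illustrative:

```ini
# inventory.ini
[routers]
rtr1.example.com

[routers:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.ios.ios
ansible_network_cli_ssh_type=libssh
```

With that variable set, network_cli uses the libssh transport instead of Paramiko; removing it falls back to the previous behavior.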


The top-level network_cli connection plugin, provided by the ansible.netcommon Collection (specifically ansible.netcommon.network_cli), provides an SSH-based connection to the network appliance. It in turn calls the Continue reading

Welcome Canonical to Docker Hub and the Docker Verified Publisher Program

Today, we are thrilled to announce that Canonical will distribute its free and commercial software through Docker Hub as a Docker Verified Publisher. Canonical and Docker will partner together to ensure that hardened free and commercial Ubuntu images will be available to all developer software supply chains for multi-cloud app development. 

Canonical is the publisher of the Ubuntu OS, and a global provider of enterprise open source software, for all use cases from cloud to IoT. Canonical Ubuntu is one of the most popular Docker Official Images on Docker Hub, with over one billion images pulled. With Canonical as a Docker Verified Publisher, developers who pull Ubuntu images from Docker Hub can be confident they get the latest images backed by both Canonical and Docker. 

The Ideal Container Registry for Multi-Cloud 

Canonical is the latest publisher to choose Docker Hub for globally sharing their container images. With millions of users, Docker Hub is the world’s largest container registry and ensures Canonical can reach their developers regardless of where they build and deploy their applications.

This partnership covers both free and commercial Canonical LTS images, so developers can confidently pull the latest images straight from the source without concern Continue reading

Docker Captain Take 5 – Ajeet Singh Raina

Docker Captains are select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. Today, we’re introducing “Docker Captains Take 5”, a regular blog series where we take a closer look at the Docker experts who share their knowledge online and offline around the world. A different Captain will be featured each time, and we will ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). To kick us off, we’re interviewing Ajeet Singh Raina, who has been a Docker Captain since 2016 and is a DevRel Manager at Redis Labs. He is based in Bangalore, India.

How/when did you first discover Docker?

It was 2013 when I first watched Solomon Hykes presenting “The Future of Linux Containers” at PyCon in Santa Clara. That video inspired me to write my first blog post on Docker, and the rest is history.

What is your favorite Docker command?

The docker buildx CLI  is one of my favorite commands. It allows you to Continue reading
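As an illustration of why buildx is such a favorite (the builder name, image name, and platforms below are placeholders), a single command can build and push a multi-architecture image:

```shell
# Create and select a builder instance that supports multi-platform builds
docker buildx create --name multi --use

# Build for two architectures and push the result as one multi-arch image
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag myuser/myapp:latest \
  --push .
```

The resulting manifest list lets `docker pull` automatically select the right image for the pulling machine's architecture.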

Docker Compose for Amazon ECS Now Available

Docker is pleased to announce that as of today the integration between Docker Compose and Amazon ECS has reached V1 and is now GA!

We started this work way back at the beginning of the year with our first step – moving the Compose specification into a community-run project. Then in July we announced how we were working together with AWS to make it easier to deploy Compose applications to ECS using the Docker command line. As of today, all Docker Desktop users have the stable ECS experience available to them, allowing developers to use docker compose commands with an ECS context to run their containers against ECS.
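In practice, the new workflow is just a context switch (the context name here is a placeholder; you'll be prompted for AWS credentials or a profile when the context is created):

```shell
# Create a Docker context backed by Amazon ECS
docker context create ecs myecscontext

# Switch to it and deploy the Compose application to ECS
docker context use myecscontext
docker compose up
```

Switching the context back to `default` runs the same Compose file locally again.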

As part of this we want to thank the AWS team who have helped us make this happen: Carmen Puccio, David Killmon, Sravan Rengarajan, Uttara Sridhar, Massimo Re Ferre, Jonah Jones and David Duffey.

Getting started with Docker Compose & ECS

As an existing ECS user or a new starter, all you will need to do is update to the latest Docker Desktop Community version (2.5.0.1 or greater) and store your image on Docker Hub so you can deploy it (you can get started with Hub here Continue reading

Now Available: Red Hat Ansible Automation Platform 1.2

Red Hat Ansible Automation Platform 1.2 is now generally available with increased focus on improving efficiency, increasing productivity and controlling risk and expenses.  While many IT infrastructure engineers are familiar with automating compute platforms, Ansible Automation Platform is the first holistic automation platform to help manage, automate and orchestrate everything in your IT infrastructure from edge to datacenter.  To download the newest release or get a trial license, please sign up on http://red.ht/try_ansible.


An automation platform for mission critical workloads

The Ansible project is a remarkable open source project with hundreds of thousands of users encompassing a large community.  Red Hat extends this community and open source developer model to innovate, experiment and incorporate feedback to satisfy our customer challenges and use cases.  Red Hat Ansible Automation Platform transforms Ansible and many related open source projects into an enterprise grade, multi-organizational automation platform for mission-critical workloads.  In modern IT infrastructure, automation is no longer a nice-to-have; it’s often now a requirement to run, operate and scale how everything is managed: including network, security, Linux, Windows, cloud and more. 

Ansible Automation Platform includes a RESTful API for seamless integration with existing IT tools Continue reading

Introducing the Ansible Content Collection for Red Hat OpenShift

Increasing business demands are driving the need for automation to support rapid, yet stable and reliable deployments of applications and supporting infrastructure.  Kubernetes and cloud-native tools have quickly emerged as the enabling technologies essential for organizations to build the scalable open hybrid cloud solutions of tomorrow. This is why Red Hat has developed the Red Hat OpenShift Container Platform (OCP) to enable enterprises to meet these emerging business and technical challenges. Red Hat OpenShift brings together Kubernetes and other cloud-native technologies into a single, consistent platform that has been fine-tuned and enhanced for the enterprise. 

There are many similarities to how Red Hat OpenShift and Red Hat Ansible Automation Platform approach their individual problem domains that make a natural fit when we bring the two together to help make hard things easier through automation and orchestration.

We’ve released the Ansible Content Collection for Red Hat OpenShift (redhat.openshift) to enable the automation and management of Red Hat OpenShift clusters. This is the latest addition to the certified content available to subscribers of Red Hat Ansible Automation Platform in the Ansible Automation Hub.

In this blog post, we will go over what you’ll find in redhat.openshift Continue reading

Rate Limiting by the Numbers

As a critical part of Docker’s transition into sustainability, we’ve been gradually rolling out limits on docker pulls to the heaviest users of Docker Hub. As we near the end of the implementation of the rate limits, we thought we’d share some of the facts and figures behind our effort. Our goal is to ensure that Docker becomes sustainable for the long term, while continuing to offer developers 100% free tools to build, share, and run their applications.

We announced this plan in August with an effective date of November 1. We also shared that “roughly 30% of all downloads on Hub come from only 1% of our anonymous users,” illustrated in this chart:

This shows the dramatic impact that a very small percentage of anonymous, free users have on all of Docker Hub. That excessive usage by just 1%–2% of our users results not only in an unsustainable model for Docker but also slows performance for the other 98%–99% of the 11.3 million developers, CI services, and other platforms using Docker Hub every month. Those developers rely upon us to save and share their own container images, as well as to pull images from Docker Verified Publishers Continue reading

Apple Silicon M1 Chips and Docker

Docker was excited to see the new Macs featuring Apple silicon and the M1 chip revealed at Apple’s ‘One More Thing’ event on November 10th. At Docker we have been looking at the new hypervisor features and support that are required for Mac to continue to delight our millions of customers. We saw the first spotlight of these efforts at Apple WWDC in June, when Apple highlighted Docker Desktop on stage. Our goal at Docker is to provide the same great experience on the new Macs as we do today for our millions of users on Docker Desktop for Mac, and to make this transition as seamless as possible.

Building the right experience for our customers means getting quite a few things right before we push a release. Although Apple has released Rosetta 2 to help move applications over to the new M1 chips, this does not get us all the way with Docker Desktop. Under the hood of Docker Desktop, we run a virtual machine; to achieve this on Apple’s new hardware, we need to move onto Apple’s new hypervisor framework. We also need to do all the plumbing that provides the core experience of Docker Continue reading

Taking Your App Live with Docker and the Uffizzi App Platform


Tune in December 10th at 1pm EST for our
Live DockTalk:  Simplify Hosting Your App in the Cloud with Uffizzi and Docker

We’re excited to be working with Uffizzi on this joint blog.  Docker and Uffizzi have very similar missions that naturally complement one another.  Docker helps you bring your ideas to life by reducing the complexity of application development and Uffizzi helps you bring your ideas to life by reducing the complexity of cloud application hosting. 

This blog is a step-by-step guide to setting up automated builds from your GitHub repo via Docker Hub and enabling Continuous Deployment to your Uffizzi app hosting environment.




Prerequisites
To complete this tutorial, you will need the following:

Docker Overview

Docker is an open platform for developing, shipping, and running applications. Docker containers separate your applications from your infrastructure so you can deliver software quickly. 

With Docker, you can manage your infrastructure in the same ways you manage your applications. By Continue reading
