Archive

Category Archives for "Systems"

Our Favourite Picks from the KubeCon Europe 2020 Schedule

Last Wednesday, the CNCF released the KubeCon Europe 2020 schedule. There are so many talks at KubeCon that it can be daunting just to decide which ones to see! Here are some talks from the team at Docker, plus a few others we think will be particularly interesting. Looking forward to seeing you in Amsterdam!

Simplify Your Cloud-Native Application Packaging and Deployments – Chris Crone

Chris is an engineer in our Paris office and is also co-executive director of the CNAB project. CNAB (Cloud Native Application Bundle) is a specification for bundling cloud-native applications, which can consist of multiple containers, into a single object that can be pushed to a registry. Open source projects using CNAB, like Docker App and Porter, let you package apps that would normally require multiple tools to deploy (Terraform, Helm, shell scripts, and so on) into a single, tooling-agnostic format. These packages can then be shared using existing container registries and used with other CNAB-compliant tools. This can really simplify cloud-native development.

Sharing is Caring! Push your Cloud Application to an OCI Registry – Silvin Lubecki & Djordje Lukic

Did you know that you can store anything in a container registry? Continue reading

Building an Isolated Kubernetes Cluster on AWS

In this post, I’m going to explore what’s required in order to build an isolated—or Internet-restricted—Kubernetes cluster on AWS with full AWS cloud provider integration. Here the term "isolated" means "no Internet access." I was initially using the term "air-gapped," but these clusters aren’t technically air-gapped, so "isolated" (or "Internet-restricted") seemed like a better descriptor. Either way, the intent of this post is to help guide readers through the process of setting up a Kubernetes cluster on AWS—with full AWS cloud provider integration—using systems that have no Internet access.

At a high level, the process looks something like this:

  1. Build preconfigured AMIs that you’ll use for the instances running Kubernetes.
  2. Stand up your AWS infrastructure, including the necessary VPC endpoints for AWS services (a rough sketch of this step follows the list).
  3. Preload any additional container images, if needed.
  4. Bootstrap your cluster using kubeadm.
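
To make step 2 a bit more concrete, here is a rough sketch of creating an interface endpoint for the EC2 API with the AWS CLI. This sketch is not taken from the post; the region, VPC, subnet, and security group IDs are placeholders, and similar endpoints for ECR (plus a gateway endpoint for S3) are typically required so that isolated nodes can reach the AWS APIs and pull images.

$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-west-2.ec2 \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled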

It’s important to note that this guide does not replace my earlier post on setting up an AWS-integrated Kubernetes cluster on AWS (written for 1.15, but valid for 1.16 and 1.17). All the requirements in that post still apply here. If you haven’t read that post or aren’t familiar with the requirements for setting up a Kubernetes cluster with the AWS Continue reading

Changes to dockerproject.org APT and YUM repositories

While many people know about Docker, not that many know its history and where it came from. Docker started as a project at dotCloud, a PaaS company founded by Solomon Hykes. The project became so successful that dotCloud renamed itself Docker, Inc. and focused on Docker as its primary product.

As the “Docker project” grew from being a proof of concept shown off at various meetups and at PyCon in 2013 to a real community project, it needed a website where people could learn about it and download it. This is why the “dockerproject.org” and “dockerproject.com” domains were registered.

With the move from dotCloud to Docker, Inc. and the shift of focus onto the Docker product, it made sense to move everything to the “docker.com” domain. This is where you now find the company website and documentation, and of course the APT and YUM repositories, which have been hosted at download.docker.com since 2017.

On the 31st of March 2020, we will be shutting down the legacy APT and YUM repositories hosted at dockerproject.org and dockerproject.com. These repositories haven’t been updated with the latest releases of Docker and Continue reading

Introducing the Docker Index: Insight from the World’s Most Popular Container Registry

8 billion pulls! Yes, that’s billion with a B! This number represents a little-known level of activity and innovation happening across the community and ecosystem, all in just one average month. How do we know? From the number of pulls and most popular images to top architectures, data from Docker Hub and Docker Desktop provide a window into application development trends in the age of containers.

Today, we are sharing these findings in something we call the Docker Index – a look at developers’ preferences and trends, as told through anonymized data from five million Docker Hub and two million Docker Desktop users, as well as countless other developers engaging with content on Hub.

At Docker, we’re always looking for ways to make life easier for developers. Understanding the what, why and how behind these projects is imperative. As these trends evolve, we will continue to share updates on the findings.

Whether containers will become mainstream is no longer a topic of debate. As the Docker Index data suggests, containers have become a mainstay to how modern, distributed apps are built and shared so they can run anywhere. 

Usage is showing no signs of slowing Continue reading

How We Solved a Report on docker-compose Performance on macOS Catalina


As a Docker Compose maintainer, my daily duty is to check for newly reported issues and try to help users through misunderstandings and possible underlying bugs. Sometimes issues are very well documented; sometimes they are little more than a “please help” message. And sometimes they look really weird and can lead to funny investigations. Here is the story of how we solved one such report…

A one-line bug report

An issue was reported as “docker-compose super slow on macOS Catalina” – no version, no details. How should I prioritize this? I don’t even know if the reporter is using the latest version of the tool – the issue doesn’t follow the bug reporting template; it’s just a one-liner. But for some reason, I decided to take a look at it anyway and diagnose the issue.

Without any obvious explanation for the super-slowness, I decided to take a risk and upgrade my own MacBook to macOS Catalina. I was able to reproduce a significant slowdown in docker-compose execution, waiting several seconds for the very first line to be printed on the console, even just to display usage for an invalid command.

Investigating the issue

In the meantime, some Continue reading

January Virtual Meetup Recap: Improve Image Builds Using the Features in BuildKit

This is a guest post by Docker Captain Nicholas Dille, a blogger, speaker and author with 15 years of experience in virtualization and automation. He works as a DevOps Engineer at Haufe Group, a digital media company located in Freiburg, Germany. He is also a Microsoft Most Valuable Professional.

In this virtual meetup, I share how to improve image builds using the features in BuildKit. BuildKit is an alternative builder with great features like caching, concurrency and the ability to separate your image build into multiple stages – which is useful for separating the build environment from the runtime environment. 

The default builder in Docker is the legacy builder, which is recommended when you need support for Windows. In nearly every other case, BuildKit is the better choice because of its faster build times, support for custom front-ends, the ability to build stages in parallel, and other features.

Catch the full replay below and view the slides to learn:

  • Build cache in BuildKit – instead of relying on a locally present image, BuildKit will pull the appropriate layers of the previous image from a registry (a rough sketch of this follows the list).
  • How BuildKit helps prevent disclosure of credentials by allowing files to Continue reading
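
As a rough illustration of those two features, here is a sketch of my own (not taken from the meetup); the registry name, image tags, and secret ID are made up, and the secret example assumes the Dockerfile contains a matching RUN --mount=type=secret,id=npmrc instruction (and, depending on your Docker version, a # syntax=docker/dockerfile:1.0-experimental directive at the top):

$ DOCKER_BUILDKIT=1 docker build --build-arg BUILDKIT_INLINE_CACHE=1 \
    -t registry.example.com/myapp:latest .
$ docker push registry.example.com/myapp:latest

# On another machine, reuse the pushed image as a remote cache source
$ DOCKER_BUILDKIT=1 docker build --cache-from registry.example.com/myapp:latest \
    -t registry.example.com/myapp:next .

# Pass a credential file into the build without it ending up in any image layer
$ DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc \
    -t registry.example.com/myapp:latest .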

Community Collaboration on Notary v2

One of the most productive meetings I had at KubeCon in San Diego last November was with Docker, Amazon and Microsoft to plan a collaboration around a new version of the CNCF project Notary. We held the Notary v2 kickoff meeting a few weeks later in Seattle in the Amazon offices.

Emphasising that this is a cross-industry collaboration, we had eighteen people in the room (with more dialed in) from Amazon, Microsoft, Docker, IBM, Google, Red Hat, Sylabs and JFrog. This represented all the container registry providers and developers, other than the VMware Harbor developers, who unfortunately could not make it in person. Sadly, we forgot to take a picture of everyone!

This consensus and community involvement are important because of the aims of Notary v2. But let’s go back a bit, as some of you may not know what Notary is and what it’s for.

The Notary project was originally started at Docker back in 2015 to provide a general signing Continue reading

Creating an AWS VPC Endpoint with Pulumi

In this post, I’d like to show readers how to use Pulumi to create a VPC endpoint on AWS. Until recently, I’d heard of VPC endpoints but hadn’t really taken the time to fully understand what they were or how they might be used. That changed when I was presented with a requirement for the AWS EC2 APIs to be available within a VPC that did not have Internet access. As it turns out—and as many readers are probably already aware—this is one of the key use cases for a VPC endpoint (see the VPC endpoint docs). The sample code I’ll share below shows how to create a VPC endpoint programmatically in infrastructure-as-code scenarios.

For those who aren’t familiar, Pulumi lets users apply any of a number of general-purpose programming languages to infrastructure-as-code scenarios. In this example, I’ll be using TypeScript, but Pulumi also supports JavaScript and Python (with Go support in the works). (Side note: I intend to start working with the Go support in Pulumi when it becomes generally available as a means of helping accelerate my own Go learning.)

Here’s a snippet of TypeScript code that Continue reading

Manually Loading Container Images with containerd

I recently had a need to manually load some container images into a Linux system running containerd (instead of Docker) as the container runtime. I say “manually load some images” because this system was isolated from the Internet, and so simply running a container and having containerd automatically pull the image from an image registry wasn’t going to work. The process for working around the lack of Internet access isn’t difficult, but it didn’t seem to be documented anywhere that I could readily find with a general web search. I thought publishing it here might help individuals seeking this information in the future.

For an administrator- or operations-minded user, the primary means of interacting with containerd is the ctr command-line tool. This tool uses a command syntax very similar to Docker’s, so users familiar with Docker should be able to get productive with ctr pretty easily.
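
For example, pulling, exporting, and importing images with ctr looks a lot like the equivalent Docker workflow. This is a hedged sketch rather than the exact commands from the post; the image reference is illustrative, and depending on how containerd is being used you may also need to specify a namespace with -n (for instance, -n k8s.io for images consumed by Kubernetes).

$ ctr images pull docker.io/library/nginx:1.17
$ ctr images export nginx.tar docker.io/library/nginx:1.17
$ ctr images import nginx.tar
$ ctr images ls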

In my specific example, I had a bastion host with Internet access, and a couple of hosts behind the bastion that did not have Internet access. It was the hosts behind the bastion that needed the container images preloaded. So, I used the ctr tool to fetch and prepare the images on the bastion, then Continue reading

Thinking and Learning About API Design

In July of 2018 I talked about Polyglot, a very simple project I’d launched whose only purpose was to bolster my software development skills. Work on Polyglot has been sporadic at best, coming in fits and spurts, and has thus far focused on building a model for the APIs that would be found in the project. Since I am not a software engineer by training (I have no formal background in software development), all of this is new to me, and I’ve found myself encountering lots of questions about API design along the way. In the interest of helping others who may be in a similar situation, I thought I’d share a bit here.

I initially approached the API in terms of how I would encode (serialize?) data on the wire using JSON (I’d decided on using a RESTful API with JSON over HTTP). Starting with how I anticipated storing the data in the back-end database, I created a representation of how a customer’s information would be encoded (serialized) in JSON:

{
    "customers": [
        {
            "customerID": "5678",
            "streetAddress": "123 Main Street",
            "unitNumber": "Suite 123",
            "city": "Anywhere",
            "state": "CO",
            "postalCode": "80108",
            "telephone": "3035551212",
            "primaryContactFirstName": "Scott",
            "primaryContactLastName": "Lowe"
        }
    ]
 Continue reading

Faster builds in Docker Compose 1.25.1 thanks to BuildKit Support

One of the most requested features for the docker-compose tool is definitely support for building with BuildKit, an alternative builder with great capabilities like caching, concurrency, and the ability to use custom front-ends, just to mention a few… Ahhh, and that nice blue output! The good news is that Docker Compose 1.25.1 – released in early January – includes BuildKit support!

BuildKit support for Docker Compose is actually achieved by redirecting the docker-compose build to the Docker CLI with a limited feature set.

Enabling BuildKit builds

To enable this, we have to align some stars.

First, it requires that the Docker CLI binary is present in your PATH:

$ which docker
/usr/local/bin/docker

Second, docker-compose has to be run with the environment variable COMPOSE_DOCKER_CLI_BUILD set to 1, as in:

$ COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build

This instruction tells docker-compose to use the Docker CLI when executing a build. You should see the same build output, but starting with the experimental warning.

As docker-compose passes its environment variables to the Docker CLI, we can also tell the CLI to use BuildKit instead of the default builder. To accomplish that, we can execute this:

$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build
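
If you don’t want to prefix every invocation, you can also export both variables in your shell session (or shell profile) so that subsequent docker-compose commands pick them up automatically; this is just standard environment variable handling, nothing Compose-specific:

$ export COMPOSE_DOCKER_CLI_BUILD=1
$ export DOCKER_BUILDKIT=1
$ docker-compose build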

A Continue reading

Docker Desktop release 2.2 is here!

We are excited to announce that we released a new Docker Desktop version today! Thanks to user feedback on the new features initially released in the Edge channel, we are now ready to promote them to Stable.

Before getting into each feature in detail, let’s see what’s new in Docker Desktop 2.2:

  • WSL 2 as a technical preview, allowing access to full system resources, improved boot time, access to Linux workspaces and improved file system performance
  • A new file sharing implementation for Windows, improving the developer inner loop user experience
  • A new integrated Desktop Dashboard, to see at a glance your locally running containers and Compose applications, and easily manage them

WSL 2 – New architecture 

Back in July, we released a technical preview of Docker Desktop for WSL 2 on the Edge channel, which included an experimental integration of Docker running on an existing user Linux distribution. We learnt from that experience and re-architected our solution (covered in Simon’s blog).

This new architecture for WSL 2 allows users to: 

  • Use Kubernetes on the WSL 2 backend
  • Work with just WSL 2 and turn off the traditional Hyper-V VM while working with WSL 2
  • Continue reading

Capturing Logs in Docker Desktop

Docker Desktop runs a Virtual Machine to host Docker containers. Each component within the VM (including the Docker engine itself) runs as a separate isolated container. This extra layer of isolation introduces an interesting new problem: how do we capture all the logs so we can include them in Docker Desktop diagnostic reports? If we do nothing, the logs will be written separately inside each individual container, which obviously isn’t very useful!

The Docker Desktop VM boots from an ISO which is built using LinuxKit from a list of Docker images together with a list of capabilities and bind mounts. For a minimal example of a LinuxKit VM definition, see https://github.com/linuxkit/linuxkit/blob/master/examples/minimal.yml — more examples and documentation are available in the LinuxKit repository. The LinuxKit VM in Docker Desktop boots in two phases: in the first phase, the init process executes a series of one-shot “on-boot” actions sequentially using runc to isolate them in containers. These actions typically format disks, enable swap, configure sysctl settings and network interfaces. The second phase contains “services” which are started concurrently and run forever as containerd tasks.

The following diagram shows a simplified high-level view of the boot process:

By default Continue reading

Technology Short Take 123

Welcome to Technology Short Take #123, the first of 2020! I hope that everyone had a wonderful holiday season, but now it’s time to jump back into the fray with a collection of technical articles from around the Internet. Here’s hoping that I found something useful for you!

Networking

  • Eric Sloof mentions the NSX-T load balancing encyclopedia (found here), which intends to be an authoritative resource for NSX-T load balancing configuration and management.
  • David Gee has an interesting set of articles exploring service function chaining in service mesh environments (part 1, part 2, part 3, and part 4).

Servers/Hardware

Security

  • On January 13, Brian Krebs discussed the critical flaw (a vulnerability in crypt32.dll, a core Windows cryptographic component) that was rumored Continue reading

Removing Unnecessary Complexity

Recently, I’ve been working to remove unnecessary complexity from my work environment. I wouldn’t say that I’m going full-on minimalist (not that there’s anything wrong with that), but I was beginning to feel like maintaining this complexity was taking focus, willpower, and mental capacity away from other, more valuable, efforts. Additionally, given the challenges I know lie ahead of me this year (see here for more information), I suspect I’ll need all the focus, willpower, and mental capacity I can get!

When I say “unnecessary complexity,” by the way, I’m referring to added complexity that doesn’t bring any real or significant benefit. Sometimes there’s no getting around the complexity, but when that complexity doesn’t create any value, it’s unnecessary in my definition.

Primarily, this “reduction in complexity” shows up in three areas:

  1. My computing environment
  2. My home office setup
  3. My lab resources

My Computing Environment

Readers who have followed me for more than a couple years know that I migrated away from macOS for about 9 months in 2017 (see here for a wrap-up of that effort), then again in 2018 when I joined Heptio (some details are available in this update). Since switching to Fedora on a Lenovo Continue reading

How Useful Is Ansible in a Cloud-Native Kubernetes Environment?


A question I've been hearing a lot lately is "why are you still using Ansible in your Kubernetes projects?" Followed often by "what's the point of writing your book Ansible for Kubernetes when Ansible isn't really necessary once you start using Kubernetes?"

I spent a little time thinking about these questions, and the motivation behind them, and wanted to write a blog post addressing them, because it seems a lot of people may be confused about what Kubernetes does, what Ansible does, and why both are necessary technologies in a modern business migrating to a cloud-native technology stack (or even a fully cloud-native business).

One important caveat to mention upfront, and I quote directly from my book:

While Ansible can do almost everything for you, it may not be the right tool for every aspect of your infrastructure automation. Sometimes there are other tools which may more cleanly integrate with your application developers' workflows, or have better support from app vendors.

We should always guard against the golden hammer fallacy. No single infrastructure tool—not even the best Kubernetes-as-a-service platform—can fill the needs of an entire business's IT operation. If anything, we have seen an explosion of specialist tools Continue reading

How to Add Approval Steps to Ansible Tower Workflows


Suppose you have a workflow set up in Red Hat Ansible Tower with several steps, and you need another user to view and approve some or all of the nodes in the workflow. Or maybe a job running inside a workflow should be viewed and approved within a specific time limit, or else get canceled automatically? Perhaps it would be useful to be able to see how a job failed before something like a cleanup task gets kicked off? It is now possible to insert an approval step between any job templates or workflows within that workflow in order to achieve these objectives.

 

Table of Contents

A New Feature for Better Oversight and More User Input

How to Add Approval Nodes to Workflows

What Happens When Something Needs Approval?

Approval Notifications

Timeouts

Approval-Specific Role-Based Access Controls

Summary

Where to Go Next

 

A New Feature for Better Oversight and More User Input

The Workflow Approval Node feature has been available in Ansible Tower since the release of version 3.6.0 on November 13, 2019.  In order to visually compare the additional functionality, examine the before and after examples of a workflow Continue reading

5 Software Development Predictions for 2020

To kick off the new year, we sat down with Docker CEO Scott Johnston and asked him what the future holds for software development. Here are his 2020 predictions and trends to keep an eye on.

Existing Code and Apps Become New Again

Developers will find new ways to reuse existing code instead of reinventing the wheel and starting from scratch. Additionally, we’ll see companies extend the value of existing apps by adding more functionality via microservices.

The Changing Definition of a Modern Application

Today’s applications are more complex than those of yesterday. In 2020, modern apps will power tomorrow’s innovation and this requires a diverse set of tools, languages and frameworks for developers. Developers need even more flexibility to address this new wave of modern apps and evolve with the rest of the industry.

Containers Pave the Way to New Application Trends

Now that containers are typically considered a common deployment mechanism, the conversation will evolve from the packaging of individual containers to the packaging of entire applications (which are becoming increasingly diverse and distributed). Organizations will increasingly look for guidance and solutions that help them unify how they build and manage Continue reading

Looking Back: 2019 Project Report Card

As has been my custom over the last five years or so, in the early part of the year I like to share with my readers a list of personal projects for the upcoming year (here’s the 2019 list). Then, near the end of that same year or very early in the following year, I evaluate how I performed against that list of personal projects (for example, here’s my project report card for 2018). In this post, I’ll continue that pattern with an evaluation of my progress against my 2019 project list.

For reference, here’s the list of projects I set out for myself for 2019 (you can read the associated blog post, if you like, for additional context):

  1. Make at least one code contribution to an open source project. (Stretch goal: Make three code contributions to open source projects.)
  2. Add at least three new technology areas to my “learning-tools” repository. (Stretch goal: Add five new technology areas to the “learning-tools” repository.)
  3. Become more familiar with CI/CD solutions and patterns.
  4. Create at least three non-written content pieces. (Stretch goal: Create five pieces of non-written content.)
  5. Complete a “wildcard project” (if applicable).

Here’s how I Continue reading

2019 Docker Community Awards

The Docker Community is the heart of Docker’s success and a huge reason why Docker was named the most wanted and second most loved developer tool in the 2019 Stack Overflow Survey. This year, we honored the following members of the Docker Community for their exemplary contributions to Docker users around the globe. On behalf of Docker and developers everywhere, thank you for your passion and commitment to this community!

Ajeet Singh Raina, Bangalore, India

Ajeet is a Docker Captain and Docker Community Leader for Docker Bangalore, the largest Docker Meetup in the world with nearly 8,000 members. His meetups are more like mini-conferences, regularly drawing hundreds of RSVPs and involving free hands-on workshop and training content that he and his Docker community have developed. Ajeet is also a prolific blogger, sharing Docker and Kubernetes content on his blog Collabnix, which had over a million views in 2019. Ajeet also helped organize and/or speak at more than 30 events over the past year. This year, Ajeet was recognized by his fellow Captains with the Tip of the Captains Hat Award for his tireless dedication to sharing his expertise with the broader tech community. Keep up with Ajeet Continue reading
