Category Archives for "Systems"

Docker Turns 7!

Since its introduction at PyCon in 2013, Docker has changed the way the world develops applications. And over the last 7 years, we’ve loved watching developers – new and seasoned – bring their ideas to life with Docker.

As is our tradition in the Docker community, we will be celebrating Docker’s birthday month with meetups (virtual and IRL), a special hands-on challenge, cake, and swag. Join us and celebrate your #myDockerBDay and the ways Docker and this community have impacted you – from the industry you work in to an application you’ve built, from your day-to-day workflow to your career.

Learn more about the birthday celebrations below and share your #myDockerBday story with us on Twitter, or submit it here for a chance to win some awesome Docker swag.

Docker Birthday LIVE Show on March 26th, 9am – 12pm GMT-8

Celebrate Docker’s Birthday with a special 3-hour live show featuring exclusive conversations with the Docker team and Captains, open Q&A sessions, and prizes. To reserve a spot, sign up here.

7th Birthday Challenge

Learn some of the Docker Captains’ favorite tips and tricks by completing 7 hands-on exercises. Earn a virtual badge for each exercise you complete.

Learn Docker with Continue reading

Helping You and Your Development Team Build and Ship Faster

I remember the first time one of my co-workers told me about Docker. There is a longer story behind it, but it ended with “it was so easy and saved me so much time.” That compelled me to install Docker and try it for myself. Yup, she was right. Easy, simple, efficient. Sometime later, at a conference, while catching up with some friends who are developers, I asked them “how are things going?” The conversation eventually led to the topic of where things are going in the container space. I asked, “what’s the biggest issue you are having right now?” I expected the response to be something Kubernetes related. I was surprised the answer was “managing all the tech that gets my code deployed and running.” 

The above sentiment is echoed by our CEO, Scott Johnston, in this post. Millions of you use Docker today (check out the Docker Index for the latest usage stats), and we are so very thankful for the vibrant Docker Community. We heard from you that easily going from code to cloud is a problem, and Scott outlined the complexities. There are many choices across packaging, inner loop, registry, Continue reading

An Update on the Tokyo Assignment

Right at the end of 2019 I announced that in early 2020 I was temporarily relocating to Tokyo, Japan, for a six month work assignment. It’s now March, and I’m still in Colorado. So what’s up with that Tokyo assignment, anyway? Since I’ve had several folks ask, I figured it’s probably best to post something here.

First, based on all the information available to me, it looks like Tokyo is still going to happen. It will just be delayed a bit—although the exact extent of the delay is still unclear.

Why the delays? A few things have affected the timing:

  1. As part of an acquisition at work, some new folks got involved who hadn’t been involved previously, and it took them some time to understand what this assignment was all about. This is totally understandable, but we hadn’t accounted for the extra time needed to bring the new stakeholders up to speed.
  2. The finance folks got involved and made everyone do some necessary legwork that hadn’t previously been done: properly allocating costs to the appropriate budgets. So, that took some time.
  3. Finally, there’s this virus thing going around…

It looks like #1 and #2 have been sorted, but it’ll still Continue reading

Helping Developers Simplify Apps, Toolchains, and Open Source

It’s been an exciting four months since we announced that Docker is refocusing on developers. We have spent much of that time listening to you, our developer community, in meetups, on GitHub, through social media, with our Docker Captains, and in face-to-face one-on-ones. Your support and feedback on our refocused direction have been helpful and positive, and we’re fired up for the year ahead!

What’s driving our enthusiasm for making developers successful? Quite simply, it’s in recognition of the enormous impact your creativity – manifested in the applications you ship – has on all of our lives. Widespread adoption of smartphones and near-pervasive Internet connectivity only accelerates consumer demand for new applications. And businesses recognize that applications are key to engaging their customers, partnering effectively with their supply chain ecosystem, and empowering their employees.

As a result, the demand for developers has never been higher. The current worldwide population of 18 million developers is growing approximately 20% every year (in contrast to the 0.6% annual growth of the overall US labor force). Yet, despite this torrid growth, demand for developers in 2020 will outstrip supply by an estimated 1 million. Thus, we see tremendous opportunities in helping every developer to Continue reading

Automation Analytics – UX Enhancements

Red Hat Ansible Automation Platform now includes hosted service offerings, one of which is Automation Analytics.  This application provides a visual dashboard, health notifications, and organization statistics for your Ansible Automation.  If you are new to Automation Analytics and want more background information, please refer to my previous blog post, Getting Started with Automation Analytics.

Enhanced Filtering 

On the Automation Analytics dashboard you can filter by Red Hat Ansible Tower cluster and by date.  Red Hat recommends having a cluster local to where the automation is happening; for example, if you had data centers in Japan, Germany, and the United States, you would most likely have an Ansible Tower cluster in each geographic region.  This allows users to perform drill-down filtering to individual clusters and get data relevant to a specific physical site or group.  This is helpful if, for example, a company's team in Japan only cares about the data relevant to them in the drop-down menu.

Previously, filtering only affected the top graph, not sub-cards such as Top Templates or Top Modules.  In the following video you can see that all cards are now updated simultaneously.

Continue reading

Modifying Visual Studio Code’s Bracketing Behavior

There are two things I’ve missed since I switched from Sublime Text to Visual Studio Code (I switched in 2018). First, the speed. Sublime Text is so much faster than Visual Studio Code; it’s insane. But, the team behind Visual Studio Code is working hard to improve performance, so I’ve mostly resigned myself to it. The second thing, though, was the behavior of wrapping selected text in brackets (or parentheses, curly braces, quotes, etc.). That part had annoyed me for two years, until this past weekend when I finally had enough. Here’s how I modified Visual Studio Code’s bracketing behavior.

Before I get into the changes, allow me to explain what I mean here. With Sublime Text, I used a package (similar to an extension in the VS Code world) called Bracketeer, and one of the things it allowed me to do was highlight a section of text, wrap it in brackets/braces/parentheses, and then—this part is the missing bit—it would deselect the selected text and place my insertion point outside the closing character. Now, this doesn’t sound like a big deal, but let me assure you that it is. I didn’t have to use any extra keystrokes to keep Continue reading

HA Kubernetes Clusters on AWS with Cluster API v1alpha2

About six months ago, I wrote a post on how to use Cluster API (specifically, the Cluster API Provider for AWS) to establish highly available Kubernetes clusters on AWS. That post was written with Cluster API (CAPI) v1alpha1 in mind. Although the concepts I presented there worked with v1alpha2 (released shortly after that post was written), I thought it might be helpful to revisit the topic with CAPI v1alpha2 specifically in mind. So, with that, here’s how to establish highly available Kubernetes clusters on AWS using CAPI v1alpha2.

By the way, it’s worth pointing out that CAPI is a fast-moving project, and release candidate versions of CAPI v1alpha3 are already available for some providers. Keep that in mind if you decide to start working with CAPI. I will write an updated version of this post for v1alpha3 once it has been out for a little while.

To be sure we’re all speaking the same language, here are some terms/acronyms that I’ll use in this post:

  • CAPI = Cluster API (the provider-independent parts)
  • CAPA = Cluster API Provider for AWS
  • CABPK = Cluster API Bootstrap Provider for Kubeadm (new for v1alpha2)
  • Management cluster = a Kubernetes cluster that has the Continue reading

Docker Desktop for Windows Home is here!

Last year we announced that Docker had released a preview of Docker Desktop with WSL 2 integration. We are now pleased to announce that we have completed the work to enable experimental support for Windows Home WSL 2 integration. This means that Windows Insider users on 19040 or higher can now install and use Docker Desktop!

Feedback on this first version of Docker Desktop for Windows Home is welcome! To get started, you will need to be on Windows Insider Preview build 19040 or higher and install Docker Desktop Edge 2.2.2.0.

What’s in Docker Desktop for Windows Home?

Docker Desktop on Windows Home with WSL 2 is a full version of Docker Desktop for Linux container development. It comes with the same feature set as our existing Docker Desktop WSL 2 backend. This gives you:

  • Latest version of Docker on your Windows machine 
  • Install Kubernetes in one click on Windows Home 
  • Integrated UI to view/manage your running containers 
  • Start Docker Desktop in <5 seconds
  • Use Linux Workspaces
  • Dynamic resource/memory allocation 
  • Networking stack, support for http proxy settings, and trusted CA synchronization 

How do I get started developing with Docker Desktop? 

For the best experience of developing Continue reading

Technology Short Take 124

Welcome to Technology Short Take #124! It seems like the natural progression of the Tech Short Takes is moving toward monthly articles, since it’s been about a month since my last one. In any case, here’s hoping that I’ve found something useful for you. Enjoy! (And yes, normally I’d publish this on a Friday, but I messed up and forgot. So, I decided to publish on Monday instead of waiting for Friday.)

Networking

Servers/Hardware

  • This article is about hardware, just not the hardware I’d typically talk about in this section—instead, it’s about Philips Hue light bulbs. Continue reading

How to deploy on remote Docker hosts with docker-compose

The docker-compose tool is pretty popular for running dockerized applications in a local development environment. All we need to do is write a Compose file containing the configuration for the application’s services and have a running Docker engine for deployment. From here, we can get the application running locally in a few seconds with a single  `docker-compose up` command. 
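
As a minimal sketch of that workflow (the file contents below are illustrative, not the sample application from this post):

```yaml
# docker-compose.yml – a minimal, illustrative two-service application
version: "3.7"
services:
  web:
    image: nginx:alpine        # any web-serving image will do for a local test
    ports:
      - "8080:80"              # reachable at http://localhost:8080
  cache:
    image: redis:alpine        # a second service, to show multi-service wiring
```

With a local Docker engine running, `docker-compose up -d` in the same directory starts both services, and `docker-compose down` removes them again.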

This was the initial scope but…

As developers look to have the same ease of deployment in CI pipelines and production environments as in their development environment, today we find docker-compose being used in different ways that go beyond its initial scope. In such cases, the challenge is that docker-compose only provided support for running against remote Docker engines through the DOCKER_HOST environment variable and the -H/--host command-line option. This is not very user friendly, and managing deployments of Compose applications across multiple environments becomes a burden.

To address this issue, we rely on Docker Contexts to securely deploy Compose applications across different environments and manage them effortlessly from our localhost. The goal of this post is to show how to use contexts to target different environments for deployment and easily switch between them.
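
As a hedged sketch of that approach (the host name and SSH user are placeholders, and it assumes a Docker CLI with context support plus a docker-compose version recent enough to honor the active context):

```bash
# Create a context that points at a remote Docker engine over SSH
docker context create remote-staging --docker "host=ssh://user@staging.example.com"

# "default" is the local engine; list everything that's available
docker context ls

# Switch to the remote context and deploy the Compose application there
docker context use remote-staging
docker-compose up -d

# Switch back to the local engine when finished
docker context use default
```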

We’ll start defining a sample application to use Continue reading

Region and Endpoint Match in AWS API Requests

Interacting directly with the AWS APIs—using a tool like Postman (or, since I switched back to macOS, an application named Paw)—is something I’ve been doing off and on for a little while as a way of gaining a slightly deeper understanding of the APIs that tools like Terraform, Pulumi, and others are calling when automating AWS. For a while, I struggled with AWS authentication, and after seeing Mark Brookfield’s post on using Postman to authenticate to AWS I thought it might be helpful to share what I learned as well.

The basis of Mark’s post (I highly encourage you to go read it) is that he was having a hard time getting authenticated to AWS in order to automate the creation of some Route 53 DNS records. The root of his issue, as it turns out, was a mismatch between the region specified in his request and the API endpoint for Route 53. I know this because I ran into the exact same issue (although with a different service).
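
To make the mismatch concrete, here is a hedged illustration using curl's built-in SigV4 signing (requires curl 7.75 or newer; the same region and service values apply whichever client you sign with). Route 53 is a global service whose endpoint expects requests signed for us-east-1, so signing for any other region reproduces the authentication failure described above:

```bash
# Route 53's global endpoint expects the SigV4 signature to be scoped to us-east-1.
# Signing for another region (e.g. us-west-2) against this endpoint fails to authenticate.
curl --aws-sigv4 "aws:amz:us-east-1:route53" \
     --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
     "https://route53.amazonaws.com/2013-04-01/hostedzone"
```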

The secret to uncovering this mismatch can be found in this “AWS General Reference” PDF. Specifically with regard to Route 53, check out this quote from the document:

Continue reading

Announcement: Ansible Contributor Summit Europe

For the past few years we’ve held a conference specifically for contributors at the same time as AnsibleFest. The additional days brought together existing contributors to the open source Ansible code base and those wanting to get involved.

It is with great pleasure that we announce a European Contributor Summit will be held in Gothenburg, Sweden, ahead of the usual summit at AnsibleFest! On March 29 we’ll be welcoming new and old contributors alike. So if you already contribute to Ansible, or would like to, but don’t know how or where to start, this event is for you.

Contributor Summit US will again be held the day before this year’s AnsibleFest event in San Diego. You can sign up for AnsibleFest updates here.

Ansible Contributor Summit is a day-long working session with the core developer team and key contributors. We’ll discuss important issues affecting the Ansible community, and you can take part in person or online. Information for remote participation will be announced about a week beforehand. There is an additional hackathon the following day, on March 30, where you can sit down with fellow contributors to work through anything specific.

The event is free to attend, although registration is Continue reading

Intro to Automation Webhooks for Red Hat Ansible Automation Platform

If your organization has adopted DevOps culture, you're probably practicing some form of Infrastructure-as-Code (IaC) to manage and provision servers, storage, applications, and networking using human-readable, machine-consumable definitions and automation tools such as Ansible. It's also likely that Git is an essential part of your DevOps toolchain, not only for the development of applications and services, but also for managing your infrastructure definitions.

With the release of Red Hat Ansible Tower v3.6, part of the Red Hat Ansible Automation Platform, we introduced Automation Webhooks, which help enable easier and more intuitive Git-centric workflows natively.

In this post we'll explore how you can make use of Automation Webhooks, but first let's look at IaC for some context and how this feature can be used to your organization's benefit.

Understanding Infrastructure-as-Code

Infrastructure-as-Code (IaC) is the practice of managing and provisioning computing resources through human-readable, machine-consumable definition files, rather than physical hardware configuration or interactive configuration tools. It is considered one of the essential practices of DevOps, along with continuous integration (CI), continuous delivery (CD), observability, and others, in order to automate development and operations processes and create a faster release cycle with higher quality. Treating infrastructure like it is code and subjecting Continue reading

Retrieving the Kubeconfig for a Cluster API Workload Cluster

Cluster API allows users to easily create new Kubernetes clusters using manifests that define the desired state of the new cluster (also referred to as a workload cluster; see here for more terminology). But how does one go about accessing this new workload cluster once it’s up and running? In this post, I’ll show you how to retrieve the Kubeconfig file for a new workload cluster created by Cluster API.

(By the way, there’s nothing new or revolutionary about the information in this post; I’m simply sharing it to help folks who may be new to Cluster API.)

Once CAPI has created the new workload cluster, it will also create a new Kubeconfig for accessing this cluster. This Kubeconfig will be stored as a Secret (a specific type of Kubernetes object). In order to use this Kubeconfig to access your new workload cluster, you’ll need to retrieve the Secret, decode it, and then use it with kubectl.

First, you can see the secret by running kubectl get secrets in the namespace where your new workload cluster was created. If the name of your workload cluster was “workload-cluster-1”, then the name of the Secret created by Cluster API would Continue reading
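
As a hedged sketch of that retrieval (it assumes the conventional CAPI naming of <cluster-name>-kubeconfig, with the kubeconfig stored under the value key of the Secret's data; check the actual Secret name that kubectl get secrets shows in your cluster's namespace):

```bash
# Find the Secret that Cluster API created alongside the workload cluster
kubectl get secrets -n my-namespace

# Extract the kubeconfig from the Secret, decode it, and save it to a file
kubectl get secret workload-cluster-1-kubeconfig -n my-namespace \
  -o jsonpath='{.data.value}' | base64 --decode > workload-cluster-1.kubeconfig

# Use the decoded kubeconfig to talk to the new workload cluster
kubectl --kubeconfig workload-cluster-1.kubeconfig get nodes
```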

Deep Dive on VLANS Resource Modules for Network Automation

In October of 2019, as part of Red Hat Ansible Engine 2.9, the Ansible Network Automation team introduced the concept of resource modules.  These opinionated network modules make network automation easier and more consistent for those automating various network platforms in production.  The goal for resource modules was to avoid creating overly complex jinja2 templates for rendering network configuration. This blog post goes through the eos_vlans module for the Arista EOS network platform.  I walk through several examples and describe the use cases for each state parameter and how we envision these being used in real world scenarios.

Before starting, let’s quickly explain the rationale behind the naming of the network resource modules. Notice that for resource modules that configure VLANs there is a singular form (eos_vlan, ios_vlan, junos_vlan, etc.) and a plural form (eos_vlans, ios_vlans, junos_vlans).  The new resource modules we are covering today are the plural form; the singular form has been deprecated rather than removed, so that those using the existing network modules do not have their Ansible Playbooks stop working and have sufficient time to migrate to the new network automation modules.

VLAN Example

Let's start with an example of the eos_vlans Continue reading
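
As a hedged sketch of what such a task can look like (the VLAN IDs and names are illustrative; with Ansible Engine 2.9 the module is referenced simply as eos_vlans):

```yaml
- name: Ensure VLANs 10 and 20 exist on the Arista switch
  eos_vlans:
    config:
      - vlan_id: 10
        name: desktops
      - vlan_id: 20
        name: servers
    state: merged    # merged adds/updates the listed VLANs and leaves all other VLANs untouched
```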

Getting Started with Istio Using Docker Desktop

This is a guest post from Docker Captain Elton Stoneman, a Docker alumnus who is now a freelance consultant and trainer, helping organizations at all stages of their container journey. Elton is the author of the book Learn Docker in a Month of Lunches, and numerous Pluralsight video training courses – including Managing Apps on Kubernetes with Istio and Monitoring Containerized Application Health with Docker.

Istio is a service mesh – a software component that runs in containers alongside your application containers and takes control of the network traffic between components. It’s a powerful architecture that lets you manage the communication between components independently of the components themselves. That’s useful because it simplifies the code and configuration in your app, removing all network-level infrastructure concerns like routing, load-balancing, authorization and monitoring – which all become centrally managed in Istio.
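
As a small, hedged illustration of that idea (the namespace, service name, and traffic split are made up, and it assumes Istio is already installed in the cluster; the v1/v2 subsets would also need a matching DestinationRule, omitted here for brevity):

```bash
# Label a namespace so Istio automatically injects its Envoy sidecar into new pods
kubectl label namespace demo istio-injection=enabled

# Shift traffic between two versions of a service purely in mesh configuration,
# without touching the application code
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web
  namespace: demo
spec:
  hosts:
    - web
  http:
    - route:
        - destination:
            host: web
            subset: v1
          weight: 90
        - destination:
            host: web
            subset: v2
          weight: 10
EOF
```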

There’s a lot of good material for digging into Istio. My fellow Docker Captain Lee Calcote is the co-author of Istio: Up and Running, and I’ve just published my own Pluralsight course Managing Apps on Kubernetes with Istio. But it can be a difficult technology to get started with because you really need a solid background in Kubernetes Continue reading

Setting up K8s on AWS with Kubeadm and Manual Certificate Distribution

Credit for this post goes to Christian Del Pino, who created this content and was willing to let me publish it here.

The topic of setting up Kubernetes on AWS (including the use of the AWS cloud provider) is a topic I’ve tackled a few different times here on this site (see here, here, and here for other posts on this subject). In this post, I’ll share information provided to me by a reader, Christian Del Pino, about setting up Kubernetes on AWS with kubeadm but using manual certificate distribution (in other words, not allowing kubeadm to distribute certificates among multiple control plane nodes). As I pointed out above, all this content came from Christian Del Pino; I’m merely sharing it here with his permission.

This post specifically builds upon this post on setting up an AWS-integrated Kubernetes 1.15 cluster, but is written for Kubernetes 1.17. As mentioned above, this post will also show you how to manually distribute the certificates that kubeadm generates to other control plane nodes.
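
For a rough, hedged sketch of what that manual distribution typically involves (paths are kubeadm defaults and the host name is a placeholder), the shared CA material and service-account keys generated on the first control plane node get copied to each additional control plane node before it runs kubeadm join with the --control-plane flag:

```bash
# On the first control plane node: bundle the certificates and keys that kubeadm
# expects every control plane node to share (default location: /etc/kubernetes/pki)
sudo tar czf control-plane-certs.tar.gz -C /etc/kubernetes/pki \
  ca.crt ca.key sa.key sa.pub \
  front-proxy-ca.crt front-proxy-ca.key \
  etcd/ca.crt etcd/ca.key

# Copy the bundle to an additional control plane node (placeholder host name)
scp control-plane-certs.tar.gz ubuntu@cp-node-2:

# On the additional control plane node: unpack before running 'kubeadm join --control-plane'
sudo mkdir -p /etc/kubernetes/pki
sudo tar xzf control-plane-certs.tar.gz -C /etc/kubernetes/pki
```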

What this post won’t cover are the details on some of the prerequisites for making the AWS cloud provider function properly; specifically, this post won’t discuss:

Docker Donates the cnab-to-oci Library to cnab.io

Docker is proud and happy to announce the donation of our cnab-to-oci library to the CNAB project. This project was created last year after Microsoft and Docker moved the CNAB specification to the Linux Foundation’s Joint Development Foundation. At that time, the CNAB specification repository was moved from the deislab GitHub organization to the new cnabio organization. The reference implementations – cnab-go, the Golang library implementation of the specification, and duffle, the CLI reference implementation – were also moved.

What is cnab-to-oci for?

Docker helped with the development of the CNAB specification and its reference implementations, and led the work on the cnab-to-oci library for sharing a CNAB bundle using an existing container registry. This library is now used by three CNAB tools (Docker App, Porter, and duffle) as well as Docker Hub. It successfully demonstrated how to push, pull and share a CNAB bundle using a registry. This work will be used as a foundation for the future CNAB Registries specification.

The transfer is already in effect, so starting now please refer to github.com/cnabio/cnab-to-oci in your Golang imports.

How does cnab-to-oci store a CNAB bundle into a registry?

As you may know, Continue reading

Agnostic Network Automation Examples with Ansible and Juniper NRE Labs

On February 10th, the NRE Labs project launched four Ansible Network Automation exercises, made possible by Red Hat and Juniper Networks.  This blog post covers the job responsibilities of an NRE, the goal of Juniper’s NRE Labs, and a quick overview of the new exercises and the concepts Red Hat and Juniper are jointly demonstrating.  The intended audience for these initial exercises is someone new to Ansible Network Automation, with limited experience with Ansible and network automation. The initial network topology for these exercises covers Ansible automating Juniper Junos OS and Cumulus VX virtual network instances.

About NRE Labs

Juniper has defined an NRE, or network reliability engineer, as someone who can help an organization with modern network automation.  This concept has many different names, including DevOps for networks, NetDevOps, or simply network automation.  Juniper and Red Hat realized that this skill set is new to many traditional network engineers and worked together to create online exercises to help folks get started with Ansible Network Automation.  Specifically, Juniper worked with us through NRE Labs, a project they started and co-sponsor: a no-strings-attached, community-centered initiative to bring the skills of automation within reach Continue reading

Hack Week: How Docker Drives Innovation from the Inside

Since its founding, Docker’s mission has been to help developers bring their ideas to life by conquering the complexity of app development. With millions of Docker developers worldwide, Docker is the de facto standard for building and sharing containerized apps. 

So what is one source of ideas we use to simplify the lives of developers? It starts with being a company of software developers who builds products for software developers. One of the more creative ways Docker has been driving innovation internally is through hackathons. These hackathons have proven to be a great platform for Docker employees to showcase their talent and provide unique opportunities for teams across Docker’s business functions to come together. Our employees get to have fun while creating solutions to problems that simplify the lives of Docker developers.

At Docker, our engineers are always looking for ways to improve their own workflows so as to ship quality code faster. Hack Week gives us a chance to explore the boundaries of what’s possible, and the winning ‘hacks’ make their way into our products to benefit our global developer community.

-Scott Johnston, Docker CEO

With that context, let’s break down how Docker runs employee hackathons. Docker is Continue reading
