Archive

Category Archives for "Systems"

Get Ready for the Tech Preview of Docker Desktop for WSL 2

Today at OSCON, Scott Hanselman, Kayla Cinnamon, and Yosef Durr of Microsoft demonstrated some of the new capabilities coming with Windows Subsystem for Linux (WSL) 2, including how it will be integrated with Docker Desktop. As part of this demonstration, we are excited to announce that users can now sign up for the Docker Desktop Technical Preview of WSL 2, arriving at the end of July. WSL 2 is the second generation of a compatibility layer for running Linux binary executables natively on Windows. Since it was announced at Microsoft Build, we have been working in partnership with Microsoft to deliver an improved Linux experience for Windows developers, and we invite everyone to sign up for the upcoming Technical Preview release.

Improving the Linux Experience on Windows

There are over half a million active users of Docker Desktop for Windows today and many of them are building Java and Node.js applications targeting Linux-based server environments. Leveraging WSL 2 will make the Docker developer experience more seamless no matter what operating system you’re running and what type of application you’re building. And the performance improvements will be immediately noticeable.
WSL 2 introduces a significant architectural change as it is a full Linux kernel built Continue reading

Docker’s Contribution to Authentication for Windows Containers in Kubernetes

When Docker Enterprise added support for Windows containers running on Swarm with the release of Windows Server 2016, we had to tackle challenges that are less pervasive in pure Linux environments. Chief among these was Active Directory authentication for container-based services using Group Managed Service Accounts, or gMSAs. With nearly 3 years of experience deploying and running Windows container applications in production, Docker has solved for a number of complexities that come with managing gMSAs in a container-based world. We are pleased to have contributed that work to upstream Kubernetes.

Challenges with gMSA in Containerized Environments

Aside from being used for authentication across multiple instances, gMSAs solve for two additional problems:
  1. Containers cannot join the domain; and
  2. When you start a container, you never really know which host in your cluster it’s going to run on. You might have three replicas running across hosts A, B, and C today and then tomorrow you have four replicas running across hosts Q, R, S, and T.
One way to solve for this transience is to place the gMSA credential specifications for your service on each and every host where the containers for that service might run, and then repeat that for Continue reading
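
As a rough, hedged sketch of where this contribution landed: upstream Kubernetes lets a Windows pod reference a pre-created gMSA credential spec resource, rather than copying credential specs to every host. The resource name below is hypothetical, and the fields reflect the API as it later stabilized upstream.

    # Sketch: a Windows pod authenticating via a pre-created gMSA
    # credential spec. "webapp-gmsa" is a hypothetical GMSACredentialSpec
    # resource that a cluster admin would have created beforehand.
    apiVersion: v1
    kind: Pod
    metadata:
      name: iis-gmsa-demo
    spec:
      securityContext:
        windowsOptions:
          gmsaCredentialSpecName: webapp-gmsa
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis
      nodeSelector:
        kubernetes.io/os: windows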

Spousetivities at VMworld 2019

This year VMworld—VMware’s annual user conference—moves back to San Francisco from Las Vegas. Returning along with it is Spousetivities, now in its 11th year at VMworld. Better get your tickets sooner rather than later; there’s quite a good chance these activities will sell out!

Registration is open right now.

This year’s activities are funded in part by the generous and community-minded support of Veeam and VMUG, who are “putting their money where their mouth is” when it comes to promoting strong work/life balance at events like VMworld.

Here’s a quick look at what’s planned for VMworld 2019 in San Francisco:

Monday, August 26: Spousetivities kicks off the week with a walking food tour. This tour, like all the others, will depart from the Marriott Marquis.

Tuesday, August 27: This full-day event will take participants up to Wine Country for several wine tastings. Transportation is provided, of course, and participants will enjoy lunch on the tour as well.

Wednesday, August 28: Nature, shopping, tranquility, and quaint towns—this tour has it all! Participants will visit the Golden Gate Bridge, the Marin Headlands, Muir Woods, and Sausalito. Transportation and Continue reading

The Song Remains The Same


Now that Red Hat is a part of IBM, some people may wonder about the future of the Ansible project. Here is the good news: the Ansible community strategy has not changed.

As always, we want to make it as easy as possible to work with any projects and communities who want to work with Ansible. With the resources of IBM behind us, we plan to accelerate these efforts. We want to do more integrations with more open source communities and more technologies.

One of the reasons we are excited for the merger is that IBM understands the importance of a broad and diverse community. Search for “Ansible plus <open source project>” and you can find Ansible information, such as playbooks, modules, blog posts, videos, and slide decks, intended to make working with that project easier. We have thousands of people attending Ansible meetups and events all over the world. We have millions of downloads. We have had this momentum because we provide users flexibility and freedom. IBM is committed to our independence as a community so that we can continue this work.

We’ve worked hard to be good open source citizens. We value the trust Continue reading

Calculating the CA Certificate Hash for Kubeadm

When using kubeadm to set up a new Kubernetes cluster, the output of the kubeadm init command that sets up the control plane for the first time contains some important information on joining additional nodes to the cluster. One piece of information in there that (until now) I hadn’t figured out how to replicate was the CA certificate hash. (Primarily I hadn’t figured it out because I hadn’t tried.) In this post, I’ll share how to calculate the CA certificate hash for kubeadm to use when joining additional nodes to an existing cluster.

When looking to figure this out, I first started with the kubeadm documentation. My searches led me here, which states:

The hash is calculated over the bytes of the Subject Public Key Info (SPKI) object (as in RFC7469). This value is available in the output of “kubeadm init” or can be calculated using standard tools.

That’s useful information, but what are the “standard tools” being referenced? I knew that a lot of work had been put into kubeadm init phase (for breaking down the kubeadm init workflow), but a quick review of that documentation didn’t reveal anything. Reviewing the referenced RFC also didn’t provide any Continue reading
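
For reference, the calculation can be done with OpenSSL; here is a sketch, assuming the CA certificate sits in the default kubeadm location of /etc/kubernetes/pki/ca.crt and the CA uses an RSA key (the kubeadm default):

    # Extract the CA's public key, DER-encode it, and hash the Subject
    # Public Key Info with SHA-256 (per RFC 7469). Prefix the result with
    # "sha256:" when passing it to
    # "kubeadm join --discovery-token-ca-cert-hash".
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'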

10 Reasons Developers Love Docker

Developers ranked Docker as the #1 most wanted platform, #2 most loved platform, and #3 most broadly used platform in the 2019 Stack Overflow Developer Survey. Nearly 90,000 developers from around the world responded to the survey. So we asked the community why they love Docker, and here are 10 of the reasons they shared:

ROSIE the Robot at DockerCon. Her software runs on Docker containers.


  1. It works on everyone’s machine. Docker eliminates the “but it worked on my laptop” problem.

“I love docker because it takes environment specific issues out of the equation – making the developer’s life easier and improving productivity by reducing time wasted debugging issues that ultimately don’t add value to the application.” @pamstr_

  2. Takes the pain out of CI/CD. If there is one thing developers hate, it is doing the same thing over and over.

“Docker completely changed my life as a developer! I can spin up my project dependencies like databases for my application in a second in a clean state on any machine on our team! I can’t not imagine the whole ci/cd-approach without docker. Automate all the stuff? Dockerize it!” @Dennis65560555

  3. Boosts your career. According to a recent Continue reading

Building Jsonnet from Source

I recently decided to start working with jsonnet, a data templating language and associated command-line interface (CLI) tool for manipulating and/or generating various data formats (like JSON, YAML, or other formats; see the Jsonnet web site for more information). However, I found that there are no prebuilt binaries for jsonnet (at least, not that I could find), and so I thought I’d share here the process for building jsonnet from source. It’s not hard or complicated, but hopefully sharing this information will streamline the process for others.

As some readers may already know, my primary OS is Fedora. Thus, the process I share here will be specific to Fedora (and/or CentOS and possibly RHEL).

To keep my Fedora installation clean of any unnecessary packages, I decided to use a CentOS 7 VM—instantiated and managed by Vagrant—for the build process. If you don’t want to use a build VM, you can omit the steps involving Vagrant. You’ll also need to modify the commands used to install the necessary packages (on Fedora, you’d use dnf instead of yum, for example). Different distributions may also use different package names for some of the dependencies, so keep that in mind.

  1. Run Continue reading
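
For reference, the overall build flow is short; here is a hedged sketch, assuming a CentOS 7 build box (package names vary by distribution, and the steps are the same inside a Vagrant VM once you’ve run vagrant ssh):

    # Install the build dependencies (CentOS 7; use dnf on Fedora)
    sudo yum install -y git gcc-c++ make

    # Fetch the jsonnet source and build it; the repo's Makefile produces
    # a "jsonnet" binary in the working directory
    git clone https://github.com/google/jsonnet.git
    cd jsonnet
    make

    # Put the binary somewhere on your PATH
    sudo cp jsonnet /usr/local/bin/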

Technology Short Take 116

Welcome to Technology Short Take #116! This one is a bit shorter than usual, due to holidays in the US and my life being busy. Nevertheless, I hope that I managed to capture something you find useful or helpful. As always, your feedback is welcome, so if you have suggestions, corrections, or comments, you’re welcome to contact me via Twitter.

Networking

  • David Gee discusses jSNAPy and how it can be used to enable unit tests in infrastructure-as-code scenarios.
  • Jon Langemak tackles understanding what RTs (Route Targets) and RDs (Route Distinguishers) are in MPLS VPNs. I also appreciate that Jon included a “Lab time” section in his article that encourages readers to try out the concepts he’s explaining.

Servers/Hardware

  • Although I’ve by and large moved away from Apple hardware (I still have a MacBook Pro running macOS that sees very little use, and a Mac Pro running Fedora), I did see this article regarding a new keyboard for the MacBook Air and MacBook Pro. That’s good—the butterfly keyboards are awful (in my opinion).

Security

  • If you’re unfamiliar with public key infrastructure (PKI), digital certificates, or encryption, you may find this Linux Journal article helpful. It provides the basics behind X.509v3 digital Continue reading

Setting up an encrypted SOCKS proxy using Dante and stunnel

Overview

Why Dante?

In the previous post, I talked about why we might need a SOCKS proxy at all, and how we can properly setup a secure one using only stunnel.

That approach is fine and all, but it still suffers from some limitations, the most important of which are:

  • UDP relaying is not supported.
  • Advanced SOCKS options like BIND or UDP ASSOCIATE are not available.
  • Only suited for personal use and should not be shared with untrusted clients.

Compared to stunnel’s limited SOCKS functionality, Dante (which is one of the most popular SOCKS servers available) comes with pretty much every feature one can imagine from a SOCKS server.

From advanced authentication and access control to server chaining, traffic monitoring, and even bandwidth control, Dante has got them all.

Dante and SOCKS encryption

While it might be okay to use a non-encrypted SOCKS proxy in your local network, it is definitely not a good idea to do so over the internet.

For this, RFC 1961 added the GSS-API authentication protocol for SOCKS Version 5. GSS-API provides integrity, authentication, and confidentiality. Dante, of course, completely supports GSS-API authentication and encryption.

But GSS-API (which is typically used with Kerberos Continue reading
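
To give a flavor of the setup, here is a minimal, hedged sockd.conf sketch for running Dante behind stunnel: Dante listens only on loopback, so the encrypted stunnel endpoint is the sole public entry point. Directive names follow Dante 1.4.x, and the addresses and interface are illustrative.

    # /etc/sockd.conf (sketch): loopback-only Dante behind stunnel
    logoutput: syslog
    internal: 127.0.0.1 port = 1080   # stunnel forwards decrypted traffic here
    external: eth0                    # illustrative outbound interface

    clientmethod: none
    socksmethod: username             # require user/password at the SOCKS layer

    client pass {
        from: 127.0.0.1/32 to: 0.0.0.0/0
    }
    socks pass {
        from: 127.0.0.1/32 to: 0.0.0.0/0
        log: connect disconnect
    }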

Intro Guide to Dockerfile Best Practices

There are over one million Dockerfiles on GitHub today, but not all Dockerfiles are created equal. Efficiency is critical, and this blog series will cover five areas for Dockerfile best practices to help you write better Dockerfiles: incremental build time, image size, maintainability, security, and repeatability. If you’re just beginning with Docker, this first blog post is for you! The next posts in the series will be more advanced.

Important note: the tips below follow the journey of ever-improving Dockerfiles for an example Java project based on Maven. The last Dockerfile is thus the recommended Dockerfile, while all intermediate ones are there only to illustrate specific best practices.

Incremental build time

In a development cycle, when building a Docker image, making code changes, then rebuilding, it is important to leverage caching. Caching helps to avoid running build steps again when they don’t need to run.

Tip #1: Order matters for caching

The order of the build steps (Dockerfile instructions) matters, because when a step’s cache is invalidated by changing files or modifying lines in the Dockerfile, the cache of all subsequent steps breaks as well. Order your steps from least to most frequently changing to optimize caching.
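
Here is a hedged illustration for the Maven-based Java example this series follows (the base image and file layout are the usual Maven conventions, not the series’ exact files):

    FROM maven:3-jdk-8
    WORKDIR /app

    # pom.xml changes rarely: copying it alone keeps the dependency
    # download layer cached across ordinary code edits
    COPY pom.xml .
    RUN mvn dependency:go-offline

    # Source changes on every iteration, so it is copied as late as possible
    COPY src ./src
    RUN mvn package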

Tip #2: Continue reading

A Secure Content Workflow from Docker Hub to DTR

Docker Hub is home to the world’s largest library of container images. Millions of individual developers rely on Docker Hub for official and certified container images provided by independent software vendors (ISV) and the countless contributions shared by community developers and open source projects. Large enterprises can benefit from the curated content in Docker Hub by building on top of previous innovations, but these organizations often require greater control over what images are used and where they ultimately live (typically behind a firewall in a data center or cloud-based infrastructure). For these companies, building a secure content engine between Docker Hub and Docker Trusted Registry (DTR) provides the best of both worlds – an automated way to access and “download” fresh, approved content to a trusted registry that they control.

Ultimately, the Hub-to-DTR workflow gives developers a fresh source of validated and secure content to support a diverse set of application stacks and infrastructures, all while staying compliant with corporate standards. Here is an example of how this is executed in Docker Enterprise 3.0:


Image Mirroring

DTR allows customers to set up a mirror to grab content from a Hub repository by constantly polling it and pulling new image Continue reading

Kubernetes Operators with Ansible Deep Dive: Part 1


Deploying applications on Red Hat OpenShift or Kubernetes has come a long way. These days, it's relatively easy to use OpenShift's GUI or something like Helm to deploy applications with minimal effort. Unfortunately, these tools don't typically address the needs of operations teams tasked with maintaining the health or scalability of the application - especially if the deployed application is something stateful like a database. This is where Operators come in.

An Operator is a method of packaging, deploying and managing a Kubernetes application. Kubernetes Operators with Ansible exists to help you encode the operational knowledge of your application in Ansible.

What can we do with Ansible in a Kubernetes Operator? Because Ansible is now part of the Operator SDK, anything an Operator can do should be possible with Ansible. It’s now possible to write an Operator as an Ansible Playbook or Role to manage components in Kubernetes clusters. In this blog, we're going to be diving into an example Operator.
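
As a taste of the mechanics, the Ansible Operator maps custom resources to Ansible content through a watches.yaml file. A minimal, hedged sketch (the group, kind, and paths here are hypothetical):

    # watches.yaml (sketch): run the "memcached" role whenever a
    # Memcached custom resource changes. Names below are illustrative.
    - version: v1alpha1
      group: cache.example.com
      kind: Memcached
      role: /opt/ansible/roles/memcached   # or "playbook:" to point at a playbook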

For more information on Kubernetes Operators with Ansible please refer to the following resources:

Technology Short Take 115

Welcome to Technology Short Take #115! I’m back from my much-needed vacation in Bali, and getting settled back into work and my daily routine (which, for the last few weeks, was mostly swimming in the pool and sitting on the beach). Here’s a fresh new collection of links and articles from the around the web to propel myself back into blogging. I hope you find something useful here!

Networking

Build, Share and Run Multi-Service Applications with Docker Enterprise 3.0

Modern applications can come in many flavors, consisting of different technology stacks and architectures, from n-tier to microservices and everything in between. Regardless of the application architecture, the focus is shifting from individual containers to a new unit of measurement which defines a set of containers working together – the Docker Application. We first introduced Docker Application packages a few months ago. In this blog post, we look at what’s driving the need for these higher-level objects and how Docker Enterprise 3.0 begins to shift the focus to applications.

Scaling for Multiple Services and Microservices

Since our founding in 2013, Docker – and the ecosystem that has thrived around it – has been built around the core workflow of a Dockerfile that creates a container image, which in turn becomes a running container. Docker containers helped to drive the growth and popularity of microservices architectures by allowing independent parts of an application to be turned on and off rapidly and scaled independently and efficiently. The challenge is that as microservices adoption grows, a single application no longer runs on a handful of machines but across dozens of containers that can be divided amongst different development teams. Continue reading

Docker Tools for Modernizing Traditional Applications

Over the past two years Docker has worked closely with customers to modernize portfolios of traditional applications with Docker container technology and Docker Enterprise, the industry-leading container platform. Such applications are typically monolithic in nature, run atop older operating systems such as Windows Server 2008 or Windows Server 2003, and are difficult to transition from on-premises data centers to the public cloud.

The Docker platform alleviates each of these pain points by decoupling an application from a particular operating system, enabling microservice architecture patterns, and fostering portability across on-premises, cloud, and hybrid environments.

As the Modernizing Traditional Applications (MTA) program has matured, Docker has invested in tooling and methodologies that accelerate the transition to containers and decrease the time necessary to experience value from the Docker Enterprise platform. From the initial application assessment process to running containerized applications on a cluster, Docker is committed to improving the experience for customers on the MTA journey.

Application Discovery & Assessment

Enterprises develop and maintain extensive portfolios of applications. Such apps come in a myriad of languages, frameworks, and architectures developed by both first- and third-party development teams. The first step in the containerization journey is to determine which applications are Continue reading

Configure Network Cards by PCI Address with Ansible Facts


In this post, you will learn advanced applications of Ansible facts to configure Linux networking. Instead of hard-coding device names, you will find out how to specify network devices by PCI addresses. This prepares your configuration to work on different Red Hat Enterprise Linux releases with different network naming schemes.

Red Hat Enterprise Linux System Roles

The RHEL System Roles provide a uniform configuration interface across multiple RHEL releases. However, the names of network devices in modern Linux distributions are often not stable across releases. In the past, the kernel named the devices after their order of appearance. The first device got the name eth0, the next eth1, and so on.

To make the device names more reliable, developers introduced other methods. This interferes with creating a release-independent network configuration based on interface names. An initial solution to this problem is to address network cards by MAC address, but this requires an up-to-date inventory containing the MAC addresses of all network cards, and updating that inventory after replacing broken hardware. This results in extra work. To avoid this effort, it would be great to be able to specify network cards by their PCI address. Continue reading
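
As a hedged sketch of the idea (not the System Roles’ exact interface): Ansible’s per-interface facts include a pciid field on physical NICs, so a play can resolve whatever name the current release gave the card. The PCI address below is illustrative.

    # Sketch: resolve a NIC's current name from its PCI address via the
    # per-interface "pciid" fact. Note that fact keys replace dashes in
    # interface names with underscores.
    - hosts: all
      vars:
        target_pci: "0000:00:03.0"   # illustrative PCI address
      tasks:
        - name: Find the interface whose pciid matches the target
          set_fact:
            nic_name: "{{ item }}"
          loop: "{{ ansible_facts['interfaces'] }}"
          when: ansible_facts[item]['pciid'] | default('') == target_pci

        - name: Show the resolved device name
          debug:
            msg: "PCI {{ target_pci }} is named {{ nic_name | default('(not found)') }}"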

At A Glance: The Mid-Atlantic + Government Docker Summit

Last week, Docker hosted our 4th annual Mid-Atlantic and Government Docker Summit, a one-day technology conference held on Wednesday, May 29 near Washington, DC. Over 425 attendees from the public and private sector came together to share and learn about the trends driving change in IT, from containers and cloud to DevOps. Specifically, the presenters shared content on topics including Docker Enterprise, our industry-leading container platform, Docker’s Kubernetes Service, Container Security and more.

Attendees were a mix of technology users and IT decision makers: everyone from developers, systems admins, and architects to senior leaders and CTOs.

Summit Recap by the Numbers:
  • 428 Registrations
  • 16 sessions
  • 7 sponsors
  • 3 Tracks (Tech, Business and Workshops)

Keynotes

Highlights include a keynote by Docker’s EVP of Customer Success, Iain Gray, and a fireside chat between the former US CTO and Insight Ventures Partner, Nick Sinai, and the current Federal US CIO, Suzette Kent.

The fireside chat highlighted top-of-mind issues for Kent and how they align with the White House IT Modernization Report: specifically, modernization of current federal IT infrastructure and preparing and scaling the workforce. Kent mentioned, “The magic of IT modernization is marrying the technology with the people and the Continue reading

Ansible + ServiceNow Part 1: Opening and Closing Tickets


As a Network Engineer, I hated filling out tickets. Anytime a router would reboot or a power outage took place at a remote site, the resulting ticket generation took up about 50% of my day. If there had been a way to automate ticket creation, I would have saved a lot of time. The only unique input I would have needed to provide was the case-specific comment section with additional information about the issue.

While ticket creation was a vital activity, automating this was not an option at the time. This is surprising because my management was always asking me to include more information in my tickets. Tickets were often reviewed months later and sometimes never got created or did not have much relevant information included.

Fast forward to today, companies are now data mining from tickets with a standard set of facts that are pulled directly from the device during ticket creation, such as network platform, software version, uptime, etc.  Network operations (NetOps) teams now use massive amounts of ticket data to make budget decisions.

For example, if there were 400 network outages due to power issues, NetOps could then make a case to spend $40,000 on battery backups, having proved Continue reading
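
Here is a hedged sketch of what that ticket automation can look like with Ansible’s snow_record module (which requires the pysnow Python library); the instance and credential variables are placeholders, and the incident fields are illustrative:

    # Sketch: open a ServiceNow incident from a playbook. Instance and
    # credentials are placeholders; data fields are illustrative.
    - hosts: localhost
      gather_facts: false
      tasks:
        - name: Open an incident for a rebooted router
          snow_record:
            state: present
            table: incident
            instance: "{{ sn_instance }}"
            username: "{{ sn_username }}"
            password: "{{ sn_password }}"
            data:
              short_description: "rtr01 rebooted unexpectedly"
              urgency: 2
          register: new_incident

        - name: Show the assigned incident number
          debug:
            msg: "Opened {{ new_incident.record.number }}"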

A First Look at Docker Desktop Enterprise

Delivered as part of Docker Enterprise 3.0, Docker Desktop Enterprise is a new developer tool that extends the Docker Enterprise Platform to developers’ desktops, improving developer productivity while accelerating time-to-market for new applications.

It is the only enterprise-ready Desktop platform that enables IT organizations to automate the delivery of legacy and modern applications using an agile operating model with integrated security. With work performed locally, developers can leverage a rapid feedback loop before pushing code or docker images to shared servers / continuous integration infrastructure.

Imagine you are a developer and your organization has a production-ready environment running Docker Enterprise. To ensure that you don’t use any APIs or incompatible features that will break when you push an application to production, you would like to be certain your working environment exactly matches what’s running in Docker Enterprise production systems. This is where Docker Enterprise 3.0 and Docker Desktop Enterprise come in. It is basically a cohesive extension of the Docker Enterprise container platform that runs right on developers’ systems. Developers code and test locally using the same tools they use today, and Docker Desktop Enterprise helps them quickly iterate and then produce a containerized service that is Continue reading

What’s New in Ansible Tower 3.5


We're excited to announce that Red Hat Ansible Tower 3.5 is now generally available. In this release, there are several enhancements that can help improve automation practices. Engineering has been working hard to enhance Ansible Tower and here are a few things we're most excited about:

  • Red Hat Enterprise Linux 8 support
  • Support for external credential vaults via credential plugins
  • Become plugins now supported in Ansible Tower

In addition to a number of enhancements that have been made, the Ansible Tower 3.5 release saw over 160 issues closed. Let’s go over the highlights in this release.

Red Hat Enterprise Linux 8 support

Red Hat Enterprise Linux is an innovative operating system, designed to provide a consistent foundation for the enterprise hybrid cloud. It offers one enterprise Linux experience for applications across IT environments. With Ansible Tower 3.5 (and Ansible Engine 2.8), support for managing RHEL 8 nodes is baked in. Ansible Tower 3.5 can also be run on Red Hat Enterprise Linux 8 as the control node for Red Hat Ansible Automation.

External credential vaults

Ansible Tower 3.5 brings support for external credential vaults. The existing credential store is still available for use. However, Continue reading
