Yesterday I needed to test an updated version of some software that I use. (I was conducting the testing because this upgrade contained some breaking changes, and I needed to understand how to mitigate them.) So, I broke out Vagrant (with the Libvirt provider) on my Fedora laptop—and promptly ran into a couple of issues. Fortunately, these issues were relatively easy to work around, but since the workarounds were non-intuitive I wanted to share them here for the benefit of others.
If you’re unfamiliar with Vagrant, have a look at my quick introduction to Vagrant. The “TL;DR” is that Vagrant offers users a consistent workflow for creating and destroying VMs across a fairly wide number of platforms, including both local providers (like VirtualBox or VMware Fusion/VMware Workstation) and cloud providers (such as AWS and Azure). I’ve written a fair amount on Vagrant, so feel free to browse all the “Vagrant”-tagged posts on the site for more information.
Likewise, if you’re unfamiliar with the Libvirt provider, check out this post from 2017 on using Vagrant with Libvirt on Fedora 27.
In my testing yesterday, I ran into two networking-related issues. The first of them was Continue reading
This is the second post in our series of blog articles focusing on the key developer content that we are curating from DockerCon LIVE 2020. We are seeing more and more developers targeting Microsoft architectures and Azure for their containerized application deployments. Microsoft has always had a rich set of developer tools, including VS Code and GitHub, that work with Docker tools.
One of the biggest developments for developers using Windows 10 is the release of WSL 2 (Windows Subsystem for Linux version 2). Instead of using a translation layer to convert Linux kernel calls into Windows calls, WSL 2 now offers its own isolated Linux kernel running on a thin version of the Hyper-V hypervisor. Check out Simon Ferquel’s session on WSL 2, as well as Paul Yuknewicz’s session on apps running in Azure. Both are valuable sessions on using Docker with Microsoft tools and technologies.
Docker Desktop + WSL 2 Integration Deep Dive
Simon Ferquel – Docker
Simon’s session provides a deep dive on how Docker Desktop on Windows works with WSL 2 to provide a better developer experience. This presentation will give you a better understanding of how Docker Desktop and WSL 2 Continue reading
Welcome to Technology Short Take #128! It looks like I’m settling into a roughly monthly cadence with the Technology Short Takes. This time around, I’ve got a (hopefully) interesting collection of links. The collection seems a tad heavier than normal in the hardware and security sections, probably due to new exploits discovered in Intel’s speculative execution functionality. In any case, here’s what I’ve gathered for you. Enjoy!
This is the second part in a series of posts where we show how to use Docker to define your Go development environment in code. The goal of this is to make sure that you, your team, and the CI are all using the same environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms and shrinking the build context to speed up builds. Now we are going to go one step further and learn how to add dependencies to make the project more realistic, caching to make the builds faster, and unit tests.
The Go program from part 1 is very simple and doesn’t have any Go dependencies. Let’s add a simple dependency – the commonly used github.com/pkg/errors package:
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/pkg/errors"
)

func echo(args []string) error {
	if len(args) < 2 {
		return errors.New("no message to echo")
	}
	_, err := fmt.Println(strings.Join(args[1:], " "))
	return err
}

func main() {
	if err := echo(os.Args); err != nil {
		fmt.Fprintf(os.Stderr, "%+v\n", err)
		os.Exit(1)
	}
}
Our example program is now a simple echo program that writes out the arguments the user provides, or prints “no message to echo” and a stack trace if nothing is specified.
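Once the module setup shown below is in place, running the program locally with the Go toolchain behaves like a bare-bones echo (the error case also prints a pkg/errors stack trace, elided here):
$ go run . hello world
hello world
$ go run .
no message to echo
...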
We will use Go modules to handle this dependency. Running the following commands will create the go.mod and go.sum files:
$ go mod init
$ go mod tidy
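After running these commands, the go.mod file should look roughly like the following; the module path depends on where your code lives, so the one shown here is purely illustrative:
module example.com/example

go 1.14

require github.com/pkg/errors v0.9.1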
Now when we run the build, we will see that each time we build, the dependencies are downloaded:
$ make
[+] Building 8.2s (7/9)
 => [internal] load build definition from Dockerfile                      0.0s
...
 => [build 3/4] COPY . .                                                  0.1s
 => [build 4/4] RUN GOOS=darwin GOARCH=amd64 go build -o /out/example .   7.9s
 => => # go: downloading github.com/pkg/errors v0.9.1
This is clearly inefficient and slows things down. We can fix this by downloading our dependencies as a separate step in our Dockerfile:
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Notice that we copy the go.* files and download the modules before adding the rest of the source. This allows Docker to cache the modules, as it will only rerun these steps if the go.* files change.
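As a rough illustration (step numbers and timings will vary), a rebuild after changing only a source file, and not go.mod or go.sum, should now show the download step being served from the cache:
$ make
...
 => CACHED [build 2/5] COPY go.* .
 => CACHED [build 3/5] RUN go mod download
 => [build 4/5] COPY . .
...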
Separating the downloading of our dependencies from our build is a great improvement but each time we run the build, we are starting the compile from scratch. For small projects this might not be a problem but as your project gets bigger you will want to leverage Go’s compiler cache.
To do this, you will need to use BuildKit’s Dockerfile frontend (https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md). Our updated Dockerfile is as follows:
# syntax = docker/dockerfile:1-experimental
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Notice the # syntax line at the top of the Dockerfile that selects the experimental Dockerfile frontend, and the --mount option attached to the RUN command. This mount option means that each time the go build command is run, the container will have the cache mounted to Go’s compiler cache folder.
Benchmarking this change for the example binary on a 2017 MacBook Pro 13”, I see that a small code change takes 11 seconds to build without the cache and less than 2 seconds with it. This is a huge improvement!
All projects need tests! We’ll add a simple test for our echo function in a main_test.go file:
package main

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestEcho(t *testing.T) {
	// Test happy path
	err := echo([]string{"bin-name", "hello", "world!"})
	require.NoError(t, err)
}

func TestEchoErrorNoArgs(t *testing.T) {
	// Test empty arguments
	err := echo([]string{})
	require.Error(t, err)
}
These tests ensure that echo succeeds when given arguments and that we get an error if the echo function is passed an empty list of arguments.
We will now want another build target for our Dockerfile so that we can run the tests and build the binary separately. This will require a refactor into a base stage and then unit-test and build stages:
# syntax = docker/dockerfile:1-experimental
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
go test -v .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Note that go test uses the same cache as the build, so we mount the cache for this stage too. This allows Go to only rerun tests when there have been code changes, which makes the test runs quicker.
We can also update our Makefile to add a test target:
all: bin/example
test: lint unit-test

PLATFORM=local

.PHONY: bin/example
bin/example:
	@docker build . --target bin \
	--output bin/ \
	--platform ${PLATFORM}

.PHONY: unit-test
unit-test:
	@docker build . --target unit-test
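Note that the test target above also references a lint target, which doesn’t exist until the linter is added in the next part of the series, so for now the unit tests are run directly via the unit-test target:
$ make unit-test        # builds the unit-test stage, running go test -v . inside it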
In this post we have seen how to add Go dependencies efficiently, how to use caching to make builds faster, and how to add unit tests to our containerized Go development environment. In the next and final post of the series, we are going to complete our journey and learn how to add a linter, set up CI with GitHub Actions, and apply some extra build optimizations.
You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev
You can read more about the experimental Dockerfile syntax here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
If you’re interested in the build tooling at Docker, take a look at the Buildx repository: https://github.com/docker/buildx
Read the whole blog post series here.
The post Containerize your Go Developer Environment – Part 2 appeared first on Docker Blog.
Ansible Playbooks are very easy to read and their linear execution makes it simple to understand what will happen while a playbook is executing. Unfortunately, in some circumstances, the things you need to automate may not function in a linear fashion. For example, I was once asked to perform the following tasks with Ansible:
While the request sounded simple, upon further investigation it would prove more challenging for the following reasons:
This is a guest post from Jochen Zehnder. Jochen is a Docker Community Leader and works as a Site Reliability Engineer for 56K.Cloud. He started his career as a Software Developer, where he learned the ins and outs of creating software. He is not only focused on development but also on automation, to bridge the gap to the operations side. At 56K.Cloud he helps companies adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.
Jochen Zehnder joined 56K.Cloud in February, after working as a software developer for several years. He always tries to make life easier for everybody involved in the development process. One VS Code feature that excels at this is the Visual Studio Code Remote – Containers extension. It is one of many extensions of the Visual Studio Code Remote Development feature.
This post is based on the work Jochen did for the 56K.Cloud internal handbook. It uses Jekyll to generate a static website out of markdown files. This is a perfect example of how to make life easier for everybody. Nobody should have to know how to install, Continue reading
In this post, I’d like to share one way (not the only way!) to use kubectl to access your Kubernetes cluster via an SSH tunnel. In the future, I may explore some other ways (hit me on Twitter if you’re interested). I’m sharing this information because I suspect it is not uncommon for folks deploying Kubernetes on the public cloud to want to deploy them in a way that does not expose them to the Internet. Given that the use of SSH bastion hosts is not uncommon, it seemed reasonable to show how one could use an SSH tunnel to reach a Kubernetes cluster behind an SSH bastion host.
If you’re unfamiliar with SSH bastion hosts, see this post for an overview.
To use kubectl via an SSH tunnel through a bastion host to a Kubernetes cluster, there are two steps required:
As is the case with just about any TLS-secured connection, if the destination to which you’re connecting with kubectl doesn’t match any of Continue reading
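As a minimal sketch of the general idea (the hostnames, addresses, and port below are purely illustrative, and this isn’t necessarily the exact procedure from the full post), you open a local port forward through the bastion and then point kubectl at the forwarded port:
# Assumptions: bastion reachable as user@bastion.example.com, Kubernetes API
# server listening at 10.0.0.10:6443 on the private network.
ssh -f -N -L 6443:10.0.0.10:6443 user@bastion.example.com
# Point kubectl at the local end of the tunnel.
kubectl --server=https://127.0.0.1:6443 get nodes
# If the API server certificate doesn't list 127.0.0.1 (or localhost) as a
# valid name, TLS verification will fail; that is the mismatch alluded to above.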
Of all the sessions from DockerCon LIVE 2020, the Best Practices + How To’s track sessions received the most live views and on-demand views. Not only were these sessions highly viewed, they were also highly rated. We thought this would be the case, given that many developers are learning Docker for the first time as application containerization experiences broad adoption within IT shops. In the recently released 2020 Stack Overflow Developer Survey, Docker ranked as the #1 most wanted platform. The data is clear…developers love Docker!
This post begins our series of blog articles focusing on the key developer content that we are curating from DockerCon. What better place to start than with the fundamentals? Developers are looking for the best content by the top experts to get started with Docker. These are the top sessions from the Best Practices + How To’s track.
How to Get Started with Docker
Peter McKee – Docker
Peter’s session was the top session based on views across all of the tracks. He does an excellent job focusing on the fundamentals of containers and how to go from code to cloud. This session covers getting Docker installed, writing Continue reading
When joining a development team, it takes some time to become productive. This is usually a combination of learning the code base and getting your environment set up. Often there will be an onboarding document of some sort for setting up your environment, but in my experience it is never up to date and you always have to ask someone for help with what tools are needed.
This problem continues as you spend more time on the team. You’ll find issues because the version of the tool you’re using is different from the one used by someone on your team, or, worse, by the CI. I’ve been on more than one team where “works on my machine” has been exclaimed or written in all caps on Slack, and I’ve spent a lot of time debugging things on the CI, which is incredibly painful.
Many people use Docker as a way to run application dependencies, like databases, while they’re developing locally and for containerizing their production applications. Docker is also a great tool for defining your development environment in code to ensure that your team members and the CI are all using the same set of tools.
We do a lot of Go development Continue reading
I’ve written a few articles about Cluster API (you can see a list of the articles here), but even though I strive to make my articles easy to understand and easy to follow, many of those articles make an implicit assumption: that readers are perhaps already somewhat familiar with Linux, Docker, tools like kind, and perhaps even Kubernetes. Today I was thinking, “What about folks who are new to this? What can I do to make it easier?” In this post, I’ll talk about the first idea I had: creating a “bootstrapper” AMI that enables new users to quickly and easily jump into the Cluster API Quick Start.
Normally, in order to use the Quick Start, there are some prerequisites that are needed first (these are all clearly listed on the Quick Start page):
kubectl installed
kind (which in turn requires Docker) or an existing Kubernetes cluster up and running
For Linux users (like myself), these prerequisites are pretty easy/simple to handle. But what if you’re a Windows or Mac user? Yes, you could use Docker Desktop and then install kind (or use docker-machine, if you’re feeling adventurous). Then you’d Continue reading
Docker Hub has two major constructs to help with managing users’ access to your repository images: Organizations and Teams. Organizations are a collection of Teams, and Teams are a collection of Docker IDs.
There are a variety of ways of configuring your Teams within your Organization. In this blog post we’ll use a fictitious software company named Stark Industries which has a couple of development teams: one that works on the front end of the application and another that works on the back end. They also have a QA team and a DevOps team.
We’ll want to set up our Teams so that each engineering team can push and pull the images that they create. We’ll give the DevOps team privileges to pull images from the dev teams’ repos and the ability to push images to the repos that they own. We’ll also give the QA team read-only access to all the repos.
In Docker Hub, an organization is a collection of teams. Image repositories can be created at the organization level. We are also able to configure notifications and link to source code repositories.
Let’s set up our Organization.
Open your favorite browser and navigate Continue reading
I recently had a need to test a configuration involving the use of a single NAT Gateway servicing multiple private subnets across multiple availability zones (AZs) within a single VPC. While there are notable caveats with such a design (see the “Caveats” section at the bottom of this article), it could make sense in some use cases. In this post, I’ll show you how I used TypeScript with Pulumi to automate the creation of this design.
For the most part, if you’re familiar with Pulumi and using TypeScript with Pulumi, this will be pretty straightforward. The code I’ll show you makes a couple assumptions:
vpc
.pubSubnetIds
(for public subnets) or privSubnetIds
(for private subnets). (How to create the subnets and capture the list of IDs is left as an exercise for the reader. If you’d be interested in seeing how I do it, let me know. Continue readingI recently purchased a new Apple Magic Mouse 2 and an Apple Magic Trackpad 2—not to use with my MacBook Pro, but to use with my Fedora-powered laptop (a Lenovo 5th generation ThinkPad X1 Carbon; see my review). I know it seems odd to buy Apple accessories for a non-Apple laptop, and in this post I’d like to talk about why I bought these items as well as provide some (relatively early) feedback on how well they work with Fedora.
First, let me talk about the why behind my purchase of these items. Several years ago, I started simultaneously using both an external trackpad/touchpad and an external mouse with my macOS-based home office setup. I realize this is probably odd, but I adopted the practice as a way of eliminating “mouse finger” on my right hand. With this arrangement, I stopped trying to scroll with my right hand (either using a mouse wheel with older mice or using the scroll-enabled back of the Magic Mouse), and instead shifted scrolling to my left hand (using two-finger scrolling on the trackpad). This “division of labor” worked well. Because my existing Magic Mouse and Magic Trackpad—both earlier generations—don’t work with Continue reading
Today, the operational role of IT is obvious. The rapid developments enabled by automation create genuine business value. The results that can be achieved by automation have a direct link to a company’s business goals.
As a CTO or CIO, you sometimes need help articulating this to stakeholders and translating the IT department’s performance into business-prioritized KPIs, such as efficiency gains and cost and risk reductions. Automation is clearly an executive-level issue.
At first, Ansible was a classic tool utilized for specific automation tasks. Ansible helps your team automate routine tasks so that they can instead focus on what you want to do. The platform enables you to structure work by automating your processes.
The global, all-day digital event – Ansible Automates 2020 – takes place on June 10. The event provides inspiration as to how the automation journey can be accelerated and taken to the next level. And no, we’re not going to discuss functionality and technology all day. We want to highlight the cultural and behavioral changes that are linked to the trend towards greater automation.
For organizations to achieve the best results, Continue reading
DockerCon LIVE 2020 is a wrap, and you rocked it! Our first-ever virtual swing at the traditionally in-person event was a winner on so many levels.
One of our goals was to extend our reach to all developers and members of the community by making the conference digital and free of charge. Mission accomplished! A grand total of 78,000 folks signed up for the May 28 one-day online event.
You hailed from 193 countries (by some counts there are only 196 countries on the planet!). That includes far-flung places like Madagascar, Zimbabwe and even the Maldives. Heck, you even joined us from the Vatican City State (pop. about 800).
Whether you were a seasoned developer or just starting out, our content game was strong. Best practices, how-tos, new product features and use cases, technical deep dives, open source projects—you name it, it was on the menu.
One of our key challenges was replicating the interactivity and spontaneity of in-person events in a virtual setting, but our efforts paid off. We made sure speakers and interviewees were available for live Q&A for their whole session to engage with attendees, resulting in over 21K chats. And remember those popular Hallway Tracks Continue reading
Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we’ll consider CircleCI, which is one of the most widely used CI SaaS platforms.
To start building your image with CircleCI, you will first need to create a .circleci/config.yml file:
version: 2
jobs:
  build:
    docker:
      - image: docker:stable
    steps:
      - checkout
      - setup_remote_docker:
          version: 18.09.3
      - run: docker version
You may notice that we specified version 18.09.3 of the Docker Engine: buildx requires version 18.09 or later, but CircleCI doesn’t provide any version above 18.09.3.
At this point we are able to interact with the Docker CLI but we don’t yet have the buildx plugin installed. To install it, we will download a build from GitHub.
version: 2
jobs:
  build:
    docker:
      - image: docker:stable
    steps:
      - checkout
      - setup_remote_docker:
          version: 18.09.3
Continue reading
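The config continues in the full article with a step that installs the buildx plugin. As a rough sketch of what such a step can look like (the release version, URL, and exact commands here are illustrative and not taken from the article), one might download a buildx release from GitHub into Docker’s CLI plugins directory:
      - run:
          name: Install buildx
          command: |
            apk add --no-cache curl
            mkdir -p ~/.docker/cli-plugins
            curl -sSL -o ~/.docker/cli-plugins/docker-buildx \
              https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64
            chmod a+x ~/.docker/cli-plugins/docker-buildx
            # Older Docker CLI versions treat buildx as experimental:
            export DOCKER_CLI_EXPERIMENTAL=enabled
            docker buildx version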
The 2020 Stack Overflow Developer Survey confirms what we already knew: there’s a lot of developer love out there for Docker and it is continuing from last year.
Docker was the #1 most wanted platform and #2 most loved platform, according to the survey results published last week. We also ranked as the #3 most popular platform.
That’s no fluke. The results are based on nearly 65,000 people who code. And it’s the second year running we’ve acquitted ourselves so admirably in the annual survey. As we shared with you here last summer, developers ranked Docker as the #1 most wanted platform, #2 most loved platform and #3 most broadly used platform in the 2019 Stack Overflow Developer Survey. Those responses came from nearly 90,000 developers from around the world.
I recently wrapped up an instance where I needed to use the Unison file synchronization application across Linux, macOS, and Windows. While Unison is available for all three platforms and does work across (and among) systems running all three operating systems, I did encounter a few interoperability issues while making it work. Here’s some information on these interoperability issues, and how I worked around them. (Hopefully this information will help someone else.)
The use case here is to keep a subset of directories in sync between a MacBook Air running macOS “Catalina” 10.15.5 and a Surface Pro 6 running Windows 10. A system running Ubuntu 18.04.4 acted as the “server”; each “client” system (the MacBook Air and the Surface Pro) would synchronize with the Ubuntu system. I’ve used a nearly-identical setup for several years to keep my systems synchronized.
One thing to know about Unison before I continue is that you need compatible versions of Unison on both systems in order for it to work. As I understand it, compatibility is not just based on version numbers, but also on the OCaml version with which it was compiled.
With that in mind, I already had Continue reading
Ansible Collections are the new way to distribute and manage content. Ansible content can be modules, roles, plugins and even Ansible Playbooks. In my previous blog post, I provided a walkthrough of using Ansible Collections from Ansible Galaxy and Automation Hub. Ansible Galaxy is the upstream community for sharing Ansible Collections. Any community user can create a namespace and share content with anyone. Access to Automation Hub is included with a Red Hat Ansible Automation Platform subscription. Automation Hub only contains fully supported and certified content from Red Hat and our partners.
In this blog post we'll walk through using Ansible Collections with Ansible Tower, part of the Red Hat Ansible Automation Platform. There are a few differences between using command-line Ansible for syncing with Ansible Galaxy or the Automation Hub versus using Ansible Tower. However, it is really easy and I will show you how!
If the Ansible Collections are included in your project, you do not need to authenticate to Automation Hub. This method is where you are downloading dynamically using a requirements file, as outlined in my blog post. In general there are Continue reading
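For reference, a minimal collections requirements file of the kind a Tower project can install from (conventionally collections/requirements.yml in the project repository) looks something like this; the collection names and version constraint are only illustrative:
collections:
  - name: ansible.posix
  - name: community.general
    version: ">=1.0.0"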
Welcome to Technology Short Take #127! Let’s see what I’ve managed to collect for you this time around…
externalTrafficPolicy setting. Read his write-up here.
Nothing this time around, but I’ll stay alert for items to include next time!