Last week we announced that Docker and AWS have created an integrated, frictionless experience for developers to use Docker Compose, Docker Desktop, and Docker Hub to deploy their apps on Amazon Elastic Container Service (Amazon ECS), including Amazon ECS on AWS Fargate. On the heels of that announcement, we continue our series of blog articles curating developer content from DockerCon LIVE 2020, this time with a focus on AWS. If you run your apps on AWS, bookmark this post for easy access to the relevant insights in one place.
As more developers adopt and learn Docker, and as more organizations jump head-first into containerizing their applications, AWS continues to be the cloud of choice for deployment. Earlier this year Docker and AWS collaborated on the open Compose Specification (compose-spec.io), and as my colleague Chad Metcalf mentioned on the Docker blog, deploying straight from Docker to AWS has never been easier. It’s another step in our ongoing effort to put ourselves in the shoes of you, our customer: the developer.
The replays of these three AWS sessions are where you can learn more about container trends for developers, adopting microservices, and building and deploying multi-container Continue reading
Developing Python projects in local environments can get pretty challenging when more than one project is being developed at the same time. Bootstrapping a project takes time, as we need to manage versions and set up dependencies and configurations for it. We used to install all project requirements directly in our local environment and then focus on writing the code. But having several projects in progress in the same environment quickly becomes a problem, as we may run into configuration or dependency conflicts. Moreover, when sharing a project with teammates, we would also need to coordinate our environments. To avoid this, we have to define our project environment in a way that makes it easily shareable.
A good way to do this is to create isolated development environments for each project. This can be easily done by using containers and Docker Compose to manage them. We cover this in a series of blog posts, each one with a specific focus.
This first part covers how to containerize a Python service/tool and the best practices for it.
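As a taste of what the series covers, here is a minimal sketch of a containerized Python service; the file names, base image, and entry point are illustrative assumptions, not taken from the series itself:

```dockerfile
# Minimal sketch; names and versions are assumptions, not from the series
FROM python:3.8-slim
WORKDIR /app

# Copy and install dependencies first so they are cached as a separate layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source last, so code changes don't bust the dependency cache
COPY . .
CMD ["python", "app.py"]
```

Ordering the COPY instructions this way means dependency installation is only rerun when requirements.txt changes, which keeps rebuilds fast during development.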
Requirements
To easily exercise what we discuss in this blog post series, we need to install a minimal set Continue reading
Running containers in the cloud can be hard and confusing. There are many options to choose from, and you then need to understand how all the different clouds work, from virtual networks to security, not to mention orchestrators. It’s a learning curve, to say the least.
At Docker we are making the Developer Experience (DX) simpler. As an extension of that, we want to provide the same beloved Docker experience that developers use daily and integrate it with the cloud. Microsoft’s Azure Container Instances (ACI) provided an awesome platform to do just that.
In this tutorial, we take a look at running single containers and multiple containers with Compose in Azure ACI. We’ll walk you through setting up your docker context and even simplifying logging into Azure. At the end of this tutorial, you will be able to use familiar Docker commands to deploy your applications into your own Azure ACI account.
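As a preview of the flow, here is a minimal sketch of the commands involved; the context name is a placeholder, and the exact prompts and flags may differ from the tutorial steps that follow:

```console
$ docker login azure                      # authenticate to Azure in the browser
$ docker context create aci myacicontext  # pick a subscription and resource group
$ docker context use myacicontext         # make ACI the target of docker commands
$ docker run -p 80:80 nginx               # this container now starts in ACI
```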
To complete this tutorial, you will need:
Just about six years ago to the day, Docker hit the first milestone for Docker Compose, a simple way to lay out your containers and their connections. A talks to B, B talks to C, and C is a database. Fast forward six years and the container ecosystem has become complex. New managed container services have arrived, bringing their own runtime environments, CLIs, and configuration languages. This complexity serves the needs of operations teams who require fine-grained control, but it carries a high price for developers.
One thing that has remained constant over this time is that developers love the simplicity of Docker and Compose. This led us to ask: why do developers now have to choose between simple and powerful? Today, I am excited to finally be able to talk about the result of what we have been working on for over a year: providing developers power and simplicity from desktop to cloud using Compose. Docker is expanding our strategic partnership with Amazon and integrating the Docker experience you already know and love with Amazon Elastic Container Service (ECS) and AWS Fargate. Deploying straight from Docker to AWS has never been easier.
Today this functionality is Continue reading
Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we’ll consider Travis, which is one of the trickiest to use for this use case.
To start building your image with Travis, you will first need to create a .travis.yml file at the root of your repository:
language: bash
dist: bionic

services:
  - docker

script:
  - docker version
You may notice that we specified “bionic” to get the latest version of Ubuntu available, Ubuntu 18.04 (Bionic Beaver). As of today (May 2020), if you run this script, you’ll see that the Docker Engine version it provides is 18.06.0-ce, which is too old to be able to use buildx. So we’ll have to install Docker manually:
language: bash
dist: bionic

before_install:
  - sudo rm -rf /var/lib/apt/lists/*
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) edge"
  - sudo apt-get update
  - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce

script:
Continue reading
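For a sense of where the truncated script section is heading, here is a minimal sketch of a buildx-based multi-arch build; the QEMU image, platform list, and image tag are assumptions, not taken from the original article:

```yaml
script:
  # Older CLIs gate buildx behind the experimental flag
  - export DOCKER_CLI_EXPERIMENTAL=enabled
  # Register QEMU binfmt handlers so non-native platforms can be emulated
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  # Create a buildx builder instance and select it for subsequent builds
  - docker buildx create --use
  # Build the image for several platforms (tag and platform list are placeholders)
  - docker buildx build --platform linux/amd64,linux/arm64 -t myuser/myapp:latest .
```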
Docker is the de facto toolset for building modern applications and setting up a CI/CD pipeline, helping you build, ship, and run your applications in containers on-prem and in the cloud.
Whether you’re running on simple compute instances such as AWS EC2 or Azure VMs, or on something a little fancier like a hosted Kubernetes service such as AWS EKS or Azure AKS, Docker’s toolset is your new BFF.
But what about your local development environment? Setting up local dev environments can be frustrating to say the least.
Remember the last time you joined a new development team?
You needed to configure your local machine, install development tools, pull repositories, fight through out-of-date onboarding docs and READMEs, and get everything running and working locally without knowing a thing about the code and its architecture. Oh, and don’t forget about databases, caching layers, and message queues. These are notoriously hard to set up and develop on locally.
I’ve never worked at a place where we didn’t expect at least a week or more of onboarding for new developers.
So what are we to do? Well, there is no silver bullet and these things are hard to do (that’s why you Continue reading
Earlier this month Docker announced our partnership with Microsoft to shorten the developer commute between the desktop and running containers in the cloud. We are excited to announce the first release of the new Docker Azure Container Instances (ACI) experience today and wanted to give you an overview of how you can get started using it.
The new Docker and Microsoft ACI experience allows developers to move easily between working locally and working in the cloud with ACI, using the same Docker CLI experience they use today! We have done this by expanding the existing docker context command to support ACI as a new backend. We worked with Microsoft to target ACI because we felt its performance and ‘zero cost when nothing is running’ model made it a great place to jump into running containers in the cloud.
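In practice, switching between the local engine and ACI is just a context change; a minimal sketch, with the context name as a placeholder:

```console
$ docker context create aci myaci   # one-time setup, prompts for Azure details
$ docker context use myaci          # subsequent docker commands now target ACI
$ docker ps                         # lists containers running in ACI
$ docker context use default        # switch back to the local Docker engine
```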
ACI is a Microsoft serverless container solution for running a single Docker container or a service composed of a group of multiple containers defined with a Docker Compose file. Developers can run their containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. For production cases, you can Continue reading
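For the multi-container case, a Compose file describes the group; a minimal sketch, with service names and images that are illustrative only:

```yaml
# Hypothetical example; services and images are not from the article
services:
  web:
    image: nginx
    ports:
      - "80:80"
  api:
    image: myorg/api:latest
```

With the ACI context selected, docker compose up deploys these services together as a single container group.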
This is a guest post from Brian Christner. Brian has been a Docker Captain since 2016, is host of The Byte podcast, and is Co-Founder & Site Reliability Engineer at 56K.Cloud. At 56K.Cloud, he helps companies to adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.
It was a fantastic experience hosting my first ever virtual conference session. The commute to my home office was great, and I even picked up a coffee on the way before my session started. No more waiting in lines, queueing for food, or sitting on the conference floor somewhere in a corner to check emails.
The “DockerCon 2020 that’s a wrap” blog post highlighted that my session, “How to Become a Docker Power User using VS Code,” was one of the most popular sessions from DockerCon. Docker asked if I could write a recap and summarize some of the top questions that appeared in the chat. Absolutely.
Honestly, I liked the presenter/audience interaction more than at an in-person conference. Typically, a presenter broadcasts their content to a room full of participants, and if you are lucky and Continue reading
In this series of blog posts, we show how to put in place an optimized containerized Go development environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms. Part 2 covered how to add Go dependencies, caching for faster builds and unit tests. This third and final part is going to show you how to add a code linter, a GitHub Action CI, and some extra build optimizations.
We’d like to automate checking for good programming practices as much as possible, so let’s add a linter to our setup. The first step is to modify the Dockerfile:
# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .

FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
    go test -v .

FROM golangci/golangci-lint:v1.27-alpine AS lint-base

FROM base AS lint
COPY --from=lint-base /usr/bin/golangci-lint /usr/bin/golangci-lint
RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/root/.cache/golangci-lint \
    golangci-lint run --timeout 10m0s ./...

FROM scratch AS bin-unix
COPY Continue reading
This is the second post of our series of blog articles focusing on the key developer content that we are curating from DockerCon LIVE 2020. Increasingly, we are seeing more and more developers targeting Microsoft architectures and Azure for their containerized application deployments. Microsoft has always had a rich set of developer tools including VS Code and GitHub that work with Docker tools.
One of the biggest developments for developers using Windows 10 is the release of WSL 2 (Windows Subsystem for Linux). Instead of using a translation layer to convert Linux kernel calls into Windows calls, WSL 2 now offers its own isolated Linux kernel running on a thin version of the Hyper-V hypervisor. Be sure to check out these valuable sessions on using Docker with Microsoft tools and technologies: Simon Ferquel’s session on WSL 2, and Paul Yuknewicz’s session on apps running in Azure.
Docker Desktop + WSL 2 Integration Deep Dive
Simon Ferquel – Docker
Simon’s session provides a deep dive on how Docker Desktop on Windows works with WSL 2 to provide a better developer experience. This presentation will give you a better understanding of how Docker Desktop and WSL 2 Continue reading
This is the second part in a series of posts where we show how to use Docker to define your Go development environment in code. The goal of this is to make sure that you, your team, and the CI are all using the same environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms and shrinking the build context to speed up builds. Now we are going to go one step further and learn how to add dependencies to make the project more realistic, caching to make the builds faster, and unit tests.
The Go program from part 1 is very simple and doesn’t have any Go dependencies. Let’s add a simple dependency: the commonly used github.com/pkg/errors package:
package main

import (
    "fmt"
    "os"
    "strings"

    "github.com/pkg/errors"
)

func echo(args []string) error {
    if len(args) < 2 {
        return errors.New("no message to echo")
    }
    _, err := fmt.Println(strings.Join(args[1:], " "))
    return err
}

func main() {
    if err := echo(os.Args); err != nil {
        fmt.Fprintf(os.Stderr, "%+v\n", err)
        os.Exit(1)
    }
}
Our example program is now a simple echo program that writes out the arguments the user passes in, or prints “no message to echo” and a stack trace if no arguments are specified.
We will use Go modules to handle this dependency. Running the following commands will create the go.mod and go.sum files:
$ go mod init
$ go mod tidy
Now when we run the build, we will see that the dependencies are downloaded each time we build:
$ make
[+] Building 8.2s (7/9)
 => [internal] load build definition from Dockerfile                       0.0s
...
 => [build 3/4] COPY . .                                                   0.1s
 => [build 4/4] RUN GOOS=darwin GOARCH=amd64 go build -o /out/example .    7.9s
 => => # go: downloading github.com/pkg/errors v0.9.1
This is clearly inefficient and slows things down. We can fix this by downloading our dependencies as a separate step in our Dockerfile:
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .
FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Notice that we copy the go.* files and download the modules before adding the rest of the source. This allows Docker to cache the modules, as it will only rerun these steps if the go.* files change.
Separating the downloading of our dependencies from our build is a great improvement but each time we run the build, we are starting the compile from scratch. For small projects this might not be a problem but as your project gets bigger you will want to leverage Go’s compiler cache.
To do this, you will need to use BuildKit’s Dockerfile frontend (https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md). Our updated Dockerfile is as follows:
# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Notice the # syntax line at the top of the Dockerfile, which selects the experimental Dockerfile frontend, and the --mount option attached to the RUN command. This mount option means that each time the go build command is run, the container will have Go’s compiler cache folder mounted.
Benchmarking this change for the example binary on a 2017 MacBook Pro 13”, I see that a small code change takes 11 seconds to build without the cache and less than 2 seconds with it. This is a huge improvement!
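To try this outside of a Makefile, BuildKit just needs to be enabled; a minimal sketch, assuming the Dockerfile above:

```console
# Enable BuildKit so the experimental syntax and cache mounts are honored
$ export DOCKER_BUILDKIT=1
# The first build populates the go-build cache mount
$ docker build --target build .
# After a small code change, rebuild: only changed packages are recompiled
$ docker build --target build .
```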
All projects need tests! We’ll add a simple test for our echo function in a main_test.go file:
package main

import (
    "testing"

    "github.com/stretchr/testify/require"
)

func TestEcho(t *testing.T) {
    // Test happy path
    err := echo([]string{"bin-name", "hello", "world!"})
    require.NoError(t, err)
}

func TestEchoErrorNoArgs(t *testing.T) {
    // Test empty arguments
    err := echo([]string{})
    require.Error(t, err)
}
This test ensures that we get an error if the echo function is passed an empty list of arguments.
We will now want another build target for our Dockerfile so that we can run the tests and build the binary separately. This will require a refactor into a base stage and then unit-test and build stages:
# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .

FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
    go test -v .

FROM scratch AS bin-unix
COPY --from=build /out/example /
...
Note that go test uses the same cache as the build, so we mount the cache for this stage too. This allows Go to rerun tests only when there have been code changes, which makes the test runs quicker.
We can also update our Makefile to add a test target:
all: bin/example
test: lint unit-test

PLATFORM=local

.PHONY: bin/example
bin/example:
	@docker build . --target bin \
	--output bin/ \
	--platform ${PLATFORM}

.PHONY: unit-test
unit-test:
	@docker build . --target unit-test
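With these targets in place, day-to-day usage might look like the following; the cross-compilation example assumes the PLATFORM handling introduced in part 1:

```console
$ make                            # build bin/example for your local platform
$ make unit-test                  # run the unit tests in a container
$ make PLATFORM=darwin/amd64      # cross-compile for another platform
```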
In this post we have seen how to add Go dependencies efficiently, add caching to make builds faster, and add unit tests to our containerized Go development environment. In the next and final post of the series, we are going to complete our journey and learn how to add a linter, set up a GitHub Actions CI, and apply some extra build optimizations.
You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev
You can read more about the experimental Dockerfile syntax here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
If you’re interested in build tooling at Docker, take a look at the Buildx repository: https://github.com/docker/buildx
Read the whole blog post series here.
This is a guest post from Jochen Zehnder. Jochen is a Docker Community Leader and works as a Site Reliability Engineer for 56K.Cloud. He started his career as a software developer, where he learned the ins and outs of creating software. He is focused not only on development but also on the automation needed to bridge the gap to the operations side. At 56K.Cloud he helps companies to adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.
Jochen Zehnder joined 56K.Cloud in February, after working as a software developer for several years. He always tries to make life easier for everybody involved in the development process. One VS Code feature that excels at this is the Visual Studio Code Remote - Containers extension. It is one of many extensions of the Visual Studio Remote Development feature.
This post is based on the work Jochen did for the 56K.Cloud internal handbook. It uses Jekyll to generate a static website out of markdown files. This is a perfect example of how to make life easier for everybody. Nobody should have to know how to install, Continue reading
Of all the sessions from DockerCon LIVE 2020, the Best Practices + How To’s track sessions received the most live and on-demand views. Not only were these sessions highly viewed, they were also highly rated. We expected this, given that many developers are learning Docker for the first time as application containerization experiences broad adoption within IT shops. In the recently released 2020 Stack Overflow Developer Survey, Docker ranked as the #1 most wanted platform. The data is clear: developers love Docker!
This post begins our series of blog articles focusing on the key developer content that we are curating from DockerCon. What better place to start than with the fundamentals? Developers are looking for the best content by the top experts to get started with Docker. These are the top sessions from the Best Practices + How To’s track.
How to Get Started with Docker
Peter McKee – Docker
Peter’s session was the top session based on views across all of the tracks. He does an excellent job focusing on the fundamentals of containers and how to go from code to cloud. This session covers getting Docker installed, writing Continue reading
When joining a development team, it takes some time to become productive. This is usually a combination of learning the code base and getting your environment set up. Often there will be an onboarding document of some sort for setting up your environment, but in my experience this is never up to date and you always have to ask someone for help with what tools are needed.
This problem continues as you spend more time in the team. You’ll find issues because the version of a tool you’re using is different from the one used by someone on your team, or, worse, by the CI. I’ve been on more than one team where “works on my machine” has been exclaimed (or written in all caps on Slack), and I’ve spent a lot of time debugging things on the CI, which is incredibly painful.
Many people use Docker as a way to run application dependencies, like databases, while they’re developing locally and for containerizing their production applications. Docker is also a great tool for defining your development environment in code to ensure that your team members and the CI are all using the same set of tools.
We do a lot of Go development Continue reading
Docker Hub has two major constructs to help with managing user access to your repository images: Organizations and Teams. Organizations are a collection of Teams, and Teams are a collection of Docker IDs.
There are a variety of ways of configuring your Teams within your Organization. In this blog post we’ll use a fictitious software company named Stark Industries, which has a couple of development teams: one that works on the front end of the application and another that works on the back end. They also have a QA team and a DevOps team.
We’ll want to set up our Teams so that each engineering team can push and pull the images that they create. We’ll give the DevOps team privileges to pull images from the dev teams’ repos and the ability to push images to the repos that they own. We’ll also give the QA team read-only access to all the repos.
In Docker Hub, an organization is a collection of teams. Image repositories can be created at the organization level. We are also able to configure notifications and link to source code repositories.
Let’s set up our Organization.
Open your favorite browser and navigate Continue reading
DockerCon LIVE 2020 is a wrap, and you rocked it! Our first-ever virtual swing at the traditionally in-person event was a winner on so many levels.
One of our goals was to extend our reach to all developers and members of the community by making the conference digital and free of charge. Mission accomplished! A grand total of 78,000 folks signed up for the May 28 one-day online event.
You hailed from 193 countries (by some counts there are only 196 countries on the planet!). That includes far-flung places like Madagascar, Zimbabwe and even the Maldives. Heck, you even joined us from the Vatican City State (pop. about 800).
Whether you were a seasoned developer or just starting out, our content game was strong. Best practices, how-tos, new product features and use cases, technical deep dives, open source projects—you name it, it was on the menu.
One of our key challenges was replicating the interactivity and spontaneity of in-person events in a virtual setting, but our efforts paid off. We made sure speakers and interviewees were available for live Q&A for their whole session to engage with attendees, resulting in over 21K chats. And remember those popular Hallway Tracks Continue reading
Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we’ll consider CircleCI, one of the most widely used SaaS CI platforms.
To start building your image with CircleCI, you will first need to create a .circleci/config.yml file:
version: 2
jobs:
  build:
    docker:
      - image: docker:stable
    steps:
      - checkout
      - setup_remote_docker:
          version: 18.09.3
      - run: docker version
You may notice that we specified version 18.09.3 of the Docker Engine because buildx requires version 18.09 or later, but CircleCI doesn’t provide any version above 18.09.3.
At this point we are able to interact with the Docker CLI but we don’t yet have the buildx plugin installed. To install it, we will download a build from GitHub.
version: 2
jobs:
  build:
    docker:
      - image: docker:stable
    steps:
      - checkout
      - setup_remote_docker:
          version: 18.09.3
Continue reading
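The truncated steps above are followed by downloading the buildx plugin; a minimal sketch of what such a step can look like (the release version in the URL is an assumption):

```yaml
      # Hypothetical step; the buildx release version is an assumption
      - run:
          name: Install buildx
          command: |
            mkdir -p ~/.docker/cli-plugins
            curl -fsSL -o ~/.docker/cli-plugins/docker-buildx \
              https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64
            chmod a+x ~/.docker/cli-plugins/docker-buildx
```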
The 2020 Stack Overflow Developer Survey confirms what we already knew: there’s a lot of developer love out there for Docker, and it’s continuing from last year.
Docker was the #1 most wanted platform and #2 most loved platform, according to the survey results published last week. We also ranked as the #3 most popular platform.
That’s no fluke. The results are based on nearly 65,000 people who code. And it’s the second year running we’ve acquitted ourselves so admirably in the annual survey. As we shared with you here last summer, developers ranked Docker as the #1 most wanted platform, #2 most loved platform and #3 most broadly used platform in the 2019 Stack Overflow Developer Survey. Those responses came from nearly 90,000 developers from around the world.
DockerCon LIVE 2020 is about to kick off and there are over 64,000 community members, users and customers registered! Although we miss getting together in person, we’re excited to be able to bring even more people together to learn and share how Docker helps dev teams build great apps. As with DockerCons past, there is so much great content on the agenda for you to learn and expand your expertise around containers and applications.
We’ve been very busy here at Docker. A couple of months ago, we outlined our refocused developer strategy. Since then, we’ve made great progress executing against it and remain focused on bringing simplicity to the app-building experience, embracing the ecosystem, and helping developers and developer teams bring code to cloud faster and easier than ever before. A few examples: