Do you remember the first time you used Docker? I do. It was about six years ago and, as for many folks at the time, it looked like this:
docker run -it redis
I was not using Redis at the time but it seemed like a complicated enough piece of software to put this new technology through its paces. A quick Docker image pull and it was up and running. It seemed like magic. Shortly after that first Docker command I found my way to Docker Compose. At this point I knew how to run Redis and the docs had an example Python Flask application. How hard could it be to put the two together?
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis"
I understood immediately how Docker could help me shorten my developer “commute”: all the time I spent doing something else just to get to the work I wanted to be doing. It was awesome!
As time passed, unfortunately my commute started to get longer again. Maybe I needed to collaborate with a colleague or get more resources than I had locally. OK, I can run Docker in the cloud, let me see how Continue reading
We are really excited that Docker Desktop, with WSL 2, was featured by Scott Hanselman in a breakout session titled “The Journey to One .NET” at Microsoft Build. Earlier in his keynote, we learned about the great new enhancements for GPU support in WSL 2, and we want to hear from our community about your interest in adding this functionality to Docker Desktop. If you are eager to see GPU support come to Docker Desktop, please let us know by voting up our roadmap item, and feel free to raise any new requests here as well.
With this announcement, the launch of the Windows 10 2004 release imminent, and Docker Desktop v2.3.0.2 reaching WSL 2 GA, we thought this would be a good time to reflect on how we got to where we are today with WSL 2.
Casting our minds back to 2019 (a very different time!), we first discussed WSL 2 with Microsoft in April. We were excited to get started and wanted to find a way to get a build as soon as possible.
It turned out the easiest way to do this was to collect a laptop Continue reading
We are really excited that Docker and Snyk are now partnering to engineer container security scanning deeply into Docker Desktop and Docker Hub. Image vulnerability scanning has been one of your most requested items on our public roadmap.
Modern software uses a lot of third-party open source libraries; indeed, this is one of the things that has really raised productivity in coding, as we can reuse work to support new features in our products and save time writing implementations of APIs, protocols, and algorithms. But this comes with the downside of working out whether there are security vulnerabilities in the code that you are using. You have all told us that scanning is one of the most important roadmap issues for you.
Recall the famously huge data breach caused by the use of an unpatched version of the Apache Struts library, due to CVE-2017-5638. The CVE was issued in March 2017, and according to the official statement, while the patch should have been applied within 48 hours, it was not, and during May 2017 the websites were hacked, with the attackers having access until late July. This is everyone’s nightmare now. How can we help Continue reading
With DockerCon LIVE going live in just two weeks, we are humbled by the tremendous response from almost 50,000 Docker developers and community members, from beginner to expert, who have registered for the event.
DockerCon LIVE would not be complete without our ecosystem of partners who contribute to, and shape, the future of software development. They will be showcasing their products and solutions, and sharing the best practices they have accumulated in working with the best developers and organizations across the globe.
We are pleased to announce the agenda for our Container Ecosystem Track with sessions built just for devs. In addition to actionable takeaways, their sessions will feature interactive, live Q&A, and so much more. Check out the incredible lineup:
Access Logging Made Easy With Envoy and Fluent Bit – Carmen Puccio, Principal Solutions Architect | AWS
Docker Desktop + WSL 2 Integration Deep Dive – Simon Ferquel, Senior Software Developer | Docker | Microsoft
Experience Report: Running a Distributed System Across Kubernetes Clusters – Chris Seto, Software Engineer | Cockroach Labs
Securing Your Containerized Applications with NGINX – Kevin Jones, Senior Product Manager | NGINX
You Want To Kubernetes? You MUST Know Docker! – Angel Rivera, Continue reading
Back in March, Justin Graham, our VP of Product, wrote about how partnering with the ecosystem is a key part of Docker’s strategy to help developers and development teams get from source code to public cloud runtimes in the easiest, most efficient and cloud-agnostic way. This post will take a brief look at some of the ways that Docker’s approach to partnering has evolved to support this broader refocused company strategy.
First, to deliver the best experience for developers, Docker needs much more seamless integration with Cloud Service Providers (CSPs). Developers are increasingly looking to cloud runtimes for their applications, as evidenced by the tremendous growth that cloud container services have seen. We want to deliver the best developer experience moving forward, from local desktop to cloud, and doing that includes tight integration with any and all clouds for cloud-native development. As a first step, we’ve already announced that we are working with AWS, Microsoft and others in the open source community to extend the Compose Specification to more flexibly support cloud-native platforms. You will see us continue to progress our activity in this direction.
The second piece of Docker’s partnership strategy is offering best-in-class Continue reading
This is a guest post from Docker Captain Bret Fisher, a long-time DevOps sysadmin and speaker who teaches container skills with his popular courses Docker Mastery, Kubernetes Mastery, Docker for Node.js, and Swarm Mastery, as well as weekly YouTube Live shows. Bret also consults with companies adopting Docker. Join Bret and other Docker Captains at DockerCon LIVE 2020 on May 28th, where they’ll be live all day, hanging out, answering questions, and having fun.
When Docker announced in December that it was continuing its DockerCon tradition, albeit virtually, I was super excited and disappointed at the same time. It may sound cliché but truly, my favorite part of attending conferences is seeing old friends and fellow Captains, meeting new people, making new friends, and seeing my students in real life.
Can a virtual event live up to its in-person version? My friend Phil Estes was honest about his experience on Twitter and I agree… it’s not the same. Online events shouldn’t be one-way information dissemination. As attendees, we should be able to *do* something, not just watch.
Well, challenge accepted. We’ve been working hard for months to pull together a great event for you – and Continue reading
Part 2 in the series on Using Docker Desktop and Docker Hub Together
In part 1 of this series, we took a look at installing Docker Desktop, building images, configuring our builds to use build arguments, running our application in containers, and finally, how Docker Compose helps in this process.
In this article, we’ll walk through deploying our code to the cloud, using Docker Hub to build our images when we push to GitHub, and using Docker Hub to automate running tests.
Docker Hub is the easiest way to create, manage, and ship your team’s images to your environments, whether on-premises or in a public cloud.
The first thing you will want to do is create a Docker ID, if you do not already have one, and log in to Hub.
Once you’re logged in, let’s create a couple of repos to push our images to.
Click on “Repositories” in the main navigation bar and then click the “Create Repository” button at the top of the screen.
You should now see the “Create Repository” screen.
You can create repositories for your Continue reading
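Although the excerpt ends here, the push flow that follows repository creation is short enough to sketch. Assuming a repository named my-app under your Docker ID (both names are illustrative placeholders):

docker login
docker tag my-app your-docker-id/my-app:latest
docker push your-docker-id/my-app:latest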
The Docker Desktop WSL 2 backend has now been available for a few months for Windows 10 Insider users, and Microsoft just released WSL 2 on the Release Preview channel (which means GA is very close). We and our early users have accumulated some experience working with it and are excited to share a few best practices to implement in your Linux container projects!
Docker Desktop with the WSL 2 backend can be used as before from a Windows terminal. We focused on compatibility to keep you happy with your current development workflow.
But to get the most out of Windows 10 2004 we have some recommendations for you.
The first and most important best practice we want to share is to fully embrace WSL 2. Your project files should be stored within your WSL 2 distro of choice, you should run the docker CLI from this distro, and you should avoid accessing files stored on the Windows host as much as possible.
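In concrete terms, that workflow looks something like this (a sketch assuming an Ubuntu distro and the awesome-compose sample repository mentioned elsewhere on this page):

# From a Windows terminal, open a shell in your WSL 2 distro
wsl -d Ubuntu
# Keep project files on the Linux filesystem, not under /mnt/c
git clone https://github.com/docker/awesome-compose.git ~/projects/awesome-compose
cd ~/projects/awesome-compose/react-java-mysql
# Run the docker CLI from inside the distro
docker-compose up -d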
For backward compatibility reasons, we kept the possibility of interacting with Docker from the Windows CLI, but it is no longer the preferred option.
Running the docker CLI from WSL will bring you…
“Build once, deploy anywhere” is really nice on paper, but if you want to use ARM targets to reduce your bill, such as Raspberry Pis and AWS A1 instances, or even keep using your old i386 servers, deploying everywhere becomes a tricky problem, as you need to build your software for each of these platforms. To fix this, Docker introduced the principle of multi-arch builds, and we’ll see how to use it and put it into production.
To be able to use the docker manifest command, you’ll have to enable the experimental features.
On macOS and Windows, it’s really simple. Open the Preferences > Command Line panel and just enable the experimental features.
On Linux, you’ll have to edit ~/.docker/config.json and restart the engine.
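For reference, a minimal sketch of that flag in ~/.docker/config.json looks like this:

{
  "experimental": "enabled"
}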
OK, now we understand why multi-arch images are interesting, but how do we produce them? How do they work?
Each Docker image is represented by a manifest. A manifest is a JSON file containing all the information about a Docker image. This includes references to each of its layers, their corresponding sizes, the hash of the image, its size and also the platform it’s supposed to work on. Continue reading
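Before the excerpt cuts off, here is a sketch of how per-platform images are assembled into a multi-arch image with docker manifest (the image names are illustrative placeholders):

# Combine per-platform images into one manifest list and push it
docker manifest create your-docker-id/demo:latest \
  your-docker-id/demo:amd64 \
  your-docker-id/demo:arm64
docker manifest push your-docker-id/demo:latest
# Inspect the result to see the per-platform entries
docker manifest inspect your-docker-id/demo:latest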
DockerCon LIVE 2020 is the opportunity for the Docker Community to connect while socially distancing, have fun, and learn from each other. From beginner to expert, DockerCon speakers are getting ready to share their tips and tricks for becoming more productive and finding joy while building apps.
From the Docker team, Engineer Dave Scott will share how recent updates to Docker Desktop help deliver quicker edit-compile-test cycles to developers. He’ll dive into the New Docker Desktop Filesharing Features that enabled this change and how to use them effectively to speed up your dev workflow.
Docker Engineer Anca Lordache will cover Best Practices for Compose-managed Python Applications, from bootstrapping your project to reproducing your builds and making them more optimized.
If you are a Node.js developer, DigitalOcean’s Kathleen Juell will share How to Build and Run Node Apps with Docker and Compose.
And for PHP devs, Erika Heidi from DigitalOcean will demonstrate How to Create PHP Development Environments with Docker Compose, using a Laravel 6 application as a case study. She’ll show how to define and integrate services, share files between containers, and manage your environment with Docker Compose commands.
Or if it’s Continue reading
The Dockerfile is the starting point for creating a Docker image. The file format provides a well-defined set of directives that allow you to copy files or folders, run commands, set environment variables, and do other tasks required to create a container image. It’s really important to craft your Dockerfile well to keep the resulting image secure, small, quick to build, and quick to update.
In this post, we’ll see how to write good Dockerfiles that speed up your development flow, ensure build reproducibility, and produce images that can be confidently deployed to production.
Note: for this blog post we’ll base our Dockerfile examples on the react-java-mysql sample from the awesome-compose repository.
As developers, we want to match our development environment to the target production context as closely as possible to ensure that what we build will work when deployed.
We also want to be able to develop quickly, which means we want builds to be fast and developer tools like debuggers to be usable. Containers are a great way to codify our development environment, but we need to define our Dockerfile correctly to be able to interact quickly with our containers.
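To make that concrete, here is a minimal sketch (not taken from the react-java-mysql sample itself) of a Dockerfile whose development stage keeps the tooling we want locally while the production stage stays small:

# Development stage: full toolchain for fast iteration
FROM node:14 AS development
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]

# Build stage: produce the static production assets
FROM development AS build
RUN npm run build

# Production stage: only the built assets, served by a small image
FROM nginx:1.17-alpine AS production
COPY --from=build /app/build /usr/share/nginx/html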
After receiving many excellent CFP submissions, we are thrilled to finally announce the first round of speakers for DockerCon LIVE on May 28th starting at 9am PT / GMT-7. Check out the agenda here.
In order to maximize the opportunity to connect with speakers and learn from their experience, talks are pre-recorded and speakers are available for live Q&A for their whole session. From best practices and how-tos to new product features and use cases; from technical deep dives to open source projects in action, there are a lot of great sessions to choose from, like:
Simon Ferquel, Docker
Julie Lerman, The Data Farm
Lukonde Mwila, Entelect
Clemente Biondo, Engineering Ingegneria Informatica
Erika Heidi, DigitalOcean
Elton Stoneman, Container Consultant and Trainer
Brandon Mitchell, Boxboat
In today’s fast-paced development world, CTOs, dev managers, and product managers demand quicker turnarounds for features and defect fixes. “No problem, boss,” you say. “We’ll just use containers.” And you would be right, but once you start digging in and looking at ways to get started with containers, well, quite frankly, it’s complex.
One of the biggest challenges is getting a toolset installed and set up where you can build images, run containers, and duplicate a production Kubernetes cluster locally. And then shipping containers to the cloud, well, that’s a whole ‘nother story.
Docker Desktop and Docker Hub are two of the foundational toolsets to get your images built and shipped to the cloud. In this two-part series, we’ll get Docker Desktop set up and installed, build some images and run them using Docker Compose. Then we’ll take a look at how we can ship those images to the cloud, set up automated builds, and deploy our code into production using Docker Hub.
Docker Desktop is the easiest way to get started with containers on your development machine. Docker Desktop comes with the Docker Engine, Docker CLI, Docker Compose, and Kubernetes. With Docker Desktop there Continue reading
The multistage builds feature in Dockerfiles enables you to create smaller container images with better caching and a smaller security footprint. In this blog post, I’ll show some more advanced patterns that go beyond copying files between a build and a runtime stage, allowing you to get the most out of the feature. If you are new to multistage builds, you probably want to start by reading the usage guide first.
The latest Docker versions come with a new opt-in builder backend, BuildKit. While all the patterns here work with the older builder as well, many of them run much more efficiently when the BuildKit backend is enabled. For example, BuildKit efficiently skips unused stages and builds stages concurrently when possible. I’ve marked these cases under the individual examples. If you use these patterns, enabling BuildKit is strongly recommended. All other BuildKit-based builders support these patterns as well.
• • •
Multistage builds added a couple of new syntax concepts. First of all, you can name a stage that starts with a FROM command with AS stagename and use the --from=stagename option in a COPY command to copy files from that stage. In fact, the FROM command and --from flag Continue reading
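A minimal sketch of that naming syntax (the Go program and paths are illustrative, not from the post):

# Name the build stage so later stages can reference it
FROM golang:1.14 AS builder
WORKDIR /src
COPY main.go .
RUN go build -o /app main.go

# Copy only the built binary out of the named stage
FROM alpine:3.11
COPY --from=builder /app /usr/local/bin/app
CMD ["app"]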
One of the most common challenges we hear from developers is how getting started with containers can sometimes feel daunting. It’s one of the needs Docker is focusing on in its commitment to developers and dev teams. Our two aims: teach developers and help accelerate their onboarding.
With the benefits of Docker so appealing, many developers are eager to get something up and running quickly. That’s why, with the Docker Desktop Edge 2.2.3 release, we have launched a brand new “Quick Start” guide which displays after first installation and shows users the Docker basics: how to quickly clone, build, run, and share an image to Docker Hub directly in Docker Desktop.
To keep everything in one place, we’ve crafted the guide with a built-in terminal so that you can paste commands directly — or type them out yourself. It’s a light-touch and integrated way to get something up and running.
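If you have not seen the guide yet, the commands it walks you through look roughly like this (a sketch; the image and repository names are illustrative):

# Clone the tutorial repository
git clone https://github.com/docker/getting-started.git
cd getting-started
# Build the tutorial image and run it locally
docker build -t docker101tutorial .
docker run -d -p 80:80 --name docker-tutorial docker101tutorial
# Share it by tagging and pushing to Docker Hub
docker tag docker101tutorial your-docker-id/docker101tutorial
docker push your-docker-id/docker101tutorial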
You might expect that this new container you’ve spun up would be just a run-of-the-mill “hello world”. Instead, we’re providing you with a resource for further hands-on learning that you can do at your own pace.
This Docker tutorial, accessible on Continue reading
Recently we released a new Edge version, 2.2.3.0, of Docker Desktop for Windows. This can be considered a release candidate for the next Stable version that will officially support WSL 2. With Windows 10 version 2004 in sight, we are giving the next version of Docker Desktop the final touches to give you the best experience running Linux containers on Windows 10.
One of the great benefits is that with the next update of Windows 10 we will also support running Docker Desktop on Windows 10 Home. We worked closely with Microsoft during the last few months to make Docker Desktop and WSL 2 fit together.
In this blog post we look behind the scenes at how we set up new WSL 2 capable test machines to run automated tests in our CI pipeline.
Let’s keep in mind that all automation starts somewhere with manual steps, and you evolve from there to get better and more automated. At the beginning of this project, back at KubeCon 2019, we were given a laptop with an early version of WSL 2.
With that single laptop our development team could start getting their Continue reading
Although March has come and gone, you can still take part in the awesome activities put together by the community to celebrate Docker’s 7th birthday.
Denise Rey and Captains Łukasz Lach, Marcos Nils, Elton Stoneman, Nicholas Dille, and Brandon Mitchell put together an amazing birthday challenge for the community to complete, and it is still available. If you haven’t checked out the hands-on learning content yet, go to the birthday page and earn your seven badges (and don’t forget to share them on Twitter).
Captain Bret Fisher hosted a 3-hour live Birthday Show with the Docker team and Captains. You can check out the whole thing on Docker’s YouTube channel, or skip ahead using the timestamps below:
And while many Community Leaders had to cancel in-person meetups due to the evolving COVID-19 situation, they and their communities still showed up and shared their #mydockerbday stories. There Continue reading
At Docker, we are always looking for ways to make developers’ lives easier either directly or by working with our partners. Improving developer productivity is a core benefit of using Docker products and recently one of our partners made an announcement that makes developing cloud-native apps easier.
AWS announced that its customers can now configure their Amazon Elastic Container Service (ECS) applications deployed in Amazon Elastic Compute Cloud (EC2) mode to access Amazon Elastic File System (EFS) file systems. This is good news for Docker developers who use Amazon ECS. It means that Amazon ECS now natively integrates with Amazon EFS to automatically mount shared file systems into Docker containers. This allows you to deploy workloads that require access to shared storage such as machine learning workloads, containerizing legacy apps, or internal DevOps workloads such as GitLab, Jenkins, or Elasticsearch.
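As a rough sketch (abbreviated, with a placeholder file system ID), the wiring in an ECS task definition looks like this:

{
  "family": "shared-storage-app",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "memory": 512,
      "mountPoints": [
        { "sourceVolume": "shared-efs", "containerPath": "/data" }
      ]
    }
  ],
  "volumes": [
    {
      "name": "shared-efs",
      "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
    }
  ]
}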
The beauty of containerizing your applications is that it provides a better way to create, package, and deploy software across different computing environments in a predictable and easy-to-manage way. Containers were originally designed to be stateless and ephemeral (temporary). A stateless application is one that neither reads nor stores information about its state from one time that it is run Continue reading
Docker is pleased to announce that we have created a new open community to develop the Compose Specification. This new community will be run with open governance, with input from all interested parties, allowing us together to create a new standard for defining multi-container apps that can be run from the desktop to the cloud.
Docker is working with Amazon Web Services (AWS), Microsoft and others in the open source community to extend the Compose Specification to more flexibly support cloud-native platforms like Kubernetes and Amazon Elastic Container Service (Amazon ECS) in addition to the existing Compose platforms. Opening the specification will allow innovation to flourish and deliver more choices to developers, accelerating how development teams build and ship applications.
Currently used by millions of developers, and with over 650,000 Compose files on GitHub, Compose has been widely embraced by developers because it is a simple, cloud- and platform-agnostic way of defining multi-container based applications. Compose dramatically simplifies the code-to-cloud process and toolchain for developers by allowing them to define a complex stack in a single file and run it with a single command. This eliminates the need to build and start every container manually, saving development Continue reading
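That single command is docker-compose up. For a two-service stack like the Flask-and-Redis example at the top of this page, it replaces the manual sequence entirely (a sketch; the network and container names are illustrative):

# One command for the whole stack...
docker-compose up -d

# ...instead of creating a network and starting each container by hand:
docker network create app-net
docker build -t web .
docker run -d --name redis --network app-net redis
docker run -d -p 5000:5000 --name web --network app-net web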
Docker Desktop is getting ready to celebrate its fourth birthday in June this year. We have come a long way from our first version and have big plans for what we would like to do next. As part of our future plans, we are going to be kicking off a new early access program for Docker Desktop called Docker Desktop Developer Preview, and we need your help!
This program is for a small number of heavy Docker Desktop users who want to interact with the Docker team and impact the future of Docker Desktop for millions of users around the world.
As a member of this group, you will work with us to look at and experiment with our new features. You will get direct access to the people who are building Docker Desktop every day. You will meet with our engineering team, product manager, and community leads to share your feedback, tell us what is working in our new features and how we could improve, and also help us really dig in when something doesn’t work quite right.
On top of that, you will have a chance to Continue reading