Yesterday marked International Women’s Day, a global day celebrating the social, cultural, economic and political achievements of women. In that spirit, we’re thrilled to announce that we’re partnering with Girl Develop It, a national 501(c)(3) nonprofit that provides affordable and judgment-free opportunities for adult women interested in learning web and software development through accessible in-person programs. Through welcoming, low-cost classes, GDI helps women of diverse backgrounds achieve their technology goals and build confidence in their careers and their everyday lives.
Girl Develop It deeply values community and supportive learning for women regardless of race, education level, income, or upbringing, and those are values we share. The Docker team is committed to creating welcoming spaces for all members of the tech community. To work proactively toward this goal, we have launched several initiatives to strengthen the Docker community and promote diversity in the larger tech community, including our DockerCon Diversity Scholarship Program, which provides mentorship and a financial scholarship to attend DockerCon. P.S. Are you a woman in tech who wants to attend DockerCon in Austin, April 17th-20th? Use code womenintech for 50% off your ticket!
In collaboration with the Continue reading
Today we’re excited to announce the beta of Docker Community Edition (CE) for Google Cloud Platform (GCP). Users interested in helping test and improve Docker CE for GCP should sign up at beta.docker.com. We’ll let users into the beta as the product matures and stabilizes, and we’re looking forward to your input and suggestions.
Docker CE for GCP is built on the same principle as Docker CE for AWS and Docker CE for Azure and provides a Docker setup on GCP that is:
Docker CE for GCP is the first Docker edition to launch using the InfraKit project. InfraKit helps us configure cloud infrastructure quickly, design upgrade processes and self-healing tailored to Docker’s built-in orchestration, and smooth out differences between cloud providers, giving Docker users a consistent container platform that maximizes portability.
Welcome to the first in our series of blog posts for Getting Started with Ansible Tower. This series covers basic installation and functions of Tower and an overview of how to use Tower to implement IT automation.
To get started with Tower, you must first learn to install and stand up a single host. Future posts will cover other types of configurations, such as a redundant installation with an external database. For this post, we’ll be highlighting RHEL 7 and Ubuntu LTS.
1. Download the latest Tower edition
If you haven’t already, visit the trial page to have a download link sent to you. Our AMIs for AWS and our Vagrant image can be found there as well. If you have network restrictions, contact Ansible Sales and they can send you the bundled installer.
Note: We are currently working on a bundled installer for Ubuntu LTS; in the meantime, use the standard installer on Ubuntu.
2. Unpack the file (tar xzvf towerlatest)
$ tar xzvf towerlatest
ansible-tower-setup-3.1.0/
ansible-tower-setup-3.1.0/group_vars/
ansible-tower-setup-3.1.0/group_vars/all
...
For the bundled installer, the command is tar xzvf towerbundlelatest:
$ tar xzvf Continue reading
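The unpack step can be tried end-to-end with a stand-in archive. This is just an illustration of the mechanics; the directory layout below mirrors the listing above, and towerlatest stands in for whatever filename your download link produced.

```shell
# Build a miniature stand-in tarball with the same layout as the real
# setup bundle, then unpack it the same way the instructions do.
mkdir -p src/ansible-tower-setup-3.1.0/group_vars
touch src/ansible-tower-setup-3.1.0/group_vars/all
tar -C src -czf towerlatest ansible-tower-setup-3.1.0

# The actual step from the instructions:
tar xzvf towerlatest

# The setup directory is now in the current working directory.
ls ansible-tower-setup-3.1.0/group_vars
```

From here, the remaining setup happens inside the extracted ansible-tower-setup directory.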
Back in October 2016, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the second in a two-part series that dives more deeply into the internals of InfraKit.
In the first installment of this two-part series about the internals of InfraKit, we presented InfraKit’s design, architecture, and approach to high availability. We also discussed how it can be combined with other systems to give distributed computing clusters self-healing and self-managing properties. In this installment, we present an example of leveraging Docker Engine in Swarm Mode to achieve high availability for InfraKit, which in turn enhances the Docker Swarm cluster by making it self-healing.
One of the key architectural features of Docker in Swarm Mode is the manager quorum powered by SwarmKit. The manager quorum stores information about the cluster, and the consistency of information is achieved through consensus via the Raft consensus algorithm, which is also at the heart of other systems like Etcd. This guide gives an overview of the architecture of Docker Swarm Mode and how the manager quorum maintains the state of the cluster.
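To make the manager quorum concrete, here is a sketch of standing up a three-manager swarm with the stock Docker CLI. The IP address is a placeholder; an odd number of managers lets the Raft quorum tolerate the loss of one member while a majority remains.

```shell
# On the first node: create the swarm (advertise address is a placeholder).
docker swarm init --advertise-addr 10.0.0.1

# Print the join command/token for additional *manager* nodes.
docker swarm join-token manager

# Run the printed "docker swarm join --token ..." command on two more nodes.
# Then, back on any manager, confirm the three-member quorum:
docker node ls
```

These commands require a running Docker Engine on each node, so they are shown here as an operational sketch rather than something to paste blindly.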
One aspect of the cluster state Continue reading
Long-time readers of my site know that I’m a fan of Markdown, and I use it extensively. (This blog, in fact, is written entirely in Markdown and converted to HTML using Jekyll on GitHub Pages.) Since migrating to Linux as my primary desktop OS, I’ve also made the transition to doing almost all my presentations in Markdown. Here are the details on how I’m using Markdown for creating presentations on Linux.
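A Markdown deck generally looks something like this. The "---" slide separator and the pandoc conversion shown are one common convention for illustration only, not necessarily the toolchain described in the full post.

```shell
# Write a tiny two-slide deck; "---" is a common slide delimiter in
# Markdown presentation tools (the exact convention varies by tool).
cat > slides.md <<'EOF'
# Markdown Presentations on Linux

Plain text, diffable, and editor-agnostic.

---

## Workflow

1. Write slides in Markdown
2. Convert to HTML or PDF slides
3. Present from a browser or PDF viewer
EOF

# One possible conversion (assumes pandoc and reveal.js are available):
#   pandoc -t revealjs -s slides.md -o slides.html
wc -l slides.md
```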
There are a number of tools involved in my workflow for creating Markdown-based presentations on Linux:
This post is part of a series of posts sharing the stories of other users who have decided to migrate to Linux as their primary desktop OS. Each person’s migration (and their accompanying story) is unique; some people have embraced Linux only on their home computer; others are using it at work as well. I believe that sharing this information will help readers who may be considering a migration of their own, and who have questions about whether this is right for them and their particular needs.
For more information about other migrations, see part 1 or part 2 of the series.
This time around we’re sharing the story of Rynardt Spies.
Q: Why did you switch to Linux?
In short, I’ve always been at least a part-time Linux desktop user and a heavy RHEL server user. My main work machine runs Windows. However, because of my work with AWS, Docker, etc., I find that being on a Linux machine with all the Linux tools at hand (especially OpenSSL and simple built-in tools like SSH) is invaluable when working in a Linux world. However, I’ve always used Linux Mint or Ubuntu (basically Debian-derived distributions) for my desktop Continue reading
With the introduction of swarm mode in Docker 1.12, we showed the world how simple it can be to provision a secure and fully-distributed Docker cluster on which to deploy highly available and scalable applications. The latest Docker 1.13 builds on and improves these capabilities with new features, such as secrets management.
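As a taste of the 1.13 secrets feature, here is a minimal sequence; the secret and service names are made up for illustration, and swarm mode must already be enabled on the node.

```shell
# Create a secret from stdin (the trailing "-" reads the value from stdin).
echo "s3cr3t-db-password" | docker secret create db_password -

# Grant a service access to it. Inside the container the secret appears
# as a file at /run/secrets/db_password, rather than in the image or in
# environment variables.
docker service create --name web --secret db_password nginx:alpine

# List secrets known to the swarm (values are never shown).
docker secret ls
```

These commands need a Docker 1.13+ engine in swarm mode, so treat this as an operational sketch.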
Continuing with the trend that simplicity is paramount to empowering individuals and teams to achieve their goals, today we are bringing swarm mode support to Docker Cloud, with a number of new cloud-enabled capabilities. All of this is in addition to the continuous integration (CI) features of Docker Cloud, including automatic builds, tests, security scans and the world’s largest hosted registry of public and private Docker image repositories.
Fleet Management using Docker ID
Keeping track of many swarms sprawling across multiple regions or cloud providers can be a challenge. And securely connecting to remote swarms with TLS means teams must also spend time configuring and maintaining a public key infrastructure (PKI). By registering new or existing swarms with Docker Cloud, teams can now easily manage a large number of swarms running anywhere, and need only their Docker ID to authenticate and securely access any of them.
Docker Continue reading
Welcome to Technology Short Take #79! There’s lots of interesting links for you this time around.
(a grep of all my blog posts found nothing), so let me rectify that first. Skydive is (in the project’s own words) an “open source real-time network topology and protocols analyzer.” The project’s GitHub repository is here, and documentation for Skydive is here.
Nothing this time around. Should I keep this section, or ditch it? Feel free to give me your feedback on Twitter.
In conjunction with the introduction of Docker Enterprise Edition (EE), we are excited to announce the Docker Certification Program and the availability of partner technologies through Docker Store. A vibrant ecosystem is a sign of a healthy platform, and by providing a program that aligns Docker’s commercial platform with the innovation coming from our partners, we are collectively expanding choice for customers investing in the Docker platform.
The Docker Certification Program is designed for both technology partners and enterprise customers to recognize Containers and Plugins that excel in quality, collaborative support and compliance. Docker Certification is aligned to the available Docker EE infrastructure and gives enterprises a trusted way to run more technology in containers with support from both Docker and the publisher. Customers can quickly identify the Certified Containers and Plugins with visible badges and be confident that they were built with best practices and tested to operate smoothly on Docker EE.
There are three categories of Docker Certified technology available:
Today we are announcing Docker Enterprise Edition (EE), a new version of the Docker platform optimized for business-critical deployments. Docker EE is supported by Docker Inc., is available on certified operating systems and cloud providers and runs certified Containers and Plugins from Docker Store. Docker EE is available in three tiers: Basic comes with the Docker platform, support and certification, and Standard and Advanced tiers add advanced container management (Docker Datacenter) and Docker Security Scanning.
For consistency, we are also renaming the free Docker products to Docker Community Edition (CE) and adopting a new lifecycle and time-based versioning scheme for both Docker EE and CE. Today’s Docker CE and EE 17.03 release is the first to use the new scheme.
Docker CE and EE are released quarterly, and CE also has a monthly “Edge” option. Each Docker EE release is supported and maintained for one year and receives security and critical bugfixes during that period. We are also maintaining each quarterly CE release for four months, which gives Docker CE users a one-month window to update from one version to the next.
Both Docker CE and EE are available on a wide range of Continue reading
I’ve been spending some time recently with CentOS Atomic Host, the container-optimized version of CentOS (part of Project Atomic). By default, the Docker Engine on CentOS Atomic Host listens only to a local UNIX socket, and is not accessible over the network. While CentOS has its own particular way of configuring the Docker Engine, I wanted to see if I could—in a very “systemd-like” fashion—make Docker Engine on CentOS listen on a network socket as well as a local UNIX socket. So, I set out with an instance of CentOS Atomic Host and the Docker systemd docs to see what I could do.
The default configuration of Docker Engine on CentOS Atomic Host uses a systemd unit file that references an external environment file; specifically, it references values set in /etc/sysconfig/docker, as you can see from this snippet of the docker.service unit file:
ExecStart=/usr/bin/dockerd-current \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY
The $OPTIONS variable, along with the other variables at the end of the ExecStart line, is defined in /etc/sysconfig/docker. That value, by default, looks like this:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
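One way to get a network listener from here (a sketch under the assumption that the environment-file approach is acceptable; the article’s eventual method may differ) is to extend OPTIONS with -H flags so dockerd binds a TCP socket in addition to the default UNIX socket. The edit is shown against a local copy of the file; on the real host you would edit /etc/sysconfig/docker itself and restart the service. Note that port 2375 is unencrypted; for anything beyond a lab, use TLS on 2376.

```shell
# Work on a local copy with the default value shown above.
cat > docker.sysconfig <<'EOF'
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
EOF

# Append -H flags so dockerd listens on both the UNIX and a TCP socket.
sed -i "s|^OPTIONS='\(.*\)'$|OPTIONS='\1 -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375'|" docker.sysconfig
cat docker.sysconfig

# On the real host, after editing /etc/sysconfig/docker:
#   sudo systemctl restart docker
```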
We're excited to announce the release of Ansible Tower 3.1. Our engineering team has been hard at work on enhancing Ansible Tower to allow teams to harness the power of automation across servers, applications, environments, and networks. With Ansible Tower 3.1, we've brought together a variety of enhancements that allow your teams to automate more processes, more frequently, and more easily analyze the results of your automation across the enterprise.
Ansible brought simple, agentless automation to IT. But some IT processes don't lend themselves to being automated in a single Playbook - if you're provisioning environments, you may want to handle basic provisioning, default configuration, and application deployment differently. And once you've automated those tasks, you want to reuse those tasks in different ways, or in different environments. Plus, what if a deployment goes wrong? You may need to back your environment out to the last known good state.
To solve these issues, we developed Tower workflows. With Tower workflows, you can chain any number of Playbooks together into a workflow, with each workflow step potentially using a different Playbook, inventory, set of credentials, and more. Easily launch one or more Continue reading
Shortly after I announced my intention to migrate to Linux as my primary desktop OS, a number of other folks contacted me and said they had made the same choice or they had been encouraged by my decision to also try it themselves. It seems that there is a fair amount of pent-up interest—at least in the IT community—to embrace Linux as a primary desktop OS. Given the level of interest, I thought it might be helpful for readers to hear from others who are also switching to Linux as their primary desktop OS, and so this post kicks off a series of posts where I’ll share other users’ stories about their Linux migration.
In this first post of the series, you’ll get a chance to hear from Roddy Strachan. I’ve structured the information in a “question-and-answer” format to make it a bit easier to follow.
Q: Why did you switch to Linux?
I was a heavy Windows user due to corporate requirements. It was just easy to run Windows. I never ran the standard corporate build, but instead ran my own managed version of Windows 10; this worked well. I switched because I wanted to experiment with Linux Continue reading
Last week, we hosted a containerd summit for contributors and maintainers. Containerd is a core container runtime with an emphasis on simplicity, robustness and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, snapshot storage for container filesystems and a few other things to make the management of containers robust.
We started off by getting everyone up to speed on the project, roadmap and goals before diving down into specific issues and design of containerd. We had a couple breakout sessions where we discussed blocking issues and feature requests by various members of the community. You can see a summary of the breakout sessions in last week’s development report in the containerd repository and the various presentations below:
We ended the day with discussions around governance and extension model. Watch this video recording to learn more about why and how core contributors are thinking about integrating containerd with other Continue reading
It’s that time of the year again…the DockerCon Agenda Builder is live!
Whether you are a Docker beginner or have been dabbling in containers for a while now, we’re confident that DockerCon 2017 will have the right content for you. With 7 tracks and more than 60 sessions presented by Docker Engineering, Docker Captains, community members and corporate heavyweights such as Intuit, MetLife, PayPal, Activision and Netflix, DockerCon 2017 will cover a wide range of container tech use cases and topics.
We encourage you to review the catalogue of DockerCon sessions and build your agenda for the week. You’ll find a new agenda builder that allows you to apply filters based on your areas of interest, experience, job role and more!
One of our favorite features of the Agenda Builder is the recommendations generated based on your profile and marked interest sessions. To unlock the recommendations feature you’ll need to sign up for a DockerCon account.
Within this tool you’ll be able to adjust your agenda, rate sessions, and add notes to reference after the conference. All of your selections will be available in the DockerCon mobile app once Continue reading
It’s been a year since the first networking modules were developed and included in Ansible 2.0. Since then, there have been two additional Ansible releases and more than 175 modules added, with 24 networking vendor platforms enabled. With the fantastic efforts from the community and our networking partners, Ansible has been able to add more and more features for networking use cases. In the forthcoming Ansible 2.3 release, the focus on networking enablement now turns to increasing performance and adding connection methods that provide compatibility and flexibility.
Looking ahead to Ansible 2.3, the most notable additions planned are:
Since Ansible 2.0, the primary focus for networking enablement has been to help increase the number of third-party devices that have modules included by default. As this list grows (we expect to have even more platforms and modules in Ansible 2.3), Ansible and Ansible Tower continue to be trusted components of critical networking production deployments.
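For a flavor of what these modules look like in practice, here is a minimal playbook using ios_command, one of the core networking modules in the Ansible 2.x series. The inventory group name is a placeholder, and the connection details you would normally supply (credentials, provider settings) are omitted for brevity.

```shell
# Write a minimal playbook that runs "show version" against IOS devices.
# Group name and credentials are placeholders for illustration.
cat > show_version.yml <<'EOF'
---
- name: Gather version info from IOS devices
  hosts: ios_switches
  gather_facts: no
  connection: local
  tasks:
    - name: run "show version"
      ios_command:
        commands:
          - show version
      register: result

    - debug:
        var: result.stdout_lines
EOF

# To run it (assumes Ansible with the networking modules installed):
#   ansible-playbook -i inventory show_version.yml
```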
The development of these plugins further demonstrates the value and investment Ansible and the community have made into networking infrastructure enablement. As we approach the Ansible Continue reading
This post addresses a (mostly) cosmetic issue with the current way that Arista distributes its Vagrant box for vEOS. I say “mostly cosmetic” because while the Vagrant box for vEOS is perfectly functional if you use it via Arista’s instructions, adding metadata as I explain here provides a small bit of additional flexibility should you need multiple versions of the vEOS box on your system.
If you follow Arista’s instructions, then you’ll end up with something like this when you run vagrant box list:
arista-veos-4.18.0 (virtualbox, 0)
bento/ubuntu-16.04 (virtualbox, 2.3.1)
centos/6 (virtualbox, 1611.01)
centos/7 (virtualbox, 1611.01)
centos/atomic-host (virtualbox, 7.20170131)
coreos-stable (virtualbox, 1235.9.0)
debian/jessie64 (virtualbox, 8.7.0)
Note that the version of the vEOS box is embedded in the name. Now, you could leave the version out of the name, but because there’s no metadata (which is why it shows (virtualbox, 0) on that line) you wouldn’t have any way of knowing which version you had. Further, what happens when you want to have multiple versions of the vEOS box?
Fortunately, there’s an easy fix (inspired by the way CoreOS distributes their Vagrant box). Just create a file with the Continue reading
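The truncated instructions continue with a box catalog file; as a sketch of what that typically looks like, here is Vagrant’s standard box metadata format. The box name, version, and file path below are placeholders; point "url" at your downloaded .box file.

```shell
# Write a Vagrant box catalog (metadata) file. Names and the file path
# are placeholders for illustration.
cat > veos.json <<'EOF'
{
  "name": "arista/veos",
  "versions": [
    {
      "version": "4.18.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///path/to/vEOS-lab-4.18.0.box"
        }
      ]
    }
  ]
}
EOF

# Adding the box via the catalog records real version metadata:
#   vagrant box add veos.json
# "vagrant box list" should then show something like:
#   arista/veos (virtualbox, 4.18.0)
python3 -m json.tool veos.json > /dev/null && echo "valid JSON"
```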
Are you looking for Spring Break plans with the family? Look no further than DockerCon 2017! Located in sunny Austin, Texas April 17-20, DockerCon provides learning and entertainment for all members of the family.
As part of our efforts to make DockerCon’s doors open to all, we are excited to announce that we will be partnering again this year with Big Time Kid Care to provide childcare at DockerCon! Gone are the days of “Mom / Dad has to stay home with the kids…” – you can now bring the whole family to DockerCon!
Childcare will be offered:
Following the success of last year, we have chosen Big Time Kid Care as our childcare provider. All caregivers and staff are certified, fully insured and experienced in child education and care, with police background checks. Big Time Kid Care will be well equipped and excited to take good care of your little ones in a kid-friendly play room close to the DockerCon activities at the Austin Convention Center. Games, activities, breakfast and lunch will be provided.
Five years ago today, Michael DeHaan created this commit:
commit f31421576b00f0b167cdbe61217c31c21a41ac02
Author: Michael DeHaan
Date: Thu Feb 23 14:17:24 2012 -0500
Genesis.
When you create something you intend to release as open-source software, you never know if it will be something others are actually interested in using (much less contributing to).
Michael invited me to join Ansible when it was just over a year old as a project, and I have seen it grow from an already wildly popular project into something used by people around the world. The thing that makes Ansible the strongest though, by far, is its community of users and contributors. So join us in wishing Happy Birthday by sharing how you innovate with Ansible!