Archive

Category Archives for "Systems"

Docker Partners with Girl Develop It and Launches Pilot Class

Yesterday marked International Women’s Day, a global day celebrating the social, cultural, economic and political achievements of women. In that spirit, we’re thrilled to announce that we’re partnering with Girl Develop It, a national 501(c)3 nonprofit that provides affordable and judgment-free opportunities for adult women interested in learning web and software development through accessible in-person programs. Through welcoming, low-cost classes, GDI helps women of diverse backgrounds achieve their technology goals and build confidence in their careers and their everyday lives.

Docker and Girl Develop It

Girl Develop It deeply values community and supportive learning for women regardless of race, education level, income and upbringing, and those are values we share. The Docker team is committed to ensuring that we create welcoming spaces for all members of the tech community. To proactively work towards this goal, we have launched several initiatives to strengthen the Docker community and promote diversity in the larger tech community, including our DockerCon Diversity Scholarship Program, which provides mentorship and a financial scholarship to attend DockerCon. PS — Are you a woman in tech who wants to attend DockerCon in Austin, April 17th-20th? Use code womenintech for 50% off your ticket!

Launching Pilot Class

In collaboration with the Continue reading

Beta Docker Community Edition for Google Cloud Platform

Today we’re excited to announce beta Docker Community Edition (CE) for Google Cloud Platform (GCP). Users interested in helping test and improve Docker CE for GCP should sign up at beta.docker.com. We’ll let users into the beta as the product matures and stabilizes, and we’re looking forward to your input and suggestions.

Docker CE for GCP is built on the same principle as Docker CE for AWS and Docker CE for Azure and provides a Docker setup on GCP that is:

  • Quick and easy to install in a few minutes
  • Released in sync with other Docker releases and always available with the latest Docker version
  • Simple to upgrade from one Docker CE version to the next
  • Configured securely and deployed on minimal, locked-down Linux maintained by Docker
  • Self-healing and capable of automatically recovering from infrastructure failures

Docker CE for GCP is the first Docker edition to launch using the InfraKit project. InfraKit helps us configure cloud infrastructure quickly, design upgrade processes and self-healing tailored to Docker’s built-in orchestration, and smooth out differences between cloud providers to give Docker users a consistent container platform that maximizes portability.
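As a quick illustration of that consistency, a deployed swarm can be inspected with nothing more than the standard tooling. This is a hedged sketch; the instance name and zone below are placeholders, not values from the beta.

# SSH to one of the manager nodes created by the deployment
$ gcloud compute ssh docker-swarm-manager-1 --zone us-central1-a

# Inspect the swarm that Docker CE for GCP bootstrapped
$ docker info
$ docker node ls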

Installing Docker CE for GCP

Continue reading

Getting Started: Tower Installer

Welcome to the first in our series of blog posts for Getting Started with Ansible Tower. This series covers basic installation and functions of Tower and an overview of how to use Tower to implement IT automation.

To get started with Tower, you must first learn to install and stand up a single host. Future posts will cover other types of configurations, such as a redundant installation with an external database. For this post, we’ll be highlighting RHEL 7 and Ubuntu LTS. 

Install Tower in 4 Simple Steps:

Run these steps as root (su -).

1. Download the latest Tower edition

If you haven’t already, visit the trial page to have a download link sent to you. If you’d like, our AMIs for AWS and our Vagrant image can be found there as well. If you have network restrictions, contact Ansible Sales and they can send you the bundled installer.

Note: We are currently working on a bundled installer for Ubuntu LTS; for now, use the standard installer on Ubuntu.

2. Unpack the file (tar xzvf towerlatest)

 
$ tar xzvf towerlatest
ansible-tower-setup-3.1.0/
ansible-tower-setup-3.1.0/group_vars/
ansible-tower-setup-3.1.0/group_vars/all
...

Or, for the bundled installer, run tar xzvf towerbundlelatest:

 
$ tar xzvf  Continue reading
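The excerpt is cut off here, but as a rough sketch (assuming the standalone installer and a single-host install with default settings), the remaining steps amount to setting the required passwords in the bundled inventory file and then running the setup script:

$ cd ansible-tower-setup-3.1.0
# Set the required passwords (e.g. admin_password and pg_password) in the inventory file
$ vi inventory
# Run the installer; under the hood it drives an Ansible playbook against this host
$ ./setup.sh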

InfraKit and Docker Swarm Mode: A Fault-Tolerant and Self-Healing Cluster

Back in October 2016, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the second in a two-part series that dives more deeply into the internals of InfraKit.

Introduction

In the first installment of this two-part series about the internals of InfraKit, we presented InfraKit’s design, architecture, and approach to high availability. We also discussed how it can be combined with other systems to give distributed computing clusters self-healing and self-managing properties. In this installment, we present an example of leveraging Docker Engine in Swarm Mode to achieve high availability for InfraKit, which in turn enhances the Docker Swarm cluster by making it self-healing.

Docker Swarm Mode and InfraKit

One of the key architectural features of Docker in Swarm Mode is the manager quorum powered by SwarmKit. The manager quorum stores information about the cluster, and consistency of that information is achieved via the Raft consensus algorithm, which is also at the heart of other systems such as etcd. This guide gives an overview of the architecture of Docker Swarm Mode and how the manager quorum maintains the state of the cluster.
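As a concrete illustration of that quorum using the stock Docker CLI (the addresses below are hypothetical), a three-manager swarm is formed like this:

# On the first node: enable swarm mode and print the manager join token
$ docker swarm init --advertise-addr 10.0.0.1
$ docker swarm join-token manager

# On two additional nodes: join as managers using the token printed above
$ docker swarm join --token <manager-token> 10.0.0.1:2377

# On any manager: confirm the three-member quorum
$ docker node ls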

One aspect of the cluster state Continue reading

The Linux Migration: Creating Presentations

Long-time readers of my site know that I’m a fan of Markdown, and I use it extensively. (This blog, in fact, is written entirely in Markdown and converted to HTML using Jekyll on GitHub Pages.) Since migrating to Linux as my primary desktop OS, I’ve also made the transition to doing almost all of my presentations in Markdown. Here are the details on how I’m using Markdown for creating presentations on Linux.

There are a number of tools involved in my workflow for creating Markdown-based presentations on Linux:

  • Sublime Text 3 (with the Markdown Extended and Monokai Extended packages) is used for editing the “source” files for a presentation. Three “source” files are involved: a Markdown file, an HTML file, and a Cascading Style Sheet (CSS) file.
  • Remarkjs takes the Markdown-formatted content and converts it into a dynamic HTML-based presentation, formatting it according to the styles defined in the CSS file. This gives tremendous flexibility in formatting the presentation. (Check it out on GitHub.)
  • I use a web browser to display the HTML output generated by Remarkjs (in my case, I’m using Firefox on my Fedora laptop); a quick way to preview the output locally is sketched just after this list.
  • To help with creating a PDF version of Continue reading
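Because the output is plain HTML, a throwaway local web server is enough for previewing. This is only a sketch; the directory and file names are hypothetical.

# Serve the presentation directory and open it in Firefox
$ cd ~/presentations/my-talk
$ python3 -m http.server 8000 &
$ firefox http://localhost:8000/presentation.html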

The Linux Migration: Other Users’ Stories, Part 2

This post is part of a series of posts sharing the stories of other users who have decided to migrate to Linux as their primary desktop OS. Each person’s migration (and their accompanying story) is unique; some people have embraced Linux only on their home computer; others are using it at work as well. I believe that sharing this information will help readers who may be considering a migration of their own, and who have questions about whether this is right for them and their particular needs.

For more information about other migrations, see part 1 or part 2 of the series.

This time around we’re sharing the story of Rynardt Spies.

Q: Why did you switch to Linux?

In short, I’ve always been at least a part-time Linux desktop user and a heavy RHEL server user. My main work machine is Windows. However, because of my work with AWS, Docker, etc., I find that being on a Linux machine with all the Linux tools at hand (especially OpenSSL and simple built-in tools like SSH) is invaluable when working in a Linux world. However, I’ve always used Linux Mint, or Ubuntu (basically Debian-derived distributions) for my desktop Continue reading

Swarm Mode with Fleet Management and Collaboration now in public beta, powered by Docker Cloud

With the introduction of swarm mode in Docker 1.12, we showed the world how simple it can be to provision a secure and fully-distributed Docker cluster on which to deploy highly available and scalable applications. The latest Docker 1.13 builds on and improves these capabilities with new features, such as secrets management.
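For instance, the secrets management added in Docker 1.13 is driven entirely from the standard CLI. A minimal sketch (the secret name, service name, and image are hypothetical):

# Create a secret from stdin
$ echo "s3cr3t" | docker secret create db_password -

# Grant a service access; the value appears in its containers at /run/secrets/db_password
$ docker service create --name api --secret db_password myorg/api:latest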

Continuing with the trend that simplicity is paramount to empowering individuals and teams to achieve their goals, today we are bringing swarm mode support to Docker Cloud, with a number of new cloud-enabled capabilities. All of this is in addition to the continuous integration (CI) features of Docker Cloud, including automatic builds, tests, security scans and the world’s largest hosted registry of public and private Docker image repositories.


Fleet Management using Docker ID

Keeping track of many swarms sprawled across multiple regions or cloud providers can be a challenge. And securely connecting to remote swarms with TLS means teams must also spend time configuring and maintaining a Public Key Infrastructure. By registering your new or existing swarms with Docker Cloud, teams can now easily manage a large number of swarms running anywhere, and only need their Docker ID to authenticate and securely access any of them.
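To make that PKI overhead concrete, this is roughly what pointing a client at a remote TLS-protected manager looks like without Docker Cloud; the host name and certificate path are hypothetical:

# Certificates must be generated, distributed and rotated for every swarm
$ export DOCKER_HOST=tcp://manager.example.com:2376
$ export DOCKER_TLS_VERIFY=1
$ export DOCKER_CERT_PATH=~/.docker/certs/prod-swarm
$ docker node ls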

Docker Continue reading

Technology Short Take #79

Welcome to Technology Short Take #79! There are lots of interesting links for you this time around.

Networking

  • I was sure I had mentioned Skydive before, but apparently not (a grep of all my blog posts found nothing), so let me rectify that first. Skydive is (in the project’s own words) an “open source real-time network topology and protocols analyzer.” The project’s GitHub repository is here, and documentation for Skydive is here.
  • OK, now that I’ve mentioned Skydive, I can talk about this article that provides an example of functional SDN testing with Terraform and Skydive. Terraform is used to turn up OpenStack infrastructure, and Skydive (via connections into Neutron and OpenContrail, in this example) is used to validate SDN functionality.
  • Tony Sangha took PowerNSX (a set of PowerShell cmdlets for interacting with NSX) and created a tool to help document the NSX Distributed Firewall configuration. This tool exports the DFW configuration and then converts it into Excel format, and is available on GitHub. (What’s that? You haven’t heard of PowerNSX before? See here.)

Servers/Hardware

Nothing this time around. Should I keep this section, or ditch it? Feel free to give me your feedback on Twitter.

Security

Introducing the Docker Certification Program for Infrastructure, Plugins and Containers

In conjunction with the introduction of Docker Enterprise Edition (EE), we are excited to announce the Docker Certification Program and availability of partner technologies through Docker Store. A vibrant ecosystem is a sign of a healthy platform, and by providing a program that aligns Docker’s commercial platform with the innovation coming from our partners, we are collectively expanding choice for customers investing in the Docker platform.

The Docker Certification Program is designed for both technology partners and enterprise customers to recognize Containers and Plugins that excel in quality, collaborative support and compliance. Docker Certification is aligned to the available Docker EE infrastructure and gives enterprises a trusted way to run more technology in containers with support from both Docker and the publisher. Customers can quickly identify Certified Containers and Plugins by their visible badges and be confident that they were built with best practices and tested to operate smoothly on Docker EE.

Save Your Seat: Webinar – Docker Certified and Store on March 21st.

There are three categories of Docker Certified technology available:

  • Certified Infrastructure: Includes operating systems and cloud providers that the Docker platform is integrated with, optimized for, and tested against for certification. Through this, Docker provides a great user Continue reading

Announcing Docker Enterprise Edition

Today we are announcing Docker Enterprise Edition (EE), a new version of the Docker platform optimized for business-critical deployments. Docker EE is supported by Docker Inc., is available on certified operating systems and cloud providers and runs certified Containers and Plugins from Docker Store. Docker EE is available in three tiers: Basic comes with the Docker platform, support and certification, and Standard and Advanced tiers add advanced container management (Docker Datacenter) and Docker Security Scanning.

For consistency, we are also renaming the free Docker products to Docker Community Edition (CE) and adopting a new lifecycle and time-based versioning scheme for both Docker EE and CE. Today’s Docker CE and EE 17.03 release is the first to use the new scheme.

Docker CE and EE are released quarterly, and CE also has a monthly “Edge” option. Each Docker EE release is supported and maintained for one year and receives security and critical bugfixes during that period. We are also improving Docker CE maintainability by maintaining each quarterly CE release for 4 months. That gives Docker CE users a 1-month window to update from one version to the next.

Both Docker CE and EE are available on a wide range of Continue reading

Customizing Docker Engine on CentOS Atomic Host

I’ve been spending some time recently with CentOS Atomic Host, the container-optimized version of CentOS (part of Project Atomic). By default, the Docker Engine on CentOS Atomic Host listens only on a local UNIX socket, and is not accessible over the network. While CentOS has its own particular way of configuring the Docker Engine, I wanted to see if I could—in a very “systemd-like” fashion—make Docker Engine on CentOS listen on a network socket as well as a local UNIX socket. So, I set out with an instance of CentOS Atomic Host and the Docker systemd docs to see what I could do.

The default configuration of Docker Engine on CentOS Atomic Host uses a systemd unit file that references an external environment file; specifically, it references values set in /etc/sysconfig/docker, as you can see from this snippet of the docker.service unit file:

ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY

The $OPTIONS variable, along with the other variables at the end of the ExecStart line, is defined in /etc/sysconfig/docker. That value, by default, looks like this:

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
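A minimal sketch of the kind of change the post works toward: append an -H flag per socket to OPTIONS and restart the service. This is one straightforward approach rather than the more “systemd-like” drop-in the post explores, and the TCP address and port here are illustrative assumptions, not a recommendation (an unauthenticated TCP socket should be protected by TLS or a firewall).

# /etc/sysconfig/docker (excerpt)
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375'

# Restart so the unit re-reads the environment file, then test the TCP socket
$ systemctl restart docker
$ docker -H tcp://127.0.0.1:2375 info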

I Continue reading

Introducing Ansible Tower 3.1

We're excited to announce the release of Ansible Tower 3.1. Our engineering team has been hard at work on enhancing Ansible Tower to allow teams to harness the power of automation across servers, applications, environments, and networks. With Ansible Tower 3.1, we've brought together a variety of enhancements that allow your teams to automate more processes, more frequently, and to more easily analyze the results of your automation across the enterprise.

Model complex processes with multi-Playbook workflows

Ansible brought simple, agentless automation to IT. But some IT processes don't lend themselves to being automated in a single Playbook - if you're provisioning environments, you may want to handle basic provisioning, default configuration, and application deployment differently. And once you've automated those tasks, you want to reuse those tasks in different ways, or in different environments. Plus, what if a deployment goes wrong? You may need to back your environment out to the last known good state.

To solve these issues, we developed Tower workflows. With Tower workflows, you can chain any number of Playbooks together into a workflow, with each workflow step potentially using a different Playbook, inventory, set of credentials, and more. Easily launch one or more Continue reading

The Linux Migration: Other Users’ Stories, Part 1

Shortly after I announced my intention to migrate to Linux as my primary desktop OS, a number of other folks contacted me and said they had made the same choice or they had been encouraged by my decision to also try it themselves. It seems that there is a fair amount of pent-up interest—at least in the IT community—to embrace Linux as a primary desktop OS. Given the level of interest, I thought it might be helpful for readers to hear from others who are also switching to Linux as their primary desktop OS, and so this post kicks off a series of posts where I’ll share other users’ stories about their Linux migration.

In this first post of the series, you’ll get a chance to hear from Roddy Strachan. I’ve structured the information in a “question-and-answer” format to make it a bit easier to follow.

Q: Why did you switch to Linux?

I was a heavy Windows user due to corporate requirements. It was just easy to run Windows. I never ran the standard corporate build, but instead ran my own managed version of Windows 10; this worked well. I switched because I wanted to experiment with Linux Continue reading

containerd summit recap: slides, videos and meeting notes

Last week, we hosted a containerd summit for contributors and maintainers. Containerd is a core container runtime with an emphasis on simplicity, robustness and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, snapshot storage for container filesystems and a few other things to make the management of containers robust.

We started off by getting everyone up to speed on the project, roadmap and goals before diving into specific issues and the design of containerd. We had a couple of breakout sessions where we discussed blocking issues and feature requests from various members of the community. You can see a summary of the breakout sessions in last week’s development report in the containerd repository and the various presentations below:

Deep Dive into containerd by Michael Crosby, Stephen Day, Derek McGowan and Mickael Laventure (Docker)

Driving containerd operations with gRPC by Phil Estes (IBM)

containerd and CRI by Tim Hockin (Google)

 

We ended the day with discussions around governance and the extension model. Watch this video recording to learn more about why and how core contributors are thinking about integrating containerd with other Continue reading

Build Your DockerCon Agenda!

It’s that time of the year again…the DockerCon Agenda Builder is live!

Whether you are a Docker beginner or have been dabbling in containers for a while now, we’re confident that DockerCon 2017 will have the right content for you. With 7 tracks and more than 60 sessions presented by Docker Engineering, Docker Captains, community members and corporate heavyweights such as Intuit, MetLife, PayPal, Activision and Netflix, DockerCon 2017 will cover a wide range of container tech use cases and topics.

Build your agenda

We encourage you to review the catalogue of DockerCon sessions and build your agenda for the week. You’ll find a new agenda builder that allows you to apply filters based on your areas of interest, experience, job role and more!

Check Out All The Sessions

 

DockerCon Agenda

One of our favorite features of the Agenda Builder is the recommendations generated based on your profile and the sessions you’ve marked as interesting. To unlock the recommendations feature you’ll need to sign up for a DockerCon account.

DockerCon Schedule

Within this tool you’ll be able to adjust your agenda, rate sessions and add notes to reference after the conference. All of your selections will be available in the DockerCon mobile app once Continue reading

Networking Features Coming Soon in Ansible 2.3

It’s been a year since the first networking modules were developed and included in Ansible 2.0. Since then, there have been two additional Ansible releases and more than 175 modules added, with 24 networking vendor platforms enabled. With the fantastic efforts from the community and our networking partners, Ansible has been able to add more and more features for networking use cases. In the forthcoming Ansible 2.3 release, the focus on networking enablement now turns to increasing performance and adding connection methods that provide compatibility and flexibility.

Looking ahead to Ansible 2.3, the most notable additions planned are:

  • Persistent connections framework
  • The network_cli connection plugin
  • The netconf connection plugin

Why are these features important?

Since Ansible 2.0, the primary focus for networking enablement has been to help increase the number of third-party devices that have modules included by default. As this list grows (we expect to have even more platforms and modules in Ansible 2.3), Ansible and Ansible Tower continue to be trusted components of critical networking production deployments.

The development of these plugins further demonstrates the value and investment Ansible and the community have made into networking infrastructure enablement. As we approach the Ansible Continue reading

Adding Metadata to the Arista vEOS Vagrant Box

This post addresses a (mostly) cosmetic issue with the current way that Arista distributes its Vagrant box for vEOS. I say “mostly cosmetic” because while the Vagrant box for vEOS is perfectly functional if you use it via Arista’s instructions, adding metadata as I explain here provides a small bit of additional flexibility should you need multiple versions of the vEOS box on your system.

If you follow Arista’s instructions, then you’ll end up with something like this when you run vagrant box list:

arista-veos-4.18.0    (virtualbox, 0)
bento/ubuntu-16.04    (virtualbox, 2.3.1)
centos/6              (virtualbox, 1611.01)
centos/7              (virtualbox, 1611.01)
centos/atomic-host    (virtualbox, 7.20170131)
coreos-stable         (virtualbox, 1235.9.0)
debian/jessie64       (virtualbox, 8.7.0)

Note that the version of the vEOS box is embedded in the name. Now, you could leave the version out of the name, but because there’s no metadata—which is why it shows (virtualbox, 0) on that line—you wouldn’t have any way of knowing which version you had. Further, what happens when you want to have multiple versions of the vEOS box?

Fortunately, there’s an easy fix (inspired by the way CoreOS distributes their Vagrant box). Just create a file with the Continue reading
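The excerpt is truncated here, but the general shape of a Vagrant box catalog metadata file is well established. Here is a hedged sketch (the box name, version number, and local path are hypothetical) that you would then register with vagrant box add:

$ cat veos.json
{
  "name": "arista/veos",
  "versions": [
    {
      "version": "4.18.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///path/to/vEOS-lab-4.18.0-virtualbox.box"
        }
      ]
    }
  ]
}

$ vagrant box add veos.json
$ vagrant box list
arista/veos           (virtualbox, 4.18.0)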

How about family spring break in Austin?

Are you looking for Spring Break plans with the family? Look no further than DockerCon 2017! Located in sunny Austin, Texas, April 17-20, DockerCon provides learning and entertainment for all members of the family.

Childcare

As part of our efforts to make DockerCon’s doors open to all, we are excited to announce that we will be partnering again this year with Big Time Kid to provide childcare at DockerCon! Gone are the days of “Mom / Dad has to stay home with the kids…” – you can now bring the whole family to DockerCon!

Childcare will be offered:

  • Monday, April 17  1:00pm – 7:30pm
  • Tuesday, April 18  8:00am – 6:30pm
  • Wednesday, April 19 8:00am – 5:30pm
  • Thursday, April 20 8:00am – 12:00pm

Following the success of last year, we have chosen Big Time Kid Care as our childcare provider. All caregivers and staff are certified, fully insured and experienced in child education and care, and have passed police background checks. Big Time Kid Care will be well equipped and excited to take good care of your little ones in a kid-friendly playroom close to the DockerCon activities at the Austin Convention Center. Games, activities, breakfast and lunch will be provided.

Spousetivities

Continue reading

Ansible Is 5 Years Old Today!

Five years ago today, Michael DeHaan created this commit:


commit f31421576b00f0b167cdbe61217c31c21a41ac02
Author: Michael DeHaan
Date:   Thu Feb 23 14:17:24 2012 -0500

    Genesis.

When you create something you intend to release as open-source software, you never know if it will be something others are actually interested in using (much less contributing to).

Michael invited me to join Ansible when it was just over a year old as a project, and I have seen it grow from an already wildly popular project into something used by people around the world. What makes Ansible the strongest, though, is by far its community of users and contributors. So join us in wishing Happy Birthday by sharing how you innovate with Ansible!

Tweet #AnsibleBirthday

