This is a liveblog of the OpenStack Summit session titled “AT&T’s Container Strategy and OpenStack’s Role in it”. The speakers are Kandan Kathirvel and Amit Tank, both from AT&T. I really wanted to sit in on Martin Casado’s presentation next door (happening at the same time), but as much as I love watching/hearing Martin speak, I felt like this presentation might expose me to some new information.
Kathirvel kicks off the session with some quick introductions, then sets the stage for the session. Naturally, Kathirvel starts out by describing AT&T’s cloud deployment. (I say “naturally” because it seems that every presentation starts out by describing how great and how awesome the presenter’s company’s OpenStack cloud is.)
Following the discussion of AT&T’s cloud, Kathirvel launches into a discussion of container trends and demands. He indicates that he believes container usage (or demand?) for enterprise IT applications is huge (and will continue to be large), but doesn’t believe that will hold true for virtual network functions (VNFs) in telco clouds.
As for how containers and OpenStack may be coming together, Kathirvel describes three different use cases:
The first use case has OpenStack managing the infrastructure, with Kubernetes (or another container Continue reading
This is a liveblog of the day 2 keynote of the OpenStack Summit in Boston, MA. (I wasn’t able to liveblog yesterday’s keynote due to a schedule conflict.) It looks as if today’s keynote will have an impressive collection of speakers from a variety of companies, and—judging from the number of laptops on the stage—should feature a number of demos (hopefully all live).
The keynote starts with the typical high-energy video that’s intended to “pump up” the audience, and Mark Collier (COO, OpenStack Foundation) takes the stage promptly at 9am. Collier reiterates a few statistics from yesterday’s keynote (attendees from 63 countries, for example). Collier shares that he believes all the major challenges humanity is trying to solve count on computing. “All science is computer science,” according to Collier, which is great but also represents a huge responsibility. He follows this by pointing out what he believes to be the fundamental role of open source in machine learning and artificial intelligence (ML/AI). Collier also mentions a collection of “composable” open source projects that are leading the way toward a “cloud-native” future. All of these projects are designed in a way to be combined together in a “mix-and-match” Continue reading
We’re happy to announce that all the breakout session video recordings from DockerCon 2017 are now available online! Special shoutout to all the amazing speakers for making their sessions informative and insightful. All the videos are published on the Docker Youtube channel, and the presentation slides are available from the Docker Slideshare account.
Here are the links to the playlists of each track:
Use case talks are about practical applications of Docker and are heavy on technical detail and implementation advice. Topics covered during this track included high availability and parallel usage in the gaming industry, cloud scale for e-commerce giants, and security compliance and legacy system protocols in financial and health care institutions.
Black Belt talks were deeply technical sessions presented by Docker experts. These sessions are code and demo heavy and light on the slides. From container internals to advanced container orchestration, security and networking, this track is a delight for the container experts in the room.
This track focuses on the technical details associated with the different components of the Docker platform: advanced orchestration, networking, security, storage, management and plug-ins. The Docker engineering leads walk you through the best way to Continue reading
Although the majority of the features added to Ansible 2.3 were networking-related, that’s not all, folks!
There were several significant changes around module management, the Core engine, and Microsoft Windows support we’d love to show off.
For full details on the release, check out the changelog here.
In prior releases, Ansible was organized in two separate module repositories: ansible-modules-core and ansible-modules-extras.
The intent was to differentiate the repositories in terms of code quality, feature enablement, and supportability of the modules. We believe we’ve developed a better process.
At the launch of 2.3, Ansible has moved to a metadata-based system for modules. Ansible modules now include an ANSIBLE_METADATA block which specifies a support category: Core, Curated or Community.
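As a rough illustration, the metadata block sits near the top of the module source. The keys and values below are a sketch of the 2.3-era format, so treat them as an approximation and check the developer docs for your release:

```python
# Illustrative sketch of a module's metadata block (Ansible 2.3-era format;
# key names are an approximation -- verify against the developer guide)
ANSIBLE_METADATA = {
    'status': ['stableinterface'],  # e.g. preview, stableinterface, deprecated
    'supported_by': 'core',         # core, curated, or community
    'version': '1.0'                # metadata format version
}
```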
In the new system, modules will be following a specific process per category.
Core modules
Modules that the Ansible engineering team directly maintains and that will ship with Ansible. These modules also receive slightly higher priority for pull requests. Any issues that are opened Continue reading

The following blog contains answers to all questions asked during the Automating F5 BIG-IP using Ansible webinar.
Interested in exploring other Ansible webinars? Register for one of our upcoming webinars or watch an on-demand webinar.
Q: Can you pass the BIG-IP username and password by variable? Also, is there a way to mask the password in the Playbooks or manually feed the credentials as the Playbooks run? How can we ensure security here given that administrative passwords are clear text in the Playbooks themselves?
Yes, the BIG-IP username and password can be passed as variables by referencing them from the inventory file, or they can even be provided at runtime on the cli -- although this would show them in the process list if you ran 'ps'. You can also specify them in a vars_prompt, which prevents them from being shown in 'ps'. The downside here is that this limits the amount of automation you can provide, because running the Playbook would require that they either be typed in or specified with '-e' ('-e' auto-fills vars_prompts that match). The recommended way is to get the vars from a secure location. Ansible provides Vault, Continue reading
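For illustration, here is a minimal sketch of the vars_prompt approach. The module and parameter names are indicative of the F5 modules of this era, not copied from the webinar, and the host group is illustrative:

```yaml
---
# Prompt for BIG-IP credentials at runtime so they appear neither in the
# Playbook nor in 'ps' output (module/parameter names are illustrative)
- name: Gather facts from a BIG-IP
  hosts: bigip
  connection: local
  vars_prompt:
    - name: bigip_user
      prompt: "BIG-IP username"
      private: no
    - name: bigip_pass
      prompt: "BIG-IP password"
      private: yes
  tasks:
    - name: Collect system information
      bigip_facts:
        server: "{{ inventory_hostname }}"
        user: "{{ bigip_user }}"
        password: "{{ bigip_pass }}"
        include: system_info
```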
It’s no secret that I’m a big fan of using Markdown (specifically, MultiMarkdown) for the vast majority of all the text-based content that I create. Over the last few years, I’ve used various tools and created scripts to help “reduce the friction” involved with outputting Markdown source files into a variety of destination formats (HTML, RTF, or DOCX, for example). Recently, thanks to Cody Bunch, I was pointed toward the use of a Makefile to assist in this area. After a short period of experimentation, I’m finding that I really like this workflow, and I wanted to share some details here with my readers.
First, if you’re not familiar with make and its use of a Makefile, check out this introduction. There’s a ton of power and flexibility here, of which I’ve only scratched the surface so far. The basic gist behind a Makefile is that it provides a set of instructions to the make command. Each set of instructions is tied to a target, which has one or more dependencies. In the “traditional” use cases for make, this is to allow programmers to define how a set of files should be compiled as well Continue reading
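To give a flavor of the kind of Makefile described above, here is a minimal sketch with illustrative file names, assuming the multimarkdown CLI is installed (recipe lines must be indented with tabs):

```make
# Build an HTML file from every MultiMarkdown source in the directory
SOURCES := $(wildcard *.md)
TARGETS := $(SOURCES:.md=.html)

all: $(TARGETS)

# Each .html target depends on its .md source and is rebuilt only
# when that source changes
%.html: %.md
	multimarkdown -o $@ $<

clean:
	rm -f $(TARGETS)

.PHONY: all clean
```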

I remember the first AnsibleFest I attended – it was San Francisco 2014. I had been with Ansible for a week and had flown out to meet some of my new colleagues.
As a user of Ansible for the past year, I'd discovered how cheery and helpful the community was. "Newbies" dropping by the IRC channel on Freenode were always helped out, no matter how simple the question. The community spirit is something many people comment on when first using Ansible.
I remember meeting core engineer Brian Coca for the first time at that AnsibleFest too, also a recent joiner to the company. Brian was asked that morning if he'd give a talk, a request he calmly accepted as if he'd been asked to make a cup of tea. Top tip – never miss a talk given by Brian; you will learn something new!
Later, during the happy hour, I talked with lots of attendees, many just wanting to tell us how much they'd enjoyed the day. It was great to see the open source community feel extending to our full day conferences.
Two and a half years later, I still see that community spirit day in, day out. Only now it's Continue reading
Recently I was in need of setting up some Windows clients to connect to my OpenVPN server. This server, running on Linux, uses a specific MTU value (let’s say 1400) to ensure maximum compatibility with different clients over different links.
In addition to the OpenVPN process itself, the kernel must also know about the correct MTU so packet sizes can be adjusted before they reach the tun/tap interface.
This is very easy to do in Linux. In fact, you most likely do not need to do anything at all: OpenVPN will adjust the MTU of the tun/tap interface while creating it. You can check the interface's effective MTU by using the ip link show or ifconfig commands.
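For example (the interface name tun0 is illustrative; yours may differ):

```sh
# Show the effective MTU of the tunnel interface
ip link show tun0
# or, with the older tooling
ifconfig tun0
```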
The same, however, cannot be said about Windows. In a typical scenario, OpenVPN is not even directly responsible for creating the said interface. Instead, it requires the interface to already be in place (which is achieved by calling tapinstall.exe during the initial setup), and then it connects to it.
So even though you have specified your MTU settings in the OpenVPN profile, at least at the time of writing, this does not reflect the MTU of the interface that the Windows kernel would Continue reading
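One common workaround (an assumption on my part, not necessarily what the original post goes on to describe) is to set the TAP adapter's MTU manually from an elevated command prompt with netsh; the adapter name below is illustrative:

```bat
rem List interfaces and their current MTUs to find the TAP adapter's name
netsh interface ipv4 show subinterfaces
rem Set the TAP adapter's MTU to match the server (1400 in this example)
netsh interface ipv4 set subinterface "Ethernet 2" mtu=1400 store=persistent
```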
Recent Docker releases (17.04 CE Edge onwards) bring significant performance improvements to bind-mounted directories on macOS. (Docker users on the stable channel will see the improvements in the forthcoming 17.06 release.) Commands for bind-mounting directories have new options to selectively enable caching.
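As a quick sketch of the new syntax (the path and image are illustrative), a bind mount can request the relaxed "cached" consistency like this:

```sh
# Request the 'cached' consistency mode for a bind-mounted directory
docker run --rm -v /Users/me/project:/project:cached alpine ls /project
```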
Containers that perform large numbers of read operations in mounted directories are the main beneficiaries. Here’s an illustration of the improvements in a few tools and applications in common use among Docker for Mac users: go list is 2.5× faster, symfony is 2.7× faster, and rake is 3.5× faster, as illustrated by the following graphs:

- go list ./... in the moby/moby repository
- curl of the main page of the Symfony demo app
- rake -T in @hirowatari’s benchmark
For more details about how and when to enable caching, and what’s going on under the hood, read on.
A defining characteristic of containers is isolation: by default, many parts of the execution environment of a container are isolated both from other containers and from the host system. In the filesystem, isolation shows up as layering: the filesystem Continue reading
Welcome to Technology Short Take #82! This issue is a bit behind schedule; I’ve been pretty heads-down on some projects. That work will come to fruition in a couple weeks, so I should be able to come up for some air soon. In the meantime, here are a few links and articles for your reading pleasure.
ovs-dpctl command to “program” the Open vSwitch (OVS) kernel module. It’s a bit geeky, but does provide some insight into Continue reading

MetLife, the global provider of insurance, annuities, and employee benefit programs, will be celebrating its 150th birthday next year. Survival and success in their space depend on being agile and able to respond to changing market requirements. During the Day 2 General Session at DockerCon 2017, MetLife shared how they’re inspiring new innovation in their organization with Docker Enterprise Edition (EE).
MetLife offers auto, home, dental, life, disability, vision, and health insurance to over 100 million customers across 50 countries. Their business relies on information – about policyholders, risk assessments, financial and market data, etc. Aaron Ades, AVP of Solutions Engineering at MetLife, notes that they’ve been in the information management business for 150 years and have accumulated over 400 systems of record – some apps are over 30 years old.

The challenge for MetLife is that they still have a lot of legacy technology that they must work with. Aaron shared that there is code running today that was first written in 1982, yet they need to deliver a modern experience on top of those legacy systems.
To hear more about how MetLife is staying ahead of their competition using Docker, Continue reading
Docker has celebrated a number of important milestones lately. March 20th was the fourth anniversary of the launch of the Docker project at PyCon in 2013. April 10th was the fourth anniversary of the day that I joined Solomon and a team of 14 other believers to help build this remarkable company. And, on April 18th, we brought the community, customers, and partners together in Austin for the fourth US-based DockerCon.

March 20th, 2013

Docker Team in 2013
DockerCon was a great opportunity to reflect on the progress we’ve seen in the past four years. Docker the company has grown from 15 to over 330 talented individuals. The number of contributors to Docker has grown from 10 to over 3300. Docker is used by millions of developers and is running on millions of servers. There are now over 900k dockerized apps that have been downloaded over 13 billion times. Docker is being used to cure diseases, to keep planes in the air, to keep soldiers safe from landmines, to power the world’s largest financial networks and institutions, to process billions in transactions, to help create new companies, and to help revitalize existing companies. Docker has rapidly scaled revenues, building a sustainable Continue reading
Weren’t able to attend DockerCon 2017 or looking for a refresher? Check out the recording and slides from the DockerCon 2017 Online Meetup highlights recap of all the announcements and highlights from DockerCon by Patrick Chanezon and Betty Junod.
The videos and slides from general session day 1 and day 2, as well as the top rated sessions, are already available. The rest of the DockerCon slides and videos will soon be published on our slideshare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.
The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. Learn more here.
LinuxKit is a toolkit for building secure, portable and lean operating systems for containers. Read more about LinuxKit.
The Modernize Traditional Applications (MTA) Program aims to help enterprises make their existing legacy apps more secure, more efficient and portable to hybrid cloud infrastructure. Read more about the Modernize Traditional Apps Program.

DockerCon 2017 was an opportunity to hear from customers across multiple industries and segments on how they are leveraging Docker technology to accelerate their business. In the Day 2 keynote, and also in a breakout session that afternoon, Visa shared how Docker Enterprise Edition is empowering them on their mission to make global economies safer by digitizing currency and making electronic payments available to everyone, everywhere.
Visa is the world’s largest retail electronic payment network, handling 130 billion transactions a year and processing $5.8 trillion annually. Swamy Kocherlakota, Global Head of Infrastructure and Operations, shared that Visa got here by expanding their global footprint, which has put pressure on his organization, whose headcount has remained mostly flat during that time. Since going into production with their Docker Containers-as-a-Service architecture 6 months ago, Mr. Kocherlakota has seen a 10x increase in scalability, ensuring that his organization will be able to support their overall mission and growth objectives well into the future.
In aligning his organization to the company mission, Swamy decided to focus on two primary metrics: Speed and Efficiency.
One of my favorite technology catch phrases is “all technology fails”, but when thinking about the network, that thought becomes a very scary one. Yes, all technology does fail, but you will always do your best to ensure the network isn't one of those failures. The concept of self-healing networks (the desire to detect an issue and fix it as soon as possible) is not a new one, as network monitoring is always at the front of any network engineer's mind. We are just fortunate in this day and age to be able to take advantage of newer tools that provide better solutions. Ansible to the rescue!
If you are attending the Red Hat Summit, please make sure not to miss the Discovery Zone session entitled “Self-Healing Networks with Ansible” on Thursday, May 4th at 10:15AM.
In this presentation we will cover topics such as:
At the end of this session, Continue reading
After the general session videos from DockerCon Day 1 and Day 2 yesterday, we’re happy to share with you the video recordings of the top rated sessions by DockerCon attendees. All the slides will soon be published on our slideshare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.

If you’ve been following the Full Stack Journey podcast, you know that the podcast has been silent for a few months. Some of that was due to some adverse situations in life (it happens to all of us from time to time), but some of it was due to the coordination of a major transition in the podcast. And that’s the big news I’m here to share—read on for the full details!
If you’ve been in the IT industry for any reasonable length of time, especially in the networking space, you’ve probably heard of the Packet Pushers Podcast. It’s a hugely popular podcast created by Greg Ferro and Ethan Banks. In recent years, Packet Pushers has expanded from the “main show” to include other shows, including the Datanauts podcast (led by Chris Wahl and Ethan Banks). They’ve also been looking to expand their stable of podcasts to include additional relevant content.
This brings me to the big news: the Full Stack Journey podcast is joining the Packet Pushers network of podcasts! That’s right—the Full Stack Journey will be part of Packet Pushers’ growing network of podcasts. In talking with Greg and Ethan and the rest of the Packet Pushers team, Continue reading
In our previous Getting Started blog post, we discussed how to install Ansible Tower in your environment.
Now we’ll discuss how you can equip your Tower host with users and credentials.
To begin, let’s cover the essentials: setting up your user base and creating credentials for appropriate delegation of tasks.
Building your user base will be the first thing you’ll need to do to get started with Tower. The user base can be broken into three easily-defined parts:
1. User: Someone who has access to Tower with associated permissions and credentials.
2. Organization: The top level of the user base - a logical collection of users, teams, projects and inventories.
3. Team: A subdivision of an organization - provides the means to set up and implement role-based access schemes as well as to delegate work across organizations.
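As a purely hypothetical sketch of how these three objects relate, here is what creating them might look like with the separately-installed tower-cli utility. The resource and flag names here are assumptions on my part; check tower-cli --help against your version:

```sh
# Create an organization, a team within it, and a user (names illustrative;
# flag names are assumptions -- verify with 'tower-cli --help')
tower-cli organization create --name "Engineering"
tower-cli team create --name "Network Ops" --organization "Engineering"
tower-cli user create --username jdoe --password changeme --email jdoe@example.com
```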
There are three types of users that can be defined within Tower:
Following the general session highlights from DockerCon Day 1, we’re happy to share with you the video recording from general session day 2. All the slides will soon be published on our slideshare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.

Here’s what we covered during the day 2 general session:
Ben started off his DockerCon Day 2 keynote with key facts and figures around Docker commercial adoption. To illustrate his points, Ben invited on stage Swamy Kocherlakota, Global Head of Infrastructure and Operations at Visa, to talk about their journey adopting Docker Enterprise Edition to run their critical applications at scale in a very diverse environment.
During the day 2 keynote, Lily and Vivek reprise their 2016 roles as dedicated burners, finally returning from Burning Man to get back to their jobs of enterprise dev and ops. Ben returns as the clueless business guy and decides to add value by hiring a contractor, who Continue reading
What an incredible DockerCon 2017 we had last week. A big thank you to all of the 150+ confirmed speakers, 100+ sponsors and over 5,500 attendees for contributing to the success of these amazing 3 days in Austin. You’ll find below the videos and slides from general session day 1. All the slides will soon be published on our slideshare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.

Here’s what we covered during the day 1 general session:
Solomon’s keynote started by introducing new Docker features to improve the development workflows of Docker users: multi-stage builds and desktop-to-cloud integration. With multi-stage builds you can now easily separate your build-time and runtime container images, allowing development teams to ship minimal and efficient images. It’s time to say goodbye to those custom and non-portable build scripts! With desktop-to-cloud you can easily connect to a remote swarm cluster using your Docker ID for authentication, without having to worry Continue reading
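To make the multi-stage idea concrete, here is a minimal sketch of what such a Dockerfile looks like (a Go app chosen purely for illustration; this is not the keynote demo):

```dockerfile
# Stage 1: build in a full SDK image
FROM golang:1.8 AS builder
WORKDIR /go/src/app
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy only the binary into a small runtime image,
# leaving the build toolchain behind
FROM alpine:3.5
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```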