I’ve been experimenting with Microsoft Azure recently, and I thought it might be useful to share a quick post on using some of my favorite tools with Azure. I find it useful to leverage existing tools whenever I can, so as I’ve explored Azure I’ve been relying on familiar tools like Docker Machine and Vagrant.
The information here isn’t revolutionary or unique, but hopefully it will still be useful to others, even if only as a “quick reference”-type of post.
To launch an instance on Azure and provision it with Docker using docker-machine:
docker-machine create -d azure \
--azure-subscription-id $(az account show --query "id" -o tsv) \
--azure-ssh-user azureuser \
--azure-size "Standard_B1ms" azure-test
The first time you run this you’ll probably need to grant Docker Machine access to your Azure subscription (you’ll be prompted to log in via a browser and allow access). This creates a service principal that is visible via az ad sp list. Note that you may be prompted to authenticate again on future runs, although Docker Machine will re-use the existing service principal once it has been created.
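Once the machine is up, pointing the local Docker client at it is a quick sanity check. This is a minimal sketch; the machine name azure-test matches the create command above:
docker-machine ls
eval "$(docker-machine env azure-test)"
docker info
After the eval, any docker command you run talks to the Docker Engine on the Azure VM instead of a local daemon.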
Today we are excited to announce the expansion of our partnership with the availability of Docker Enterprise Edition (EE), our container management platform, on the Cisco Global Price List (GPL) and the release of the latest Cisco Validated Design (CVD).
Now customers can purchase Docker EE directly from Cisco and their joint resellers to jumpstart their new year’s resolution for a more modern application architecture, reduce IT costs, and redirect savings to innovation projects. And with our latest CVD for Cisco UCS compute infrastructure with Contiv, a secure container networking fabric, we’ve provided a roadmap for getting started so customers and partners can achieve a faster, more reliable and predictable implementation of Docker EE.
For enterprises looking to use Docker’s container management platform but not sure where to start, we can help you take the first step. The Migrating Traditional Applications (MTA) Program, designed for IT operations teams and delivered with Docker and Cisco Advanced Services, helps enterprises modernize existing legacy .NET Windows or Java Linux applications in just five days, without modifying source code or re-architecting the application. The results have been incredible, with customers saving over 50% on infrastructure costs and Continue reading
I recently had a need to revisit the use of Cumulus VX (the Cumulus Networks virtual appliance running Cumulus Linux) in a Vagrant environment, and I wanted to be sure to test what I was doing on multiple virtualization platforms. Via Vagrant Cloud, Cumulus distributes VirtualBox and Libvirt versions of Cumulus VX, and there is a slightly older version that also provides a VMware-formatted box. Unfortunately, there’s a simple error in the VMware-formatted box that prevents it from working. Here’s the fix.
The latest version of Cumulus VX (as of this writing) is 3.5.0, and for this version both VirtualBox-formatted and Libvirt-formatted boxes are provided. For a VMware-formatted box, the latest version is 3.2.0, which you can install with this command:
vagrant box add CumulusCommunity/cumulus-vx --box-version 3.2.0
When this Vagrant box is installed using the above command, what actually happens is something like this (at a high level):
The *.box file for the specific box, platform, and version is downloaded. This .box file is nothing more than a TAR archive with specific files included (see here for more details; a quick sketch of inspecting the archive follows these steps).
The *.box file is expanded into the ~/.vagrant.d/boxes directory Continue reading
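If you’re curious what’s actually inside one of these boxes, you can unpack it by hand; the following is just an illustrative sketch (the .box file name and scratch directory are placeholders):
mkdir /tmp/box-contents
tar -xvf cumulus-vx-3.2.0-vmware.box -C /tmp/box-contents
ls /tmp/box-contents        # metadata.json, a Vagrantfile, and the VM disk/config files
Being able to see (and edit) these files is exactly what makes a fix like the one described in this post possible.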
One of the things that makes Docker really cool, particularly compared to using virtual machines, is how easy it is to move around Docker images. If you’ve already been using Docker, you’ve almost certainly pulled images from Docker Hub. Docker Hub is Docker’s cloud-based registry service and has tens of thousands of Docker images to choose from. If you’re developing your own software and creating your own Docker images though, you’ll want your own private Docker registry. This is particularly true if you have images with proprietary licenses, or if you have a complex continuous integration (CI) process for your build system.
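To make that concrete, working with a private registry looks almost identical to working with Docker Hub; the only difference is that image tags carry your registry’s hostname. A hedged example (dtr.example.com and the repository path are placeholders):
docker login dtr.example.com
docker tag myapp:1.0 dtr.example.com/engineering/myapp:1.0
docker push dtr.example.com/engineering/myapp:1.0
Anyone with access to the registry can then docker pull dtr.example.com/engineering/myapp:1.0 on their own host.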
Docker Enterprise Edition includes Docker Trusted Registry (DTR), a highly available registry with secure image management capabilities, built to run either in your own data center or on your own cloud-based infrastructure. In the next few weeks, we’ll go over how DTR is a critical component of delivering a secure, repeatable and consistent software supply chain. You can get started with it today through our free hosted demo or by downloading and installing the free 30-day trial. The steps to get started with your own installation are below.
Docker Trusted Registry runs on Continue reading
You heard about it at DockerCon Europe and now it is here: we are proud to announce that Docker for Mac with beta Kubernetes support is now publicly available as part of the Edge release channel. We hope you are as excited as we are!
With this release you can now run a single node Kubernetes cluster right on your Mac and use both kubectl commands and docker commands to control your containers.
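As a quick illustration, assuming Kubernetes has been enabled in the Docker for Mac preferences, the same containers are visible to both tools:
kubectl get nodes                  # shows the single-node cluster
kubectl run nginx --image=nginx    # schedule a workload through Kubernetes
docker ps                          # the resulting containers also show up in the Docker CLI
This works because the built-in Kubernetes cluster uses the same Docker Engine that the docker command talks to.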
First, a few things to keep in mind:
One of the major networking features in Red Hat Ansible Engine 2.4 was the addition of aggregate resources to the networking modules. The Ansible networking team recently talked about this at the Ask an Expert webinar in November.
Simply put, aggregate resources are a better way to iterate (or loop) without the need to execute each task one by one. That is, you can now “aggregate” a collection as a single task instead of a collection of discrete loops.
| Loop Method (with_items:) | Aggregate Method (aggregate:) |
|---|---|
| 503 steps | 4 steps |
Based on feedback from customers, partners and community members, this post provides more examples and more detail of this important new feature. The simplest way to showcase this is to compare the old way and the new way, and highlight the differences Continue reading
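As a rough sketch of the difference, here is what the two approaches might look like for a networking module; the module name, interface names, and parameters below are illustrative only, so check the documentation for the module you’re actually using:
# Loop method: the module runs once per item
- name: configure interfaces (loop)
  net_interface:
    name: "{{ item }}"
    description: configured by Ansible
  with_items:
    - GigabitEthernet0/1
    - GigabitEthernet0/2
# Aggregate method: the module runs once for the whole collection
- name: configure interfaces (aggregate)
  net_interface:
    aggregate:
      - { name: GigabitEthernet0/1 }
      - { name: GigabitEthernet0/2 }
    description: configured by Ansible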
Welcome to Technology Short Take 92, the first Technology Short Take of 2018. This one was supposed to be the last Tech Short Take of 2017, but I didn’t get it published in time (I decided to spend time with my family instead—some things are just more important). In any case, hopefully the delay of one additional week hasn’t caused any undue stress—let’s jump right in!
As the holiday season ends, many of us are making New Year’s resolutions for 2018. Now is a great time to think about the new skills or technologies you’d like to learn. So much can change each year as technology progresses and companies look to innovate or modernize their legacy applications or infrastructure. At the same time, the market for Docker jobs continues to grow as companies such as Visa, MetLife and Splunk adopt Docker Enterprise Edition (EE) in production. So how about learning Docker in 2018? Here are a few tips to help you along the way.
Play with Docker (PWD) is a Docker playground and training site that allows users to run Docker commands in a matter of seconds. It gives you the experience of having a free Linux virtual machine in your browser, where you can build and run Docker containers and even create clusters. Check out this video from DockerCon 2017 to learn more about this project. The training site is composed of a large set of Docker labs and quizzes, from beginner to advanced level, available for both developers and IT pros at training. Continue reading
As has become my custom for the past several years, I wanted to take a look at how well I fared on my 2017 project list. Normally I’d publish this before the end of 2017, but during this past holiday season I decided to more fully “unplug” and focus on the truly important things in life (like my family). So, here’s a look back at my 2017 projects and a report card on my progress (or lack thereof, in some cases).
For reference, here’s the list of projects I set out for myself in 2017:
So, how did I do with each of these projects?
Finish the network automation book: I’m happy to report that all the content for the network automation book I’ve been writing with Jason Edelman and Matt Oswalt is done, and the book is currently in production (and should be available to order from O’Reilly very soon). I had hoped to get the content done in time for the book to be available for order before the Continue reading
Splunk wants to make machine data accessible, usable and valuable to everyone. With over 14,000 customers in 110 countries, providing the best software for visualizing machine data involves hours and hours of testing against multiple supported platforms and various configurations. For Mike Dickey, Sr. Director in charge of engineering infrastructure at Splunk, the challenge was that 13 different engineering teams in California and Shanghai had contributed to test infrastructure sprawl, with hundreds of different projects and plans that were all being managed manually.
At DockerCon Europe, Mike and Harish Jayakumar, Docker Solutions Engineer, shared how Splunk leveraged Docker Enterprise Edition (Docker EE) to dramatically improve build and deployment times on their test infrastructure, converge on a unified Continuous Integration (CI) workflow, and how they’ve now grown to 600 bare-metal servers deploying tens of thousands of Docker containers per day.
You can watch the entire session here:
As Splunk has grown, so has their customers’ use of their software. Many Splunk customers now process petabytes of data, and that has forced Splunk to scale their testing to match. That means more infrastructure needs to be reserved in the shared test environment Continue reading
As we count down the final days of 2017, we would like to bring you the final installment of the top 5 blogs of 2017. On day 5, we take a look back at DockerCon EU, when we announced Kubernetes support in the Docker platform. This blog takes an in-depth look at the industry-leading container platform and the addition of Kubernetes.
The Docker platform is integrating support for Kubernetes so that Docker customers and developers have the option to use both Kubernetes and Swarm to orchestrate container workloads. Register for beta access and check out the detailed blog posts to learn how we’re bringing Kubernetes to:
Docker is a platform that sits between apps and infrastructure. By building apps on Docker, developers and IT operations get freedom and flexibility. That’s because Docker runs everywhere that enterprises deploy apps: on-prem (including on IBM mainframes, enterprise Linux and Windows) and in the cloud. Once an application is containerized, it’s easy to re-build, re-deploy and move around, or even run in hybrid setups that straddle on-prem and cloud infrastructure.
The Docker platform is composed of many Continue reading
We’ve rounded up the top five most popular Docker blogs of 2017. Coming in at number four is Spring Boot Development With Docker, part of a multi-part tutorial series.
The AtSea Shop is an example storefront application that can be deployed on different operating systems and can be customized to both your enterprise development and operational environments. In my last post, I discussed the architecture of the app. In this post, I will cover how to set up your development environment to debug the Java REST backend that runs in a container.
I used the Spring Boot framework to rapidly develop the REST backend that manages products, customers and orders tables used in the AtSea Shop. The application takes advantage of Spring Boot’s built-in application server, support for REST interfaces and ability to define multiple data sources. Because it was written in Java, it is agnostic to the base operating system and runs in either Windows or Linux containers. This allows developers to build against a heterogenous architecture.
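One way to debug the containerized backend from an IDE is to publish the JVM’s remote-debug port alongside the application port. This is a sketch under assumed names; the image name, ports, and use of JAVA_TOOL_OPTIONS are illustrative rather than the AtSea project’s actual configuration:
docker build -t atsea-backend .
docker run -d -p 8080:8080 -p 5005:5005 \
  -e JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005" \
  atsea-backend
With the container running, you can attach your IDE’s remote debugger to localhost:5005.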
The AtSea project uses multi-stage builds, a new Docker feature, which allows me to use multiple images to build a single Docker image that includes all the components needed for Continue reading
In case you missed it, this week we’re highlighting the top five most popular Docker blogs of 2017. Coming in third place is the announcement of LinuxKit, a toolkit for building secure, lean and portable Linux Subsystems.
LinuxKit includes the tooling to allow building custom Linux subsystems that only include exactly the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. All components can be substituted with ones that match specific needs. It is a kit, very much in the Docker philosophy of batteries included but swappable. LinuxKit is an open source project available at https://github.com/linuxkit/linuxkit.
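To give a flavor of the “batteries included but swappable” idea, a LinuxKit image is described declaratively in YAML, with every component referenced as a container image. The snippet below is only an illustrative sketch; the component names and tags are placeholders, not a working configuration:
kernel:
  image: linuxkit/kernel:4.9.x      # even the kernel is delivered as an image
init:
  - linuxkit/init:v0.2              # minimal init, replaceable
  - linuxkit/runc:v0.2
services:
  - name: sshd
    image: linuxkit/sshd:v0.2       # every system service is a swappable container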
To achieve our goals of a secure, lean and portable OS, we built it from containers, for containers. Security is a top-level objective and aligns with NIST, which states in their draft Application Container Security Guide: “Use container-specific OSes instead of general-purpose ones to reduce attack surfaces. When using a container-specific OS, attack surfaces are typically much smaller than they would be with a general-purpose OS, so there are fewer opportunities to attack and compromise a container-specific OS.”
The leanness directly helps with security by removing parts not Continue reading
We’ve rounded up the most-read Docker blogs of 2017. Topping our list at number two is Exciting new things for Docker with Windows Server 1709.
What a difference a year makes… last September, Microsoft and Docker launched Docker Enterprise Edition (EE), a Containers-as-a-Service platform for IT that manages and secures diverse applications across disparate infrastructures, for Windows Server 2016. Since then we’ve continued to work together and Windows Server 1709 contains several enhancements for Docker customers.
To experiment with the new Docker and Windows features, a preview build of Docker is required. Here’s how to install it on Windows Server 1709 (this will also work on Insider builds):
Install-Module DockerProvider
Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview
To run Docker Windows containers in production on any Windows Server version, please stick to Docker EE 17.06.
A key focus of Windows Server version 1709 is support for Linux containers on Windows. We’ve already blogged about how we’re supporting Linux containers on Windows with the LinuxKit project.
To try Linux Containers on Windows Server 1709, install the preview Docker package and enable the feature. The preview Docker EE package includes a full LinuxKit Continue reading
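With the preview package installed and the feature enabled, a quick smoke test looks something like this (the --platform flag usage and image choice are illustrative of the preview behavior, so treat this as a sketch rather than a reference):
docker pull --platform linux busybox
docker run --rm --platform linux busybox uname -a
If everything is working, the output reports a Linux kernel even though the host is Windows Server 1709.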
As 2017 comes to a close, we looked back at the top five blogs that were most popular with our readers. For those of you that have yet to set up your first Docker Windows container, we are kicking off the week with a blog that will help you get up and running on Windows containers.
Earlier this year, Microsoft announced the general availability of Windows Server 2016, and with it, Docker engine running containers natively on Windows. This blog post describes how to get setup to run Docker Windows Containers on Windows 10 or using a Windows Server 2016 VM. Check out the companion blog posts on the technical improvements that have made Docker containers on Windows possible and the post announcing the Docker Inc. and Microsoft partnership.
Before getting started, it’s important to understand that Windows containers run Windows executables compiled for the Windows Server kernel and userland (either windowsservercore or nanoserver). To build and run Windows containers, a Windows system with container support is required.
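As a simple smoke test once container support is enabled, you can run a throwaway Windows container; the image tag below is a placeholder, so pick a windowsservercore or nanoserver tag that matches your Windows build:
docker run --rm microsoft/nanoserver cmd /c echo "Hello from a Windows container"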
For developers, Windows 10 is a great place to run Docker Windows containers, and containerization support was added to the Windows 10 kernel with the Anniversary Continue reading
2017 was such a great year for the Ansible team at Red Hat. From launching Ansible Engine to open sourcing Ansible Tower, we’ve had a year to remember. And just in case you missed them, here are our 10 most viewed blog posts of the year to showcase all the fun we’ve had.
Did you know a large portion of Ansible’s functionality comes from the Ansible plugin system? These important pieces of code augment Ansible’s core functionality such as parsing and loading inventory and Playbooks, running Playbooks and reading the results. In this blog, we review each of these plugins and offer a high-level overview on how to write your own plugin to extend Ansible functionality. Read more.
In 2016, we added the first networking modules to Ansible, since then we’ve added hundreds of modules and many networking vendor platforms have been enabled. This year, our focus on networking enablement turned to increasing performance and adding connection methods that provide compatibility and flexibility. You were eager to learn all about it and made this our second most read blog of the year! Read more.
The past year has proven to be one of rapid customer growth and traction in the enterprise. The channel is a fundamental part of our achievements to date and we are grateful for all of the dedicated partners involved in taking container technology mainstream. We now have hundreds of the largest enterprises as customers and we look forward to driving even greater adoption in the coming year alongside our partners.
With 2017 coming to an end, here’s a quick look back at channel achievements from this past year:
The holidays are a time of joy, gratitude and reflection. As we look back on the year, we’re celebrating you, our amazing customers! You are the ones that make the Docker community special and inspire us to innovate. We appreciate the business and are grateful for the opportunity! With that we’d like to put the spotlight on the top 5 Docker Enterprise Edition (Docker EE) customer stories of 2017.
MetLife, the global provider of insurance, annuities, and employee benefit programs, will be celebrating its 150th birthday next year. To stay ahead of the competition, MetLife realizes it must be agile to more rapidly respond to changing market requirements. During the Day 2 General Session at DockerCon 2017, MetLife shared how they’re inspiring new innovation in their organization with Docker EE. MetLife also took part in the Docker MTA program designed to help customers bring portability, security, and efficiency to their traditional applications while saving on their total cost of ownership (TCO). Learn more about the Docker MTA program at MetLife in this video.
In the keynote on Day Continue reading
As a government organization for the Netherlands, Kadaster is responsible for collecting and registering property and land rights, ships, aircraft and telecom networks. An important service for Dutch citizens, this registry information is made available predominantly through online web services.
Beginning in 2011, Kadaster created a vision for their next generation technology platform which included a combination of SaaS, IaaS, and PaaS services. Today, Docker Enterprise Edition (Docker EE) is an essential part of this solution. At DockerCon Europe, Rick Peters from CapGemini discussed how they worked with Kadaster to deliver an agile application platform that now runs some of the most demanding workloads for the Dutch organization.
You can watch the talk here:
Beginning in 2012, Kadaster created one of the most successful private clouds in the Netherlands. Starting out with 300 virtual machines, the team did not think they would surpass 750 virtual machines, but blew well past that figure in just two years.
That rapid expansion was fueled by the easier self-service delivery model and the ability to deploy apps more regularly and faster. Initially focused as a Java runtime platform powered by virtualization, the platform objectives shifted over Continue reading
In a previous post, I explained how Red Hat Ansible Tower works with SAML. A little-known fact about Ansible Tower is that it supports two-factor SAML. More precisely, Ansible Tower can be configured to not disallow SAML with two-factor. Ansible Tower relies heavily on django-social-auth, which provides a SAML backend that in turn relies on python-saml. python-saml contains a default setting, requestedAuthnContext, that, put simply, requests that the IdP authenticate the user with a password. To reiterate: by default, Ansible Tower asks the IdP to authenticate the user with a password, and the IdP is not given the choice to authenticate the user with two-factor.
In order to allow the IdP to choose two-factor, we need to not ask it to authenticate using a password. More specifically, we need to not include the samlp:RequestedAuthnContext directive at all. Ansible Tower shouldn’t be making presumptions about the authentication methods available on the IdP’s side. Maybe the IdP supports calling the employee on the phone to authenticate. This is a decision that should be made by the IdP.
Let’s see how we make this happen. Create the file /etc/tower/conf.d/saml.py with the following content:
"SOCIAL_AUTH_SAML_SECURITY_CONFIG": {
"requestedAuthnContext": False
}
Then issue Continue reading