This post provides a (very) basic introduction to the AWS CLI (command-line interface) tool. It’s not intended to be a deep dive, nor is it intended to serve as a comprehensive reference guide (the AWS CLI docs nicely fill that need). I also assume that you already have a basic understanding of the key AWS concepts and terminology, so I won’t bore you with defining an instance, VPC, subnet, or security group.
For the purposes of this introduction, I’ll structure it around launching an EC2 instance. As it turns out, there’s a fair amount of information you need before you can launch an AWS instance using the AWS CLI. So, let’s look at how you would use the AWS CLI to help get the information you need in order to launch an instance using the AWS CLI. (Tool inception!)
To launch an instance, you need five pieces of information:
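To give a sense of where this is headed, here is a minimal sketch of the command this information feeds into (the IDs below are placeholders, and I'm assuming the usual five inputs: an AMI ID, an instance type, a key pair name, a security group ID, and a subnet ID):

# Launch a single instance; every value here is a made-up example
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-keypair \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0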
When you think of mortgage companies, you think of paperwork. You probably don’t think agile and responsive to customers. But Franklin American Mortgage wanted to disrupt that model. With their investments in innovation, microservices and Docker Enterprise Edition, they’ve been able to quickly create a platform to make technology core to their success.
Don Bauer, the DevOps lead at Franklin American, is part of an innovation team the mortgage company established last year to challenge the status quo. Franklin American has been doing amazing things and transforming its business, all with Docker Enterprise Edition as the foundation.
Don presented at DockerCon 2018 along with Franklin American’s VP of Innovation, Sharon Frazier. They’ve been able to quickly build a DevOps culture around four pillars: Visibility, Simplification, Standardization, and Experimentation. Experimentation is key: it lets them fail fast and fearlessly.
In an interview at DockerCon, Don explained how they use Docker Enterprise Edition to drive innovation.
“Docker has allowed us to fail fearlessly. We can test new things easily and quickly and if they work, awesome. But if they don’t, we didn’t spend weeks or months on it.” – Don Bauer, DevOps Lead
For the company, innovation is about Continue reading
Welcome to another installment of our Windows-centric Getting Started Series! In the prior posts we talked about connecting to Windows machines, gave a brief introduction on using Ansible with Active Directory, and discussed package management options on Windows with Ansible. In this post we’ll talk a little about applying security methodologies and practices in relation to our original topics.
In order to discuss security issues in relation to Ansible and Windows, we’ll be applying concepts from the popular CIA Triad: Confidentiality, Integrity, and Availability.
Confidentiality is pretty self-evident — protecting confidentiality helps restrict private data to only authorized users and helps to prevent non-authorized ones from seeing it. The way this is accomplished involves several techniques such as authentication, authorization, and encryption. When working with Windows, this means making sure the hosts know all of the necessary identities, that each user is appropriately verified, and that the data is protected (by, for example, encryption) so that it can only be accessed by authorized parties.
Integrity is about making sure that data is not tampered with or damaged to the point that it becomes unusable. When you’re sending data across a network, you want to make sure that it arrives Continue reading
Docker Compose is wildly popular with developers for describing an application. In fact, there are more than 300,000 Docker Compose files on GitHub. With a set of services described in a docker-compose.yml file, it’s easy to launch a complex multi-service application (or a simple, single-service app) on Docker by running a single command. This ease of use makes Docker Compose perfect for development teams striving for a quick way of getting started with projects.
Over time, Compose has evolved, adding many features that help when deploying those same applications to production environments: for example, specifying the number of replicas, memory resource constraints, or a custom syslog server. But those attributes can become specific to your own environment. There are a number of different strategies for trying to address this situation, but the most common is relying on copy and paste. It’s fairly common to maintain multiple Compose files for the same application running in different environments, for example. This leads to two problems:
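As an illustration of that copy-and-paste pattern, here is a sketch of how multiple Compose files are often combined on the command line (the file names are hypothetical):

# Development: the base file alone is enough
docker-compose -f docker-compose.yml up -d

# Production: layer an environment-specific override file on top of the base file
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d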
Let me tell you a story. It’s 2014 and I had read so many articles about Docker (as the project was called then), how awesome it is and how it makes the lives of developers so much easier. Being one, I decided to try it out. Back in the day, I was working on some django applications. Those apps were really simple: just a webserver and a database. So I went straight ahead to docker-compose. I read in the docs that I should create a docker-compose.yml file and then just docker-compose up. An error message here and there, but I was able to navigate the containers to success with no big issues. And that was it. One command to run my application. I was sold on containers.
I was so excited that I started talking about Docker and docker-compose to everyone, everywhere. In the office breakroom, to my dad, at a meetup, to a crowd of 50 at a local conference. It wasn’t completely easy, since some people argued or did not understand fully. But I definitely made some converts. We even made a workshop series with my friends Peter Schiffer and Continue reading
Yesterday we continued a long tradition at DockerCon: the Cool Hacks closing keynote. In our Cool Hacks keynote, we like to emphasize applications that push the limits and applications that represent major future trends in container workloads. We also like to feature applications that demonstrate how Docker-fueled innovation can be used every day.
This DockerCon, the three applications we chose embodied all of these characteristics.
Our first hack, by Christopher Heistand of the Johns Hopkins University Applied Physics Laboratory, is helping save the world. The Double Asteroid Redirection Test (DART) is testing kinetic impact against an asteroid to measure whether one can be redirected. They use Docker to emulate the specialized and expensive hardware, saving them money and development time.
David Aronchick (@aronchick) and Michelle Casbon (@texasmichelle) demonstrated our second hack with Kubeflow: machine learning in production workloads, at scale.
And finally, Idit Levine (@Idit_Levine) showed us Gloo. Gloo gives you the portability and choice of a serverless framework, from cloud services like AWS Lambda to any of several containerized, self-hosted serverless frameworks, all running in Docker EE.
Check out our Cool Hacks closing keynote.
And finally, we wrapped up inviting Continue reading
In yesterday’s DockerCon keynote, Eric Drobisewski, Senior Architect at Liberty Mutual Insurance, shared how Docker Enterprise Edition has been a foundational technology for their digital transformation.
If you missed it, the replay of the keynote is available below:
Liberty Mutual – the 3rd largest property and casualty insurance provider in the United States – recognizes that the new digital economy is bringing a faster cycle of technology evolution. Disruptive technologies like autonomous vehicles and smart homes are changing the way customers interact and transact. Liberty Mutual sees these as opportunities to bring new services to market and ways to reinvent traditional insurance models, but they needed to become more flexible and agile while managing their technical debt.
As a 106-year-old company, Liberty Mutual recognized that it was not going to become agile overnight. Instead, the company has built a “multi-lane highway” that enables both traditional apps and new microservices apps to modernize at different speeds according to their needs, all based on Docker Enterprise Edition.
“(Docker Enterprise Edition) began to open multiple paths for our teams to modernize traditional applications and move them to the cloud in a Continue reading
Since the advent of AWS Lambda in 2014, the Function as a Service (FaaS) programming paradigm has gained a lot of traction in the cloud community. At first, only the large cloud providers offered such services (AWS Lambda, Google Cloud Functions, or Azure Functions, for example) with a pay-per-invocation model, but since then interest has grown among developers and enterprises in building their own solutions on an open source model.
The maturation of container platforms such as Docker EE has made this process even easier, resulting in a number of competing frameworks in this space. We have identified at least 9 different frameworks*. In this study, we start with the following six: OpenFaaS, nuclio, Gestalt, Riff, Fn and OpenWhisk. You can find an introduction (including slides and videos) to some of these frameworks in this blog post from the last DockerCon Europe.
These frameworks vary a lot in feature set, but they can be generalized as having several key elements, shown in the following diagram from the CNCF Serverless Working Group's serverless architecture whitepaper:
As you might know, Red Hat Ansible Tower supports SAML authentication (both N and Z) by default. This document will guide you through the steps for configuring both products to delegate the authentication to RHSSO/Keycloak (Red Hat Single Sign-On).
Requirements:
Unless you have your own certificate already, the first step will be to create one. To do so, execute the following command:
openssl req -new -x509 -days 365 -nodes -out saml.crt -keyout saml.key
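If you want to sanity-check the certificate you just generated before uploading it anywhere (an optional step, shown here only as a quick sketch), you can inspect it with openssl:

# Display the subject and validity dates of the newly created certificate
openssl x509 -in saml.crt -noout -subject -dates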
Now we need to create the Ansible Tower Realm on the RHSSO platform. Go to the "Select Realm" drop-down and click on "Add new realm":
Once created, go to the "Keys" tab and delete all certificates, keys, etc. that were created by default.
Now that we have a clean realm, let's populate it with the appropriate information. Click on "Add Keystore" in the upper right corner and click on RSA:
Click on Save and create your Ansible Tower client information. It is recommended to start with the Tower configuration so that you can inject the metadata file and customize a few of the fields.
Log in as the admin user Continue reading
Hello from San Francisco! Tuesday we kicked off the first day of DockerCon with a general session jam-packed with inspiration, demos, and customer guest speakers.
Steve Singh, our CEO and Chairman, opened the session with Docker’s promise to ensure freedom of choice, agility in development and operations, and pervasive security in a container platform that can help unlock the potential for innovation in every company. Docker will deliver an integrated toolset with a delightful user experience, and it needs innovators like you.
Day one also featured three demos of new technology capabilities for both Docker Desktop and Docker Enterprise Edition. These features are not yet generally available, but those interested in the beta can sign up here to be notified when they are released.
Today at DockerCon, we demonstrated new application management capabilities for Docker Enterprise Edition that will allow organizations to federate applications across Docker Enterprise Edition environments deployed on-premises and in the cloud as well as across cloud-hosted Kubernetes. This includes Azure Kubernetes Service (AKS), AWS Elastic Container Service for Kubernetes (EKS), and Google Kubernetes Engine (GKE).
Most enterprise organizations have a hybrid or multi-cloud strategy, and the rise of containers has helped to make applications more portable. However, when organizations start to adopt containers as their default application format, they start to run into the challenges of managing multiple container environments, especially when each of them has a different set of access controls, governance policies, content repositories, and operational models. For common hybrid and multi-cloud use cases like bursting applications to the cloud for additional capacity or migrating them from one site to another for availability or compliance reasons, organizations start to realize the need for a single control plane for all containerized applications, no matter where they will be deployed.
Docker Enterprise Edition is the only enterprise-ready container platform that can deliver federated application management with a secure supply chain. Not only Continue reading
Docker and Microsoft have been working together since 2014 to bring containers to Windows and .NET applications. Today at DockerCon, we share the next step in this partnership with the preview and demonstration of Kubernetes on Windows Server with Docker Enterprise Edition.
Docker and Microsoft brought container technology into Windows Server 2016, ensuring consistency for the same Docker Compose file and CLI commands across both Linux and Windows. Windows Server ships with a Docker Enterprise Edition engine, meaning all Windows containers today are based on Docker. Recognizing that most enterprise organizations have both Windows and Linux applications in their environment, we followed that up in 2017 with the ability to manage mixed Windows and Linux clusters in the same Docker Enterprise Edition environment, enabling support for hybrid applications and driving higher efficiencies and lower overhead for organizations. Using Swarm orchestration, operations teams could support different application teams with secure isolation between them, while also allowing Windows and Linux containers to communicate over a common overlay network.
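As a rough sketch of what that looks like from the CLI (the service and image names here are hypothetical), an operator can create a shared overlay network and then use placement constraints to pin each service to the right operating system:

# Create an overlay network that both Windows and Linux services can attach to
docker network create --driver overlay app-net

# Pin a Windows-based service and a Linux-based service to the appropriate nodes
docker service create --name web --network app-net \
    --constraint 'node.platform.os==windows' my-windows-image
docker service create --name api --network app-net \
    --constraint 'node.platform.os==linux' my-linux-image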
Since then, Docker has seen the rapid rise of Windows containers as organizations recognize the benefits of containerization and want to apply them across their entire application Continue reading
In today’s DockerCon keynote we previewed an upcoming Docker Desktop feature that will make it easier than ever to design your own container-based applications. For a certain set of developers, the current iteration of Docker Desktop has everything one might need to containerize an application, but it does require an understanding of the Dockerfile and Compose file specifications in order to get started, and the Docker CLI to build and run your applications.
But we’ve been thinking about ways to bring this capability to ALL developers. We want to make it easier to get started with containerization, and we want to make it even easier to share, collaborate, and integrate container-based development into more developers’ toolsets. This new guided workflow feature is a preview of what we’re working on, and we wanted to share more details on the ideas we’ve incorporated and are thinking about for the future.
The first thing you’ll notice is this is a graphical tool. We’re not breaking anything that’s already working today – everything being created behind the scenes is still Dockerfiles and Compose – we’re just giving you a new way to get from Point A to Point B, because not everyone Continue reading
We are excited to share that Red Hat Ansible Tower was awarded a 2018 Software & Information Industry Association (SIIA) CODiE Award in the Best DevOps Tool category. The award recognizes the best tools for supporting collaboration between developers and operations. Additionally, we are proud to share that Ansible Tower was honored with the Best Overall Business Technology Solution award. This award represents the product with the highest scores of both rounds of judging across all 52 business technology categories.
The SIIA CODiE Awards are the industry's only peer-recognized awards program. Business technology leaders, including senior executives, analysts, media, consultants, and investors, evaluate assigned products during the first-round review, which determines the finalists. SIIA members then vote on the finalist products, and the scores from both rounds are tabulated to select the winners. Finalists represent the best products, technologies, and services in software, information, and business technology.
We would like to thank the Ansible community for their continued support, contributions and excitement for the solution. The community is at the heart of all Ansible products and these awards were made possible because of our tireless community that collaborates every day to help more people experience the power of automation.
Congratulations to the Continue reading
During a recent client visit, we were asked to help migrate the following script for deploying a centralized sudoers file to RHEL and AIX servers. This is a common scenario which can provide some good examples of leveraging advanced Ansible features. Additionally, we can consider the shift in approach from a script that does a task to describing and enforcing the state of an item idempotently.
Here is the script:
#!/bin/sh
# Desc: Distribute unified copy of /etc/sudoers
#
# $Id: $
#set -x
export ODMDIR=/etc/repos
#
# perform any cleanup actions we need to do, and then exit with the
# passed status/return code
#
clean_exit()
{
    cd /
    test -f "$tmpfile" && rm $tmpfile
    exit $1
}
#Set variables
PROG=`basename $0`
PLAT=`uname -s|awk '{print $1}'`
HOSTNAME=`uname -n | awk -F. '{print $1}'`
HOSTPFX=$(echo $HOSTNAME |cut -c 1-2)
NFSserver="nfs-server"
NFSdir="/NFS/AIXSOFT_NFS"
MOUNTPT="/mnt.$$"
MAILTO="[email protected]"
DSTRING=$(date +%Y%m%d%H%M)
LOGFILE="/tmp/${PROG}.dist_sudoers.${DSTRING}.log"
BKUPFILE=/etc/sudoers.${DSTRING}
SRCFILE=${MOUNTPT}/skel/sudoers-uni
MD5FILE="/.sudoers.md5"
echo "Starting ${PROG} on ${HOSTNAME}" >> ${LOGFILE} 2>&1
# Make sure we run as root
runas=`id | awk -F'(' '{print $1}' | awk -F'=' '{print $2}'`
if [ $runas -ne 0 ] ; then
echo "$PROG: you must be root to run Continue reading
If you’re not one of the 6,000 expected attendees at DockerCon 2018 in San Francisco, don’t worry, you don’t have to miss a thing! We’ve put together a list of the Top 5 things you can do to stay current on all things DockerCon if you’re not attending this year.
1. Learn about the latest release – Docker Enterprise Edition (EE) 2.0
Learn about the only enterprise-ready container platform that enables IT leaders to choose how to cost-effectively build and manage their entire application portfolio at their own pace, without fear of architecture and infrastructure lock-in. Read the blog and watch the Docker EE 2.0 Launch Virtual Event with customer stories from Liberty Mutual, Franklin American Mortgage, and ADP.
2. Watch the Livestream of the DockerCon Keynotes and Cool Hacks
Register now to see the DockerCon keynote sessions live, from wherever in the world you may be, on June 13th and 14th at 9AM PDT. Hear the latest Docker announcements from Steve Singh (CEO) and Scott Johnston (Chief Product Officer) and enjoy the highly technical demos of the latest innovations from the Docker team.
3. Follow the News from your peers at DockerCon
Be sure to get Continue reading
While exploring some of the intricacies around the use of X.509v3 certificates in Kubernetes, I found myself wanting to be able to view the details of a certificate embedded in a kubeconfig file. (See this page if you’re unfamiliar with what a kubeconfig file is.) In this post, I’ll share with you the commands I used to accomplish this task.
First, you’ll want to extract the certificate data from the kubeconfig file. For the purposes of this post, I’ll use a kubeconfig file named config and found in the .kube subdirectory of your home directory. Assuming there’s only a single certificate embedded in the file, you can use a simple grep statement to isolate this information:
grep 'client-certificate-data' $HOME/.kube/config
Combine that with awk to isolate only the certificate data:
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}'
This data is Base64-encoded, so we decode it (I’ll wrap the command using backslashes for readability now that it has grown a bit longer):
grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d
You could, at this stage, redirect the output into a file (like certificate.crt) if so desired; the data you have is Continue reading
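To give a sense of where this is headed, the decoded output is a PEM-formatted certificate, so one likely next step (a sketch using standard openssl x509 options) is to pipe it straight into openssl to view the certificate's details:

# Decode the embedded certificate and display its full details
grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | \
openssl x509 -text -noout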
DockerCon San Francisco 2018 is here! From all of us at Docker HQ, we want to welcome those who have travelled to be with us in San Francisco. For this year’s DockerCon we wanted to create an experience that uniquely helps YOU figure out where you are today and where you want to go next with your containerized applications and operations. As you get to the Moscone Center in San Francisco, you’ll see signs guiding you towards various stages of the technology adoption journey. Below we’ve summarized common traits that customers like you have at each phase of the journey; once you identify where you are, click to jump down to some last-minute guidance on sessions and activities that we think will be most helpful for each stage.
Click to jump directly to your journey stage:
Download the official DockerCon application for DockerCon 2018:
This app features attendee resources such as agendas, venue maps, access to Hallway Track, attendee networking, and more! You can use the DockerCon App to take notes and rate both speakers and sessions.
Docker Hallway Track is an innovative platform that helps you find and network with like-minded people and meet one-on-one. Make new and meaningful connections by sharing knowledge, and meet with other attendees using the Book a Hallway Track scheduling tool. Just log in once using your registration credentials to access it.
The mapping section includes a map of the conference, giving Continue reading
I’ve been working to deepen my Terraform skills recently, and one avenue I’ve been using to help in this area is expanding my use of Terraform modules. If you’re unfamiliar with the idea of Terraform modules, you can liken them to Ansible roles: a re-usable abstraction/function that is heavily parameterized and can be called/invoked as needed. Recently I wanted to add support for tagging AWS instances in a module I was building, and I found out that you can’t use variable interpolation in the normal way for AWS tags. Here’s a workaround I found in my research and testing.
Normally, variable interpolation in Terraform would allow one to do something like this (this is taken from the aws_instance resource):
tags {
  Name = "${var.name}-${count.index}"
  role = "${var.role}"
}
This approach works, creating tags whose keys are “Name” and “role” and whose values are the interpolated variables. (I am, in fact, using this exact snippet of code in some of my Terraform modules.) Given that this works, I decided to extend it in a way that would allow the code calling the module to supply both the key as well as the value, thus providing more flexibility Continue reading