Major business initiatives such as digitization and cloud migration have threatened to disrupt IT organizations that are already spread thin simply supporting the core business. Containerization is viewed as a way to help with these initiatives because it speeds the delivery of software and typically cuts operational costs by more than 50% in the first year alone. To support a containerization strategy, many enterprises are turning to container platforms to manage and secure the delivery of their mission-critical software from development through production.
For customers, choosing the right container platform is more than a technical decision – it is a business decision. As with any decision of this nature, it is critical that the container solution has the flexibility to evolve as business requirements change. Unfortunately, not all container platforms are the same – some lack security while others force organizations into a prescriptive operating model. Even worse, most container platforms will “lock in” an organization to a single OS, a single cloud, a single type of application, and a single development model – leading CIOs down a dead-end path they will have to abandon in less than two years.
So how can organizations continue to move forward with modern technologies? Continue reading
One of my favorite guilty pleasures is the movie "10 Things I Hate About You". If you're not familiar with it, it's a '90s teenybopper flick that's loosely based on Shakespeare's "The Taming of the Shrew". In the movie, our hero Patrick is surreptitiously paid to woo the man-hating Kat so that slimy Joey will be allowed to date her younger sister Bianca. Kat initially can't stand Patrick and his numerous bad habits, but by the end of the story has fallen for him. She reads him a poem that starts off describing ten things she hates about him, but wraps it up declaring her love for him instead.
I love Windows, but I know many Linux admins can't stand it and avoid working with it at all costs. While working on a talk to espouse the use of Ansible to manage Windows in the same way as Linux, I imagined a Linux admin discovering the power of Ansible's features and common language to see the beauty in an automated Windows setup. It inspired me to write my own version of Kat's poem:
I hate that you're not SSH, and the shell that you call "Power",
I hate Continue reading
At our recent virtual event, we shared our excitement around Docker Enterprise Edition (EE) 2.0 – the most complete enterprise-ready container platform in the market. We shared how this release enables organizations like Liberty Mutual and Franklin American Mortgage Company, both presenters at DockerCon 2018, to efficiently scale their container environment across many teams while delivering choice and flexibility. We demonstrated some of the new advanced capabilities around access controls with secure application zones and building a consistent software supply chain across regions, and highlighted how easily and interchangeably Swarm and Kubernetes orchestration can be used in the same environment.
If you missed the live event, don’t worry! You can still catch the recording on-demand here.
We got great questions throughout the event and will address the most common ones in our blog over the next few days.
One of the highlights of this release is the integration of Kubernetes, making Docker EE the only platform that runs both Swarm and Kubernetes simultaneously on the same cluster – so developers do not need to make an orchestration choice. Operations teams have the flexibility to choose orchestrators interchangeably.
Q: Is Continue reading
One of the most common questions I hear while talking about Ansible's support for cloud providers is whether it will work in hybrid environments. You may not be able to use the ec2 module to create an instance in your datacenter, but Ansible has modules for RHV, OpenStack, and VMware to talk to the virtualization tools in your datacenter. I love working in AWS, Azure, and Google Cloud, but most environments I've worked in have had on-prem systems as well.
That's what I've been invited to Red Hat Summit to talk about -- best practices for automating all the infrastructure at your disposal, not just the cloud services. My demos will feature a couple new Ansible Core/Engine 2.5 features, as well as preview new 2.6-only features.
My favorite feature to show off is part of the new ec2_instance module. In the demo we'll have a look at how Tower provisioning callbacks are now built into the ec2_instance module, making provisioning brand-new instances as easy as:
- ec2_instance:
    image:
      id: "{{ latest_centos.image_id }}"
    key_name: my-secret-key
    instance_type: t2.large
    name: call-me-maybe
    security_groups:
      - demo-web-sg
    # COOL MAGIC HERE
    tower_callback:
      host_config_key: "{{ your_secret_here }}"
      job_template_id: Continue reading
Welcome to the second installment of our Windows-centric Getting Started series!
Last time we walked you through how Ansible connects to a Windows host. We’ve also previously explored logging into Ansible Tower while authenticating against an LDAP directory. In this post, we’ll go over a few ways you can use Ansible to manage Microsoft’s Active Directory. Since AD plays a role in many Windows environments, using Ansible to manage Windows will probably include running commands against the Active Directory domain.
We’ll be using WinRM to connect to Windows hosts, so Ansible or Tower needs to know that. In Ansible Tower this is handled with machine credentials, but when running Ansible from a terminal, the playbook should spell out the connection settings with variables:
---
- name: Your Windows Playbook
  hosts: win
  vars:
    ansible_ssh_user: administrator
    ansible_ssh_pass: ThisIsWhereStrongPassesGo
    ansible_connection: winrm
    ansible_winrm_server_cert_validation: ignore
  tasks:
Along with using the local admin account and password, the WinRM connection method is named explicitly. The variable that skips certificate validation is for standalone, non-domain hosts; a domain-joined instance should have its certificates validated against the domain.
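To sanity-check those connection variables before doing anything substantial, a single task can be placed under that tasks: key – a minimal sketch (not from the original post) using the built-in win_ping module:

    - name: Verify WinRM connectivity
      win_ping: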
Speaking of domains, Ansible can spin up a new domain Continue reading
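As a hedged sketch of what that domain creation might look like (the win_domain module is real; the domain name and password variable here are hypothetical):

- name: Create a new Active Directory domain
  win_domain:
    dns_domain_name: ad.example.com              # hypothetical domain name
    safe_mode_password: "{{ dsrm_password }}"    # hypothetical vaulted DSRM password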
No Postfix installation is complete without OpenDKIM and OpenDMARC.
While some people go for all-in-one solutions that do all of this for them with a single command or two (and then cry to their gods as soon as the system fails, because they have no idea how to debug it), the rest of us would rather be our own boss and set things up manually and carefully based on our needs, so we can troubleshoot when things go wrong.
This, however, is easier said than done. In this post, rather than trying to explain what these tools are and how they can be set up (which can be found everywhere on the web), I am mainly going to address the issues that you might encounter when running Postfix and these milters on the same system running Ubuntu.
OpenDKIM and OpenDMARC are designed to be used as milters. They are two different programs for two different, yet related, tasks.
They show a lot of similarities in their configuration files, and both suffer from the same limitations when running alongside a chrooted Postfix instance.
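As an illustration of the usual workaround, both milters can be reached over TCP instead of Unix sockets so that Postfix's chroot does not get in the way. A minimal sketch for /etc/postfix/main.cf, assuming the commonly documented default ports (8891 for OpenDKIM, 8893 for OpenDMARC):

# /etc/postfix/main.cf (sketch; ports are the commonly documented defaults)
smtpd_milters = inet:localhost:8891, inet:localhost:8893
non_smtpd_milters = $smtpd_milters
milter_default_action = accept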
While in a recent enough version of Postfix, daemons are Continue reading
With KubeCon EU happening in Copenhagen, we looked back at the most popular posts with our readers on Docker and Kubernetes. For those of you who have yet to try Docker EE 2.0, this blog highlights how, in Docker for Mac and Docker for Windows, you can use Docker Compose to deploy an application directly onto a Kubernetes cluster.
If you’re running an edge version of Docker on your desktop (Docker for Mac or Docker for Windows Desktop), you can now stand up a single-node Kubernetes cluster with the click of a button. While I’m not a developer, I think this is great news for the millions of developers who have already been using Docker on their MacBook or Windows laptop, because they now have a fully compliant Kubernetes cluster at their fingertips without installing any other tools.
Developers using Docker to build containerized applications often build Docker Compose files to deploy them. With the integration of Kubernetes into the Docker product line, some developers may want to leverage their existing Compose files but deploy these applications in Kubernetes.
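As a hedged illustration (the service name and image are hypothetical), an ordinary version 3 Compose file is all that is needed, deployed with docker stack deploy against the desktop Kubernetes cluster:

# docker-compose.yml (sketch)
version: "3.3"
services:
  web:
    image: nginx:alpine    # hypothetical demo service
    ports:
      - "8080:80"

# deployed with, for example:
#   docker stack deploy --compose-file docker-compose.yml demo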
With Docker on the desktop (as well as Docker Enterprise Edition) you can use Docker Compose to directly deploy an application Continue reading
With less than six weeks until DockerCon 2018, we can barely contain our excitement! From favorite tips and tricks for using Docker in production to leveraging Docker for machine learning, Docker Captains come together at DockerCon to share their knowledge and collaborate with the broader community. We’ve asked Docker Captains to share what they are most looking forward to at DockerCon. Here are some of their responses.
“I’m looking forward to meeting the many other Docker enthusiasts and champions and listening to other cool things that Docker makes possible” – Kinnary Jangla, Pinterest
“In 2015, I attended DockerCon for the first time. I was sitting in a chair and listening to the amazing stories and ideas presented by speakers at the conference, which set off a chain of events that led to today. I feel privileged, and am really looking forward to being on stage and sharing our transformational journey to inspire the people who would sit in that chair. I am also looking forward to hearing the keynotes and the exciting new announcements that I am sure are being lined up for the big event.” – Alexandre Iankoulski, Baker Hughes
“Learning about the Continue reading
Highly regulated industries like financial services, insurance and government have their own set of complex and challenging regulatory IT requirements that must be constantly maintained. For this reason, the introduction of new technology can sometimes be difficult. Docker Enterprise Edition provides these types of organizations with both a secure platform on which containers are the foundation for building compliant applications and a workflow for operational governance at scale.
The problem remains that even with the technological innovation of containers, cloud and other new tools, the area of IT compliance has remained relatively unchanged, with security standards that lag far behind, creating mismatches between traditional controls and modern systems. Organizations are still dependent on the same mundane, paperwork-heavy audit and reporting processes of previous decades. Building a PCI, FISMA or HIPAA compliant system is no small feat, even for large enterprises, given the time and cost required to develop and maintain the documentation and artifacts that must be continuously audited by a third party.
To address these requirements, Docker has collaborated with the National Institute of Standards and Technology (NIST), and today, we are excited to announce that Docker is fully embracing Continue reading
With KubeCon EU happening in Copenhagen, we looked back at the most popular posts with our readers on Docker and Kubernetes. For those of you who have yet to try Docker EE 2.0, this blog highlights how Docker EE 2.0 provides a secure supply chain for Kubernetes.
The GA release of the Docker Enterprise Edition (Docker EE) container platform last month integrates Kubernetes orchestration, running alongside Swarm, to provide a single container platform that supports both legacy and new applications running on-premises or in the cloud. For organizations that are exploring Kubernetes or deploying it in production, Docker EE offers integrated security for the entire lifecycle of a containerized application, providing an additional layer of security before the workload is deployed by Kubernetes and continuing to secure the application while it is running.
Mike Coleman previously discussed access controls for Kubernetes. This week we’ll begin discussing how Docker EE secures the Kubernetes supply chain.
When you purchase something from a retail store, there is an entire supply chain that gets the product from raw materials to the manufacturer to you. Similarly, there is a software supply chain that takes an application from Continue reading
The countdown is on! It’s just a few short days until Red Hat Summit. I’m Kaete Piccirilli and I do all things Ansible Marketing. While it’s not my first Summit at Red Hat, it’s the first one I’ll be attending, and I could not be more excited to finally be in the mix with our customers, partners and open source communities.
Red Hat Summit has an action-packed few days planned, and I have picked a few Ansible Automation sessions that you won’t want to miss.
Managing 15,000 network devices with Ansible
Ansible allows network management across virtually any device platform. Any network device can be managed via SSH or an API. We took this cutting-edge network automation to scale with a customer’s global network infrastructure, giving them the ability to manage nearly all of their network devices at one time.
In this session, we'll discuss the architecture and strategies involved in network automation.
Manage Windows like Linux with Ansible
Few questions induce fear into the heart of a Linux admin more than, "Hey, can you manage these Windows servers?"
In this session, we'll show how Ansible does simple, secure, and agentless Windows management with the exact Continue reading
GitKraken is a full-featured graphical Git client with support for multiple platforms. Given that I’m trying to live a multi-platform life, it made sense for me to give this a try and see whether it is worth making part of my (evolving and updated) multi-platform toolbelt. Along the way, though, I found that GitKraken doesn’t provide an RPM package for Fedora, and that the installation isn’t as straightforward as one might hope. I’m documenting the procedure here in the hope of helping others.
First, download the latest release of GitKraken. You can do this via the terminal with this command:
curl -LO https://release.gitkraken.com/linux/gitkraken-amd64.tar.gz
Extract the contents of the GitKraken download into its own directory under /opt using this command (you can use a different directory if you like, but I prefer to install third-party applications like this under /opt):
sudo tar -C /opt -xvf gitkraken-amd64.tar.gz
This will extract everything into /opt/gitkraken.
Next, you’ll create a symbolic link to an existing library to fix an error with GitKraken when running on Fedora (this is documented here):
sudo ln -s /usr/lib64/libcurl.so.4 /usr/lib64/libcurl-gnutls.so.4
Once this is done, you could just run Continue reading
In early 2017 I posted about my (evolving) multi-platform toolbelt, describing some of the applications, standards, and services that I use across my Linux and macOS systems. In this post, I’d like to provide an updated review of that toolbelt.
Visual Studio Code: I switched from Sublime Text to Visual Studio Code during my latest migration to Fedora 27 on a Lenovo ThinkPad X1 Carbon. Since I’m also planning on expanding my coding skills with Golang, I felt that Visual Studio Code would be a better choice than Sublime Text. I’m still generating the majority of my content in Markdown (MultiMarkdown is the flavor that I generally use), and I’ve found Visual Studio Code to be pretty decent as a Markdown editor.
IMAP/SMTP: I’ve standardized on using IMAP/SMTP for all my e-mail accounts, which gives me quite a bit of flexibility in clients and OSes. I’ve pretty much standardized on Thunderbird (which supports OS X, Linux, and Windows).
Unison: This cross-platform file synchronization tool helps keep my files in sync across my macOS and Linux systems.
Dropbox: Dropbox gives me access to non-confidential files from any of my devices or platforms (macOS, iOS, and Linux).
This year’s summit reflected what is top of mind for government organizations, namely IT modernization and what that means for infrastructure, applications, data and the workforce. As mentioned in the keynote address, the line between government IT and private sector IT is blurring now more than ever. From the priorities outlined in the White House IT Modernization Report to the discussions at the recent IT modernization summit, the themes focus on results of better customer service and better stewardship of tax dollars.
Better customer service translates into improving existing services, delivering new services and increasing transparency. To that end, government organizations are taking cues from industry to see how the latest technology and best practices can be applied and adapted to meet the added requirements of government. The agenda featured speakers from government agencies, higher ed, system integrators and industry partners providing practical insight from their own transformation initiatives and deep dives into the modern technology stack.
Watch these featured videos from the event:
Welcome to Technology Short Take #98! Now that I’m starting to get settled into my new role at Heptio, I’ve managed to find some time to pull together another collection of links and articles pertaining to various data center technologies. Feedback is always welcome!
According to a recent Stack Overflow report, the Docker platform is among the top 10 skills to learn if you want to advance your career in tech. So where do you go to start learning Docker, you may ask? Well, the good news is that we now have free workshops and hands-on labs included as part of your DockerCon 2018 ticket.
The conference workshops will focus on a range of subjects, from migrating .NET or Java apps to the Docker platform to deep dives on container monitoring and logging, networking, storage and security. Each workshop is designed to give you hands-on instruction and guidance on key container concepts, with mentoring by Docker Engineers and Docker Captains. The workshops are a great opportunity to zoom in on specific aspects of the Docker platform. Here is the list of free workshops available (click on the links to see the full abstracts):
Roles are an essential part of Ansible and help in structuring your automation content. The idea is to have clearly defined roles for dedicated tasks. In your automation code, roles are called from your Ansible Playbooks.
Since roles usually have a well-defined purpose, they make it easy to reuse your code, for yourself but also within your team. And you can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.
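For example, a playbook might pull in roles like this (a minimal sketch; the role and group names are hypothetical):

---
- name: Configure web servers
  hosts: webservers
  roles:
    - common
    - webserver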
So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:
tasks/
handlers/
files/
templates/
vars/
defaults/
meta/
In the folders which contain Ansible code - like tasks, handlers, vars, and defaults - there are main.yml files. Those contain the relevant Ansible bits. In the case of the tasks directory, they often include other YAML files within the same directory (see the sketch after this paragraph). Roles even provide ways to test your automation code - in Continue reading
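To illustrate that inclusion pattern, a tasks/main.yml might look like this (a sketch; the OS-specific file layout is an assumption, not from the original post):

# roles/myrole/tasks/main.yml (sketch)
---
- name: Pull in OS-specific tasks from the same directory
  include_tasks: "{{ ansible_os_family }}.yml"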
“You are now Certified Kubernetes.” With this comment, Docker for Windows and Docker for Mac passed the Kubernetes conformance tests. Kubernetes has been available in Docker for Mac and Docker for Windows since January, having first been announced at DockerCon EU last year. But why is this important to the many of you who are using Docker for Windows and Docker for Mac?
Kubernetes is designed to be a platform that others can build upon. As with any similar project, the risk is that different distributions vary enough that applications aren’t really portable. The Kubernetes project has always been aware of that risk – and this led directly to the forming of the Conformance Working Group. The group owns a test suite that anyone distributing Kubernetes can run, submitting the results to attain official certification. This test suite checks that Kubernetes behaves like, well, Kubernetes; that the various APIs are exposed correctly and that applications built using the core APIs will run successfully. In fact, our enterprise container platform, Docker Enterprise Edition, achieved certification using the same test suite. You can find more about the test suite at https://github.com/cncf/k8s-conformance.
This is important for Docker for Windows and Docker for Continue reading
Welcome to the first installment of our Windows-specific Getting Started series!
Would you like to automate some of your Windows hosts with Red Hat Ansible Tower, but don’t know how to set everything up? Are you worried that Red Hat Ansible Engine won’t be able to communicate with your Windows servers without installing a bunch of extra software? Do you want to easily automate everyone’s best friend, Clippy?
We can’t help with the last thing, but if you said yes to the other two questions, you've come to the right place. In this post, we’ll walk you through all the steps you need to take in order to set up and connect to your Windows hosts with Ansible Engine.
A few of the many things you can do for your Windows hosts with Ansible Engine include:
In addition to connecting to and automating Windows hosts using local or domain users, you’ll also be able to use runas to execute actions as the Administrator (the Windows alternative to Linux’s sudo or su), so Continue reading
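As a hedged sketch of what runas elevation looks like in a task (the command shown is purely illustrative; depending on configuration, a become password may also be required):

- name: Run a command with elevated rights
  win_command: whoami /all     # illustrative command only
  become: yes
  become_method: runas
  become_user: Administrator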
Last month the Linux Foundation announced the 2018 Open Container Initiative (OCI) election results of the Technical Oversight Board (TOB). Members of the TOB then voted to elect our very own Michael Crosby as the new Chairman. The result of the election should not come as a surprise to anyone in the community given Michael’s extensive contributions to the container ecosystem.
Back in February 2014, Michael led the development of libcontainer, a Go library that was developed to access the kernel’s container APIs directly, without any other dependencies. If you look at this first commit of libcontainer, you’ll see that the JSON spec is very similar to the latest version of the 1.0 runtime specification.
In the interview below, we take a closer look at Michael’s contributions to OCI, his vision for the future and how this benefits all Docker users.
I think that it is important to be part of the TOB to ensure that the specifications that have been created are generally useful and not specific to any one use case. I also feel it is important to ensure that the specifications are stable so that Continue reading