Registration is now open for Spousetivities at VMworld EMEA 2018 in Barcelona! Crystal just opened registration in the last day or so, and I wanted to help get the message out about these activities.
Here’s a quick peek at what Crystal has lined up for Spousetivities participants:
For even more details, visit the Spousetivities site.
These activities look amazing. Even if you’ve been to Barcelona before, these unique activities and tours are not available to the public—they’re crafted specifically for Spousetivities participants.
Prices for all these activities are reduced thanks to Veeam’s sponsorship, and to help make things even more affordable, there is a Full Week Pass that gives you access to all the activities at an additional discount.
These activities will almost certainly sell out, so register today!
Side note: Continue reading
Since starting my journey using Ansible in 2013, I've built Ansible Playbooks to automate many things: SaaS products, a cluster of Raspberry Pi's, a home automation system, even my own computers!
In the years since, I've learned a lot of tricks to help ease the maintenance burden for my work. It's important to me to have maintainable projects, because many of my projects—like Hosted Apache Solr—have been in operation for over a decade! If it's hard to maintain the project or it's hard to make major architecture changes, then I can lose customers to more nimble competitors, I can lose money, and—most importantly—I can lose my sanity!
I'm presenting a session at AnsibleFest Austin this year, Make your Ansible Playbooks flexible, maintainable, and scalable, and I thought I'd summarize some of the major themes here.
I love photography and automation, and so I spend a lot of time building electronics projects that involve Raspberry Pis and cameras. Without the organization system I use (part of it pictured above), it would be very frustrating putting together the right components for my project.
Similarly, in Ansible, I like to have my tasks organized so I can compose them more Continue reading
The AWS cloud provider for Kubernetes enables a couple of key integration points for Kubernetes running on AWS; namely, dynamic provisioning of Elastic Block Store (EBS) volumes and dynamic provisioning/configuration of Elastic Load Balancers (ELBs) for exposing Kubernetes Service objects. Unfortunately, the documentation surrounding how to set up the AWS cloud provider with Kubernetes is woefully inadequate. This article is an attempt to help address that shortcoming.
More details are provided below, but at a high-level here’s what you’ll need to make the AWS cloud provider in Kubernetes work:
Let’s dig into these requirements in a bit more detail.
It’s important that the name of the Node object in Kubernetes matches the private DNS entry for the instance in EC2. You can use hostnamectl or a configuration management tool (take your pick) to set the instance’s hostname to the FQDN that matches the EC2 Continue reading
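As a rough sketch of that hostname step (the metadata endpoint is standard EC2; hostnamectl is just one of the options mentioned above):

# Fetch the instance's private DNS name from the EC2 instance metadata service
PRIVATE_DNS=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
# Set the OS hostname to match, so the Kubernetes Node name lines up with EC2
sudo hostnamectl set-hostname "$PRIVATE_DNS"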
At DockerCon Copenhagen we launched the Docker Pals program in order to connect attendees and help them make the most out of their trip to DockerCon. Attending a conference by yourself can be intimidating and we don’t want anyone to feel that way at DockerCon! Pals get matched with a few others who are new (the “Pals”), and someone who knows their way around (the “Guide”) so that you will know someone before you arrive at the conference. So, DockerCon veterans, please consider signing up to be a Guide and help welcome those newer to DockerCon to the amazing Docker community. Participating gives you the opportunity to learn even more, grow an even bigger network, and have even more fun!
“Docker Pals was an excellent opportunity to meet new Docker Captains and Community Leaders who are open to engaging with container enthusiasts of all skill levels, specialities and backgrounds. I would certainly take advantage of the program again, and volunteer to be a Guide next year.” – Jackie Liu
“I was able to learn and understand how Docker is used in real time and in production with my fellow Docker Pal.” – Continue reading
In May of last year I wrote about using a Makefile with Markdown documents, in which I described how I use make and a Makefile along with CLI tools like multimarkdown (the binary, not the format) and Pandoc. At that time, I’d figured out how to use combinations of the various CLI tools to create various formats from the source Markdown document. The one format I hadn’t gotten right at that time was PDF. Pandoc can create PDFs, but only if LaTeX is installed. This article describes a method I found that allows me to create PDFs from my Markdown documents without using LaTeX.
Two tools are involved in this new conversion process: Pandoc, which I’ve discussed on this site before; and wkhtmltopdf, a new tool I just recently discovered. Basically, I use Pandoc to go from Markdown (MultiMarkdown, specifically) to HTML, and then use wkhtmltopdf to generate a PDF file from the HTML.
The first step in the process is to use Pandoc to convert from Markdown to HTML, including the use of CSS to include custom formatting. The command looks something like this:
pandoc --from=markdown_mmd+yaml_metadata_block+smart --standalone \
--to=html -V css=/home/slowe/Documents/std-styles.css \
--output=<destination-html-filename> <source-md-filename>
This generates Continue reading
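Based on the two-step workflow described above, the second step would be something along these lines (a sketch; the filenames are placeholders and any extra wkhtmltopdf options are assumptions):

# Convert the HTML produced by Pandoc into a PDF
wkhtmltopdf <destination-html-filename> <destination-pdf-filename>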
The Docker Certified Technology Program is designed for ecosystem partners and customers to recognize containers and plugins that excel in quality, collaborative support and compliance. Docker Certification gives organizations an easy way to run trusted software and components in containers on the Docker Enterprise container platform with support from both Docker and the publisher.
In this review, we’re looking at Docker Volume Plugins. In any production Docker Enterprise deployment, it is important to be able to manage storage for persistent applications. While it is possible to use traditional SAN and NAS solutions directly with Docker Enterprise and Swarm orchestration, it is much easier and more convenient to specify a Docker-native volume driver and manage volumes on demand through the Docker CLI and management interfaces.
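For illustration, a hedged sketch of what that looks like from the Docker CLI (the plugin name and the size option are placeholders; supported options vary by plugin):

# Install a certified volume plugin from Docker Store (name is a placeholder)
docker plugin install vendor/volume-plugin
# Create a volume backed by that plugin; driver-specific options vary by vendor
docker volume create --driver vendor/volume-plugin --opt size=20GB demo-vol
# Use the volume like any other Docker volume
docker run --rm -v demo-vol:/data alpine ls /data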
Check out the latest certified Docker Volume Plugins that are now available from our partners on Docker Store:
Along with Docker Volume plugins, we also have partners with container-based storage solutions in Docker Store:
Learn More:
This has been the Ansible messaging since the journey began. As time has gone on, the definition of simple we’re talking about may have been misunderstood...
Ansible’s simplicity is about being easy to understand, learn, and share. It’s about people. The often-peddled notion that “Ansible doesn’t scale past 500 hosts” is overshadowed by the customers we have with over 100,000 nodes under management. But the idea that scale is purely about the number of hosts misses the bigger point. Scale is so much more: scale is about the context in your business.
What is scale?
When it comes to IT, conclusions about ‘scale’ usually equate to numbers of something technical. A frequent customer ask might go something like "We need Ansible to scale to 70,000 hosts".
Once we look into that number, though, the reality is that no technical operation will happen across all of those hosts at once. The jeopardy to a business of that size is too great to chance a failure of every system. Operations at large scale happen piecemeal for safety reasons – rolling updates are not only a safer way to operate, they also show results faster.
Business function, geography, application and Continue reading
In 2018, why are we still talking about legacy Windows applications? Why are we holding onto Windows Servers that are a decade old? The simple answer — the applications on those servers still work, and they still serve a business purpose. But they can become a significant liability.
Many of our customers are containerizing legacy Windows 2003 and 2008 applications today with Docker Enterprise, making them portable to new Windows Server platforms and the cloud with no code changes. These three examples — Jabil Circuit, a bank, and GE Digital — showcase the depth of what you can do with Docker Enterprise to modernize legacy Windows applications.
Jabil, one of the world’s most technologically advanced manufacturing solution providers with over 100 sites in 29 countries, has embarked on a digital journey to modernize their technology infrastructure. They have a “cloud-first” strategy that requires modernizing over 100 legacy .NET and Java applications, many of which are running on Windows 2008 and 2012.
They’ve deployed Docker Enterprise and Windows containers to successfully migrate the applications from legacy Windows servers to Windows Server 2016 on the Microsoft Azure cloud. Now that the first phase is complete, Jabil has Continue reading
Smart Inventory is a feature that was added in Red Hat Ansible Tower 3.2. The feature allows you to generate a new Inventory that is made up of hosts existing in other Inventories in Ansible Tower. This Inventory is always up to date and is populated using what we call a host filter. The host filter is a domain-specific query language that is a mix of the Django REST Framework GET query language with a JSON query syntax added in. Effectively, this allows you to create an Inventory of Hosts based on their relational fields as well as related JSON structures.
The ansible_facts field is a related field on a Host that is populated by Job Template runs (Jobs) that have fact caching enabled. Ansible Tower bolts on an Ansible fact cache plugin for Job Templates that have fact caching enabled. Job Templates of this kind that run playbooks invoking Ansible gather_facts will result in those facts being saved to the Ansible Tower database when the Job finishes.
A limitation of the Smart Inventory filter is that it only allows equality matching on ansible_fact JSON data. In this blog post I will show you how to overcome this limitation and add Continue reading
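For context on what that cached fact data looks like, here is a generic (non-Tower-specific) sketch of gathering facts with the setup module; the inventory path is a placeholder:

# Gather facts ad hoc and filter down to a single key, e.g. the distribution name
ansible -i inventory all -m setup -a 'filter=ansible_distribution*'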
I’ve always enjoyed listening to how customers are solving their business challenges using Red Hat Ansible Automation. From the simple to the uniquely creative solutions, they’re always fun to hear. So every time AnsibleFest comes around, I get especially excited knowing that I’ll have the chance to hear far more than one or two stories.
This year’s AnsibleFest in Austin is expected to be the biggest ever. To cater for the many different interests of attendees, we’ve created six specific tracks with curated content sure to be of interest. I’ve managed to “bag” the Business Solutions track, which will contain ten talks in total.
Sifting through the hundreds of submissions (the job gets harder every year!), I’ve picked out three talks that I’m really looking forward to hearing.
1. Upgrading the backend database of a £3 billion business website on a Friday afternoon
However that panned out, it’s sure to be a great story! I’m grabbing some popcorn for this one :)
2. Using Ansible to Satisfy Compliance Controls
Security automation is a big topic these days, and the security community has come to realise the power in Ansible to help them get things done. I’ve lost count of the Continue reading
Docker will be at Microsoft Ignite in Orlando, FL the week of Sept 24th to showcase the latest release of Docker Enterprise. Specifically, we will be sharing insights for how to move your legacy Windows applications from Windows Server 2003/2008 to Windows Server 2016 and Azure.
Visit Docker in Booth #644 to learn more about how we’re helping IT organizations use Docker Enterprise tools to identify and containerize legacy Windows applications. We’ll have technical experts there to answer your questions.
Make sure to check out these sessions featuring Docker:
Docker is also partnering with Docker Captains in Orlando to deliver a hands-on lab focused on migrating a legacy Continue reading
The coming end-of-support for Windows Server 2008 is the perfect opportunity for IT organizations to tap Docker Enterprise to modernize and secure legacy applications while saving millions in the process.
The coming end-of-support for Windows Server 2008 in January 2020 leaves IT organizations with a few viable options: migrate to a supported operating system (OS), rehost in Azure, or pay for an extended support contract (up to 75% of the license fee per year) to receive security updates beyond the cut-off date. The option of doing nothing (running applications on unsupported OS versions) is a non-starter for the vast majority of businesses, as this poses a significant security and compliance risk. We saw the impact of this last year when a massive ransomware attack that affected nearly 100 countries spread by targeting end-of-life and unpatched systems.
Upgrading will be no small feat as roughly 80% of all enterprise applications run on Windows Server. Of those applications, 70% still run on Windows Server 2008 or earlier versions*. Migrating all of these critical applications to a supported version of Windows Server is painful and costly, due to rigid legacy Continue reading
Although this article mainly targets an OpenVPN TAP driver installation issue, the problem is likely not limited to that specific driver.
You may want to continue reading and give the very easy solution at the end of the article a try.
Recently I had to install OpenVPN on a system running Windows XP (don’t ask). The installation went smoothly up until the TAP driver installation, and then suddenly things went haywire:
The yellow-flagged status with error code 28 in Device Manager was not promising either:
In Windows XP, the TAP driver installation uses the built-in Windows Device Console (devcon.exe) to install its .inf file. Pretty simple stuff: you just run devcon.exe with the install argument, supply the .inf file, and then provide the device’s Hardware ID.
This is the command being used to install each TAP NIC:
"C:\Program Files\TAP-Windows\bin\devcon.exe" install "C:\Program Files\TAP-Windows\driver\OemWin2k.inf" tap0901
Which gave a mundane error:
devcon.exe failed.
Devcon, however, leaves a log of its operations behind in %windir%\setupapi.log, which included these lines:
#E122 Device install failed. Error 2: The system cannot find Continue reading
Join the Docker team, the container ecosystem, contributors and maintainers, developers, IT professionals and executives at DockerCon Barcelona, December 3-5. DockerCon is the must-attend conference to learn, network and innovate with the container industry.
Besides Barcelona being a beautiful city with delicious food, here are our top 5 reasons to attend DockerCon:
Today on the Edge release channels, we released a new beta version of Docker Desktop, the product formerly known as Docker for Windows and Docker for Mac. You can download this new Edge release for both Windows and macOS. Docker Desktop enables you to start coding and containerizing in minutes and is the easiest way to run Docker Engine, Docker Swarm and Kubernetes on Mac and Windows. In addition to simple setup, Docker Desktop also includes other great features and capabilities such as:
You may have already noticed the new Docker Desktop name on www.docker.com, and over the next few months we Continue reading
Much has changed since my last post about the LUKS remote unlock workaround (particularly, the bug is finally fixed in cryptsetup 2:2.0.2-1ubuntu1.1 and no workaround is needed anymore). This is the updated version of how to set things up properly.
UPDATE: Well, it turned out that while the previous bug is fixed, another one still exists. You can find the required workaround for it at the end of this article.
In this post, I’m going to show you the required steps and pitfalls of running a LUKS-encrypted Ubuntu Server setup and how it can be extended to allow remote unlocking.
It is assumed that you already know your way around ISO files and how to boot them on your server.
We will also use the simplest possible setup: a server with a single disk.
We are going to use LVM inside the LUKS container, it is Continue reading
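As a rough sketch of the LVM-inside-LUKS layout being described (device names, volume group and logical volume names, and sizes are assumptions for illustration):

# Create the LUKS container on the disk's root partition (device name is an assumption)
cryptsetup luksFormat /dev/sda3
cryptsetup open /dev/sda3 cryptroot
# Build LVM inside the opened LUKS container
pvcreate /dev/mapper/cryptroot
vgcreate vg0 /dev/mapper/cryptroot
lvcreate -L 4G -n swap vg0
lvcreate -l 100%FREE -n root vg0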
A few times over the last week or two I’ve had a need to use the gcloud command-line tool to access or interact with Google Cloud Platform (GCP). Because working with GCP is something I don’t do very often, I prefer to not install the Google Cloud SDK; instead, I run it in a Docker container. However, there is a trick to doing this, and so to make it easier for others I’m documenting it here.
The gcloud tool stores some authentication data that it needs every time it runs. As a result, when you run it in a Docker container, you must take care to store this authentication data outside the container. Most of the tutorials I’ve seen, like this one, suggest the use of a named Docker container. For future invocations after the first, you would then use the --volumes-from parameter to access this named container.
There’s only one small problem with this approach: what if you’re using another tool that also needs access to these GCP credentials? In my case, I needed to be able to run Packer against GCP as well. If the authentication information is stored inside a named Docker container (and then accessed Continue reading
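One way around that problem (an assumption on my part, not necessarily the exact approach described here) is to bind-mount a host directory as the gcloud configuration directory, so other tools can read the same credentials:

# Keep gcloud's auth/config on the host instead of inside a named container
# (the image tag and paths are assumptions for illustration)
docker run --rm -ti \
  -v "$HOME/.config/gcloud:/root/.config/gcloud" \
  google/cloud-sdk:latest gcloud auth login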
We're happy to announce that Red Hat Ansible Tower 3.3 is now generally available. In this release, there are a number of enhancements that can help improve the automation in any organization. The team has been hard at work adding functionality with Red Hat OpenShift Container Platform, more granular permissions, scheduler improvements, support for multiple Ansible environments, and many other features.
Here are a few we are excited about!
Push-button Ansible Tower deployment for Red Hat OpenShift Container Platform users is now here! Ansible Tower 3.3 is now a supported offering on Red Hat OpenShift Container Platform. The new Ansible Tower pod service in Red Hat OpenShift makes it easy to add capacity to Ansible Tower by adding additional pods. This enables users to scale at runtime as needed. Best of all, Ansible Tower is configurable directly from Red Hat OpenShift Container Platform.
All configurable directly from the Red Hat OpenShift Container Platform UI, CLI, and API.
Ansible Tower now allows for even easier configuration of jobs for use Continue reading
A few weeks back, we announced changes to extend the maintenance lifecycle for Docker Engine – Community (CE). As part of these changes, we’re having a beta testing period to deliver a higher-quality engine to the market.
We’d like to invite our community members to now participate in this beta testing by installing the beta package, kicking the tires, and submitting issues.
Docker Engine – Community version 18.09 adds these new features:
$ DOCKER_BUILDKIT=1 docker build .
You can also set the feature option in /etc/docker/daemon.json to enable BuildKit by default:
{"features":{"buildkit": true}}
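After editing /etc/docker/daemon.json, restart the daemon for the setting to take effect (assuming a systemd-based host):

$ sudo systemctl restart docker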
$ docker -H ssh://user@host
Install Instructions:
Only install the beta package on a new system without previous versions of docker-ce installed.
$ curl -fsSL test.docker.com Continue reading
Welcome to Technology Short Take 104! For many of my readers, VMworld 2018 in Las Vegas was “front and center” for them since the last Tech Short Take. Since I wasn’t attending the conference, I won’t try to aggregate information from the event; instead, I’ll focus on including some nuggets you may have missed amidst all the noise.
Nothing this time around, but I’ll stay alert for items to include next time!