For quite a few years, I’ve had a desktop wallpaper that I really love. I don’t even remember where it came from, so I can’t properly attribute it to anyone. I use this wallpaper from time to time when I want to be reminded to challenge myself, to learn new things, and to step outside of what is comfortable in order to explore the as-yet-unknown. Looking at it on my desktop a little while ago, I realized that I may have started taking its inspirational phrase for granted instead of truly applying it to my life.
Here’s the wallpaper I’m talking about:
To me, this phrase—illustrated so well by the wallpaper—means taking a leap into the unknown. It means putting yourself into a position where you are forced to grow and adapt in order to survive. It’s going to be scary, and possibly even a bit painful at times. In the end, though, you will emerge different than when you started.
It’s been a while since I did that, at least from a career perspective. Yes, I did change jobs a little over a year ago when I left VMware to Continue reading
Welcome to Technology Short Take #158! What do I have in store for you this time around? Well, you’ll have to read the whole article to find out for sure, but I have links to articles on…well, lots of different topics! DNS, BGP, hardware-based security, Kubernetes, Linux—they’re all in here. Hopefully I’ve managed to find something useful for someone.
When managing infrastructure, there are times when a dynamic inventory is essential. Kubernetes is a perfect example: you may create multiple applications within a namespace, but you won’t be able to build a static inventory because Kubernetes appends a system-generated string to object names to uniquely identify them.
Recently, I decided to play with using a Kubernetes dynamic inventory to manage pods, but details on how to use and apply it were a bit scarce. As such, I wanted to write a quick start guide on how you can create an Ansible Playbook to retrieve the pods within a namespace and generate a Kubernetes dynamic inventory.
This is much easier to do when you take advantage of the kubernetes.core.k8s_info module.
In my example, I’m going to use my existing ansible-automation-platform namespace, which has a number of pods, to create my dynamic inventory. In your scenario, you’d apply this to any namespace you wish to capture a pod inventory from.
When creating your inventory, the first step is to register the pods found within a particular namespace. Here’s an example of a task creating an inventory within the ansible-automation-platform Continue reading
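(The example task itself is in the full post; as a rough sketch of the general approach, assuming the kubernetes.core collection is installed, a valid kubeconfig is in place, and with aap_pods as an arbitrary group name of my choosing, the registration and inventory-building steps might look something like this.)

```yaml
- name: Get a list of pods from the ansible-automation-platform namespace
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: ansible-automation-platform
  register: pod_list

- name: Add each discovered pod to an in-memory inventory group
  ansible.builtin.add_host:
    name: "{{ item.metadata.name }}"
    groups: aap_pods                                    # illustrative group name
    ansible_connection: kubernetes.core.kubectl
    ansible_kubectl_namespace: ansible-automation-platform
  loop: "{{ pod_list.resources }}"
  loop_control:
    label: "{{ item.metadata.name }}"
```

Subsequent plays in the same playbook can then target hosts: aap_pods and connect to each pod via the kubernetes.core.kubectl connection plugin.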
In 2018, I wrote an article on examining X.509 certificates embedded in Kubeconfig files. In that article, I showed one way of extracting client certificate data from a Kubeconfig file and looking at the properties of the client certificate data. While there’s nothing technically wrong with that article, since then I’ve found another tool that makes the process a tad easier. In this post, I’ll revisit the topic of examining embedded X.509v3 certificates in Kubeconfig files.
The tool that I’ve found is yq, which is an incredibly useful tool when it comes to parsing YAML (much in the same way that jq is an incredibly useful tool when it comes to parsing JSON). I should probably write some sort of introductory post on yq.
In any case, you can use yq to replace the grep plus awk combo outlined in my earlier article on examining certificate data in Kubeconfig files. Instead, to pull out only the client certificate data, just use this yq command (you did know that Kubeconfig files are YAML, right?):
yq '.users[0].user.client-certificate-data' < ~/.kube/config
(Of course, this command assumes your Kubeconfig file is named config in the ~/.kube Continue reading
I thought I might start highlighting some older posts here on the site through a semi-regular “Posts from the Past” series. I’ll start with posts published in the month of August through the years. Here’s hoping you find something that is useful (or perhaps entertaining, at least)!
Last year, I had a couple of posts that I think are still relevant today. First, I talked about using Pulumi with Go to create a VPC peering relationship on AWS. Second, I showed readers how to have WireGuard interfaces start automatically (using launchd) on macOS.
I didn’t write too much in August 2020; my wife and I took a big road trip around the US to visit family and such. However, I did publish a post on some behavior changes in version 0.5.5 of the Cluster API tool clusterawsadm.
This was a busy month for the blog! In addition to two Technology Short Takes, I also published posts on converting Kubernetes to an HA control plane, reconstructing the kubeadm join command (in the event you didn’t write down the output of kubeadm init), and one introducing Cluster API.
This weekend I made a couple of small changes to the categories on the site, in an effort to make navigation a bit more intuitive. In the past, readers had expressed some confusion over the “Education” and “Explanation” categories, and—to be frank—their confusion was warranted. I also wasn’t clear on the distinction between those categories, so this post explains the changes I’ve made.
The following category changes are now in effect on the site:
Red Hat Ansible Automation Platform 2 features an awesome new way to scale out your automation workloads: automation mesh. If you are unfamiliar with automation mesh, I highly recommend reading Craig Brandt’s blog post What's new: an introduction to automation mesh which outlines how automation mesh can simplify your operations and scale your automation globally. For this blog post, I want to focus on the technical implementation of automation mesh, what network ports it is using and how you can secure it.
To quickly summarize both Craig’s blog post and our documentation, we separated the control plane (which includes the web UI and API) from the execution plane (where an Ansible Playbook is executed) in Ansible Automation Platform 2. This allows you to choose where jobs run across execution nodes, so you can deliver and run automation closer to the devices that need it. In our implementation, there are four different types of nodes:
Per the AWS documentation (although I’m sure there are exceptions), when you start using AWS you are given some automatically created resources: a default VPC that contains public subnets in each availability zone in the region, along with an Internet gateway and settings to enable DNS resolution. Most of the infrastructure-as-code tutorials that I’ve seen start with creating a VPC, subnets, and a gateway, but what if you wanted to use these default resources instead? I wasn’t really able to find a good walkthrough on how to do this, so this post provides some sample Go code you can use with Pulumi to identify these default AWS resources and use them.
I’ll approach this from the perspective of wanting to launch an EC2 instance in the default infrastructure that AWS provides for you in a region. To launch an EC2 instance using Pulumi (and most other infrastructure-as-code tools), there are several pieces of information you need:
The first three are probably things you’ll want to parameterize (i.e., make it possible for you to pass Continue reading
Nelson, the director of operations of a large manufacturing company, told me that he has a highly leveraged staff. That is, most of the work on the company's critical cloud project is being done by consultants and vendors, and not by their team members. Unfortunately, much of the staff’s time is spent making sure that the infrastructure as code (IaC) implementation is in compliance with the standards and policies that his company has for cloud resources. Nelson said, “Oftentimes the code works, but doesn’t conform to naming conventions, or is missing labels, or has security issues, which impacts downstream workflows and applications. We need the environment to conform to our policies, and I don’t want my staff burning cycles to ensure that this is the case.” This was the reason why he brought me in to run a proof of concept (POC). The POC would validate what would become a Policy as Code solution based on one of the common IaC products.
When the technical team and I reviewed the proof points for the POC, which amounted to a standard demonstration of Policy as Code capabilities, we determined that Red Hat Ansible Automation Platform would satisfy all Continue reading
Welcome to Technology Short Take 157! I hope that this collection of links I’ve gathered is useful to someone out there. In particular, the “Career/Soft Skills” section is a bit bigger than usual this time around, as is the “Security” section.
When it comes to Amazon Web Services (AWS) infrastructure automation, the latest release of the amazon.aws Collection brings a number of enhancements to improve the overall user experience and speed up the process from development to production.
This blog post goes through the changes and highlights what’s new in the 4.0.0 release of this Ansible Content Collection.
With the recent release, we have included numerous bug fixes and features that further solidify the amazon.aws Collection. Let's go through some of them!
Some of the new features available in this Collection release are listed below.
AWS Outposts is a fully managed service that extends AWS infrastructure to on-premises locations, reducing latency and local data processing needs. EC2 subnets can be created on AWS Outposts by specifying the Amazon Resource Name (ARN) of the AWS Outpost during the creation phase.
The new outpost_arn option of the ec2_vpc_subnet module allows you to do that.
```yaml
- name: Create an EC2 subnet on an AWS Outpost
  amazon.aws.ec2_vpc_subnet:
    state: present
    vpc_id: vpc-123456
    cidr: 10.1.100.0/24
    outpost_arn: "{{ outpost_arn }}"
    tags:
      "Environment": "production"
```
With Red Hat Ansible Automation Platform 2 and the advent of automation execution environments, some behaviors are now different. This blog explains the use case around using localhost, along with options for sharing data and persistent data storage, for VM-based Ansible Automation Platform 2 deployments.
With Ansible Automation Platform 2 and its containerized execution environments, the concept of localhost has changed. Before Ansible Automation Platform 2, you could run a job against localhost, which translated into running on the underlying Tower host. You could use this to store data and persistent artifacts, although this was not always a good idea or best practice.
Now, with Ansible Automation Platform 2, localhost means you’re running inside a container, which is ephemeral in nature. This means we must do things differently to achieve the same goal. If you consider this a backwards move, think again: localhost is no longer tied to a particular host, and with portable execution environments, it can run anywhere, with the right environment and software prerequisites already embedded into the execution environment container.
So, if we now have a temporal runtime container and we want to use existing data or persistent data, Continue reading
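To make the ephemerality concrete, here’s a minimal sketch of my own (not from the original post): everything below runs inside the execution environment container, so the hostname changes from job to job and the written file vanishes when the job completes.

```yaml
- name: Show that localhost is the execution environment container
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Print the container hostname (different on each job run)
      ansible.builtin.command: hostname
      register: ee_hostname
      changed_when: false

    - name: Write a file to the container filesystem (discarded with the container)
      ansible.builtin.copy:
        content: "This artifact disappears when the job finishes.\n"
        dest: /tmp/ephemeral-artifact.txt
```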
The wheel was invented in the 4th millennium BC. Back then, I am sure the wheel was the hottest thing on the block, and only the most popular Neolithic cool cats had wheels. Fast forward to the present day, and we can all agree that the wheel is nothing really to write home about. It is part of our daily lives. The wheel is not sexy. If we want the wheel to become sexy again, we just need to slap a sports car together with all the latest gadgets and flux capacitors in a nice Ansible red, and voilà! We have something we want to talk about.
Like the sports car, Red Hat Ansible Automation Platform has the same ability to turn existing resources into something a bit more intriguing. It can enhance toolsets and extend them further into an automation workflow.
Let's take Terraform. Terraform is a tool used often for infrastructure-as-code. It is a great tool to use when provisioning infrastructure in a repeatable way across multiple large public cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Many organizations use Terraform for quick infrastructure provisioning every Continue reading
In late 2015, I was lucky enough to be part of a small crew of authors who launched a new book project targeting “next-generation network engineering skills.” That book, Network Programmability and Automation, was published by O’Reilly and has garnered praise and accolades for tackling head-on the topics that network engineers should consider mastering as the field of network engineering continues to grow and evolve. I was excited about that announcement, and I’m even more excited to announce that the early release of the second edition of Network Programmability and Automation is now available!
The original team of authors—Jason Edelman, Matt Oswalt, and myself—are joined this time around by Christian Adell. Christian works with Jason at Network to Code, and it has been a tremendous pleasure to get to know Christian (a little bit, at least!) as part of this project so far. I am impressed with his knowledge and experience, and I think it really adds to the book. Jason and Matt, of course, need no introductions; they are both industry leaders and are well-known in the network automation space.
Check out Jason and Christian’s announcement blog post here.
I am, once again, humbled and honored Continue reading
One of the great advantages of combining GitOps with Ansible is that you get to streamline the automation delivery and the lifecycle of a containerized application.
With the abilities of GitOps we get to:
Combine the above with Ansible and you have everything you need to accomplish configuration consistency for a containerized app anywhere that you automate.
That leads us to the question: how do we combine Ansible and GitOps to manage the lifecycle of a containerized application?
Simple: by creating an Ansible workflow associated with a Git webhook on my application’s repository.
What is a Git webhook you ask?
Git webhooks are defined as a method to deliver notifications to an external web server whenever certain actions occur on a repository.
For example, when a repository is updated, this could trigger an event that kicks off CI builds, deploys an environment, or, in our case, modifies the configuration of our containerized application.
A webhook provides the ability to execute specified Continue reading
With the recent release of the ansible.netcommon Collection version 3.0.0, we have promoted two features as standard: libssh transport and import_modules. These features provide increased performance and resiliency for your network automation. These two features, which have been available since July 2020 (libssh) and March 2021 (import_modules), are now turned on by default for playbooks running the latest version of the netcommon Collection, so let's take a look at what makes these changes so exciting!
Libssh support was formally announced in November 2020 for FIPS mode compatibility and speed. That blog goes into great detail about why we started this change, and it’s worth a read if you want to know more about how we use paramiko or libssh in the network_cli connection plugin. I’m going to try not to rehash everything from that post, but I do want to take a little time to revisit security and speed to show what libssh brings to the experience of using Ansible with network devices.
One of the earliest issues we identified with paramiko, our earlier SSH transport plugin, is that it was not FIPS 140 compliant. This meant that environments Continue reading
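As a small illustration of how these defaults surface in practice (my own snippet, with an illustrative file name): with netcommon 3.0.0 both features are on by default, but you can pin them explicitly, or opt back out, via connection variables.

```yaml
# group_vars/network.yml (illustrative file name)
ansible_connection: ansible.netcommon.network_cli
ansible_network_cli_ssh_type: libssh    # default as of netcommon 3.0.0; set to paramiko to opt out
ansible_network_import_modules: true    # default as of netcommon 3.0.0; set to false to opt out
```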
Welcome to Technology Short Take #156! It’s been about a month since the last Technology Short Take, and in that time I’ve been gathering links that I wanted to share with my readers. (I still have quite the backlog of links to read!) Hopefully something I share here will prove useful to someone. Enjoy the links below, and enjoy your weekend!
Red Hat Ansible Automation Platform can manage and execute automation from many different origins: Red Hat product teams, ISV partners, and community and private contributors.
Here is a typical makeup of an automation play that is launched from automation controller:
Previously, there was no way to verify that a Collection downloaded from either Ansible automation hub (console.redhat.com) or private automation hub was developed and released by its original Collection maintainer. This is a potential security issue and breaks the supply chain from creator to consumer.
Providing security-focused features in Ansible Automation Platform 2 continues to be a priority, to enable the execution of certified and supported automation anywhere in your hybrid cloud environment. New in Ansible Automation Platform 2.2 is Continue reading
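For context on how signature verification surfaces on the client side, here is a hedged sketch of my own (assuming ansible-core 2.13 or later; the namespace, Collection name, and URLs are all illustrative): a requirements file can point ansible-galaxy at detached signatures to verify against a GnuPG keyring at install time.

```yaml
# requirements.yml (names and URLs are illustrative)
collections:
  - name: my_namespace.my_collection
    source: https://hub.example.com/api/galaxy/
    signatures:
      - https://hub.example.com/signatures/my_collection.asc
```

Installed with something like ansible-galaxy collection install -r requirements.yml --keyring ~/.ansible/pubring.kbx, the client can then refuse content whose signature does not verify.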
When scaling automation controller in an enterprise organization, administrators are faced with more clients automating their interactions with its REST API. As with any web application, automation controller has a finite capacity to serve web requests, and web clients can experience degraded service if that capacity is met or exceeded.
In this blog, we will explore methods to:
We will use automation controller 4.2 in our examples, but many of the best practices and solutions described in this blog apply to most versions, including Ansible Tower 3.8.z.
In this section, we will outline some of the use cases that can drive a high volume of API requests. In the recommendations section, we will address options to improve the quality of service at the client, load balancer, and controller levels.
In some use cases, organizations maintain their inventory in an external system. This practice can lead to a pattern Continue reading
In this blog series, we will continue discussing the deployment of Red Hat Ansible Automation Platform on Microsoft Azure.
The first blog covered the deployment process as well as how to access a Red Hat Ansible Automation Platform on Azure deployment that was deployed using the “Public” access option.
In this blog, we’ll cover how to access the managed application when it’s deployed using the “Private” access option.
There are three ways you can access Red Hat Ansible Automation Platform on Azure if you selected “Private” access.
Let’s assume that you have already configured network peering between the Red Hat Ansible Automation Platform on Azure deployment and your existing Azure virtual networks. Network peering is an Azure feature that connects two or more Azure networks so that traffic can be routed to resources across those networks. See the Microsoft Azure documentation for more information on network peering types.
Regardless of whether you selected public or private Continue reading